AI IT project management
I'm part of the EPMO of a healthcare system. We just got licenses and an intro to Copilot for Teams, Word, Excel, and PowerPoint.
I swear, this AI will tell you all the questions asked during a meeting. If you join a meeting late, you can ask it to recap the meeting thus far. Did you get a sales presentation from a vendor that you need to recap and present to stakeholders? Ask Copilot to create a PDF from the documentation the vendor provided.
AI is making my job so much easier, but at the same time I kinda feel like I'm training my replacement.
Are you using AI at your job? How are you using it, how do you feel about its use in the workplace, and do you think it will one day replace you?
I've been using AI like this for the job it's best at: coding. There's a key shortfall here that should make you feel safer about your job: it is great at doing things that have been done before, but it will make some crazy suggestions/output when you reach outside those boundaries.
It's easy to subconsciously read intelligence into these tools because of how intelligent they sound, but all they are doing is stringing words together that are most likely to make sense. If any thinking is going on, it's only about the statistical likelihood of sentence structure.
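To make that concrete, here's a toy sketch in TypeScript of what "stringing words together by likelihood" means. The probability table is entirely invented, but the mechanism (pick a plausible next token given the context) is the gist of it:

```typescript
// Toy next-token prediction. A real LLM learns probabilities like these
// from training data over a huge vocabulary; the numbers here are invented.
const nextTokenProbs: Record<string, Record<string, number>> = {
  "the cat": { sat: 0.6, ran: 0.3, exploded: 0.1 },
  "cat sat": { on: 0.8, beside: 0.2 },
};

// Greedy decoding: always take the single most likely continuation.
function pickNext(context: string): string {
  const candidates = Object.entries(nextTokenProbs[context] ?? {});
  candidates.sort(([, a], [, b]) => b - a);
  return candidates[0]?.[0] ?? "<end>";
}

// Produces fluent-looking text with no model of what a cat actually is.
console.log(pickNext("the cat")); // "sat"
console.log(pickNext("cat sat")); // "on"
```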
There might be some jobs that are in danger, but I suspect they'll mostly just be another useful tool to anyone with a job that requires a bit of thinking.
A lot of my coworkers use Copilot for coding, and while we don't get licenses by default, there's an open invitation for anyone to request one. I was (very cautiously) optimistic because other people seemed to be getting so much value out of it, and because I can imagine ways a thing like this could work well (involving the LLM being able to make LSP queries to gather information before spitting out its results).
I had a lot of boring code to write one day, so I decided to give it a try. Half an hour later, I got my license and started playing with it. Half an hour after that, I gave up and returned my license. Everything it produced was syntactically correct but complete nonsense. Roughly half the identifiers in the code it spit out existed at all, and fewer than that were used correctly. At no point did it produce anything of value.
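For anyone who hasn't watched this happen, here's a contrived illustration of the failure mode (invented names, and it deliberately doesn't compile; that's the point):

```typescript
// The only user-lookup function that actually exists in the codebase:
async function getUserById(id: string): Promise<{ name: string }> {
  return { name: "example" };
}

// The kind of completion I'd get back: syntactically valid TypeScript,
// but `fetchUserProfile` and `displayNameOrEmail` exist nowhere.
const user = await fetchUserProfile("42");
console.log(user.displayNameOrEmail);
```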
I would love to see a correlation analysis between whether this thing is helpful to a given person, and how optimistic that person is about AI generally. (Conspiracy theory, entirely joking, I do not believe this: strong correlation, but with the causation the opposite way; the AI knows how you feel about it from watching your posts online, and is intentionally mean to people who have extensively made fun of it before.)
I find that Copilot in coding sometimes helps and sometimes it's just a distraction showing me tab completions I don't want. At work I'd say it's a wash--I save about as much time not typing a line I didn't need to as I spend hitting tab on something that's almost correct, but then backing through it and editing while realizing it would have been quicker to just type it myself.
On some personal projects it's an enormous time-saver, though. One thing I'm working on requires making a large number of quite predictable derivations of a base sentence in a foreign language with some metadata interpolated at predictable points. Doing it by hand would be theoretically simple but incredibly painful and boring. Copilot is great at picking up the pattern such that I need only write the first one and then usually tab-complete all the derivations without editing.
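To give a feel for the shape of it (the names and the conjugation data here are invented, not my actual project):

```typescript
// Hypothetical example of the repetitive pattern: after writing the first
// entry by hand, Copilot reliably tab-completes the rest from context.
interface Derivation {
  person: string;
  suffix: string;
}

const base = "parl"; // e.g. a French-style verb stem (invented data)
const derivations: Derivation[] = [
  { person: "je", suffix: "e" },     // written by hand
  { person: "tu", suffix: "es" },    // tab-completed
  { person: "il", suffix: "e" },     // tab-completed
  { person: "nous", suffix: "ons" }, // tab-completed
];

const sentences = derivations.map((d) => `${d.person} ${base}${d.suffix}`);
console.log(sentences); // ["je parle", "tu parles", "il parle", "nous parlons"]
```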
That's super interesting, what language are you writing in? Also, what editor? It's surprising to hear about such total dysfunction to the point of making up names that don't exist.
Full-stack TypeScript. VSCode is the most common editor; some people use WebStorm instead.
I use it mostly in a very non-creative way: helping to finish off or generate correspondence, and asking questions about specific guidance within the industry I work in.
It's been pretty invaluable to use something like ChatGPT to generate first drafts of things like incident reports or thank-you notes. Sometimes I'll input the first paragraph of something I've written and then instruct it to provide a fitting conclusion or sign-off.
I also use Kagi's FastGPT as a search tool. I give it questions like "Does test X need to be performed in compliance with Y regulatory guidance?" or "In the context of analytical chemistry, how could one evaluate the purity of a monoclonal antibody?" You generally get a good summary of what you're looking for, but more importantly you get links to the documents it cites, so you can evaluate the response against the source material.
AI is a neat tool that will be misused by lots of people who don't know what they're doing. If you do know what you're doing, however, it can make things easier by lightening some of your cognitive load and doing some of the drudge work (e.g., generating boilerplate code, summarizing documents).
For these purposes it's worked out pretty great.
I use ChatGPT for several different things, but one of the most helpful has been having it craft "nicer" emails for me. I'm a pretty straightforward, to-the-point kind of guy, and many of my email replies can come across as harsh or finger-pointing even when that's not my intent. Most of the time I can see that in my writing before I hit send, so I'll pop my reply into ChatGPT and ask it to rewrite it in a nicer tone.
I've had this exact same experience, but haven't been able to find a prompt that works reliably. By default I usually get back a few paragraphs of very formal text when I just want something that sounds nice in a couple of sentences. But when it works, it takes a weight off my shoulders; there's a bit of anxiety when I send these messages raw, since I'm not sure how they'll be perceived even though my intent is good.
I've had good luck with something like the following. Note: I don't copy and paste this prompt, I just write something impromptu :) each time.
I'm writing an email reply to a colleague and I'm afraid that the message may come across a bit harsh. Could you review my reply and rewrite it so that I don't cause any offense?
[insert email here]
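And if you ever want to script this instead of pasting into the chat UI, a minimal sketch with the openai npm package could look like the following; the model name and the prompt wording here are just placeholders.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Rewrite a blunt draft reply in a friendlier tone, keeping it short.
async function soften(draft: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder; use whatever model you have access to
    messages: [
      {
        role: "system",
        content:
          "Rewrite the user's email reply in a friendly, concise tone. " +
          "Keep it to a couple of sentences and preserve the meaning.",
      },
      { role: "user", content: draft },
    ],
  });
  return completion.choices[0].message.content ?? draft;
}

soften("No. That was already covered in the last meeting.").then(console.log);
```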
I'm reminded of Manna, Marshall Brain's story about algorithmic middle management.
I have no doubt it will come to pass. There are a lot of tiers of management that do little more than help people talk to other people or direct people around, and that's a lot easier to automate with an algorithm than performing surgery, roofing a house, or programming.
I am a programmer, and I have found that it is great at writing boilerplate code. For example, I will carefully craft one unit test and tell it to generate one for each of the other methods in a class. It can also help a lot by giving me a jumping-off point.
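Something like this Jest-style sketch, with a made-up class, shows the workflow:

```typescript
// Hypothetical class under test.
class Calculator {
  add(a: number, b: number) { return a + b; }
  subtract(a: number, b: number) { return a - b; }
  multiply(a: number, b: number) { return a * b; }
}

describe("Calculator", () => {
  const calc = new Calculator();

  // I write this first test carefully by hand...
  it("adds two numbers", () => {
    expect(calc.add(2, 3)).toBe(5);
  });

  // ...then Copilot generates the analogous tests for the other methods.
  it("subtracts two numbers", () => {
    expect(calc.subtract(5, 3)).toBe(2);
  });

  it("multiplies two numbers", () => {
    expect(calc.multiply(2, 3)).toBe(6);
  });
});
```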
The thing with coding specifically is that the core business is still designing an app that meets all the requirements, and figuring those requirements out when users often can't clearly articulate what they want. Add to that managing a growing system and keeping it stable, performant, and easy to use, and you've got the core tasks of a software engineer, tasks that are very hard to replace with AI. AI being able to code just means we'll spend less time crunching out code, but that's a continuation of a longer trend: assembly to C to high-level languages to highly integrated full-stack frameworks (obviously skipping over a lot).