What AI tools are you actually using?
On my work system I mostly use Bing Copilot to help me quickly write emails and statements when preparing a document.
I occasionally use the free version of ChatGPT to ask music theory questions, or for "reverse term lookup" where I describe a concept and ask if there's some established term for it.
It kind of sucks at it, because it doesn't seem to have any concept of confidence and so will answer every question as though it knows exactly what it's talking about even when it doesn't, but it can be useful as a starting point for further investigation.
Kind of like having 24/7 access to a well-read librarian who lies pathologically on a bad day.
In any serious capacity, none.
I am pretty solidly on the side against using AI to write emails and such, because I don't really want any additional fluff in my emails. If you ask an AI to translate a list of bullet points into an email, I would much rather have just received the list of bullet points. It gets even better when the receiver uses AI to translate the verbose email back to bullet points.
I dabble with some image generation stuff whenever a new big thing comes out just to see the current state, but I am still pretty unimpressed by AI images. They all still land solidly in Uncanny Valley to me.
Repeat indefinitely for more and more semantic loss!
Regarding AI for text output, I have been working to develop "assistants" that remove bloat and filler words from the outputs of my prompts. It's helped me get a core draft written quickly from the concepts I provide. Then it's a matter of subtractive editing and proofing to clean up the message before sharing it.
It may sound like more work to some, but my way of thinking works well with this process. I have a tendency to go off on tangents and being able to have a complete thought (AI generated from my prompt) that I then edit into a concise message just works.
I use it in a similar way. I write a first draft quickly and then give it to the AI to help restructure my sentences better. I then edit it again to remove bloat, make it more concise, and reword things to sound more like me.
AI helps me write emails faster because I spend less time staring at a blank page and second guessing the way I word things. It's less daunting to edit than it is to write from scratch.
I find the whole premise disrespectful. If your email isn't worth spending five minutes to properly think about what you want to say to me, it's not worth my time to read. If you don't understand an idea enough to formulate what you want to convey and put it to words, you don't understand it enough to vet whatever nonsense a glorified Markov chain spits out from half-formed thoughts put in as a prompt.
This isn't using a helpful tool like a spellchecker, this is willingly delegating thought to an unthinking machine. And if you can't be arsed to put thought into communication, it's worth an equal amount of brain cycles to read (null).
Your comment got me thinking about the corporate world and neurospicy accessibility; it's been bouncing around my head all day.
I think in an ideal world where writing an email took little effort, then I’d absolutely be the kind of person to write two parallel emails: one in regular corporate speak, and one in concise dot points, and the person on the receiving end could pick whichever one they prefer.
Unfortunately that’s not a realistic option — if I interact with someone often enough to know they have a preference for the dot points, I’ll use dot points when it’s a message just for them. But despite only being in the corporate space for a few years now, I’ve already been trained to default to the corporate way of writing emails, especially if I know my emails are sent (or later forwarded) to several people that I don’t know very well.
In the meantime, as much as it feels redundant to use two layers of an LLM to encode and decode the underlying message into corporate speak, I completely understand why it’s being used that way currently, and I don’t begrudge anyone running an email through a “dot-point translation layer” to better fit their own brain, be it neurospicy or otherwise.
There's a method for managing ADHD called "body doubling" where the ADHDer essentially just has a second person with them while they do their tasks or whatever. The helper can provide support, but really it's just their presence that is most important.
So I basically use ChatGPT for that. I'll tell it what I want to work on and what I think I should do, and since ChatGPT and LLMs generally just reinforce the prompter, the result is a response like "That sounds like a great way to get started!" followed by a list of steps or ideas.
The main value is that it's basically an outlet for me to verbalize my thoughts (which helps me string a plan together and prioritize tasks) and then a response that is just pure support and reinforcement. It really reduces friction when I'm motivated to work on something but have no fucking clue how to get started.
This is a really cool use I hadn't heard of before. Do you think it would help even more if it was actually verbal instead of through the text interface? I've found it really enticing to try hooking ChatGPT up to speech-to-text and then text-to-speech to have a verbal assistant/companion with me while working on whatever.
I've also wondered how much value could be gotten out of LLMs that talk at you unprompted. Like, for body doubling or any sort of assistant, they would sometimes initiate the conversation too, right? Or give encouragement unsolicited. I'd actually love it if I told ChatGPT I was doing something, and then an hour later it asked "so how is that project going?"
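The plumbing for a rough version of that loop isn't too bad. Here's a minimal sketch using OpenAI's Python SDK (the model and voice names are just examples, and you'd still need something like sounddevice or arecord to capture the clip):

```python
# Minimal voice-companion turn: transcribe a recorded clip, get a chat
# reply, and synthesize it back to audio. Assumes the official `openai`
# Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def voice_turn(audio_path: str, history: list) -> str:
    # Speech-to-text on a recorded clip.
    with open(audio_path, "rb") as f:
        text = client.audio.transcriptions.create(model="whisper-1", file=f).text

    # Get a short, supportive reply, keeping the running conversation.
    history.append({"role": "user", "content": text})
    reply = client.chat.completions.create(
        model="gpt-4o-mini", messages=history
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})

    # Text-to-speech the reply; play reply.mp3 with any audio player.
    client.audio.speech.create(
        model="tts-1", voice="alloy", input=reply
    ).write_to_file("reply.mp3")
    return reply

history = [{"role": "system",
            "content": "You are an encouraging body double. Keep replies short."}]
```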
ChatGPT actually has a pretty excellent voice assistant built in now! I made consistent use of it while I was building a challenging coding project in a language and environment that I was deeply unfamiliar with.
I’d have a window open to get code suggestions, and my phone propped up in front of me so I could rant at it with a stream of consciousness whenever I was confused or needed to clarify something. It did a great job of both figuring out how to address my general confusion and reinforcing that it’s all part of the learning process. It worked as kind of a combined tutor/therapist — helped me a ton.
You can easily make a bot for that. I made one that periodically asks if I've watched the shows it recommended to me earlier, etc.
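Stripped of any chat platform, the core of such a bot is just a timer loop. A hypothetical, minimal version that prints to the terminal (a real one would hook into Discord, Telegram, or similar):

```python
# Bare-bones "check in on me later" bot: remember what the user said
# they were doing, wait, then ask how it's going.
import time

check_ins = []  # (due_time, task) pairs

def start_task(task: str, delay_minutes: float = 60):
    check_ins.append((time.time() + delay_minutes * 60, task))
    print(f"Okay! I'll check in about '{task}' in {delay_minutes:.0f} minutes.")

def run():
    while True:
        now = time.time()
        for entry in list(check_ins):
            due, task = entry
            if now >= due:
                print(f"Hey, how is '{task}' going?")
                check_ins.remove(entry)
        time.sleep(30)

start_task("cleaning the kitchen", delay_minutes=60)
run()
```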
I was just diagnosed with ADD, I may try this. Thank you for sharing.
The Goblin Tools app is also a good one to mix in when doing tasks. It helps break them down into smaller steps so things don't seem as overwhelming and gives you a place to start.
This is awesome. Could you describe or even show your typical conversation?
I was using GPT 4 and Gemini to lower the reading level of texts for my special ed students.
My district purchased MagicSchool (which uses GPT-4), so now I'm using that alongside Gemini.
You definitely have to review and revise it yourself, but it's saved me dozens of hours. I no longer have to painstakingly simplify things word by word.
I also used GPT for some coding projects successfully. It actually figured out a problem I couldn't solve in Google Sheets. I don't know where it got the info because even a very thorough internet search turned up nothing except for one forum comment saying what I wanted to do wasn't possible in Sheets.
What kinds of material are you simplifying? Just curious why you are simplifying content instead of using original content at the target level
My students are integrated in a general education setting. They're socially "normal" kids with various disabilities. Most of them read at the 3rd or 4th grade level (at best).
The idea is that we differentiate texts so that they can still participate in class discussions, group work, etc.
We don't do it for everything they read, and the goal is obviously to get them up to grade level. But when an 8th grader is reading at the 3rd grade level, you have to break some stuff down for them or they'll end up getting locked away in a self-contained classroom away from the rest of their peers.
Just as an example, the text may say something like, "The man was entirely perplexed at the misery which overtook him each time he looked upon her." Yeah...my students might be able to break down a few sentences like that with some help, but if we're reading pages like that, I lose them immediately. So I will take those pages and make them say, "The man felt miserable when he looked at her, but he didn't understand why." At that point, my job becomes making sure they know what the word "miserable" means and then they can (hopefully) stay engaged with the reading.
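If you ever wanted to script that workflow instead of going through a web UI, a minimal sketch with the OpenAI Python SDK might look like this (the prompt wording and model name are assumptions; the important part is naming a target grade level and then reviewing the output yourself):

```python
# Sketch of the "lower the reading level" workflow via an LLM API.
from openai import OpenAI

client = OpenAI()

def simplify(passage: str, grade: int = 3) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"Rewrite the passage at a grade-{grade} reading level. "
                        "Keep names, plot points, and meaning intact."},
            {"role": "user", "content": passage},
        ],
    )
    return resp.choices[0].message.content

print(simplify("The man was entirely perplexed at the misery which "
               "overtook him each time he looked upon her."))
```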
I use chatGPT a bunch for a variety of tasks, mostly as a supplementary tool in my technical tool belt.
I don't trust current LLMs to write code directly for me, except maybe for one-off throwaway scripts I need to use once. But they are pretty good at being an advanced rubber-ducky assistant.
It helps me work through obscure errors and stack traces, decipher spaghetti code someone else wrote, and handle a lot of the tedious Google searches.
The benefit of a chat style LLM is that you can follow up on it and ask it questions. I don't expect it to be perfect, but it gives me a jumping off point for a lot of things that previously would have cost me more time.
To give a few slightly more concrete examples.
Again, it isn't perfect. In fact, I can only use it like this because I have knowledge and experience myself that allows me to ask the right questions and validate answers.
You also need to be aware of other limitations. For example, I don't expect it to be up to date with cutting-edge technology; I always assume the answers about frameworks/libraries/tools are based on older versions. I recently asked it some things about Grafana, and while the answers were largely helpful, some specifics about configuring graphs made it clear that it was answering for a different version. But that was fine, because I then had much more specific terminology to work with, which let me dive into the documentation for the version I actually needed and get to the answer.
I think this deserves a highlight as the overarching key to ALL effective AI use. Well put.
This is a perfect example of what generative AI is great for. I work in the AI space, and I have always hated the clickbait headlines about it. AI has some amazing possibilities and potential, but the current applications are much more mundane. It really is best used as a jumping-off point for more research, summarizing text/code so it's easier to digest, or doing the heavy lifting in brainstorming so that a person can then expand on the generation with their experience.
This feels like an analog to using Wikipedia as the starting place in the humanities. Rather than attempting to assemble an overview from Google results, use Wikipedia/ChatGPT to give a summary, then dive into the sources for an in-depth exploration of the interesting parts. Some overlap with your point of untangling spaghetti code: "If chatGPT messed up, I will see that, but I will also be much further along already with deciphering as I would have been doing it manually."
A few weeks ago I cancelled my chatgpt subscription and got the free trial of Google Gemini advanced, since it’s supposed to be really good at coding tasks. I haven’t really noticed a big difference either way. They both will hallucinate or give suboptimal answers, but they are still really useful for getting close to the right answer quickly. The website and interface for Gemini is a bit more polished, which isn’t too much of a surprise since Google might be the most experienced company in the world with web programming.
I also set up Ollama and open-webui for running local large language models. The main ones I have used so far are deepseek-coder and dolphin (the largest variant of each). For the local models, it seems like I have to do a bit more prompt engineering and persuading to get the results I want. But when I get them, deepseek-coder's code is slightly better than ChatGPT's or Gemini's.
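For anyone curious what the local setup looks like in practice: once a model is pulled (e.g. `ollama pull deepseek-coder:33b`), Ollama exposes a REST endpoint on localhost that you can hit from a few lines of Python. A sketch, with the model tag as an assumption:

```python
# Query a local Ollama model over its default REST endpoint.
import requests

def ask_local(prompt: str, model: str = "deepseek-coder:33b") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask_local("Write a Python function that reverses a linked list."))
```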
Ever since chatgpt came out, these models have revolutionized the way I work. They made me realize that, in order to keep motivation on a project, I NEED to always have a functional project. Instead of diving into the complexities of a project, I need to start with the absolute minimum and build up from there. If I have something functional I can refactor and improve it, but I can’t go straight from 0 to a quality project. LLMs fit into this picture because they can get you instantly to something functional. It won’t necessarily be good or optimized or structured well, but I can slowly fix that over time. I keep my motivation much better when my project is functional at every step.
As an example, I am vaguely familiar with matplotlib, but I’ve never really used it myself. I had Gemini write some code to build some graphs. It worked, but I had to touch up things like axes, grid lines, labels, etc. This entire process was significantly faster than if I had tried to learn matplotlib directly or tried to find a stack overflow question that happened to have a usable answer.
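For reference, the touch-up pass was mostly things like this (a trivial sketch; the data is made up):

```python
# The kind of manual cleanup a generated matplotlib draft usually needs:
# axis limits, grid lines, and labels.
import matplotlib.pyplot as plt

xs = list(range(10))
ys = [x ** 2 for x in xs]

fig, ax = plt.subplots()
ax.plot(xs, ys, marker="o")

ax.set_xlabel("Trial")
ax.set_ylabel("Score")
ax.set_title("Example plot")
ax.set_xlim(0, 9)
ax.grid(True, linestyle="--", alpha=0.5)

plt.tight_layout()
plt.show()
```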
I've had Stable Diffusion running for several months now. I love it. About to start using it to generate images I can incorporate into some actual projects, and that'll be interesting. Mostly I've been playing with it, delighting in its ability to create images for me. Since I can't draw or paint or anything, it's super fun.
LLMs don't seem to be consumer-level yet. Cloud still. So I'm kind of waiting on that; I have little interest in running all my usage through other computers. And I especially don't want to be paying by the word to use one.
I'm looking around at AI audio, because ever since Adobe announced they were going to have a voice software, I've had it in the back of my head that I can actually get a computer voice (that doesn't sound like a computer) to read some stuff for me, and put that together in some other projects. But I think those are sort of still cloud, or cloud-ish right now. Or maybe I haven't looked properly yet. But I'm still keeping an eye on it when I remember.
On the LLMs, especially just in this thread though you see it everywhere ... a lot of people don't seem to want to use it as an LLM. Meaning, to treat it like you would a secretary or something, where you call her in and say "draft a letter to X about Y, be polite, set a date of Z, and let me see it."
No, most people seem to think LLMs exist to be the library computer from Star Trek. Where they open the session and say something like "I want you to pull all the mortgage data from 2021 in Metro Las Vegas, sort it by due dates and amount, and show me the ones closest to default." Or, "give me a summary on the history of X and Y with an emphasis on the last four years."
I agree LLMs are headed there, and that's one of the most powerful functions they'll probably have. Right now though, LLMs aren't even remotely close to being at that point yet. But you still see people, all over online, talking about how they tried to treat a LLM like a smart search engine that could talk to them in plain English, and then smugly declaring "they suck, they don't work."
Of course they don't work when you want them to be something they're not. And regardless, just because a tool that's still being developed and worked on isn't done yet doesn't mean the tool's shit. It just means the tool's not finalized.
Sure the words "prototype" and "beta test" might have lost all meaning in today's software realm (beta particularly just means release now, basically in a lot of instances), but come on.
I'm looking forward to being able to locally produce audiobooks with a customised voice so that I can listen to my mum/dad/partner reading stories. That's got to be great for drifting off to sleep. I refuse to give those voices to some corporation though, so until it can be done locally, it's a pipe dream.

I have Chinese friends who have uploaded their child's voice to the map app on their phone, and now their 5-year-old's voice does the directions when they're navigating. But I also guess they can't undo that... so as well as being creepy (being told to slow down by your child, with all directions in an uncanny-valley version of your child, ew), they've also sold out their kid there. I guess your voice changes as you grow up, so it's not the worst. Overall, I didn't like it, but I'd like to be able to do it locally.
Kagi's FastGPT. It is basically just a search result summarizer but I find it useful to get a quick overview of links on a specific question.
Plus one from me, along with their "quick answer" feature for summarizing multiple search results. It and FastGPT are amazing, especially for complex texts and for ad-ridden or generally spammy, obnoxious-to-navigate sites.
I regularly use GitHub Copilot for software development. It's been helpful overall, but far from perfect. The chat is nice too for getting it to show you examples of something.
I use a work-licensed version of ChatGPT to write SQL, Splunk, and GQL queries.
I just canceled my GitHub Copilot subscription. While it has helped me in the past with some tasks that were very repetitive (but not repetitive enough for copy/paste), I started to find its autocompletion more and more annoying. No, Copilot, that is not what I want. Let me type that again.
I canceled my ChatGPT subscription months ago because the novelty wore off, and I got tired of copy/pasting back and forth between my browser and the app I was actually using.
What I keep is my Kagi premium subscription because I find it helpful for more open-ended questions that don't work well in a search engine and my JetBrains AI-subscription to ask simple programming questions right from my IDE where it responds with knowledge of the project I am working on.
Also, I use Ollama + Mistral for my own project, where I keep track of the movies I watch to help me find other movies that I might find interesting. But that is not yet finished.
Are you using the LLM to recommend similar movies to a single movie you like?
Sort of. I currently use the LLM to comb through the IMDB reviews of the movies I watch, looking for other titles. Often when people discuss a movie, they compare it to other movies, or follow other associative links. It's nice to extract those automatically. I'd like to add a step that tells me where I can view these other movies, for instance with a service like justwatch.com.

Another thing I'd like to do is let the LLM judge whether a review is a 'good' review, so I could take those suggestions more seriously. But I figured I should then first teach the LLM what I consider a good review. That probably means I need to fine-tune it myself, and for that, I need to create data myself and score some of the reviews already in my system. I haven't thought of a convenient way to do that yet.
If it all works, I would just open the program, and it would tell me to go see movie X on service/medium Y and I would have an enjoyable evening.
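The extraction step itself is simple in principle. A sketch of what I mean, hitting a local Mistral through Ollama's REST API (the prompt and the JSON-array output format are just one way to do it):

```python
# Ask a local Mistral (via Ollama) which other movies a review mentions.
import json
import requests

def titles_mentioned(review_text: str) -> list[str]:
    prompt = (
        "List the titles of any other movies mentioned or compared to in "
        "this review. Reply with a JSON array of strings and nothing else.\n\n"
        f"Review:\n{review_text}"
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mistral", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    try:
        return json.loads(resp.json()["response"])
    except json.JSONDecodeError:
        return []  # model ignored the format; skip this review
```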
Is the output of interest?
I followed a different tack.
I downloaded the data from the Netflix Challenge, and look for users who rated the movie a 5/5.
Then I look at all the other movies those users also rated a 5/5. I consider the most commonly rated movies to be solid recommendations.
This works well for movies prior to 2006.
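The same idea expressed in pandas, for illustration (column names are assumptions about how the ratings data was loaded):

```python
# Co-rating recommender: find users who gave the seed movie 5/5, then
# rank the other movies those same users also rated 5/5 by frequency.
import pandas as pd

ratings = pd.read_csv("ratings.csv")  # columns: user_id, movie_id, rating

def recommend(seed_movie: int, top_n: int = 10) -> pd.Series:
    fans = ratings.loc[
        (ratings.movie_id == seed_movie) & (ratings.rating == 5), "user_id"
    ]
    co_rated = ratings[
        ratings.user_id.isin(fans)
        & (ratings.rating == 5)
        & (ratings.movie_id != seed_movie)
    ]
    return co_rated.movie_id.value_counts().head(top_n)
```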
For the part that works... Yes, it works. But that is only a small part. It's nowhere near finished.
Ah, yes, I can see that would work. For myself, I guess I just want to automate the steps I'm used to taking myself. On one hand, I just want an app that tells me what to watch; on the other, I want to be able to step in and follow the line of reasoning. I also find it interesting to discover how the process works in my head when I'm searching without any help.
Yeah, I automate it with SQL. I can easily navigate which users and which movies are involved to see why it recommends what it does. It doesn't perform well, so I only use it for movies I have recently watched, and it has a tendency to recommend this one movie I absolutely don't want to watch...
I don't have any personal uses, but one of my suppliers was showing off their AI sound analysis prototype. They've got security cameras with microphones through part of the production plant, and the system monitors the ambient sound of the machinery.

The thinking is that in some situations their monitoring systems read in normal ranges, but long-time staff could pick out issues just by listening to the machines. So they induced some of those failures, recorded them, and used the recordings to build a training set. The impressive part is that what they thought was a series of false positives was actually the system picking up issues before people could. It's an interesting application and the type of thing I'd expect these complex data models to do.
None extensively, but I've used Gemini a fair amount. I've found it helpful in getting around writer's block for my D&D campaign - I was able to feed it a name and a few bullet points of motivation, and it gave me an essay-length backstory for an NPC I was working on. I cleaned it up a bit because there were some things I wanted to be different, and because it wouldn't stick in my brain well enough to use on the fly if I didn't at least partially write it myself, but it could have been used verbatim if necessary. It was that complete and effective. I've also used it sparingly to generate images of items the party has acquired. I'm firmly in the camp that worries about AI hurting artists' livelihoods, but I wasn't going to pay someone to do this anyway (I would have just gone without).
I also occasionally use Gemini as a more powerful search engine. I was buying a laptop recently but having trouble finding exactly what I wanted so I fed it my desired specs and it confirmed that what I was hoping for didn't really exist or was very difficult to find. Along the way I also had it explain to me the differences between Intel and AMD CPUs and from what I could tell it did pretty well. It can provide sources for all its claims if prompted so you can confirm that it's not hallucinating.
The most fun we have had with AI is our Tomb of Horrors campaign. The paladin is played by a video game designer who uses AI chatbots in his work. In our game he generates prayers for his god Helm which grant us blessings upon hearing them and woeful speeches by his pet Triceratops that are incredibly detailed and hilarious. Thunderfoot has both an Irish and 1920's Brooklyn Italian accent, is a coward who comments brilliantly on his predicaments, and speaks with pathos about his suffering.
I have also been using ecosia's AI chat to answer specific detailed questions I have such as "Please tell me about personal genomics companies that have strong data privacy protections." and "Help me design a walking route from my house in San Francisco to Bear Valley visitor station in Point Reyes."
For the first, it led me to a number of companies I'd never heard of, including Nebula Genomics--founded by Harvard's renowned George Church, with assurances that the data remains the property of the client forever. And with the second, I know my route but I wanted to test how detailed it might be. It gave a good reply but also reminded me to practice Leave No Trace principles which I liked very much.
I tend to use AI for its true and noble purpose. Writing highly specific and customized erotic fiction and roleplay.
Secondary to that, I use ChatGPT to get myself out of a rut in programming situations, as I am in charge of developing a pretty hefty ERP (the business term) essentially by myself. It usually does pretty well after some guidance and correction.
How do you get erotic fiction out of LLMs? Is it a local LLM?
Yes.
Even with jailbreaks I’m not exactly a fan of sharing such intimate material with tech giants.
I use Tailscale as a VPN to my workstation and run the models on that. Then I use Sillytavern locally on my laptop and just connect it to my workstation endpoint.
I also use TogetherAI and Mancer from time to time.
Thanks for sharing Mancer, it looks like I could use that in an upcoming project.
Hey, no problem. The dev seems pretty cool and very realistic about what people are using his platform for.
I recommend hopping on the discord, too. The dev is very active on there and the FAQ is helpful.
Some interesting friction in here on using any chatbots. I basically use it randomly throughout the day as an assistant. It starts research on topics for me, helps with a coding problem, gets me outlines for creating a new committee, and so much more. I take everything as a starting point and go from there. I have a hard time getting started sometimes, and having GPT get the conversation going helps me a lot.
Nobody has mentioned Phind yet, which is a fantastic LLM search product.
I use phind.com as a search engine/assistant while coding. It works very well for technical or factual queries, both inside and outside of programming, and has well-integrated search results to ground its outputs.
I occasionally use Bing Copilot, but I am usually disappointed because it is over-censored and cagey compared to Phind.
I haven't found an actual use case for local LLMs yet. I don't think local LLMs will really take off until they start being integrated into things like Siri, for giving your devices complex verbal commands in natural language.
Phind is phenomenal for code. Figuring out stack traces or how to do something is great.
The more popular something is, the better these models are, for better or worse, even with RAG. K8s? Great. DCOS? Go fuck yourself. AWS? Amazing. GCP? Noticeably worse.
The real life-saver has been plotting. Matplotlib has an interface that can kindly be described as esoteric. I frequently copy and paste my schema, some sample data, and what I want my graph to look like, and bam. Works 60% of the time, every time. When it fails, it gets close the remaining 35-ish% of the time.
Again, as another user said, because I'm a domain subject expert, I know what to ask and how to ask. I also know when I'm getting answers that are probably wrong.
Will be checking out Phind
I generally do not want something speaking on my behalf, so they do not get used much. Occasionally it's convenient to generate an image instead of looking through stock images. The only other thing I've seriously used it for is as a "template" for a particular item where I intend to replace 100% of the contents. I generate assignments for my CS students this way. It would be just as effective to use some other assignment as a starter.
When my wife and I have a jokey argument about some piece of trivia, I bring in ChatGPT to adjudicate. I use the audio mode, the voice is so realistic and placating that it's a funny counterpoint to our theatrics.
I've played with Stable Diffusion and local LLMs, but I find them too limited still. I have zero trust in LLMs (too many mistakes), so I have to double-check everything and it's not really saving time, although for code translation from one language to another it was marginally useful. Document retrieval was useless. Stable Diffusion is great for one-shot images, but you can't do much more than that (e.g., I tried to train a LoRA for a character, but it's pretty limited). Audio stuff is also quite limited and sounds so-so. Unironically, the best use case is erotica.
If I have some boring admin letter to write I might use a GPT though.
Aside from the usual ChatGPT/copilot/Kagi stuff, I’m using LLMs to manage a video library by automatically processing transcripts, making summaries, inferring the correct author/series based on directory/filename, and speeding up the process of taking notes on the content.
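As a sketch of what that pipeline can look like (the paths, model name, and metadata heuristic are all illustrative assumptions, not the exact setup):

```python
# Guess author/series from the directory layout, then summarize a
# transcript with an LLM. Assumes the `openai` SDK and an API key.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def guess_metadata(path: Path) -> dict:
    # e.g. videos/Some Author/Series Name/03 - Episode Title.mkv
    parts = path.parts
    return {
        "author": parts[-3] if len(parts) >= 3 else None,
        "series": parts[-2] if len(parts) >= 2 else None,
        "title": path.stem,
    }

def summarize(transcript: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize this video transcript in five bullet points."},
            {"role": "user", "content": transcript},
        ],
    )
    return resp.choices[0].message.content
```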
ChatGPT+, Bing, Stable Diffusion, ComfyUI for all sorts of things, mostly programming related. https://reticulated.net/dailyai/daily-experiments-gpt4-bing-ai/
Using AI in your day-to-day reminds me of getting a 3D printer: you can sense there are all these real uses for it, but you never think of them unless you work at it, or you wait for someone to make some Apple product that a million people use first.
I've been using NovelAI for solo RPGs; basically as an activity to do for fun. Before NovelAI I used to just write out the narrative myself. Sometimes that worked really well, but sometimes it didn't. The main problem I encountered was that I would get too engrossed in the mechanics or the combat of the system (playing out a D&D encounter, for instance). Then I would glance over to the Google doc I wrote the narrative in and just sigh. I didn't want to write that. I just played out the combat in my head! Now I have to write it down? Imagine after every segment of a Halo mission you tabbed out of the game and then wrote a diary entry about what happened.
It's not that I don't enjoy writing. But it was an abrupt shift in activities and my brain couldn't really do that over and over again. NovelAI's really smoothed the transition though, where I'll be editing a spreadsheet representing inn management, for instance, and then I'll tab over to novelAI on my second monitor and give it a quick prompt. "Red decided to buy wine from a new supplier, and..." and I'll see what it spits out. Smack the generate button a few dozen times until it writes out a scene. Sometimes it does something I think is interesting or adds a twist I'll rebound off of. Sometimes it just reaffirms something that I was imagining in my head.
And sometimes it does something completely cursed. I was introducing a giant, once, and NovelAI helpfully described it thusly: "Its feet were massive, with 25-30 toes in each foot, and a new foot was revealed as they got closer and closer." Yeah, not what I was going for. You do gotta baby it sometimes. Someone helpfully said that you have to stop that sort of thing as soon as it happens because otherwise it just doubles down on what you (apparently) wanted. This advice has worked really well for me.
I'm curious if anyone else has any other use-cases for NovelAI. I've personally not actually found it helpful for, er, writing novels. It turns the novel-writing activity into a novel-editing activity. I love writing. I hate editing. To me it feels like NovelAI takes the creative part out of writing a novel and leaves in the tedious part. Which isn't why I write at all.
I find it hilarious that even text-only AI are really bad at hands/feet.
Also, I'm interested in this solo RPG thing you describe. Do you use published systems or is this something you kind of freestyle yourself?
It usually starts with a published system, but I find myself 'homebrewing' more mechanics on top of it (within reason) if I'm getting invested. D&D & Pathfinder have worked well as mechanics, but my go-to is Fate for its flexibility. The usual recommendation is Ironsworn, but I actually haven't run it yet so I can't comment on how it is. All of those you can access the rules online for free if you're interested. Sometimes I do build a system totally from scratch, but more often than not it doesn't click and I drop it within an hour or two.
It's good to have some solid basis that you can't stray too far from. Otherwise you can just give your character the +infinity Sword of Awesome and it gets boring real quick. This is the point of the written narrative too. It sets in stone some of the plot elements, characterization, etc rather than it just being some nebulous ideas in your head.
The hardest part is to self-police since no one is really holding you back. Since you control both the monsters/enemies and the player character(s), there needs to be some sort of logic or rules to how the monsters behave. I've found "trying to win from both sides" is too easy to tip the scales. What is more fun is to give the monsters some logical objective or give them an instinct. Like, "the wolves will attack who looks weakest." Then that takes the decision-making out of my hands and it's more like solving a puzzle. I've set the rules of the encounter, but how do I use them to my advantage?
Whether solo RPGs or writing in general, I try to keep this Vonnegut quote in mind: "Be a sadist. No matter how sweet and innocent your leading characters, make awful things happen to them -- in order that the reader may see what they are made of."
Very cool, thanks for explaining! I will have to give that a try.
I'm not using it at all. AI can't do anything outside of its training data, so if you're working on something actually original, there's little it can help you with.

Also, since it doesn't understand reasoning, I don't really trust it for complex answers. There was a hospital that tried to use AI to diagnose TB in patient X-rays - it was more accurate than humans, but they found it was just weighting scans from old machines higher!

I think it's a good tool to get started or to fill in boring boilerplate stuff, but it's important to know its limitations and when you might be getting close to them.
I've used Copilot to try to solve quick problems with F# and PowerApps.

It has straight up lied several times about F#, and other times it's given really suboptimal solutions or obviously bad code.

It's done the same and worse with PowerApps, where it will often give a nice list of instructions that cannot be followed.

It feels like it's mostly useful so you don't have to ask "dumb" questions when you're learning, but at the same time it feels like it'd waste a ton of your time telling you something is possible that just isn't.
I have Copilot connected to my IDE at work but it's honestly been more of an annoyance than helpful tool. It's great when I write unit tests as most of those are slight variations of each other. However, when I'm actually writing business logic, it falls flat on its face. Rarely does it actually understand what I'm trying to do. It's slowed me down a little bit too as it's taken over the "tab to autocomplete" shortcut I'm so used to.
I use Bing Copilot as a programming assistant for scripts and looking up errors. It saves me a ton of time and allows me to work while still learning what I need to for my job (I try to write stuff out first then ask).
I know people are using it for emails and stuff, but I'd rather mess up a communication on my own than have one written on my behalf by an AI. Not for any weird strongly ideological positions, but I feel if I have AI writing my emails for me, even if I edit them, that's not really me communicating with colleagues or clients and it just feels wrong.
I was going to say something similar. Just the other day I used Copilot to make a somewhat minor but nuanced modification to a SQL query. Copilot's first solution threw an error, but it was then able to debug its code and later explain why the error happened. It genuinely saved me at least an hour of trial and error.
Someone here recommended using ChatGPT to generate cover letters for job applications. I've started doing this along with getting suggestions of what to highlight on my resume per job description.
It hasn't been long enough to report success or failure, but keeping my fingers crossed. I've been job hunting for over 3 years so I welcome any new suggestions.
So far, I've been using the site TalkAI and a ChatGPT bot on Discord to consult for quick queries related to my writing without having to make an account with ChatGPT. I don't mean that as in heavy research, but just to get input for small details that search engines wouldn't really help. For example, the most recent query I gave was "What's some restaurant food that requires a knife and fork to cut?" for a scene in a restaurant. My own tastes lean towards pasta or classic American diner food, which didn't fit with the characters or scene. It's the kind of question that Google will just... not help with, at all, given all the fluff-filled articles about food and restaurants.
The downsides for both options: TalkAI doesn't remember the previous question so you can't have a full, continuous conversation or build off previous queries. The Discord bot does remember, but it has a cooldown time for replies.
Though the other day, one of my friends used an AI app to write a chapter based on the synopsis of my story as part of an April Fool's joke. On that note, I am somewhat surprised to see that my phone's autocorrect added "Threlthorian" to its dictionary before I actually typed that word. I guess it can pick up words from docs I view, because that word doesn't show up on Google.
I use Goblin Tools occasionally to help me break down tasks into ADHD-friendly steps. Its functionality is about as far as I'm interested in engaging with AI.
Multiple things actually! I use AI to quickly remove noise from audio used for podcasts and communication videos. It's not perfect, but a great starting point. I've used Stable Diffusion to use my own art models and create new ideas based on things I've already done. And recently I started feeding ChatGPT information to summarise. The writing style isn't perfect but again, a great starting point.
So don't expect AI to do all the work for you, but use it for rough drafts.
I kind of do the same. I use AI to generate draft texts and then tweak them according to my needs.

Currently, if you directly copy-paste AI content, anyone who has used AI can tell it's AI-generated.
I played around a little bit with Fooocus, a locally installed image generator.
It's neat for some concept stuff, but that's about where the functionality ends.
I tried:
And none of them did anything useful; they were all just toys. Neat toys, but that's it. Plus, by now my company has ended the trial period(s), and since we cannot guarantee the safety of the collected data, everything is disabled.
I use GitHub Copilot to get rid of a lot of boilerplate code (I'm a frontend engineer). I also have it generate JS Docs.
I use ChatGPT a bunch to format data and help me debug if I've stared at something for too long. It's regularly wrong. As a commenter above said, you have to know what you're doing to use this in any productive manner. The times it's right, though, make it really useful for me. I also ask it to translate from a language I don't know to a language I do.
I use SDXL a bunch for cool images for our marketing site and demo apps. My company is in the AI space so it's pretty much expected.
I use ChatGPT to help with recommendations, or for finding out more information about things like the APDS shell of the French Puteaux SA-18 cannon - not much on the latter front, sadly.
I sometimes use ChatGPT and deepcoder-6.7b through Ollama in software dev to get a quick overview of how to do something with a given library I'm unfamiliar with.

I have to watch out for hallucinations, because sometimes it'll hallucinate APIs that by all rights should exist but don't. Still, it's decent at cutting through the fluff and less relevant bits that are common in guides on Medium, YouTube, etc., and just giving you an example.
They’re also nice because you can ask “dumb” follow-up questions that communities like SO and various forums might not be patient enough for.
I don't really have a professional use case for an LLM, but I do have GPT4ALL and have used Mistral Instruct for a few things now and then. Sometimes it's fun to toss ideas at it and see what it says, or ask it about something I'm only vaguely familiar with and get myself started on finding out more. One thing I like to do is just ask it to tell me a story about whatever weird shit I'm thinking about, like "tell me about a man who was eaten by a hamburger", "Write a letter to my mother explaining why she is forbidden from washing the dog", so on and so forth. It's a springboard in my mind, a generator of first steps I guess.
Edit: I wanted to give a snippet.
Dear Sister,
I hope this message finds you well. I wanted to take a moment to talk to you about something that has been on my mind lately. As you know, we have had our fair share of disagreements in the past, but I want to make sure that we are on the same page when it comes to taking care of our shared living space.
I understand that you like to put your fried chicken in the laundry basket while you wait for it to cool down before eating. However, there is a reason why this is not allowed. Specifically, I am forbidding you from putting your fried chicken in the laundry basket under any circumstances.
I use Perplexity AI for internet searches all the time, I think it's great. With most of my searches, I generally know what I'm looking for or know the right keywords to get the source I'm after, so I use a normal search engine. But occasionally I have a much more open question or I don't have the right words to describe what I'm looking for, and Perplexity is great for that. I particularly appreciate that it includes references to its sources in its answers so if you're confused by or skeptical of its answer, you can see where it got it from.
None! Dear gods, none! Except when it does something without my knowledge or there is literally no other way to do something.
I use ChatGPT almost every day for loads of different things in my life. I use it to help spitball ideas for tasks where I volunteer, for example, "How do I get our customers in age range A to B to interact more with our product?" From there, and in all other uses I have for it, I will get a list of results/ideas or a block of code. I know for a fact that none of what ChatGPT spits out is perfect, so I edit the results or the prompt based on how close it got to what I want. I also use it to look through log files and help me decipher what the errors are. It is great at processing a few hundred lines of logs and spitting out the actual errors. I have even had it resolve the issue in the same response!

My fiancée also uses it for her work by giving it a prompt for an email/work document with a bullet list of the points needed. Then she uses that as a rough outline. It also helped her generate new schedules for her employees, which made things a lot easier for her. Again, she knows this is far from perfect, but she uses all the results as a loose guideline/template to fill out and modify.
I know I am weird, but I am really excited to see all the possibilities that are arising from this. This could be REALLY helpful to certain people with disabilities, especially as it gets more refined/accurate/local.
None.