If you use ChatGPT or other LLM, how do you use it?
I am interested in how people are using ChatGPT, especially in a professional context. Any tips, tricks or pointers?
I would appreciate it if the discussion didn't revolve around the technology's negative aspects or future perspectives.
For getting ideas to spring off of, and to learn about things my parents didn't teach me, like basic life lessons: how long does an egg take to hardboil? Is it okay to use (insert common household cleaner here) on wood floors? Etc.
I do enjoy being polite to it (I asked for pronouns and if it wanted a nickname, and it said "it/its" and liked my suggestion of "ChitChat" for its nickname). It's always so polite to me. It's kind of telling, actually, how human it acts (or how socially anxious I am): when I know I'm asking a lot of it, I get nervous thinking it'll suddenly be mad. When I say a lot, I mean asking for ideas on a very specific prompt, and then asking it to generate 30 ideas for me to pick from. An example I've asked it is:
Me: "The plains area will not experience a fire. Instead, the plains' sky is covered in smoke and ash clouds, darkening the plains town. I like the nighthawk for the idea of this area's more difficult enemy. I need a name for this enemy species of nighthawk. Would you list 20 lesser known words for darkness and give the definition of each word?"
ChitChat: "Certainly, here are 20 lesser known words for darkness and their definitions:
For the enemy species of Nighthawk in your darkened plains, you might consider a name like "Tenebranighthawk" or "Cimmerianhawk" to reflect the theme of darkness and obscurity in the area. Their definitions should provide you with a better understanding of the meanings behind the words."
This ofc was a very long conversation, and by this point I'd already worked with ChitChat on my game for a while, and laid out very clearly which ideas I'd settled on as we went, which helped it to understand the context of my requests.
It's given me a good way to waffle through my pointless quarterly self-reviews, with the added benefit that my boss is actually impressed, because they're obsessed with AI.
Professionally: I like using ChatGPT to 'translate' the industry/advanced concepts I use in my peer-level reports into text that's more digestible when I need to share things with the wider company. I love that I can just plop in a paragraph or two of stuff riddled with industry terms and get it to give me a version for, e.g., knowledgeable-of-field readers and one for total laymen. Has saved me sooo much time.
Personally: Okay, this is gonna sound silly, but I love love asking ChatGPT to give me a scenario with constraints and an end goal for games like the Sims or RimWorld. It's completely revitalised the way I play those games; the Sims especially is so much more fun to play (YMMV) when there are win/lose conditions!
I love the idea of using it for games! I just asked about Skyrim and GTA and got some good replies
Doesn't sound silly at all - quite the opposite, it sounds a lot like making ChatGPT a DM for an adventure game, which is really cool!
The main issue is the memory limit on previous conversation, which is something like 3,000 words for the free version of ChatGPT. You have to use clever recaps or reminders to keep the salient points in recent memory.
Ooh, I really like this idea! I’ve been playing a fairly low difficulty RimWorld playthrough recently, and I feel like I’m getting to the end of my engagement with this playthrough, so I’d love a twist or limitation for my next one!
I use it constantly as a programmer in the form of GitHub's Copilot. It will suggest auto-completion of code I'm actively writing, and I can also chat with it to ask questions that would've previously required 10-15 minutes of googling to answer.
The auto-complete suggestions are kind of hit and miss. But when it hits, it really saves me a lot of time. For example if I'm refactoring a piece of code it will pick up on that and all I have to do is hit TAB for it to finish it for me. Then I always review the code and make any necessary adjustments. Which is still a manual process, but saves me a good bit of time typing and turning thoughts into code. I've also learned a few techniques here and there from the suggestions.
The chat is the most helpful part though. I'll ask it how to do X in Y language and it'll spit out enough code in chat for me to get started. Or I'll ask it to explain some function I'm not super familiar with and it'll give me a very abbreviated version of the docs for that function. I can certainly google these kinds of things, but clicking out of my code editor and into the browser, then searching for stuff leads to a lot of distractions, broken concentration, and time wasted.
I think a lot of its shortcomings are due to the use of ancient AngularJS in our project. It's a very old version of the framework and there's such a stark difference between it and the modern versions that CoPilot can't always make good suggestions. Heck, I struggle to find what I'm looking for manually due to that, so I can't really fault an AI for struggling as well. Fortunately I have noticed better suggestions when I'm working in React or vanilla JS, which are much more common.
Also, being able to find an answer to any web development question without sifting through a bunch of misguided Bootstrap or jQuery answers is super nice.
Copilot helps me quickly solve micro-problems. It seems to understand the general context of my codebase, so whenever I get, say, an error arising from Webpack, Copilot helps me quickly identify several lines of 'attack'.
It saves me so much time and mental energy, so I can focus on the big stuff.
Same here, I'm loving it for side projects and stuff.
Unfortunately the security at my current job is not in favor of personal licenses being used with our codebase. Hoping for an enterprise license at some point.
ChatGPT is my work bitch! I'm a Cognitive Data Scientist by trade.
I use it to do first-pass generation for a lot of the writing I have to do. I'm a researcher basically, and the thing I hate doing is writing. Having a lot of the bullshit of writing taken out for me is just fantastic. I give it how I think about things, which is essentially a rough outline with key points and phrases. It does the actual "writing", and then I just play editor. I even had it make me the text for slides on a presentation yesterday. It's just so goddamn useful if you don't like writing.
And I don't even have to edit much myself. I can also tell it what's wrong and let it try again.
I've also used it enough to have some tricks / approaches to making sure it stays on the rails I want it on. Before I got this job, I was using it to write my cover letters for applications. But I gave it my CV and two examples of good cover letters I had written, and then instructed it to match my style of writing and populate only with information in the provided CV. Boom, no hallucinations, and a passable (if somewhat flowy-er) cover letter is made. I usually would only have to make 2 or 3 small text edits per cover letter. It took the time from 20-30m per letter (I'm a perfectionist) to maybe 5.
I'm also leaning on it heavily as I'm starting my new job. My best programming language is MATLAB, but I am having to switch to R primarily. I know some from my Stats MS, but I don't know the quick / elegant ways to do things. So I just describe the code I want, and ask how to do it in R. If it errors, ChatGPT is the debugger too. And it's a really useful debugger, because I can ask for more detail or the theory behind something to get actual depth of knowledge.
At first I thought you were expressing a cruder version of "ChatGPT is my work wife" but now I get it.
Hahaha didn't think about it that way.
Same as your last paragraph, except for Unity/C#. Very good for getting a brief intro to new functions, or reminders of functions you've forgotten.
Code is meh, but you actually learn by debugging the bad (but usually theoretically acceptable) code.
The only thing I use it for is making trip itineraries easier to generate. I like typing in natural language to tell Bard exactly what I want to see, and it gives me the big-picture touristy things at least. I can ask it to put it in spreadsheet form too.
That's similar to GPT4's web browsing function, which I find kind of neat. Explaining to it in natural language the topic I am interested in and angle of approach, and getting a summary with sources listed.
Yeah, personally that's honestly the only thing I kinda trust these LLMs to do, and even then I just use it for broad-strokes kinds of things. It's kinda nice for that stuff at least; I used it for my Japan trip earlier in the year, and I'm using it a bit for my upcoming Korea trip.
I do use it for work too, now that I think about it, mostly Copilot, but I feel like it's only really good for things that are repeatable and tedious to type out. If it generates its own code, I'd say about 25% of the time it's completely wrong and makes stuff up. Otherwise I'd have to read through it anyways just to make sure it's what I need.
ChatGPT is a godsend for me, as I find it hard to express myself. Right now I mostly use it to improve my professional writing. Before, I could spend several minutes looking for the word that was on the tip of my tongue, or just overthink whether what I wrote was incorrect.
But now I can use ChatGPT like a reverse dictionary and ask it for the word I'm looking for; write a very rough, non-grammatically-correct sentence and have it understand what I'm trying to say and improve my text; or ask it how my text can be improved.
It is the editor I wish I always had.
I am on the self-hosting side of things so self hosting models like LLaMA and Mistral have been my focus. I have been working on two different projects.
There wasn't any good large language model (LLM) bot software for the group chat that my family uses, so I had to build a bot myself to give my family E2EE access to an AI-powered chatbot. It gets used in a similar way to ChatGPT.
I have been experimenting with implementing retrieval-augmented generation (RAG) by having the chatbot query an offline copy of Wikipedia and cite its sources for its answers. If I can get the general framework for that working, there are a lot of really interesting ideas I have, like combining it with downloaded OpenStreetMap (OSM) data and creating a sort of ultimate geography/history teaching tool that can run on purely local data without the need for internet connectivity. All the technologies needed for this exist already; it's just a question of writing the code to tie all of it together and making sure it's optimized enough for most people to be able to run it. I doubt I will make it very far on this project, but it's a fun idea to dream about.
So far I have been under the impression that LLMs are very computationally hungry. What kind of hardware are you running it on?
I've gotten GPT-4-ish speeds with an M1 Mac using LM Studio. It's enough for code analysis, but for general chatting it's a wee bit too slow.
At inference time, they can be made to run on remarkably little hardware. I have a competent instance running on 6GB of VRAM, parsing at 220 tokens/sec and generating at 22-25.
To put it in perspective, self hosters can buy about 24GB of VRAM on older-gen cards for about 250 dollars.
I have similar concerns to self-hosters on a professional basis. I work in a domain with serious security and compliance constraints.
Your second use case calls for a 2-stage rig; first a search engine wired to your corpus that retrieves a couple thousand tokens worth of text based on the user's initial query, then you put the results in the initial (system) context, and the query in the user prompt. The tricky part is getting the right content to come out of the search engine. Conventional search is okay, but vector search of sentence embeddings is better. Right now I'm using a BERT derivative for embedding, and Apache SOLR for search. Jon Durbin's excellent "airoboros" fine tunings have a mode made for exactly this.
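For anyone curious what that two-stage shape looks like in code, here's a minimal Python sketch. Everything in it is a stand-in: the bag-of-words "embedding" and cosine match substitute for a real sentence-embedding model plus a search engine like SOLR, the three-sentence corpus is invented, and the system/user message layout just follows the common chat-API convention.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real rig would use a
    # BERT-style sentence-embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Stage 1: rank the corpus against the query, keep the top k passages.
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus):
    # Stage 2: retrieved passages go into the system context,
    # the user's question goes into the user turn.
    context = "\n".join(retrieve(query, corpus))
    return [
        {"role": "system", "content": f"Answer using this context:\n{context}"},
        {"role": "user", "content": query},
    ]

corpus = [
    "Paris is the capital of France.",
    "The Eiffel Tower was completed in 1889.",
    "Rust is a systems programming language.",
]
messages = build_prompt("When was the Eiffel Tower built?", corpus)
```

The hard part, as noted above, is stage one: the better the retrieval, the less the model has to guess.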
I just recently paid for one month of GPT-4 and I'm trying to use it for interview prep. I can lazily paste a job offer into the chat box along with my resume and ask it to start a mock interview. I'm finding it a little hit or miss, but useful nonetheless. Being half-decent at prompt engineering goes a long way. I also use it to bounce ideas off for tasks where being creative is more or less relevant, mostly as a sanity check to assess the feasibility of concepts I'm feeling unsure about.
Bonus tip: I like to instruct it to strictly adhere to Crocker's rules, I find that it helps a lot.
Wow, that is such a creative use for it, it never crossed my mind.
I might just borrow this in the future :)
What is the purpose of instructing it to adhere to Crocker's rules? Does it get combative in your mock interviews?
It's just too amenable and lax otherwise. I don't think it can become decisively confrontational.
I frequently use Claude to help me get started on technical documents. I have a hard time starting them, so I explain to Claude the goal and any relevant contextual information. I also ask it to ask me follow-up questions to gather anything else it needs. After a few rounds of this it pops out a first draft of my document. From here I either edit it offline and resubmit, or I talk with Claude to have it do the edits. Then I ask it to review the draft and give feedback. I go through multiple rounds of this until I'm happy with the results. When it's all done, it's in my voice and at the quality level I demand from myself.
Lately I’ve been using it to extract data from HTML. It’s surprisingly good at just figuring out where content is, and from there I can work backward to understand how to address that data using css or xpath.
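A rough illustration of that workflow using only the standard library; the HTML snippet and the "price" class are invented for the example. Once you know which element holds the data, you can address it with a CSS selector like `.price` or an XPath like `//span[@class="price"]` (in practice you'd likely reach for BeautifulSoup or lxml at that point).

```python
from html.parser import HTMLParser

class PriceFinder(HTMLParser):
    """Collect the text inside any tag whose class list contains 'price'."""

    def __init__(self):
        super().__init__()
        self.capture = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs.
        classes = (dict(attrs).get("class") or "").split()
        if "price" in classes:
            self.capture = True

    def handle_endtag(self, tag):
        self.capture = False

    def handle_data(self, data):
        if self.capture:
            self.prices.append(data.strip())

html = ('<ul><li><span class="price">$9.99</span></li>'
        '<li><span class="price">$4.50</span></li></ul>')
finder = PriceFinder()
finder.feed(html)
```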
It's very nice to quickly stub out some tests with sample data.
Sometimes I use it to check my personal code for dumb logic errors like breaking out of a nested for loop.
The answers it provides are not always helpful, but the immediate response speeds up feedback loops more than talking to a rubber ducky would.
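The nested-loop break is a classic example of the kind of slip it can catch: a bare `break` only exits the innermost loop, while wrapping the loops in a function and returning exits both at once. (Function name and data here are made up for illustration.)

```python
def find_pair(rows, target):
    # A common slip is using `break` here, which only exits the inner
    # loop and lets the outer loop keep scanning. Returning from a
    # function exits both loops immediately.
    for i, row in enumerate(rows):
        for j, value in enumerate(row):
            if value == target:
                return i, j
    return None
```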
Some tips:
For example, I was having trouble recently scraping data from a site that used Shadow DOM. After a few hours of tinkering I finally got it working in python but it was a bit slow and I still had a few tabs left from earlier research. One of those tabs helped me to know that it is much faster to solve the problem in javascript: https://gist.github.com/chapmanjacobd/69075d77dae99c71a4195f01963830c3
People tend to log in to ChatGPT, ask it to prove P=NP or something similar, and then are hugely disappointed when the poor LLM starts to hallucinate complete gibberish.
Instead you should use it as a condensing and summarising search engine. Imagine a bot that sifts through all the crap on Stack Overflow and just gives you the answer. That's what they're good for in an IT context.
I use it for a variety of things. A few that come to mind immediately:
Professional use for me is currently verboten, due to legal issues of LLMs learning from the code they see. Might change Soon(tm). I think Microsoft has guaranteed Copilot to be immune from lawsuits, or at least they'll take the flak.
For personal use, I got fed up with Python's package management and used GPT-4 to translate my existing Python projects to Go.
I've also used it as a companion to help me in my shitty writing. I'm good at describing scenes, but really bad at purple prose and dialogue. I can just feed my poop to GPT4 and get back some really good ideas about how to make the text better. I can also ask it for ideas on how the world should look or where the plot might go.
As for me, I've only recently started toying with GPT4 and learning its capabilities and limitations. I find it interesting and useful for discussing tech & computer science, especially troubleshooting, but I struggle with finding some more complex tasks for it to assist me with.
I gave it a shot at PowerShell and Python, and it produced some output that had very basic syntax errors. In one case, it failed to pass the correct arguments for a Python function, and that didn't impress me much.
Honestly, I've had it generate functions for me in Python when I don't want to use my brain. It's sometimes easier for me to do that and either fix the result by hand, or keep scolding it for the errors until it fixes them itself, than to write the code by hand.
If my mind is actually limber and I feel like putting the mental effort in, it’s way faster for me to write good code manually, but it can be a good lazy shortcut.
I’m curious about the Python failure you experienced. It sounds like a mistake it shouldn’t easily make from my experience. Do you have the conversation saved and sharable?
Not the OP, but Copilot failed pretty regularly with infrequently used libraries? For instance, asking it to use CAD Query’s APIs properly is fairly hit and miss. Sometimes it can produce surprisingly passable code; others, it hallucinates parameters which don’t exist (but seem like they should).
I can't share the whole chat, but these are some excerpts. It's a simple random pixel generator using Pillow. The function below applies some rules to the pixels, then returns a new image:
image.putdata(pixels)

def apply_rules_to_image(image):
    width, height = image.size
    new_image = Image.new('RGBA', (width, height))
    pixels = image.load()  # Load pixel data
    .....
    new_image.putpixel((x, y), tuple(new_pixel))
    return new_image

new_pixels = apply_rules_to_image(image)
And it suggested this at the end of the script:
image.putdata(new_pixels)
Which is incorrect and will generate an error, as new_pixels is already a PIL Image and can be saved directly.
If you want any input on helping it be more accurate, I think if you used Python type hints the LLM would understand what’s going on better.
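For illustration, hints along these lines (a made-up helper, using `typing` imports for older-Python friendliness) make the shapes flowing in and out unambiguous to both a reader and the model:

```python
from typing import List, Tuple

Pixel = Tuple[int, int, int]

def scale_pixels(pixels: List[Pixel], factor: float) -> List[Pixel]:
    # The hints make it clear this takes and returns a list of RGB
    # tuples, not a PIL Image object, so a suggestion like
    # image.putdata(...) on the result would be an obvious mismatch.
    return [tuple(min(255, int(c * factor)) for c in px) for px in pixels]
```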
It's a good idea, but at work we use an older version of Python without type hints.
Also, the PowerShell syntax errors were even more obvious. I don't have the code anymore, but it was basically interpolating variables in a string as "text $variable" instead of using ${variable} or $($variable)
That's a shame. For the stuff I do (web dev) it's really good and will at worst make up an API function that doesn't (but should!) exist.
I use it probably every other day on average. Things I use it for include:
I've personally found that Bing Chat has been my favorite LLM to use. I typically use it for helping reorganize thoughts (for example, having it rewrite, supplement, or summarize notes), or to give me feedback on anything that I've written of any substance; I largely use it as a "secondary perspective" or a more powerful proofreader.
I use it to draft routine letters which I then edit for detail.
For now, my usage of ChatGPT is mostly limited to two things:
For programming uses, it’s decent at those but anything more demanding than that is hit or miss.
I’ve not found too many good uses for day to day usage. It’s been bad at the few things I’ve tried that’ve been focused on automating tedious things, e.g. “Give me a list of cars sold in the US in the past five years that will fit in a garage with X dimensions”, where it has to be goaded to give an exhaustive list and will often include irrelevant answers.
I’ve not played extensively with any other LLM.
I'm a developer. I use it on a daily basis. When it first came out I found myself using it more often than Google, but have since gained intuition on what it's good at and what I'm better off going to Google directly.
I ask it to write me X and Y in some language or framework I'm not familiar with and it's a nice starting point. I usually branch out and do research myself from there.
I often ask it how I can do something, like setting up a server or creating health checks for my app.
When I have an error and Google isn't helpful, I can copy the entire faulty code and error messages and it gives me suggestions to try out, if not a solution.
With the new image integration, I can send it a photo of a design and tell it to build it in whichever frontend framework I want (even some in-house ones), and it does a surprisingly good job.
I'm not actually using this for anything at the moment, just experimenting, but I found ChatGPT is great at coming up with settings and creatures for fantasy worlds. It's all way too much to copy/paste here, but I can explain my process.
I gave it an existing (as in, there's established lore about it) mythical creature and said "Can you give me some possible habitats this creature could live in?" It gave me a list. I then said, "I like X and Y of the options on that list, describe to me what a habitat would be like if those two ideas were merged into one place." It did so beautifully. Then I said "Okay, please give me 10 other magical creatures who would act as the original creature's loyal subordinates and would also thrive in the habitat we just developed. Explain what their duty to the original creature is." It also did that beautifully. Saved all of that and if I ever have time to pursue gamedev, I already have a good foundation for my first area.
No use for it professionally.
Personally, I use it to ask for simple stuff I would ordinarily search individual websites for.