What have you made using an AI tool?
I'm curious what people have made with the assistance of any of the new AI tools.
Let's skip low-effort things like asking ChatGPT to generate an essay and posting it as-is. But besides that, if you made something you think is cool, post it here.
It's great for brainstorming - the ratio I get is usually 3:1 for generic standard ideas to genuinely useful things I didn't think of before.
When trying to plan out a DnD session, asking it "Please generate 10 reasons why a wizard might be sneaking around in the back of a store after hours" gives me a lot of useful information.
Even if I discard 95% of what I get, AI is a great creative collaborator.
I totally use it like this, albeit much less creatively. Anytime I need to write a longer email, review, or something like that, ChatGPT helps me skirt the “blank canvas” problem. I just spit out my raw thoughts into it, it sends back something overly formal, and I can reference that to write my actual text.
I’m so glad to hear other people are doing this too. I struggle so hard with keeping my story ideas organized and consistent, and these new tools really help me keep a flow of ideas running while making sure I’m not constantly getting lost in the weeds. The simple ability to summarize what I’ve got together in a writing session keeps me focused.
Have you tried out Claude yet? With a 100k context window you can upload a small novel to it and discuss/edit the novel. For DnD this could mean you upload all of your campaign notes and world lore, then ask it for assistance and get much more in-world style responses.
I asked ChatGPT to create a PowerShell script to insert random keystrokes into the Teams app because I don't like being monitored by my employer for productivity.
It worked the first try.
If they really want to get anal about it, they can pull your telemetry data. I'm still trying to figure out a way to block that from being sent out, but probably by doing so I will send out a red flag somewhere. Maybe DNS-level?
For years, I have been using a Python script. It runs during a set period of time, and at the end of the day when I am finished, it stops. It also stops if it detects my mouse moving, so I don't have to worry about stopping it all the time. My method is to just have it press volume up, wait two minutes, then press volume down.
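For anyone curious, the core loop of that kind of script is only a few lines. Here's a rough sketch (the function names are mine, and the key presses assume the third-party pyautogui package; treat it as illustrative, not my exact script):

```python
# Sketch of a "stay active" script: press volume keys on a timer,
# but stop outside work hours or when real mouse movement is detected.
import datetime
import time

def within_work_hours(now, start_hour=9, end_hour=17):
    """True while the script is allowed to run (the 'set period of time')."""
    return start_hour <= now.hour < end_hour

def mouse_moved(prev_pos, cur_pos):
    """Any pointer movement means the real user is back, so we should stop."""
    return prev_pos != cur_pos

def keep_active_loop():
    import pyautogui  # third-party: pip install pyautogui
    last_pos = pyautogui.position()
    while within_work_hours(datetime.datetime.now()):
        cur_pos = pyautogui.position()
        if mouse_moved(last_pos, cur_pos):
            break  # real activity detected; no need to stop it manually
        pyautogui.press("volumeup")
        time.sleep(120)  # wait two minutes between presses
        pyautogui.press("volumedown")
        last_pos = cur_pos
```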
I also have a little device that I got from ThinkGeek many years ago, while they were still good, called the Annoyatron USB. It's a USB device that sends mouse jiggles, text, and caps lock signals by acting as if it were a keyboard and mouse. The PowerShell script is just for fun.
I work in IT and I know that both are detectable, but catching them would require that our IT department wasn't so woefully understaffed that we can't even keep on top of regular duties.
I'll have to try that!!
If your IT security department is any good at all they can probably detect it if you use PowerShell. They may not be able to detect it if you use Python but I'm not sure about that.
For years now I've been in the practice of creating artwork to serve as worldbuilding content for a story I publish weekly. Prior to the availability of AI generative image software, I had been doing things like fabricating fake newspaper articles or photoshopping real posters from the WWI era into propaganda relevant to my story, since that was more or less a thematic match to what I needed.
As I was running out of posters to shop, Midjourney got good enough that I could do all manner of crap with it - character art, propaganda posters, assets for smaller pieces, etc. It's been a godsend, and has let me do a lot of stuff that I otherwise wouldn't have had the time or resources to include.
Some examples:
https://i.imgur.com/NT4In8q.png
https://i.imgur.com/4y4zIqr.png
https://i.imgur.com/pWdgTig.png
It's been pretty incredible watching the rate of progress and seeing the growth in what the generation can create. I think it'll be a really valuable tool for authors to help visualize things and create incidental artwork that otherwise would never have happened.
Naturally, I started the topic because I had something to share too. Here are a couple of blog posts I wrote about using ChatGPT's Code Interpreter:
How to use ChatGPT's Code Interpreter
Making a Pac-Man animated gif with ChatGPT's Code Interpreter
Also, I usually use MidJourney to make images for my blog, though not for these posts.
I have aphantasia so it's hard for me to make art. I use it for my own enjoyment, but knowing how unethical it is I don't think I would share them.
Mostly weird lovecraftian or horror stuff is what I do
I've heard people tell me AI is "unethical" but I still don't buy it.
I've used MidJourney to generate hundreds of useful images so far this year, but I never would have contracted an artist to draw something for me, so my gain isn't anyone else's loss.
It's the same reasoning Hollywood uses to go after piracy, and I don't buy it. Instead I make cool stuff and enjoy myself.
Yeah, I used to think the same way. It's more about the AI model "learning" off of other people's art. I watched a long-format video about AI art and it changed my mind about the subject.
Please link the video. Human artists learn by looking at and studying the corpus of existing art. Many talented artists I know use artboards covered in existing reference art while generating new works. So there is nothing very new about that aspect of creation. What changed your mind?
More than anything, it's the economics. Especially since it's readily apparent that the companies training AI models are skirting copyright already.
There's a world of difference between "I read every Stephen King novel and it heavily influences my writing." and "I fed every Stephen King novel to an AI and now it can spit out large chunks of Stephen King's writing to anyone who asks"
But GPTs don't actually spit out Stephen King's writing. They learn high-level features of that writing, and then generate new text that is consistent with those features.
I'm not sure you can successfully argue that a writer's high-level features should be copyrighted. It'll be interesting to see how the arguments play out in the cases of the writers who are suing AI companies.
I understand the pushback against creative AI, but I have yet to see an argument to convince me it's any different from writers/artists being inspired by and using the styles of other artists. It looks and sounds fundamentally the same to me. I'm a software developer and it's rare for me to write completely original code. There are only so many thoughts in the world. Why is AI different from some rando learning from your art and basing their future pieces on that?
Not to be pedantic, but because it's people vs. an algorithm. People are actively making choices, actively learning and picking and choosing what aspects, ratios, colors, format, etc. they might copy or learn from, and their art evolves naturally from that. Machine learning is not doing that. It's taking approximations of datasets it's been trained on that have been tagged to say "this is a hand" or "this is van gogh" or what have you. I personally believe the sentient nature of actively making decisions as a human being is the difference, whereas algorithms aren't truly making those choices.
But the fact that I have seen not just you but so many other people try to justify AI generated images as the same as ACTUAL HUMANS learning a skill is very depressing.
I'd also like to be clear on my stance; I'm not inherently against AI anything. But I AM against where their data sources have come from and the inherent biases that they present. I am against creatives having their work "automated", because many people have lost jobs and will continue to lose jobs, since for corporations it's just a way to save a dollar.
AI as is should be a springboard, particularly ChatGPT. ChatGPT will straight up state falsehoods as truth, but the average person is too willfully ignorant to check the validity of the information being provided. I've used it for small creative projects, or to get prompts for writing, or to help make word maps.
Even AI-generated images can be used for inspiration, as a starting place to then use and adjust and put the human element back in.
Sorry, this got a little rambly!! But really it's people vs machine, and until these nuggets become truly sentient, how they generate information and images is not the same as people.
I do not think that is how people actually learn (although it might appear that they do). I think most people drastically underestimate the level of subconscious processing that runs under the hood. On the most fundamental level, what we learn are also just certain configurations of networks of neurons that either fire or not, based on connections that are strengthened or weakened by the input they receive. How it appears to you in your consciousness is not necessarily "real", and we knew that long before Sigmund Freud.
You might think that you "made up" a piece of art but in fact, some feeling just triggered the neurons that were active when you had that feeling when you were 3 years old. You don't remember it, consciously, but the shape of a building that was there is imprinted in your neural network together with those colors and lights you saw when you were in front of that building 2 years later... and voila, you are an artist now.
I think old Greeks were wise with their idea of being visited by a Muse, something external that you do not consciously control. I think the concept of authorship of art is way less obvious than you might think. What looks like a product of your creativity (or mine) is just a manifestation of specific connectivity properties of the neural networks that we have in our heads combined with the input we received.
And that input is a lot more than just the things that people usually mention as "inspiration" when someone's art is really similar to some previous art. That input is every piece of art you ever saw, every landscape, every building, every chair and glass and every human design, every shape, every tool, every face and every human body, every piece of clothing, every animal, every book you have read and imagined, every song you have ever heard, every conversation you ever had... well, you get the idea. Honestly, I think that the only difference between people and Midjourney is that most people attribute the authorship of their creation to themselves.
And don't even get me started about ChatGPT and that idea of a probabilistic parrot vs "real" intelligence. I would tell you what I really think about the way human intelligence works... but this is already too long and cynical, and it would get much worse, I can tell you that.
Except this IS how artists make art. I think you're digging deep into the philosophical side of what constitutes our consciousness and subconsciousness, which isn't really the point I'm trying to make. Of course there are going to be elements of the subconscious in the way people do things. Art is an amalgamation of a PERSON and their experiences. But artists specifically DO actively make decisions on what their art looks like, whether it's posing, colors, ratio, etc.
And I can speak of this with some authority. I've been making art for 16 years and steeped within the artist community. There's a lot of times that people will take specific aspects of someone else's work and incorporate it into their own style or specific pieces. It can be something as small as the way someone colors an iris or nose, or something larger like overall coloring style or general composition. When artists want to improve and explore they DO make these active decisions. That's not to say it's ONLY those active decisions. As you said, there are just things that have been imprinted into us, moments and glances that we don't actually recognize on a conscious level.
But AI image generators do not make these active choices. I'm sure at some point you'll be able to really lean into scope and explore more with things like the golden ratio. But until then, lots are very bad at that and at composition overall on the grand scale. And what I mean by that is the barrier of entry is so low that, for each one person who puts a lot of time and effort into generating an image that has good composition and actually looks good, there are 100 more not putting that effort in and getting low- to mid-level pieces that have all the hallmarks of poorly generated AI, but are good enough and get plastered everywhere online.
I’m an amateur musician, not an artist, but there is a fair amount of trying things and seeing if you like them. Sometimes this is informed by theory but it can also be accidental. Would the same be true of art?
It seems like evaluation is a big part of this, and evaluation is partly rational (did you play the right note, is your rhythm what you want it to be) and partially intuitive. The part about “does it sound good” seems more unconscious and perhaps the domain of neural network-style thinking, or so I imagine. Eventually you come up with theories about what’s likely to sound good, so you can do it more often.
Using an image generator is a collaborative, trial-and-error process where the generator doesn’t evaluate the art, you do. (Or at least, your evaluation is what counts.) After evaluating, you make changes by editing the prompt or other settings. It’s easy to get started, but the amount of control you have is frustratingly limited. With better control, I think it could become more useful to serious artists?
It’s possible to make music entirely with music software, without being able to play an instrument. I expect that’s where we’re headed with art - people will be able to make interesting images by “fooling around” with little control and getting lucky, but more serious artists will want tools that give them more precise control.
I also expect that average taste will improve somewhat. When desktop publishing got started, there were a lot of badly-done newsletters that used every font just because they could, but it settled down after a while. Early websites were similar.
Also, there is nostalgia for badly-done art of previous eras. Fortunately most of it disappears. People only remember and look for the interesting stuff. I expect there will be a nostalgia for early image generated art too, eventually.
I don't disagree with anything you said. People implicitly trusting that ChatGPT knows what it's talking about is kinda concerning.
I'm also not against limiting how AI is allowed to be used to make money. What I am against is people trying to abolish the existence of creative AI in its entirety. AI art and story generation have been a lot of fun for private projects and especially DnD world building. How these tools can be built in a way everyone is happy with isn't something I have an answer for, though.
I guess like most stuff in this world we're going to have a hard time with people taking extreme stances on both ends.
I agree with what you've said and want to add that for me, the most significant difference (between a person making art without AI vs. a person using an AI tool) is ethical.
A person getting inspired by art, and then making their own art, can just rely on their own labor without necessarily exploiting anyone else in the process.
On the other hand, widely used generative AI tools are dependent on exploitative processes.
Of course there are also non-AI creative tools that are products made using exploitative processes, it's just generally hard to avoid exploitation 100% in manufacturing. But there are just so many more options for ethical non-AI creative tools, and sometimes making art doesn't even require any tools (for example, singing, or drawing on the sand).
Personally I see AI as a tool so I don't categorically object to people using it to make art, I think that can be valid. But just as non-AI art can be problematic and subject to criticism, so can AI art. I do see AI art as being a huge enough problem at the moment to warrant widespread public protest against it (or at least widespread public scrutiny and pressure on governments to regulate it properly instead of just letting big corporations use this tech to squeeze wealth out of the public and into the hands of the few, as usual).
It doesn't spit out new text though. It's just randomly spitting out whatever has the highest probability.
If you created an AI and only trained it on
The most likely outputs would be:
In the case I provided, it wouldn't just be King's high-level features. It would be literally only the words King wrote (and published under copyright). It wouldn't have any other language at its disposal.
It's just the sum of all its parts, spitting out whatever is most likely to come next. The only ethical AI would be one that was only trained on public domain data.
So in the case of these large LLM models, if you prompt for a story in the style of Stephen King, it's going to just spit out language that is closest to what is associated to Stephen King. Which will likely copy characters, settings, themes, and tropes from his existing stories and spit them out in a slightly different order.
So... first of all, when you say "The only ethical AI would be one that was only trained on public domain data." I think that's an interesting proposition worth exploring. So if I make some factual corrections, it's not an attack or anything, I just want to better explore this proposition.
First, you say: "It's just randomly spitting out whatever is highest probability." this is usually not true. Most GPT-based systems have a setting like "temperature" which indicates the extent to which the GPT can generate something with high vs low probability. I once accidentally set a temperature value to be extremely high, and the output was barely intelligible. But I believe that even at lower temperatures, there's some element of randomness involved, so it's not always the same output. (this will vary by implementation, of course.)
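To make that temperature point concrete: temperature is just a rescaling of the model's scores before it samples a token. A toy sketch (the logit values here are made up for illustration):

```python
# How a "temperature" setting reshapes next-token probabilities.
# Low temperature sharpens the distribution toward the most likely token;
# high temperature flattens it toward uniform randomness (hence the
# barely intelligible output at extreme settings).
import math

def softmax_with_temperature(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(logits, temperature=0.2)
hot = softmax_with_temperature(logits, temperature=5.0)
# At low temperature nearly all probability mass lands on the top token;
# at high temperature the three options approach equal probability.
```

Even after this rescaling the model still samples randomly from the resulting distribution, which is why the output isn't always the same.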
Second, you say if you created an AI and trained it on two sentences, then it would most likely produce those two sentences. Aside from point 1 above, there's the question of whether such a system would actually be called an AI. It would certainly be strange to train an LLM on only those two sentences; after all, the first "L" stands for "large". The most likely model to train on them would be a character n-gram generator. I happen to have a character n-gram generator handy and put those two sentences in it. For character bigrams I got, as expected, nonsense:
For character n-grams, what you're really looking for is portmanteaux, like "workese" looks cool. Anyway, character 5-grams is a bit better:
and just for kicks this is what word (as opposed to character) bigrams looks like:
so the point is: it's important to be specific about the types of algorithms used, LLMs aren't trained on only two sentences, there exist simpler models that generate novel words but they don't generally spit out exactly the same words.
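For anyone who wants to play with this themselves, a character n-gram generator of the kind described above fits in a few lines. This is a toy version (no smoothing, no sentence-boundary handling, and not the exact tool I used):

```python
# Toy character n-gram generator: record which character follows each
# (n-1)-character context in the training text, then sample a walk.
import random
from collections import defaultdict

def train_char_ngrams(text, n=5):
    """Map each (n-1)-character context to the characters that followed it."""
    model = defaultdict(list)
    for i in range(len(text) - n + 1):
        context, next_char = text[i : i + n - 1], text[i + n - 1]
        model[context].append(next_char)
    return model

def generate(model, n=5, length=60, seed=0):
    rng = random.Random(seed)
    context = rng.choice(list(model.keys()))
    out = context
    for _ in range(length):
        choices = model.get(out[-(n - 1):])
        if not choices:
            break  # dead end: this context never appeared in training
        out += rng.choice(choices)
    return out

model = train_char_ngrams("the cat sat on the mat. the cat ate the rat.", n=5)
print(generate(model, n=5))
```

With a tiny training text like this, the output mostly stitches the input back together, which is exactly the point: the smaller the training data, the more the output is just recombined fragments of it.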
Third, when you say "It would be literally only the words King wrote (and published under copyright). It wouldn't have any other language at its disposal." this is not how GPTs are generally trained. Usually what happens is: an LLM is trained on a large amount of data, and the neural network learns high-level features. Then the system is fine-tuned on a smaller set of data more relevant to whatever specialized application you're trying to build. The idea is that the larger LLM learns features, and through transfer learning those features are also applied to the specialized application through the fine-tuning. (If it's a chat GPT, then there's also additional training you do to enable the GPT to participate in a dialogue.) So even Stephen King has not produced enough text to populate an LLM by himself, so his text would probably be used for fine-tuning. But even then, the larger LLM would be providing the structures that guide the final output (i.e. it's not just King's writings that are providing the final output.)
Finally, when you say "Which will likely copy characters, settings, themes, and tropes from his existing stories and spit them out in a slightly different order." this is analogous to William S. Burroughs's "cut-up" method, which is admittedly problematic from a copyright perspective. Burroughs used to just take pages of text, cut them in half, and rearrange them. Similarly, ancient Romans and Greeks used to cut up lines of poetry and rearrange them; they called them "centos". However, as I describe above, the approaches used by modern GPT methods, or even older n-gram methods, do not work in this simple way.
Anyway, I'm not an expert in this, I'm just trying to share what little I know, if I got something wrong someone feel free to tell me, thx!
FWIW I was just using an extremely brief oversimplification to make the main point.
Yes you are correct. But I am also correct, just that the weighting and possibilities grow as the dataset does.
It's not 'learning' high-level concepts and applying what it learned while making its own creative choices. It's weighing different properties based on how frequently they are seen in the context of what surrounds them. There are knobs and dials to fine-tune, but those really just adjust which thing it randomly grabs. It's always grabbing randomly; the knobs just adjust how much the weights matter.
In the realm of copyright, the LLM should be subject to every copyright of everything it consumed in its training data. So if an LLM consumes AGPL code, that LLM should be fully AGPL. I am aware of the incompatibilities, and that's part of why I say the only ethical dataset is one that is public domain.
I think it's a bit more complex than that, but we're starting to reach the boundaries of what I'm familiar with...
I'm not a lawyer, so I'm not sure... is this settled legislation or case law, either in the US or EU or elsewhere? It'd be interesting to see what the arguments for and against are/were. If not, I think this might be the kind of thing that (in the US) goes to the Supreme Court, though I suspect legislation will probably come first. I'm not sure what that means in the interim.
Alternately, are you making a moral statement? I kind of get the "gut feeling" of the argument, but for a more thorough argument, I think a lot of it comes down to the technical details of licensing and copyright laws. And my impression is that copyright laws are pretty outdated in the US.
TBH I kind of agree with you, I don't want some corporation taking stuff I post online and using it for marketing purposes etc. But I'm not 100% sure, and I think it's important to understand the details involved.
Should as in "the morally correct thing," yes.
It could take decades to sort it out in courts.
Yes, but that's an extremely small amount of training data. A big LLM has seen training data on every word in the dictionary. It doesn't just have training data from one writer; it has many. It's seen all the plots, and examples of every kind of character. It isn't just going to draw on Stephen King's plots and characters even if you ask it to imitate him.
We also don't know what level of abstraction it uses to do randomized partial copying. It's clearly not word-for-word or translating between languages wouldn't work.
I would guess the level of abstraction currently falls well short of what's needed to really do well at writing new fiction. But it doesn't seem like there are fundamental limits on an LLM's creativity? As they say about music, there are only so many notes.
There's a thought experiment about how if you got enough monkeys typing long enough, they will eventually write a Shakespeare play. That's extraordinarily unlikely for random typing, enough that it rounds to "it will never happen."
LLMs are random text generators, but they aren't just random. Although LLMs aren't writing Shakespeare plays yet, they are writing sonnets.
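To put a rough number on just how strongly "it will never happen" holds for truly random typing, here's the back-of-envelope arithmetic for a single short line on a hypothetical 27-key typewriter (26 letters plus space):

```python
# Odds that purely random typing produces one short Shakespeare line.
line = "to be or not to be"        # 18 characters
p_per_char = 1 / 27                # 26 letters + space, chosen uniformly
p_line = p_per_char ** len(line)   # probability of getting the whole line
print(f"probability per attempt: {p_line:.2e}")
```

That works out to well under one in 10^25 per attempt, for just 18 characters; a full play is unimaginably less likely. An LLM's sampling is nothing like that because the distribution over next characters is heavily weighted by training, not uniform.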
Sure here's the video. https://youtu.be/9xJCzKdPyCo
There's a part at 49:20 about what you said. That's specifically the part that changed my mind.
That part you linked - how did it convince you that AI art is unethical??
I just listened to it and the way I understand it, that argument is about originality and hinges on the claim that AI just remixes things but humans are able to add something original because they filter the inspiration through "personal experiences"... but what else are those personal experiences other than just another input? Sure, we value our personal experiences because we lived them... but how does that add originality? If I hooked an AI to a dedicated camera and mixed that feed with the rest of the input, would that make the AI an artist?
Anyway, I don't find that argument convincing but even if I did, how would that be related to ethics? Do you think that any art that is unoriginal is unethical?
I think using AI for your enjoyment is probably fine. It is unethical to make money from AI products trained on scraped "public" data, or from outputs of such AI.
I understand where you're coming from with that comparison to movie piracy, but the power structure simply does not match. Artists are not mega-corporations, they do not have the resources to go against the makers of AI products (OpenAI, Midjourney, etc.) not to mention the users of these products.
It is more convenient and cheaper to use AI than to commission a bunch of independent artists, and so people choose to pay for AI instead of paying artists. Now those companies that trained the AI probably deserve to make money. But their product wouldn't exist without the hard labour of millions of artists, writers, archivists, etc. And right now none of these people are being compensated for their involvement in these AI products.
In a way I do think people would be a little more fine with it, if these AI products and its outputs were non-commercial, but as it stands they are directly competing with the livelihood of artists.
Many people pay a small price to make piracy more convenient (or safe) instead of paying the artists. The people involved in creating the products for Netflix and co. are currently fighting to get fair pay. Music piracy is mostly dead, but musicians aren't much better off for it, they aren't unionized. Maybe there isn't a real solution for art under capitalism.
Either way, what we have right now does not help the people making art earn a living.
Whenever these conversations happen, it inevitably comes up that someone says something along the lines of:
Capitalism didn't do this, the internet did.
The internet is so ubiquitous that it's a de facto public space. Just as you can't claim an expectation to privacy in the middle of the street (because that's a place where everyone looks around and observes their surroundings), I think artists can't claim an expectation to copyright when publishing to the open web (because that's a place where everyone shares media and saves/copies/uses the things they find).
You can make moral arguments about how that's not very nice, but I'm just stating facts. Every artist whose art was placed in an AI training set chose to put that information out there somehow. This shouldn't be a lesson about how AI is terrible and bad; this should be a lesson about how putting all your full-resolution work on DeviantArt is terrible and bad.
There are very solid post-copyright monetization systems that exist - the Patreon system works very well for many artists, where instead of trying to carefully ration out your content and naively pretend that it won't be shared or distributed, you simply produce a content stream and sell access to the stream. Sell voting rights on a "what should I draw next" poll. Sell access to a livestream where you doodle. These are all valid ways for an artist to monetize themselves that don't depend on an intellectual property system with the same birthday as the Gutenberg press.
You must be serious, but what makes you believe this?
Yes, copyright on the internet has a fraught history. Art posted publicly is expected to be shared around. Motivations are mixed, but they don't really matter here. I believe we (the internet, artists) generally agree that attribution on shared art is considered nice. But even if it isn't done, people who are really interested (potential customers) have tools to figure it out for themselves.
This model has worked since the inception of the internet. Yes, there are artists who have been more protective. Either way, we never allowed corporations to do the same; this has always been non-commercial. Until AI, we were always able to hold them accountable.
Why do you believe that?
Do you think it would be okay to sell a book borrowed for free from a library because they can just print more? If not, then does the paper and ink make the product non-fungible?
I agree that in a better future we shouldn't need copyright. I would prefer living in a world where I don't have to be producing profits for someone else, by selling a part of my time to be able to live. Copyright or intellectual property can vary wildly between nations, so its role and applicability on the internet is particularly difficult. In general though, it serves to protect both the interests of corporations, and individual creators.
And at this point, we still need it, at least for the latter.
I feel like you didn't read what I said, but maybe I expressed myself poorly.
OpenAI isn't going to subscribe to anyone's Patreon, and neither are the majority of its paying customers.
Therefore we have to make them, whichever way we can.
Sorry for the long delay, but I wanted to come back to this after some thought.
Agreed. We can all say that attributing artist credit and other gestures are nice but they're optional. I'd like to specifically focus on what's required by law because that's much more concrete.
I don't quite understand what you're saying here. Non-commercial use of copyright material has been rampant, but the DMCA and other laws have been holding people accountable - at least somewhat.
If you're specifically talking about the AI use of public images to train its data set, then we ought to specifically talk about what is being 'taken' from the artists, as well as what is being 'made' by the AI.
To draw the parallel, let's imagine a human person going through the same steps that an AI does:
1 - They examine publicly viewable images, such as DeviantArt or photography galleries.
2 - They learn from those images and use them as their internal standard for good art.
3 - They generate original works from those internal standards and generate a bunch of art.
Frank Frazetta has a huge amount of artwork that's publicly available. It's on comics, paperback covers, albums and more! He also has a style so well defined that most knowledgeable people can pick out his work.
If I, a human being, were to study these pieces and practice until I could paint "like Frazetta" then am I stealing his work? This is my own paint on my own canvas, all I did was take inspiration from him.
Now compare that exact same process to an AI that's been given a catalogue of Frazetta's work to study, trained through a few million iterations until its work is high quality, and now produces work "like Frazetta" on demand. What's the difference?
My point with this comparison is that this seems like a bunch of hype surrounding AI rather than legitimate claims of copyright. Art students regularly mimic the works of others, hoping to replicate or learn from these earlier masters. Are they illegally remembering some artist's work? If they sell their painting, have they stolen from someone?
Yes, exactly. To be a little more clear, using library books as an analogy for digital files the library would never even be missing their book. I could instantly and perfectly copy the thing with no loss, but this also means that what I've done is of very little value. Who would pay money for a book when they could get a copy from the library for free? It's about the same business model as selling burned copies of CD albums - laughably backwards and not likely to succeed.
If OpenAI doesn't subscribe to people's Patreon, then they will not have access to the Patreon-only content. Simple as that. Ultimately it's the content creator who's giving their information away. If you don't want your files copied, don't put them out there.
The ease with which we can copy data means that copyright enforcement will be nearly impossible - and all the methods of making it happen are incredibly draconian solutions like DRM or the DMCA, which overwhelmingly help corporations like Disney more than they ever would some individual artist with a Patreon.
That better future where we don't need copyright? It's already here. The legacy dinosaur corporations insist it can't be done, and yet we have thousands of successful artists and content creators online who are doing exactly that.
I've been running a set of mildly easy cryptography puzzles with a few friends and this time around decided to add a story. ChatGPT is pretty poor for actual advice, but with the ability to churn out fake blogposts, pictures of characters, and copy for websites I've been able to add a blog that gets an update when the teams finish a puzzle, and throw up a simple website for a character so that the puzzles take on an interactive element - last week they found that site and had to email the fake guy to progress.
It's made it a lot less stale than it would have been otherwise and I've got content for a couple of months stashed away.
I use GitHub Copilot pretty much daily, if that counts.
I recently asked a GPT to generate a text adventure game for me. You know, the kind where it says "You wake up. You're in a room. You can look around, leave through the door... etc" and you type your command and it tells you what happens.
But in this case I had the GPT make the adventure based on George Eliot's "Middlemarch." I kind of wanted to be Casaubon but it made me Dorothea after I was married to Casaubon, so I went with it. So I woke up, puttered around the house in my unhappy marriage. GPT was like: "you remember the past days when you used to wear colorful clothes" and I was like "wtf GPT, Dorothea never wore colorful clothes" and GPT was like "I'm taking creative liberties, and by the way Dorothea wore colorful clothes by the end of the book." and I was like: whatever.
Anyway, I go talk to Casaubon and he's sneering at me. So I ask GPT for info about Casaubon's character. I'd re-read Middlemarch during my PhD studies (in CS!) and during that reading I'd been very alert to the themes of intellectual yearning and failure (Dorothea, Lydgate, Mr Brooke, Ladislaw, but especially Casaubon.) GPT found some points (and cited someone's writing!) about Casaubon that were kind of obvious but it was still interesting to see them stated that way (because every time I read Middlemarch I never had anyone to discuss it with) and GPT and I also had a nice little discussion on what kinds of things Casaubon could have done to be more successful (which might have been encouraging for me to hear when I was doing my PhD!)
Back in the game, I'm doing some note-taking task for Casaubon and I don't quite understand, and Casaubon's disrespecting me. So I'm like, "GPT, what are some ways to deal with troublesome people?" And GPT gives me a bunch of strategies which sound like they came from a self-help book. So I pick some of those strategies, and Casaubon starts to chill out a bit. Casaubon ends up finding a task that I can actually do, and because of the strategies that GPT suggested, Casaubon and I (Dorothea) start getting along better. After I finish the task, we go to dinner and I have the GPT end it there, and GPT provided a hopeful-sounding ending paragraph.
The main thing I wish I'd done is have GPT generate the text in Eliot's style -- I love her sentences. One thing I noticed is that the GPT wasn't as "insightful" as Eliot. In Middlemarch, every page has several great insights on people's ways of being, in a way that seems universal yet is tightly dependent on the narrative situation at hand. That sort of insight might be currently possible for GPTs with clever prompting. Alternately, it's possible that newer networks will have that insight by default. That's one way, though difficult to empirically measure, to subjectively track improvements in GPT "intelligence".
I tried to engage ChatGPT for some Terry Pratchett-style tiny stories, or even snippets, quotes, ideas, or discussions. It seems that the training dataset for fantasy writing dips too much into boring, regular children's fairy tales, and the AI has no wit. So overall the exercise was a complete wash.
I have also come to dislike the "confidently wrong" default style of ChatGPT output. In human conversations you can tell when someone is clearly spouting bull; they might become flustered when caught, and usually shame, or better yet curiosity, would get them to listen more and talk less. Even the most surface-level observation would at least be genuinely held, and one could go on about it. But with a dumb language model, at best I'm accessing literary criticism plagiarized from a better writer; at worst I'm being served nonsense confidently presented as fact.
Neat! Which version of GPT did you use? I wonder how Claude would do?
Unfortunately, I was using Bing Chat (GPT-4), which I do not recommend for this.
I did the same kind of game yesterday, playing Casaubon in Middlemarch. I began by repenting in prayer and altering my will. Then I thought to myself about what scholars I could communicate with to improve my arguments, and GPT helpfully offered several scholars of the area, including Mary Ann Evans (George Eliot) who wrote Middlemarch. Well I couldn't resist that, so Casaubon and Dorothea had tea with Ms Evans and Mr Lewes.
One thing that immediately bothered me was that Bing Chat's GPT was so unerringly optimistic and obsequious. In GPT's story, Ms Evans and Mr Lewes were fawning and loved Casaubon's writing (which they wouldn't have.) There was no drama or conflict. So I had Casaubon stay in London (with Dorothea) and look for a job, and sure enough GPT made sure he found one. I had GPT come up with a rival, but he was pretty easily defeated. I tried to come up with a dangerous scholarly trip, but GPT just said "You face some difficulties and dangers along the way, such as rough weather, scarce resources, hostile locals, or rival explorers. You overcome them with courage and perseverance, with reason and evidence, with grace and eloquence."
So I had Casaubon and Dorothea visit an opium den, thinking that surely there'd be some conflict or drama there. GPT says "You explore the room with interest and curiosity, hoping to find some hidden treasures or secrets. You talk to some of the people who are smoking opium, asking them some questions or making some comments that are respectful and constructive. You try to learn more about their lives and their reasons for smoking opium. You also smoke some opium yourself, as part of your research for your article. You smoke it with caution and moderation, hoping to experience its effects without becoming addicted or harmed." So I told GPT that Casaubon and Dorothea visit the opium den again, because they've had some strange urges after their first visit. That's when it got weird. The first paragraph was plain GPT positive blather, but then it got dark, talking about Dorothea asking Casaubon what was wrong with him, "you don't know whether to laugh or scream" and there was another paragraph after that, but I didn't get a chance to read it because GPT blanked it all out and just said "I can't answer that right now."
So that whole experience really highlighted how Bing Chat in particular (via RLHF or other means) was trained to constrain its output to the sort of thing that's appropriate in a professional setting. And sure, that's OK: you don't want to be using a GPT in the office and have it suddenly start spewing offensive stuff in front of your boss. But for the purpose of exploring literary writing, it's a bad match.
Maybe I'll try that Claude, thanks for the tip.
Let’s see: rewrote emails to be professional, rewrote texts to be more compassionate, got opinions on my essays, got help with my chemistry homework, used it to scour the net to find a hand soap with the right ingredients, made a lot of porn, made a logo for a report, rewrote my code to be more efficient, and asked for code recommendations.
My boss asked for a 4th of July social media image to post and I am sick of making things like that. For giggles, I asked Bing Chat:
"Would you please make me the most American looking image for the 4th of July? Something like an Eagle wrapped in the USA flag and add in some fireworks?"
I'm impressed at the four images it spun out:
https://i.imgur.com/8ZOhzgF.jpg
https://i.imgur.com/sE5eplP.jpg
https://i.imgur.com/sLd3224.jpg
https://i.imgur.com/dm4Qyar.jpg
Unfortunately, he didn't go for those. I ended up asking it to generate an oil painting of fireworks and overlaid some words and the company branding instead.
The main thing I've used it for is background removal and image upscaling for my website. While I don't think I'd ever use it to fully create, or even form the basis of, a piece of writing or artwork, it's been immensely helpful in those ways. My site is pretty heavily themed around the chocobos from Final Fantasy, and a lot of the assets I find and want to use are quite old or low-fidelity, so being able to just clean them up a little is very nice (even if it's possibly not the nicest on networks :P)
I used it to help write a bat file that did some semi-complex renaming on hundreds of files. I also wanted to try automating a process I was doing with AutoIt scripts, so I used ChatGPT to help with that.
I find it's a huge time saver with respect to having to learn things from the ground up. I can usually knock something out in an hour through trial and error if I have a rudimentary grasp of things.
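For anyone curious what that kind of bulk renaming looks like, here's a minimal sketch in Python rather than batch (the directory layout, filename pattern, and function name are all made up for illustration, since the comment doesn't specify them):

```python
import re
from pathlib import Path

def rename_files(folder: str, dry_run: bool = True) -> list[tuple[str, str]]:
    """Rename files like 'report_2023-01-05_final.txt' to '2023-01-05_report.txt'.

    The pattern is a hypothetical example; adjust the regex for your own files.
    Returns a list of (old_name, new_name) pairs so you can preview the result.
    """
    renames = []
    pattern = re.compile(r"^(?P<name>\w+?)_(?P<date>\d{4}-\d{2}-\d{2})_final$")
    for path in sorted(Path(folder).iterdir()):
        m = pattern.match(path.stem)
        if not m:
            continue  # leave non-matching files alone
        new_name = f"{m['date']}_{m['name']}{path.suffix}"
        renames.append((path.name, new_name))
        if not dry_run:
            path.rename(path.with_name(new_name))
    return renames
```

A dry-run flag like this is worth keeping whenever a script touches hundreds of files at once: you can inspect the planned renames before committing to them.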
A few times while programming, I've used ChatGPT to help surface hard to find documentation on various APIs or to generate similarly difficult to find examples of APIs in usage. It's also pretty good at succinct explanations of how to use command line utilities.
I've not made use of image generating models for anything beyond amusement, mainly because I'm decent enough with graphics editors that it's not a problem to produce what I need myself.
Biological virus/plasmid design. Before you freak out, neuroscientists use empty virus shells (capsids) and add customized DNA to these shells to modify the genes of cells in mice. The custom DNA has nothing to do with virus proteins, so the viruses can't replicate. They're one time use and only work on mouse tissue.
With that preface out of the way, occasionally I'll need to use specific strands of DNA which define what kinds of cells you want your protein to be expressed in. For instance, maybe the virus can get into any mouse neuron, but I only want it to make proteins in serotonin neurons. That piece of DNA promotes the expression of my protein in serotonin neurons, and doesn't drive expression in any other kind of neuron.
This is all a long way of saying I sometimes need specific DNA promoters, and ChatGPT is great at giving me a no-nonsense table of possible promoters that I can then research more. I have yet to have it hallucinate here, but it obviously requires verification for everything it outputs, especially in my domain.
That’s very cool. I’m wondering if there are more specialized tools for this? Is ChatGPT better than other ways you tried?
There are basic ways. For instance, most scientists buy viruses through places like Addgene. They have a basic search function that lets you select cell type, reporter, etc. Some other sites are similar. However, just because you don't find a virus/plasmid that matches your needs doesn't mean it's not available; it might not be organized correctly, or it might be obscure. That's where ChatGPT can be especially helpful in covering my bases, in conjunction with looking up reviews (when available) that go over the latest options.
When I need to use a viral tool, there's a heavy preference for using ones already made, since making one myself requires months of work. Oftentimes I'm aware of a promoter that would work, but I'll need a virus with all the correct pieces, so these extra constraints make me look for more obscure options that might already exist.
I used Artbreeder when I first started writing my series to get headshots for all my characters. I've since discontinued its use because I can't find out whether their model learns off open-source art or not.
I commonly ask for help generating hashtags for social media posts for the business.
I use ChatGPT pretty regularly to help me code.
Same. I find the more steps required to do something I ask it to code, the more likely it is to hallucinate variables that weren't present. But if I use it to write short functions for me, it's perfect. I can delegate the syntactical nuts and bolts, while I focus on the higher-level program flow. We really gel as a partnership. I'll pepper in words of encouragement and it'll mirror that behaviour. By this point it's my favourite colleague.
I use ChatGPT all day long making training data for my own LLM.
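Concretely, "making training data" often just means collecting prompt/response pairs into a JSONL file for fine-tuning. A minimal sketch of that step (the chat-style schema and field names here are one common convention, not something the commenter specified):

```python
import json

def write_training_examples(pairs, path):
    """Write (prompt, response) pairs as JSONL, one training example per line.

    The 'messages' / role / content layout mirrors a common chat fine-tuning
    format, but it's an assumption here, not a fixed standard.
    """
    with open(path, "w", encoding="utf-8") as f:
        for prompt, response in pairs:
            example = {
                "messages": [
                    {"role": "user", "content": prompt},
                    {"role": "assistant", "content": response},
                ]
            }
            f.write(json.dumps(example, ensure_ascii=False) + "\n")
```

One line per example keeps the file streamable, so a training run can read it without loading everything into memory.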
I’ve used it to create quick samples of logo concepts: firstly for brainstorming, but also to save time and money for the client, so that they can give a quick general idea of what direction they’d like to take it in.
Then I start from that brainstorming and feedback session, but I design it from scratch, with just the comments as guidance, as the AI-generated stuff is still quite bad.