A lot of what the author is arguing against are strawmen here. I've never heard any serious opinion that LLMs, or AI as a whole, aren't useful for anything. They're obviously good at certain things.
The argument that people keep having to make over and over until they're blue in the face is that they're not good at everything, and they keep having to make this argument because business people keep trying to put them into everything whether it actually improves things or not.
I keep having to talk my boss down when she says "why do we need to still do the budget manually? Why can't ChatGPT do it?".
Business people just don't understand the limitations of LLMs, or they're intentionally misunderstanding them. It's really frustrating to constantly get asked "why can't AI just do this?", to see an LLM in some application where it can't actually do a single useful thing, or to have the ten millionth app pitched that just sends a prompt to GPT-4.
No one is arguing that it's not interesting or useful tech. We just want it to stop being shoved down our throats or pitched as a solution to problems it's very bad at solving.
I've never heard any serious opinion that LLMs, or AI as a whole, aren't useful for anything.
I hear this opinion all the time here on Tildes. I have repeatedly heard in so many words that LLMs are useless, not good for anything, that they destroy the environment and give nothing in return.
One of my favorite authors, Diane Duane, who wrote some of my favorite books about thinking machines, is on an anti-LLM crusade. For months she reposted every vitriolic take about LLMs she could find on her social media, including encouraging others to drive any pro-LLM discussion off Bluesky. Eventually I unfollowed this beloved author who's been important to my worldview for decades.
She's mad that LLMs were trained on her books without her consent. I get that; authors should be able to opt out of LLM training. But there's nothing I can do about that. I could spend the rest of my life campaigning for or against LLMs and it wouldn't make the slightest bit of difference to these companies. Better to focus on the things you can do something about.
We just want it to stop being shoved down our throats or pitched as a solution to problems it's very bad at solving.
I agree; lots of businesses are trying to squeeze this new technology into places where it will only make things worse. It's a new technology and people are still figuring out what it's good for. The companies that sell it are talking like it's magic pixie dust that you can sprinkle on anything to make it better. Eventually people will figure out which use cases make sense and which ones don't and things will settle down.
It's possible to be vehemently anti-LLM and still admit that they're good at some things. That just means that as a whole, you think LLMs net out to an overall negative.
It's kind of like admitting that the Soviets were good at building housing while being anti-authoritarian.
I don't think anyone can honestly make the argument that they're not good at making convincing looking text. I see some random Twitter hot takes that they're bad at that too, but I don't think I've ever seen a serious person write an article or story about that.
The problem is that that's pretty much the only thing they're good at. Granted, writing convincing looking text is a really broad task with a lot of potential applications, but basically all of the anti-LLM sentiment I read has argued that they're not good at things other than that, which I basically agree with.
The problem is that that's pretty much the only thing they're good at.
I have a caveat with this sentiment, at least with the current top-of-the-line models in mind. Gemini 2.5 Pro is REALLY good at problem solving. If we argue that what it produces is "convincing looking text", then I would argue that's just semantics, because it's convincingly good at solving coding problems, regardless of what we decide to call its outputs.
I use it in my work and it has been a time saver: tasks that would usually take me a few days now take one afternoon.
Edit: to be clear, I'm not talking about vibe coding. The time I save here is time that I would otherwise spend on Google searches and browsing Stack Overflow.
How does it fare at the "Will Water Freeze at 27F?" question? Most bots spit out a thesis and non-answer at best or a confident wrong answer at worst.
A friend tried to justify it to me by saying "Well, if you word your prompt precisely as such...", to which I replied "That's not how humans ask questions and expect answers."
Most common models will reply correctly. Gemini 2.5 Pro, for example, replied:
Yes, water will freeze at 27°F.
The standard freezing point of pure water at sea level pressure is 32°F (0°C). Since 27°F is below this temperature, water will transition from a liquid to a solid (ice).
The post shows a screenshot of Gemini's AI in Google Search; I don't know for sure which model it used, but very likely it was Gemini 1.5 Flash, which was notoriously bad. Thus all the memes.
A friend tried to justify it to me by saying "Well, if you word your prompt precisely as such...", to which I replied "That's not how humans ask questions and expect answers."
I'll say one thing about this: I'm not trying to say that this model is pure magic and doesn't have the problems that LLMs in general have, nor that you can have normal "human" conversations with it. But in my experience, using an LLM is like using Google search: there's a proper way to talk to them to get the answers that you want. Talking with it like it's a person and expecting a deeper understanding from it is just not gonna happen with today's models.
But if you know how to use them, they're a time saver.
Problem solving is an even more broad category than producing convincing looking text, but there's also significant overlap there. I would say "producing convincing looking text" much more closely aligns with the strengths of LLMs than the much broader category of "problem solving".
To go back to my original example, creating a budget for a large organization is a task that would fall under problem solving, but it's exactly the kind of task LLMs are especially bad at.
I'd say that writing code is a subset of creating convincing looking text. The code it produces doesn't always run, and it doesn't always solve the problem we've asked it to solve in natural language, but it is convincing looking. Usually it's convincing looking enough for a compiler to run it without warnings too.
I would say that, yes, macro management is something I wouldn't trust any LLM to deal with either. Heck, I don't trust them to plan the design and structure of my scripts either. (not because they wouldn't work, but because I like to understand what I'm doing and writing, so I need to design them myself)
The way I use them is like this: I draft a plan, think it through, and then start coding. But instead of writing everything myself, I ask it to give me each part of the code. E.g. "Give me a function that receives this dataframe and returns it multiplied by 2 in every numerical column". Basically, I ask it for the pieces of a puzzle that I planned myself, and go from there.
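For that example request, the kind of piece it hands back looks roughly like this (a minimal pandas sketch, not verbatim model output):

```python
import pandas as pd

def double_numeric_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of df with every numerical column multiplied by 2."""
    out = df.copy()
    # Only touch numeric columns; strings, dates, etc. pass through untouched.
    numeric_cols = out.select_dtypes(include="number").columns
    out[numeric_cols] = out[numeric_cols] * 2
    return out

# Example: strings stay as-is, numbers are doubled.
df = pd.DataFrame({"name": ["a", "b"], "x": [1, 2], "y": [0.5, 1.5]})
print(double_numeric_columns(df))
```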
Where I find Gemini 2.5 really good at problem solving is in receiving my request, understanding what I'm saying, going through different possible solutions, and delivering me one. It's good at that, but also at resolving issues and errors: it quickly understands why the error happened, explains why, and provides different fixes. It basically saves me the time of searching the web for that error.
I'd say that writing code is a subset of creating convincing looking text. The code it produces doesn't always run, and it doesn't always solve the problem we've asked it to solve in natural language, but it is convincing looking. Usually it's convincing looking enough for a compiler to run it without warnings too.
Reading this, I get vibes from the times that I worked with GPT-4. I would totally get this if we were talking about those earlier models. But with Gemini 2.5 Pro, DeepSeek R1, and Sonnet 3.7... I can't say that has been my experience.
The problem is that that's pretty much the only thing they're good at.
I'm using DeepSeek R1 for work sometimes and it's definitely good at "reasoning" (I think of it more as "computing with tokens"). I've given it some non-textbook problems (that master's/PhD-level students struggle with) and it often does a really good job at them; the reasoning trace is also very convincing. I think for those models in particular, saying they just write convincing looking text is wrong.
the reasoning trace is also very convincing
Notably, Anthropic just released research saying LLMs lie about their reasoning: https://arstechnica.com/ai/2025/04/researchers-concerned-to-find-ai-models-hiding-their-true-reasoning-processes/
It bothers me how often people use the word "lie" when it comes to LLMs. I know there is a definition where a lie can be said to be any falsehood, but in common parlance lying usually implies intent. This article also uses other language that implies intent like "hiding" rather than something more neutral like "unknown."
If I give someone the wrong answer because I was taught the wrong answer, or because my brain misfired in that moment, that's wrong but not a lie. If the chain of thought mechanisms are not a good representation of the actual reasoning of the model, that means the LLM is not good at generating an accurate output regarding its reasoning steps, but it's not a lie.
Yeah, this is why I much prefer the "stochastic parrots" description to descriptions that further anthropomorphize what LLMs are doing. Humans already naturally try to anthropomorphize shit constantly, and it's funny (but frustrating) that even people criticizing LLMs will anthropomorphize them the same way as the hype bros do.
One could argue that an LLM has intent in the same way a battalion, a group of voters, a corporation, or a constitution has intent. These abstractions are often said to have intent because they aggregate the intentions of multiple individuals. They do not possess intent themselves, but they convey the combined intent of intentional beings.
I wrote a post on Tildes a week ago, on the ARC-AGI-2 benchmark, with the intention of cutting through some of the polarized, it's-either-black-or-white discourse on the capabilities of LLMs.
I encourage people to read that post and comment on it. There is now limited evidence that at least one of the newest frontier AI models — namely, OpenAI's o3 — is capable, to a limited extent, of something we could reasonably call reasoning. This challenges common narrative framings of AI that attempt to downplay its capabilities or potential. It also challenges the idea, common in some circles, that AI models have already possessed impressive reasoning ability since 2023, since the reasoning ability detected in o3 is so small and so recent.
they destroy the environment and give nothing in return.
Well, this aspect is mostly true. The last thing we needed, while people are still shilling NFTs and crypto, is another reason to whip up a million server farms. The use cases certainly don't justify that as of now.
I hope DeepSeek eventually ends that brute-force approach and we can go back to reasonable computing for such tech. Our single machines are more powerful than ever; let's take advantage of that.
She's mad that LLMs were trained on her books without her consent. I get that; authors should be able to opt out of LLM training. But there's nothing I can do about that.
Sympathy is the least you can do. Solidarity is even better. The lawsuits will eventually settle this stuff, but it's not like there's literally nothing to do as an individual.
I will add this: If you think it is bad on here, do not engage about LLMs on Mastodon (some circles, at least).
Consensus there is way less nuanced and way more convinced that research on these should essentially be stopped immediately, for a variety of reasons – in my experience at least.
And I’m not even sure if I’m disagreeing with them, it’s just surprising to see such an overwhelming majority opinion among a traditionally "techy" crowd.
LLMs as they are today are pretty much everything Mastodon avoids being. They are centralized tech relying on huge server racks owned by billionaires, a black box with no transparency on how they compute, and have zero accountability in how they procure data or even in the answers they output.
I'm not surprised a decentralized, open-source community is so vehemently against the model, even if the tech were actually sound. They spun up Mastodon to get away from such models, after all.
This makes sense, thank you. Even in the case that they weren't functioning as a black box – which is a massive if – I guess even for the already open-sourced model weights, there had to have been a big company (Facebook for Llama, I think?) behind the initial training, with all the data stealing and scraping that entails.
Yeah, open-source model weights don't really prevent models like these from being black boxes, unfortunately. They're more or less black boxes even to the researchers who most directly work on them and have access to those weights.
There was a time that VCs were throwing money at anything blockchain-related. I can't find it now, but I read about one company that didn't seem to do anything except accept funding and give talks about how great they were.
We're either in that place now with AI or moving toward it quickly. AI solutions get funding and make investors happy, even when they're senseless or pointless - the vast majority of investors can't tell the difference. So if a company is doing poorly, the CEO announces the awesome stuff they're going to do with AI and that might make folks ignore reality.
We're definitely there, but the main difference is that AI is legitimately useful in a lot of applications, and has been for decades. No one has found a good use case for blockchains except for cryptocurrencies, which are also arguably not very useful.
Every blockchain pitch was just some variation of "we invented distributed databases, but worse in every way". It was a solution desperately in search of a problem. Most AI investments, by contrast, are predicated on it being a solution in search of a particular implementation to solve a real problem that's already been identified.
I liked Spock's World by Diane Duane.
As someone who is trying to become an author, AI scares me. I don't think it can produce good fiction yet, but it already poses a threat to low-quality, human-made fiction. Unless someone works on it, I don't think raw AI output is usable yet.
I'm talking about ChatGPT, which is the only one I use. If I give it a sample and ask it to make it more comedic, scary, serious, or irreverent, the changes it produces are impressive. They are not usable as-is, since they have the distinct soulless affectation of every AI, but I could use the result as a starting point for something. So it is usable in the sense that I will still have to work on it quite a bit. At that point, I might just write it myself.
What really scares me is thinking about what LLMs will be 5 or 10 years from now. Text is AI's strongest, most basic element. Am I trying to break into a career that will no longer exist?
I suppose it will always exist for highly sophisticated, "high art", well-known authors who cannot be easily reproduced. They also embody a certain provenance. But what about those writing genre fiction that draws from cliches and stereotypes? This is not to say that one is better than the other, they just have different goals. The latter might be at a greater risk than the former.
It's really common to see anything made with AI dismissed as "slop" completely regardless of the quality. There's a lot of people that seem really locked into this mindset of "AI is bad so everything made with AI is bad."
Plus all the bizarre environmental claims that make it sound like every picture destroys gallons of clean water and burns down a small rainforest.
Thus far, the only practical application of LLMs that I've seen that qualifies as "not slop" is tagging masses of unknown content. Most other things, like code completion, either worked 90% as well without it or require such a fine-toothed comb of review that I might as well have written it myself.
The AI art stuff is somewhat interesting as a toy. But it's exploitative AF as a replacement for the content creators whose work (largely taken without consent) trained it. And it's targeting the wrong thing: we should automate other things to give people more time to create art, not the reverse. AI music and voice acting in particular are dangerous beyond just being jokey parody toys.
I've used ChatGPT for writing Unity code. It's not perfect, but it's surprisingly good at getting me from "I have no idea how to even start this" to something that will work pretty well but maybe needs some adjustment.
The thing it's REALLY good at is when you tell it "here's some code. It's supposed to do this, but it's doing that". Then it instantly says "on line 37 you did x but it looks like you probably meant to type y" and fixes everything.
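The kind of fix I mean, as a made-up minimal example (shown in Python rather than Unity C#, just to illustrate the shape of the mistake):

```python
# "It's supposed to return the average, but it's returning something huge."
# The bug is the classic grabbed-the-wrong-variable kind an LLM spots fast.
def average(scores: list[float]) -> float:
    total = 0.0
    for s in scores:
        total += s
    return total / len(scores)  # the buggy version divided by `total` here

print(average([80.0, 90.0, 100.0]))  # 90.0
```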
I've talked before about the AI code review bot $work uses, which I'm pretty sure is just ChatGPT in a trenchcoat resold as a SaaSaaS product.
Probably two thirds of what it says is somewhere between completely inane and accurate-but-not-helpful. It frequently suggests adding null checks for non-nullable variables, for example. Last week I had it suggest adding some absolutely nonsensical type casts that would not even have compiled.
The other third is actually very good. Today it caught a subtle bug, where I wrote a test case, made it pass, then refactored the test case such that it still passed but no longer actually tested what it should have.
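In shape, something like this (a made-up minimal example, not the actual code from work):

```python
# Hypothetical minimal example of the bug class described above.
def normalize(s: str) -> str:
    return s.strip().lower()

def test_normalize():
    # Original test: compares against an independently written expected value.
    assert normalize("  Hello ") == "hello"

def test_normalize_refactored():
    # After the refactor, the "expected" value is computed by the very
    # function under test, so this assertion can never fail: the test
    # still passes but no longer tests anything.
    value = "  Hello "
    assert normalize(value) == normalize(value)
```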
If someone made a tool that reliably did the second thing but not the first, I'd call it an amazing technological advance. But that's not what we have today. It's the noise mixed in that makes it frustrating. It's not quite bad enough to blanket ignore it, but it's still a lot of mental effort on my part to fact check everything it says.
LLMs are good at the obvious: language digestion. Their output is quite useful to a human, such as yourself, who needs to distill information. They're not there to provide unquestionable answers, but to bring hypotheses to the forefront of your mind. In that case, the fine-toothed comb is part of the process, and the overall resulting workflow is often faster with the LLM than without. This significant degree of usefulness can apply to many domains, and therefore many people. On the other hand, I believe this output shouldn't be sold in the mass market under any circumstances, by itself or sneakily disguised as part of something else. The human consumer is essential. If there isn't a specific human who needs it, it's slop.
Honestly I've seen one use that impressed me: Cline with an internal MCP server for the company intranet.
It's basically a glorified search engine that can perform its own follow-up queries on intranet APIs. For finding answers to questions like "How is property A determined, and what processes does it control?", it does a remarkable job surfacing all the relevant documentation.
Plus all the bizarre environmental claims that make it sound like every picture destroys gallons of clean water and burns down a small rainforest.
It is hard to deny that there is an environmental impact involved. There is the insane amount of compute used to train these models and the staggering number of datacenters being built specifically to run them. Many of those need a lot of water to be kept cool, and not all of them are built in areas where water is a resource that can be taken for granted.
While generating a picture doesn't directly destroy "gallons of clean water", making use of these models does have an indirect impact. It is still easy to be dismissive of that. But try looking at it from the context of millions of people using LLMs and other generative models to generate trivialities. In that context, I feel it is not warranted to dismiss these concerns as readily as I think you are doing here.
So yeah, the few images you generate for your PowerPoint slides, blog post, etc. might individually not consume that much. But they most certainly have a much higher impact than using already available stock material, certainly when you scale it up from just you to everyone else using these models.
To be very clear, I do think all these models can provide value and be useful in some contexts. It's just that for a huge number of cases they provide little to no added value. Once you balance that against the environmental impact of training and running these models, it is easy, to me anyway, to see why people have a dismissive attitude towards most AI-generated content.
I mean, if people can tell it's AI, it's probably slop by default. That's part of the issue. I don't think I need to highlight all the weird proclivities evident in generative AI, or the outright uncanny feeling of current gen-AI videos. It will just feel off if you take the time to do more than look at the surface of the image/video.
It will be interesting when even a trained eye can't tell the difference. I'm not exactly optimistic we're close to that point, though. Maybe in a few years it will genuinely fool non-artists (not just people casually scrolling on Twitter).
I'm not really sure if "slop" in these situations necessarily means "something lazily cobbled together"; it seems that a lot of people are just using that term to mean "the output of an AI program".
Which is not to say that I'm doubting your claims or your reading of the situations you're talking about. It's just something I've noticed recently.
To add onto this, even before "AI" was mainstream in the last few years, we've still had chatbots and phone trees, not to mention poorly trained offshore firms sourced for customer service or tech support.
There's already a bias against the technology in terms of usefulness when someone has to go through ten million prompts or find the right phrases/statements, all just to get something escalated to a place where their problem can be solved.
So when a business touts "we're rolling out XYZ to make things better, now with AI", the general public is skeptical at best, cynical at worst, that it just adds another layer of getting screwed.
I've personally been involved in airgapped systems developed with LLMs used in EHR software. The amount of charting providers have to do is a whole other conversation, but the number of "hallucinations" which happen is unnerving.
I personally have no problem with them using it as a tool, similar to UpToDate as a reference, but when they automate the whole thing, it’s going to be super shitty that you were misdiagnosed due to an automatic review of your health records and test results.
it just adds another layer of getting screwed.
If they ever kill the 'press 0 to talk to a human' option I'm just gonna light the world on fire.
In my experience, the pre-LLM chatbots ranged from OK to useless. The LLM chatbots also range from OK to useless, but now with a whole lot of extra bullshit noise in the middle.
If your chatbot just spews info from the FAQ, it is useless when my problem isn't in the FAQ.
Or my personal favorite, when the thing I want to do online is not able to be done online, so I have to call in, only to be put on a 30 minute hold loop for a human with the loop telling me "for faster service, you can service your account online!"
No! No I can't! I tried that! That's why I'm calling! Just put me in touch with someone who can actually solve my problem!
I'm not sure if they legally can. Or at least logistically. I remember quite a few fast food joints trying to automate the drive-thru process, and by now they've all switched back to humans. For now. The service was just slower, the bot couldn't always make out the orders correctly, and I remember a few were more or less programmed to offer discounts when things escalated.
Legally, who knows? We're still working out accountability in the courts. If harm happens in a humanless joint, logically someone high up takes the hit. Even if they complain about paying $20/hr, it's much cheaper to let a human take that fall than to worry about that situation.
I find that interesting, because to me, "LLMs are useful for everything" feels like as much of a strawman as "LLMs aren't useful for anything", in the sense that I rarely see anyone say either thing outright, but I often hear people complain about other people saying those things.
I can believe that this has a lot to do with the bubbles I'm in, though. My work is fairly tech-y, so most people have a fairly good understanding of how AIs work, and what their limitations are, but also are mostly excited to try out and play with new things. Outside of work, most of the people I know aren't particularly tech-y, and just haven't found a use for it yet outside playing around with it a bit. But I can imagine if you know people in that niche of tech-adjacent, mostly managerial work, they have a lot more exposure to the AI marketing, and a lot less understanding of what's going on under the hood.
I've also heard people being forced to use more AI as a KPI thing, I can imagine that would really rile me up, but it's just so completely outside my experience of either work or AI that I can't really comprehend how anyone would come up with that idea.
I have an app that automatically shields you from LLM overdose. Interested? It's basically a wrapper on top of ChatGPT. (I kid, I kid, please do label as noise)
The author doesn't really engage with any of the criticisms. He just handwaves them away and says something to the effect of 'it'll get better.' For someone arguing against 'dismissiveness,' he certainly is acting rather... dismissive.
The argument about stealing art is particularly grating to me. Yes, knock-off t-shirt shops and pirating services have existed since the beginning, but LLMs are stealing and profiteering on a global scale.
How much of Anthropic's new $200/month tier goes to licensing the training data from its original creators?
The argument most companies have been using for this boils down to "there are so many of them, it's not possible", which doesn't sound like it'll get better anytime soon.
As far as I could tell from following the court cases, basically the only form of "stealing" they did was scraping, and even that is mostly a problem if you then distribute the data (that would be piracy). It's basically the same reason seeding a torrent can be punished, but just viewing (i.e. copying to your machine) a movie on a pirate website isn't (the pirate website is the one breaking the law, not the viewer). So if you consistently apply the standards we've been using, the AI companies aren't doing anything wrong, and certainly not "stealing" anything. And thank god for that: imagine if it were a copyright violation to base your work on someone's copyrighted material without asking for permission or a license first, no matter your contribution or how much the end product changes (in this case the end product is not even in a different medium; it's a different entity entirely). It would be a copyright nightmare and an end to creativity, since nothing is truly "original"; everything is based on something.
Now you might argue that due to various reasons like the scale of it all it needs its own regulation, which is fine, but under our current understanding of IP they seem to be in the clear. Certainly not "stealing".
There's a clear difference between basing your own creative work on an existing work, and using existing work in the creation of commercial for-profit software. The first isn't a commercial use, and the second is.
Even if the first is actually commercial use, it's still fine. The line isn't whether it's commercial use or not (copyright cares precious little for that), the line is whether it's a Derivative Work (technical term) or not. Most AI artwork is not derivative work. Studio Ghibli's style isn't copyrighted, their individual works are. So unless you can point to a specific work and provide specific, concrete elements that were straight up copied with minor adaptation, ripping off Studio Ghibli's style isn't infringement. That'd certainly be easy to argue if you prompted the AI using copyrighted material, but not at all easy if it's just a bunch of training data.
A major factor when determining fair use vs copyright infringement is the impact of the work being defended on the marketability of the original. It's pretty plain to see that these AIs are outcompeting and crowding out the people whose work they are based on and owe their existence to.
And if we're talking about copying, there are certainly plenty of instances where AI does just that, reproducing characters, likenesses, and even whole images at a high enough fidelity to get you in trouble. In some instances, the fidelity is high enough (check out the PDF linked inside the article; it has better examples) that you could only spot the difference with close, side-by-side inspection.
While you're correct about it being a major factor in a fair use analysis, one of the prerequisites to a fair use analysis even needing to happen to begin with is that there's actually something that counts as use of their copyrighted material involved. Reproducing copyrighted images, as in your links, obviously counts, but style cannot be copyrighted, and it would be absolutely horrifying for human artists of all stripes if it could. Way too many people are conflating very different things when they argue against AI copyright infringement, and the validity of the arguments likewise varies dramatically depending on the case. Recreating the Studio Ghibli style, as in the currently popular trend, is obviously not copyright infringement, regardless of whether it's for commercial use and even if it were trying to compete in the same market as Studio Ghibli's films (which I'd argue the current trend definitely isn't). Whether using Studio Ghibli's films as training data would be copyright infringement is currently an unsettled question, legally speaking, but it's a completely separate question, and way too many people are talking about AI copyright infringement like the two are the same.
Copyright law is already pretty broken imho, but even if I were a huge stan for it, the ability to copyright style would absolutely annihilate humans' ability to legally produce creative works, and a lot of people are ignorantly arguing for that to be reality when they don't distinguish between actual copyright infringement and mimicking a style when it comes from AI.
I'm not talking about styles; like you say, the issue is whether the models themselves are derivative works. When assessing that, it's hard to ignore the fact that they are competing in the market with the copyrighted content they acquired and ingested against all licensing agreements and all explicit notices that "This work is not to be used in the training of AI models." It's not arguable that they've done this, especially because it's clear that these models are copying, however lossy that copy is; the only question is whether it's defensible by claiming it falls under fair use.
While you may not be talking about style, I wanted to explicitly emphasize the difference between the two on a copyright level, because a lot of the criticisms of LLMs on copyright grounds do not distinguish between the use of copyrighted material to train the models and the output resembling something copyrighted. I think your analysis is correct for the former: whether they actually count as fair use is the unsettled legal question atm, not whether copying took place, afaik. But way too many people extend their accusations of copyright infringement to any output that resembles a copyrighted work in any way, including uncopyrightable elements like style (this is very common on Tumblr in my experience). Because of this, I wanted to very explicitly delineate that these are two separate questions when it comes to copyright.
Fair Use is a tough one. Commercial use is a factor, though not usually the most important one. Gauging if the works are transformative vs derivative is often the largest question, as well as if they can displace the original in a market. I think the results may be very different for images than for text, but we'll have to see.
Your links are good examples of that, though I will mention that Feb 2023 is basically a digital millennium ago now. The tech is moving a lot faster than the courts do.
That's certainly a valid way to look at it, and it's hard to argue with what the law says.
However, it's pretty clear that regulation has been behind the pace of technology for so long that applying current laws is a fool's errand imo. And in the absence of well-informed regulation, the only thing we can fall back on is the intuition of well-meaning experts. I don't claim to be one; I'm just saying my intuition is that LLMs are substantially different from any previous technology, and I'm trying to think about it from first principles.
Your comment saying I'm too hand-wavy sure feels rather... dismissive. jkjk I was already feeling long winded and didn't want to draw out my point. Things will get better, and I want to inject optimism into the discourse. Too much pessimism, cynicism, and dismissal out there and I want to balance it out.
I didn't find it hand-wavy. I enjoyed your post, and found it to be down-to-Earth and relatable, even though I didn't necessarily agree with every point.
For controversial issues, there is an endless stream of hot takes that don't really deserve a full response, or maybe any response. Disagreeing with hot takes can be done endlessly. People will keep posting hot takes regardless of what you say.
So why do it? For some of us, disagreeing is addictive.
Well, I'm here on Tildes precisely to try and engage in good faith and not fall into contrarianism. I do want to try and understand different world views and expand my horizons past my personal bubble.
This article, sadly, did not feel very well argued. The opening was promising, but the meat of it was simply raw dismissal instead of reasoning.
Valid. AI crawlers are a real problem, and companies need to do better to be good stewards of the models they're creating. This is a solved problem with other types of crawlers on the internet and I expect to see better behavior after this initial gold rush is over.
What makes you think that this will get better? The costs to crawl are not particularly high compared to the costs of serving the data.
It's a solved problem which the AI companies ignored the solution to in their rush to gobble up as much data as possible before their competitors got there first or regulators could slow them down.
I'm not sure how anyone can be surprised that that leads to resentment, or to a lack of faith that AI companies will ever be motivated to fix their crawlers.
I mean they're not broken.
They're not just ignoring robots.txt, they're actively engaged in an arms race against being stopped from crawling. People have shown that they are spoofing user agents and coming from distributed IPs specifically to get around throttling measures.
The most important part to point out though is that this isn't "rogue AI" or anything so ridiculous or Sci-fi. It's people programming the crawlers to do those things because they want the data. It's textbook "Computer Fraud and Abuse Act" stuff that's just not being enforced because "AI might be important and the CEOs with the most vested interest said they need this".
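For contrast, honoring robots.txt takes only a few lines; here's a minimal sketch using Python's standard library (the URL and user agent are made-up examples):

```python
from urllib import robotparser

# Minimal sketch of what a well-behaved crawler does before fetching a page.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

url = "https://example.com/some/page"
if rp.can_fetch("ExampleBot/1.0", url):
    print("allowed to fetch", url)
else:
    print("disallowed by robots.txt, skipping", url)
```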
I’m not sure it ever was enforced very well? Anyone who has looked at http server logs knows that there have been weird user agents all along, though at a lower rate.
Enforcement across international borders is non-trivial.
Enforced, no, not legally anyway. But it was generally "bad form". For decades, big companies and standards organizations said "don't do this", and almost everyone behaved, because no one wants their web hosting rates to spike because someone just repeatedly grabs their whole site, over and over and over. Ads started because they were a dynamic way to generate revenue: if your site got popular overnight, you could use the boost in views to directly cover the increased hosting costs.
Until the last few years there was no reason for the mass crawling we're seeing now. There were bad actors and the occasional mad-scientist archivists, but not an organized effort with billion-dollar companies behind it to just ignore all engineering norms in pursuit of profit.
User agent spoofing isn't malicious in the same way phone call forwarding isn't. There are legitimate use cases for each, like convincing a site you can display the data you're requesting or seamlessly routing phone calls internally/externally. The problem is when someone uses that benign functionality maliciously.
It’s been a trend since the early days of the Internet that security requirements have gone up as activity by bad actors increased. Denial of service attacks aren’t new. Botnets aren’t new. A lot of websites have to hide behind Cloudflare or similar.
This is a new pattern of behavior, though. Denial of service attacks didn’t used to look like crawlers.
How much do we really know about who is behind these bots?
Like with combatting spam, there are multiple ways to go about it that are partially effective. Sometimes the cost of serving the data can be made very low via caching and content networks. Sometimes crawlers can be blocked.
It's hard to say which way it will go, but I think it's reasonable to believe that, like spam, it will always be with us, but might be suppressed enough that it's not a big problem day-to-day.
Kids absolutely need to learn fundamentals without outsourcing thinking to LLMs, but they don't learn it writing five-paragraph essays.
...Is the author suggesting short essays should be kicked out of the school curriculum? (There was a link to a podcast, but no transcript was provided, so I can't confirm if this is indeed the point.)
Five paragraph essays are, in my opinion, very reasonable. That's just long enough to require a bit of research, which then paves the way for using that information to come to conclusions. Essays of this size have perhaps contributed the most to my learning.
I'm just astonished by how many people make bold claims about how we should turn the education system upside down because of what ChatGPT can do. Sure, math teachers of past times making claims about how "you won't always have a calculator in your pocket" were very wrong. But that doesn't mean the ability to perform basic mental math is useless.
And essays are quite similar to educated discussions. If someone asks you which historical figure you find the most interesting, are you going to reply with "uhh hold on" and then ask ChatGPT? (There's probably a lot of better examples but I think that gets my point across.)
It's weirdly hard to find a transcript for that relatively popular podcast. The takeaway from it is that five-paragraph essays are taking away students' ability to creatively express an idea and are stuffing it into this standardized formula of an essay. It's asking students to be more machine-like.
I'm reminded of doing piano exercises. Sometimes relatively rigid training helps you learn the skills that you can later use for self-expression. The key will be putting the exercise into context, because grinding at getting good at an exercise can certainly turn people off, too.
five-paragraph essays are taking away students' ability to creatively express an idea and are stuffing it into this standardized formula of an essay. It's asking students to be more machine-like.
Thanks for sharing the summary. Genuine question without an assumption before I respond further (to everyone/anyone, not just you) - do you agree with this?
Setting bounds on assignments is actually a good thing. Constraints exist in the real world in various formats. A manager may tell an employee they need some visual reports from last month's sales data, and that they have to do it in Excel. The employee (likely) isn't going to be able to convince the manager/company that they need to purchase a multi-thousand-dollar piece of software that does it better. They probably also won't be able to convince IT to let them install python so they can do it more flexibly with code.
These five-paragraph essays are also likely part of a longer plan to prepare students for higher education.
I agree with it to an extent. I think a lot of kids need that structure, but when someone who doesn't is being forced to fit in this box, to the exclusion of being allowed to express their reasoning in a different (maybe better!) way... that's unfortunate. I like your point that restrictions can also breed creativity.
When I was studying for the SAT II, I learned a highly structured template for writing the essay portion. At the time, I felt like it was stupid and robotic, but after a few years of reading emails in the corporate world I started wishing that more people had learned to do something like this. I can't tell you how many emails I've read where the ask is not clear, the supporting info is not there, or the structure is so bad that it's impossible to decipher the meaning. Lots of people suck at communicating, especially the ones who think they're great at it. It shouldn't be the only thing you learn, but learning how to write using a strict template is a valuable life skill.
If you can't see the value of LLMs at all at this point, you're holding it wrong.
I like the way the author writes this article. I am also frustrated by the AI dismissiveness. In my opinion, anyone saying AI has no benefits is just as wrong as someone saying it has no drawbacks. The arguments of both people should be subject to scrutiny.
Do you see a lot of people suggesting that there are "zero benefits"? Most of the criticisms that I see are of the "bad benefit to cost/problem ratio" type, but maybe we frequent different circles of the internet.
I’ve seen a fair amount of discourse that could be interpreted as claiming AI brings no upside, but almost all of it is an abbreviated form of, “the benefits are not worth the costs”.
This is probably at least partially driven by short-form communication having taken over, which lends itself to misinterpretation, but it's also coming from a sense that "this is so obvious it doesn't need to be mentioned". To many, AI very much looks like a "step over a dollar to pick up a dime" sort of situation in terms of cost:benefit ratio, which makes the problems with it feel extremely obvious.
It seems pretty common in my experience, even here on tildes. The most common aspect I see is people saying that programmer assistance AI isn’t useful at all. Often it includes a good helping of no true Scotsman fallacy (real programmers don’t get any benefit from AIs).
Eh, I just think they don't produce good code. They add dependencies you don't need, they write code with bad style, they get confused really quickly (using Claude Code / aider + Sonnet 3.7), and I'd rather engage in thinking about the problem than trying to convince some agent to solve it for me.
Programming is mostly the exercise of crafting working theories of operation and all I see is AI softening my skills in that regard while not being trustworthy. I keep using it in the hopes I get better usage out of it, but I'm not that hopeful.
I've had a better time discussing projects directly with tools like ChatGPT than I have with things like Copilot. Granted, I'm working on machine learning projects, so most of the work is figuring out the best choices to make with architecture and data processing, and these models are really good as a partner to "discuss" these types of problems with. The code implementations I've gotten from it have been good but not perfect; with me carefully checking the output and prompting it to change mistakes, I can generally get pretty okay results. A lot of the issues with AI code quality come from a lack of attentiveness from the human part of the pipeline when it comes to spotting mistakes.
My domain is mostly firmware / async rust / async scala / netty / high performance iot pipelines. I'm not really looking for AI guidance, I'm mostly looking to avoid typing.
I generally share your opinions on them. Have you tried any of the tools for generating tests and test cases? That's one area where I've found the autocomplete functionality very handy.
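For a concrete sketch of the kind of thing I mean (the function here is invented, not from any real project): write the first assertion yourself and the completion usually fills in the remaining variations.

    fn celsius_to_fahrenheit(c: f64) -> f64 {
        c * 9.0 / 5.0 + 32.0
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        // Repetitive case tables like this are where autocomplete shines;
        // these three points are exact in f64, so assert_eq! is safe here.
        #[test]
        fn converts_known_points() {
            assert_eq!(celsius_to_fahrenheit(0.0), 32.0);
            assert_eq!(celsius_to_fahrenheit(100.0), 212.0);
            assert_eq!(celsius_to_fahrenheit(-40.0), -40.0);
        }
    }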
I haven't tried tests, but I have found aider in particular to be good at "here are the docs for this json / dbus gvariant / something similar, I want a bitflags instance", or some other serialisation boilerplate. But honestly, I have spent long enough with multiple cursors that it's not that much of a saving vs copy-pasting the docs and editing them myself? Still, fewer keystrokes are fewer keystrokes and my fingers are better off for it :D
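To give a concrete sketch of what I mean (the flag names here are invented, but this is the shape of thing I'd ask it to transcribe from a docs table, using the bitflags crate):

    use bitflags::bitflags;

    bitflags! {
        // Hypothetical capabilities copied out of a docs table -- exactly
        // the mechanical transcription I'd rather not type by hand.
        #[derive(Debug, Clone, Copy, PartialEq, Eq)]
        pub struct DeviceCaps: u32 {
            const PAIRABLE     = 1 << 0;
            const DISCOVERABLE = 1 << 1;
            const CONNECTED    = 1 << 2;
        }
    }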
The most common aspect I see is people saying that programmer assistance AI isn’t useful at all.
When I used Copilot in neovim for a bit, it mostly seemed like a glorified LSP with snippets. I don't think it was super amazing, but it was helpful with boilerplate sometimes. At the same time, I don't think the fact that they're stealing from loads of other people's work, or that they're using immense amounts of energy and creating emissions, is worth that.
Also, that seems like people saying they don't think it's useful for one specific task, i.e. not saying it has "no benefits" at all.
I've been making a conscious effort to use it more lately so I can get that perspective, and honestly I'm not terribly impressed. Or, I am very impressed, but with some weighty qualifiers on that.
At boilerplate, it's pretty good. It's occasionally easier to describe a generic block or structure and let it be generated. It's quite good at picking up on repetitive-but-not-identical lines. It often flips a sign incorrectly or grabs the wrong variable, but so far not having to chisel out a dozen variations of var = p1 + r1 * sin(r1-r2) has been the largest benefit.
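As a sketch of the shape I mean (all names invented), this is the kind of block where the model picks up the pattern after the first line or two:

    // Invented geometry example: repetitive-but-not-identical lines where
    // an occasionally flipped sign or swapped variable still needs checking.
    fn ring_offsets(cx: f64, cy: f64, r1: f64, r2: f64, a1: f64, a2: f64, a3: f64) -> [(f64, f64); 2] {
        let x1 = cx + r1 * (a1 - a2).sin();
        let y1 = cy + r1 * (a1 - a2).cos();
        let x2 = cx + r2 * (a2 - a3).sin();
        let y2 = cy + r2 * (a2 - a3).cos();
        [(x1, y1), (x2, y2)]
    }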
It seems abysmal at going any higher-level than that, no matter how much handholding, what model, or how much thoughtfully-pruned context. When working on existing codebases, it totally fails unless "the first thing you think of without any specific knowledge of the codebase" also happens to work in context. Even when explicitly given contrary constraints, it just can't rip itself out of that rut.
For what it is, statistically shrugging at text, it's astounding it's gotten this far. But if you're not writing some standard SaaS CRUD, even the newest, largest models very quickly show their limitations. And given that I can run a mini-model that can handle repetitive line completion even on 8GB of RAM with no hardware acceleration, (for this narrow use case) it's not clear what the value of the big boys is.
I personally found it worse than LSP - regularly in large Java projects it would spit out completions with incorrect argument counts and types, which is table-stakes in the Java world.
I do have relatively limited access to models at work because of security policy, but I have been unimpressed whenever I've played with what we have. I'm not really interested in nannying an LLM when I have other tools that work perfectly fine.
I am leading an AI learning program at our small company, lots of non-technical people, and I hear a lot of negatives about it. Probably about 30% of the people I interact with have a pretty hard "this thing is useless" stance. The rest are either really excited or aren't quite sure what's going on and have minimal complaints. I don't try to overhype it, I am very realistic about what it's good and bad at.
I'm going to focus on the technology itself and not the cult of personality surrounding the people and companies working on the technology.
Well, you dismiss a good 80% of the reason I really hate current AI and why it's actually bad. I'm fine with experimental tools; I'm not fine with pretending they're ready to do the stuff they're being sold as ready for right now. That's entirely driven by the cult who just wants to hype things up for investor money.
As for the 20%...
"It's stealing but also really cool" is a pretty concise way to state it.
I don't even know how to respond to this. When has the rule of cool ever been used to dismiss crimes?
they all devolve into specifics of the strawman that don't really apply
How do the specifics of how LLMs work not apply to how they are stealing? Yes, storing copies of other people's stuff on your server without permission is "stealing" in the modern sense.
There's been attribution problems on the internet since the beginning.
And AI is making it worse. Why do we ignore that? Oh yea, "it's really cool".
The CD example in the beginning didn't need to steal from encyclopedias to do its thing. It's not good when an LLM needs to.
Instead of attacking the tool, focus on the output from the tool.
"don't attack the gun, attack the bullet" okay. That is certainly one way to dismiss literal thievery.
Even the early rulings already decided the output isn't worth any copyright. It's the input that really matters for artists.
I think a big part of it is that we all grew up being pretty dismissive of copyright. We downloaded songs off of Napster, pirated movies and games, and generally accepted that even if you weren't really supposed to do it, we all were going to copy that floppy. My whole teenage and young adult life pretty firmly set up the idea that this isn't really that big a deal.
As an adult with more money than time, I now buy my games and pay for a few streaming platforms, but I still treat copyright as more of a suggestion. If something I want isn't somewhere convenient to me, I'm just going to go ahead and download it. I'm not saying it's right, but it's the way a lot of people live.
So then when you tell me that we shouldn't use this cool new technology because it looked at someone's pictures when it wasn't supposed to, I kind of feel the same way. I get why artists don't like it, and I get the general idea that some level of copyright is important, but it doesn't feel like such a big deal that we should just boycott AI or freak out every time someone makes a picture with it.
I think there's a pretty big gulf between "piracy for personal use" and "piracy for profit." It's generally a bannable offense on private trackers to use content from their collections for profit.
So then when you tell me that we shouldn't use this cool new technology because it looked at someone's pictures when it wasn't supposed to, I kind of feel the same way.
It's not that it looked at someone's pictures when it wasn't supposed to, it's that these companies are making insane amounts of money using these artists' work without their consent, while simultaneously putting these artists out of work. Like, I am entirely in favor of violating copyright, but if the sites I use to watch anime were actively putting anime studios out of business, I would be against using them too.
I think a big part of it is that we all grew up being pretty dismissive of copyright.
I recall from the old ads "You wouldn't steal a car," the retort was always "Yea but if I could download a car for free while not depriving anybody else of a car, you bet I would."
I think a large number of us grew up in an era where we realized that the internet enabled post-scarcity. Create once, share infinitely.
It's not just incompatible with copyright: it's completely antithetical to the entirety of all prior physical economics. If we only needed a team of skilled engineers to design a car, and could then 3D print them for $50, we'd use tax dollars to pay some highly skilled engineers to improve it and then let everyone have cars at cost.
But the powers that be didn't like that kind of disruption and thus spend billions of dollars to hold a post-scarcity medium on a leash, making it heel to the economics of old.
Pirates still have post-scarcity entertainment: they are only constrained by the time to ingest it. We could all have that, legally, but it requires being willing to let the old system die and letting a new one arise from the ashes.
Edit: Also humans have never respected copyright. I have in my house reel-to-reel tapes from the 50s with bootleg Sinatra.
That makes sense, but there are two aspects that make the situation very different.
Who's doing the stealing: few will complain if some single person downloads a song for private use. It is stealing, but the impact is minimal. It's when you try to resell stuff that isn't yours that things get really thorny and very immoral.
Nowadays, the people doing that are both big corporations (ones who can definitely afford to pay licenses) and many private individuals with very much profit-driven motives. Some are trying to outright steal likenesses to get views, and I hope we all agree that goes way past any line of "copyright isn't a big deal". You may actually be harming personalities and impacting their careers over lies.
The targets: the people subject to pirating most often tended to be very popular artists who are already well off, or record labels worth billions. These people aren't suffering from a few pirated copies.
Now, the targets have inverted to disproportionately hurt smaller artists, ones trying to kickstart careers. And their careers may never kickstart if others instead steal their work and the algorithms reward that over quality. That's how you kill an industry.
it doesn't feel like such a big deal that we should just boycott AI or freak out every time someone makes a picture with it.
Well, I don't punch down, and I do punch up. As long as the ones benefitting are the same ones trying to kill off being an artist (and more) as a career, of course I'll go down swinging, at worst. Likewise, I won't exploit my peers; I'll learn art the way they did instead of resorting to stealing their efforts. My whole goal was to help support artists, after all.
I think a big part of it is that we all grew up being pretty dismissive of copyright.
Arguably the extent of US copyright law is pretty insane though. If it had stuff copyrighted for 30 years after creation, so that the original creator can draw value from it within a reasonable timeframe, that would be fine. But almost a century after their death, with various options for corporations to evergreen certain aspects of copyright? Also a law that allows companies to enforce arbitrary copyright claims without any sort of due process before enforcement? That's just beyond unreasonable.
At least that's one thing that's potentially subject to change if Trump continues his current path, given that the only reason it's enforceable internationally at the moment is the US soft power he's currently throwing away.
People have already commented on the actual contents of the article here, so I just want to say that having an AI-generated logo is an instant 90% decrease in trustworthiness for me. I've read multiple posts on @mattsayar's blog, and every single time I find myself initially biased against them, even if I end up actually agreeing with them, just because of the AI-generated logo.
I'm not even an AI art hater, but people actually using AI art for commercial/branding purposes is crazy to me. It's like, OK, you fancy the logo that the AI came up with, but you couldn't have spent a literal 10 minutes just tracing it in Inkscape or Photoshop to make sure that the straight lines are actually straight? It always looks so lazy and ugly, and feels like you're on some random scam website. The craziest thing is that I've seen even huge multi-million-dollar companies use AI-generated images for their advertising.
This is not meant to be a dig on OP, in fact, as I said, I like and agree with many of their posts. But I just really don't understand the usage of an AI-generated logo on a website.
I've said it before, generative AI is digital plastic. Whatever good uses it might have, 95% of its use by volume is going to be churning out mass produced slop at scale, so it's inevitably going to feel cheap to people.
Oh yeah that's rough. I always hand vectorize any AI images I put into projects I work on. It gives you an opportunity to clean up all of the artifacts, adjust design elements, and leaves people at least unsure of whether it's AI generated. For those that are sure it's from AI I assume they respect the minimal effort I make.
SVG favicons are also supported everywhere, except Safari (as usual). But to save an HTTP request elsewhere, you can serve both assets.
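For example (the /logo.svg path here is just an assumed name for the SVG asset; the sizes="any" on the .ico line is what keeps browsers that understand SVG icons from preferring the bitmap):

    <link rel="icon" href="/logo.ico" sizes="any">
    <link rel="icon" href="/logo.svg" type="image/svg+xml">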
Also, you can 100% just rename a .png to .ico and browsers will understand it. They care more about file headers than file extensions. This is ideal, since native .ico are just repackaged bitmap images and are very inefficient.
Thanks for calling me out on the logo. That is a long-overdue-to-be-replaced placeholder. Now I have to actually put thought into what I want it to be...
I was surprised not to read any counterpoint to the argument that LLMs are very resource intensive. They consume much more electricity to use than normal web queries (even if they are more efficient, it's surely not efficient to have an AI summary at the top of every search result). And they are driving a rapid increase in both the number and size of data centres, consuming a lot of gains in renewable energy production so that that energy can't replace fossil fuels. Of course we increase our energy use all the time but the electricity cost of LLMs feels more than irresponsible to me.
Personally, I also find it hard to dismiss the "hallucination" point... if I'm going to use an LLM to summarise and present information for me, then I need to be able to trust that it will be right where I can't tell when it's wrong, and I don't.
Just a note that hallucinations aren't really an issue with summation, since the data is being placed directly into the context window. Hallucinations are much more common when sourcing from poorly-sampled data in the model itself. It's like the difference between recalling something you read just moments ago vs describing a half-remembered dream.
Summarizing, and interrogating an LLM about a piece of content, are actually very good uses for the technology.
Unfortunately having good information in the context can't hold them back if they're dedicated. When (trying to) use them for programming, I always summarize all relevant portions of the relevant APIs in the context, including usage examples for each. On occasion I try giving it a mathematical formula, and ask it to implement it with the given API, but I still have it hallucinate functions with the names and signatures it wants rather than those that exist and are actually present in its context.
Admittedly, that isn't exactly summarization, but there is a deep well of instances of search engines attributing information to articles that definitely isn't present and Apple Intelligence's notification summaries telling tall tales that don't exist in the underlying messages.
I see a lot of problems with the current, popular AI discourse. I wrote about where I find fault in the discourse about AI capabilities here. But there's more I take issue with.
This comment will mostly focus on the common ethical arguments against AI. I could also talk about AI hype (e.g., how despite huge business investment and apparent enthusiasm for AI, it doesn't seem to be increasing productivity or profitability), but it seems like most Tildes users already believe that AI is overhyped.
1. The anti-AI art narrative seems to contain a contradiction
The discourse about AI-generated art is confusing. The detractors of AI-generated art make two claims that seem incompatible (or at least in tension with each other):
AI-generated art is terrible in quality, and obviously so to anyone who looks at it.
AI-generated art is displacing human-generated art in the market and costing human artists revenue.
I agree with (1). As for (2), I want to see data that supports this claim. I've looked for it and I haven't been able to find much data.
What nags at me most is that (1) and (2) seem to be incompatible. If AI-generated art is so terrible, why do consumers putatively prefer it? And if consumers don't prefer it, how could it be displacing human labour in creating art? How can these two claims, which are often made by the same people, be reconciled?
What seems to me most likely to be true is that AI art sucks and because it sucks, there is a marginal market for it, and there's very little displacement of human artists' labour.
2. Talking about how much electricity AI uses seems like it's just a proxy for talking about how useful AI is
I'm skeptical about environmentalist arguments against AI. I'm skeptical because I've tried to find hard data on how much electricity AI consumes and I can't find strong support for the idea that an individual consumer using an LLM uses a lot of electricity when compared to things like using a computer, playing a video game, keeping some LED lightbulbs turned on, running a dishwasher, etc.
The predictable rejoinder is "those other things have some utility, while AI doesn't". If that's what this debate comes down to, then the environmentalist stuff is just a proxy argument for the argument about whether AI is useful or not. If you thought AI were useful, you probably wouldn't object to it using a modest amount of electricity on a per consumer basis. If you don't think it's useful, even if it consumed zero electricity, you would still have other reasons to oppose it. So, it seems like nobody's opinion about AI actually depends on the energy usage of AI.
I also dislike how much discourse about energy in general is focused on promoting energy conservation rather than promoting increased production of sustainable energy when the latter is far more important for mitigating climate change and also benefits people economically (whereas energy conservation, if anything, harms people economically).
3. AI and copyright
A lot of people assert that AI models "steal" training data or that training on copyrighted text or images amounts to "plagiarism" or "copyright infringement". Two things that bother me about this sort of assertion:
It's not obvious what constitutes "theft" in the context of training AI models. This is an unprecedented situation and I don't see people trying to justify why their non-obvious interpretation of "theft" is correct. Humans are allowed to consume as much text and as many images as they can in order to produce new text and images. If we treated AI models like humans in this respect, then this would not be theft. I don't think it's obvious we should treat AI models like humans in this respect. I don't know exactly what we should do. Why does it seem like people are not engaging with the complexity and ambiguity of this issue? Why does it seem like people are asserting that it's theft without a supporting argument, as if it should be obvious, when it's really not obvious whether it's theft or not?
The people who are angry about AI allegedly infringing copyright seem mostly indifferent to or supportive of media piracy. I don't understand why the zeal against AI exists, especially when AI is a more ambiguous case with regard to copyright, while there isn't any zeal against piracy, which is such a clear-cut instance of copyright infringement. Being anti-AI and pro-piracy (or neutral on piracy) aren't necessarily inconsistent positions, but I haven't seen many attempts to reconcile these positions.
Is this a symptom of people feeling uncomfortable with ambiguity and uncertainty and attempting to resolve the discomfort by rushing to angry, confident opinions?
4. General properties of the discourse that I don't like
Some of the general things that bother me about the AI discourse are:
Strong factual claims, e.g., about AI displacing artist labour and AI using a lot of energy, without clear supporting data.
Apparent tensions or contradictions that aren't resolved; obvious questions or objections that go unanswered.
Opinions so strongly held against AI that it is sometimes said or implied that no reasonable disagreement with an anti-AI stance could possibly exist and that people who use or defend AI are clearly doing something severely unethical and maybe should even be ostracized on this basis. Wow.
I take seriously the possibility that generative AI isn't actually that important or impactful (at least for now and in terms of what's foreseeable over the next few years), and that it's not really worth this much attention. This is a boring, possibly engagement-nullifying opinion, which might make it memetically disadvantaged on the Internet. But maybe also some people would find this idea refreshing!
The polarization isn't just on one side. In a way, both sides might be overrating how impactful AI is, with anti-AI people seeing the impact as highly net negative and the pro-AI people seeing the impact as highly net positive. I don't see AI as a credible threat to artists, the environment, or copyright law and I also don't see AI as a driver of economic productivity or firm/industry profitability. I think LLMs' actually good use cases are pretty limited and I definitely don't see generative AI as "revolutionary" or worth the amount of hype it has been receiving in the tech industry or in other industries where businesses have been eager to integrate AI.
AI-generated art is terrible in quality, and obviously so to anyone who looks at it.
I see this repeated often, but I think that AI-generated art is like plastic surgery: you only notice it when it's done badly. The latest models can generate images that are virtually indistinguishable from the work of good artists and illustrators. A prompter with good taste can get fantastic results.
The anti-AI art narrative seems to contain a contradiction
Not at all. It's cheaper, and for some people that's all that matters. It's also good at convincing ignorant middle managers to eliminate jobs and heap additional, more tedious, less interesting work on their existing employees despite the quality issue. It's not a contradiction, these issues stem from different sources.
It's also led to severe distortions of expectations and devaluing of the effort people put into art. Artists get told "this took you 14 hours? smh AI could do better in 10 seconds." In shirt/sign/print shops, someone bringing in a screenshot of a photo in text messages that their grandson sent to them is not new. We digitize things for people all the time. But AI is increasing the frequency of these events, and it's also trashing their expectations for turnaround. We've told people that their design is not suitable for print and would have to be recreated from scratch and had them ask moments later in the same conversation if they could see it already. People are impatient, but it's new to have people expecting that we toss it at the machine and it'll be done in a few minutes like a microwave burrito.
I guess your argument is that the masses have poor taste and will accept low-quality art? Is there data that supports the idea that this is happening at scale, i.e., there is some statistically measurable displacement of paid human artistic labour by AI art generation?
To my knowledge nobody has attempted to collect any data, the anecdotes are just deafening from every corner of my art and graphic design networks. Everyone's department is downsizing, not growing.
I don't know much actually about how the masses are accepting it, only that clients and managers are using it and demanding its use.
It's not the masses, it's the employers - I agree I don't have data but I can point at Disney's Fantastic Four marketing art and how absolutely awful the AI art is. It doesn't look like an actual artist touched it. And if big corporations go that route because it's cheaper... Someone has to pay artists for art. "The masses" aren't usually the direct purchasers of that product.
I'm not sure I understand your intended meaning. If consumers don't consume AI art, there is no market for it.
Also, the comment you replied to was replying to Diff's comment, and in that comment, Diff wasn't talking about large corporations making popular movies. They were (I thought) talking about individual customers who have a direct, one-to-one relationship with an artist or a small business producing art at small scale. So, that was about individual consumer choice. That was about "the masses" directly purchasing products.
I would appreciate it if you didn't reply to my comments in the future.
I would appreciate it if you didn't reply to my comments in the future.
I don't think that's in the spirit of the site, or a reasonable ask. If somebody is being respectful, as @DefinitelyNotAFae is, then we have no individual right to prevent others from sharing their thoughts alongside us.
I agree that if someone has been respectful to you then it’s not polite to ask them not to engage with you. If someone has been disrespectful to you, then I guess your only two options are to ask them to stop engaging with you or quit the site. I’m trying the first option before I quit as a last resort.
I think it’s a good idea for social sites/apps to have a block feature, since it lets people maintain their own boundaries without quitting the site/app altogether.
I think that insisting that a specific, well-behaved user never reply to you like this on a social media site like Tildes is more disrespectful than any comment DefinitelyNotAFae has made, in this thread or otherwise, and is overall an attitude that would be harmful to the community this site tries to create if it spread beyond one user.
If you think a user is being sufficiently disrespectful to you here on Tildes, use the site's labels to report them for malice. That's seen directly by Deimos, the admin of the site, and he does take action to delete comments and nuke threads when things get out of hand. But I suspect he, like me, would not consider the comments DefinitelyNotAFae has left here to be disrespectful the way you seem to.
I disagree with your analysis of the situation.
I would like to be granted the autonomy to maintain my own boundaries. That’s why I think just having a block feature is a good solution for most social media sites/apps.
If it’s going to turn out that trying to maintain my boundaries isn’t possible, or isn’t possible without some onerous level of stress and effort (e.g. spending hours justifying myself to strangers who have no investment in my well-being), then unfortunately the best option for me is to just quit the site.
I already took a break from the site for about a month after having a bad experience. I am not one of these people who relishes conflict or who wants to get into it with strangers online. Very much the opposite.
I just want to have the ability to stop interacting with someone if I have a bad experience with them. I see no utility in getting into protracted conflict with strangers online. It’s hard enough to resolve conflict with people you know and love, let alone with strangers where you have no established trust, rapport, affinity, or common ground. Why would that be a good use of my very limited resources?
Tildes.net is not "most social media sites": it's an invite-only community for people who want to discuss in good faith and escape the contentious argumentativeness that has taken over most platforms. While this thread is more polarized than most on this community, your contributions come across as grating and dismissive, not a good fit for our culture.
Your account is merely two months old, and it can take a moment to shed the toxicity we learned on other platforms and adopt this culture, but it's not a good fit for everyone. And this is fine; this community is not about userbase growth, it's about nurturing something: https://docs.tildes.net/philosophy
This is super rude and unkind. I don't know why you thought this would be constructive — maybe you didn't think it would be constructive and said it anyway, I don't know.
If it’s going to turn out that trying to maintain my boundaries isn’t possible, or isn’t possible without some onerous level of stress and effort
I think you're expecting others to do more than you're willing to. This site doesn't have a blocking feature because it has pretty firm moderation. Firm moderation establishes healthy boundaries here, but if they're not enough for you, I think not participating may be the only reasonable other option that gives you what you want without making Tildes users you don't like feel personally attacked for arbitrary reasons.
My personal boundaries are not the consensus boundaries of the community — and that's as it should be. Just because I've decided I don't want to interact with someone doesn't mean they should be banned from the site. By analogy, there are people in my local community who I've decided I don't want to talk to, but I would never advocate for them to be excluded from IRL or online community spaces. There is a difference between "I don't personally want to interact with this person for reasons that are important to me" and "I think this person should be ostracized from this community". If you're saying that the only good reasons to not want to interact with someone are because they've done something that warrants censure or ostracism from the community at large, I don't know what to tell you, you're just wrong — and you probably don't even actually believe that.
I had to look them up because this was weird, and apparently they don't like me from a previous post where they also asked me never to speak with them again lest they quit the site, and I, therefore, didn't reply to their assumptions about my posts.
I respected their boundaries in both circumstances once I was aware of them, but noticing a username in a thread before I respond and remembering the details of that username on top of it, when we've interacted... twice basically? three times technically? Meh. I'm not trying to antagonize anyone, though personally it's quite shitty feeling to be told interacting with you is bad enough to make them quit, but I've been on the internet before. 乁( ⁰͡ Ĺ̯ ⁰͡ ) ㄏ
Diff was talking about middle managers. Those middle managers and businesses who contract with artists are definitely the ones judging turnaround time and devaluing art across the board. And Diff was replying to your presumed contradiction which is what I'm addressing.
ETA: I have no idea who you are and didn't see your final line until after I replied, if you don't want to interact with me, please just don't reply to me, as I have no way to tag you on my app.
Edit on 4/23/25 - it has been incredibly weird to have someone go on this much because they didn't like interacting with you. In attempting to respect the boundaries of this user, I've not been able to defend myself from their original misinterpretation of my posts, and they've spent ten days seeking ways to block me, including making a separate post and writing paragraphs about it. I'm mostly writing this for posterity, so that I have a fucking clue what this is about when we accidentally cross paths in the future and I once again have to hear about how awful it is to interact with me.
I'm not going to get into the disagreement, but I will say that one thing that might help you would be the Tildes Reextended plugin for your browser. You can use it to tag usernames. While it won't block, tagging a user would enable you to see that a comment is from them and choose to ignore it.
I will say that when I left the site a year or two ago, it was because there were some users who were - politely - posting things I disagreed with to the point of not being able to refrain from replying. It did cause me to take a break.
I haven't had that problem since returning, but I have strategy ideas on how to handle it. And in fact, there was a topic recently where someone expressed some opinions that angered me. I was able to ignore the topic and walk away.
So I have some sympathy for your general situation.
I think the nature of the design and intent for how Tildes operates doesn't allow for user blocking. I understand that philosophy. But it means that when you find yourself in a position of needing that functionality, you just have to find another way.
Thanks. I've tried that extension and I found the tagging username feature to be better than nothing, but ultimately not what I was looking for.
I really don't like the idea that if you want to block someone, it's your problem. Blocking someone doesn't necessarily imply a strong moral condemnation or a belief that they should be punished. Like you described in your example, sometimes some people just express opinions that bother you, and you might just not want to engage with those opinions. Or you might just find that someone is too rude for you to want to interact with them further, while not being so rude that it warrants moderator action against them. There are many, many more examples like that.
The line that the admin/moderator/community is going to draw for what's acceptable behaviour or what's an acceptable opinion is going to be different from my line. Perhaps more importantly, the line I draw for who and what I want to engage with personally is going to be very different from the line I would draw if I were acting in the capacity of moderator. There are many instances in my experience moderating other communities where I have extended leagues more patience and tolerance for people when I was acting as a moderator than I would ever extend if I were interacting with those people in a personal capacity — and I think that's as it should be. As a moderator, you have a much greater obligation to be impartial, lenient, measured, patient, and so on than you have as an ordinary person having personal interactions.
To say that personal boundaries = community rules, and to rebuke anyone who tries to have personal boundaries that are stricter than (or just different from) the community rules, is, I think, just unfair, unreasonable, unrealistic, and unkind. That feels really unhealthy and unwise to me. I don't want things that are personal to me, and that I feel should be up to me, to have to go through approval from a committee, and a community that requires that seems conducive to many bad outcomes, including bullying and emotional abuse. I really don't like this way of doing things. I also really don't like how mean a few users have been to me in the comments on this post.
This will be the last comment I make on Tildes for at least the next 6 months. I am going to deactivate (or pseudo-deactivate) my account using the steps I described here.
I wish you well and wish everyone else on Tildes well too. And I really mean that and I'm not saying it passive-aggressively. I am unhappy with my experience here and have some grievances (as I just described at length), but I believe in peace above all else and I believe (as much for the well-being of my soul as for the benefit of the world) in wanting good things for people even if you've had conflict with them, even if you've cut off ties, even if you have untenable disagreements, even if they've hurt you or wronged you. So, I genuinely wish everyone on Tildes well.
AI-generated art is terrible in quality, and obviously so to anyone who looks at it.
This may be true in 90% of cases, but I don't believe it's inherent to the method, it's more related to the people using it.
Have you tried the AI Turing test, a set of handpicked real and AI images? The selection deliberately avoids a few of the obvious AI visual styles, like what Dall-E does whenever you request something "realistic", and tries to focus on styles that imitate real art, historic and modern, to make the comparisons fair.
The median score for correctly recognizing what's real and what's AI among readers of the author's blog was 60%. I think I had about 85% correct, which is apparently quite uncommon, probably because I have both real interest in art and many hours spent playing with AI image generators.
This was art specifically. Many freelance illustrators make money drawing commissions, like specific existing characters in new situations, avatars for people in a certain style, a lot of porn, etc. This is imo more of a craft than art, and it's just a matter of time before AI does some part of it well enough to effectively take away work, or significantly increase the productivity of those who adopt those tools and change the market that way - I bet both are happening already.
Another job for illustrators is doing concept art, storyboarding etc. - things that are part of the process for something bigger and only used internally. This is an area where AI image generation was adopted pretty much instantly - people were generating images with the first version of Midjourney, which was not realistic at all, and then drawing over them, because it gave them interesting ideas and made their work easier. Nowadays things like storyboards are often generated purely with AI in areas like advertising.
Imo those two points you mention are not necessarily contradictions. Part of the explanation may be that making money doing art is a very broad category that contains different jobs, some of which are already being reduced due to AI and others which are not. Another part may be that AI creates AI slop in some situations and not in others, and this may depend both on the style chosen and on the skill of the prompter (even if the skill is just "having some taste and correctly deciding which images to keep and which to throw out"). And angry illustrators who do some of the less creative work that is more at risk are not exactly rushing to admit that their jobs really do seem like they can be partially automated away quite soon, because their work is not that special despite being in the category of art.
This is an interesting perspective. Thank you.
I briefly looked through about the first half of the examples in the "AI Art Turing Test". A lot of the pieces are abstract, weird, fantastical, and have non-Euclidean geometry or don't attempt to show perspective in a realistic way. That makes it particularly hard to judge.
I also saw a few examples, particularly the cartoony images of porcelain women, that I find ugly and low-quality, but I don't doubt they could have been made by humans. Sometimes I wonder if part of the reason diffusion models like DALL-E and Midjourney output art that looks bad is that they're trained on a lot of art from DeviantArt or Tumblr or wherever that is bad. It makes sense that most of the drawings on the Internet would be made by people who have closer to beginner-level skill than expert-level skill, just like how most fanfiction is closer to a "14-year-old who has never written anything before" level of quality than an "experienced writer who could realistically get a publishing deal" level of quality.
I also think of this post about LLMs generating short fiction. The author's view is that LLMs are good at generating short stories that look like good writing upon a cursory inspection, but if you scratch the surface, you start to notice how bad it really is.
I worry about the same thing happening with the "AI Art Turing Test". Realistically, how long am I going to spend looking at fifty images? Maybe like ten seconds or less, which is not long enough for my eyes to even take in all the detail in the image? Passable at a glance is not the same thing as good.
If a great piece of art is something you can stand in front of at a museum for an hour and continually appreciate more detail in, then a bad piece of AI art is something that looks impressive for the first 30 seconds you spend looking at it before you notice some messed up, ugly detail.
I worry about the same thing happening with the "AI Art Turing Test". Realistically, how long am I going to spend looking at fifty images? Maybe like ten seconds or less
This is an assumption that I don't think is necessarily correct, given the average reader of ACX, and it wasn't true for me. Some of those really are hard to judge.
If a great piece of art is something you can stand in front of at a museum for an hour and continually appreciate more detail in, then a bad piece of AI art is something that looks impressive for the first 30 seconds you spend looking at it before you notice some messed up, ugly detail.
Counterpoint: while I don't think AI is close to producing art that can be admired for an hour in a museum (but then again, neither are most artists), a lot of AI art is more interesting and artistic than 99% of what online commission artists produce. I mostly enjoy surreal, weird and dreamy styles, and that is where some models have seemed to excel since the beginning. Part of it is probably because the model's imperfection adds to the overall vibe, and part of it may be because it somewhat avoids the low-quality DeviantArt illustration sources in the training data. Often there's a tradeoff - the more photorealistic the models get, the less creative they are and the worse they are at generating weird stuff, but Midjourney seems to be able to keep the balance.
I do think that real art with real value (something that could go to a serious museum) would come when the outputs from the AI mentioned above are used to create something more sophisticated, sort of like "advanced late-stage concept art"; the outputs themselves have inherently less value. But I think they can be really creative and better than most derivative art available online, and some would stand on their own even in the offline world, although not at the "hour at a museum" level.
My wife works in education, and I wanted to expand on this point just a smidge:
However, its [ChatGPT’s] simplicity is surely tempting kids who otherwise wouldn't cheat.
In my wife’s class, her students have an online textbook. The online textbook software has a “handy” “Summarize this chapter” button.
That is horrifying. One of the best ways to learn and understand something is to summarize it yourself. The act of summarizing helps you better retain the information, so my wife’s assignments often included a question asking the students to summarize what they read. This was a homework assignment, so they could summarize as they read the text, but now they can just “click a button” to do it for them.
Sure, maybe summarizing things is going to be a task delegated to LLMs in the future, and it is a skill that will no longer be needed. But it’s also impeding students’ ability to retain information and actually learn the material.
I think this article kind of neglects the downfalls of a lot of the technologies/progress it espouses.
I would say that the advent of the calculator did reduce people's overall number/arithmetic literacy in day-to-day life; lots of people cannot do mental math anymore (making change, comparing the cost of something if the unit price is not available, etc.), and while yes, now we have calculators everywhere so you can do this math to figure it out, it has reduced our intuition with numbers. I still love calculators and would not throw them into a pit of fire, bury my head deep in the sand, and ignore their existence, but I wouldn't encourage over-reliance on them, or act like they had no negative impact on our lives.
From a copyright standpoint, I think people are willfully ignorant or purposefully ignoring that piracy has changed a lot with each iteration of technology. From people literally making physical copies of records to taping things off the radio, and eventually mp3s and beyond, copyright infringement has been around for a long time, but to ignore the changes in: ease, access, and scope, is a bit disingenuous, in my opinion. Saying a lot of these issues are really a parenting issue or other regulatory issue, while valid, is like saying, "if the poor would just save their money, they'd stop being poor", or more accurately, "guns don't kill people, people kill people". It just dismisses the problem by making it someone else's problem.
Copyright infringement and other IP "crimes" are difficult to parse; they are like porn: "I'll know it when I see it". But honestly, will we always know it when we see it? Something about technology has changed where it seems like we have more "photocopier" type things, direct replicas, or things good enough to trick us into accepting them as the real thing (or a disregard for whether something is the "real" thing or not), and this era or area of infringement is... interesting... to say the least. Something in society has changed where we have become over-reliant on external validation and vetting. These two things together make me afraid that we will not always be able to know "it" when we see it.
A related take by Simon Willison that encapsulates my thoughts around LLM critique better: https://simonwillison.net/2024/Dec/31/llms-in-2024/#llms-need-better-criticism
He sure hand waves away the theft of art. My wife is an indie author but still had her books stolen by Facebook to train their garbage.
Stealing other people's stuff to make your thing work is not ethical. Full stop. It's an extreme example, but if you're against Nazi scientists experimenting on people, this is a similar idea.
I can't imagine that analogy doing anything to persuade someone who isn't already convinced. All it does is polarize the discourse. We've spent decades making fun of anti-piracy PSAs on TV. Now copyright infringement is akin to Nazi crimes against humanity? Please step back from this hyperbole.
I'm not saying you're wrong and I'm not trying to invalidate your wife's hard feelings about having her work used in this way. But we really can't go around comparing things cavalierly to Nazi atrocities.
Yeah I think it's genuinely pretty horrifying to compare the theft of any property, much less mere intellectual property, to Nazi atrocities like that. Even if you believe both are unethical, there is an unfathomable gulf in degree between them.
I don't think anyone can honestly make the argument that they're not good at making convincing looking text. I see some random Twitter hot takes that they're bad at that too, but I don't think I've ever seen a serious person write an article or story about that.
The problem is that that's pretty much the only thing they're good at. Granted, writing convincing looking text is a really broad task with a lot of potential applications, but basically all of the anti LLM sentiment I read has argued that they're not good at things other than that, which I basically agree with.
I have a caveat with this sentiment, at least with the current top-of-the-line models in mind. Gemini 2.5 Pro is REALLY good at problem solving. If we argue that what it produces is "convincing looking text", then I would argue that's semantics, because it's convincingly good at solving coding problems, regardless of what we decide to call the outputs.
I use it in my work and it has been a time saver; tasks that would usually take me a few days now take one afternoon.
Edit: to be clear, I’m not talking about vibe coding. The time I save here is time that I would spend on google searches and browsing stack overflow
How does it fare at the "Will Water Freeze at 27F?" question?
Most bots spit out a thesis and non-answer at best or a confident wrong answer at worst.
A friend tried to justify it to me by saying "Well, if you word your prompt precisely as such...", to which I replied "That's not how humans ask questions and expect answers."
Most common models will reply correctly; Gemini 2.5 Pro, for example, replied with the correct answer.
The post shows a screenshot of Gemini's AI in Google Search; I don't know for sure which model that used, but it was very likely Gemini 1.5 Flash, which was notoriously bad. Thus all the memes.
I'll say one thing about this, I'm not trying to say that this model is pure magic and doesn't have the problems that LLM's in general have, nor that you can have normal "human" conversations with it. But in my experience, using an LLM is like using google search, there's a proper way to talk to them to get the answers that you want. Talking with it like it's a person and expecting a deeper understanding from it, is just not gonna happen with today's models.
But if you know how to use them, they're a time saver
Problem solving is an even more broad category than producing convincing looking text, but there's also significant overlap there. I would say "producing convincing looking text" much more closely aligns with the strengths of LLMs than the much broader category of "problem solving".
To go back to my original example, creating a budget for a large organization is a task that would fall under problem solving, but the kind of task LLMs are especially bad at.
I'd say that writing code is a subset of creating convincing looking text. The code it produces doesn't always run, and it doesn't always solve the problem we've asked it to solve in natural language, but it is convincing looking. Usually it's convincing looking enough for a compiler to run it without warnings too.
I would say that, yes, macro management is something I wouldn't trust any LLM to deal with either. Heck, I don't trust them to plan the design and structure of my scripts either. (not because they wouldn't work, but because I like to understand what I'm doing and writing, so I need to design them myself)
The way I use them is like this: I draft a plan, think it through, and then start coding. But instead of writing everything myself, I ask it to give me each part of the code. E.g. "Give me a function that receives this dataframe and returns it multiplied by 2 in every numerical column". Basically, I ask it for the pieces of a puzzle that I planned myself, and go from there.
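For instance, for that dataframe prompt, the kind of self-contained piece it hands back might look like this (a minimal sketch assuming pandas; the function name is my own illustration, not anything standard):

```python
import pandas as pd

def double_numeric_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of df with every numerical column multiplied by 2."""
    out = df.copy()
    numeric_cols = out.select_dtypes(include="number").columns
    out[numeric_cols] = out[numeric_cols] * 2
    return out
```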
Where I find Gemini 2.5's problem solving to be really good is in receiving my request, understanding what I'm saying, going through different possible solutions, and delivering one. It's good at that, but also at resolving issues and errors: it quickly understands why an error happened, explains why, and provides different fixes. It basically saves me the time of searching the web for that error.
Reading this, I get vibes from the times that I worked with GPT 4. I totally get this if we were talking about those earlier models. But with Gemini 2.5 Pro, Deepseek R1 and Sonnet 3.7... I can't say that has been my experience.
I'm using DeepSeek R1 for work sometimes and it's definitely good at "reasoning" (I think of it more as "computing with tokens"). I've given it some non-textbook problems (ones that master's/PhD-level students struggle with) and it often does a really good job, and the reasoning trace is also very convincing. I think for those models in particular, saying they just write convincing looking text is wrong.
Notably, Anthropic just released research saying LLMs lie about their reasoning:
https://arstechnica.com/ai/2025/04/researchers-concerned-to-find-ai-models-hiding-their-true-reasoning-processes/
It bothers me how often people use the word "lie" when it comes to LLMs. I know there is a definition where a lie can be said to be any falsehood, but in common parlance lying usually implies intent. This article also uses other language that implies intent like "hiding" rather than something more neutral like "unknown."
If I give someone the wrong answer because I was taught the wrong answer, or because my brain misfired in that moment, that's wrong but not a lie. If the chain of thought mechanisms are not a good representation of the actual reasoning of the model, that means the LLM is not good at generating an accurate output regarding its reasoning steps, but it's not a lie.
Yeah, this is why I much prefer the "stochastic parrots" description to descriptions that further anthropomorphize what LLMs are doing. Humans already naturally try to anthropomorphize shit constantly, and it's funny (but frustrating) that even people criticizing LLMs will anthropomorphize them the same way as the hype bros do.
One could argue that an LLM has intent in the same way a battalion, a group of voters, a corporation, or a constitution has intent. These abstractions are often said to have intent because they are the cumulation of the intention of multiple individuals. They do not possess intent themselves but they convey the cumulation of the intent of intentional beings.
I wrote a post on Tildes a week ago with the intention of cutting through some of the polarized, it's-either-black-or-white discourse on the capabilities of LLMs:
The ARC-AGI-2 benchmark could help reframe the conversation about AI performance in a more constructive way
I encourage people to read that post and comment on it. There is now limited evidence that at least one of the newest frontier AI models — namely, OpenAI's o3 — is capable, to a limited extent, of something we could reasonably call reasoning. This challenges common narrative framings of AI that attempt to downplay its capabilities or potential. It also challenges the idea, common in some circles, that AI models have already possessed impressive reasoning ability since 2023, since the reasoning ability detected in o3 is so small and so recent.
Well, this aspect is mostly true. The last thing we needed, while people are still shilling NFTs and crypto, is another reason to whip up a million server farms. The use cases certainly don't justify that as of now.
I hope Deepseek eventually ends that brute force approach and we can go back to reasonable computing for such tech. Our single machines are more powerful than ever; let's take advantage of that.
Sympathy is the least you can do. Solidarity is even better. The lawsuits will eventually settle this stuff, but it's not like there's literally nothing to do as an individual.
I will add this: If you think it is bad on here, do not engage about LLMs on Mastodon (some circles, at least).
Consensus is way less nuanced and way more convinced over there about how essentially research should be stopped immediately on these for a variety of reasons – in my experience at least.
And I’m not even sure if I’m disagreeing with them, it’s just surprising to see such an overwhelming majority opinion among a traditionally "techy" crowd.
LLMs as they are today are pretty much everything Mastodon avoids being. They are centralized tech relying on huge server racks owned by billionaires, a black box with no transparency on how they compute, and they have zero accountability in how they procure data or even in the answers they output.
I'm not surprised a decentralized, open source community is so vehemently against the model, even if the tech were actually sound. They spun up Mastodon to get away from such models, after all.
This makes sense, thank you. Even in the case that they weren’t functioning as a black box – which is a massive if – I guess even for the already open-sourced model weights, there had to have been a big company (Facebook for Llama, I think?) behind for its initial training, with all the data stealing and scraping that entails.
Yeah, open-source model weights don't really prevent models like these from being black boxes, unfortunately. They're more or less black boxes even to the researchers who most directly work on them and have access to those weights.
There was a time that VCs were throwing money at anything Blockchain related. I can't find it now, but I read about one that didn't seem to do anything except accept funding and give talks about how great they were
We're either in that place now with AI or moving toward it quickly. AI solutions get funding and make investors happy, even when they're senseless or pointless - the vast majority of investors can't tell the difference. So if a company is doing poorly, the CEO announces the awesome stuff they're going to do with AI and that might make folks ignore reality.
We're definitely there, but the main difference is that AI is legitimately useful in a lot of applications, and has been for decades. No one has found a good use case for blockchains except for cryptocurrencies, which are also arguably not very useful.
Every blockchain pitch was just some variation of "we invented distributed databases, but worse in every way". It was a solution desperately in search of a problem. Most AI investments are predicated on the fact that it's a solution desperately in search of a particular implementation to solve a real problem that's already been identified.
I liked Spock's World by Diane Duane.
As someone who is trying to become an author, AI scares me. I don't think it can produce good fiction yet, but it already poses a threat to low-quality, human-made fiction. As long as someone works on it, I don't think raw AI output is usable yet.
I'm talking about chatGPT which is the only one I use. If I give it a sample and ask it to make it more comedic, scary, serious, or irreverent, the changes it produces are impressive. They are not usable as they have the distinct soulless affectation of every AI, but I could use that result as a starting point for something. So it is usable in the sense that I will still have to work on it quite a bit. At that point, I might just write it myself.
What really scares me is thinking about what LLMs will be in 5 or 10 years from now. Text is AI's strongest, most basic element. Am I trying to break into a career that will no longer exist?
I suppose it will always exist for highly sophisticated, "high art", well-known authors who cannot be easily reproduced. They also embody a certain provenance. But what about those writing genre fiction that draws from cliches and stereotypes? This is not to say that one is better than the other, they just have different goals. The latter might be at a greater risk than the former.
It's really common to see anything made with AI dismissed as "slop" completely regardless of the quality. There's a lot of people that seem really locked into this mindset of "AI is bad so everything made with AI is bad."
Plus all the bizarre environmental claims that make it sound like every picture destroys gallons of clean water and burns down a small rainforest.
Thus far, the only practical application of LLMs that I've seen that qualifies as "not slop" is tagging masses of unknown content. Most other things like code completion either worked 90% well without, or requires such a fine toothed comb of review I might as well have written it myself.
The AI art stuff is somewhat interesting as a toy. But it's exploitative AF as a replacement for the content creators that (largely without consent) trained it. And is targeting the wrong thing: We should automate other things to give people more time to create art, not the reverse. AI music and voice acting in particular is dangerous beyond just being a jokey parody toy.
I've used chat GPT for writing Unity code. It's not perfect, but it's surprisingly good at getting me from "I have no idea how to even start this" to something that will work pretty well but maybe needs some adjustment.
The thing it's REALLY good at is when you tell it "here's some code. It's supposed to do this, but it's doing that". Then it instantly says "on line 37 you did x but it looks like you probably meant to type y" and fixes everything.
I've talked before about the AI code review bot $work uses, which I'm pretty sure is just ChatGPT in a trenchcoat resold as a SaaSaaS product.
Probably two thirds of what it says is somewhere between completely inane and accurate-but-not-helpful. It frequently suggests adding null checks for non-nullable variables, for example. Last week I had it suggest adding some absolutely nonsensical type casts that would not even have compiled.
The other third is actually very good. Today it caught a subtle bug, where I wrote a test case, made it pass, then refactored the test case such that it still passed but no longer actually tested what it should have.
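To illustrate that failure mode with a made-up example (not the actual code from $work): a refactor can keep a test green while quietly making it vacuous:

```python
def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)

# Original test: pins the expected value independently of the implementation.
def test_apply_discount():
    assert apply_discount(100.0, 20.0) == 80.0

# After a careless refactor: still passes, but now compares the function's
# output to itself, so it can never fail no matter how broken the logic gets.
def test_apply_discount_refactored():
    expected = apply_discount(100.0, 20.0)
    assert apply_discount(100.0, 20.0) == expected
```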
If someone made a tool that reliably did the second thing but not the first, I'd call it an amazing technological advance. But that's not what we have today. It's the noise mixed in that makes it frustrating. It's not quite bad enough to blanket ignore it, but it's still a lot of mental effort on my part to fact check everything it says.
LLMs are good at the obvious, language digestion. Their output is quite useful to a human, such as yourself, who needs to distill information. They're not there to provide unquestionable answers, but to bring hypotheses to the forefront of your mind. In that case, the fine toothed comb is part of the process, and the overall resulting workflow is often faster with the LLM than without. This significant degree of usefulness can apply to many domains, and therefore many people. On the other hand, I believe this output shouldn't be sold in the mass market under any circumstances, by itself or sneakily disguised as part of something else. The human consumer is essential. If there isn't a specific human who needs it, it's slop.
Honestly I've seen one use that impressed me: Cline with an internal MCP server for the company intranet.
It's basically a glorified search engine that can perform its own follow-up queries on intranet APIs. For finding answers to questions like, "How is property A determined and what processes does it control?" It does a remarkable job surfacing all the relevant documentation.
It is hard to deny that there is an environmental impact involved. There is the insane amount of compute used to train these models, and the staggering number of datacenters being built specifically to run them. Many of those need a lot of water to be kept cool, and not all of them are built in areas where water is a resource that can be taken for granted.
While generating a picture doesn't directly destroy "gallons of clean water", indirectly making use of these models does have an impact. It is still easy to be dismissive of that. But try looking at it from the context of millions of people using LLMs and other generative models to just generate trivialities. In that context, I feel, it is not warranted to dismiss these concerns as readily as I think you are doing here.
So yeah, the few images you generate for your PowerPoint slides, blog post, etc. might individually not consume that much. But they most certainly have a much higher impact than using already available stock material, certainly when you scale it up from just you to everyone else using these models.
To be very clear, I do think all these models can provide value and be useful in some contexts. It's just that for a huge number of cases they provide little to no added value. Once you balance that against the environmental impact of training and running these models, it is easy, to me anyway, to see how people just have a dismissive attitude towards most AI generated content.
I mean, if people can tell it's AI, it's probably slop by default. That's part of the issue. I don't think I need to highlight all the weird proclivities evident in generative AI, or the outright uncanny feeling of current gen-AI videos. It will just feel off if you take the time to do more than look at the surface of the image/video.
It will be interesting when even a trained eye can't tell the difference. I'm not exactly optimistic we're close to that point, though. Maybe in a few years it will genuinely fool non-artists (not just people casually scrolling on Twitter).
I'm not really sure if "slop" in these situations necessarily means "something lazily cobbled together"; it seems that a lot of people are just using that term to mean "the output of an AI program".
Which is not to say that I'm doubting your claims or your reading of the situations you're talking about. It's just something I've noticed recently.
I don't disagree, but I would argue that using "slop" as a shorthand to immediately dismiss anything made with AI is exactly what we're talking about.
To add onto this: even before "AI" was mainstream in the last few years, we still had chatbots and phone trees, not to mention poorly trained offshore firms sourced for customer service or tech support.
There's already a bias against the technology in terms of usefulness when someone has to go through ten million prompts or find the right phrases/statements, all just to have something escalated to a place where their problem can be solved.
So when a business touts "we're rolling out XYZ to make things better, now with AI", the general public is skeptical at best, and cynical at worst that it just adds another layer of getting screwed.
I’ve personally been involved in airgapped systems developed with LLMs used in EHR software. The amount of charting providers have to do is a whole other conversation but the amount of “hallucinations” which happen is unnerving.
I personally have no problem with them using it as a tool, similar to UpToDate as a reference, but when they automate the whole thing, it’s going to be super shitty that you were misdiagnosed due to an automatic review of your health records and test results.
If they ever kill the 'press 0 to talk to a human' option I'm just gonna light the world on fire.
In my experience, the pre-LLM chatbots ranged from OK to useless. The LLM chatbots also range from OK to useless, but now with a whole lot of extra bullshit noise in the middle.
If your chatbot just spews info from the FAQ, it is useless when my problem isn't in the FAQ.
Or my personal favorite, when the thing I want to do online is not able to be done online, so I have to call in, only to be put on a 30 minute hold loop for a human with the loop telling me "for faster service, you can service your account online!"
No! No I can't! I tried that! That's why I'm calling! Just put me in touch with someone who can actually solve my problem!
I'm not sure if they legally can. Or at least logistically. I remember quite a few fast food joints trying to automate the drive-thru process, and by now they've all switched back to humans. For now. The service was just slower, the bot couldn't always make out the right orders, and I remember a few were more or less programmed to offer discounts when things escalated.
Legally, who knows? We're still working on accountability in the courts. If harm comes in a humanless joint, logically that means someone high up takes the hit. Even if they complain about paying $20/hr, it's much cheaper to let a human take that fall than worry about that situation.
I find that interesting, because to me, the "LLMs are useful for everything" thing feels like as much of a strawman as "LLMs aren't useful for anything", in the sense that I rarely see anyone say either thing outright, but I often hear people complain about other people saying those things.
I can believe that this has a lot to do with the bubbles I'm in, though. My work is fairly tech-y, so most people have a fairly good understanding of how AIs work, and what their limitations are, but also are mostly excited to try out and play with new things. Outside of work, most of the people I know aren't particularly tech-y, and just haven't found a use for it yet outside playing around with it a bit. But I can imagine if you know people in that niche of tech-adjacent, mostly managerial work, they have a lot more exposure to the AI marketing, and a lot less understanding of what's going on under the hood.
I've also heard of people being forced to use AI more as a KPI thing; I can imagine that would really rile me up, but it's just so completely outside my experience of either work or AI that I can't really comprehend how anyone would come up with that idea.
I have an app that automatically shields you from LLM overdose.
Interested?
It's basically a wrapper on top of ChatGPT.
(I kid, I kid, please do label as noise)
The author doesn't really engage with any of the criticisms. He just handwaves them away and says something to the effect of "it'll get better." For someone arguing against "dismissiveness," he certainly is acting rather... dismissive.
The argument about stealing art is particularly grating to me. Yes, knock-off t-shirt shops and pirating services have existed since the beginning, but LLMs are stealing and profiteering on a global scale.
How much of Anthropic's new $200/month tier goes to licensing the training data from its original creators?
The argument most companies have been using for this boils down to "there are so many of them, it's not possible", which doesn't sound like it'll get better anytime soon.
As far as I could tell from following the court cases, basically the only form of "stealing" they did was scraping, and even that is mostly a problem if you then distribute the data (that would be piracy). It's the same reason seeding a torrent can be punished, but just viewing (i.e. copying to your machine) a movie on a pirate website isn't: the pirate website is the one breaking the law, not the viewer. So if you consistently apply the standards we've been using, the AI companies aren't doing anything wrong, and certainly not "stealing" anything. And thank god for that, because imagine if it were a copyright violation to base your work on someone's copyrighted material without asking for permission or a license first, no matter your contribution or how much the end product changes (in this case the end product is not even in a different medium; it's a different kind of entity entirely). It would be a copyright nightmare and an end to creativity, since nothing is truly "original"; everything is based on something.
Now you might argue that due to various reasons like the scale of it all it needs its own regulation, which is fine, but under our current understanding of IP they seem to be in the clear. Certainly not "stealing".
There's a clear difference between basing your own creative work on an existing work, and using existing work in the creation of commercial for-profit software. The first isn't a commercial use, and the second is.
Even if the first is actually commercial use, it's still fine. The line isn't whether it's commercial use or not (copyright cares precious little for that), the line is whether it's a Derivative Work (technical term) or not. Most AI artwork is not derivative work. Studio Ghibli's style isn't copyrighted, their individual works are. So unless you can point to a specific work and provide specific, concrete elements that were straight up copied with minor adaptation, ripping off Studio Ghibli's style isn't infringement. That'd certainly be easy to argue if you prompted the AI using copyrighted material, but not at all easy if it's just a bunch of training data.
A major factor when determining fair use vs copyright infringement is the impact of the allegedly infringing work on the marketability of the original. It's pretty plain to see that these AIs are outcompeting and crowding out the very people whose work they are based on and owe their existence to.
And if we're talking about copying, there are certainly plenty of instances where AI does just that, reproducing characters, likenesses, and even whole images at a high enough fidelity to get you in trouble. In some instances, the fidelity is high enough (check out the PDF linked inside the article; it has better examples) that you could only spot the difference with close, side-by-side inspection.
While you're correct about it being a major factor in a fair use analysis, one of the prerequisites for a fair use analysis even needing to happen is that there's actually something that counts as use of copyrighted material involved. Reproducing copyrighted images, as in your links, obviously counts, but style cannot be copyrighted, and it would be absolutely horrifying for human artists of all stripes if it could. Way too many people conflate very different things when they argue against AI copyright infringement, and the validity of the arguments likewise varies dramatically depending on the case.
Recreating the Studio Ghibli style, as in the currently popular trend, is obviously not copyright infringement, regardless of whether it's for commercial use, and even if it were trying to compete in the same market as Studio Ghibli's films (which I'd argue the current trend definitely isn't). Whether using Studio Ghibli's films as training data would be copyright infringement is currently an unsettled question, legally speaking, but it's a completely separate question, and way too many people talk about AI copyright infringement as if the two were the same.
Copyright law is already pretty broken imho, but even if I were a huge stan for it, the ability to copyright style would absolutely annihilate humans' ability to legally produce creative works, and a lot of people are ignorantly arguing for that to be reality when they don't distinguish between actual copyright infringement and mimicking a style when it comes from AI.
I'm not talking about styles; like you say, the issue is whether the models themselves are derivative works. When assessing that, it's hard to ignore the fact that they are competing in the market with the copyrighted content they acquired and ingested against all licensing agreements and all explicit notices that "This work is not to be used in the training of AI models." It's not arguable that they've done this, especially because it's clear that these models are copying, however lossy that copy is. The only question is whether it's defensible by claiming it falls under fair use.
While you may not be talking about style, I wanted to explicitly emphasize the difference between the two on a copyright level because a lot of the criticisms of LLMs on copyright grounds do not distinguish between the use of copyrighted material to train the models and the output resembling something copyrighted. I think your analysis is correct for the former -- whether they actually count as fair use being the unsettled legal question atm, not whether copying took place afaik. But way too many people extend their accusations of copyright infringement to any output that resembles a copyrighted work in any way, including uncopyrightable elements like style (this is very common on Tumblr in my experience). Because of this, I wanted to very explicitly delineate that these are two separate questions when it comes to copyright.
Fair Use is a tough one. Commercial use is a factor, though not usually the most important one. Gauging if the works are transformative vs derivative is often the largest question, as well as if they can displace the original in a market. I think the results may be very different for images than for text, but we'll have to see.
Your links are good examples of that, though I will mention that Feb 2023 is basically a digital millennium ago now. The tech is moving a lot faster than the courts do.
That's certainly a valid way to look at it, and it's hard to argue what the law says.
However, it's pretty clear that regulation has been behind the pace of technology for so long that applying current laws is a fool's errand, imo. And in the absence of well-informed regulation, the only thing we can fall back on is the intuition of well-meaning experts. I don't claim to be one; I'm just saying my intuition is that LLMs are substantially different from any previous technology, and I'm trying to think about it from first principles.
Your comment saying I'm too hand-wavy sure feels rather... dismissive. jkjk I was already feeling long winded and didn't want to draw out my point. Things will get better, and I want to inject optimism into the discourse. Too much pessimism, cynicism, and dismissal out there and I want to balance it out.
I didn't find it hand-wavy. I enjoyed your post, and found it to be down-to-Earth and relatable, even though I didn't necessarily agree with every point.
For controversial issues, there is an endless stream of hot takes that don't really deserve a full response, or maybe any response. Disagreeing with hot takes can be done endlessly. People will keep posting hot takes regardless of what you say.
So why do it? For some of us, disagreeing is addictive.
Well, I'm here on Tildes precisely to try and engage in good faith and not be some contrarian. I do want to try and understand different world views and expand my horizons past my personal bubble.
This article, sadly, did not feel very well argued. The opening was promising, but the meat of it was simply raw dismissal instead of reasoning.
What makes you think that this will get better? The costs to crawl are not particularly high compared to the costs of serving the data.
It's a solved problem whose solution the AI companies ignored in their rush to gobble up as much data as possible before their competitors got there first or regulators could slow them down.
I'm not sure how anyone can be surprised that that leads to resentment, or to a lack of faith that AI companies will ever be motivated to fix their crawlers.
I mean they're not broken.
They're not just ignoring robots.txt, they're actively engaged in an arms race against being stopped from crawling. People have shown that they are spoofing user agents and coming from distributed IPs specifically to get around throttling measures.
The most important part to point out though is that this isn't "rogue AI" or anything so ridiculous or Sci-fi. It's people programming the crawlers to do those things because they want the data. It's textbook "Computer Fraud and Abuse Act" stuff that's just not being enforced because "AI might be important and the CEOs with the most vested interest said they need this".
I’m not sure it ever was enforced very well? Anyone who has looked at http server logs knows that there have been weird user agents all along, though at a lower rate.
Enforcement across international borders is non-trivial.
Enforced? No, not legally anyway. But it was generally "bad form". For decades, big companies and standards organizations said "don't do this", and almost everyone behaved, because no one wants their web hosting rates to spike because someone repeatedly grabs their whole site, over and over and over. Ads started because they were a dynamic way to generate revenue: if your site got popular overnight, you could use the boost in views to directly cover the increased hosting costs.
Until the last few years there was no reason for the mass crawling we're seeing now. There were bad actors and the occasional mad scientists (read: archivists), but not an organized effort with billion dollar companies behind it to just ignore all engineered norms in pursuit of profit.
User agent spoofing isn't malicious in the same way phone call forwarding isn't. There are legitimate use cases for each, like convincing a site you can display the data you're requesting, or seamlessly routing phone calls internally/externally. The problem is when someone uses that benign functionality maliciously.
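For anyone unfamiliar, spoofing a user agent takes one line; here's a sketch using Python's requests library (the header string and URL are just examples):

```python
import requests

# The client simply declares whatever identity it likes; servers can't verify it.
# Crawlers abuse this to pose as ordinary browsers and dodge bot throttling.
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
response = requests.get("https://example.com", headers=headers, timeout=10)
print(response.status_code)
```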
It’s been a trend since the early days of the Internet that security requirements have gone up as activity by bad actors increased. Denial of service attacks aren’t new. Botnets aren’t new. A lot of websites have to hide behind Cloudflare or similar.
This is a new pattern of behavior, though. Denial of service attacks didn’t used to look like crawlers.
How much do we really know about who is behind these bots?
Like with combatting spam, there are multiple ways to go about it that are partially effective. Sometimes the cost of serving the data can be made very low via caching and content networks. Sometimes crawlers can be blocked.
It's hard to say which way it will go, but I think it's reasonable to believe that, like spam, it will always be with us, but might be suppressed enough that it's not a big problem day-to-day.
...Is the author suggesting short essays should be kicked out of school curriculum? (There was a link to a podcast but no transcript was provided so I can't confirm if this is indeed the point.)
Five paragraph essays are, in my opinion, very reasonable. That's just long enough to require a bit of research, which then paves the way for using that information to come to conclusions. Essays of this size have perhaps contributed the most to my learning.
I'm just astonished by how many people make bold claims about how we should turn the education system upside down because of what ChatGPT can do. Sure, math teachers of past times making claims about how "you won't always have a calculator in your pocket" were very wrong. But that doesn't mean the ability to perform basic mental math is useless.
And essays are quite similar to educated discussions. If someone asks you which historical figure you find the most interesting, are you going to reply with "uhh hold on" and then ask ChatGPT? (There's probably a lot of better examples but I think that gets my point across.)
It's weirdly hard to find a transcript for that relatively popular podcast. The takeaway from it is that five paragraph essays take away a student's ability to creatively express an idea, stuffing it into a standardized formula of an essay. It's asking students to be more machine-like.
I'm reminded of doing piano exercises. Sometimes relatively rigid training helps you learn the skills that you can later use for self-expression. The key will be putting the exercise into context, because grinding at getting good at an exercise can certainly turn people off, too.
Thanks for sharing the summary. Genuine question without an assumption before I respond further (to everyone/anyone, not just you) - do you agree with this?
Setting bounds on assignments is actually a good thing. Constraints exist in the real world in various formats. A manager may tell an employee they need some visual reports from last month's sales data, and that they have to do it in Excel. The employee (likely) isn't going to be able to convince the manager/company that they need to purchase a multi-thousand-dollar piece of software that does it better. They probably also won't be able to convince IT to let them install python so they can do it more flexibly with code.
These five-paragraph essays are also likely part of a longer plan to prepare students for higher education.
I agree with it to an extent. I think a lot of kids need that structure, but when someone who doesn't is being forced to fit in this box at the exclusion of being allowed to express their reasoning in a different (maybe better!) way... that's unfortunate. I like your point that restrictions can also breed creativity.
When I was studying for the SAT II, I learned a highly structured template for writing the essay portion. At the time, I felt like it was stupid and robotic, but after a few years of reading emails in the corporate world I started wishing that more people had learned to do something like this. I can't tell you how many emails I've read where the ask is not clear, the supporting info is not there, or the structure is so bad that it's impossible to decipher the meaning. Lots of people suck at communicating, especially the ones who think they're great at it. It shouldn't be the only thing you learn, but learning how to write using a strict template is a valuable life skill.
I like the way the author writes this article. I am also frustrated by the AI dismissiveness. In my opinion, anyone saying AI has no benefits is just as wrong as someone saying it has no drawbacks. The arguments of both people should be subject to scrutiny.
Do you see a lot of people suggesting that there are "zero benefits"? Most of the criticisms that I see are of the "bad benefit to cost/problem ratio" type, but maybe we use different circles of the internet.
I’ve seen a fair amount of discourse that could be interpreted as claiming AI brings no upside, but almost all of it is an abbreviated form of, “the benefits are not worth the costs”.
This is probably at least partially driven by short-form communication having taken over, which lends itself to misinterpretation, but it's also coming from a sense that "this is so obvious it doesn't need to be mentioned". To many, AI very much looks like a "step over a dollar to pick up a dime" sort of situation in terms of cost:benefit ratio, which makes the problems with it feel extremely obvious.
It seems pretty common in my experience, even here on tildes. The most common aspect I see is people saying that programmer assistance AI isn’t useful at all. Often it includes a good helping of no true Scotsman fallacy (real programmers don’t get any benefit from AIs).
Eh, I just think they don't produce good code. They add dependencies you don't need, they write code with bad style, they get confused really quickly (using Claude code / aider + sonnet 3.7) and I'd rather engage in thinking about the problem than trying to convince some agent to solve it for me.
Programming is mostly the exercise of crafting working theories of operation and all I see is AI softening my skills in that regard while not being trustworthy. I keep using it in the hopes I get better usage out of it, but I'm not that hopeful.
I've had a better time discussing projects directly with tools like ChatGPT than I have with things like copilot. Granted, I'm working on machine learning projects, so most of the work is figuring out the best choices to make with architecture and data processing, and these models are really good as a partner to "discuss" these types of problems with. The code implementations I've gotten from it have been good but not perfect, but with me carefully checking the output and prompting it to change mistakes, I can generally get pretty okay results. A lot of the issues with AI code quality come from a lack of attentiveness from the human part of the pipeline when it comes to spotting mistakes.
My domain is mostly firmware / async rust / async scala / netty / high performance iot pipelines. I'm not really looking for AI guidance, I'm mostly looking to avoid typing.
ah yeah, I'm not surprised that the results are rough there.
I generally share your opinions on them. Have you tried any of the tools for generating tests and test cases? That's one area where I've found the autocomplete functionality very handy.
I haven't tried tests, but I have found aider in particular to be good at "here are the docs for this json / dbus gvariant / something similar, I want a bitflags instance", or some other serialization boilerplate. But honestly, I have spent long enough with multiple cursors that it's not that much of a saving vs copy-pasting the docs and editing them myself? Still, fewer keystrokes are fewer keystrokes, and my fingers are better off for it :D
When I used Copilot in neovim for a bit, it mostly seemed like a glorified LSP with snippets. I don't think it was super amazing, but it was helpful with boilerplate sometimes. At the same time, I don't think the fact that they're stealing from loads of other people's work, or that they're using immense amounts of energy and creating emissions, is worth that.
Also, that seems like people saying they don't think it's useful for one specific task, i.e. not saying it has "no benefits" at all.
I've been making a conscious effort to use it more lately so I can get that perspective, and honestly I'm not terribly impressed. Or, I am very impressed, but with some weighty qualifiers on that.
At boilerplate, it's pretty good. It's occasionally easier to describe a generic block or structure and let it be generated. It's quite good at picking up on repetitive-but-not-identical lines. It often flips a sign incorrectly or grabs the wrong variable, but so far not having to chisel out a dozen variations of
var = p1 + r1 * sin(r1-r2)
has been the largest benefit.
It seems abysmal at going any higher-level than that, no matter how much handholding, what model, or how much thoughtfully-pruned context. When working on existing codebases, it totally fails unless "the first thing you think of without any specific knowledge of the codebase" also happens to work in context. Even when explicitly given contrary constraints, they just can't rip themselves out of that rut.
For what it is, statistically shrugging at text, it's astounding it's gotten this far. But if you're not writing some standard SaaS CRUD, even the newest, largest models very quickly show their limitations. And given that I can run a mini-model that can handle repetitive line completion even on 8GB of RAM with no hardware acceleration, (for this narrow use case) it's not clear what the value of the big boys is.
I personally found it worse than LSP - regularly in large Java projects it would spit out completions with incorrect argument counts and types, which is table-stakes in the Java world.
I do have relatively limited access to models at work because of security policy, but I have been unimpressed whenever I've played with what we have. I'm not really interested in nannying an LLM when I have other tools that work perfectly fine.
On social media, people post hot takes that could be interpreted that way. Adding caveats to turn it into a reasonable opinion would spoil the joke.
I am leading an AI learning program at our small company, lots of non-technical people, and I hear a lot of negatives about it. Probably about 30% of the people I interact with have a pretty hard "this thing is useless" stance. The rest are either really excited or aren't quite sure what's going on and have minimal complaints. I don't try to overhype it, I am very realistic about what it's good and bad at.
Well, you dismiss a good 80% of the reason I really hate current AI and why it's actually bad. I'm fine with experimental tools; I'm not fine with pretending they're ready to do the stuff they're being sold as doing now. That's entirely driven by the cult who just wants to hype it up for investor money.
As for the 20%...
I don't even know how to respond to this. When has rule of cool been used to dismiss crimes?
How do the specifics of how LLMs work make what they're doing not stealing? Yes, storing copies of other people's stuff on your server without permission is "stealing" in the modern sense.
And AI is making it worse. Why do we ignore that? Oh yea, "it's really cool".
The CD example in the beginning didn't need to steal from encyclopedias to do its thing. It's not good when an LLM needs to.
"don't attack the gun, attack the bullet" okay. That is certainly one way to dismiss literal thievery.
Even the early rulings already decided the output isn't worth any copyright. It's the input that really matters for artists.
I think a big part of it is that we all grew up being pretty dismissive of copyright. We downloaded songs off of Napster, pirated movies and games, and generally accepted that even if you weren't really supposed to do it, we all were going to copy that floppy. My whole teenage and young adult life pretty firmly set up the idea that this isn't really that big a deal.
As an adult with more money than time, I now buy my games and pay for a few streaming platforms, but I still treat copyright as more of a suggestion. If something I want isn't somewhere convenient to me, I'm just going to go ahead and download it. I'm not saying it's right, but it's the way a lot of people live.
So then when you tell me that we shouldn't use this cool new technology because it looked at someone's pictures when it wasn't supposed to, I kind of feel the same way. I get why artists don't like it, and I get the general idea that some level of copyright is important, but it doesn't feel like such a big deal that we should just boycott AI or freak out every time someone makes a picture with it.
I think there's a pretty big gulf between "piracy for personal use" and "piracy for profit." It's generally a bannable offense on private trackers to use content from their collections for profit.
It's not that it looked at someone's pictures when it wasn't supposed to, it's that these companies are making insane amounts of money using these artists' work without their consent, while simultaneously putting these artists out of work. Like, I am entirely in favor of violating copyright, but if the sites I use to watch anime were actively putting anime studios out of business, I would be against using them too
I recall from the old ads "You wouldn't steal a car," the retort was always "Yea but if I could download a car for free while not depriving anybody else of a car, you bet I would."
I think a large quantity of us grew up in an era where we realized that the internet enabled post-scarcity. Create once, share infinitely.
It's not just incompatible with copyright: it's completely antithetical to the entirety of all prior physical economics. If we only needed a team of skilled engineers to design a car, and could then 3D print one for $50, we'd use tax dollars to pay some highly skilled engineers to improve it and then let everyone have cars at cost.
But the powers that be didn't like that kind of disruption and thus spend billions of dollars to hold a post-scarcity medium on a leash, making it heel to the economics of old.
Pirates still have post-scarcity entertainment: they are only constrained by the time to ingest it. We could all have that, legally, but it requires being willing to let the old system die and letting a new one arise from the ashes.
Edit: Also humans have never respected copyright. I have in my house reel-to-reel tapes from the 50s with bootleg Sinatra.
That makes sense, but there are two aspects that make the situation very different.
Nowadays, the people doing it are big corporations (ones who can definitely afford to pay licenses), and many of the private individuals engaging in it have very much profit-driven motives. Some are trying to outright steal likenesses to get views, and I hope we all agree that goes way past any line of "copyright isn't a big deal". You may actually be harming personalities and impacting their careers over lies.
And the targets have inverted, disproportionately hurting smaller artists: ones trying to kickstart careers. Their careers may never kickstart if others steal their work and the algorithms reward that over quality. That's how you kill an industry.
Well, I don't punch down, but I do punch up. As long as the ones benefitting are the same ones trying to kill off art as a career, of course I'll go down swinging, at worst. Likewise, I won't exploit my peers; I'll learn art the way they did instead of resorting to stealing their efforts. My whole goal was to help support artists, after all.
Arguably the extent of US copyright law is pretty insane, though. If it had stuff copyrighted for 30 years after creation, so that the original creator can draw value from it within a reasonable timeframe, that would be fine. But almost a century after their death, with various options for corporations to evergreen certain aspects of copyright? Plus a law that allows companies to enforce arbitrary copyright claims without any sort of due process before enforcement? That's just beyond unreasonable.
At least that's one thing that's potentially subject to change if Trump continues his current path, given that the only reason it's enforceable internationally atm is the US soft power he's currently throwing away.
People have already commented on the actual contents of the article here, so I just want to say that having an AI-generated logo is an instant -90% decrease in trustworthiness for me. I've read multiple posts on @mattsayar's blog and every single time I find myself initially biased against them, even if I end up actually agreeing with them, just because of the AI-generated logo.
I'm not even an AI art hater, but people actually using AI art for commercial/branding purposes is crazy to me. It's like, ok, you fancy the logo that AI came up with, but you couldn't have spent a literal 10 minutes just tracing it in Inkscape or Photoshop to make sure that the straight lines are actually straight? It always looks so lazy and ugly, and feels like you're on some random scam website. The craziest thing is that I've seen even huge multi-million dollar companies use AI-generated images for their advertisements.
This is not meant to be a dig on OP, in fact, as I said, I like and agree with many of their posts. But I just really don't understand the usage of an AI-generated logo on a website.
I've said it before, generative AI is digital plastic. Whatever good uses it might have, 95% of its use by volume is going to be churning out mass produced slop at scale, so it's inevitably going to feel cheap to people.
Oh yeah that's rough. I always hand vectorize any AI images I put into projects I work on. It gives you an opportunity to clean up all of the artifacts, adjust design elements, and leaves people at least unsure of whether it's AI generated. For those that are sure it's from AI I assume they respect the minimal effort I make.
@mattsayar if you'd like I did an amateur's pass at vectorizing your logo https://gist.github.com/danthedaniel/7dfd053b1a05fd1056ac492f0e7cc271
Ooooh this is way better! Is it alright if I use this on my site until I think of something else?
It's yours
Thank you so much, I really appreciate it
You also forgot to replace the old one in the favicon
I recommend this tool for favicon generation https://favicon.io/
It takes a village! Done, thanks for the recommendations
SVG favicons are also supported everywhere, except Safari (as usual). But to save an HTTP request elsewhere, you can serve both assets.
Also, you can 100% just rename a .png to .ico and browsers will understand it. They care more about file headers than file extensions. This is ideal, since native .ico are just repackaged bitmap images and are very inefficient.
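You can verify the "file headers over extensions" point yourself; a quick sketch, assuming you've renamed a PNG to favicon.ico:

```python
# A PNG's identity lives in its first 8 bytes (the PNG signature),
# not in its file extension, and browsers sniff those bytes.
with open("favicon.ico", "rb") as f:  # hypothetical file: a PNG renamed to .ico
    print(f.read(8) == b"\x89PNG\r\n\x1a\n")  # True if it's really a PNG
```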
Thanks for calling me out on the logo. That is a long-overdue-to-be-replaced placeholder. Now I have to actually put thought into what I want it to be...
I was surprised not to read any counterpoint to the argument that LLMs are very resource intensive. They consume much more electricity to use than normal web queries (even if they are more efficient, it's surely not efficient to have an AI summary at the top of every search result). And they are driving a rapid increase in both the number and size of data centres, consuming a lot of gains in renewable energy production so that that energy can't replace fossil fuels. Of course we increase our energy use all the time but the electricity cost of LLMs feels more than irresponsible to me.
Personally, I also find it hard to dismiss the "hallucination" point... if I'm going to use an LLM to summarise and present information for me, then I need to be able to trust that it will be right where I can't tell when it's wrong, and I don't.
Just a note that hallucinations aren't really an issue with summarization, since the data is being placed directly into the context window. Hallucinations are much more common when sourcing from poorly-sampled data in the model itself. It's like the difference between recalling something you read just moments ago vs describing a half-remembered dream.
Summarizing a piece of content, and interrogating an LLM about it, are actually very good uses for the technology.
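To make the distinction concrete, here's a minimal sketch of context-grounded summarization using the OpenAI Python client (the model name and prompt wording are arbitrary choices on my part):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize(document: str) -> str:
    """The full document is placed directly in the context window, so the
    model condenses text it can 'see' rather than recalling it from weights."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize the document in three sentences."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content
```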
Unfortunately, having good information in the context can't hold them back if they're dedicated. When (trying to) use them for programming, I always summarize all relevant portions of the relevant APIs in the context, including usage examples for each. On occasion I try giving one a mathematical formula and asking it to implement it with the given API, but I still see it hallucinate functions with the names and signatures it wants, rather than those that actually exist and are present in its context.
Admittedly, that isn't exactly summarization, but there is a deep well of instances of search engines attributing information to articles where it definitely isn't present, and of Apple Intelligence's notification summaries telling tall tales that don't exist in the underlying messages.
I see a lot of problems with the current, popular AI discourse. I wrote about where I find fault in the discourse about AI capabilities here. But there's more I take issue with.
This comment will mostly focus on the common ethical arguments against AI. I could also talk about AI hype (e.g., how despite huge business investment and apparent enthusiasm for AI, it doesn't seem to be increasing productivity or profitability), but it seems like most Tildes users already believe that AI is overhyped.
1. The anti-AI art narrative seems to contain a contradiction
The discourse about AI-generated art is confusing. The detractors of AI-generated art make two claims that seem incompatible (or at least in tension with each other): (1) AI-generated art is low-quality slop, far worse than human-made art, and (2) AI-generated art is displacing the labour of human artists.
I agree with (1). As for (2), I want to see data that supports this claim. I've looked for it and I haven't been able to find much data.
What nags at me most is that (1) and (2) seem to be incompatible. If AI-generated art is so terrible, why do consumers putatively prefer it? And if consumers don't prefer it, how could it be displacing human labour in creating art? How can these two claims, which are often made by the same people, be reconciled?
What seems to me most likely to be true is that AI art sucks and, because it sucks, there is only a marginal market for it and very little displacement of human artists' labour.
2. Talking about how much electricity AI uses seems like a proxy for talking about how useful AI is
I'm skeptical about environmentalist arguments against AI. I'm skeptical because I've tried to find hard data on how much electricity AI consumes and I can't find strong support for the idea that an individual consumer using an LLM uses a lot of electricity when compared to things like using a computer, playing a video game, keeping some LED lightbulbs turned on, running a dishwasher, etc.
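For a sense of scale, here's the kind of back-of-envelope comparison I mean. All figures here are rough, commonly cited ballpark estimates rather than measurements, and the per-query number in particular is contested:

```python
# Back-of-envelope energy comparison; every figure is a rough, contested
# estimate, not a measurement, and varies by model and deployment.
llm_query_wh = 3.0          # high-end estimate for one chatbot query (Wh)
dishwasher_run_wh = 1500.0  # one dishwasher cycle, roughly 1.5 kWh
gaming_hour_wh = 300.0      # one hour on a ~300 W gaming PC

print(f"Queries per dishwasher cycle: {dishwasher_run_wh / llm_query_wh:.0f}")
print(f"Queries per hour of gaming:   {gaming_hour_wh / llm_query_wh:.0f}")
# -> roughly 500 queries per dishwasher cycle, 100 per hour of gaming
```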
The predictable rejoinder is "those other things have some utility, while AI doesn't". If that's what this debate comes down to, then the environmentalist stuff is just a proxy argument for the argument about whether AI is useful or not. If you thought AI were useful, you probably wouldn't object to it using a modest amount of electricity on a per consumer basis. If you don't think it's useful, even if it consumed zero electricity, you would still have other reasons to oppose it. So, it seems like nobody's opinion about AI actually depends on the energy usage of AI.
I also dislike how much discourse about energy in general is focused on promoting energy conservation rather than promoting increased production of sustainable energy when the latter is far more important for mitigating climate change and also benefits people economically (whereas energy conservation, if anything, harms people economically).
3. AI and copyright
A lot of people assert that AI models "steal" training data or that training on copyrighted text or images amounts to "plagiarism" or "copyright infringement". Two things bother me about this sort of assertion:
It's not obvious what constitutes "theft" in the context of training AI models. This is an unprecedented situation and I don't see people trying to justify why their non-obvious interpretation of "theft" is correct. Humans are allowed to consume as much text and as many images as they can in order to produce new text and images. If we treated AI models like humans in this respect, then this would not be theft. I don't think it's obvious we should treat AI models like humans in this respect. I don't know exactly what we should do. Why does it seem like people are not engaging with the complexity and ambiguity of this issue? Why does it seem like people are asserting that it's theft without a supporting argument, as if it should be obvious, when it's really not obvious whether it's theft or not?
The people who are angry about AI allegedly infringing copyright seem mostly indifferent to, or supportive of, media piracy. I don't understand why the zeal against AI exists while there isn't any zeal against piracy, especially when AI is a more ambiguous case with regard to copyright and piracy is such a clear-cut instance of copyright infringement. Being anti-AI and pro-piracy (or neutral on piracy) aren't necessarily inconsistent positions, but I haven't seen many attempts to reconcile them.
Is this a symptom of people feeling uncomfortable with ambiguity and uncertainty and attempting to resolve the discomfort by rushing to angry, confident opinions?
4. General properties of the discourse that I don't like
Some of the general things that bother me about the AI discourse are:
Strong factual claims, e.g., about AI displacing artist labour and AI using a lot of energy, without clear supporting data.
Apparent tensions or contradictions that aren't resolved; obvious questions or objections that go unanswered.
Opinions held so strongly against AI that it is sometimes said or implied that no reasonable disagreement with an anti-AI stance could possibly exist, and that people who use or defend AI are clearly doing something severely unethical and maybe should even be ostracized on this basis.
I take seriously the possibility that generative AI isn't actually that important or impactful (at least for now and in terms of what's foreseeable over the next few years), and that it's not really worth this much attention. This is a boring, possibly engagement-nullifying opinion, which might make it memetically disadvantaged on the Internet. But maybe also some people would find this idea refreshing!
The polarization isn't just on one side. In a way, both sides might be overrating how impactful AI is, with anti-AI people seeing the impact as highly net negative and the pro-AI people seeing the impact as highly net positive. I don't see AI as a credible threat to artists, the environment, or copyright law and I also don't see AI as a driver of economic productivity or firm/industry profitability. I think LLMs' actually good use cases are pretty limited and I definitely don't see generative AI as "revolutionary" or worth the amount of hype it has been receiving in the tech industry or in other industries where businesses have been eager to integrate AI.
I see this repeated often, but I think that AI-generated art is like plastic surgery: you only notice it when it's done badly. The latest models can generate images that are virtually indistinguishable from the work of good artists and illustrators. A prompter with good taste can get fantastic results.
Not at all. It's cheaper, and for some people that's all that matters. It's also good at convincing ignorant middle managers to eliminate jobs and heap additional, more tedious, less interesting work on their existing employees despite the quality issues. It's not a contradiction; these issues stem from different sources.
It's also led to severe distortions of expectations and a devaluing of the effort people put into art. Artists get told "this took you 14 hours? smh, AI could do better in 10 seconds." In shirt/sign/print shops, it's nothing new for someone to bring in a screenshot of a photo from a text message their grandson sent them. We digitize things for people all the time. But AI is increasing the frequency of these requests, and it's also trashing expectations for turnaround. We've told people that their design is not suitable for print and would have to be recreated from scratch, only to have them ask moments later, in the same conversation, whether they could see it yet. People have always been impatient, but it's new to have them expect that we toss a job at the machine and it'll be done in a few minutes like a microwave burrito.
I guess your argument is that the masses have poor taste and will accept low-quality art? Is there data that supports the idea that this is happening at scale, i.e., there is some statistically measurable displacement of paid human artistic labour by AI art generation?
To my knowledge nobody has attempted to collect any data; the anecdotes are just deafening from every corner of my art and graphic design networks. Everyone's department is downsizing, not growing.
I don't know much actually about how the masses are accepting it, only that clients and managers are using it and demanding its use.
That's interesting information. Thank you for telling me about that.
I think the topic warrants further study.
It's not the masses, it's the employers - I agree I don't have data but I can point at Disney's Fantastic Four marketing art and how absolutely awful the AI art is. It doesn't look like an actual artist touched it. And if big corporations go that route because it's cheaper... Someone has to pay artists for art. "The masses" aren't usually the direct purchasers of that product.
I'm not sure I understand your intended meaning. If consumers don't consume AI art, there is no market for it.
Also, the comment you replied to was replying to Diff's comment, and in that comment, Diff wasn't talking about large corporations making popular movies. They were (I thought) talking about individual customers who have a direct, one-to-one relationship with an artist or a small business producing art at small scale. So, that was about individual consumer choice. That was about "the masses" directly purchasing products.
I would appreciate it if you didn't reply to my comments in the future.
I don't think that's in the spirit of the site, or a reasonable ask. If somebody is being respectful, as @DefinitelyNotAFae is, then we have no individual right to prevent others from sharing their thoughts alongside us.
I agree that if someone has been respectful to you then it’s not polite to ask them not to engage with you. If someone has been disrespectful to you, then I guess your only two options are to ask them to stop engaging with you or quit the site. I’m trying the first option before I quit as a last resort.
I think it’s a good idea for social sites/apps to have a block feature, since it lets people maintain their own boundaries without quitting the site/app altogether.
I think that insisting that a specific, well-behaved user never reply to you like this on a social media site like Tildes is more disrespectful than any comment DefinitelyNotAFae has made, in this thread or otherwise, and is overall an attitude that would be harmful to the community this site tries to create if it spread beyond one user.
If you think a user is being sufficiently disrespectful to you here on Tildes, use the site's labels to report them for malice. That's seen directly by Deimos, the admin of the site, and he does take action to delete comments and nuke threads when things get out of hand. But I suspect he, like me, would not consider the comments DefinitelyNotAFae has left here to be disrespectful the way you seem to.
I disagree with your analysis of the situation.
I would like to be granted the autonomy to maintain my own boundaries. That’s why I think just having a block feature is a good solution for most social media sites/apps.
If it’s going to turn out that trying to maintain my boundaries isn’t possible, or isn’t possible without some onerous level of stress and effort (e.g. spending hours justifying myself to strangers who have no investment in my well-being), then unfortunately the best option for me is to just quit the site.
I already took a break from the site for about a month after having a bad experience. I am not one of these people who relishes conflict or who wants to get into it with strangers online. Very much the opposite.
I just want to have the ability to stop interacting with someone if I have a bad experience with them. I see no utility in getting into protracted conflict with strangers online. It’s hard enough to resolve conflict with people you know and love, let alone with strangers where you have no established trust, rapport, affinity, or common ground. Why would that be a good use of my very limited resources?
Tildes.net is not "most social media sites": it's an invite-only community for people who want to discuss in good faith and escape the contentious argumentativeness that has taken over most platforms. While this thread is more polarized than most in this community, your contributions come across as grating and dismissive, not a good fit for our culture.
Your account is merely two months old, and it can take a while to shed the toxicity we learned on other platforms and adopt this culture, but it's not a good fit for everyone. And that's fine; this community is not about userbase growth, it's about nurturing something: https://docs.tildes.net/philosophy
This is super rude and unkind. I don't know why you thought this would be constructive — maybe you didn't think it would be constructive and said it anyway, I don't know.
I will be deactivating my Tildes account shortly.
I think you're expecting others to do more than you're willing to. This site doesn't have a blocking feature because it has pretty firm moderation. Firm moderation establishes healthy boundaries here, but if they're not enough for you, I think not participating may be the only reasonable option that gives you what you want without making Tildes users you don't like feel personally attacked for arbitrary reasons.
My personal boundaries are not the consensus boundaries of the community — and that's as it should be. Just because I've decided I don't want to interact with someone doesn't mean they should be banned from the site. By analogy, there are people in my local community who I've decided I don't want to talk to, but I would never advocate for them to be excluded from IRL or online community spaces. There is a difference between "I don't personally want to interact with this person for reasons that are important to me" and "I think this person should be ostracized from this community". If you're saying that the only good reasons to not want to interact with someone are because they've done something that warrants censure or ostracism from the community at large, I don't know what to tell you, you're just wrong — and you probably don't even actually believe that.
I will be deactivating my Tildes account shortly.
The implication from this comment is that Fae was being disrespectful, which I don’t see any indication of in their comment.
I had to look them up because this was weird, and apparently they don't like me from a previous post where they also asked me not to ever speak with them again lest they quit the site, and I therefore didn't reply to their assumptions about my posts.
I respected their boundaries in both circumstances once I was aware of them, but noticing a username in a thread before I respond, and remembering the details of that username on top of it, when we've interacted... twice basically? three times technically? Meh. I'm not trying to antagonize anyone, though personally it feels quite shitty to be told that interacting with you is bad enough to make someone quit. But I've been on the internet before. 乁( ⁰͡ Ĺ̯ ⁰͡ ) ㄏ
That was not my intended implication.
I really would love just to have a block button and not get into some huge public argument.
Diff was talking about middle managers. Those middle managers and businesses who contract with artists are definitely the ones judging turnaround time and devaluing art across the board. And Diff was replying to your presumed contradiction which is what I'm addressing.
ETA: I have no idea who you are and didn't see your final line until after I replied. If you don't want to interact with me, please just don't reply to me, as I have no way to tag you in my app.
Edit on 4/23/25 - it has been incredibly weird to have someone go on this much because they didn't like interacting with you. In attempting to respect this user's boundaries I've been unable to defend myself from their original misinterpretation of my posts, and they've spent ten days seeking ways to block me, including making a separate post and writing paragraphs about it. I'm mostly writing this for posterity, so that I have a fucking clue what this is about when we accidentally cross paths in the future and I once again have to hear how awful it is to interact with me.
I'm not going to get into the disagreement, but just say that one thing that might help you would be to find the Tildes Reextended plugin for your browser. You can use it to tag usernames. While it won't block, tagging a user would enable you to see that a comment is from them and choose to ignore it.
I will say that when I left the site a year or two ago, it was because there were some users who were - politely - posting things I disagreed with to the point of not being able to refrain from replying. It did cause me to take a break.
I haven't had that problem since returning, but I have strategy ideas on how to handle it. And in fact, there was a topic recently where someone expressed some opinions that angered me. I was able to ignore the topic and walk away.
So I have some sympathy for your general situation.
I think the nature of the design and intent behind how Tildes operates doesn't allow for user blocking. I understand that philosophy. But it means that when you find yourself in a position of needing that functionality, you just have to find another way.
Thanks. I've tried that extension and I found the tagging username feature to be better than nothing, but ultimately not what I was looking for.
I really don't like the idea that if you want to block someone, it's your problem. Blocking someone doesn't necessarily imply a strong moral condemnation or a belief that they should be punished. Like you described in your example, sometimes some people just express opinions that bother you, and you might just not want to engage with those opinions. Or you might just find that someone is too rude for you to want to interact with them further, while not being so rude that it warrants moderator action against them. There are many, many more examples like that.
The line that the admin/moderator/community is going to draw for what's acceptable behaviour or what's an acceptable opinion is going to be different from my line. Perhaps more importantly, the line I draw for who and what I want to engage with personally is going to be very different from the line I would draw if I were acting in the capacity of moderator. There are many instances in my experience moderating other communities where I have extended leagues more patience and tolerance for people when I was acting as a moderator than I would ever extend if I were interacting with those people in a personal capacity — and I think that's as it should be. As a moderator, you have a much greater obligation to be impartial, lenient, measured, patient, and so on than you have as an ordinary person having personal interactions.
To say that personal boundaries = community rules, and then rebuke anyone who tries to have personal boundaries that are stricter than (or just different from) the community rules, is, I think, unfair, unreasonable, unrealistic, and unkind. That feels really unhealthy and unwise to me. I don't want things that are personal to me, and that I feel should be up to me, to have to go through a committee-approval process, and a community that requires that seems conducive to many bad outcomes, including bullying and emotional abuse. I really don't like this way of doing things. I also really don't like how mean a few users have been to me in the comments on this post.
This will be the last comment I make on Tildes for at least the next 6 months. I am going to deactivate (or pseudo-deactivate) my account using the steps I described here.
I wish you well and wish everyone else on Tildes well too. And I really mean that and I'm not saying it passive-aggressively. I am unhappy with my experience here and have some grievances (as I just described at length), but I believe in peace above all else and I believe (as much for the well-being of my soul as for the benefit of the world) in wanting good things for people even if you've had conflict with them, even if you've cut off ties, even if you have untenable disagreements, even if they've hurt you or wronged you. So, I genuinely wish everyone on Tildes well.
This may be true in 90% of cases, but I don't believe it's inherent to the method; it's more about the people using it.
Have you tried the AI Turing test, a set of handpicked real and AI images? The selection deliberately avoids a few of the obvious AI visual styles, like what Dall-E does whenever you request something "realistic", and tries to focus on styles that imitate real art, historic and modern, to make the comparisons fair.
The median score for correctly recognizing what's real and what's AI among readers of the author's blog was 60%. I think I had about 85% correct, which is apparently quite uncommon, probably because I have both real interest in art and many hours spent playing with AI image generators.
This was art specifically. Many freelance illustrators make money drawing commissions: specific existing characters in new situations, avatars for people in a certain style, a lot of porn, etc. This is imo more of a craft than art, and it's just a matter of time before AI does some part of it well enough to effectively take away work, or to significantly increase the productivity of those who adopt the tools and change the market that way - I bet both are happening already.
Another job for illustrators is doing concept art, storyboarding etc. - things that are part of the process for something bigger and only used internally. This is an area where AI image generation was adopted pretty much instantly - people were generating images with the first version of Midjourney, which was not realistic at all, and then drawing over them, because it gave them interesting ideas and made their work easier. Nowadays things like storyboards are often generated purely with AI in areas like advertising.
Imo the two points you mention are not necessarily contradictions. Part of the explanation may be that making money doing art is a very broad category containing different jobs, some of which are already being reduced due to AI and others which are not. Another part may be that AI produces slop in some situations and not in others, depending both on the style chosen and on the skill of the prompter (even if that skill is just "having some taste and correctly deciding which images to keep and which to throw out"). And angry illustrators who do the less creative, more at-risk work are not exactly rushing to admit that their jobs really do seem like they could be partially automated away quite soon, because their work is not that special despite falling in the category of art.
(here's the key to check the correct answers for anyone who wants to try the AI Turing test, and here's more information about the results - obviously do not open this until you do the test)
Don't forget the post from the guy who made the most deceptive AI images!
This is an interesting perspective. Thank you.
I briefly looked through about the first half of the examples in the "AI Art Turing Test". A lot of the pieces are abstract, weird, fantastical, and have non-Euclidean geometry or don't attempt to show perspective in a realistic way. That makes it particularly hard to judge.
I also saw a few examples, particularly the cartoony images of porcelain women, that I find ugly and low-quality, though I don't doubt they could have been made by humans. Sometimes I wonder if part of the reason diffusion models like DALL-E and Midjourney output art that looks bad is that they're trained on a lot of bad art from DeviantArt or Tumblr or wherever. It makes sense that most of the drawings on the Internet would be made by people with closer to beginner-level skill than expert-level skill, just like how most fanfiction is closer to a "14-year-old who has never written anything before" level of quality than an "experienced writer who could realistically get a publishing deal" level of quality.
I also think of this post about LLMs generating short fiction. The author's view is that LLMs are good at generating short stories that look like good writing upon a cursory inspection, but if you scratch the surface, you start to notice how bad it really is.
I worry about the same thing happening with the "AI Art Turing Test". Realistically, how long am I going to spend looking at each of fifty images? Maybe ten seconds or less, which is not long enough for my eyes to even take in all the detail in the image. Passable at a glance is not the same thing as good.
If a great piece of art is something you can stand in front of at a museum for an hour and continually appreciate more detail in, then a bad piece of AI art is something that looks impressive for the first 30 seconds you spend looking at it before you notice some messed up, ugly detail.
This is an assumption that I don't think is necessarily correct, given the average reader of ACX, and it wasn't true for me. Some of those really are hard to judge.
Counterpoint: while I don't think AI is close to producing art that can be admired for an hour in a museum (but then again, neither are most artists), a lot of AI art is more interesting and artistic than 99% of what online commission artists produce. I mostly enjoy surreal, weird, and dreamy styles, which is where some models have excelled since the beginning. Part of it is probably because the models' imperfections add to the overall vibe, and part may be that those styles somewhat avoid the low-quality DeviantArt-style illustrations in the training data. Often there's a tradeoff - the more photorealistic the models get, the less creative they are and the worse they are at generating weird stuff - but Midjourney seems to be able to keep the balance.
I do think that real art with real value (something that could go to a serious museum) would come when outputs like those mentioned above are used to create something more sophisticated, sort of like "advanced late-stage concept art"; the outputs themselves have inherently less value. But I think they can be really creative and better than most derivative art available online, and some would stand on their own even in the offline world, although not at the "hour at a museum" level.
My wife works in education, and I wanted to expand on this point just a smidge:
In my wife’s class, her students have an online textbook. The online textbook software has a “handy” “Summarize this chapter” button.
That is horrifying. One of the best ways to learn and understand something is to summarize it yourself. The act of summarizing helps you retain the information, which is why my wife's assignments often included a question asking students to summarize what they read. It was a homework assignment, so they could summarize as they read the text; now they can just "click a button" to have it done for them.
Sure, maybe summarizing things is going to be a task delegated to LLMs in the future, and it is a skill that will no longer be needed. But it's also impeding students' ability to retain information and actually learn the material.
I think this article kind of neglects the downfalls of a lot of the technologies/progress it espouses.
I would say that the advent of the calculator did reduce people's overall arithmetic literacy in day-to-day life. Lots of people cannot do mental math anymore (making change, comparing the cost of something when the unit price is not available, etc.), and while we now have calculators everywhere so you can figure these things out, it has reduced our intuition with numbers. I still love calculators and would not throw them into a pit of fire and bury my head deep in the sand to ignore their existence, but I wouldn't encourage over-reliance on them, or act like they had no negative impact on our lives.
From a copyright standpoint, I think people are willfully ignorant of, or purposefully ignoring, how much piracy has changed with each iteration of technology. From people literally making physical copies of records, to taping things off the radio, to mp3s and beyond, copyright infringement has been around for a long time, but ignoring the changes in ease, access, and scope is a bit disingenuous, in my opinion. Saying a lot of these issues are really a parenting issue or some other regulatory issue, while valid, is like saying "if the poor would just save their money, they'd stop being poor", or more accurately, "guns don't kill people, people kill people". It just dismisses the problem by making it someone else's problem.
Copyright infringement and other IP "crimes" are difficult to parse; they are like porn: "I'll know it when I see it". But honestly, will we always know it when we see it? Something about technology has changed: we seem to have more "photocopier"-type things, direct replicas, or things good enough to pass as the real thing (or a disregard for whether something is the "real" thing at all), and this era of infringement is... interesting... to say the least. Something in society has also changed: we have become over-reliant on external validation and vetting. These two things together make me afraid that we will not always know it when we see "it".
A related take by Simon Willison that encapsulates my thoughts around LLM critique better: https://simonwillison.net/2024/Dec/31/llms-in-2024/#llms-need-better-criticism
He sure hand-waves away the theft of art. My wife is an indie author, but she still had her books stolen by Facebook to train their garbage.
Stealing other people's stuff to make your thing work is not ethical. Full stop. It's an extreme example, but if you're against Nazi scientists experimenting on people, this is a similar idea.
I can't imagine that analogy doing anything to persuade someone who isn't already convinced. All it does is polarize the discourse. We've spent decades making fun of anti-piracy PSAs on TV. Now copyright infringement is akin to Nazi crimes against humanity? Please step back from this hyperbole.
I'm not saying you're wrong and I'm not trying to invalidate your wife's hard feelings about having her work used in this way. But we really can't go around comparing things cavalierly to Nazi atrocities.
Yeah I think it's genuinely pretty horrifying to compare the theft of any property, much less mere intellectual property, to Nazi atrocities like that. Even if you believe both are unethical, there is an unfathomable gulf in degree between them.