As a developer, I've been an LLM opponent since it began to become mainstream, for all the usual reasons.
Then I've been heavily nudged to use it at work, and I have actually used it. I conversed with colleagues who are much more enthusiastic than me, and others who are even more cautious. I learned (and I'm still learning, as they're evolving way too fast) LLMs' strengths and weaknesses, and while my views are mostly unchanged, I now know what they can (and cannot) do, and I've experienced the effects on my work. I think I sit squarely in the biased center the article describes.
But because those people kept using the tools long enough to learn those lessons, they can appear compromised to outsiders. And worse: if they continue to use them, contribute thoughts and criticism back, they are increasingly thrown in with the same people who are devoid of any criticism.
Absolutely, and that's terrible. I'm active in a few communities where people are strongly anti-AI, and I no longer engage in such discussions because I'm afraid to be seen as "one of them", a traitor developer encouraging intellectual theft and ecocide for convenience. And quite honestly, that's also how I see myself.
a traitor developer encouraging intellectual theft
Every developer I've heard from online before LLMs was staunchly against IP. I only ever heard calls to eliminate software patents, complaints about proprietary licensing, etc.
It's a little more nuanced than that, actually. Not all licensing is proprietary, especially in the open source world.
One of these licenses, the GNU General Public License (GPL) basically says "do whatever you want with this project, as long as you contribute back and also open source your own fork". I'm simplifying, but that's the gist. The deal is I give you my work for free, but if you're making something with it, you're obligated to share your own work with the same license. That effectively makes monetization really hard, and enterprises usually refuse to touch GPL code because it's a legal minefield.
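For concreteness, this deal is usually attached to code with a short header at the top of each source file. A minimal sketch (the author name is made up, and the last sentence paraphrases the copyleft condition; the canonical wording lives in the license text itself):

```python
# SPDX-License-Identifier: GPL-3.0-or-later
#
# Copyright (C) 2024 Jane Hacker (hypothetical author)
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version. Anyone who distributes this program, modified
# or not, must make the corresponding source available under these same terms.
```

The SPDX identifier on the first line is the machine-readable shorthand that license scanners look for; the prose below it is the human-readable notice the FSF recommends.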
However, LLM producers proudly and loudly don't give a damn about copyrights and licenses. They stole our open source work to feed their beast and make tons of money from it. Hence the intellectual theft. It's not about "my" work or the perceived stolen monetary value, it's about a broken legal and social contract. It's stealing the collective intellectual work of everyone, not only developers, and privatizing that immense sum of knowledge to make a profit from it.
you're obligated to share your own work with the same license.
This always irks me, because the clause is that you share the modified work with your downstream customers. Not upstream with the project, not with the general public. Just the people you are selling to.
Nothing breaks the GPL if you keep the source in a private repo that you give your customers access to. Heck, if you look at your car dash's 'about' button you can find a download link, email, or address to get a CD of the relevant sources.
You can't prevent your customers from then republishing or upstreaming your changes, but hey, that's one less thing you have to maintain.
Honestly, the more I've understood of how the GPL works, the dumber it has seemed.
It essentially hinges on copyright law. This is stupid. Copyright law varies wildly between different countries, and many of its ramifications aren't clear even within a single jurisdiction. For example, is using a piece of third-party software in your own software a copyright violation? The FSF (who wrote the GPL) say yes. The EU says no. There is no clear answer from the US, IIRC. The whole copyright thing is fraught with ambiguity, and open source licenses are stuck in the middle of that.
But there is an easier option: EULAs. These have a clearer legal backing, and you can be much more explicit about what you put in them. You can say, for example, that changes to source code must be released back to the original licensor if passed on to an end user. You can clarify much more precisely what you want to allow. The LLM situation would be much easier to resolve, because you could explicitly forbid LLM ingestion in your license (as opposed to right now, where it looks like training an LLM generally counts as fair use and therefore there's nothing a copyright-based license can do about it).
There's a different version of history where the FSF weren't quite so smug and didn't try to be so clever about their solution, and I'm not saying that version of history would have been better (I don't know what all the ramifications would be), but I do think free software would have had a much stronger legal basis in some of these discussions.
You have that backwards, I'm afraid. EULAs are almost universally founded upon bullshit, trying to disrupt consumer rights by straddling the legal underpinnings of goods and services whenever it is convenient for them. Every country (and every state in the USA) has completely different standards for what is actually enforceable. That's why so many clauses have things like "you have no rights, unless you live in one of these three places that explicitly say you do."
Partly because EULAs are completely one-sided in a way typical contracts are not.
By contrast, copyright law exists more or less everywhere that distinction matters, with fairly consistent enforcement, and courts have repeatedly held that the GPL is enforceable. Yes, even under EU copyright.
Because copyright law is about the rights-holder having exclusive control over the permission to copy their IP. Don't follow the rules of the GPL? You lose that granted permission.
It is a clever legal hack that even the likes of Cisco and Samsung could not defeat.
Copyright as a concept is generally fairly well defined. Copyright for software is not. The GPL exploits this ambiguity, and basically makes a bunch of claims about what is and isn't copyright infringement, regardless of what the actual law says. For a while this worked out okay, but as copyright for software is slowly being cleaned up, many of those claims aren't actually true any more.
For example, the GPL claims that a derivative work includes any work that links against the original work. (The GPL also makes this more complicated by exclusively using language that only makes sense in the context of C, but the FSF certainly intend this clause to cover any case of a library being used by another program.) This is the whole virality concept: if you use a GPL work, you now need to comply with the GPL for all of your code as well as the original code. This is probably the most famous feature of the GPL specifically.
Except it turns out this is completely unenforceable in the EU, because that's not considered a derivative work in the EU. The GPL doesn't get to decide what a derivative work is, copyright law does. And Directive 2009/24/EC, recitals 10 & 15, specifically states that you can freely call or reference other code without creating a derivative work.
Generally, cases where the GPL has been successfully upheld in the EU have either happened before this directive came into force, or have been more general tests of the qualities of an open source license (i.e. the referenced library itself is still copyrighted and under the GPL, even if no derived work is created in the rest of the code).
(Also note that the situation in the US is unclear, but it looks like courts are slowly leaning towards a similar approach to the EU (see e.g. Google vs Oracle) and I wouldn't be surprised to see similar legislation appear over there.)
You're also missing the point of EULAs. They exist because they are enforceable. They often contain unenforceable clauses, sure, but the basic concept of "to access this item, you need to agree to these terms of use" is very well enshrined in law. The problem with EULAs for end-user software is that they represent a significant power imbalance where consumers are pitted against corporations, but for open source licenses, which are more typically used in a B2B setup, that isn't the case. And with something like a license agreement, you get to decide when the license applies because you define that as part of the license. So you could have had much more rigorous consumer protections via an EULA-style license agreement than you can ever have by abusing copyright protection, but the FSF for some reason decided it was more fun to be clever than to try and solve a real problem. And now we're stuck with a concept of open source that is slowly making itself less and less relevant over time.
At least for me, the issue is that LLMs are trained on copyleft code then their output is put into proprietary code bases. I’d love for people to use my code, but I want the improvements to be shared.
I remember hearing a story about a programmer around 1980 (Richard Stallman) who reached out to someone at Xerox to try and get the source code for one of their printers, in order to program something that would stop them from jamming up all the time, and he was flabbergasted that they denied his request. The concept of IP law on code was so alien to him that it inspired him to start a whole organization dedicated to providing software freely to everyone.
I think there is a genuine risk of anti-AI backlash leading to a shift in people supporting stronger IP laws when they otherwise wouldn't (or would be materially harmed by such laws). I find the sentiments I see on the matter from really staunchly anti-AI creatives genuinely worrying sometimes.
I see a lot of under-informed artists in places like Tumblr cheering on things like what amounts to copyrighting style. Especially on sites that are full of fan art and other work that relies on permissive interpretations of copyright, this has been a source of frustration for me.
Modern generative AI can be used to make art, but afaik it's not all that helpful as a tool in a workflow that isn't centered around it. At least that's the impression I get from the more nuanced artists I've seen talk about it. I'm not super familiar with what's out there though, since I'm not an artist myself. Regardless, the tooling is definitely less sophisticated than it is for coding.
My stance has always been: IP law should do a better job of promoting creative works.
When companies like Oracle assert copyright over APIs, or patent trolls try to control ideas instead of physical objects, those are actions that suppress innovation, and they're abusing IP law to do it. That doesn't mean copyright and patents are bad! It means computers have changed the world, and we need to fix the laws to curb the abuse.
LLMs are the same way. Copyright hasn't magically changed from inherently evil to inherently good—it's just a tool. And we shouldn't optimize for what's bad for Oracle versus what's bad for OpenAI, because by doing so we might miss out on building a better world.
IP law should do a better job of promoting creative works
You betcha. In fact, it's worded almost exactly as such in the very document that authorizes making said laws.
[The Congress shall have Power . . . ] To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
I wonder what the Founding Fathers would think of 100+ year copyrights and closed software ecosystems. Betcha a nickel Jefferson would be a GPL zealot too.
I wasn't really one of those. I'm not anti-copyright, but we do need to review its current terms. Locking down art for a century plus (well after the creator, and maybe even their children, have died) was not the intention of copyright.
If we eliminate copyright outright, we surrender profits to whoever can produce the most merch. And I don't think "developers" can hope to compete with that.
I no longer engage in such discussions because I'm afraid to be seen as "one of them", a traitor developer encouraging intellectual theft and ecocide for convenience.
Conversely, when I engage with pro-AI people and criticize these systems for being wrong quite often and not the panacea they pretend to be, I repeatedly hear the phrase "you're not using it right" as a response. It frustrates me to no end. Conversations on both ends of this spectrum happen frequently, so I'm increasingly feeling like I just want to ignore the topic entirely, because I fall right in the middle and face the ire of both sides.
There are definitely too many people on the extremes on both ends, and I think which "side" you get more frustrated with inevitably depends on which one you encounter more often. Both sides tend to discredit anyone with a remotely nuanced middle-ground opinion as being biased or corrupted in some way, in my experience.
Well that's because "AI" is a huge umbrella term for many different models that are also extremely customizable by their nature. It's like criticizing a car's performance without saying anything about its model, body, gearbox, or engine. And without having a driver's license.
Imo, current AI tooling feels a lot like SCM tooling circa the 90s/00s: so bad that it makes you question (often while screaming and pulling your hair out) whether copying versioned tarballs around would be a better use of time. I think we've still yet to see the git/hg of this tech, which, along with the incredible fomo + hype train pushing it, makes it really difficult to talk about, since the tooling right now is Perforce-tier, even if there's a glimmer of a good concept in there.
I guess I'm saying, I totally see where you're coming from. It reaaaaaaallly sucks to have true believers discard your opinion by telling you that you're holding it wrong.
Hey, fwiw, as someone who's engaged with the LLM discourse and also has some friend groups that're vehemently anti-AI, I don't see you as a traitor or any more complicit in ecocide than the average citizen in the western world (I haven't done the math, but I can't fathom that one's participation in LLM usage is worse than eating beef or driving an SUV to work). We're all doing what we need to survive, and although angsting over it is acceptable, whipping out the pitchforks at friends imo isn't. So at least this random internet person thinks you're on the correct side of this.
More broadly, I've been sharpening my stance a bit on these tools? I figure that they're going to be developed and deployed against the interests of most people, so understanding how to leverage them best in our own favour (and to tilt the tables, if at all possible) is worthwhile. They were trained by large-scale theft of our culture, and are proceeding to profit directly off of it, while holding the world economy hostage. We should be extremely incentivized as a population to find as many ways as possible to exploit this while we can, since we're the ones who are actually paying for it in the end, whether or not we're literally subscribed to ChatGPT 😅
(I haven't done the math, but I can't fathom that one's participation in LLM usage is worse than eating beef or driving an SUV to work)

To your point, by some metrics, a prompt uses only one thousandth of the water that goes into producing a single cheeseburger. https://www.seangoedecke.com/water-impact-of-ai/
20 year old occupy wallstreet protestor me would be appalled
Kind of off topic, but there's an old-ish movie called SLC Punk! (Matthew Lillard at his best, IMHO) that your comment made me think of. At one point, the main character's father says "I didn't sell out, Son, I bought in. Keep that in mind." and as I get older the more I can relate to that.
I think that relates to the old adage that as you get older you get more conservative. Though I've always felt that at a certain point you kind of get stuck in your ways, and what was once progressive is soon seen as conservative (just by the nature of things always moving and changing - not that at some point you go "You know what, minority rights weren't as important as I thought when I was younger!" or whatever).
not that at some point you go "You know what, minority rights weren't as important as I thought when I was younger!" or whatever).
The backslide of civil rights seems to contradict this, but it definitely feels worth exploring.
I feel like the "conservative as age" adage tends to fall apart once you hit "you do you so long as it's not hurting me" levels of progressivism.
You might not remember the modern terms or etiquette, but you're generally still in support.
I think the only real truth to it is that once you have kids, you're much less likely to throw yourself into danger for their sake.
The backslide of civil rights seems to contradict this, but it definitely feels worth exploring.
I don't know if it's the case that people now are less supportive or if those that never supported it in the first place feel emboldened to be openly racist just in general. I think enough people realized "Hey, I can just be racist and there's no consequences" so they are.
I think the only real truth to it is that once you have kids, you're much less likely to throw yourself into danger for their sake.
Yes this for sure. Until people can't feed their families, then I think it swings hard the other way. But in our society as it is today, if you are comfortable enough and have enough to live on, it's hard to push to potentially make your family's situation worse.
Occupy Wall Street was a weirdly conservative movement! Lots of people, me included, got mixed up in libertarianism for a while because we felt like the corporations were using the government to funnel money to themselves, so maybe the solution was less government.
I pretty quickly figured out that the actual libertarian party makes no goddamn sense and at best would result in more money being funneled to the rich. Lots of people didn't learn though, and here we are today, a dumbass memefied conservative party.
Personally, I see a lot of conservatism with age through the lens of "achieving vs. holding on to your achievements", be they material, cultural, status or what have you.
Generally, though not universally, older people (especially well-off older people) seem to have more and more to lose (from change) and less and less to gain.
It seems to me that younger people who are very unattached and flexible also just have little to lose from experimenting, and also have more of an ambition to change the world in their image or achieve their goals, which the older generation might already have done (and thus they resist changing the world according to someone else's ideas).
Just wanted to say that it's really cool that you protested back then! I don't think that many of us are living in the future that we'd hoped, and having the courage at a young age to stick up for that is admirable!
Ah, idk, it was the cool thing to do if you were young and had no job prospects, because people had literally committed fraud to steal tax money.
I go to the No Kings protests now, but those really lack the zest of the unemployed masses that we had back then. Maybe in a few years it'll be on par.
I was just listening to a podcast about GLP-1s (so, obviously, quite unrelated to the topic at hand). Something really applies to this conversation.
Scolders.
In 2022-23, when GLP-1s were just becoming popular with wealthier folks, the scolders had a lot to say about these drugs. "So-and-so used Ozempic to lose all that weight. How terrible!" - so on and so forth. And I absolutely saw it happen in some of my professional circles and in the spaces I visited online. The podcaster's point, though, was that he intentionally avoided getting answers to his own questions and curiosities at the time, afraid of the scolders online.
He then cracked a simple quip, 'eventually I realized the scolders had moved on to their next outrage, and I could finally explore my questions without their immediate scorn'.
So, anyways, I've been thinking about that a lot in the context of LLMs. I have zero interest in being a scolder. Generally it's hard to put something back in the toothpaste tube. So, let's just figure out how to effectively use this toothpaste we've got all over the place.
Sorry, I had trouble reading the article ... I'll give it another shot in a bit, but it bothers me that the author is compressing a complex topic down to the ol' one-dimensional binary: everyone needs to stand on the pro vs. anti-AI line, and absolutely everyone's stances must be comprehensible by that metric.
It bothers me. A lot of what makes these tools problematic derives from their being used and deployed by exploitative corporations, and from the fact that the research to develop them as communities at large (instead of in gigantic, centralized, water-devouring data centres) is by comparison underdeveloped. Since the force driving this development is not curiosity, or a genuine need to reduce human suffering, but is instead a worldwide grift of hitherto unseen proportions (see: the S&P 7 discourse), there is absolutely zero incentive to minimize harm and produce long-term sustainable outcomes, and absolutely all of the incentive to lie, cheat, and steal every penny from pension funds to private wallets to facilitate tearing the copper out of the walls of society.
I'm pro-AI done completely differently than it is today, and hopefully how it will eventually be done. Even that sentiment feels like too much to convey in IRL conversations, so I tend to just shut up and let louder voices do the talking.
(edit) Wowow OK made it through and my criticisms still hold. I also have a new criticism!
Crypto is a good reminder: plenty of projects looked every bit as exciting as coding agents do now, and still collapsed when the economics no longer worked.
No they didn't! If this person was at the centre of that technology, they clearly didn't spend enough time understanding how it worked. To me, as a very early enthusiast in the field (like, paying twenty bitcoin for a pizza early), it was crystal clear even before the boom that proof of work was never going to cut it, due to the abysmally slow transaction throughput in practice. It was an elegant, probabilistic solution for distributed consensus under antagonistic conditions, but it needed serious work.
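The throughput ceiling they're alluding to falls out of simple arithmetic. A minimal sketch, using Bitcoin's well-known block parameters and an assumed ~500-byte average transaction size:

```python
# Back-of-the-envelope Bitcoin throughput under proof of work.
block_size_bytes = 1_000_000   # the classic 1 MB block size limit
block_interval_s = 600         # target: one block every ~10 minutes
avg_tx_bytes = 500             # rough average transaction size (assumption)

tx_per_block = block_size_bytes // avg_tx_bytes
tx_per_second = tx_per_block / block_interval_s
print(round(tx_per_second, 1))  # prints 3.3
```

A handful of transactions per second, network-wide, versus the thousands per second a card network handles: that's the gap that was visible before the boom.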
(also no, I was too stupid and poor to mine any of them way back when, so it's not like I'm secretly rich or anything; just visibly dumb XD)
(edit 2: nearly had a heart attack when I saw that this was by the founder of Earendil, because I briefly confused it with Anduril XD ah, time for afternoon coffee)
Blockchain is, was, and always has been, a solution looking for a problem.
There are multiple flowcharts with like 12 end nodes for answering the question "Do I need blockchain".
ONE of the end nodes is "yes, you need blockchain". It requires very specific conditions to be sensible, for everything else it's either useless or actively dangerous/misleading.
The problem being solved seemed fairly obvious: a truly secure, globally accepted, decentralized currency. Given the kerfuffle with companies like Mastercard and Visa, the use case is very clear if you ever ended up on the wrong end of them.
But reality, as always, is messy. And the road to hell is paved with good intentions. So I turned against Bitcoin for the same reason I do AI: it became a tool used for grifts rather than genuine progress, and the barrel became rotten in the process.
I still have some Bitcoin, mostly because I was greedy and didn't exit when it hit 100k€ :D
I'll sell the next time it gets there, I promise.
(And of course I'm one of the people who had like 2BTC mined with my PC at home and sold them when 100€ for a BTC was a crazy amount for digital currency.)
This is conceptually very related to another thing I learned recently: discussing a topic in good faith, with the goal of arriving at a new, better understanding, does not mean giving all sides the same time and consideration. Sounds counter-intuitive, but it's entirely true. Say you're discussing eugenics. One side says "we shouldn't kill people to optimise for a defined 'ideal' in society", the other one says "we should kill people to optimise for a defined 'ideal' in society". Both of these points are logically valid and grammatically correct English, yet we do not need to consider one side because it's patently absurd and incorrect by most systems of value. This is the same way. To discuss AI, you need to grok AI, and that means being capable of loving it just as much as being capable of hating it. I don't see the loud anti-AI camp being charitable on that side.
yet we do not need to consider one side because it's patently absurd and incorrect by most systems of value
If you presented these to someone a couple hundred years ago, they would disagree with you about which option is patently absurd and incorrect. And plenty of people up to significantly more recently would consider me being gay and trans to be patently absurd and incorrect by most systems of value. The reason we shouldn't give eugenics equal weight in discussion is because it is violent and inhumane, and it relies on outdated ideas about human biology. But declaring it "patently absurd" just isn't enough -- people in the past did have to argue against it using real evidence, because the consensus on what's right and what's patently absurd changes over time. And discussion of minority views is part of how you effect changes in what people perceive as right and wrong like this.
I don't think eugenics is a particularly good comparison to AI in almost any respect, fwiw, including how you use it here.
I don't think eugenics is a particularly good comparison to AI in almost any respect, fwiw, including how you use it here.
I don't see how "is it okay to bypass centuries of copyright in order to produce content even faster" is anything but patently absurd. It would be absurd in the 1950's, 1980's, and the 2010's. "To make us make stuff faster" was the primary justification for all those decades.
If we could regulate to make such models accountable for copyright, that'd take away a good 90% of my reasons to be anti-AI
There are plenty of people who don't believe that copyright should exist at all because it stifles creativity and is principally used by large corporations to consolidate ownership of IP in ways that harm independent creators. This is a relatively extreme position but not necessarily an uncommon one, and there are a very large number of weaker positions that favor weaker copyright protections than currently exist that are even more common. Whether you or I personally believe this or not, it's an opinion held widely enough that even if we take as a given that it's true that AI is "bypassing centuries of copyright" under current copyright law (which is very much not a settled legal question in any jurisdiction fwiw), it is not necessarily patently absurd to believe that bypassing centuries of copyright is a good thing or at least worth it in light of what is gained in exchange (which, even under the most skeptical realistic analysis of the capabilities of modern generative AI, is definitely more than "producing content faster.")
However, this is all beside the point. I think you miss the actual message I was trying to get across in my previous comment. My previous comment is not remotely arguing in favor of AI whatsoever. What I'm pointing out is that simply declaring something to be "patently absurd" is not a remotely sensible way to decide what is and isn't worth discussing because the definition of "patently absurd" is dependent both on personal opinion and social mores, and if we were to use that as a basis for what is considered acceptable discussion, plenty of things that are very much good and worth discussing nowadays would have never been allowed to be discussed. There must be a better framework for deciding whether something merits discussion that goes deeper than just declaring it patently absurd.
Also, if copyright constitutes 90% of your issues with AI, it should be extremely obvious why I don't think that eugenics is a good analogue for it in this context (or most contexts, tbqh). I also think, frankly, that you don't have a good understanding of the real risks and dangers of AI if copyright is indeed such a large portion of your issue with it.
because it stifles creativity and is principally used by large corporations to consolidate ownership of IP in ways that harm independent creators.
Yes, I've been around communities like that. I think it's horribly myopic for many reasons. To name a few:
When copyright isn't a thing, the biggest producers become the biggest winners, with no obligation to compensate the creator. That's why copyright was created in the first place.
You need some sort of scarcity to create value in anything. This is built-in with physical goods. For creative works, we already see the results of what happens when a store is filled with slop; even quality works get drowned out. Now imagine that on a single brand. The idea of Mickey Mouse ceases to be anything meaningful when everyone and everything makes use of it.
I can see why a tech oriented person would support the destruction of copyright. They believe the best tools will float to the top and then be iterated on the fastest. But there's no objective measure to define "quality" in art. A complete destruction of copyright would destroy the livelihood of artists, and much of culture as a result.
it is not necessarily patently absurd to believe that bypassing centuries of copyright is a good thing or at least worth it in light of what is gained in exchange
I don't really subscribe to the populist fallacy much. There was a time when slavery was a questionable net gain for society, and even as recently as 2024 in the US we saw where "populism" got us. It's a metric to consider, but it is not the ethical end-all be-all.
Humans in general have a tendency to overreact whenever some bad aspect of a concept is discovered; I recognize there are countless examples of abusing copyright past the letter of the law and lobbying to stretch the definition past the spirit of the law, but that doesn't mean the spirit is wrong. It means we need a stronger review of the spirit, and an evaluation of whether the letter still works for it.
What I'm pointing out is that simply declaring something to be "patently absurd" is not a remotely sensible way to decide what is and isn't worth discussing because the definition of "patently absurd" is dependent both on personal opinion and social mores
As it always will be. But at the same time lines will need to be drawn. And if we don't push back, those lines will be drawn for you. I'm sure you're experiencing this now with the current trans "discourse" out there.
We can certainly argue all day about what lines are indefensible, but ultimately we still need to set our own lines. I'm still of the belief of overhauling copyright over repealing it, so it'll be hard to convince me that destroying it without some sort of replacement is valid.
if copyright constitutes 90% of your issues with AI, it should be extremely obvious why I don't think that eugenics is a good analogue for it in this context (or most contexts, tbqh).
Sure, I didn't make the argument. Eugenics brings about direct harm to the person, society, and even biology as a whole. Bypassing copyright brings about a cultural harm. There are different levels to it.
I also think, frankly, that you don't have a good understanding of the real risks and dangers of AI if copyright is indeed such a large portion of your issue with it.
I do. I just accept that humanity will do a lot of stupid stuff, and that US culture especially has commonly defended against regulating harmful aspects in the name of personal freedom.
Copyright is my 90% because, by all cultural and legal accounts, AI can't really be defended as such. If nothing else, all courts consistently ruled that AI created works cannot be copyrighted, so that was never an issue to begin with.
If we control that and the country still wants to burn a hole in the ozone layer, replace all labor with robots, and mass manufacture propaganda... well, that's "freedom" at work I suppose. I'm sure at some point in such a descent I'd simply leave the country if society is unable to stop that.
But I've spent my life being shown and told "you aren't entitled to a job" and "money is speech". If enough people believe that, I'm not going to keep futilely pushing back on such a society.
When copyright isn't a thing, the biggest producers became the biggest winners, with no obligation to compensate the creator. That's why copyright was created to begin with
Copyright was originally created as a quiet form of government censorship and control. The right to copy a document required registration and approval from the government. This was done to limit and control the free spread of information.
It also had impacts on who profited from a creative work, but that's not why it was originally created.
That's fair. It evolved some 50 years after that and made protections for those that specifically didn't have the means nor knowledge for such copyright. That's why copyright for the last 150 years or so is default.
But at the same time lines will need to be drawn. And if we don't push back, those lines will be drawn for you.
I agree with this, though I suspect we'd disagree on exactly where that line is. The point of my earlier comment was to criticize the purported basis for drawing the line in a certain place -- I think it needs to be better justified through well thought-out arguments than with truisms like "patently absurd." I agree with the earlier commenter, for instance, that a site like this should shut down discussion that contains advocacy for eugenics, but I don't think that "it's patently absurd" is the reason why or a good justification to give when arguing for doing so.
We disagree on a number of matters when it comes to copyright's effects, but I'm fine agreeing to disagree on that front. I wasn't really interested in arguing about AI in this thread, just in criticizing the points made about shutting down discussion of certain topics more generally, because I think there's a real danger in the thought-terminating nature of "patently absurd" as a category/descriptor when it comes to ideas.
If nothing else, all courts consistently ruled that AI created works cannot be copyrighted, so that was never an issue to begin with.
this is going off on a tangent bc I don't really want to actually argue about copyright rn, but fwiw, afaik this has been the ruling in one case, and it was a weaker decision than you present it here. I do personally think there's still a solid argument that strictly AI generated elements of a work are uncopyrightable under US copyright law, but it hasn't really been litigated yet whether prompt engineering counts as the tiniest modicum of human creativity as is necessary for copyright, which afaik is the sticking point when it comes to whether AI-generated work is copyrightable. The existing case that you're probably referencing involved the human involved listing the AI itself as author on the copyright registration, which made the decision a lot more obvious but not strong enough to establish that all AI-generated images are surely uncopyrightable. I'm sure that will be litigated sooner rather than later, but the courts move slowly. Even if all AI generated work was found to be wholly uncopyrightable by the courts, which has not yet happened, existing case law indicates that elements of the finished work that did involve any modicum of human creativity would be copyrightable. For instance, if a human arranges images in a certain way, the courts have already found that the arrangement can be copyrighted even if the individual images therein are found to be uncopyrightable. This is, in fact, exactly what happened in the decision you're probably referencing.
I think whether AI-generated work is copyrightable is pretty unrelated to the types of moral questions you're discussing anyway though, even as it pertains to copyright. Even the ways in which AI training may violate copyright (another thing that's both up in the air legally and will probably involve a lot of fine-grained case-by-case distinctions once there is more case law on it) for instance, are pretty much entirely unrelated to whether AI-generated work is copyrightable.
I think it needs to be better justified through well thought-out arguments than with truisms like "patently absurd."
I can spend all day explaining my take, but at some point it's futile. If the statement "claiming credit for other people's IP is wrong, and profiting off of someone's IP without permission is very wrong" doesn't resonate with you, then there's not much to discuss on the issue. If our viewpoints are that divergent, then there's no point trying to find nuances. Drawing those hard lines is part of understanding your identity and viewpoints of the world.
because I think there's a real danger in the thought-terminating nature of "patently absurd" as a category/descriptor when it comes to ideas.
I like to think I'm flexible on most topics. I'm still open to the idea of AI under proper regulations. But that's why I highlighted copyright as 90% of my objection. Taking someone else's work to profit off of, without so much as a credit, makes me unable to accept AI in its current iteration, because it hits one of those "patently absurd" use cases. I don't use that term often nor lightly. It's based on a lot of first- and second-hand experiences of people stifled under such abuse. We're supposedly a "best ideas prevail" society, but great ideas get quashed under whoever has the best litigation? I can't accept that.
I've seen many arguments as well as deep dived on how the tech underneath works (I did engage in quite a bit of Reinforcement learning pre-GPT, so I'm not new to this tech), and I'm still not convinced this isn't outright theft of copyright. But I'm not a lawyer, so all I can do is see how current high profile litigation goes to see if the country's court systems agree with me.
this has been the ruling in one case, and it was a weaker decision than you present it here.
The SCOTUS upholding a ruling can't make it much clearer, can it?
There are at least 2 other rulings from earlier too. I don't see other angles here unless we start arguing "I made 90% of this, but AI generated a few details". I guess those can go to court, but it's pretty consistent that typing a few prompts and trying to claim copyright on the results is not enough.
I think whether AI-generated work is copyrightable is pretty unrelated to the types of moral questions you're discussing anyway though, even as it pertains to copyright
I agree, and I personally don't care if someone copyrights an AI work as a concept. As long as it's not overwriting someone else's copyright. I just point to that as one of the more consistent rulings early on that puts some reservations on how companies function. And it will probably be used in current litigation, so while it's a different case, they are at least tangentially related.
You seem to continue to be under the misapprehension that I want to argue about AI with you, despite my repeatedly reiterating that I do not want to do so. I can't tell if you're willfully refusing to understand what my initial comment was actually about but there are only so many ways I can reword the sentiment that "just saying something is 'patently absurd' is a bad justification for shutting down discussion about it because it fails to distinguish things that are actually harmful to discuss from things that are unjustifiably societally taboo". Your arguments about AI and copyright are a non sequitur. Please stop trying to argue with me about it.
As for the copyright decisions, SCOTUS declining to hear a case is not the same thing as them upholding a ruling, and whether the Supreme Court heard the case or not does not contradict any of the facts I gave about the limitations in scope of the decision in the case in question. My point was simply that the decisions that exist on this question are more limited in scope than what you're claiming, and you describe it in absolute terms that are broader than what has been established in the very limited amount of case law on the subject we currently have. More case law will absolutely build up over time and it's perfectly likely that they will continue in the vein of not finding AI generated material to be copyrightable, I don't think that's an unlikely decision under US copyright law whatsoever. But when I addressed this in your original comment, I was merely trying to correct what seemed to me to be a factual misstatement, because framing this as having been very firmly and broadly decided already doesn't reflect the current state of things legally. But this has reached the limit of what I want to discuss on the matter. As I said before, I have not been remotely interested anywhere in this particular thread in debating AI, as it's unrelated to and distracts from the point I actually was trying to make originally. We also have very different views on IP law based on your preceding comments, so a discussion of AI and copyright would no doubt be stifled by our differing perspectives on copyright as a legal tool, on which I suspect we would need to agree to disagree anyway.
Can you elaborate on how this relates to the preceding portion of your comment?
To discuss AI, you need to grok AI, and that means to be capable of loving it just as much as being capable of hating it. I don't see the loud anti-AI camp being charitable on that side.
Can you elaborate on how this relates to the preceding portion of your comment?
The conversation at hand is "is AI good or bad?", so answering or at least discussing that question requires that you examine AI first. If your position - like the negative side of the argument here - is that AI is so inherently bad we shouldn't even give it the time of day, then you're not interested in having the conversation in a nuanced way to begin with, and therefore we can discard that position in favour of the ones that are actually interested in a discussion about the topic.
To put it simply, if your goal is to have a conversation (is AI good?), then the available options are "Yes", "Maybe", and "No" and variations thereof. "I'm not even going to discuss the question because I believe it's not worth it" is not a valid answer to that question.
Yes, Maybe, No, Potentially, It Depends, Signs point to yes, these are all answers, and getting to whichever one of them will mean having reached a new quantum of understanding. "We will never know because I'm not going to discuss it" does not add any new insight.
It's quite simple: AI as a tool is pretty neat with varying degrees of usefulness. The ethics of AI are a very different story. As such, you don't really need to know anything beyond its externalities to pass judgement.
The real pro/anti boils down to how the tools will be used (primarily to further exploit workers), the actual gains (a lot of confirmation bias), the psychological harms of ascribing personality to a statistical engine (and thus its usage as a therapist/partner), the lies about its economic feasibility, and if this is worth the substantial environmental damage this is causing in a time where it would be better to reduce energy consumption instead of increasing it.
Most AI is powered by natural gas. Natural gas emits substantial greenhouse gasses, through every stage of its extraction, transport, and combustion. It is only "green" in the sense that it's only about half as bad as coal.
By almost any standard, if you consider climate change or worker exploitation a problem, using AI is immoral.
This seems biased to me, and reinforces the original commenter's point. If you are just going to paint it in an evil light, then why would anyone want to include you in a real discussion on the topic?
You couldn't concede a single positive while responding to someone who is trying to ensure people are approaching the topic in good faith.
Because painting it an evil light is, based upon substantial research, the accurate portrayal of the current state of affairs once you've stripped away the hype. A hypothetical future AI that does not have these problems does not change the current evilness.
I did concede a positive: It is a tool of varying degree of usefulness. It has a lot of potential. It does not negate the vast list of externalities that mostly get handwaved away by the boosters.
In other posts, I've actually mentioned how I'm surprisingly impressed with Claude's capabilities. Does not change that it feels a bit like selling your soul when you use it to do what could be a 30 second web research for 1/100th the energy.
However, it is also completely unrealistic for every single person to always rehash their nuance in every argument. If AI boosters/makers provided actual answers to these biggest problems that keep coming up, the conversation would change... much like how we don't debate the merits of leaded gasoline or paint anymore.
If you are just going to paint it in an evil light, then why would anyone want to include you in a real discussion on the topic?
I guess that's how corporations excluded climate change from the discussion for decades.
You couldn't concede a single positive while responding to someone who is trying to ensure people are approaching the topic in good faith.
I'll give one: AI is a catalyst that accelerates the status quo.
That could be a good thing. But I think many would agree that between the POTUS casting himself as Jesus Christ, the current inefficiencies in trying to force it into the workplace, and the exploding stock market over unrealized gains, the status quo is very much not accelerating in the right direction. I can see a lot of AI being useful in theory, but the current realities are extremely disappointing.
I was stating my agreement with the poster who more or less said the hardliners on one side wouldn't engage in actual discussion because they believe too strongly against it. Just like the person whose convictions will never concede that abortion is anything but literal murder isn't really worth including in a nuanced discussion on the topic.
I can say there are positives and negatives about AI, but I'm not going to say it's inherently perfect or evil.
Yes, at some point there's not much left to discuss about a topic, and instead you simply need to take action. This discourse has gone on for at least 3 years, so I don't think there's much anyone can say (nor even research) that will move the dial. There's too much at stake on either side of the aisle for that.
but I'm not going to say its inherently perfect or evil.
" There is nothing either good or bad, but thinking makes it so" - Shakespeare
The people "thinking" have made their intentions loud and clear, sadly.
Not to invoke the ol' whataboutism, but there are plenty of technologies that are much more grey and do not in the slightest gather that much ire from the general public, so I have a hard time taking this point seriously. What about AI makes it so much worse than, say, cryptocurrency? Or the global stock market? Or nuclear power?
Cryptocurrency: Almost every single use case for cryptocurrency falls apart once a single party involved can be trusted. Finding the edge cases where it has been a positive always sounds a lot like "At least Mussolini got the trains to run on time."
Stock markets, in their current form, bear no resemblance to the economic activity that is supposedly based upon them, with about a 5 to 1 ratio of 0-sum gambling to actual economic funding.
Nuclear power is the best green energy source we have, and while it is difficult to implement it is the best chance we have of taking the base load off fossil fuels, because the amount of battery storage and grid improvements that would be required for universal wind/solar is currently not feasible.
What about AI makes it so much worse than, say, cryptocurrency? Or the global stock market? Or nuclear power?
It's all about scale, magnitudes, and regulations for evils. We know enough about energy that any new energy would be highly regulated, so I'm less worried about nuclear energy being used wrong.
AI, on the other end of the scale, has been actively trying to pre-emptively remove any regulation around it. I don't see how this can ever be a good thing for an emerging technology. And it's hitting at a much bigger scale than cryptocurrency ever did. Not to mention the fact that we aren't prepared for a positive outcome of proper AI usage (e.g. UBI initiatives).
AI isn't absolutely worse than any of those, but it does present the most danger.
I don't really wanna get into it, but I did mean "thin the population by removing those with undesirable traits so they won't reproduce" which unless I'm completely mistaken is eugenics
I associate eugenics with things like forced sterilization programs and other restrictions of reproductive freedom, which are abhorrent, but actually killing people is a step beyond that.
but actually killing people is a step beyond that.
Once you concede that, although it's bad to sterilize an entire culture, it's worse to just kill them, then that permits a man proposing the former to seem a level-headed pragmatist by comparison to those raving lunatics proposing the latter. It's the civilized solution, even! Kill the Indian, to save the man.
I freely admit that this is a slippery slope "fallacy", but because we've seen it happen over, and over, and over, and over again, the actually pragmatic approach for people who would like to see the oppression and death stop is to prohibit that foot in the door to begin with.
I think if you ignore all distinctions, you might end up claiming that Margaret Sanger (the founder of Planned Parenthood, who apparently did publish pro-eugenics articles) was some kind of racist Nazi.
So, I'm going to stick with making distinctions. I think it's important to try not to misrepresent other people's beliefs.
The distinction you make is not a real one. Eugenics encompasses both the methods you describe and actual killing, since it describes the goal of "improving" the population by removing undesirables. In practice, these "lesser" approaches you describe are incredibly frequently accompanied by overt killing anyway. The Nazis were undoubtedly eugenicists, ofc, and their programs took huge influence from contemporary US eugenics. Attempting to draw a distinction here seems to amount to white-washing US eugenics solely because the Germans arguably beat them in terms of the scale of implementation.
It's also worth noting that the methods you describe also uncontroversially constitute genocide (or at least attempted genocide) by the accepted definitions.
No worries, I don't aim to convince people of anything, least of all in a wee internet forum 😅
^ TIL about Planned Parenthood! But I try not to hold founders against their institutions in general, so that doesn't change much for me, at least -- moreso it's their actions after the fact.
Once you concede that, although it's bad to sterilize an entire culture, it's worse to just kill them, then that permits a man proposing the former to seem a level headed pragmatist by comparison to those raving lunatics proposing the latter. It's the civilized solution, even! Kill the indian, to save the man.
Can't it also be argued that you're acknowledging one is worse than the other, but on some level you don't want to admit it, so you're removing any distinction? To some that would feel disingenuous and cause them to feel you're lying to cover something up.
I don't personally believe that, but I think there's a slippery slope whichever side you go down. Purposefully misrepresenting something because you don't want to give the appearance that one is worse than the other gives people cause to distrust you. The way you're approaching it may be the better way overall, I just don't think it should be seen as one without its own set of consequences.
Can't it also be argued that you're acknowledging one is worse than the other, but on some level you don't want to admit it, so you're removing any distinction which to some would feel disingenuous and cause them to feel you're lying to cover something up.
Nope! Because people are free to have discussions about genocide and eugenics in private, invite-only groups. The Canadian Criminal Code notes that we limit freedom of expression in public spaces when advocating for hatred (e.g. eugenics, genocide). If a curious soul wants to have a more nuanced understanding of this, then they are free to do so in controlled environments; out here in public, we wash our hands of the discussion knowing that it's all horrendous, vile trash, where shades of grey are unwelcome (note: there are precious few cases where I have this hardline stance; this is one of them).
Pushing a smidge beyond my pay grade, but it feels like discussions debating the subtle distinctions between different methods of cultural annihilation are a form of infohazard, and we should treat them like we do virulent plagues, hazardous and persistent toxins, etc. It's fine if you're adequately protected, but the average person should absolutely not be legally permitted to manipulate the social equivalent of ebola outside of an extremely controlled environment.
The way you're approaching it may be the better way overall, I just don't think it should be seen as one without its own set of consequences.
Agreed that there are consequences, but I think they're mostly on the side of not enforcing this strongly enough. My views are somewhat in line with that of the Canadian government's, and even it still occasionally applauds literal Nazis in parliament, or argues towards stripping away First Nations' rights.
Nope! Because people are free to have discussions about genocide and eugenics in private, invite-only groups.
Right, and in those private groups, where they become an echo chamber, they're going to point to how you've pushed them to discuss it in private and that you're the real enemy and there will be a nugget of truth behind their attacks on you, that you purposefully misrepresent your position and lie (because of a fear of a slippery slope, but that part won't be accounted for), and it won't just be you in particular it's directed at, it will be anyone on 'your side', it will be whatever your perceived side will be.
On the one hand I could take a very cynical or even nihilistic view and say that no matter what, we lose, but I don't know that I believe that either. Whether or not you clamp down to suppress the expression of ideas, it almost feels like an inevitable tide. But I also wonder if both approaches have worked at different times and in different circumstances: cases where suppression only made things worse and fighting with truth worked better, and others where truth didn't prevail and suppression worked better.
Right, and in those private groups, where they become an echo chamber [...]
I was moreso thinking humanities courses at universities, or at museums covering historical atrocities, than like Nazi bars or terrorist meet-ups. Places where actual experts can chime in and explain how being very polite and extending the benefit of the doubt to ideas which involve extermination has turned out, extremely badly, extremely often. As I noted, it's OK to explore these ideas in very controlled, safe environments; doing so in public can rapidly become hazardous.
I was moreso thinking humanities courses at universities, or at museums covering historical atrocities, than like Nazi bars or terrorist meet-ups.
While I would agree those would be ideal places, that seems a bit more like preaching to the choir. What percentage of people are actually exposed to those situations at any significant level? You're not really covering that many people in that case. That could also be taken as "well you're just not smart enough or don't have enough money to be involved in this discussion" considering the resources required to travel for leisure or go to university.
It's a pretty strange ethical stance to equate killing people with preventing future births. Certainly, the people being killed would care about the distinction.
If somebody advocates to neuter all the Jews so their bloodline ends....that's just delayed genocide. It will take 80ish years, but it would have succeeded where Hitler failed.
It's all cut from the same cloth of racial supremacy. It's like debating whether chopping off a hand or putting a tourniquet around a hand is the best way to remove hands, because they are unclean.
This is an overly reductive argument that elides what we're really discussing when we talk about AI. Is it LLMs trained by opaque private entities? Are we referring to machine learning models specialized for understanding molecular interactions, automating robotic movements, target identification in satellite images, etc.? Generative transformers for image synthesis?
Is "good" just utility, or does it take into account, as /u/vord said, all the training corpus theft, environmental, economic, social, and other externalities? And the grotesque politics [paywall] for U.S. users?
I'm in that uncomfortable middle - I've used a couple of LLMs both out of curiosity and because they've been heavily promoted for work. I'm still in the stage where it's usually a coin flip as to whether my productivity will be improved or hindered, but I've had some cases where the results were stellar and well beyond anything I could do unaided. I've also seen some corporate (LLM) and medical AI (image processing) misuse that was actively detrimental.
Given the resources, I'd much prefer to run and train a smaller specialized model for the tasks that matter in my work, instead of stumbling across the gaps and unpredictable costs of genericized private models.
I'd like to see public open source models, run on public infrastructure, using transparently sourced public data, used to achieve public goods - transit and budget planning, public health, social services connectors, resource utilization monitoring, decision support for small communities, and so on.
I can see paths where we might eventually get to unalloyed benefits from AI. But they're not going to arrive through "lines of code go up". There are massive inefficiencies and risks in dumping the entire Internet and every digitized medium into training and trying to mount guardrails after the fact. Centralized model control in vast data centers, with profits going to a tiny minority, creates far too much environmental, economic, and social damage to proceed without regulation.
Put me in the "Potentially" and "It Depends" camps that /u/Delphi mentioned, but not with respect to whatever OpenAI is doing this minute. And color me massively skeptical about "AGI".
The way I think about it is that you can’t get an informed view without either doing an investigation yourself or relying on someone else to do the investigation. That takes effort. A good question to ask is who did the homework and what did they actually do?
Nobody should be required to investigate things themselves to have an opinion, but you should be careful what your sources are and understand their limitations. For example, after an airplane crash, the best source is going to be the formal investigation, which will take months.
Scientific studies and investigative reporting are other good sources. Writers can also publish more informal investigations on their blogs.
All these sources have to be handled with care because writers do have their biases. They might not be committed to being curious to learn more about the subject and reporting whatever they find even if it doesn’t support a favored position, which I feel goes a long way to offset biases.
What I keep noticing is that a lot of the criticism directed at crystal meth is perfectly legitimate, but it often comes from people without a meaningful amount of direct experience with it. They are not necessarily wrong. In fact, many of them cite studies, polls and all kinds of sources that they themselves spent time investigating and surveying. And quite legitimately they identified real issues: your dental outcomes can be bad, the malnutrition implications are scary, the hallucinations of insects are strange and potentially result in sores from over-scratching, there is a liver and kidney impact, the long-term nervous system consequences are unclear, and the hype is exhausting.
To carry on with this analogy, I've personally encountered people who have pointed at that same evidence to insist that everyone who takes stimulant medication for ADHD is a degenerate addict and...
To carry on with this analogy, I've personally encountered people who have pointed at that same evidence to insist that everyone who takes stimulant medication for ADHD is a degenerate addict and that it should be banned. It's possible for something to be genuinely harmful and dangerous and for people to wildly overreact and fear-monger to an extent that is itself harmful and dangerous. The two aren't mutually exclusive.
Interesting, but I think the author isn't carving out enough space for people like myself. For example:
This matters because from the perspective of the outright rejecter, all of these people can look the same. If someone spent serious time with coding agents, found them useful in some areas, harmful in others, and came away with a nuanced view, they may still be thrown into the same bucket as the person who thinks agents can do no wrong.
The author seems to miss the idea of someone who pushed through the first failure and honeymoon period, but concluded that they reject the technology, and thus stepped away from it. I (and some of my favorite former and present coworkers) did exactly that. I gave LLMs a few months. I didn't like the results.
But now all I hear is accusations that I "never gave them a chance" or folks blowing smoke about how "the latest models are so much better." Maybe. But they don't address any of my issues with the underlying technology.
Funnily enough, back in high school I did the exact same thing (maybe with more memes) with cryptocurrency. It was neat! And then I decided it wasn't useful and didn't need space in my life.
To have an informed opinion you absolutely need some level of practical experience, sure (though this doesn't apply to everything: I don't need to drive one to know that I don't need a truck or large SUV, and I don't need to try adderall to know that it would probably make me much more focused). But once I've come to a conclusion, I don't need to keep immersing myself. Once I know the water is cold, I'm not going to jump fully back in just because someone tells me it's warmer now. I'll dip a toe in from time to time, and have a chuckle to myself when it was, indeed, an enthusiast trumping up a 1 degree difference as a "game changer". If it's balmy, I'll be more than happy to jump right in. But I do not (and simply don't have time to) explain every. single. time. my full stance. It is simply too tiring. Let me read my book by the water's edge in peace!
One of the major mainstream criticisms I hold is the very abstract ones around the environmental, social implications and disgust around the hype even with these negative impacts known.
And if you take those concerns as a moral issue like I do - it feels wrong to use AI agents/LLMs at all. To support and use a tool that is essentially accelerating the climate crisis as an environmentalist would be totally unprincipled.
So even if abstract, I don't think it's necessarily missing anything in its criticism. Unless data centers and chips are somehow produced with 100% clean energy and materials, the actual performance of AI and the experience of using it are irrelevant.
So very similar to vegetarianism. But I don't see many people complaining about how those damned vegetarians just don't have enough experience with meat to know better than their initial biased position.
Honestly, is it even possible to find a job as a developer where you're not asked to use AI? If there are any, I imagine they will go the same way as open source jobs: few and far between.
AI is a tool and hiring someone who's vehemently anti-AI in 2026 is a risk for any company.
And because it's a new shiny tool, people are going around poking it into places it has no business going. People will get hurt - and actually are, as some companies are using AI as an excuse to lay people off.
But that doesn't mean there aren't good uses for "AI", which is a spectrum. For the vast majority of regular people, AI = chatting with ChatGPT, using it as a search engine, therapist, or proofreader, and generating silly images of themselves.
That's nowhere near the actual good uses for language models. Those uses are mostly invisible to people, who have been using "AI" waaay before ChatGPT was a thing.
I blame Sam Altman. It is reasonable to hate AI because he's the face of it to everyone who is not a programmer or researcher. And he's, at best, an asshole.
I blame Sam Altman. It is reasonable to hate AI because he's the face of it to everyone who is not a programmer or researcher.
Huh, that's a good point.
Maybe development of highly secure code that can't be leaked to AI companies in any way, using strictly local models? Or maybe applications orthogonal to AI systems, like systems that aim to verify, broker, or influence AI in some way?
My issue is not that AI is forced onto us; my issue is that the higher-ups anticipate 2x, 3x, 5x, 10x performance improvements from using AI without any loss of quality.
As a developer, I've been an LLM opponent since it began to become mainstream, for all the usual reasons.
Then I've been heavily nudged to use it at work, and I have actually used it. I conversed with colleagues who are much more enthusiastic than me, and others who are more cautious. I learned (and am still learning, as they're evolving way too fast) LLMs' strengths and weaknesses, and while my views are mostly unchanged, I now know what they can (and cannot) do, and I've experienced the effects on my work. I think I'm dead in the middle of the biased center the article describes.
Absolutely, and that's terrible. I'm active in a few communities where people are strongly anti-AI, and I no longer engage in such discussions because I'm afraid to be seen as "one of them", a traitor developer encouraging intellectual theft and ecocide for convenience. And quite honestly, that's also how I see myself.
Every developer I've heard from online before LLMs was staunchly against IP. I only ever heard calls to eliminate software patents, complaints about proprietary licensing, etc.
It's a little more nuanced than that, actually. Licensing doesn't have to be proprietary, especially in the open source world.
One of these licenses, the GNU General Public License (GPL) basically says "do whatever you want with this project, as long as you contribute back and also open source your own fork". I'm simplifying, but that's the gist. The deal is I give you my work for free, but if you're making something with it, you're obligated to share your own work with the same license. That effectively makes monetization really hard, and enterprises usually refuse to touch GPL code because it's a legal minefield.
However, LLM producers proudly and loudly don't give a damn about copyrights and licenses. They stole our open source work to feed their beast and make tons of money from it. Hence the intellectual theft. It's not about "my" work or the perceived stolen monetary value; it's about a broken legal and social contract. It's stealing the collective intellectual work of everyone, not only developers, and privatizing that immense sum of knowledge to make a profit from it.
This always irks me, because the clause is that you're sharing the modified work with your downstream customers. Not upstream to the project, not the general public. Just the people you are selling to.
There is nothing breaking the GPL by keeping the source in a private repo that you give your customers access to. Heck, if you look at your car dash's 'about' button you can find a download, email, or address to get a CD of the relevant sources.
You can't stop your customers from then republishing or upstreaming your changes, but hey, that's one less thing you have to maintain.
Honestly, the more I've understood of how the GPL works, the dumber it has seemed.
It essentially hinges on copyright law. This is stupid. Copyright law varies wildly between different countries and many of its ramifications aren't clear within a single jurisdiction. For example, is using a piece of third party software in your own software a copyright violation? The FSF (who write the GPL) say yes. The EU says no. There is no clear answer from the US, IIRC. The whole copyright thing is fraught with ambiguity and open source licenses are stuck in the middle of that.
But there is an easier option: EULAs. These do have a clearer legal backing, and you can be much more explicit about what you put in them. You can say, for example, that changes to source code must be released back to the original licensor if passed on to an end user. You can clarify much more precisely what you want to allow. The LLM situation would be much easier to resolve, because you could explicitly forbid LLM ingestion in your license (as opposed to right now, where it looks like training an LLM generally counts as fair use and therefore there's nothing a copyright-based license can do about it).
There's a different version of history where the FSF weren't quite so smug and didn't try to be so clever about their solution, and I'm not saying that version of history would have been better (I don't know what all the ramifications would be), but I do think free software would have had a much stronger legal basis in some of these discussions.
You have that backwards, I'm afraid. EULAs are almost universally founded upon bullshit, trying to disrupt consumer rights by straddling the legal underpinnings of goods and services whenever it is convenient for them. Every country (and state in the USA) has completely different standards for what is actually enforceable. That's why so many clauses have things like "you have no rights, unless you live in one of these three places that explicitly say you do."
Partly because EULAs are completely one-sided in a way typical contracts are not.
By contrast, copyright law exists more or less everywhere that distinction matters, with fairly consistent enforcement, and courts have repeatedly held that the GPL is enforceable. Yes, even under EU copyright.
Because copyright law is about the rights-holder having exclusive control over the permission to copy their IP. Don't follow the rules of the GPL? You lose that granted permission.
It is a clever legal hack that even the likes of Cisco and Samsung could not defeat.
Copyright as a concept is generally fairly well defined. Copyright for software is not. The GPL exploits this ambiguity, and basically makes a bunch of claims about what is and isn't copyright infringement, regardless of what the actual law says. For a while this worked out okay, but as copyright for software is slowly being cleaned up, many of those claims aren't actually true any more.
For example, the GPL claims that a derivative work includes any work that links against the original work. (The GPL also makes this more complicated by exclusively using language that only makes sense in the context of C, but the FSF certainly intend this clause to cover any case of a library being used by another program.) This is the whole virality concept: if you use a GPL work, you now need to comply with the GPL for all of your code as well as the original code. This is probably the most famous feature of the GPL specifically.
Except it turns out this is completely unenforceable in the EU, because that's not considered a derivative work in the EU. The GPL doesn't get to decide what a derivative work is, copyright law does. And Directive EC 2009/24 recitals 10 & 15 specifically state that you can freely call or reference other code without creating a derivative work.
Generally, cases where the GPL has been successfully contested in the EU have either happened before this directive came into force, or have been more general tests of the qualities of an open source license (i.e. the referenced library itself is still copyrighted and under GPL, even if no derived work is created in the rest of the code).
(Also note that the situation in the US is unclear, but it looks like courts are slowly leaning towards a similar approach to the EU (see e.g. Google vs Oracle) and I wouldn't be surprised to see similar legislation appear over there.)
You're also missing the point of EULAs. They exist because they are enforceable. They often contain unenforceable clauses, sure, but the basic concept of "to access this item, you need to agree to these terms of use" is very well enshrined in law. The problem with EULAs for end-user software is that they represent a significant power imbalance where consumers are pitted against corporations, but for open source licenses, which are more typically used in a B2B setup, that isn't the case. And with something like a license agreement, you get to decide when the license applies, because you define that as part of the license.
So you could have had much more rigorous consumer protections via an EULA-style license agreement than you can ever have by abusing copyright protection, but the FSF for some reason decided it was more fun to be clever than to try and solve a real problem. And now we're stuck with a concept of open source that is slowly making itself less and less relevant over time.
Not that I'm bitter about this or anything.
At least for me, the issue is that LLMs are trained on copyleft code then their output is put into proprietary code bases. I’d love for people to use my code, but I want the improvements to be shared.
I remember hearing a story about a 1980s programmer (Richard Stallman) who reached out to someone at Xerox to try and get the source code for one of their printers, in order to program something that would stop them from jamming up all the time, and he was flabbergasted that they denied his request. The concept of IP law on code was so alien to him that it inspired him to start a whole organization dedicated to providing software freely to everyone.
Information wants to be free mannnnnn
I think there is a genuine risk of anti-AI backlash leading to a shift in people supporting stronger IP laws when they otherwise wouldn't (or would be materially harmed by such laws). I find the sentiments I see on the matter from really staunchly anti-AI creatives genuinely worrying sometimes.
Tell me more
I see a lot of under-informed artists in places like Tumblr cheering on things like what amounts to copyrighting style. Especially on sites that are full of fan art and other work that relies on permissive interpretations of copyright, this has been a source of frustration for me.
How good are the AI tools for artists? Is there anything available like the AI coding tools for programmers?
Modern generative AI can be used to make art, but afaik it's not all that helpful as a tool as part of a workflow that isn't centered around it. At least that's the impression I get from the more nuanced artists I've seen talk about it. I'm not super familiar with what's out there though, since I'm not an artist myself. It's definitely less sophisticated than it is with coding regardless, though.
My stance has always been: IP law should do a better job of promoting creative works.
When companies like Oracle assert copyright over APIs, or patent trolls try to control ideas instead of physical objects, those are actions that suppress innovation, and they're abusing IP law to do it. That doesn't mean copyright and patents are bad! It means computers have changed the world, and we need to fix the laws to curb the abuse.
LLMs are the same way. Copyright hasn't magically changed from inherently evil to inherently good—it's just a tool. And we shouldn't optimize for what's bad for Oracle versus what's bad for OpenAI, because by doing so we might miss out on building a better world.
You betcha. In fact, it's worded almost exactly as such in the very document that authorizes making said laws.
I wonder what the Founding Fathers would think of 100+ year copyrights and closed software ecosystems. Betcha a nickel Jefferson would be a GPL zealot too.
I don't know about the license itself but I could definitely see him fighting for the GPL's ideals. Somewhere in a steampunk parallel universe:
I wasn't really one of those. I'm not anti-copyright, but we do need to review its current terms. Locking down art for a century plus (well after the creator has died, and maybe even their children) was not the intention of copyright.
If we eliminate copyright outright, we surrender profits to whoever can produce the most merch. And I don't think "developers" can hope to compete with that.
Conversely, when I engage with pro-AI people and criticize these systems for being wrong quite often and not being the panacea they're made out to be, I repeatedly hear the phrase "you're not using it right" in response. It frustrates me to no end. Conversations with both ends of this spectrum happen frequently, so I'm increasingly feeling like I just want to ignore the topic entirely, because I fall right in the middle and face the ire of both.
There are definitely too many people on the extremes on both ends, and I think which "side" you get more frustrated with inevitably depends on which one you encounter more often. Both sides tend to discredit anyone with a remotely nuanced middle-ground opinion as being biased or corrupted in some way, in my experience.
Well, that's because "AI" is a huge umbrella term for many different models, which are also extremely customizable by their nature. It's like criticizing a car's performance without saying anything about its model, body, gearbox, or engine. And without having a driver's license.
Case in point I guess.
Imo, current AI tooling feels a lot like SCM tooling circa the 90s/00s: so bad that it makes you question (often screaming, pulling your hair out) whether copying versioned tarballs around would be a better use of time. I think we've yet to see the git/hg of this tech, which, along with the incredible FOMO + hype train pushing it, makes it really difficult to talk about, since the tooling right now is Perforce-tier, even if there's a glimmer of a good concept in there.
I guess I'm saying, I totally see where you're coming from. It reaaaaaaallly sucks to have true believers discard your opinion by telling you that you're holding it wrong.
Hey, fwiw, as someone who's engaged with the LLM discourse and also has some friend groups that're vehemently anti-AI, I don't see you as a traitor or any more complicit in ecocide than the average citizen in the western world (I haven't done the math, but I can't fathom that one's participation in LLM usage is worse than eating beef or driving an SUV to work). We're all doing what we need to survive, and although angsting over it is acceptable, whipping out the pitchforks at friends imo isn't. So at least this random internet person thinks you're on the correct side of this.
More broadly, I've been sharpening my stance a bit on these tools? I figure that they're going to be developed and deployed against the interests of most people, so understanding how to leverage them best in our own favour (and to tilt the tables, if at all possible) is worthwhile. They were trained by large-scale theft of our culture, and are proceeding to profit directly off of it while holding the world economy hostage. We should be extremely incentivized as a population to find as many ways as possible to exploit this while we can, since we're the ones who are actually paying for it in the end, whether or not we're literally subscribed to ChatGPT 😅
To your point, by some metrics, a prompt uses only one thousandth of the water that goes into producing a single cheeseburger.
https://www.seangoedecke.com/water-impact-of-ai/
What you described is how I justify my entire life, cant beat em join em.
20-year-old Occupy Wall Street protester me would be appalled
Kind of off topic, but there's an old-ish movie called SLC Punk! (Matthew Lillard at his best, IMHO) that your comment made me think of. At one point, the main character's father says "I didn't sell out, Son, I bought in. Keep that in mind." and as I get older the more I can relate to that.
I think that relates to the old adage that as you get older you get more conservative. While I've always felt that at a certain point you kind of get stuck in your ways, and what was once progressive is soon seen as conservative (just by the nature of things always moving and changing - not that at some point you go "You know what, minority rights weren't as important as I thought when I was younger!" or whatever).
The backslide of civil rights seems to contradict this, but it definitely feels worth exploring.
I feel like the "conservative as age" adage tends to fall apart once you hit "you do you so long as it's not hurting me" levels of progressivism.
You might not remember the modern terms or etiquette, but you're generally still in support.
I think the only real truth to it is that once you have kids, you're much less likely to throw yourself into danger for their sake.
I don't know if it's the case that people now are less supportive or if those that never supported it in the first place feel emboldened to be openly racist just in general. I think enough people realized "Hey, I can just be racist and there's no consequences" so they are.
Yes this for sure. Until people can't feed their families, then I think it swings hard the other way. But in our society as it is today, if you are comfortable enough and have enough to live on, it's hard to push to potentially make your family's situation worse.
Occupy Wall Street was a weirdly conservative movement! Lots of people, me included, got mixed up in libertarianism for a while because we felt like the corporations were using the government to funnel money to themselves, so maybe the solution was less government.
I pretty quickly figured out that the actual libertarian party makes no goddamn sense and at best would result in more money being funneled to the rich. Lots of people didn't learn, though, and here we are today, a dumbass memefied conservative party.
Personally, I see a lot of conservatism with age through the lens of "achieving vs. holding on to your achievements", be they material, cultural, status or what have you.
Generally, though not universally, older people (especially well-off older people) seem to have more and more to lose (from change) and less and less to gain.
It seems to me that younger people who are very unattached and flexible also just have little to lose from experimenting and also have more of an ambition to change the world in their image or achieve their goals, which the older generation might already have done (and thus they resist changing the world according to someone else's ideas)
Heroin Bob :(
but also when they beat up Nazis :)
Just wanted to say that it's really cool that you protested back then! I don't think that many of us are living in the future that we'd hoped, and having the courage at a young age to stick up for that is admirable!
Ah idk it was the cool thing to do if you were young and had no job prospects because people literally committed fraud to steal tax money.
I go to the No Kings protests now, but those really lack the zest of the unemployed masses that we had back then. Maybe in a few years it'll be on par.
Great way of putting it. I'll be stealing this phrasing!
I was just listening to a podcast about GLP-1s (so, obviously, quite unrelated to the topic at hand). Something really applies to this conversation.
Scolders.
In 2022-23, when GLP-1s were just becoming popular with wealthier folks, the scolders had a lot to say about these drugs. "So-and-so used Ozempic to lose all that weight. How terrible!" and so on and so forth. I absolutely saw it happen in some of my professional circles and in the spaces I visited online. The podcaster's point, though, was that he intentionally avoided getting answers to his own questions and curiosities at the time, afraid of the scolders online.
He then cracked a simple quip, 'eventually I realized the scolders had moved on to their next outrage, and I could finally explore my questions without their immediate scorn'.
So, anyways, I've been thinking about that a lot in the context of LLMs. I have zero interest in being a scolder. Generally it's hard to put something back in the toothpaste tube. So, let's just figure out how to effectively use this toothpaste we've got all over the place.
Sorry, I had trouble reading the article... I'll give it another shot in a bit, but it bothers me that the author is compressing a complex topic down to the ol' one-dimensional binary: everyone needs to stand somewhere on the pro vs. anti-AI line, and absolutely everyone's stance must be comprehensible by that metric.
It bothers me. A lot of what makes these tools problematic derives from their being used and deployed by exploitative corporations, and from the fact that research into developing them as communities at large (instead of in gigantic, centralized, water-devouring data centres) is comparatively underdeveloped. Since the force driving this development is not curiosity, or a genuine need to reduce human suffering, but instead a worldwide grift of hitherto unseen proportions (see: the S&P 7 discourse), there is absolutely zero incentive to minimize harm and produce long-term sustainable outcomes, and absolutely all of the incentive to lie, cheat, and steal every penny from pension funds to private wallets to facilitate tearing the copper out of the walls of society.
I'm pro-AI done completely differently than it is today, but hopefully how it will eventually be. Even that sentiment feels like too much to convey in IRL conversations, so I tend to just shut up and let louder voices do the talking.
(edit) Wowow OK made it through and my criticisms still hold. I also have a new criticism!
No they didn't! If this person was in the centre of that technology, they clearly didn't spend enough time to understand how it worked. As a very early enthusiast in the field (like, paying twenty bitcoin for a pizza early), it was crystal clear even before the boom that proof of work was never going to cut it due to the abysmally slow transaction throughput in practice. It was an elegant, probabilistic solution for distributed consensus under antagonistic conditions, but it needed serious work.
(also no, I was too stupid and poor to mine any of them way back when, so it's not like I'm secretly rich or anything; just visibly dumb XD)
(edit 2: nearly had a heart attack when I saw that this was by the founder of Earendil, because I briefly confused it with Anduril XD ah, time for afternoon coffee)
Blockchain is, was, and always has been, a solution looking for a problem.
There are multiple flowcharts with like 12 end nodes for answering the question "Do I need blockchain".
ONE of the end nodes is "yes, you need blockchain". It requires very specific conditions to be sensible, for everything else it's either useless or actively dangerous/misleading.
The problem being solved seemed fairly obvious: a truly secure, globally accepted, decentralized currency. Given the kerfuffle with companies like Mastercard and Visa, the use case is very clear if you've ever come out on the wrong end of them.
But reality, as always, is messy. And the road to hell is paved with good intentions. So I turned against Bitcoin for the same reason I do AI: it became a tool used for grifts rather than genuine progress, and the barrel became rotten in the process.
I still have some Bitcoin, mostly because I was greedy and didn't exit when it hit 100k€ :D
I'll sell the next time it gets there, I promise.
(And of course I'm one of the people who had like 2BTC mined with my PC at home and sold them when 100€ for a BTC was a crazy amount for digital currency.)
We as a society should never keep quiet about the grift!
This is conceptually very related to another thing I learned recently: discussing a topic in good faith and with the goal of arriving at a new, better understanding does not mean giving all sides the same time and consideration. Sounds counter-intuitive, but it's entirely true. Say you're discussing eugenics. One side says "we shouldn't kill people to optimise for a defined 'ideal' in society", the other says "we should kill people to optimise for a defined 'ideal' in society". Both are logically well-formed statements in correct English grammar, yet we do not need to consider one side because it's patently absurd and incorrect by most systems of value. This is the same way. To discuss AI, you need to grok AI, and that means being capable of loving it just as much as being capable of hating it. I don't see the loud anti-AI camp being charitable on that front.
If you presented these to someone a couple hundred years ago, they would disagree with you about which option is patently absurd and incorrect. And plenty of people up to significantly more recently would consider me being gay and trans to be patently absurd and incorrect by most systems of value. The reason we shouldn't give eugenics equal weight in discussion is because it is violent and inhumane, and it relies on outdated ideas about human biology. But declaring it "patently absurd" just isn't enough -- people in the past did have to argue against it using real evidence, because the consensus on what's right and what's patently absurd changes over time. And discussion of minority views is part of how you effect changes in what people perceive as right and wrong like this.
I don't think eugenics is a particularly good comparison to AI in almost any respect, fwiw, including how you use it here.
I don't see how "is it okay to bypass centuries of copyright in order to produce content even faster" is anything but patently absurd. It would have been absurd in the 1950s, the 1980s, and the 2010s. "To make us make stuff faster" was the primary justification across all those decades.
If we could regulate to make such models accountable for copyright, that'd take away a good 90% of my reasons to be anti-AI.
There are plenty of people who don't believe that copyright should exist at all because it stifles creativity and is principally used by large corporations to consolidate ownership of IP in ways that harm independent creators. This is a relatively extreme position but not necessarily an uncommon one, and there are a very large number of weaker positions that favor weaker copyright protections than currently exist that are even more common. Whether you or I personally believe this or not, it's an opinion held widely enough that even if we take as a given that it's true that AI is "bypassing centuries of copyright" under current copyright law (which is very much not a settled legal question in any jurisdiction fwiw), it is not necessarily patently absurd to believe that bypassing centuries of copyright is a good thing or at least worth it in light of what is gained in exchange (which, even under the most skeptical realistic analysis of the capabilities of modern generative AI, is definitely more than "producing content faster.")
However, this is all beside the point. I think you miss the actual message I was trying to get across in my previous comment. My previous comment is not remotely arguing in favor of AI whatsoever. What I'm pointing out is that simply declaring something to be "patently absurd" is not a remotely sensible way to decide what is and isn't worth discussing because the definition of "patently absurd" is dependent both on personal opinion and social mores, and if we were to use that as a basis for what is considered acceptable discussion, plenty of things that are very much good and worth discussing nowadays would have never been allowed to be discussed. There must be a better framework for deciding whether something merits discussion that goes deeper than just declaring it patently absurd.
Also, if copyright constitutes 90% of your issues with AI, it should be extremely obvious why I don't think that eugenics is a good analogue for it in this context (or most contexts, tbqh). I also think, frankly, that you don't have a good understanding of the real risks and dangers of AI if copyright is indeed such a large portion of your issue with it.
Yes, I've been around communities like that. I think it's horribly myopic for many reasons. To name a few:
When copyright isn't a thing, the biggest producers become the biggest winners, with no obligation to compensate the creator. That's why copyright was created to begin with.
You need some sort of scarcity to create value in anything. This is built in with physical goods. For creative works, we already see the results of what happens when a storefront is filled with slop: even quality works get drowned out. Now imagine that concentrated on a single brand. The idea of Mickey Mouse ceases to be anything meaningful when everyone and everything makes use of it.
I can see why a tech oriented person would support the destruction of copyright. They believe the best tools will float to the top and then be iterated on the fastest. But there's no objective measure to define "quality" in art. A complete destruction of copyright would destroy the livelihood of artists, and much of culture as a result.
I don't really subscribe to the populist fallacy much. There was a time when slavery was considered a net gain for society, and even as recently as 2024 in the US we've seen where "populism" got us. It's a metric to consider, but it is not the ethical end-all be-all.
Humans in general have a tendency to overreact whenever some bad aspect of a concept is discovered. I recognize there are countless examples of abusing copyright past the letter of the law and of lobbying to stretch the definition past the spirit of the law, but that doesn't mean the spirit is wrong. It means we need a stronger review of the spirit, and to evaluate whether the letter still works for it.
As it always will be. But at the same time, lines will need to be drawn. And if we don't push back, those lines will be drawn for us. I'm sure you're experiencing this now with the current trans "discourse" out there.
We can certainly argue all day about what lines are indefensible, but ultimately we still need to set our own lines. I'm still of the belief of overhauling copyright over repealing it, so it'll be hard to convince me that destroying it without some sort of replacement is valid.
Sure, I didn't make the argument. Eugenics brings about direct harm to the person, society, and even biology as a whole. Bypassing copyright brings about a cultural harm. There are different levels to it.
I do. I just accept that humanity will do a lot of stupid stuff, and that US culture especially has commonly defended against regulating harmful aspects in the name of personal freedom.
Copyright is my 90% because, by all cultural and legal accounts, AI can't really be defended on that front. If nothing else, courts have consistently ruled that AI-created works cannot be copyrighted, so that was never an issue to begin with.
If we control that and the country still wants to burn a hole in the ozone layer, replace all labor with robots, and mass manufacture propaganda... well, that's "freedom" at work I suppose. I'm sure at some point in such a descent I'd simply leave the country if society is unable to stop that.
But I've spent my life being shown and told "you aren't entitled to a job" and "money is speech". If enough people believe that, I'm not going to keep futilely pushing back on such a society.
Copyright was originally created as a quiet form of government censorship and control. The right to copy a document required registration and approval from the government. This was done to limit and control the free spread of information.
It also had impacts on who profited from a creative work, but that's not why it was originally created.
That's fair. It evolved some 50 years after that and added protections for those who specifically didn't have the means or knowledge for such registration. That's why copyright has been the default for the last 150 years or so.
I agree with this, though I suspect we'd disagree on exactly where that line is. The point of my earlier comment was to criticize the purported basis for drawing the line in a certain place -- I think it needs to be better justified through well thought-out arguments than with truisms like "patently absurd." I agree with the earlier commenter, for instance, that a site like this should shut down discussion that contains advocacy for eugenics, but I don't think that "it's patently absurd" is the reason why or a good justification to give when arguing for doing so.
We disagree on a number of matters when it comes to copyright's effects, but I'm fine agreeing to disagree on that front. I wasn't really interested in arguing about AI in this thread, just in criticizing the points made about shutting down discussion of certain topics more generally, because I think there's a real danger in the thought-terminating nature of "patently absurd" as a category/descriptor when it comes to ideas.
this is going off on a tangent bc I don't really want to actually argue about copyright rn, but fwiw, afaik this has been the ruling in one case, and it was a weaker decision than you present it here. I do personally think there's still a solid argument that strictly AI generated elements of a work are uncopyrightable under US copyright law, but it hasn't really been litigated yet whether prompt engineering counts as the tiniest modicum of human creativity as is necessary for copyright, which afaik is the sticking point when it comes to whether AI-generated work is copyrightable. The existing case that you're probably referencing involved the human involved listing the AI itself as author on the copyright registration, which made the decision a lot more obvious but not strong enough to establish that all AI-generated images are surely uncopyrightable. I'm sure that will be litigated sooner rather than later, but the courts move slowly. Even if all AI generated work was found to be wholly uncopyrightable by the courts, which has not yet happened, existing case law indicates that elements of the finished work that did involve any modicum of human creativity would be copyrightable. For instance, if a human arranges images in a certain way, the courts have already found that the arrangement can be copyrighted even if the individual images therein are found to be uncopyrightable. This is, in fact, exactly what happened in the decision you're probably referencing.
I think whether AI-generated work is copyrightable is pretty unrelated to the types of moral questions you're discussing anyway though, even as it pertains to copyright. Even the ways in which AI training may violate copyright (another thing that's both up in the air legally and will probably involve a lot of fine-grained case-by-case distinctions once there is more case law on it) for instance, are pretty much entirely unrelated to whether AI-generated work is copyrightable.
I can spend all day explaining my take, but at some point it's futile. If the statement "claiming credit for other people's IP is wrong, and profiting off of someone's IP without permission is very wrong" doesn't resonate with you, then there's not much to discuss on the issue. If our viewpoints are that divergent, then there's no point trying to find nuances. Drawing those hard lines is part of understanding your identity and your views of the world.
I like to think I'm flexible on most topics. I'm still open to the idea of AI under proper regulations. But that's why I highlighted copyright as 90% of my objection. Taking someone else's work to profit off of without so much as a credit makes me unable to accept AI in its current iteration, because it hits one of those "patently absurd" use cases. I don't use that term often, nor lightly. It's based on a lot of first- and second-hand experiences of people stifled under such abuse. We're supposedly a "best ideas prevail" society, but great ideas get quashed by whoever has the best litigation? I can't accept that.
I've seen many arguments, and I've deep-dived into how the tech underneath works (I did quite a bit of reinforcement learning pre-GPT, so I'm not new to this tech), and I'm still not convinced this isn't outright copyright theft. But I'm not a lawyer, so all I can do is watch how the current high-profile litigation goes and see if the country's court system agrees with me.
SCOTUS upholding a ruling can't make it much clearer, can it?
https://www.reuters.com/legal/government/us-supreme-court-declines-hear-dispute-over-copyrights-ai-generated-material-2026-03-02/
There are at least two other rulings from earlier, too. I don't see other angles here unless we start arguing "I made 90% of this, but AI generated a few details". I guess those can go to court, but it's pretty consistent that typing a few prompts and trying to claim copyright on the results is not enough.
I agree, and I personally don't care if someone copyrights an AI work as a concept. As long as it's not overwriting someone else's copyright. I just point to that as one of the more consistent rulings early on that puts some reservations on how companies function. And it will probably be used in current litigation, so while it's a different case, they are at least tangentially related.
You seem to continue to be under the misapprehension that I want to argue about AI with you, despite my repeatedly reiterating that I do not want to do so. I can't tell if you're willfully refusing to understand what my initial comment was actually about but there are only so many ways I can reword the sentiment that "just saying something is 'patently absurd' is a bad justification for shutting down discussion about it because it fails to distinguish things that are actually harmful to discuss from things that are unjustifiably societally taboo". Your arguments about AI and copyright are a non sequitur. Please stop trying to argue with me about it.
As for the copyright decisions, SCOTUS declining to hear a case is not the same thing as them upholding a ruling, and whether the Supreme Court heard the case or not does not contradict any of the facts I gave about the limitations in scope of the decision in the case in question. My point was simply that the decisions that exist on this question are more limited in scope than what you're claiming, and you describe it in absolute terms that are broader than what has been established in the very limited amount of case law on the subject we currently have. More case law will absolutely build up over time and it's perfectly likely that they will continue in the vein of not finding AI generated material to be copyrightable, I don't think that's an unlikely decision under US copyright law whatsoever. But when I addressed this in your original comment, I was merely trying to correct what seemed to me to be a factual misstatement, because framing this as having been very firmly and broadly decided already doesn't reflect the current state of things legally. But this has reached the limit of what I want to discuss on the matter. As I said before, I have not been remotely interested anywhere in this particular thread in debating AI, as it's unrelated to and distracts from the point I actually was trying to make originally. We also have very different views on IP law based on your preceding comments, so a discussion of AI and copyright would no doubt be stifled by our differing perspectives on copyright as a legal tool, on which I suspect we would need to agree to disagree anyway.
Can you elaborate on how this relates to the preceding portion of your comment?
The conversation at hand is "is AI good or bad?", so answering or at least discussion of that question requires that you examine AI first. If your position - like the negative side of the argument here - is that AI is so inherently bad we shouldn't even give it that time of day, then you're not interested in having the conversation in a nuanced way to begin with, and therefore we can discard that position in favour of the ones that are actually interested in a discussion about the topic.
To put it simply, if your goal is to have a conversation (is AI good?), then the available options are "Yes", "Maybe", and "No" and variations thereof. "I'm not even going to discuss the question because I believe it's not worth it" is not a valid answer to that question.
Yes, Maybe, No, Potentially, It Depends, Signs point to yes, these are all answers, and getting to whichever one of them will mean having reached a new quantum of understanding. "We will never know because I'm not going to discuss it" does not add any new insight.
It's quite simple: AI as a tool is pretty neat with varying degrees of usefulness. The ethics of AI are a very different story. As such, you don't really need to know anything beyond its externalities to pass judgement.
The real pro/anti boils down to how the tools will be used (primarily to further exploit workers), the actual gains (a lot of confirmation bias), the psychological harms of ascribing personality to a statistical engine (and thus its usage as a therapist/partner), the lies about its economic feasibility, and if this is worth the substantial environmental damage this is causing in a time where it would be better to reduce energy consumption instead of increasing it.
Most AI is powered by natural gas. Natural gas emits substantial greenhouse gasses, through every stage of its extraction, transport, and combustion. It is only "green" in the sense that it's only about half as bad as coal.
By almost any standard, if you consider climate change or worker exploitation a problem, using AI is immoral.
This seems biased to me, and reinforces the original commenters point. If you are just going to paint it in an evil light, then why would anyone want to include you in a real discussion on the topic?
You couldn't concede a single positive while responding to someone who is trying to ensure people are approaching the topic in good faith.
Because painting it an evil light is, based upon substantial research, the accurate portrayal of the current state of affairs once you've stripped away the hype. A hypothetical future AI that does not have these problems does not change the current evilness.
I did concede a positive: it is a tool of varying degrees of usefulness. It has a lot of potential. That does not negate the vast list of externalities that mostly get handwaved away by the boosters.
In other posts, I've actually mentioned how I'm surprisingly impressed with Claude's capabilities. That doesn't change the fact that it feels a bit like selling your soul when you use it to do what could be 30 seconds of web research at 1/100th the energy.
However, it is also completely unrealistic for every single person to always rehash their nuance in every argument. If AI boosters/makers provided actual answers to the biggest problems that keep coming up, the conversation would change, much like how we don't debate the merits of leaded gasoline or paint anymore.
I guess that's how corporations excluded climate change from the discussion for decades.
I'll give one: AI is a catalyst that accelerates the status quo.
That could be a good thing. But I think many would agree that between the POTUS casting himself as Jesus Christ, the current inefficiencies of trying to force it into the workplace, and the stock market exploding over unrealized gains, the status quo is very much not accelerating in the right direction. I can see a lot of AI being useful in theory, but the current realities are extremely disappointing.
I was stating my agreement with the poster who more or less said the hardliners on one side wouldn't engage in actual discussion because they believe too strongly against it. Just like the person whose convictions will never let them concede that abortion is anything but literal murder isn't really worth including in a nuanced discussion on the topic.
I can say there are positives and negatives about AI, but I'm not going to say it's inherently perfect or evil.
Yes, at some point there's not much left to discuss on a topic, and instead you simply need to take action. This discourse has gone on for at least three years, so I don't think there's much anyone can say (or even research) that will move the dial. There's too much at stake on either side of the aisle for that.
The people "thinking" have made their intentions loud and clear, sadly.
Not to invoke the ol' whataboutism, but there are plenty of technologies that are much more grey and do not in the slightest gather that much ire from the general public, so I have a hard time taking this point seriously. What about AI makes it so much worse than, say, cryptocurrency? Or the global stock market? Or nuclear power?
Cryptocurrency: almost every single use case for cryptocurrency falls apart once a single party involved can be trusted. Finding the edge cases where it has been a positive always sounds a lot like "At least Mussolini got the trains to run on time."
Stock markets, in their current form, bear no resemblance to the economic activity supposedly underlying them, with about a 5-to-1 ratio of zero-sum gambling to actual economic funding.
Nuclear power is the best green energy source we have, and while it is difficult to implement it is the best chance we have of taking the base load off fossil fuels, because the amount of battery storage and grid improvements that would be required for universal wind/solar is currently not feasible.
It's all about scale, magnitudes, and regulations for evils. We know enough about energy that any new energy would be highly regulated, so I'm less worried about nuclear energy being used wrong.
AI, on the other end of the scale, has been actively trying to pre-emptively remove any regulation around it. I don't see how this can ever be a good thing for an emerging technology. And it's hitting at a much bigger scale than cryptocurrency ever did. Not to mention the fact that we aren't prepared for a positive outcome of proper AI usage (e.g. UBI initiatives).
AI isn't absolutely worse than any of those, but it does present the most danger.
In your example, you seem to have eugenics confused with genocide?
I don't really wanna get into it, but I did mean "thin the population by removing those with undesirable traits so they won't reproduce" which unless I'm completely mistaken is eugenics
It is a form of eugenics, it might also be a form of genocide. You were fine.
I associate eugenics with things like forced sterilization programs and other restrictions of reproductive freedom, which are abhorrent, but actually killing people is a step beyond that.
Totally fair to have your own opinion on this, but I don't think creating a ladder of atrocities at this granularity is useful w.r.t. discussing morality. Even permitting that subtlety to exist -- as delphi noted -- leads to justifying, for example, the attempted (and largely successful) wholesale elimination of first nations from Canada.
Once you concede that, although it's bad to sterilize an entire culture, it's worse to just kill them, then that permits a man proposing the former to seem a level headed pragmatist by comparison to those raving lunatics proposing the latter. It's the civilized solution, even! Kill the indian, to save the man.
I freely admit that this is a slippery slope "fallacy", but because we've seen it happen over, and over, and over, and over again, the actually pragmatic approach for people who would like to see the oppression and death stop is to prohibit that foot in the door to begin with.
I think if you ignore all distinctions, you might end up claiming that Margaret Sanger (the founder of Planned Parenthood, who apparently did publish pro-eugenics articles) was some kind of racist Nazi.
So, I'm going to stick with making distinctions. I think it's important to try not to misrepresent other people's beliefs.
The distinction you make is not a real one. Eugenics encompasses both the methods you describe and actual killing, since it describes the goal of "improving" the population by removing undesirables. In practice, these "lesser" approaches you describe are incredibly frequently accompanied by overt killing anyway. The Nazis were undoubtedly eugenicists, ofc, and their programs took huge influence from contemporary US eugenics. Attempting to draw a distinction here seems to amount to white-washing US eugenics solely because the Germans arguably beat them in terms of the scale of implementation.
It's also worth noting that the methods you describe also uncontroversially constitute genocide (or at least attempted genocide) by the accepted definitions.
No worries, I don't aim to convince people of anything, least of all in a wee internet forum 😅
^ TIL about Planned Parenthood! But I try not to hold founders against their institutions in general, so that doesn't change much for me, at least -- moreso it's their actions after the fact.
Can't it also be argued that you're acknowledging one is worse than the other but, on some level, don't want to admit it? By removing any distinction, you could come across as disingenuous to some, and cause them to feel you're lying to cover something up.
I don't personally believe that, but I think there's a slippery slope whichever side you go down. Purposefully misrepresenting something because you don't want to give the appearance that one is worse than the other gives people cause to distrust you. The way you're approaching it may be the better way overall, I just don't think it should be seen as one without its own set of consequences.
Nope! Because people are free to have discussions about genocide and eugenics in private, invite-only groups. The Canadian Criminal Code notes that we limit freedom of expression in public spaces when advocating for hatred (e.g. eugenics, genocide). If a curious soul wants to have a more nuanced understanding of this, then they are free to do so in controlled environments; out here in public, we wash our hands of the discussion knowing that it's all horrendous, vile trash, where shades of grey are unwelcome (note: there are precious few cases where I have this hardline stance; this is one of them).
Pushing a smidge beyond my pay grade, but it feels like discussions debating the subtle distinctions between different methods of cultural annihilation are a form of infohazard, and we should treat them like we do virulent plagues, hazardous and persistent toxins, etc. It's fine if you're adequately protected, but the average person should absolutely not be legally permitted to manipulate the social equivalent of ebola outside of an extremely controlled environment.
Agreed that there are consequences, but I think they're mostly on the side of not enforcing this strongly enough. My views are somewhat in line with that of the Canadian government's, and even it still occasionally applauds literal Nazis in parliament, or argues towards stripping away First Nations' rights.
Right, and in those private groups, where they become an echo chamber, they're going to point to how you've pushed them to discuss it in private and cast you as the real enemy. There will be a nugget of truth behind their attacks: that you purposefully misrepresent your position and lie (out of fear of a slippery slope, though that part won't be accounted for). And it won't just be directed at you in particular; it will be aimed at anyone on 'your side', whatever your perceived side is.
On the one hand I could take a very cynical or even nihilistic view and say that no matter what, we lose, but I don't know that I believe that either. Whether or not you clamp down to suppress the expression of these ideas, it almost feels like an inevitable tide. But I also wonder if both approaches have worked at different times and in different circumstances: cases where suppression only made things worse and countering with truth worked better, and others where truth didn't prevail and suppression worked better.
I was moreso thinking humanities courses at universities, or at museums covering historical atrocities, than like Nazi bars or terrorist meet-ups. Places where actual experts can chime in and explain how being very polite and extending the benefit of the doubt to ideas which involve extermination has turned out, extremely badly, extremely often. As I noted, it's OK to explore these ideas in very controlled, safe environments; doing so in public can rapidly become hazardous.
While I would agree those would be ideal places, that seems a bit more like preaching to the choir. What percentage of people are actually exposed to those situations at any significant level? You're not really covering that many people in that case. That could also be taken as "well you're just not smart enough or don't have enough money to be involved in this discussion" considering the resources required to travel for leisure or go to university.
It's just delayed genocide. Also used as an excuse for why actual genocide is OK.
It's a pretty strange ethical stance to equate killing people with preventing future births. Certainly, the people being killed would care about the distinction.
If somebody advocates to neuter all the Jews so their bloodline ends....that's just delayed genocide. It will take 80ish years, but it would have succeeded where Hitler failed.
It's all cut from the same cloth of racial supremacy. It's like debating whether chopping off a hand or putting a tourniquet around it is the best way to remove hands, because they are unclean.
This is an overly reductive argument that elides what we're really discussing when we talk about AI. Is it LLMs trained by opaque private entities? Are we referring to machine learning models specialized for understanding molecular interactions, automating robotic movements, target identification in satellite images, etc.? Generative transformers for image synthesis?
Is "good" just utility, or does it take into account, as /u/vord said, all the training corpus theft, environmental, economic, social, and other externalities? And the grotesque politics [paywall] for U.S. users?
I'm in that uncomfortable middle - I've used a couple of LLMs both out of curiosity and because they've been heavily promoted for work. I'm still in the stage where it's usually a coin flip as to whether my productivity will be improved or hindered, but I've had some cases where the results were stellar and well beyond anything I could do unaided. I've also seen some corporate (LLM) and medical AI (image processing) misuse that was actively detrimental.
Given the resources, I'd much prefer to run and train a smaller specialized model for the tasks that matter in my work, instead of stumbling across the gaps and unpredictable costs of genericized private models.
I'd like to see public open source models, run on public infrastructure, using transparently sourced public data, used to achieve public goods - transit and budget planning, public health, social services connectors, resource utilization monitoring, decision support for small communities, and so on.
I can see paths where we might eventually get to unalloyed benefits from AI. But they're not going to arrive through "lines of code go up". There are massive inefficiencies and risks in dumping the entire Internet and every digitized medium into training and trying to mount guardrails after the fact. Centralized model control in vast data centers, with profits going to a tiny minority, creates far too much environmental, economic, and social damage to proceed without regulation.
Put me in the "Potentially" and "It Depends" camps that /u/Delphi mentioned, but not with respect to whatever OpenAI is doing this minute. And color me massively skeptical about "AGI".
The way I think about it is that you can’t get an informed view without either doing an investigation yourself or relying on someone else to do the investigation. That takes effort. A good question to ask is who did the homework and what did they actually do?
Nobody should be required to investigate things themselves to have an opinion, but you should be careful what your sources are and understand their limitations. For example, after an airplane crash, the best source is going to be the formal investigation, which will take months.
Scientific studies and investigative reporting are other good sources. Writers can also publish more informal investigations on their blogs.
All these sources have to be handled with care, because writers have their biases. What offsets bias is a commitment to curiosity: learning more about the subject and reporting whatever you find, even when it doesn't support a favored position. Not every writer has that commitment.
What I keep noticing is that a lot of the criticism directed at crystal meth is perfectly legitimate, but it often comes from people without a meaningful amount of direct experience with it. They are not necessarily wrong. In fact, many of them cite studies, polls and all kinds of sources that they themselves spent time investigating and surveying. And quite legitimately they identified real issues: your dental outcomes can be bad, the malnutrition implications are scary, the hallucinations of insects are strange and potentially result in sores from over-scratching, there is a liver and kidney impact, the long-term nervous system consequences are unclear, and the hype is exhausting.
To carry on with this analogy, I've personally encountered people who have pointed at that same evidence to insist that everyone who takes stimulant medication for ADHD is a degenerate addict and that it should be banned. It's possible for something to be genuinely harmful and dangerous and for people to wildly overreact and fear-monger to an extent that is itself harmful and dangerous. The two aren't mutually exclusive.
Interesting, but I think the author isn't carving out enough space for people like myself. For example:
The author seems to miss the idea of someone who pushed through the first failure and honeymoon period, but concluded that they reject the technology, and thus stepped away from it. I (and some of my favorite former and present coworkers) did exactly that. I gave LLMs a few months. I didn't like the results.
But now all I hear is accusations that I "never gave them a chance" or folks blowing smoke about how "the latest models are so much better." Maybe. But they don't address any of my issues with the underlying technology.
Funnily enough, back in high school I did the exact same thing (maybe with more memes) with cryptocurrency. It was neat! And then I decided it wasn't useful and didn't need space in my life.
To have an informed opinion you absolutely need some level of practical experience, sure (though this doesn't apply to everything: I don't need to drive one to know that I don't need a truck or large SUV, and I don't need to try adderall to know that it would probably make me much more focused). But once I've come to a conclusion, I don't need to keep immersing myself. Once I know the water is cold, I'm not going to jump fully back in just because someone tells me it's warmer now. I'll dip a toe in from time to time, and have a chuckle to myself when it was, indeed, an enthusiast trumping up a 1 degree difference as a "game changer". If it's balmy, I'll be more than happy to jump right in. But I do not (and simply don't have time to) explain my full stance every. single. time. It is simply too tiring. Let me read my book by the water's edge in peace!
Among the major mainstream criticisms I hold are the very abstract ones: the environmental and social implications, and disgust at the hype continuing even with these negative impacts known.
And if you take those concerns as a moral issue, like I do, it feels wrong to use AI agents/LLMs at all. For an environmentalist, supporting and using a tool that is essentially accelerating the climate crisis would be totally unprincipled.
So even if it's abstract, I don't think the criticism is necessarily missing anything. Unless data centers and chip production somehow run on 100% clean energy and materials, the actual performance of AI and the experience of using it are irrelevant.
So very similar to vegetarianism. But I don't see many people complaining about how those damned vegetarians just don't have enough experience with meat to know better than their initial biased position.
Honestly, is it even possible to find a job as a developer where you're not asked to use AI? If such jobs exist, I imagine they will go the same way as open source jobs: few and far between.
AI is a tool and hiring someone who's vehemently anti-AI in 2026 is a risk for any company.
And because it's a shiny new tool, people are going around poking it into places it has no business being. People will get hurt, and already are, as some companies use AI as an excuse to lay people off.
But that doesn't mean there aren't good uses for "AI", which is a spectrum. For the vast majority of regular people, AI means chatting with ChatGPT, using it as a search engine, therapist, or proofreader, and generating silly images of themselves.
That's nowhere near the actual good uses for language models. The real uses are mostly invisible to people, and they have been relying on "AI" waaay before ChatGPT was a thing.
I blame Sam Altman. It is reasonable to hate AI because he's the face of it to everyone who is not a programmer or researcher.
And he's, at best, an asshole.
Huh, that's a good point.
Maybe development of super secure code that can't be leaked to AI companies in any way? Something like using strictly local models. Or maybe applications orthogonal to AI systems, like systems that aim to verify, broker, or influence AI in some way?
My issue is not that AI is forced onto us; my issue is that upper management anticipates 2x, 3x, 5x, 10x performance improvements from using AI without losing quality.
I'm not convinced upper management cares about quality.
This was very, very good. Thanks for sharing.