These articles are weirdly anthropomorphizing. OpenAI is “furious”? Did OpenAI rant about it at the bar last evening? What does that even mean?
This is literally the only text in the entire article about it
Both Bloomberg and the Financial Times are reporting that Microsoft and OpenAI have been probing whether DeepSeek improperly trained the R1 model that is taking the AI world by storm on the outputs of OpenAI models.
Wow, such fury.
I’m pretty sure everyone at OpenAI knows that DeepSeek probably did have some distillation in its training data, and that there’s not much they can do about it. It’s very common for training pipelines to include model distillation; practically any of the open-source models based on Llama use distillation for fine-tuning. It’s not like it’s a secret or something.
Of course if Bloomberg is asking about it the PR guy will just say “yeah we’re looking into it”.
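For anyone unfamiliar with the term: distillation just means training a smaller "student" model to match a larger "teacher" model's output distributions rather than hard labels. A minimal sketch of the standard soft-label objective (a generic illustration, not any particular lab's pipeline):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T gives softer targets,
    # exposing more of the teacher's "dark knowledge" about wrong answers.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions:
    # the student is penalized for diverging from the teacher's full
    # distribution, not just for missing the top-1 answer.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy example with made-up logits over a 3-way vocabulary.
teacher = [4.0, 1.0, 0.2]
matched = [4.0, 1.0, 0.2]      # student already agrees with the teacher
mismatched = [0.2, 1.0, 4.0]   # student disagrees

assert distillation_loss(matched, teacher) < 1e-9
assert distillation_loss(mismatched, teacher) > distillation_loss(matched, teacher)
```

When the "teacher" is only reachable through an API, people do the cruder version of the same idea: sample lots of teacher outputs and fine-tune the student on them as ordinary training text, which is what the accusation against DeepSeek amounts to.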
I read it more as company-fying Sam Altman than personifying OpenAI, assuming that he had a twitter tantrum or something. But doing a little digging, it seems like his only public statements regarding DeepSeek have been very mild, so that doesn’t actually make sense either. Definitely an exaggerated headline.
The organization is "furious"
Can you please point to a source for this? As in, an actual quote from someone in that organization that shows or implies some kind of anger. So far, the worst I've read is something along the lines of "they trained their model on our model, which breaks our TOS," which I don't feel sounds "furious," but there could be more out there that I haven't seen yet. The posted article here doesn't seem to have anything more than that.
Also, I'm pretty sure the above poster is poking fun at how headlines ascribe emotions to organizations to drive engagement, when it's really the people in that organization that actually could have emotions. If you don't actually have an angry quote from someone in that organization, you can just say that the organization is angry, add some angry commentary of your own in the article, and know that most people will assume that the article substantiates the claim even when no evidence is actually provided.
I guess there is some irony to this since OpenAI scraped the internet for data without consent but now that it's their homework being copied they scream bloody murder.
The circle of irony is completed when you see videos on YouTube instructing people how to hack an online AI image generator so that it won't post your prompt and results on the service provider's site - which is what you agree to in exchange for being able to use the service for free. And their reasoning: "I just don't like it that others can steal my work".
For an artist, this is all very... disappointing to watch.
And the hypocrisy ouroboros ties itself further into knots when the same companies outside of their "AI startup" hats start moralizing about digital piracy. After all, as the guardians of online communication, they have a duty to uphold the interests of intellectual property owners and fight against it whenever they can. Surely they wouldn't commit such an unlawful act themselves to use the pilfered data for their own products... Right?
hypocrisy ouroboros
This is by far the best word combo I’ve come across in a long time. Not sure if it’s quite a band, song, or album, but it’s poetry. Thank you for this and the bullseye usage.
Now if only Anonymous would open source some heinous AI-generated film starring Mickey Mouse, Superman, Spiderman, Ironman, Spawn, Gandalf, Fritz the Cat, and whatever other IP could make it worse… Sean Connery’s 007.
For a while not too long after LLM image generators really started taking off, there was considerable drama on twitter between LLM imagery posting accounts where they were all accusing each other of “prompt theft”.
The whole notion is ridiculous. All the millions of pieces that went into training the model, each representing the sum of an artist’s efforts, experience, and emotions, aren’t worth anything at all, but a sentence fragment that a third grader could’ve written is where the value lies? Really?
I'd like to point out a bit of semantic pedantry which is actually relevant: LLMs (Large Language Models), which are designed to process and generate text, are generally distinguished from image generation models (although since they do have to use text as input for text-to-image generation, a language model of some sort is involved in that case).
...The funnier part coming into play when people actually did start using LLMs to generate image prompts from a more natural description of their desired image that they would then feed to image generation models, bypassing the self-proclaimed prompt engineers in the process.
Fair, I just really, really dislike “AI” as it’s used today because it’s really not AI at all and should have its own name. To me the bar for “AI” is what now gets referred to as AGI/ASI, something more like Commander Data or the android character played by Robin Williams in Bicentennial Man.
Same issue, from the other direction. To me "AI" unambiguously meant video game path finding/decision algorithms/NPCs and whatever else applied under that umbrella, at least so long as the context was established as being about video games. The way the term expanded in the public perception had very annoying consequences in that regard.
This is hilariously funny if true, but also disappointing. If this is true, then we still need to consume insane amounts of power and hardware to train these models. I was really hoping that they had come up with some smarter approach.
As you can see, after trying to discern if I was talking about Gemini AI or some other Gemini, DeepSeek replies, "If it's about the AI, then the question is comparing me (which is ChatGPT) to Gemini." Later, it refers to "Myself (ChatGPT)."
Why would DeepSeek do that under any circumstances? Is it one of those AI hallucinations we like to talk about? Perhaps, but in my interaction, DeepSeek seemed quite clear about its identity.
Holy cognitive dissonance, Batman!
I was not aware of this at all. Thanks for the link.
Time to touch grass, definitely.
Nobody is screaming bloody murder.
This is just low quality click-bait.
DeepSeek just insisted it's ChatGPT, and I think that's all the proof I need