Big companies build products sold by specious executives or managers to other specious executives, and thus the products themselves stop resembling things that solve problems so much as they resemble a solution.
I'm reminded of Jean Baudrillard's remarks in Simulacra and Simulations:
We are in a logic of simulation which has nothing to do with a logic of facts and an order of reasons. […] Facts no longer have any trajectory of their own, they arise at the intersection of the models; a single fact may even be engendered by all the models at once. […] Everything is metamorphosed into its inverse in order to be perpetuated in its purged form. Every form of power, every situation speaks of itself by denial, in order to attempt to escape, by simulation of death, its real agony.
In Baudrillard's paradigm of postmodern truth, the real is that which is true; the unreal is that which is untrue; and the hyperreal is that whose truth is vacuous because its premise is imagined.
We might see how a measured CEO creates a good product to solve a real problem; an incompetent CEO creates a bad product to solve a real problem; and a charlatan CEO creates a product to solve a problem that doesn't exist. Their products may not be "inherently" or visibly bad; that's what they pay employees to fix! They might even be useful to accomplish some set of tasks. But those tasks are only contrived so that they may be completed.
This is hardly new. Society has dealt with low-quality products since before Ea-nāṣir, and fraudulent business propositions certainly just as long. Wars have been fought over simulacra.
Zitron suggests that an increasing number of corporate leaders may sincerely and truly believe in the hyperreal causes they claim to champion—they've graduated from mere external deception into self-delusion. That's not new either, but in the past one couldn't sell a hyperreal product for long because business was predicated on some manipulation of real resources for real purposes. What's new is the scale of the hyperreal. Now, in a world where corporations choose to create and apply value arbitrarily, with few reference points to the real, society deludes itself accordingly.
In a horrible ouroboros, this effectively renders the hyperreal as real: the New Real. Truth may exist objectively, but it can only be appreciated subjectively.
In other words, our reference points shift such that our entire understanding of human purpose changes, and we can no longer imagine any other ideology. We may decry the change, although one could make the argument that we already live in some variation of the New Real; that some combination of language, mathematics, and abstraction has encouraged us to leave the "real" cycle of biological existence for the hyperreal supersocial, which we have accordingly defined as the real. Perhaps this change was the invention of imagery, or currency, or the stock market, or something microorganic I can't even comprehend.
In a post-real economy, trying to solve an otherwise "real" problem external to the system has no purpose. That would be specious. Rather, the goal is to solve problems inherent and exclusive to the arbitrarily constructed economy itself. These become the new "real" problems. But this epistemology is a simulacrum, or a reflection of the previous one, and therefore one may contrive a further hyperreal displacement of economy within it. Thus the ouroboros continues.
I'm finding this to be a bit unorganized and silly, frankly, and that's coming from me, who I would characterize as unorganized and silly… but I think there is a lot of good stuff in there. I just wish the good stuff weren't wrapped up in quite so many opinions I disagree with (some of them, anyway; I agree with plenty of what he's saying)…
But, for example, I paused upon getting to this:
While there are many people that dick around with ChatGPT, years since it launched we still can't find a clean way to say what it does or why it matters other than the fact that everybody agreed it did.
Okay, I can't perhaps give you the clean way to say what it does or why it matters, but I can say I've used it to:
Helped me program (shitty code, yes, but one of these days it will help me learn how to do it for myself) a game I've wanted to recreate for literal decades
Helped me figure out a good schedule at Colonial Williamsburg for the Nation Builders, who would speak 1-3 times per day (depending on the season). There's about a dozen of them, so I needed a good, fair, repeating structure, as even as possible, with a few preferences to try and accommodate. ChatGPT absolutely helped me come up with awesome rotations, for which I got kudos. (In fairness, it took longer with ChatGPT than by hand, but the rotations I came up with were good; the rotations it helped me come up with (and I helped it come up with) were awesome.) There's a toy sketch of the basic rotation idea below.
Helped me figure out some things about my health. Not medical answers per se, but things like sanity checks and basic information. Hard to articulate, but it's damned helpful.
Has absolutely helped me find things I couldn't remember. Sometimes you can google weird shit (do a youtube search for "blblblblbl" please please please and thank me later) and find stuff, but sometimes not. ChatGPT has definitely helped me find things I couldn't otherwise remember.
Many other things. And it's just starting to become useful.
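To give a flavor of the scheduling one: the core of it is just a fair rotation, which is easy to sketch. This is a toy of my own, not what ChatGPT actually produced, and the names and slot counts are made-up stand-ins for the real Colonial Williamsburg lineup (the real problem also had preferences to accommodate, which this ignores):

```python
from collections import Counter
from itertools import cycle

# Toy version of the rotation problem: N speakers, a few slots per day,
# keep total appearances as even as possible across the schedule.
# Names and slot counts are invented, not the real CW lineup.
speakers = [f"NationBuilder{i}" for i in range(1, 13)]  # about a dozen
slots_per_day = [3, 2, 3, 1, 3, 2, 3]  # varies by day and season

def build_rotation(speakers, slots_per_day):
    order = cycle(speakers)  # round-robin keeps counts within 1 of each other
    schedule = []
    for day, n_slots in enumerate(slots_per_day, start=1):
        schedule.append((day, [next(order) for _ in range(n_slots)]))
    return schedule

schedule = build_rotation(speakers, slots_per_day)
for day, lineup in schedule:
    print(f"Day {day}: {', '.join(lineup)}")

# Fairness check: appearance counts should differ by at most 1.
print(Counter(name for _, lineup in schedule for name in lineup))
```

The hard part, and the part where ChatGPT actually earned its keep, was layering individual preferences on top of that baseline without breaking the evenness.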
There are concerns - valid ones - about copyright/theft, hallucinations, and more. But it is a damned handy tool in many contexts, and it will get better.
That said, his main point is that companies are throwing it into everything for little good reason, and I do agree with that. I'm just frustrated by a lot of his writing, especially considering how much I agree with much of what he's saying. lol.
I'm glad you enjoyed your time with it, and those things sound neat, but making shitty code and taking a long time to make speaking rotations doesn't exactly sound like a sustainable business if it costs however many billions of dollars to run.
But if everyone finds such small use cases, such that it is relied on by 500m people a few times a week, then those billions start looking like much better value, eh?
I don't think so? How many of those things would you be willing to pay for, at prices high enough that these companies aren't burning billions of dollars subsidizing free use?
I mean, I and everyone at my company have a paid-for Copilot license, and I've got lots of pals who pay for ChatGPT (especially folks in recruitment).
It can't do everything, but it can certainly do a lot. I'd be loath to bet against these companies making it profitable, even if that comes with a number of significant challenges for society.
And Copilot was recently open sourced, so people will be hooking it up to free, local models even if they're worse than the crazy expensive paid ones.
I've worked with the free local models (and that's with actually considerable local GPU resources); they don't hold a candle to the crazy expensive ones. I, and people like me, will happily pay top dollar for the kind of productivity gains that are available from these models.
The cost of using Claude 3.7 and MCP servers with Cline is absolutely insane, and I'm honestly stunned businesses keep paying for it. I've seen devs spend hundreds of dollars generating bad code and going down stupid rabbit holes they could've avoided by using their own brain. Maybe there's a productivity improvement, but I haven't seen it in any of the numbers.
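To put rough numbers behind "absolutely insane": agentic tools like Cline resend the ever-growing context on every step, so input costs compound roughly quadratically with session length. A back-of-envelope sketch, assuming approximately Claude 3.7 Sonnet's list pricing of $3 per million input tokens and $15 per million output tokens (verify current rates), with invented context sizes:

```python
# Back-of-envelope: why agentic coding loops get expensive.
# Pricing assumed at ~$3/1M input and ~$15/1M output tokens
# (roughly Claude 3.7 Sonnet list pricing; check current rates).
INPUT_PER_M = 3.00
OUTPUT_PER_M = 15.00

def session_cost(steps, base_context=20_000, tokens_per_step=2_000):
    """Each agent step resends the whole growing context as input."""
    total = 0.0
    context = base_context  # system prompt + files + MCP tool schemas
    for _ in range(steps):
        total += context / 1e6 * INPUT_PER_M           # resent input
        total += tokens_per_step / 1e6 * OUTPUT_PER_M  # new output
        context += tokens_per_step                     # output joins context
    return total

for steps in (10, 50, 200):
    print(f"{steps:>3} steps: ${session_cost(steps):,.2f}")
# ~$1 at 10 steps, ~$12 at 50, ~$137 at 200 under these assumptions.
```

A few long rabbit-hole sessions a week and you're at the hundreds of dollars I've watched devs burn.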
I can definitely see how exuberant use of MCP could result in absurd costs, but I think that's a user failure rather than an indictment of code assistance.
I'll be surprised. The amount of money and resources they're burning through (losing billions of dollars even on the highest-paying tier of paying users) makes me extremely skeptical of that. This is a company that has never ever made money, selling a technology that has never ever made money, at a scale that is astonishing.
I'm in a similar boat to @daychilde; I find it kind of useful for little tasks and generally just pointing me in the right direction.
If they monetize it by selling buckets of tokens I could see it working. I would pay $20 for some GPT-Bucks that don't go bad, but I won't buy a monthly subscription.
Sure! I believe that's probably true of a lot of people, and that is nowhere near the type of money they'll need to be profitable.

I have a hard time believing that if they actually started to charge people money that they wouldn't be able to find a path to profitability.
If they started charging people money they may be able to find a path to profitability, but it would also show exactly how the open market values it. And that collapses the potential for it to be worth all the money in the world into an ultimately venal question: can I provide an answer at a price point that people will pay?
That's not a question that gets trillions in investments unless the answer is very, very good.
It seems strange, then, that they're happy burning billions when they could be making a profit instead. I'm not a business genius, but surely if they could be making money off of it, they would be.
Making a profit is for small enterprises, the big bucks are in receiving VC funding for "growth" until you hit a wall and then selling your sinking ship to let someone else worry about it
ServiceNow CEO Bill McDermott ... chose to push AI across his whole organization ... based on the mental consideration I'd usually associate with a raven finding a shiny object.
Hey now, that comparison's insulting! To the Raven.
But in all seriousness, this article's a great read. The author discusses thoroughly, vividly, and at length the dysfunctions of "line goes up" capitalism. They articulate, much better than I could, a belief I've had for a while: that companies whose singular purpose is the pursuit of maximizing shareholder value in an environment with limited resources will engage in a race to the bottom, minimizing costs to maximize extracted value. And at the bottom is human suffering.
A great long read about the state of kind of everything but mostly managers.
A generative output is a kind of generic, soulless version of production, one that resembles exactly how a know-nothing executive or manager would summarise your work. OpenAI's "Deep Research" wows professional Business Idiot Ezra Klein because he doesn't seem to realize that part of research is the research itself, not just the output, as you learn about stuff as you research a topic, allowing you to come to a conclusion. The concept of an "agent" is the erotic dream of the managerial sect — a worker that they can personally command to generate product that they can say is their own, all without ever having to know or do anything other than the bare minimum of keeping up appearances, which is the entirety of the Business Idiot's resume.
There are a lot of good quotes I could pull out of this, but I particularly enjoyed this one. It really saliently captures a phenomenon I've been watching unfold in real time in my organization and others.
I'll also pull out this quote, which I think is one of the core points the author is trying to make several different ways.
Much like the Business Idiot themselves, ChatGPT doesn't need to do anything specific. It just needs to make the right sounds at the right times to impress people that barely care what it does other than make them feel futuristic.
This point in particular struck me as really interesting when I considered it alongside the explanation that @Wes made here about how LLMs, during training, will often reward-hack their way to the correct answer.
The catch is that sometimes models can find clever shortcuts that the researcher didn't anticipate. For example, if a model looks like it's reasoning out a problem, but actually ignoring that reasoning and intuiting the answer in another way, that might be able to trick a researcher.
I would attempt to summarise, then, that the Business Idiot is a human who has (either intentionally or accidentally) learned how to reward-hack their way to success by finding the correct patterns of behavior that give the illusion that they know far more than they do and have done more work than they actually have.
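To make the analogy concrete, here's a toy proxy reward that scores surface signals (confident wording, length) rather than correctness. Everything in it is invented purely for illustration, but it shows how an optimizer, human or model, can maximize the score while saying nothing true:

```python
# Toy reward hacking: a proxy reward that scores surface signals
# (confident wording, length) instead of correctness. The "hack"
# answer wins despite containing no substance.
CONFIDENT = {"clearly", "proven", "definitely", "robust", "leverage"}

def proxy_reward(answer: str) -> float:
    words = answer.lower().split()
    confidence = sum(w.strip(".,") in CONFIDENT for w in words)
    return confidence * 2.0 + min(len(words), 50) * 0.1

honest = "I checked the data and the effect is small and uncertain."
hack = ("Clearly our robust, proven approach will definitely leverage "
        "synergies, clearly delivering definitely proven robust value.")

for label, ans in (("honest", honest), ("hack", hack)):
    print(f"{label}: reward = {proxy_reward(ans):.1f}")
# honest scores ~1.1; the hack scores ~19.5. The optimizer (or the
# Business Idiot) maximizes the proxy, not the truth it stood in for.
```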
I feel the author is being unfair to deep research and "thinking" models, especially for research tasks. Ezra Klein is hardly a random business idiot. I am generally lukewarm on LLM stuff, but I've found them useful to crawl through multiple layers of documents to find the one link I'm looking for.
The best researcher models will perform multiple internal searches on live datasets until they find a decent answer for whatever was requested. I've seen good answers for complicated prompts like: "Who invented widgets, and why? What're the most recent, cutting-edge innovations in widget design, and what might they suggest for future, state-of-the-art widgets? Make sure to include several references to recent research in the field."
Personally, I won't be perfectly satisfied by the response, but it gives me a ton of information to kickstart my research. It's more than I can get from Wikipedia. The response will include links to real scientific journals that I can use for further reading, and the simple parts of the prompt, like who invented widgets, are easily verified when I'm digging into the details.
I agree the author is being a bit over the top, though I don't know enough about the finer details of how LLMs work to explain exactly why.
I think your description of how you've found the tools to be useful is a good example of the inflection point between using AI "correctly" and using it to further advance someone's personal illusion of productivity.
Your method seems very solid: you've leveraged the LLM's ability to process a mass amount of information and then give you pointers to where you can do further, more in-depth investigation. Which will presumably result in some solid results you can confidently use to help yourself and others make decisions.
The temptation, however, is to just stop after your first step. It is incredibly easy to have the LLM do the first part, and append additional instructions to format it into a pleasant-sounding presentation that confidently asserts things that feel correct but for which there is no actual supporting evidence. The compounding problem is that if the people this is presented to are also operating on vibes, and have a learned behavior that stops them from interrogating things that feel correct too closely, the fact that the entire proposal is full of shit will never be pointed out. It will just metastasize.
This one struck me as a little harsh -
Podcasts are not there "to be chatted about" with an AI.
Why not? We don’t always have someone who has just listened to the thing to chat about it with, or on the same time schedule. Sure, you’re talking to a prism, but eh.
Edit: Didn’t realize it was Ed Zitron at first.

The excerpt gives the impression that he doesn't listen at all, just talks about it with Copilot.
That's how I read it as well. It seems like he views conversing about an episode as a replacement for having listened to it. It's just such an out-there statement. Why do that? It's not like you need to focus on driving; you probably have a driver. If it were a summary in bullet points and he were getting through 5 episodes in the normal duration of one, then alright, but it doesn't even seem like an improvement in efficiency.
This essay shares a lot of ideas in common with Connor O'Malley's Standup Solutions special. The particular type of dude portrayed in the special was pretty much just a Business Idiot.