R3qn65's recent activity
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
Really? I’d assumed that you were just carrying on with the bit earlier, when you asked if I’d actually read C-score.
Or wait, is this the bit too-
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
But if we construe the definition of consciousness broadly
I don’t think it really even takes a broad construction of consciousness to start getting into confusing territory! It is very difficult to come up with a definition that includes all humans but excludes LLMs. (Cf. your subsequent example about ephemeral people). Or at least a definition that’s not completely tautological, anyway.
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
I'm very impressed that you managed to touch on exactly the core point with no foreknowledge!
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
The bat example is from one of the most famous papers on consciousness / the theory of mind. It’s a relatively easy read, by philosophical standards, and is worth reading when you have time. I mention that because you’ve already touched on the most important theme of the essay:
Is it really completely inaccessible to me to search for the overlap in my base experience to seek even a glimpse of what it must be like to be a bat? I think to make this argument is to argue that no living thing can share even an impression of what the experience of being alive is to any other living thing.
The short answer is that Nagel argued that even if you could perfectly imagine what it was like for you to be a bat, you can’t conceive of what it’s like for the bat to be a bat. Basically.
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
Your C-score essay was also very, very good. I found it was right on the edge of being on-the-nose by the end, but I think that probably just means you nailed the balance of making it readable - the only reason not to do that would be if you were trying to be all Straussian and stuff.
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
Hah, this is awesome. Really well done. I love seeing applied philosophy!
-
Comment on “This technology disrupts [...] Democratic—voters, [and] increases the economic power of [...] male, working-class voters” in ~society
I'm not taking a position on whether or not he's right. (I think he is, but that's not the question at hand.) I'm objecting to the characterization of him as "an idiot" or "dumbfuck."
-
Comment on “This technology disrupts [...] Democratic—voters, [and] increases the economic power of [...] male, working-class voters” in ~society
Your dislike of Karp is fundamentally causing you to attribute to him other negative traits (like being an idiot.) That’s an error, and it’s also foolish. If you think Karp is destroying the world, you should be treating him like he’s some sort of criminal mastermind just in case he is, because it’s better to overestimate your enemies than underestimate them. Dismissing every neocon as an idiot feels good, but is a strategic mistake.
They might be able to recite references about the specific thing they studied, but simply be idiots at everything else.
In that case Karp is probably one of the few people in the world qualified to make his statements. He has a doctorate in the humanities - which he wrote, I will note, in a language that is not his native language - and successfully co-founded a tech company which has become an AI company. He has legitimate credibility as both a student of the humanities and as a tech mogul. If anything, this is the specific take where you should begrudgingly admit that Karp has the standing to hold an opinion.
And if we’re talking about being wrong, he did not call the humanities feminine. He said “highly educated, often female voters, who vote mostly Democrat.” That is not a moral judgement, it is a statistical fact: more women receive advanced degrees than men, and holders of advanced degrees (regardless of gender) overwhelmingly vote Democratic. I would also note that his entire point was that those individuals are going to be disrupted the most, and thus society will have to figure out what we’re going to do to prevent everything from collapsing. If you’re imputing some sort of toxic incel ideology here, that’s on you.
His biography is titled “The Philosopher in the Valley,” for god’s sake. He doesn’t think the humanities are “gay.”
For the record, I don’t have strong feelings about Karp and I regret, typing this now, putting so much effort into defending him.
-
Comment on “This technology disrupts [...] Democratic—voters, [and] increases the economic power of [...] male, working-class voters” in ~society
Karp is an idiot who evidently does not know what the fuck he’s talking about.
I think the rest of your post is really good, but I think this is a pretty odd way to start. Karp has a JD from Stanford and a PhD in neoclassical social theory (disclaimer: I don’t know what that is) from Goethe in Germany. He’s the cofounder of a successful tech company. You can describe Karp many ways, but “idiot who doesn’t know what the fuck he’s talking about” is a strange one.
-
Comment on AI companies try to pay staff in AI tokens, not money in ~tech
That may be true, but even if so, I think it’s a maximally negative (and therefore un-usefully negative) view of what Altman is saying. I do not like Altman, but criticism of him should stand on its own merit.
The background of Altman’s argument is a hypothetical post-work society in which all work is done by AI agents. In that context, he is saying, one could simply give people partial ownership over tokens (which would be the means of production, since AI would be doing everything) rather than give them money. In a sense it’s a more socialist proposal even than UBI, because you’re directly transferring control of production rather than cash, which is basically the means of buying/renting production.
Maybe he’s wrong. Maybe the hypothetical is so far removed from reality as to not be a useful framing in the first place. But for goodness’ sake,
Altman really likes the idea of made-up credit at OpenAI being the money now. Because he’s a crypto bro.
is such a mean-spirited, intellectually vacuous response.
-
Comment on AI companies try to pay staff in AI tokens, not money in ~tech
My opinion is that if Deimos took action on every piece of ragebait, there would be almost nothing left on this website : )
-
Comment on “This technology disrupts [...] Democratic—voters, [and] increases the economic power of [...] male, working-class voters” in ~society
Here is more of the interview.
To me, this feels like someone announcing they are going to rob a bank and sharing a detailed plan on how to do so… I imagine Karp is primarily focussed on influencing American politics here, but I think this is a worldwide issue.
Can I ask why? If that was Karp’s plan, it doesn’t seem to make a whole lot of sense to tell everybody about it.
-
Comment on AI companies try to pay staff in AI tokens, not money in ~tech
Thanks for posting this - it will definitely generate discussion.
For the sake of posterity, the full tweet is this:
I am increasingly asked during candidate interviews how much dedicated inference compute they will have to build with Codex.
Pairing this with usage per user growing significantly faster than the number of users, it's pretty clear that compute will be something that is scarce.
That is pretty far from "AI companies try to pay staff in AI tokens, not money." In fairness to the author here, they are reacting to a stupid headline by Business Insider.
The author also criticizes some comments Sam Altman made:
"I mean a crazy idea, but in the spirit of crazy ideas is, if the world, there’s like eight roughly eight billion people in the world. If the world can generate eight quintillion tokens per year, if that’s the world, actually let’s say the world can generate 20 quintillion tokens per year. Tokens are like each word generated by an AI. Okay, just making up a huge number here. We’ll say 12 of those go to the normal capitalistic system, but eight of those eight quintillion tokens are going to get divided up equally among eight billion people. So everybody gets one trillion tokens and that’s your universal basic wealth globally."
Altman really likes the idea of made-up credit at OpenAI being the money now. Because he’s a crypto bro.
I don't think it's wholly reasonable to take a hypothetical scenario posed by Altman, particularly when Altman is saying, literally, "this is a crazy idea but you've asked me for crazy ideas," and go "aha!" in this way.
I post this because people are going to read this headline and think it's true, when as far as I can tell it's not.
-
Comment on What radicalized you? in ~talk
Today is my yearly reminder of what I consider to be the impetus of what "radicalized" me.
What is it?
-
Comment on New York Times quiz: Who’s a better writer: AI or humans? in ~tech
I guess I have to admit it: the prose is good.
I just want to say I admire your willingness to acknowledge something that doesn't align with your preferences.
-
Comment on New York Times quiz: Who’s a better writer: AI or humans? in ~tech
Nor am I really that interested in responding to what is essentially a ton of philosophical birdshot fired from a shotgun. Here's an argument but if that doesn't work what about this, and if that isn't what you think about about this, or this, and this! This isn't conducive to any actual discussion.
Ah, my mistake, I thought discussing is what we were doing. I stand corrected.
-
Comment on New York Times quiz: Who’s a better writer: AI or humans? in ~tech
If you’re interested in engaging, you may be interested to see my comment here.
-
Comment on New York Times quiz: Who’s a better writer: AI or humans? in ~tech
But what does “people make things and AIs don’t” actually mean? I don’t think even the steelman version of your argument holds up:
Let’s consider a spectrum between wholly-human work and wholly-AI work. The strongest version of your argument, I think, is regarding the wholly-AI end of the spectrum, since everything in the middle, where one is assisting the other, is both really debatable and also clearly not what you’re talking about. (I don’t think you would claim that if I write something and have a model help me copyedit it, I haven’t actually made anything.)[1]
Let’s also take a very rigorous definition of the word “make,” where we set it to mean both something new and something useful. That is to say, neither novel gibberish nor, like a lot of code, a trite rephrasing of something that already exists: potentially very useful, but not very new.
Even under both of those maximally stringent criteria, models, unassisted by humans, are making things that are both new and useful. This is a paper penned by a legendary computer scientist over at Stanford University showing that an AI model created a novel solution to an unsolved problem. (And this is a blog post explaining what the hell he’s talking about.) There are many such examples — this one is imperfect, but the totality of evidence supports my point, I think. Even if we take the maximally negative interpretation of AI thoughtfulness - that they’re merely generating output based on input - something novel has still been created.
Now, I would tend to agree that AI does not typically create things that are completely new. To my eye, what they tend to be best at (if we’re talking about novelty) is solving problems that involve a good amount of brute force, not (yet) coming up with entirely new fields of inquiry. But if that’s the standard, then only a minuscule fraction of humans have ever made something either.
If the argument is that only conscious beings can create things, I would point to several hundred years of philosophy debating whether humans are actually conscious; it’s quite difficult to construct a definition of consciousness that includes humans but excludes AI. Even things that seem obvious at first (humans can remember things!) become very troublesome very quickly: do people with dementia not possess consciousness?
If the argument is, basically, “flesh-and-blood creatures create things, machines can fuck off,” then I would call that a distinction without a difference, though I can also respect it.
[1] There is an argument here that because a human is always(?) involved in the prompting, the AI is never actually creating something completely de novo. I don’t think that argument holds: the value of a work is in creating it, not in having the idea in the first place. The Lord of the Rings isn’t beautiful because JRR Tolkien had an idea about some guys walking to throw jewelry into a mountain, it’s beautiful because of qualities inherent to the work itself. If someone else had given JRR Tolkien a document - even a very comprehensive document! - with all the plot points before he’d started writing, would we have any cause to say that he hadn’t actually created anything? That the thousands of pages of his work didn’t count because someone had given him a 20-page outline? Do none of the fantastically creative Harry Potter fan fictions count as creating something because they’re building on JK Rowling’s world? It’s very difficult to construct a logically coherent reason that prompting an AI means none of what it produces counts as creating something. There may be other reasons to conclude such a thing, but this isn’t it.
You know, I refuse on principle. I recognize that you're sincerely not bothered whether I'm AI or not, but many tildes users do care - very, very much so. So much so that there was a big wave of purity testing a few months ago ("did you use AI to write that comment?! I see emdashes!!") that the site only recently got over, and I am loath to see it return in any form, even in pseudojest. (I am still not sure whether you're doing performance art, which I mean as high praise).
So no. You can believe either that I saw no reason to summarize your own work back to you and generally think that "when X did Y, I thought..." is the most pedestrian form of engagement, or believe that I'm actually an LLM with access only to tildes. Either is fine with me, and I'll take it as a compliment that you find my writing error-free enough to have come from AI.