Algernon_Asimov's recent activity
-
Comment on Megathread #5 for news/updates/discussion of AI chatbots and image generators in ~tech
-
Comment on Megathread #5 for news/updates/discussion of AI chatbots and image generators in ~tech
Algernon_Asimov I'm just going to have to continue to wear the opprobrium of people like @Adys by calling them "autocomplete algorithms" or "text generators". I might not change the world, but maybe one person reading what I write will realise these programs aren't actually intelligent, can't actually think, and don't actually know anything.
-
Comment on Megathread #5 for news/updates/discussion of AI chatbots and image generators in ~tech
Algernon_Asimov But they're not being called large language models. They're being called "AIs" and "chatbots", when they're neither intelligent nor can they chat.
"Autocomplete" makes it very clear to people not only what they are, but how they work. So does "text generator".
And why does it bother you so much what I call them?
-
Comment on Megathread #5 for news/updates/discussion of AI chatbots and image generators in ~tech
Algernon_Asimov "I do think that the tools we’re seeing now are so fundamentally different in power and application to my phone keyboard that implying they’re in the same category is equally misleading."
Fine. Let's call them "text generators" instead.
But not "artificial intelligence". That's a loaded term, and it's misleading a lot of people.
-
Comment on Megathread #5 for news/updates/discussion of AI chatbots and image generators in ~tech
Algernon_Asimov "It's getting frankly tiresome to read your responses here,"
It's also getting tiresome reading other people praise these autocomplete algorithms as being intelligent.
If they're so smart, how do they keep producing nonsensical non-existent "information"?
As for your papers...
The "Theory of Mind" paper merely observes that a chatbot can correctly deduce facts when you give it those facts. If you tell it that a bag contains popcorn, it tells you that the bag contains popcorn. If you tell it that the label on the bag says it contains chocolate, it tells you that a person reading the label will assume it contains chocolate.
It's given facts, and it regurgitates those facts.
And, as regards the emergent abilities paper:
What GPT-3.5 currently cannot do:
- It can't change its incorrect statements based on new information intended to correct those statements.
- It can't perform logical reasoning.
- It can't read the internet (but that's not relevant here).
-
Comment on Megathread #5 for news/updates/discussion of AI chatbots and image generators in ~tech
Algernon_Asimov "I don't really think calling them autocomplete algorithms is fair post-RLHF."
Well, they're still not artificial intelligences, and as long as people call them "AI", that's going to raise false expectations about what they can do.
It's been demonstrated time and time again that they simply cannot answer questions of fact, beyond the most simple, such as your "What is the capital of..." example.
If not "autocomplete algorithm", then we should at least call them "chatbots" or maybe "automated text generators". "AI" is a misnomer for these programs, and it should be banned from all discussions of these text-generating algorithms.
-
Comment on Megathread #5 for news/updates/discussion of AI chatbots and image generators in ~tech
Algernon_Asimov "If you think of AI chat as a source of hints, the hints are often quite good."
Even considered as a source of hints, the AI's answer in this case was objectively bad. Out of seven pieces of information it provided, only one item even existed, and that one actual piece of information wasn't even relevant to the question being asked.
Maybe a different, more gossipy personality would help people get this? It could say things like "rumor has it" or "I heard from a friend that..." Or "I think I read somewhere..."
Or, it could be programmed to simply say "I don't know that information. I don't know any information."
-
Comment on Kenyan player expelled after pretending to be a woman to win lucrative prize in ~games.tabletop
Algernon_Asimov Oh, great. This impostor has reinforced two bigoted stereotypes, and made things that much harder for two groups of people.
First, there's the bigotry from people like my parents against Muslims. My mother has literally said that you can't trust Muslims, particularly women, because "you never know who's in those outfits. It could be a man with a bomb." So, this impostor has just reinforced that stereotype.
Second, there's the bigotry from people against transgender people, particularly women, in sport. A common argument is that a transgender woman is just a man pretending to be a woman, to gain an advantage when competing in sport. So, this impostor has just reinforced that stereotype.
Thanks for that, you selfish idiot!
-
Comment on Megathread #5 for news/updates/discussion of AI chatbots and image generators in ~tech
Algernon_Asimov "it can be astonishingly useful, for all kinds of purposes... but seeking truthful, factual answers is very much not one of them."
I saw an example of this out in the wild, this week on Reddit.
It was in a linguistics subreddit. Someone asked a question about a particular linguistic phenomenon. Someone else replied and said "ask chat gpt (not satire)". So the asker did ask ChatGPT, and copy-pasted its answer into the thread. The asker seemed quite happy with the answer: "Normally I avoid ChatGPT because it tends to make stuff up 9 times out of 10 but this seemed to work."
The autocomplete algorithm provided:
- a name for the phenomenon the asker inquired about;
- the titles of five different papers about the phenomenon; and
- the name of a linguist who had written one of those papers.
I got curious, and fact-checked every item in the reply. Only one item existed: the linguist. But she hadn't written any papers about this phenomenon, and the phenomenon wasn't called what the algorithm said it was called (the name provided didn't exist anywhere else on the internet, based on my searching).
Of course I provided the results of my fact-checking in the thread.
"Convincing people that these aren’t a sentient AI out of a science fiction story"
That might be easier if people didn't go around calling these things "artificial intelligences". Just call them "autocomplete algorithms", which is what they are. That would manage users' expectations a lot better. People know what autocomplete is, what it does - and, more importantly, what it doesn't do.
-
Comment on I've (mostly) left reddit. It's a lot like quitting cigarettes. in ~talk
Algernon_Asimov "I wasn't here in the beginning, but was tildes ever meant to be a reddit competitor?"
Fuck, yes!
- Here's how it was announced on Reddit: "An ex-Reddit administrator is aiming to create the Reddit we've always wanted–Tildes is a non-profit community site driven by its users' interests"
- And here's how Deimos described Tildes in his first blog post about it: "Combined with my own experiences, everything I've learned affects how I've been building Tildes, a new link-aggregator-style community site". Note: Reddit is also a link-aggregator.
- Here are all the discussions about Tildes in /r/RedditAlternatives on Reddit: search results
"With how people couldn't even sign up for the site without an invite code, it seems like the goal was not growth but rather that small community of like-minded people you mentioned."
Actually, the goal was controlled growth, especially because Deimos was building the site from the ground up, as he went.
- Here's how Deimos explained the intentions: "Tildes will stay invite-only for the foreseeable future. There's a ton to figure out and build before we can even consider making the site open to the public."
Basically, Tildes is in a beta-testing stage that just never ended.
-
Comment on I've (mostly) left reddit. It's a lot like quitting cigarettes. in ~talk
Algernon_Asimov Tildes didn't grow. It stagnated. It became stale and repetitive.
Some people like the small intimate atmosphere here. That's not what I signed up for.
I was looking forward to the next big Reddit competitor, with thousands of members posting hundreds of items. I put in a lot of time and effort in the early days to help things along, expecting to build a thriving busy community. Instead, we ended up with a closed group with a handful of people posting a few items, all about the same things, because that handful of people has a limited range of interests (because of the low numbers of people, not because of the quality of those people).
Tildes is okaaayyyyy... but it could have been so much more. The bones are good, but there's not enough flesh on them. Even the creator has all but abandoned it these days.
( How's that, @NaraVara? :P )
-
Comment on I've (mostly) left reddit. It's a lot like quitting cigarettes. in ~talk
Algernon_Asimov "No, the difference is that reddit (and tildes, and usenet) are structured around ideas, whereas facebook etc. are structured around personalities."
Thank you for articulating something I felt subconsciously, but could never put into words. This is exactly how I feel about these sites.
I gave up Reddit, too! Until I didn't. :)
I have an "Algernon_Asimov" account over there. I've been on Reddit for nearly 12 years now, and I was a high-profile moderator for at least some of those years: I was on the mod team for /r/Help and /r/ModHelp for a few years, among others.
And it's not like Reddit was the best place on the internet, even then. 10 years ago, a co-mod and I would often discuss how our little boutique subreddit focussed on high-quality non-toxic discussion was an island of sanity surrounded by an ever-rising sea of inanity. At times, we felt like King Cnut, demonstrating the futility of trying to hold back the tide. But we held on anyway!
But then the redesign happened. And happened. And just kept happening.
And I was caught in an invidious position, as a moderator of /r/Help and /r/ModHelp, of not only having to explain these changes, but also having to defend them (or at least remain neutral). Eventually, I couldn't take it any longer. I stepped down from most of the subreddits I moderated, particularly /r/Help and /r/ModHelp, and heaved a huge sigh of relief.
Along the way, I had created an alternate account, to focus on some more specialised posting and commenting - and ended up becoming a big fish in a small pond in a limited number of subreddits, then a moderator of a few... and now that secondary alt account is the one I spend most time on. It's just a couple of small-scale pet projects, rather than large-scale subreddits, but they're mine, so I can't give them up.
When Tildes turned up, I signed up eagerly. I transferred most of my "Algernon_Asimov" attention here. But then this site proved disappointing in its way.
So now I mostly just run my little pet projects on my secondary Reddit account, and I'm content with that. It's a big step down from how involved I was with Reddit between 5 and 10 years ago, and that's probably a good thing.
-
Comment on What are you reading these days? in ~books
Algernon_Asimov I've got three books on the go at the moment.
My high-end serious reading is The Words That Made Australia: How a Nation Came to Know Itself. It's a series of essays from the past 100 years, about the Australian way of life over the decades. I read one essay every week or so. I've read a lot of references to these various essays, but now I'm reading the sources. For example, I've learned that the phrase "the lucky country" was not the compliment everyone thinks it is.
My mid-range casual reading is Exiles: The Uplift Storm Trilogy. I was recently prompted to re-read the Uplift Trilogy, which I've always enjoyed. I've previously tried to finish all six books, but I got bogged down in the fourth book (Brightness Reef), and never got further than that. I read somewhere recently that some big mysteries pay off in the later books, so I'm currently slogging through Brightness Reef again, hoping to make it to the end this time. And it is a slog. I remember why I didn't finish it last time, but I'm determined this time.
My low-end comfort reading is The Narnia series - yet again. Just something light to wind down with before bed.
It is wonderful having an e-reader. I can carry all three of these books/series around with me wherever I go!
-
Comment on Megathread #4 for news/updates/discussion of AI chatbots and image generators in ~tech
Algernon_Asimov Australian whistleblower to test whether ChatGPT can be sued for lying
The creator of the wildly popular artificial intelligence writing tool ChatGPT is facing the threat of a landmark defamation claim in Australia after its chatbot falsely described a whistleblower in a bribery scandal as being one of its perpetrators.
[...]
Asked “What role did Brian Hood have in the Securency bribery saga?“, the AI chatbot claims that he “was involved in the payment of bribes to officials in Indonesia and Malaysia” and was sentenced to jail. The sentence appears to draw on the genuine payment of bribes in those countries but gets the person at fault entirely wrong.
-
Comment on Megathread #3 for news/updates/discussion of AI chatbots and image generators in ~tech
Algernon_Asimov
At a bare minimum, a text-building program would need an actual understanding of the words it's compiling. It would need to understand that "blue apple" is factually incorrect (to re-use an example I discussed in the other thread), because real-world apples just don't come in blue. This would require an understanding of what an apple is, what blue is, what a colour is, what truth is, and so on. At the moment, it can produce strings of text which say these things, but it doesn't truly understand what it's saying.
It would need self-awareness. Not full awareness of its self, but awareness of what it's typing. It would need to review the words it's compiling, in real time, to realise that it is just about to produce the phrase "blue apple", and that this phrase does not match reality. It would not be enough for someone to point out to it in hindsight that apples aren't blue and for it to agree, or for someone to ask "Are apples blue?" and for it to say "no". It would need to proactively and preemptively review its own text to recognise errors of fact as they arise, and correct them in real time. (I expect that, at the speed computers operate, we would not notice this self-correction occurring. We would not see the false phrase "blue apple", because the microseconds required for the software to recognise this, check this, and correct this, are too fast for us to notice any delay.)
That's not to say that it could never write "blue apple". I've written that phrase a few times on Tildes in the past few weeks. But I have a context for saying it, even though it's false. An intelligent program would have a context for saying untrue things, such as knowingly writing fiction, or creating a contrast between falsity or truth, or discussing falseness. It would not produce an untrue statement and present it as truth.
Other people have talked about opinions. I don't think an intelligent program would need to have opinions.
I've also considered whether an intelligent program would need to be creative, in the sense that it could produce an idea that hasn't been thought of before, rather than just regurgitating humans' ideas as it currently does. I'm not sure that this is a requirement for intelligence. Something could be intelligent without being creative.
But it does need knowledge and self-awareness. There are possibly other traits that I haven't thought of, and wouldn't think of until someone plonked a program in front of me and said "Okay, it has knowledge and self-awareness. Is that good enough for you? What more do you want?" My interrogation of it would then produce a gap that I haven't thought of yet. Or not. Maybe that would be enough.
In this vein, I recently read a blog written by our own @skybrian, which pointed out that ChatGPT doesn't do anything when it's not producing text. In between responses, it's just lying fallow. There's nothing doing any thinking in between responses. That made me aware that an intelligent program would think even when it's not being asked to. It's obvious when you consider it, but I hadn't considered it until @skybrian pointed it out (thanks, @skybrian!).
So, there are probably other traits I haven't considered, and wouldn't consider until I'm interacting with a program that someone says is intelligent, but I realise it just isn't for some reason.
-
Comment on Does your flag fail? CGP Grey grades the state flags of the USA! in ~design
Algernon_Asimov Well... yes. That's exactly what the explanation says happened. (Without the modern references.)
-
Comment on Does your flag fail? CGP Grey grades the state flags of the USA! in ~design
Algernon_Asimov It is kinda cool.
-
Comment on Does your flag fail? CGP Grey grades the state flags of the USA! in ~design
Algernon_Asimov If you watch the video, Grey would class all these flags in the F grade, just like all the U.S. flags that are blue with a seal on them.
-
Comment on Does your flag fail? CGP Grey grades the state flags of the USA! in ~design
Algernon_Asimov That video really needs a stop-frame at the end, showing all the flags on the grading grid.
-
Comment on Does your flag fail? CGP Grey grades the state flags of the USA! in ~design
Algernon_Asimov Wow. All Australia's state flags would fail, according to Grey's criteria. They're practically identical, except for the badges on the fly half. The territories' flags are better, but the state flags are very consistent, boring, and non-individual.
That's fine. I'll get out, and leave you to your AIs.