36 votes -
Is chain-of-thought reasoning of LLMs a mirage? A data distribution lens.
28 votes -
If you're a programmer, are you ever going to believe an AGI is actually 'I'?
First, I am emphatically not talking about LLMs.
Just a shower thought kinda question. For most people, the primary issue is anthropomorphizing too much. But I think programmers see it differently.
Let's say someone comes up with something that seems to walk and talk like a self-aware, sentient, AGI duck. It has a "memories" db, it learns and adapts, it seems to understand cause and effect, actions and consequences, truth v falsehood, it passes Turing tests like they're tic-tac-toe, it recognizes itself in the mirror, yada.
But as a developer, you can "look behind the curtain" and see exactly how it works. (For argument's sake, let's say it's a FOSS duck, so you can actually look at the source code.)
Does it ever "feel" like a real, sentient being? Does it ever pass your litmus test?
For me, I think the answer is, "yes, eventually" ... but only looong after other people are having relationships with them, getting married, voting for them, etc.
31 votes -
Reddit will block the Internet Archive
58 votes -
Question - how would you best explain how an LLM functions to someone who has never taken a statistics class?
My understanding of how large language models work is rooted in my knowledge of statistics. However, a significant number of people have never been to college, and statistics is a required course only for some degree programs.
How should ChatGPT etc. be explained to the public at large to avoid the worst problems that are emerging from widespread use?
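For what it's worth, one way to get the core idea across without any statistics vocabulary is a toy next-word guesser. The sketch below is purely illustrative (nothing from this thread, and the sample text is made up): it tallies which word tends to follow which in a tiny training text, then "writes" by repeatedly picking a likely next word. Real LLMs use neural networks over far longer contexts instead of word-pair counts, but the basic move, predicting the next token from patterns in training text, is the same.

```python
import random
from collections import Counter, defaultdict

# Toy "training data". A real model ingests a large chunk of the internet.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
)

# "Training": tally how often each word follows each other word.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

# "Generation": start somewhere and keep sampling a plausible next word,
# weighted by how often it followed the current word in the training text.
def generate(start: str, length: int = 10) -> str:
    output = [start]
    for _ in range(length):
        candidates = follow_counts[output[-1]]
        if not candidates:
            break  # never saw this word in training
        next_words = list(candidates)
        weights = list(candidates.values())
        output.append(random.choices(next_words, weights=weights)[0])
    return " ".join(output)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat chased"
```

The punchline for a lay audience: the program never "knows" anything about cats or dogs; it only knows what tends to come next. Scale that up to billions of parameters and whole paragraphs of context, and you get something that sounds fluent for exactly the same reason, which is also why it will confidently produce plausible-sounding falsehoods.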
37 votes -
Nvidia, AMD agree to pay US government 15% of AI chip sales to China
21 votes -
AI industry horrified to face largest copyright class action ever certified
63 votes -
I spent months living with smart glasses. People talk to me differently now.
35 votes -
Over twenty-one days of talking with ChatGPT, an otherwise perfectly sane man became convinced he was a superhero
62 votes -
Meta allegedly pirated terabytes of porn to trick the BitTorrent protocol into letting them pirate books faster
42 votes -
Web browsers with AI assistants built-in are coming
4 votes -
OpenAI announces $1.5 million bonus for every employee
22 votes -
Google’s healthcare AI made up a body part — what happens when doctors don’t notice?
30 votes -
In the future all food will be cooked in a microwave, and if you can’t deal with that then you need to get out of the kitchen
35 votes -
Is AI actually useful for anyone here?
Sometimes I feel like there's something wrong with how I use technology, or I'm just incredibly biased and predisposed to cynicism or something, so I wanted to get a pulse on how everyone else feels about AI, specifically LLMs, and how you use them in your professional and personal lives.
I've been messing with LLMs since GPT-3. I was initially very impressed by the technology, but that view has gradually evolved into a more nuanced one: I think they're very good at one specific thing and not great at anything else.
I feel like I'm increasingly a rarity among tech people, especially executives. I run cybersecurity for a medium-sized agency, and my boss is the CIO. Any time I, or any of her other direct reports, write a proposal, a policy, a report, or basically anything meant for a wide audience, she insists on us "running it through Copilot", which to her just means pasting the whole document into Copilot chat and taking the output.
It inevitably takes a document where I worked hard to balance tone, information, brevity, professional voice, and technical detail, and turns it into a bland, wordy mess. It's unusable crap that I then have to spend more time on to make it sound normal. My boss's "suggestions" and "ideas" are almost always obvious copy-pastes from Copilot chat, too.
I see people online talk about how LLMs have made them so much faster at development, but every time I've used them in that field, they can toss together a quick prototype of something I likely could have googled, yet there are frequently little hidden bugs in the code. If I try to use the LLM to fix those bugs, it inevitably makes things worse. Every time I've tried to use AI in a coding workflow, I've spent less time thinking about the control flow of the software and more time chasing down weird, esoteric bugs. Overall, it's never saved me any time at all.
I've used them as a quick web search, and while they do save me from trawling through a lot of the hellhole that is the modern internet, with its blogspam, ads, and nonsense, a lot of the time they just hallucinate answers. I've noticed they're decent at providing results when results exist, but when results don't exist, or I'm asking something that doesn't make sense, they fall flat on their face, because they'll just make things up in order to sound convincing and helpful.
I do see some niches where this stuff is useful. Summarizing large swathes of documents, where the accuracy of the summary doesn't matter much, is somewhat useful. If I were tasked to look through 300 documents and decide which ones were most relevant to a project, and I only had an hour to do it, I think that's a task it would do well with. I can't review or even skim 300 documents in an hour, and even though an LLM would very likely be wrong about a lot of it, at least that's something.
The thing is, I don't frequently run into tasks where accuracy doesn't matter. I doubt most people do. Usually when someone asks for an answer to something, or you want to actually do something useful, the hidden assumption is that the output will be correct, and LLMs are just really bad at being correct.
Meanwhile, the internet is full of AI evangelists who talk about their AI stacks made up of SaaS products I've never even heard of, chained together. They talk about how insanely productive it's made them, how it's like being superhuman, and how without it they'd be left behind.
I'm 99% sure that most of this is influencer clickbait capitalizing on FOMO to keep the shared delusion of LLMs' usefulness going, usually because they have a stake in the game. They either run an AI startup, are involved in a company that profits off AI being popular, are influencers who make AI content, or they just have Nvidia in their stock portfolios like so many of us do.
Is there anyone out there who feels this technology is actually super useful and doesn't fall into one of those categories?
If so, let me know. Also, let me know what I'm doing wrong. Am I just a Luddite? A crotchety old man? Out of touch? I'm fine if I am; I just want to know once and for all.
80 votes -
Perplexity AI is using stealth, undeclared crawlers to evade website no-crawl directives
35 votes -
No, AI is not making engineers 10x as productive: curing your AI 10x engineer imposter syndrome
27 votes -
The great LLM scrape
24 votes -
We're launching Stargate Norway, OpenAI's first AI data center initiative in Europe under our OpenAI for Countries program
9 votes -
Persona vectors: monitoring and controlling character traits in language models
13 votes -
One quirky anti-AI technique I've used is leaving in the typos
Ironically, AI has boomeranged from surpassing human intelligence to having us spot it like a dove in a pond. So now, leave in all the little flubs to make it a bit clearer that a person at least typed this on a keyboard, you know?
42 votes -
They’re putting blue food coloring in everything
83 votes -
Dave Barry found out about his death the way everybody finds out everything: from Google
23 votes -
Subliminal learning: Language models transmit behavioral traits via hidden signals in data
21 votes -
OpenAI's gold medal performance on the International Math Olympiad
14 votes -
What is your opinion whenever you see news/opinion that tech companies are relying more on chatbots rather than junior developers/interns?
I see that headline from time to time. I'm not really sure how prevalent it is, and it's pretty disappointing news.
but I also can't help but think:
- the news articles are probably overblowing it, and it's probably not as prevalent as it's being portrayed
- any tech company doing that is shooting themselves in the foot. In total, I was an intern at various companies for a little under three years. I don't doubt that the work I did for the majority of my co-ops (writing unit tests, small scripts, etc.) could all have been done by a chatbot. But those co-ops were invaluable for (1) understanding what is expected of me in a professional environment, (2) giving me a basic idea of how to code in a professional environment, and (3) giving me a lot of perspective on which technologies and tools I should spend spare time learning, because my university very much focused on dinosaur-era languages in the classes that taught any coding-related skills. Same for the friends I went to uni with. So all I can think is: maybe in the short term they're saving money by not hiring interns/co-ops/junior devs to do work that can be done by a bot, but in the long term that will reduce the number of intermediate/senior devs on the market, which means they'll be in higher demand and cost more money.
26 votes -
OpenAI to release web browser in challenge to Google Chrome
37 votes -
What are your AI-generated guilty pleasures?
Most people here dislike AI, more specifically LLM-generated content, for reasons such as environmental impact, stealing people's work, etc. Despite that, is there anything you enjoy?
I've been listening to this artist's music for a while. It's mostly video game music "re-imagined by AI" into City Pop and other styles. The artist says they use AI to generate samples, then do the rest of the work like any producer would. I have no idea if that's true, but I gotta admit that most of it is really good.
Today I also watched some "AI ASMR" videos out of curiosity. It's stupid, I know. But watching a knife cut glass can be so damn satisfying. I'm sorry, planet.
45 votes -
Shouldn't somebody *stop* "Meta Superintelligence Labs"?
Noted smoked meats enthusiast Mark Zuckerberg has recently been running around collecting ML experts for a project involving an organization called Meta Superintelligence Labs, which is set to feature compute clusters with names like "Prometheus" and "Hyperion", and which will attempt to "deliver" superintelligence.
Isn't this sort of behavior on the list of things people are absolutely not to be allowed to do? Or has something changed and we now feel it's safe for Mark Zuckerberg to be allowed control of a piece of equipment that can outsmart all his enemies and also Mark Zuckerberg? Are we all safely convinced he will fail?
If it cannot be permitted, who is responsible for not permitting it?
26 votes -
OpenAI can rehabilitate AI models that develop a “bad boy persona”
14 votes -
Grok searches for Elon Musk's opinion on controversial questions
39 votes -
Paying for AI: Have you found it to be worth it?
I've been using AI more and more, and I'm getting some value out of it. I'm curious whether paying for the paid tiers of the big players (in particular, ChatGPT and Claude) provides significantly better responses.
I'm aware that the paid tiers offer more features and benefits than just higher response quality. For me, those are just nice-to-haves, and not my primary concern.
My main uses of AI are software development and foreign language learning. So far, I've used the free versions of ChatGPT and Claude, as well as "proxies," including GitHub Copilot and Duck.ai. For both my use cases, I've found the responses usually good and helpful. I just maintain a healthy skepticism about the correctness of the answers, and challenge, test, and double-check where needed (especially testing suggested code when developing software).
Have you found response quality to be noticeably better with paid tiers? It occurred to me that the cost of an AI subscription is in the same ballpark as a subscription to a language-learning service like Duolingo. So if the language-learning value I get from AI approaches what I'd get from a dedicated service (even if it doesn't quite match or exceed it), then getting general AI in the same subscription should make it quite valuable and worth it, not to mention possibly better software development assistance in the same package.
32 votes -
The future of forums is lies, I guess
63 votes -
AI videos have never been better: can you tell what's real?
31 votes -
Interview with Google's Android leader Sameer Samat
6 votes -
AI coding tools make developers slower but they think they're faster, study finds
40 votes -
The rise of Whatever
92 votes -
That white guy who can't get a job at Tim Hortons? He's AI.
22 votes -
Pay up or stop scraping: Cloudflare program charges bots for each crawl
46 votes -
Decrypted Apple Intelligence safety filters
18 votes -
Sam Altman says Meta offered OpenAI staff $100 million bonuses, as Mark Zuckerberg ramps up AI poaching efforts
37 votes -
China hosts first fully autonomous AI robot football (soccer) match
7 votes -
Cats confuse reasoning LLM: Query-agnostic adversarial triggers for reasoning models
24 votes -
You're going to use Gemini on Android whether you like it or not
48 votes -
TikTok is being flooded with racist AI videos generated by Google’s Veo 3
35 votes -
Content Independence Day: No AI crawl without compensation!
14 votes -
An industry group representing almost all of Denmark's media outlets, including broadcasters and newspapers, has said it's suing ChatGPT's parent company OpenAI for using its content
13 votes -
Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task
54 votes -
Meta poaches three OpenAI researchers: Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai
13 votes -
US Federal judge sides with Meta in lawsuit over training AI models on copyrighted books
22 votes