14 votes -
What is your opinion whenever you see news/opinion that tech companies are relying more on chatbots rather than junior developers/interns?
I see that in the headline from time to time. Not really sure how prevalent it is and it's pretty disappointing news.
but I also can't help but think:
- the news articles are probably overblowing it and it's probably not as prevalent as it's being portrayed
- any tech company doing that is shooting itself in the foot. In total, I was an intern at various companies for a little under 3 years. I don't doubt that the work I did for the majority of my co-ops could all have been done by a chatbot: writing unit tests, small scripts, etc. But those co-ops were invaluable to me for (1) understanding what is expected of me in a professional environment, (2) getting a basic idea of how to code in a professional environment, and (3) gaining a lot of perspective on what technologies and tools I should spend spare time learning, because my university very much focused on dinosaur-era languages in the classes that taught any coding-related skills. Same for the friends I went to uni with. So all I think is: maybe in the short term they're saving money by not hiring interns/co-ops/junior devs to do work that can be done by a bot, but in the long term that will reduce the number of intermediate/senior devs on the market, which means they'll be in higher demand and cost more money.
26 votes -
OpenAI to release web browser in challenge to Google Chrome
37 votes -
What are your AI-generated guilty pleasures?
Most people here dislike AI, more specifically LLM generated content, for reasons such as environmental impact, stealing people's work, etc. Despite that, is there anything that you enjoy?
I've been listening to this artist's music for a while. It's mostly video game music "re-imagined by AI" into City Pop and other styles. Artist says they use AI to generate samples, then do the rest of the work like any producer would. I have no idea if it's true or not, but I gotta admit that most of it is really good.
Today I also watched some "AI ASMR" videos out of curiosity. It's stupid, I know. But watching a knife cut glass can be so damn satisfying. I'm sorry, planet.
45 votes -
Michael Levin - "Communication With Intelligence in Unconventional Embodiments"
5 votes -
Death by a thousand slops | daniel.haxx.se
36 votes -
Shouldn't somebody *stop* "Meta Superintelligence Labs"?
Noted smoked meats enthusiast Mark Zuckerberg has recently been running around collecting ML experts for a project involving an organization called Meta Superintelligence Labs, which is set to feature compute clusters with names like "Prometheus" and "Hyperion", and which will attempt to "deliver" superintelligence.
Isn't this sort of behavior on the list of things people are absolutely not to be allowed to do? Or has something changed and we now feel it's safe for Mark Zuckerberg to be allowed control of a piece of equipment that can outsmart all his enemies and also Mark Zuckerberg? Are we all safely convinced he will fail?
If it cannot be permitted, who is responsible for not permitting it?
26 votes -
Most AI struggles to read clocks and calendars
23 votes -
OpenAI can rehabilitate AI models that develop a “bad boy persona”
14 votes -
Grok searches for Elon Musk's opinion on controversial questions
39 votes -
Paying for AI: Have you found it to be worth it?
I'm starting to use AI increasingly, and am getting some value out of it. I'm curious if paying for paid tiers of the big players (in particular, ChatGPT and Claude) provides significantly better responses.
I'm aware that the paid tiers offer more features and benefits than just higher response quality. For me, those are just nice-to-haves, and not my primary concern.
My main uses of AI are software development and foreign language learning. So far, I've used the free versions of ChatGPT and Claude, as well as "proxies," including Github Copilot and Duck.ai. For both my use cases, I've found the responses usually good and helpful. I just maintain a healthy skepticism about the correctness of the answers, and challenge, test, and double check where needed (especially testing suggested code when developing software).
Have you found response quality to be noticeably and significantly better with paid tiers? I was just randomly thinking, and it occurred to me that the cost of an AI subscription is in the same ballpark as a subscription to a language learning service like Duolingo. So, if I can get value from AI that approaches what I'd get from a dedicated language learning service (even if it doesn't quite match or exceed it), then also getting the value of general AI in the same subscription should make things quite valuable and worth it. Not to mention possibly getting better software development assistance in the same package.
32 votes -
The future of forums is lies, I guess
63 votes -
Learning to Be Me (1990)
23 votes -
AI videos have never been better: can you tell what's real?
31 votes -
Some AI music I generated
12 votes -
Interview with Google's Android leader Sameer Samat
6 votes -
AI coding tools make developers slower but they think they're faster, study finds
40 votes -
The rise of Whatever
92 votes -
'I can't drink the water' - life next to a US data centre
26 votes -
That white guy who can't get a job at Tim Hortons? He's AI.
22 votes -
I wrote my first Chrome extension to simplify Wikipedia articles
15 votes -
Pay up or stop scraping: Cloudflare program charges bots for each crawl
46 votes -
Decrypted Apple Intelligence safety filters
18 votes -
Exploring the dangers of AI in mental health care
15 votes -
Sam Altman says Meta offered OpenAI staff $100 million bonuses, as Mark Zuckerberg ramps up AI poaching efforts
37 votes -
China hosts first fully autonomous AI robot football (soccer) match
7 votes -
Cats confuse reasoning LLM: Query-agnostic adversarial triggers for reasoning models
24 votes -
You're going to use Gemini on Android whether you like it or not
48 votes -
'Positive review only': Researchers hide AI prompts in papers to influence automated review
29 votes -
TikTok is being flooded with racist AI videos generated by Google’s Veo 3
35 votes -
Content Independence Day: No AI crawl without compensation!
14 votes -
Can AI-generated photos be art?
24 votes -
AI’s ability to read and summarize is making it a useful tool for scholarship
18 votes -
An industry group representing almost all of Denmark's media outlets including broadcasters and newspapers has said it's suing ChatGPT's parent company OpenAI for using its content
13 votes -
Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task
54 votes -
Meta poaches three OpenAI researchers: Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai
13 votes -
US Federal judge sides with Meta in lawsuit over training AI models on copyrighted books
22 votes -
User-friendly and privacy-friendly LLM experience?
I've been thinking perhaps I'll need to get one of the desktop LLM UIs. I've been out of touch with the state of the art of end-user LLM products, as I've been exclusively using LLMs via API, but tech-y people (who are not developers) mostly talk about the end-user products that I lack knowledge of.
Ethical problems aside, the problem with non-API usage is that, even if you pay, I can't find one that has a better privacy policy than the API. And the problem with the API is that it is not as good as the finished apps unless you want to reinvent the wheel. The apps may also include ads in the future, while the API technically cannot, as that would affect some downstream use cases.
| Provider | Data Retention (API) | Data Retention (Consumer) | UI-only features |
|---|---|---|---|
| ChatGPT Plus | 30 days, no training | Training opt-out, 30 days for temp. chat, unknown retention otherwise | Voice, Canvas, image generation in chat, screensharing, mobile app |
| Google AI Pro | 0 | 72 hours if you disable history, or up to 3 years and trained upon otherwise | Android assistant, Canvas, AI in Google Drive/Docs, RAG (NotebookLM), podcast generation, browser use (Mariner), coding (Gemini CLI), screensharing |
| Gemini in Google Workspace | See above | 0-18 months, but no human review/training | See above |
| Claude Pro | 30 days | Up to 2 years (no training without opt-in) | Coding, Artifacts, desktop app, RAG, MCP |

As this is a dual-use technology, the table doesn't include the extra retention period that applies if the provider detects abuse. Additionally, if you click thumbs up/down, the conversation may also be recorded for the provider's employees to review.
I don't think OpenWebUI, self-hosted models, etc. would suffice if they are not built to the same quality as the first-party products. I know I'm probably asking for something that doesn't exist here, but at least I hope this brings to people's attention that even if you're paying for the product, you might not get the same privacy protection as API users.
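For context on what "reinventing the wheel" means here, this is roughly the entire surface the API gives you; a minimal sketch using OpenAI's chat completions endpoint as an example (the model name is illustrative, and the helper names are mine):

```python
import json
import urllib.request

# Endpoint for OpenAI's chat completions API; other providers differ.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, api_key: str, model: str = "gpt-4o-mini"):
    """Assemble the HTTP request for a single chat completion.

    The model name is an illustrative assumption; check the provider's
    docs for current models and pricing.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    return urllib.request.Request(API_URL, data=body, headers=headers)

# Actually sending it requires a real key:
# with urllib.request.urlopen(build_request("Hello!", "sk-...")) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

Everything else the first-party apps offer (conversation memory, streaming, canvas, voice) you'd have to build on top of this yourself; the trade-off is that requests sent this way fall under the API-tier retention policy rather than the consumer one.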
15 votes -
Echo Chamber: A context-poisoning jailbreak that bypasses LLM guardrails
34 votes -
AI is transforming Indian call centers
26 votes -
The boss of mobile gaming giant Supercell says the industry needs to take bigger risks to compete
7 votes -
Anthropic wins key US ruling on AI training in authors' copyright lawsuit
27 votes -
The AI lifestyle subsidy is going to end
54 votes -
FilMaster: Bridging cinematic principles and generative AI for automated film generation
3 votes -
OpenAI is nabbing Microsoft customers, fueling partners’ rivalry
9 votes -
Curated realities: An AI film festival and the future of human expression
3 votes -
OpenAI slams US court order to save all ChatGPT logs, including deleted chats
45 votes -
Is pop culture a form of "model collapse?"
Disclaimer: I do not like LLMs. I am not going to fight you on if you say LLMs are shit.
One of the things I find interesting about conversations on LLMs is when I have a critique about them and someone says, "Well, it's no different than people." People are only as good as their training data; people misremember, misspeak, and make mistakes all the time; people will listen to you and affirm you as you think terrible things. My thought is that not being reliably consistent is a very real issue for automation. Still, I think it's excellent food for thought.
I was looking for new music venues the other day. I happened upon several, and as I looked at their menus and layouts, it occurred to me that I had eaten there before. Not there exactly, but in my city, and in others. The Stylish-Expensive-Small-Plates-Record-Bar was an international phenomenon. And more than that, I couldn't shake the feeling that it was a perversion of the original, alluring concept: to be in a somewhat secretive record bar in Tokyo where you'll be glared into the ground if you speak over the music.
It's not a bad idea. And what's wrong with evoking a good idea, especially if the similarity is unintentional? Isn't it helpful to be able to signal to people that you're like-that-thing instead of having to explain how you're different? Still, the idea of going just made me assume it'd be not simply like something I had experienced before, but played out and "fake." We're not in Tokyo, and people do talk over the music. And even if they didn't, there's silverware and such clanging. It makes me wonder if this permutation is a lossy estimation of the original concept, just chewed up, spat out, slurped, regurgitated, and expensively funded.
Other forms of conceptual perversion:
- Matters of Body Image - is it a sort of collapse when we go from wanting 'conventional beauty' to frankensteining features onto ourselves? Think fox eye surgeries, buccal fat removal, etc. Rather than wanting to be conventionally attractive, we aim for the related concept of looking like people who are famous.
- (still thinking)
15 votes -
Contra Ptacek's terrible article on AI
27 votes -
Disney files landmark case against AI image generator
16 votes