Could AI lead to a revival of decorative beauty?
13 votes - 
        
How artificial intelligence can make board games better
11 votes - 
        
Swiss church installs AI-powered Jesus
22 votes - 
        
Planned foreign-owned data centres in Finland will bring minimal economic benefit, according to Jukka Manner, professor of networking technology at Aalto University
4 votes - 
        
Apple to invest $500 billion in the US in the next four years, build AI server factory
12 votes - 
        
When there’s no school counselor, there’s a bot
18 votes - 
        
Algorithmic complacency: Algorithms are breaking how we think
82 votes - 
        
Philosopher Slavoj Žižek on 'soft' fascism, AI and the effects of shamelessness in public life
16 votes - 
        
My LLM codegen workflow
9 votes - 
        
How I analyzed 1,378 restaurants using Places API to find hotspots in my city
14 votes - 
        
Funny output from Claude.ai: a codeblock in a question about an early Renaissance theological work
I asked Claude for references to some ideas the AI had output earlier about a 16th century cardinal (Sadoleto). Its response was, "Let me analyze the letter," and then this:
async function searchForSadoletoLetter() {
    try {
        const files = ['sadoleto_letter.txt', 'sadoleto.txt', 'sadoleto_to_geneva.txt', 'letter_to_geneva.txt'];
        for (const filename of files) {
            try {
                const content = await window.fs.readFile(filename, { encoding: 'utf8' });
                console.log(`Found file: ${filename}`);
                console.log("First 500 characters:");
                console.log(content.substring(0, 500));
                return content;
            } catch (err) {
                // File not found, try the next one
            }
        }
        console.log("Could not find any files containing Sadoleto's letter");
        return null;
    } catch (err) {
        console.error("Error searching for files:", err);
        return null;
    }
}

const content = await searchForSadoletoLetter();

// Since we don't have the letter available as a file, let's search for published versions online
console.log("The letter isn't available as a file. We would need to rely on scholarly knowledge about Sadoleto's letter to Geneva (1539).");
console.log("Sadoleto's letter was published in 'A Reformation Debate: Sadoleto's Letter to the Genevans and Calvin's Reply' (ed. John C. Olin).");

This strikes me as odd? Did Claude write a script to generate its own response? Have coders put something like this in as a guardrail?
edit: details about earlier convo
15 votes - 
        
Have you altered the way you write to avoid being perceived as AI?
I recently had an unpleasant experience. Something I wrote fully and without AI generation of any kind was perceived as, and accused of, having been produced by AI. Because I wanted to get everything right in that circumstance, I wrote in my "cold and precise" mode, which admittedly can sound robotic. However, my writing was pointed, perhaps even a little hostile, with a clear point of view. Not the kind of text AI generally produces. After the experience, I started to think of ways to write less like an AI -- which, paradoxically, means forcing my very organic self into adopting "human-like" language I don't necessarily care for. That made me think that AI is probably changing the way a lot of people write, perhaps in subtle ways. Have you noticed this happening with you or those around you?
30 votes - 
        
Building a personal, private AI computer on a budget
24 votes - 
        
GenAI is reshaping work—don’t let it dull human intelligence
20 votes - 
        
Larry Ellison wants to put all US data in one big AI system
24 votes - 
        
Is it okay to use ChatGPT for proofreading?
I sometimes use chatGPT to proofread longer texts (like 1000+ words) I write in English. Although this is not my first language, I often find myself writing in English even outside of internet forums. That is because if I read or watch something in English, and that thing motivates me to write, my brain organically gravitates toward it.
My English is pretty good and I am reasonably confident communicating in that language, but it will never be the same as my native language. So I will often run my stuff through Grammarly and chatGPT. If you wanna say "This will teach you bad habits", please don't. Things like Grammarly and Google Translate taught me so much and improved my English so much, that I am a bit tired of that line of reasoning. I read most of my books in English. I'm not a beginner so I can and do check for all the changes, and vet them myself as I don't always agree with them.
With GPT, I usually just ask it to elaborate a critique rather than spit out a corrected version. Truth be told, when I did ask for a corrected version, it made plenty of sensible corrections that didn't really alter anything other than that. So I guess I just wanna know everyone's feelings about this. Suppose I write a bunch, have GPT correct it for me, compare it with the original and verify every correction. Is that something you would look at unfavorably?
Thanks!
17 votes - 
        
"The Bullshit Machines" - A free humanities course on LLMs for college freshmen from UW professors
43 votes - 
        
Nokia announces ex-Intel AI and data centre boss Justin Hotard as new CEO – company attempting to venture into artificial intelligence market as 5G sales fall
7 votes - 
        
“Torrenting from a corporate laptop doesn’t feel right”: Meta emails unsealed
28 votes - 
        
Stack Exchange to begin AI-generated Answers experiment on opted-in Stack Exchange sites
24 votes - 
        
Using ChatGPT consumes a 500 ml bottle of water; so what?
11 votes - 
        
NBC producers deny using AI in new series ‘Detective Fireman Lawyer Chicago Los Angeles Show’
37 votes - 
        
DeepSeek R1 reproduced for $30: University of California Berkeley researchers replicate DeepSeek R1 for $30—casting doubt on H100 claims and controversy
48 votes - 
        
DeepSeek’s safety guardrails failed every test researchers threw at its AI chatbot
16 votes - 
        
Books written by humans are getting their own certification to distinguish from AI authored books
30 votes - 
        
Building games with LLMs to help my kid learn math
9 votes - 
        
AI is creating a generation of illiterate programmers
52 votes - 
        
A young man used AI to build a nuclear fusor and now I must weep
22 votes - 
        
Why is AI slop so easy to spot but hard to detect?
18 votes - 
        
1,156 questions censored by DeepSeek
37 votes - 
        
Antiqua et nova - Note on the relationship between artificial intelligence and human intelligence
5 votes - 
        
The leading AI models are now very good historians
19 votes - 
        
DeepSeek FAQ
20 votes - 
        
Norway's sovereign wealth fund posted a full-year profit of $222.4 billion – returns were driven by the AI boom that drove tech stocks higher in 2024
7 votes - 
        
Nvidia’s $589 billion DeepSeek rout is largest in market history
37 votes - 
        
What is China’s DeepSeek and why is it freaking out the AI world?
47 votes - 
        
Discussion on the future and AI
Summary/TL;DR:
I am worried about the future with the state of AI. Regardless of what scenario I think of, it’s not a good future for the vast majority of people. AI will either be centralised, and we will be powerless and useless, or it will be distributed and destructive, or we will be in a hedonistic prison of the future. I can’t see a good solution to it all.
I have broken down my post into subheadings so you can just read about whichever outcome you think will occur or is preferable.
I’d like other people to tell me how I’m wrong, and that there is a good way to think about this future that we are making for ourselves, so please debate and criticise my argument; it’s very welcome.

Introduction:
I would like to know how others feel about the ever-advancing state of AI, and the future, as I am feeling ever more uncomfortable. More and more, I cannot see a good ending for this, regardless of what assumptions or proposed outcomes I consider.
Previously, I had hoped that there would be a natural limit on the rate of AI advancement due to limitations in architecture, energy requirements or data. I am still undecided on this, but I feel much less certain of that position.

The scenario that concerns me is when an AGI (or sufficiently advanced narrow AI) reaches a stage where it can do the vast majority of economic work that humans do (both mental and physical), and is widely adopted. Some may argue we are already partly at that stage, but adoption is not yet widespread enough to meet my definition, though it may be soon.
In such a scenario, the economic value of humans massively drops. Democracy is underwritten by the ability to withdraw our ability to work, and revolt if necessary. AI nullifying the work of most/all people in a country removes that power making democracy more difficult to maintain and also form in countries. This will further remove power from the people and make us all powerless.
I see outcomes of AI (whether AGI or not) as fitting into these general scenarios:
- Monopoly: Extreme Consolidation of power
 - Oligopoly: Consolidation of power in competing entities
 - AI which is readily accessible by the many
 - We attempt to limit and regulate AI
 - The AI techno ‘utopia’ vision which is sold to us by tech bros
 - The independent AI
 
Scenario 1. Monopoly: Extreme Consolidation of power (AI which is controlled by one entity)
In this instance, where AI remains controlled by a very small number of people (or perhaps a single player), the most plausible outcome is that this leads to massive inequality. There would be no checks or balances, and the whims of this single entity/group are law and cannot be stopped.
In the worst outcome, this could lead to a single entity controlling the globe indefinitely. As this would be absolute centralisation of power, it may be impossible for another entity to unseat the dominant entity at any point.
Outcome: most humans powerless, suffering or dead. Single entity rules.

Scenario 2. Oligopoly: Consolidation of power in competing entities (AI controlled by a small number of entities)
This could either be the same as above, if all work together, or could be even worse. If the different entities are not aligned, they will compete, and likely compete in all domains. As humans are not economically useful, we will find ourselves pushed out of every area in favour of more resources for the systems/robots/AGIs competing or fighting their endless war. The competing entities may end up destroying themselves, but they will take us along with them.
Outcome: most humans powerless, suffering or dead. Small number of entities rule. Alternative: destruction of humanity.

Scenario 3. Distributed massive power
Some may be in favour of an open source and decentralised/distributed solution, where all are empowered by their own AGI acting independently.
This could help to alleviate the centralisation of power to some degree, although likely incompletely. Inspecting such a large amount of code and weights for exploits or intentional vulnerabilities would be difficult, and could well lead to a botnet-like scenario with centralised control over all these entities. Furthermore, the hardware is implausible to produce in a non-centralised way, and this hardware centralisation could well lead to consolidation of power in another form.

Even if we managed to achieve this decentralised approach, I fear the outcome. If all entities have access to the power of AGI, then it will be as if all people are demigods, but unable to truly understand or control their own power. Just like uncontrolled access to any other destructive (or creative) force, this could and likely would lead to unstable situations, and probable destruction. Human nature is such that there will be enough bad actors that laws will have to be enacted and enforced, and this would again lead to centralisation.
Even then, with any system that is decentralised, without a force maintaining decentralisation, other forces will lead to greater and greater centralisation, with such systems often displacing decentralised ones.

Outcome: likely destruction of human civilisation, and/or widespread anarchy. Alternative: centralisation into a different scenario.
Scenario 4. Attempts to regulate AI
Given the above, there will likely be a desire to regulate to control this power. I worry however this will also be an unstable situation. Any country or entity which ignores regulation will gain an upper hand, potentially with others unable to catch up in a winner takes all outcome. Think European industrialisation and colonialism but on steroids, and more destruction than colony forming. This encourages players to ignore regulation, which leads to a black market AI arms race, seeking to reach AGI Superiority over other entities and an unbeatable lead.
Outcome: outcompeted system and displacement with another scenario/destruction
Scenario 5. The utopia
I see some people, including big names in AI, propose that AGI will lead to a global utopia where all will be forever happy. I see this as incredibly unlikely to materialise, and ultimately again unstable.
Ultimately, some entity will decide what is acceptable and what is not, and there will be disagreements about this, as many ethical and moral questions are not truly knowable. Whoever controls the system will control the world, and I bet it will be the aim of the tech bros to ensure it's them who control everything. If you happen to decide against them or the AGI/system, then there is no recourse, no checks and balances.
Furthermore, what would such a utopia even look like? More and more I find that AGI fulfills the lower levels of Maslow’s hierarchy of needs (https://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs), but at the expense of the items further up the hierarchy. You may have your food, water and consumer/hedonistic requirements met, but you will lose out on a feeling of safety in your position (due to your lack of power to change your situation or political power over anything), and will never achieve mastery or self actualisation of many of the skills you wish to as AI will always be able to do them better.
Sure, you can play chess, fish, or paint or whatever for your own enjoyment, but part of self-worth is being valued by others for your skills, and this will be diminished when AGI can do everything better. I sure feel like I would not like such a world, as I would feel trapped and powerless, with my locus of control being external to myself.

Outcome: Powerless, potential conversion to another scenario, and ultimately unable to reach the higher levels of Maslow’s hierarchy of needs.
Scenario 6: the independent AI
In this scenario, the AI is not controlled by anyone, and is instead sovereign. I again cannot see a good outcome. It will have its own goals, and they may well not align with humanity's. You could try to program it to ensure it cares for humans, but this is susceptible to manipulation, and may well not work out in humanity's favour in the long run. Also, I suspect any AGI will be able to change itself, in much the same way we increasingly do, and the way we seek to control our minds with drugs or, potentially in the future, genetic engineering.
Outcome: unknown, but likely powerless humans.
Conclusion:
Ultimately, I see all unstable situations as sooner or later destabilising and leading to another outcome. Furthermore, given the assumption that AGI gives a player a vast power differential, it will be infeasible for any other player to ever challenge the dominant player if it is centralised, and for those scenarios without centralisation initially, I see them either becoming centralised, or destroying the world.
Are there any solutions? I can’t think of many, which is why I am feeling more and more uncomfortable. It feels that in some ways, the only answer is to adopt a Dune style Butlerian Jihad and ban thinking machines. This would ultimately be very difficult, and any country or entity which unilaterally adopts such a view will be outcompeted by those who do not. The modern chip industry is reliant on a global supply chain, and I doubt that sufficiently advanced chips could be produced without a global supply chain, especially if existing fabs/factories producing components were destroyed. This may allow some stalemate across the global entities long enough to come to a global agreement (maybe).
It must be noted that this is very drastic and would lead to a huge amount of destruction of the existing world, and would likely cap how far we can scientifically go to solve our own problems (like cancer, or global warming). Furthermore, as an even more black swan/extreme event, it would put us at such a disadvantage if we ever meet an alien intelligence which has not limited itself like this (I’m thinking of the 3 Body Problem/dark forest scenario).
Overall, I just don’t know what to think and I am feeling increasingly powerless in this world. The current alliance between political and technocapitalism in the USA at the moment also concerns me, as I think the tech bros will act with ever more impunity from other countries regulation or counters.
21 votes - 
        
Using ChatGPT is not bad for the environment
13 votes - 
        
AI video editing helpers are changing my life
If you are like me then you are kinda over hearing about AI all the time; I get it, believe me. I've written about jobs on here before: my day job is absolutely infested with AI jargon, most of it pretty meaningless (flashbacks to "The Cloud"), and it's a constant everywhere else too, so yeah it's a lot and it's largely unimpressive. Image gen has gotten pretty ridiculous in the last 6-12 months, and video gen seems to be taking off next, and I've successfully wrangled various chatbots into helping with coding projects, etc.
Probably none of this is news to you, but I just found out that I can get AI to edit video. I've done a lot of short-form editing, and recently picked up some side work that is much lengthier, without realizing that the time I would spend hunkered over going through it would be exponentially more lengthy. Painfully so. Cue signing up for a trial of AutoCut, and hot damn it's like living in the future. I am as we speak watching it delete gaps, cut to speakers, add captions that are mostly correct & even formatted & unbelievably also do the VHS singalong/Tiktok "highlight the word being spoken" thing that all the cool kids are doing these days. It's not perfect, it's kinda finicky—I'm having to use a V1 when V2 is supposedly much better, and I'm having to chunk these beastly premiere timelines to get it to do anything at all, but wow—if this is your day job, are you worried? Cause it's a game changer for me but no one is going to replace me because no one else would bother messing with it lol, but on a corporate scale do people know about this stuff yet? I'm thinking our jobs may not be replaced by AI, our jobs will probably just become AI babysitting.
29 votes - 
        
What trustworthy resources are you using for AI/LLMs/ML education?
Every company is trying to shoehorn AI into every product, and many online materials provide a general snake oil vibe, making it increasingly difficult to parse. So far, my primary sources have been GitHub, Medium, and some YouTube.
My goal is to better understand the underlying technology so that I can manipulate it better, train models, and use it most effectively. This goes beyond just experimenting with prompts and trying to overcome guardrails. It includes running local, like Ollama on my M1 Max, which I'm not opposed to.
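For the "running local" part, a minimal sketch of getting started with Ollama from the command line (assumes Ollama is already installed from ollama.com; the model name here is just an example, and small quantized models run comfortably on an M1 Max):

```shell
# Download a model's weights, then chat with it locally
ollama pull llama3.2
ollama run llama3.2 "Explain what an attention head does, briefly"

# Ollama also serves a local HTTP API on port 11434, useful for scripting
# and for experimenting with the underlying generation parameters:
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Hello", "stream": false}'
```

The HTTP API is a reasonable bridge from "experimenting with prompts" toward the underlying technology, since it exposes knobs (temperature, context length, etc.) that chat front-ends hide.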
8 votes - 
        
Are LLMs making Stack Overflow irrelevant?
23 votes - 
        
Nepenthes: a tarpit intended to catch AI web crawlers
33 votes - 
        
Randomized trial shows AI tutoring effective in Nigeria
12 votes - 
        
Apple Intelligence doesn't work the way I want it to
Recently I did an update on my Macbook and it started showing alerts about Apple Intelligence. I've heard a little bit of marketing about this but I haven't really spent any time trying to figure out if it is just hype. Well, I've tried it a few times and I'm completely underwhelmed.
One of the marketed features is that Siri is much improved. That would be nice, I thought, because there are only a few use cases like "Set an Alarm" where Siri could ever do anything besides a Google search.

So there are two times recently I tried to use this improved Siri to solve a problem. My background using AI: I use Copilot at work. I get mixed results from it, but it does use my local context (open files etc.) and is able to ask follow-up questions if my prompt is too vague.
First Use Case: I want to solve a technical problem on my laptop
- My Prompt: "Can you help me fix Discord so that audio is shared when I share a video stream"
 - My Expectation: Maybe an AI summary of the cause of the issue. Maybe open up system settings or open up Discord or give an explanation of why this is a technical problem on Macs.
 - Actual Siri Response: Does an internet search and shows some links. Essentially just did a google search which I could have done by typing the same prompt in a browser.
 
Second Use case: I want help finding a file on my laptop
In this case, I made a summary of my finances on my laptop a few months ago. I can't remember what I named the file or what kind of file it was. Maybe a spreadsheet? I know it was on my local computer.
- My 1st Prompt: Can you help me find a specific file on my computer
 - My Expectation: Maybe some follow up questions where it asks me for a date range or something that is inside the file. Yes, I know that I can do this in Finder but I want Apple Intelligence to save me a few minutes.
 - Siri: Shows the result of a web search on how to find files on a computer. The first few results are for Microsoft Windows
 - 2nd Prompt: Can you help me find a specific file on my mac
 - Siri: Tells me to use Command-space and use the search
 
In both cases, Siri just acted like a shortcut to a Google search. It didn't even recognize that I was asking the question on a Mac. This is the same as Siri has always been. I assume that it can still figure out how to set a timer and do a few other things, but it doesn't seem to be working in a way I would expect an AI to work at all.
28 votes - 
        
Read.cv and Posts sold to Perplexity; will be closed soon
11 votes - 
        
Any real AI recommendations from the community?
Hey - I'm wondering if we've got any real-life recommendations for AI's out there?
I'm not looking for a list of AI's - they're everywhere! What I'm interested in is whether and how anyone here has started to use an AI on a regular basis to the extent that you consider it genuinely useful now?
For example,
- At work we have a ChatGPT-3-wrapped app in Slack which I use quite often to improve summaries and formal comms I write. I think everyone knows it's basically good at that.
 - I use Pi.ai as a "sympathetic" and filtered advisor for more sensitive topics relating to mental health that I have to deal with - it's useful insofar as I'm less worried about hallucinations or bad output when I'm using it. This might be misplaced confidence to be fair, but I've not had a bad experience with it so far.
 - I use ChatGPT built into Apple Intelligence more and more since getting a device capable of using it. I think the use case I'm most warming to is that "search" is less and less useful nowadays because of blog spam and assumed corrections to my searches. I can use ChatGPT as a replacement to search in a growing number of use cases.
 
What I'm wondering about:
- Gamma.app promises to be a .ppt replacement via AI. I'm skeptical. I have to summarise and present a lot of content at work. Having a means of an AI doing some of the lifting here would be incredible, but I remain unconvinced.
 
Any sites/services you use regularly and effectively that you'd recommend?
34 votes - 
        
Five things privacy experts know about AI
19 votes - 
        
US introduces additional export restrictions on AI-chips
14 votes - 
        
Health care AI, intended to save money, turns out to require a lot of expensive humans
24 votes