Copilot broke your audit log, but Microsoft won’t tell you
38 votes -
GPT 5 released
30 votes -
'I destroyed months of your work in seconds' says AI coding tool after deleting a dev's entire database during a code freeze: 'I panicked instead of thinking'
74 votes -
Applying Chinese Wall Reverse Engineering to LLM Code Editing
8 votes -
Death by a thousand slops | daniel.haxx.se
36 votes -
Most AI struggles to read clocks and calendars
23 votes -
I wrote my first Chrome extension to simplify Wikipedia articles
15 votes -
User-friendly and privacy-friendly LLM experience?
I've been thinking that perhaps I'll need to get one of the desktop LLM UIs. I've been out of touch with the state of the art of end-user LLMs, as I've been using them exclusively via the API, but tech-y people (who are not developers) mostly talk about the end-user products that I lack knowledge of.
Ethical problems aside, the problem with non-API usage is that, even if you pay, I can't find one with a better privacy policy than the API. And the problem with the API version is that it is not as good as the finished apps unless you want to reinvent the wheel. The apps may also include ads in the future, while the API technically cannot, as that would affect some downstream use cases.
| Provider | Data retention (API) | Data retention (consumer) | UI-only features |
| --- | --- | --- | --- |
| ChatGPT Plus | 30 days, no training | Training opt-out, 30 days for temporary chat, unknown retention otherwise | Voice, Canvas, image generation in chat, screensharing, mobile app |
| Google AI Pro | 0 | 72 hours if you disable history, or up to 3 years and trained upon otherwise | Android assistant, Canvas, AI in Google Drive/Docs, RAG (NotebookLM), podcast generation, browser use (Mariner), coding (Gemini CLI), screensharing |
| Gemini in Google Workspace | See above | 0-18 months, but no human review/training | See above |
| Claude Pro | 30 days | Up to 2 years (no training without opt-in) | Coding, Artifacts, desktop app, RAG, MCP |

As these are dual-use technologies, the table doesn't include the extra retention period if the provider detects abuse. Additionally, if you click thumbs up/down, the conversation may also be recorded for the provider's employees to review.
I don't think OpenWebUI, self-hosted models, etc. would suffice if they are not built to the same quality as the first-party products. I know I'm probably asking for something that doesn't exist here, but I hope it at least brings to people's attention that even if you're paying for the product, you might not get the same privacy protection as API users.
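For context, "using it via the API" for me basically means a bare loop like the sketch below (assuming the official openai Python SDK and an OPENAI_API_KEY in the environment; the model name is just an example). Everything the consumer apps layer on top, like voice, Canvas, file uploads and memory, you'd have to rebuild yourself.

```python
# Minimal API-only chat loop (a sketch). Assumes the openai SDK is
# installed and OPENAI_API_KEY is set; the model name is an example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = []

while True:
    user = input("you> ")
    if not user:
        break
    history.append({"role": "user", "content": user})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # example model choice
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("assistant>", answer)
```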
15 votes -
I think I’m done thinking about genAI for now
37 votes -
Non-engineers AI coding & corporate compliance?
Part of my role at work is in security policy & implementation. I can't figure this out so maybe someone will have some advice.
With the advent of AI coding, people who don't know how to code have started using AI to automate their work. This isn't new - previously they might already use other low-code tools like Excel, UIPath, n8n, etc., but those still require learning the tool. Now anyone can "vibe code" and get an output, which is fine for engineers who understand how the output should work and can design how it should be tested (edge cases, etc.).
I had a team come up to me saying they had managed to automate their work, which is good, but they did it with ChatGPT. The code works as they expected, but they don't fully understand how it works, and of course they're deploying it "to production" - meaning they're setting up an environment that is supposed to be for internal tools but uses real customer data fed in from the production systems.
If you're an engineer, this usually violates a lot of policies - the code should be peer reviewed by people who know what it does (incl. business context), QA should test it, think about edge cases and the best ways to test it, and sign it off, and the code should be developed & tested in a non-production environment with fake data.
I can't think of a way non-engineers can do this - they cannot read code (and it gets worse if you need two people on the same team to review each other's work), and if you're outsourcing it to AI, the AI company doesn't accept liability, nor can you retrain the AI from postmortems. The only option is to include lessons learned in the prompt, and I guess at some point it becomes one long holy bible everyone has to paste into the limited context window. Non-engineers also aren't trained to work on non-production data (if you ever try, they'll usually claim that the data doesn't match production - which I think is because they aren't trained to design and test for edge cases). The only way to solve this directly is to ask engineers to review the code, but engineers aren't cheap and their time is best spent on something more important.
So far I think the best way to approach this problem is to think of it like Excel - formulas are always safe to use: they don't send data to the internet, they don't create malware, etc. The worst thing they can do is probably corrupt that file or hang your PC. And people didn't know how to write VBA, so they never did; now you have people copy-pasting VBA code they don't understand. The new AI workspace has to be built with technical guardrails that the AI is limited to. I think it has to be done in some low-code tool that people using AI have to go through (like, say, n8n). For example, blocks that do computation can be used freely, while blocks that send data to the intranet/internet or run arbitrary code require approval before use. And engineers can build safe blocks, such as a Slack block that can only send messages to the corporate workspace.
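To make that concrete, here's a rough sketch of the "approved blocks" idea in Python. It isn't based on any particular low-code product, and the block names, approval flag, and Slack example are all hypothetical.

```python
# Hypothetical "approved blocks" registry: pure computation runs freely,
# anything that touches the network or runs code needs a sign-off.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Block:
    name: str
    run: Callable[..., Any]
    needs_approval: bool    # True for blocks that touch the network or run code
    approved: bool = False  # set by an engineer's sign-off

def execute(block: Block, *args, **kwargs):
    """Run a block, refusing risky blocks that haven't been signed off."""
    if block.needs_approval and not block.approved:
        raise PermissionError(
            f"Block '{block.name}' touches the network or runs arbitrary code "
            "and needs an engineer's approval before use."
        )
    return block.run(*args, **kwargs)

# Pure computation: always allowed, like an Excel formula.
sum_block = Block("sum", run=lambda xs: sum(xs), needs_approval=False)

# Engineer-built "safe" block: can only post to corporate channels.
def send_to_corporate_slack(channel: str, text: str) -> None:
    if not channel.startswith("#corp-"):
        raise ValueError("corporate workspace channels only")
    # ... call the approved internal webhook here (hypothetical) ...

slack_block = Block("slack", run=send_to_corporate_slack,
                    needs_approval=True, approved=True)

print(execute(sum_block, [1, 2, 3]))            # 6
execute(slack_block, "#corp-reports", "done")   # allowed, pre-approved
```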
Has your workplace adjusted its policies for this AI epidemic? Or do you have other ideas you'd like to share?
23 votes -
Personalized software really is coming, but not today. Maybe tomorrow?
13 votes -
AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms
22 votes -
Visualising how AI training avoids getting stuck in local minima
18 votes -
Blackhat hacker 'EncryptHub' behind vibe-coded ransomware unmasked due to opsec mistakes in ChatGPT-created infrastructure
20 votes -
Block AI scrapers with Anubis
27 votes -
Bartosz Milewski - Understanding Attention in LLMs
6 votes -
How artificial intelligence can make board games better
11 votes -
My LLM codegen workflow
9 votes -
How I analyzed 1,378 restaurants using Places API to find hotspots in my city
14 votes -
Funny output from Claude.ai: a code block in a question about an early renaissance theological work
I asked Claude for references to some ideas the AI had output earlier about a 16th-century cardinal (Sadoleto). Its response was "Let me analyze the letter", and then this:
```javascript
async function searchForSadoletoLetter() {
  try {
    const files = ['sadoleto_letter.txt', 'sadoleto.txt', 'sadoleto_to_geneva.txt', 'letter_to_geneva.txt'];
    for (const filename of files) {
      try {
        const content = await window.fs.readFile(filename, { encoding: 'utf8' });
        console.log(`Found file: ${filename}`);
        console.log("First 500 characters:");
        console.log(content.substring(0, 500));
        return content;
      } catch (err) {
        // File not found, try the next one
      }
    }
    console.log("Could not find any files containing Sadoleto's letter");
    return null;
  } catch (err) {
    console.error("Error searching for files:", err);
    return null;
  }
}

const content = await searchForSadoletoLetter();

// Since we don't have the letter available as a file, let's search for published versions online
console.log("The letter isn't available as a file. We would need to rely on scholarly knowledge about Sadoleto's letter to Geneva (1539).");
console.log("Sadoleto's letter was published in 'A Reformation Debate: Sadoleto's Letter to the Genevans and Calvin's Reply' (ed. John C. Olin).");
```
This strikes me as odd. Did Claude write a script to generate its own response? Have the developers put something like this in as a guardrail?
edit: details about earlier convo
15 votes -
Building a personal, private AI computer on a budget
24 votes -
Stack Exchange to begin AI-generated Answers experiment on opted-in Stack Exchange sites
24 votes -
Writing toy code with ChatGPT is a blast
14 votes -
Is there a free LLM frontend that works out of the box?
I want something like TypingMind but free, and that doesn't require installation - mainly for Gemini and Mistral (or perhaps Groq too). I just want to be able to paste my API key and use it. I know about OpenWebUI and Msty, but OpenWebUI requires installation and Msty doesn't have an Android version.
Anyone know of something like this? (It would also be nice if it supported LaTeX.)
17 votes -
AI-generated tools can make programming more fun
8 votes -
Task-Specific LLM Evals that Do & Don't Work
4 votes -
Everything I built with Claude Artifacts this week
10 votes -
Diffusion for World Modeling - CS:GO and other games rendered in real time using neural networks
7 votes -
wordfreq will no longer be updated partly due to AI polluting the data
74 votes -
Breaking my hand forced me to write all my code with AI for 2 months
14 votes -
Get roasted based on your Github username and public contributions
20 votes -
Researchers describe how to tell if ChatGPT is confabulating
24 votes -
I will fucking piledrive you if you mention AI again
119 votes -
"Badness 0", a suckerpinch/tom7 video dive into typesetting, LLMs, and Donald Knuth
29 votes -
HeavyIQ: Understanding 220M Flights with AI
2 votes -
Slop is the new name for unwanted AI-generated content
52 votes -
From the makers of the Monocle, Brilliant Labs releases open source AR Glasses
26 votes -
When provided with CVE descriptions of 15 different vulnerabilities and a set of tools useful for exploitation, GPT-4 was capable of autonomously exploiting 13 of them, yielding an 87% success rate
17 votes -
The Assist - Thoughts on AI coding assistants
12 votes -
MDN’s AI Help and lucid lies
7 votes -
What useful tasks are possible with an LLM with only 3B parameters?
Playing with Llama 7B and 13B, I found that the 13B model was capable of doing a simple task, rewriting titles in sentence case for Tildes submissions. The 7B model doesn't appear capable of the same task, out of the box.
I heard about Android's new AICore, available on a couple of new devices. But it sounds like Gemini Nano, which runs on-device, can only handle 2B or 3B parameters.
Is this size of model useful for real tasks? Does it only become useful after training on a specific domain? I'm a novice wanting to learn a little bit about this. On-device AI is an appealing concept to me.
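For reference, a quick way to try the same sentence-case task with a small model is something like the sketch below, using Hugging Face transformers. The model name is just an example of a roughly 3B instruction-tuned model, not a recommendation.

```python
# Quick sanity check of a ~3B model on the sentence-case task.
# Assumes the transformers library; swap in whichever small model
# you actually want to test.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-3B-Instruct",  # example ~3B instruction-tuned model
)

title = "Nvidia CEO Says Kids Shouldn't Learn To Code"
prompt = (
    "Rewrite the following title in sentence case. Change nothing else.\n"
    f"Title: {title}\nRewritten:"
)
out = pipe(prompt, max_new_tokens=40, do_sample=False)
print(out[0]["generated_text"])
```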
12 votes -
The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
21 votes -
Nvidia CEO says kids shouldn't learn to code
23 votes -
The human element in AI-driven testing strategies
7 votes -
Google's Say What You See - Come up with a prompt to match an already generated image
12 votes -
On GitHub Copilot
23 votes -
The morality of using AI-generated art in my web app
Hey, good people of Tildes!
I'm building a self-help web app, a small part of which I'd like to involve some pixel pets. I like pixel art and it'd be great if I could create some. Though, the truth is, I can't draw for shit, I have little to no imagination, and I'm afraid even if I put the time and effort into it, I still may not produce something I'd call good enough to put on the website. I also lack the motivation to spend a lot of time learning how to create good pixel art, as I only need it for this project.
I thought about paying some professional(s) to do it but that would probably break the bank for me, as I want to offer the users a lot of pixel pet options, which brings us to what I guess is the only remaining option.
I found some services that offer AI-generated pixel art. This one in particular looks like what I'm looking for and also offers animations. While watching a demo of it on YouTube, I noticed a few comments voicing concern about the ethics of selling art that's generated using models trained off of unpaid artists' work. While this is not a new topic, I admittedly hadn't given it much thought before, as I've never used, or planned to use AI-generated art in a meaningful capacity.
While I'm not sure whether it changes much, for what it's worth, I should note that my web app is going to be free, open-source, and ad-free forever.
What are your thoughts? Also, I'd love to know if there are options that I missed!
26 votes -
Cheap options(?) to run local AI models
I have been having fun learning about generative AI. All in the cloud -- I got some models on hugging face to work, tried out Colab Pro, and found another cloud provider that runs SD models (dreamlook.ai if anyone is interested).
It's got me curious about trying to run something locally (mostly Stable Diffusion/DreamBooth, possibly Ollama).
I currently have a ThinkPad T490 with 16 GB of RAM and the base-level graphics card. I haven't actually tried to run anything locally, on the assumption that it would be extremely slow. I saw that you can get an external GPU, though I also saw some reports of headaches trying to get external GPUs up and running.

I am curious what a workstation might cost that could do a reasonable job running local models. I'm not a huge gamer and don't have any other high-performance needs that aren't currently served by the ThinkPad; not sure I can justify a $3000 workstation just to make a few jpgs.
I would be happy to buy something secondhand, like if there was a good source of off-lease workstations.
Alternatively, if you have a computer similar to the T490 and do run models locally, what sort of performance is reasonable to expect? Would it be enough to buy some more RAM for this laptop?
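If it helps anyone answer, this is the kind of timing test I'd run once I try it: a minimal CPU-only Stable Diffusion sketch using the diffusers library, where the model name and step count are just example choices.

```python
# Rough CPU-only timing test for Stable Diffusion with diffusers.
# Expect minutes per image on a laptop CPU rather than seconds.
import time

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example model choice
    torch_dtype=torch.float32,          # float16 generally needs a GPU
)
pipe = pipe.to("cpu")

start = time.time()
image = pipe("pixel art of a small dragon", num_inference_steps=20).images[0]
print(f"20 steps took {time.time() - start:.0f} seconds")
image.save("test.png")
```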
Thanks for any advice!
13 votes -
Linus Torvalds on the state of Linux today and how AI figures in its future
26 votes