Google's new AI video tool floods internet with real-looking clips
57 votes -
Citing illegal pollution, US racial justice nonprofit NAACP calls for emergency shutdown of Elon Musk's supercomputer in Memphis
21 votes -
Duolingo is replacing human workers with AI
34 votes -
New York Times, Amazon unveil AI content licensing deal
10 votes -
Large Language Models are more persuasive than incentivized human persuaders
14 votes -
Non-engineers' AI coding & corporate compliance?
Part of my role at work is in security policy & implementation. I can't figure this out, so maybe someone will have some advice.
With the advent of AI coding, people who don't know how to code are now starting to use AI to automate their work. This isn't new: previously they might have used other low-code tools like Excel, UIPath, n8n, etc., but those still required learning the tool. Now anyone can "vibe code" and get an output, which is fine for engineers who understand how the output should work and can design how it should be tested (edge cases, etc.).
I had a team come up to me saying they'd managed to automate their work, which is good, but they did it with ChatGPT. The code works as they expected, but they don't fully understand how it works, and of course they're deploying it "to production", meaning they're setting up an environment that's supposed to be for internal tools but feeding it real customer data from the production systems.
If you're an engineer, this usually violates a lot of policies: the code should be peer reviewed by people who understand what it does (including the business context), QA should test it, think through edge cases and the best ways to exercise them, and sign it off, and the code should be developed and tested in a non-production environment with fake data.
I can't think of a way non-engineers can do this. They cannot read code (and it gets worse if you need two people on the same team to review each other's work), and if you're outsourcing it to AI, the AI company doesn't accept liability, nor can you retrain the AI from postmortems. The only option is to include lessons learned in the prompt, and I guess at some point that becomes one long holy bible everyone has to paste into the limited context window. Non-engineers also aren't trained to work with non-production data (if you ever try, they'll usually claim the data doesn't match production, which I think is because they aren't trained to design and test for edge cases). The only way to solve this directly is to ask engineers to review the code, but engineers aren't cheap and their time is better spent on more important work.
So far I think the best way to approach this problem is to think of it like Excel: formulas are always safe to use; they don't send data to the internet, they don't create malware, etc. The worst thing they can do is probably destroy the file or hang your PC. And people didn't know how to write VBA, so they never did. Now you have people copy-pasting VBA code they don't understand. The new AI workspace has to be built with technical guardrails that the AI is limited to, and I think it has to live in some low-code tool that people using AI have to go through (n8n, say). For example, blocks that do computation can be used freely, while blocks that send data to the intranet/internet or run arbitrary code require approval before use. And engineers can build safe blocks for common needs, such as a Slack block that can only send messages to the corporate workspace (see the sketch below).
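To make that last idea concrete, here's a minimal sketch of what such an engineer-built "safe block" could look like: a wrapper that only forwards messages to an allowlisted corporate Slack webhook and refuses everything else. Everything here (the host check, the `CORPORATE_WORKSPACE` path prefix, `send_slack_message` itself) is hypothetical and only illustrates the guardrail pattern, not any real tool's API.

```python
# Hypothetical "safe block": non-engineers can call it freely because it can
# only deliver to the corporate Slack workspace, never to arbitrary URLs.
from urllib.parse import urlparse

# Assumption: all corporate Slack webhooks live under one host and path prefix.
ALLOWED_HOST = "hooks.slack.com"
ALLOWED_PATH_PREFIX = "/services/CORPORATE_WORKSPACE/"  # placeholder workspace ID

def send_slack_message(webhook_url: str, text: str) -> None:
    """Post `text` via `webhook_url`, but only if it targets the corporate workspace."""
    parsed = urlparse(webhook_url)
    if parsed.scheme != "https" or parsed.hostname != ALLOWED_HOST:
        raise PermissionError(f"blocked: {webhook_url!r} is not an approved Slack endpoint")
    if not parsed.path.startswith(ALLOWED_PATH_PREFIX):
        raise PermissionError("blocked: webhook does not belong to the corporate workspace")
    # The actual HTTP POST would happen here, performed by the platform on the
    # user's behalf; omitted in this sketch.
    print(f"[ok] would deliver to {parsed.path}: {text!r}")

# Usage: this succeeds, while any non-corporate URL raises PermissionError.
send_slack_message(
    "https://hooks.slack.com/services/CORPORATE_WORKSPACE/B000/placeholder",
    "Monthly report generated",
)
```

The point isn't this particular check; it's that the untrusted, AI-generated part of the workflow only ever touches pre-approved capabilities, so an engineer validates the block once instead of reviewing every script that uses it.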
Has your workplace adjusted its policies for this AI epidemic? Or do you have other ideas you'd like to share?
23 votes -
Highlights from the Claude 4 system prompt
25 votes -
Introducing Claude Opus and Sonnet 4
18 votes -
How Big Tech hides its outsourced African workforce
16 votes -
China’s superstition boom
25 votes -
Personalized software really is coming, but not today. Maybe tomorrow?
13 votes -
Chicago Sun-Times prints summer reading list full of fake books
42 votes -
Sam and Jony introduce io | OpenAI
15 votes -
We did the math on AI’s energy footprint. Here’s the story you haven’t heard.
23 votes -
YouTube’s new ads will ruin the best part of a video on purpose
60 votes -
LinkedIn executive says that the bottom rung of the career ladder is breaking
43 votes -
I don’t care whether you use ChatGPT to write
25 votes -
Introducing Codex [OpenAI]
23 votes -
Grok’s white genocide fixation caused by ‘unauthorized modification’
51 votes -
Will the humanities survive artificial intelligence?
28 votes -
AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms
22 votes -
Visualising how AI training avoids getting stuck in local minima
18 votes -
Software engineer lost his $150K-a-year job to AI—he’s been rejected from 800 jobs and forced to DoorDash and live in a trailer to make ends meet
34 votes -
Two unrelated stories that make me even more cynical about AI
I saw both of these stories on Lemmy today. They show two different facets to the topic of AI.
This first story is from the perspective of cynicism about AI and how it has been overhyped.
If AI is so good, where are the open source contributions?
But if AI is so obviously superior … show us the code. Where's the receipts? Let's say, where's the open source code contributions using AI?
The second story is about crony capitalism, deregulation, and politics around AI:
GOP sneaks decade-long AI regulation ban into spending bill
On Sunday night, House Republicans added language to the Budget Reconciliation bill that would block all state and local governments from regulating AI for 10 years, 404 Media reports. The provision, introduced by Representative Brett Guthrie of Kentucky, states that "no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10 year period beginning on the date of the enactment of this Act."
I saw these stories minutes apart, and they really make me feel even more cynical and annoyed by AI than I was yesterday. Because:
- In the short term AI is largely a boondoggle that won't work as advertised, but humans will still be replaced by it, because the people who do the hiring don't understand its limitations and fear missing out on a gold rush.
- The same shady people at the AI companies who are stealing your art and content, in order to sell a product that will replace you, are writing legislation to protect themselves from being held accountable.
- They are also going to be protected from any Skynet-style disasters caused by their recklessness.
28 votes -
The (not so) futuristic technology of “Lazarus”
12 votes -
Some ChatGPT users are developing delusional beliefs that are reinforced by the large language model
53 votes -
Everyone is cheating their way through college
49 votes -
In 2025, venture capital can’t pretend everything is fine any more
53 votes -
How AGI made the future unthinkable
19 votes -
Level-5 CEO says games are now being made 80-90% by AI, making “aesthetic sense” a must for developers
24 votes -
Amazon makes ‘fundamental leap forward in robotics’ with device having sense of touch
10 votes -
xAI is running generators without pollution controls in Memphis
27 votes -
Tech companies apparently do not understand why we dislike AI
49 votes -
Researchers secretly ran a massive, unauthorized AI persuasion experiment on Reddit users
64 votes -
Dark Visitors got a new free plan
6 votes -
Chinese factories are more automated
13 votes -
Time saved by AI offset by new work created, study suggests
23 votes -
When ChatGPT broke an entire field: An oral history
14 votes -
California community colleges are losing millions to financial aid fraud
12 votes -
A nonsense phrase has been occurring in scientific papers, suggesting artificial intelligence data contamination
53 votes -
Don’t buy stuff from old AI people
20 votes -
IGN and Eurogamer owner Ziff Davis is suing OpenAI for content theft
24 votes -
State Bar of California admits it used AI to develop exam questions, triggering new furor
25 votes -
Norway has launched a new scheme to lure top international researchers amid growing pressure on academic freedom in the US
11 votes -
OpenAI is a systemic risk to the tech industry
35 votes -
Kagi Assistant is now available to all users
44 votes -
Russia seeds chatbots with lies. Any bad actor could game AI the same way.
33 votes -
The dangers of vibe coding
26 votes -
Nintendo President on the new Switch 2, tariffs and what's next for the company
17 votes