Part of my role at work is in security policy & implementation. I can't figure this out, so maybe someone will have some advice.
With the advent of AI coding, people who don't know how to code are starting to use AI to automate their work. This isn't new - previously they might have used other low-code tools like Excel, UIPath, n8n, etc., but those still required learning the tool. Now anyone can "vibe code" and get an output, which is fine for engineers who understand how the output should work and can design how it should be tested (edge cases, etc.)
I had a team come up to me saying they had managed to automate their work, which is good, but they did it with ChatGPT. The code works as they expected, but they don't fully understand how it works, and of course they're deploying it "to production" - meaning they're setting up an environment that is supposed to be for internal tools, but it's fed real customer data from the production systems.
If you're an engineer, this usually violates a lot of policies - the code should be peer reviewed by people who understand what it does (incl. business context), QA should test it, think about edge cases and the best ways to test it, and sign it off, and the code should be developed & tested in a non-production environment with fake data.
I can't think of a way non-engineers can do this - they cannot read code (and it gets worse if you need two people on the same team to review each other's work), and if you're outsourcing it to AI, the AI company doesn't accept liability, nor can you retrain the AI from postmortems. The only option is to include lessons learned in the prompt, and I guess at some point that becomes one long holy bible everyone has to paste into the limited context window. These teams also aren't trained to work on non-production data (if you ever try, they'll usually claim the data doesn't match production - which I think is because they aren't trained to design and test for edge cases). The only direct solution is asking engineers to review the code, but engineers aren't cheap and their time is better spent on more important work.
So far I think the best way to approach this problem is to think of it like Excel - the formulas are always safe to use: they don't send data to the internet, they don't create malware, etc. The worst thing they can do is probably corrupt that file or hang your PC. And people didn't know how to write VBA, so they never did. Now you have people copy-pasting VBA code they don't understand. The new AI workspace has to be built with technical guardrails that the AI is limited to. I think it has to be done in some low-code tool that people using AI have to use (say, n8n). For example, blocks that do computation can be used freely, while blocks that send data to the intranet/internet or run arbitrary code require approval before use. And engineers can build safe blocks for common tasks, such as a Slack block that can only send messages to the corporate workspace.
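To make "safe block" concrete, here's a minimal sketch of the kind of wrapper I have in mind - not something we've built, and the channel names and webhook URL are placeholders. The idea is that an engineer maintains the allow-list, and an AI-generated flow can call the block but can't point it at an arbitrary URL:

    import json
    import urllib.request

    # Engineer-maintained allow-list of corporate Slack webhooks.
    # Anything not listed here is rejected. (Placeholder URL, not real.)
    APPROVED_WEBHOOKS = {
        "ops-alerts": "https://hooks.slack.com/services/T000/B000/XXXX",
    }

    def send_slack_message(channel: str, text: str) -> None:
        """Post `text` to an approved corporate Slack channel only."""
        webhook = APPROVED_WEBHOOKS.get(channel)
        if webhook is None:
            raise PermissionError(f"Channel '{channel}' is not on the approved list")
        payload = json.dumps({"text": text}).encode("utf-8")
        req = urllib.request.Request(
            webhook, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req, timeout=10) as resp:
            if resp.status != 200:
                raise RuntimeError(f"Slack webhook returned HTTP {resp.status}")

    # Inside a low-code flow, the non-engineer only ever calls:
    # send_slack_message("ops-alerts", "Nightly report finished")

The point isn't this particular function - it's that the dangerous part (where data can go) is decided by engineers once, reviewed once, and the AI-assisted users only compose pre-approved blocks.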
Has your workplace adjusted its policies for this AI epidemic? Or do you have other ideas you'd like to share?