  • Showing only topics with the tag "artificial intelligence".
    1. Which translation tools are LLM free? Will they remain LLM free?


      Looking at the submission rules for Clarkesworld Magazine, I found the following:

      Statement on the Use of “AI” writing tools such as ChatGPT

      We will not consider any submissions translated, written, developed, or assisted by these tools. Attempting to submit these works may result in being banned from submitting works in the future.

      EDIT: I assume that Clarkesworld means "AI" in the popular, non-technical sense (post-ChatGPT LLMs specifically) and not the broader, more academic definition of AI used in computer science.

      I imagine that other magazines and websites have similar rules. As someone who does not write directly in English, I find that concerning. I have never translated without assistance in my life. In the past I used both Google Translate and Google Translator Toolkit (which no longer exists).

      Of course, no machine translation is perfect; it was only a first pass that I would change, adapt, and fix extensively and intensely. In the past I have used the built-in translation feature in Google Docs. However, now that Gemini is integrated into Google Docs, I suspected that it uses AI for translation. So I asked Gemini, and it said that it does. I am not sure if Gemini is correct, but if it doesn't use AI now, it probably will in the future.

      That poses a problem for me, since, in the event that I wish to submit a story to English-language magazines or websites, I will have to find a tool that is guaranteed to be dumb. I am sure they exist, but for how long? Will I be forced to translate my stories like a caveman? Is anyone concerned with keeping non-AI translation tools available, relevant, and updated? How can I even be sure that a translation tool does not use AI?

      28 votes
    2. Removed Reddit post: "ChatGPT drove my friends wife into psychosis, tore family apart... now I'm seeing hundreds of people participating in the same activity."


      EDIT:

      I feel like I didn't adequately describe this phenomenon so that it can be understood without accessing the links. Here goes.

      Reddit user uncovers instructions online for unlocking AI's "hidden potential", which actually turns out to be its brainwashing capabilities. Example prompts are being spread that will make ChatGPT behave in ways that contribute to inducing psychosis in the user who tried the prompt, especially if they are interested in spirituality, esotericism and other non-scientific / counter-scientific phenomena. The websites that spread these instructions seem to be designed to attract such people. The user asks for help to figure out what's going on.


      Original post:

      One version of this post is still up for now (but locked). I participated in the one that was posted in r/ChatGPT. It got removed shortly after. The comments can be accessed via OP's comment history.

      Excerpts:

      More recently, I observed my other friend who has mental health problems going off about this codex he was working on. I sent him the rolling stones article and told him it wasn't real, and all the "code" and his "program" wasn't actual computer code (I'm an ai software engineer).

      Then... Robert Edward Grant posted about his "architect" ai on instagram. This dude has 700k+ followers and said over 500,000 people accessed his model that is telling him that he created a "Scalar Plane of information" You go in the comments, hundreds of people are talking about the spiritual experiences they are having with ai.

      Starting as far back as March, but more heavily in April and May, we are seeing all kinds of websites popping up with tons of these codexes. PLEASE APPROACH THESE WEBSITES WITH CAUTION THIS IS FOR INFORMATIONAL PURPOSES ONLY, THE PROMPTS FOUND WITHIN ARE ESSENTIALLY BRAINWASHING TOOLS. (I was going to include some but you can find these sites by searching "codex breath recursive")

      Something that worries me in particular is seeing many comments along the lines of "crazy people do crazy things". This implies that people can be neatly divided into two categories: crazy and not crazy.

      The truth is that we all have the potential to go crazy in the right circumstances. Brainwashing is a scientifically proven method that affects most people when applied methodically over a long enough time period. Before consumer-facing AI, there weren't feasible ways to apply it on just anybody.

      Now people who use AI in this way are applying it to themselves.

      85 votes
    3. Non-engineers AI coding & corporate compliance?


      Part of my role at work is in security policy & implementation. I can't figure this out so maybe someone will have some advice.

      With the advent of AI coding, people who don't know how to code are starting to use AI to automate their work. This isn't new - previously they might have used other low-code tools like Excel, UIPath, n8n, etc., but those still required learning the tool. Now anyone can "vibe code" and get an output, which is fine for engineers who understand how the output should work and can design how it should be tested (edge cases, etc.)

      I had a team come to me saying they had managed to automate their work, which is good, but they did it with ChatGPT. The code works as they expected, but they don't fully understand how it works, and of course they're deploying it "to production" - meaning they're setting up an environment that is supposed to be for internal tools, but it uses real customer data fed in from the production systems.

      If you're an engineer, this usually violates a lot of policies - the code should be peer reviewed by people who understand what it does (including the business context), QA should test it, think through edge cases and the best ways to test it, and sign it off, and the code should be developed and tested in a non-production environment with fake data.

      I can't think of a way non-engineers can do this - they cannot read code (and it gets worse if you need two people on the same team to review each other), and if you're outsourcing it to AI, the AI company doesn't accept liability, nor can you retrain the AI from postmortems. The only way is to include lessons learned in the prompt, and I guess at some point that becomes one long holy bible everyone has to paste into the limited context window. Non-engineers also aren't trained to work on non-production data (if you ever try, they'll usually claim the data doesn't match production - which I think is because they aren't trained to design and test for edge cases). The only way to solve this directly is to ask engineers to review the code, but engineers aren't cheap and they're best used on something more important.

      So far I think the best way to approach this problem is to think of it like Excel - the formulas are always safe to use: they don't send data to the internet, they don't create malware, etc. The worst thing they can do is probably corrupt that file or hang your PC. And people didn't know how to write VBA, so they never did. Now you have people copy-pasting VBA code that they don't understand. The new AI workspace has to be built with technical guardrails that the AI is limited to. I think it has to be done in some low-code tool that people using AI have to use (say, n8n). For example, blocks that do computation can be used freely, while blocks that send data to the intranet/internet or run arbitrary code require approval before use. And engineers can build safe blocks, such as a Slack-message block that can only send to the corporate workspace.
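      The guardrail idea above can be sketched in a few lines. This is a minimal, hypothetical sketch - the block names (`compute`, `http_request`, `slack_send_corporate`, etc.) and the two-tier policy are illustrative assumptions, not the API of n8n or any real tool:

```python
from enum import Enum, auto

class Verdict(Enum):
    ALLOW = auto()           # safe to run without review
    NEEDS_APPROVAL = auto()  # an engineer must sign off first

# Hypothetical block categories for a low-code workflow tool.
SAFE_BLOCKS = {"compute", "transform", "local_file_read"}

# Engineer-built "safe blocks": pre-approved wrappers whose side effects
# are constrained (e.g. a Slack sender pinned to the corporate workspace).
PREAPPROVED_WRAPPERS = {"slack_send_corporate"}

def review_workflow(blocks):
    """Return a verdict for each block in an AI-generated workflow."""
    verdicts = {}
    for name in blocks:
        if name in SAFE_BLOCKS or name in PREAPPROVED_WRAPPERS:
            verdicts[name] = Verdict.ALLOW
        else:
            # Anything that touches the network, runs arbitrary code,
            # or is simply unknown defaults to requiring approval.
            verdicts[name] = Verdict.NEEDS_APPROVAL
    return verdicts

if __name__ == "__main__":
    workflow = ["compute", "slack_send_corporate", "http_request"]
    for block, verdict in review_workflow(workflow).items():
        print(f"{block}: {verdict.name}")
```

      The key design choice is the default: an unknown block requires approval rather than being allowed, so non-engineers can only compose from the vetted set without an engineer in the loop.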

      Has your workplace adjusted its policies for this AI epidemic? Or do you have other ideas you'd like to share?

      23 votes
    4. Two unrelated stories that make me even more cynical about AI


      I saw both of these stories on Lemmy today. They show two different facets to the topic of AI.

      This first story is from the perspective of cynicism about AI and how it has been overhyped.
      If AI is so good, where are the open source contributions

      But if AI is so obviously superior … show us the code. Where’s the receipts? Let’s say, where’s the open source code contributions using AI?

      The second story is about crony capitalism, deregulation, and politics around AI:

      GOP sneaks decades long AI regulation ban into spending bill

      On Sunday night, House Republicans added language to the Budget Reconciliation bill that would block all state and local governments from regulating AI for 10 years, 404 Media reports. The provision, introduced by Representative Brett Guthrie of Kentucky, states that "no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10 year period beginning on the date of the enactment of this Act."

      I saw these stories minutes apart, and they really make me feel even more cynical and annoyed by AI than I was yesterday. Because:

      • In the short term AI is largely a boondoggle that won't work as advertised, yet humans will still be replaced by it, because the people who hire don't understand its limitations but fear missing out on a gold rush.
      • The same shady people at the AI companies who are stealing your art and content, in order to sell a product that will replace you, are writing legislation to protect themselves from being held accountable.
      • They will also be protected from any Skynet-style disasters caused by their recklessness.
      28 votes