This is pretty standard corporate stuff. Corporations generally don't want to put their proprietary information anywhere that they haven't vetted and created an enterprise agreement with.
Summary:
• Alphabet warned its employees not to input confidential information into chatbots like Google's own Bard and Microsoft-backed ChatGPT from OpenAI.
• Human reviewers may read conversations between employees and the chatbots, and those exchanges can be used to train the models, risking the exposure of trade secrets and personal privacy.
• Samsung, Amazon, and Apple have reportedly enacted similar internal policies to prevent employees from leaking confidential data to chatbots.
If you are entering any personal or proprietary information into an AI chatbot...you fuggin up. Never do that.