Samsung bans ChatGPT, AI chatbots after data leak blunder

Samsung has banned the use of ChatGPT after employees inadvertently revealed sensitive information to the chatbot. According to Bloomberg, a memo to staffers announced the restriction of generative AI systems on company-owned devices and internal networks. Samsung employees had shared source code with ChatGPT to check for errors and had used it to summarize meeting notes. "While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI," the memo said.

Information shared with ChatGPT is stored on OpenAI's servers and can be used to improve the model unless users opt out. The Samsung leak underscored the risks of sharing personal and professional information with AI chatbots. ChatGPT is touted as a productivity tool for accomplishing tasks quickly and efficiently, but that creates a privacy conundrum when workers share confidential information. Financial institutions like JPMorgan, Bank of America, and Citigroup have banned or restricted ChatGPT for the same reason.

ChatGPT was temporarily banned in Italy until OpenAI implemented a clearer way to opt out of data sharing, along with age restrictions barring users under 13 and requiring parental permission for those under 18. Recently, OpenAI launched an "incognito mode" that lets users disable their chat history. OpenAI also announced that it is working on a ChatGPT version for businesses that won't share chat data by default.