Samsung staff accidentally leaked information to ChatGPT
Samsung staff members are facing consequences after allegedly sharing confidential company data with OpenAI’s chatbot on multiple occasions. The incident highlights how widely the popular AI chatbot is now used in professional settings, and how often users overlook that OpenAI can collect and retain the sensitive information they submit.

According to reports in Korean media, one Samsung employee allegedly copied source code from a malfunctioning semiconductor database into ChatGPT while asking it to help identify a fix. Another employee reportedly shared confidential code while trying to troubleshoot faulty equipment. A third allegedly fed an entire meeting transcript into the chatbot and asked it to generate minutes. Upon discovering these breaches, Samsung implemented an “emergency measure” limiting each employee’s ChatGPT prompt to 1024 bytes in an effort to contain further damage.
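As an illustration of how such a cap might be enforced, here is a minimal sketch of a gateway-side check that rejects oversized prompts before they leave the network. The 1024-byte figure comes from the reports above; the function and exception names are hypothetical, not Samsung’s actual tooling.

```python
MAX_PROMPT_BYTES = 1024  # limit reportedly imposed by Samsung's emergency measure


class PromptTooLargeError(ValueError):
    """Raised when an outbound prompt exceeds the allowed byte budget."""


def enforce_prompt_limit(prompt: str, limit: int = MAX_PROMPT_BYTES) -> str:
    # Measure the UTF-8 encoded size: a byte limit is not a character limit,
    # since non-ASCII text (e.g. Korean) takes multiple bytes per character.
    size = len(prompt.encode("utf-8"))
    if size > limit:
        raise PromptTooLargeError(f"prompt is {size} bytes; limit is {limit} bytes")
    return prompt
```

A proxy sitting between employees and the chatbot could call `enforce_prompt_limit` before forwarding any request. The UTF-8 detail matters in this case: Korean text uses three bytes per character, so a 1024-byte cap is considerably tighter than 1024 characters.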
Compounding the problem, the leaks surfaced only three weeks after Samsung lifted its earlier ban on employee use of ChatGPT, a ban it had imposed precisely because of concerns about this kind of exposure. The company is now developing its own proprietary AI system.
OpenAI retains the information users submit through prompts
Sharing confidential information with ChatGPT is risky because employees’ queries do not disappear when they log out. OpenAI has stated that it may use data submitted to ChatGPT and its other consumer services to improve its AI models, which means the company retains that data unless users explicitly opt out. OpenAI also warns users against sharing sensitive information, since it cannot delete specific prompts from a user’s history.
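One defensive pattern, given that submitted data may be retained, is to redact obviously sensitive strings before a prompt ever leaves the company network. The sketch below shows a simple regex-based redaction pass; the patterns and the `redact` helper are illustrative assumptions, not OpenAI or Samsung tooling, and a real deployment would use a proper data-loss-prevention ruleset.

```python
import re

# Illustrative patterns only; a production system would maintain a DLP
# ruleset tuned to the organization's actual secrets.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),  # OpenAI-style secret keys
    "IP_ADDRESS": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}


def redact(prompt: str) -> str:
    """Replace each matched pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt


print(redact("Contact jane.doe@example.com at 10.0.0.7 with key sk-abcdefghijklmnopqrstuv"))
# -> Contact [EMAIL REDACTED] at [IP_ADDRESS REDACTED] with key [API_KEY REDACTED]
```

Redaction of this kind only catches mechanically recognizable secrets; proprietary source code, like the kind reportedly pasted by Samsung’s engineers, cannot be caught by simple patterns, which is why companies often fall back on outright bans or byte limits.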
Samsung employees are not the only ones sharing confidential company information with ChatGPT. According to a study by Cyberhaven, 3.1% of workers at its client companies who used the chatbot had submitted confidential data to it. Cyberhaven estimates this could amount to hundreds of leaks per week at a company with around 100,000 employees, with potentially serious consequences for the companies involved.
Some major corporations, including Amazon and Walmart, have taken notice of the chatbot’s potential risks and recently cautioned their employees against sharing confidential information with the tool. Verizon and JPMorgan Chase have gone further, prohibiting their staff from using it altogether.
ChatGPT was not designed to help users write malicious programs, but people have found ways to use it to create ransomware, Python scripts that exfiltrate data after an exploit, and other types of malware.