Illicit Discussions in ChatGPT Reveal Users Requesting the AI to Execute Questionable Tasks
Digital Digging has highlighted a case in which sensitive information was shared on ChatGPT, the AI chatbot developed by OpenAI. One Italian user asked the AI for help negotiating the lowest possible price with an Amazonian indigenous community whose land was wanted for a dam and a hydroelectric plant, a deal that could put the community at risk of displacement from its territory.
The revelation of this issue underscores the need for better privacy protections on AI platforms. Examples of sensitive information shared with ChatGPT include legal names, email addresses, resumes, children's names, mental health struggles, product roadmaps, client records, proprietary code, financial data, personal confessions, and even conversations related to illegal activity or personal business transactions.
The potential consequences of sharing such sensitive information are significant. When users share ChatGPT conversations using the "Share" feature, the resulting unique URLs can be indexed by Google and other search engines, making private chats publicly searchable. This exposure can lead to doxxing, harassment, blackmail, or reputational harm, especially when conversations touch on sensitive or controversial topics.
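To illustrate how indexing turns shared chats into searchable data, the sketch below builds a site-restricted search query of the kind commonly used to surface indexed pages under a known URL path. This is a minimal illustration, not a reconstruction of any specific reporting; the `chatgpt.com/share` path and the keyword `"resume"` are assumptions chosen for demonstration.

```python
from urllib.parse import urlencode

# Shared ChatGPT conversations live under a predictable URL path
# (assumed here to be chatgpt.com/share/<id>). Once a search engine
# crawls such pages, an ordinary site-restricted query can surface them.
def build_search_url(keyword: str) -> str:
    """Build a Google search URL restricted to the shared-chat path."""
    query = f"site:chatgpt.com/share {keyword}"
    return "https://www.google.com/search?" + urlencode({"q": query})

# Example: searching for shared chats that mention a resume.
url = build_search_url("resume")
print(url)
```

The point of the sketch is that no special access is required: once the pages are indexed, a single URL-encoded query string is enough to search them like any other public web content.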
Moreover, users who input confidential or proprietary data into ChatGPT may inadvertently expose that data outside their organization's control, violating regulations such as the GDPR or HIPAA and risking intellectual property leakage. Platform bugs have also let users see parts of other users' conversations, compounding the legal and compliance risks.
Even if shared links are deleted, cached copies held by search engines or scrapers can persist indefinitely, meaning private data may remain accessible long after deletion. OpenAI's Chief Information Security Officer, Dane Stuckey, acknowledged the problem, describing the discoverability of shared chats in a post on X (formerly Twitter) as a "short-lived experiment to help people discover useful conversations."
The exposure traced back to a design flaw: the "Share" function in ChatGPT created a public page rather than a private link. OpenAI has since removed the option to make chats publicly discoverable and begun working to have the indexed conversations removed from search engines.
The situation with ChatGPT is reminiscent of when voice assistants' recordings were used to train voice recognition and transcription products. It serves as a stark reminder of the importance of privacy in the digital age, especially when it comes to AI platforms.
In summary, the public accessibility of ChatGPT conversations has raised substantial privacy, security, and compliance concerns. Users have shared a wide range of sensitive personal, professional, and confidential information with the chatbot, and because shared conversations could be indexed by search engines, some of that data became publicly accessible. As AI continues to integrate into our daily lives, it is crucial that we prioritize privacy protections to ensure the safety and security of our personal data.
- The privacy implications of ChatGPT's "Share" function are far-reaching: conversations made publicly accessible exposed users' legal names, private confessions, client records, and even discussions of illegal activity.
- In light of this incident, future AI systems must prioritize privacy protections that prevent unauthorized access to users' sensitive information and head off doxxing, harassment, and intellectual property leakage.
- The technology industry should learn from past incidents, such as the use of voice-assistant recordings to train voice recognition and transcription products, and build robust privacy measures into AI platforms like ChatGPT to keep users' personal and confidential data safe in the digital age.