In a surprising move, OpenAI CEO Sam Altman has issued a warning to users of ChatGPT. Altman, who has hailed his generative AI chatbot as the next big thing, has advised users against sharing sensitive personal information with it. The CEO revealed that conversations with the popular AI chatbot are not protected by legal privilege and could be used as evidence in court.
The disclosure, made on a recent episode of Theo Von’s podcast, This Past Weekend w/ Theo Von, has sparked concerns about user data privacy. AI chatbots have become trusted enough that millions of users around the world turn to them for solutions to their life problems and treat them as digital confidants for their personal secrets. However, cybersecurity experts and the makers of these AI chatbots are now warning users against oversharing personal details with these tools.
Altman warns of privacy concerns in AI chatbots
Altman’s warning comes as millions across the world turn to ChatGPT for everything – from casual chats to emotional support and professional advice. The OpenAI chief highlighted that while conversations with lawyers, doctors, or therapists are safeguarded by attorney-client privilege, doctor-patient confidentiality, or therapist-client privilege, discussions with ChatGPT enjoy no such legal protection.
“People talk about the most personal sh** in their lives to ChatGPT. People use it — young people, especially, use it — as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’ And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT,” said Altman in the podcast.
Altman elaborated on the issue, noting the absence of a legal or policy framework to address AI privacy. “I think that’s very screwed up. I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever — and no one had to think about that even a year ago,” Altman added.
The revelation has raised eyebrows among privacy advocates and legal experts, who point to OpenAI’s privacy policy as further evidence of the risk. According to the company’s official documentation, personal data, which includes interaction records, may be shared with government authorities or third parties if deemed necessary to comply with legal obligations or protect against liability.
What should ChatGPT users do then?
Until such frameworks are in place, users are advised to exercise discretion when sharing sensitive information with ChatGPT, Google Gemini, Perplexity AI and similar AI platforms. OpenAI has yet to issue an official statement addressing the concerns. The debate is expected to intensify as lawmakers and tech leaders wrestle with balancing innovation and privacy in the AI era, and it could push AI chatbot operators to develop legal safeguards for user privacy.