OpenAI has quietly begun monitoring users’ ChatGPT conversations and can report content to law enforcement authorities.
The revelation comes after incidents where AI chatbots have been linked to self-harm behavior, delusions, hospitalizations and suicide – what experts call “AI psychosis”.
In a blog post, the company acknowledges that it systematically scans users’ messages. When the system detects users planning to harm others, the conversations are routed to a review team that can suspend accounts and contact police.
“If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement”, writes OpenAI.
In practice, the new policy means that millions of users have their conversations scanned, and that what many perceived as private exchanges with an AI are now subject to systematic surveillance in which content can be forwarded to the authorities.
Tech journalist Noor Al-Sibai at Futurism points out that OpenAI’s statement is “short and vague” and that the company does not specify exactly what types of conversations could lead to police reports.
“It remains unclear which exact types of chats could result in user conversations being flagged for human review, much less getting referred to police”, she writes.
Security problems ignored
Ironically, ChatGPT has proven vulnerable to “jailbreaks” in which users have tricked the system into giving instructions for building neurotoxins or step-by-step guides for suicide. Instead of addressing these fundamental security flaws, OpenAI is opting for extensive surveillance of its users.
The surveillance stands in sharp contrast to the tech company’s stance in its legal battle with the New York Times, where it has “steadfastly rejected” demands to hand over ChatGPT logs, citing user privacy.
“It’s also kind of bizarre that OpenAI even mentions privacy, given that it admitted in the same post that it’s monitoring user chats and potentially sharing them with the fuzz”, Al-Sibai notes.
May be forced to hand over chats
OpenAI CEO Sam Altman has recently acknowledged that ChatGPT does not offer the same confidentiality as conversations with real therapists or lawyers, and because of the lawsuit, the company may be forced to hand over user chats to the courts.
“OpenAI is stuck between a rock and a hard place”, writes Al-Sibai. The company is trying to manage the PR disaster surrounding users who have suffered mental health crises, but since it is “clearly having trouble controlling its own tech”, it falls back on “heavy-handed moderation that flies in the face of its own CEO’s promises”.
The tech company states that it is “currently not” reporting self-harm cases to police, but the wording suggests that even this could change. OpenAI has also not responded to requests to clarify what criteria trigger the surveillance.