The widespread use of artificial intelligence (AI), and especially of AI chatbots like Claude, ChatGPT, and Google Gemini, has reached levels never seen before.
Within a remarkably short time, they have become so normalised, so embedded in our daily vernacular, that it is no big deal to say, “Oh, I’ll just ChatGPT this” or “Let me ask Gemini that.”
These chatbots have become our generation’s Google: the brand name has detached from the platform itself and become a verb in its own right. People treat these chatbots as everything from a career counsellor to a dictionary, a therapist, and, in some cases, even a partner or spouse.
They tell ChatGPT about their health problems. They paste their company’s business strategy into Claude. They confide relationship troubles to Gemini. They ask for help drafting legal responses and include everything their lawyer told them in confidence.
And then they close the tab and assume it’s gone. It isn’t.
The safety concerns around these chatbots are not news to anyone; thousands of reports over the last couple of years have documented how vulnerable these platforms remain. However, a recent court ruling has once more shed light on the issue.
The Court Case That Should Concern Everyone
In February 2026, a federal judge in Manhattan issued a ruling that sent shockwaves through the legal profession and, for anyone paying attention, through the broader population of AI chatbot users.
US District Judge Jed Rakoff of the Southern District of New York ruled that Bradley Heppner, the former chair of bankrupt financial services company GWG Holdings, who had used Anthropic’s chatbot Claude to prepare reports about his own criminal defence strategy, had no legal right to keep those conversations private.
He ordered Heppner to hand over 31 chatbot-generated documents to prosecutors. The ruling turned on a fundamental legal observation.
“No attorney-client relationship exists, or could exist, between an AI user and a platform such as Claude,” Judge Rakoff wrote.
He also noted something that most users have never thought about: Claude’s own privacy policy “expressly provided that users have no expectation of privacy in their inputs.”
That phrase, that users have “no expectation of privacy in their inputs,” has led many legal experts to warn clients and the general public not to treat AI as a personal advisor.
Much of the general public may be wary of AI chatbots on technical grounds, afraid of having their information stolen, but fewer realise that there are also no legal protections keeping their conversations with AI private.
At least not for the time being.
Read More: Rural Indian Women Are Made To Watch Hours Of Sexual Content To Train AI
The attorney-client privilege, which allows a client to keep communication with their lawyer private, does not appear to extend to AI tools.
Alexandria Gutiérrez Swette, a lawyer at New York-based Kobre & Kim, speaking with Reuters, said, “We are telling our clients: You should proceed with caution here.”
New York-based firm Sher Tremonte went further, including explicit language in client contracts stating that “Disclosure of privileged communications to a third-party AI platform may constitute a waiver of the attorney-client privilege.”
Law firm Debevoise & Plimpton has also posted a notice on its website advising that if a lawyer recommends using AI for legal research, then the client should start the session with the line “I am doing this research at the direction of counsel for X litigation.”
The National Law Review, in a formal legal analysis, articulated the practical danger for ordinary people and businesses in plain terms: “Using a third-party chatbot as a substitute for a confidential conversation with counsel can implicate waiver arguments and discovery exposure.
Many AI chat tools are provided by third parties whose terms and privacy practices may allow prompts and outputs to be stored, reviewed, used for product improvement, shared with vendors, or produced under legal process.
Even if the tool feels private, unless it is configured properly, using it may mean disclosing sensitive facts to someone other than your lawyer.”
What Else Can Go Wrong?
The legal exposure is serious, but it is only one dimension of the risk landscape.
A 2024 EU audit found that 63% of ChatGPT user data contained personally identifiable information (PII), while only 22% of users knew they could opt out of data collection.
Research cited by Wald.ai showed that 87% of U.S. citizens can be uniquely identified using just three data points: their gender, ZIP code, and date of birth, a combination many users include in health or lifestyle queries without a second thought.
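To see why those three innocuous-looking fields are so dangerous together, here is a minimal sketch using entirely made-up records (the names of the fields and every value below are hypothetical). Even with no name attached, most records are pinned down by the combination of gender, ZIP code, and date of birth:

```python
# Toy illustration (hypothetical data): why gender + ZIP code + date of
# birth act as "quasi-identifiers" -- together they often single out one
# person even in a dataset that contains no names.
from collections import Counter

# A made-up sample of "anonymised" records.
records = [
    {"gender": "F", "zip": "10001", "dob": "1990-04-12"},
    {"gender": "M", "zip": "10001", "dob": "1985-07-30"},
    {"gender": "F", "zip": "94103", "dob": "1990-04-12"},
    {"gender": "M", "zip": "60614", "dob": "1978-11-02"},
    {"gender": "M", "zip": "10001", "dob": "1985-07-30"},  # a rare collision
]

# Count how many records share each (gender, zip, dob) combination.
combos = Counter((r["gender"], r["zip"], r["dob"]) for r in records)

# Any record whose combination occurs exactly once is re-identifiable
# from those three fields alone.
unique = sum(1 for count in combos.values() if count == 1)
print(f"{unique} of {len(records)} records are uniquely identifiable")
```

In this toy sample, three of the five records are unique on those three fields. In real population data, the uniqueness rate is far higher, which is what makes casually dropping your birthday and ZIP code into a health query so risky.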
Data breach exposure is real and documented. Over 225,000 OpenAI credentials were found on the dark web in 2024–2025, stolen by infostealer malware.
In February 2025, a threat actor claimed to have obtained credentials for 20 million OpenAI accounts.
In 2025, a serious flaw in McDonald’s AI recruitment chatbot “McHire” allowed researchers to access data of 64 million job applicants using the default password “123456.”
OpenAI’s own ChatGPT memory system now has the ability to “reference all your past conversations,” meaning details you provided weeks or months ago can be recalled and surfaced in future interactions in ways that may surprise users.
For children and teenagers specifically, a Stanford study raised a red flag that has received insufficient attention: “Developers’ practices vary in this regard, but most are not taking steps to remove children’s input from their data collection and model training processes.”
What’s OK To Share
- General knowledge questions and research.
- Creative writing and fiction.
- Coding, debugging, and technical problems.
- Learning, studying, and skills development.
- Brainstorming and idea generation.
- Public information and news analysis.
What’s Not OK To Share
- Full name, address, date of birth, or Social Security/national ID numbers.
- Medical records, diagnoses, symptoms with identifying details, or mental health disclosures.
- Your lawyer’s advice, legal strategy, or anything related to ongoing litigation.
- Passwords, PINs, and login credentials (including third-party services).
- Confidential business information, trade secrets, and proprietary code.
- Financial account details, credit card numbers, or tax information.
- Immigration status or sensitive government documentation.
- Sensitive personal relationships or private communications about real individuals.
Image Credits: Google Images
Sources: Reuters, Quartz, NDTV
Find the blogger: @chirali_08
This post is tagged under: ChatGPT, ChatGPT app, ChatGPT news, ChatGPT openai, ChatGPT users, Technology, ai, ai chatbots, ai chatbot information, ai chatbot safe, information, artificial intelligence, claude safe, chatgpt safe, chatgpt sensitive information, claude sensitive information
Disclaimer: We do not own any rights or copyrights to the images used; these images have been sourced from Google. If you require credits or wish to request removal, please contact us via email.
Other Recommendations:
Tons Of ChatGPT Users Are Mass-Uninstalling The App, But Why?