The widespread use of artificial intelligence (AI), and especially of AI chatbots like Claude, ChatGPT, and Google Gemini, has reached levels few imagined possible.
In a remarkably short time, they have become so normalised, so embedded in our daily vernacular, that it is unremarkable to say “Oh, I’ll just ChatGPT this” or “Let me ask Gemini that.”
These chatbots have become this generation’s Google: the brand name now refers less to the platform itself and functions more as a verb in its own right. People treat these chatbots as everything from a career counsellor to a dictionary, a therapist, and in some cases even a partner or spouse.
They tell ChatGPT about their health problems. They paste their company’s business strategy into Claude. They confide relationship troubles to Gemini. They ask for help drafting legal responses and include everything their lawyer told them in confidence.
And then they close the tab and assume it’s gone. It isn’t.
The safety concerns around these chatbots are not news to anyone; thousands of reports over the last couple of years have documented how vulnerable these platforms remain. A recent court ruling, however, has shed fresh light on the problem.
The Court Case That Should Concern Everyone
In February 2026, a federal judge in Manhattan issued a ruling that sent shockwaves through the legal profession and, for anyone paying attention, through the broader population of AI chatbot users.
U.S. District Judge Jed Rakoff of the Southern District of New York ruled that Bradley Heppner, the former chair of bankrupt financial services company GWG Holdings, who had used Anthropic’s chatbot Claude to prepare reports about his own criminal defense strategy, had no legal right to keep those conversations private.
He ordered Heppner to hand over 31 chatbot-generated documents to prosecutors. The ruling turned on a fundamental legal observation.
“No attorney-client relationship exists, or could exist, between an AI user and a platform such as Claude,” Judge Rakoff wrote.
He also noted something that most users have never thought about: Claude’s own privacy policy “expressly provided that users have no expectation of privacy in their inputs.”
That phrase, “no expectation of privacy in their inputs,” has led many legal experts to warn clients and the general public against using AI as if it were a personal advisor.
Much of the public may already be wary of AI chatbots on technical grounds, fearing that their information could be stolen, but far fewer realise that no legal protections keep their conversations with AI private.
At least not for the time being.
Attorney-client privilege, which allows a client to keep communications with their lawyer private, does not appear to extend to AI tools.
Alexandria Gutiérrez Swette, a lawyer at New York-based Kobre & Kim, speaking with Reuters, said, “We are telling our clients: You should proceed with caution here.”
New York-based firm Sher Tremonte went further, including explicit language in client contracts stating that “Disclosure of privileged communications to a third-party AI platform may constitute a waiver of the attorney-client privilege.”
Law firm Debevoise & Plimpton has also posted a notice on its website advising that if a lawyer does recommend using AI for legal research, then the client should start the session with the line “I am doing this research at the direction of counsel for X litigation.”
The National Law Review, in a formal legal analysis, articulated the practical danger for ordinary people and businesses in plain terms: “Using a third-party chatbot as a substitute for a confidential conversation with counsel can implicate waiver arguments and discovery exposure.
Many AI chat tools are provided by third parties whose terms and privacy practices may allow prompts and outputs to be stored, reviewed, used for product improvement, shared with vendors, or produced under legal process.
Even if the tool feels private, unless it is configured properly, using it may mean disclosing sensitive facts to someone other than your lawyer.”
What Else Can Go Wrong?
The legal exposure is serious, but it is only one dimension of the risk landscape.
A 2024 EU audit found that 63% of ChatGPT user data contained personally identifiable information (PII), while only 22% of users knew they could opt out of data collection.
Research cited by Wald.ai showed that 87% of U.S. citizens can be uniquely identified using just three data points: their gender, ZIP code, and date of birth, a combination many users include in health or lifestyle queries without a second thought.
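A quick back-of-envelope calculation shows why just three such data points go so far. The figures in the sketch below are rough, assumed approximations for illustration, not census data:

```python
# Rough back-of-envelope: how many distinct (gender, ZIP code, birth date)
# combinations exist versus people to fill them? All figures here are
# approximate assumptions for illustration only.
zip_codes = 42_000          # roughly the number of US ZIP codes
birth_dates = 365 * 80      # ~80 years of plausible birth dates
genders = 2

combinations = zip_codes * birth_dates * genders
population = 330_000_000    # approximate US population

print(f"{combinations:,} possible combinations for {population:,} people")
print(f"on average {population / combinations:.2f} people per combination")
# With ~2.4 billion pigeonholes for ~330 million people, most
# combinations can identify at most one individual.
```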
Data breach exposure is real and documented. Over 225,000 OpenAI credentials were found on the dark web in 2024–2025, stolen by infostealer malware.
In February 2025, a threat actor claimed to have obtained credentials for 20 million OpenAI accounts.
In 2025, a serious flaw in McDonald’s AI recruitment chatbot “McHire” allowed researchers to access data of 64 million job applicants using the default password “123456.”
OpenAI’s own ChatGPT memory system now has the ability to “reference all your past conversations,” meaning details you provided weeks or months ago can be recalled and surfaced in future interactions in ways that may surprise users.
For children and teenagers specifically, a Stanford study raised a red flag that has received insufficient attention: “Developers’ practices vary in this regard, but most are not taking steps to remove children’s input from their data collection and model training processes.”
What’s OK To Share
- General knowledge questions and research: Asking about historical events, scientific concepts, geography, language, culture, mathematics, or any topic where the information is already public and the query reveals nothing personal about you is generally safe. “What were the causes of World War I?” or “Explain quantum entanglement” poses no privacy risk.
- Creative writing and fiction: Writing fiction, poetry, stories, scripts, game narratives, or other creative content is low-risk as long as the characters and situations are clearly fictional and do not map to real personal experiences, real individuals, or real legal situations. Asking for help with a novel’s plot carries no material risk.
- Coding, debugging, and technical problems: Asking for help with generic coding problems, algorithms, syntax errors, or software concepts is generally safe, as long as the code you paste does not contain proprietary business logic, API keys, passwords, customer data, or trade secrets. Asking how to build a sorting algorithm: safe. Pasting your company’s production database schema: not safe. (See the scrubbing sketch after this list.)
- Learning, studying, and skills development: Using AI to learn languages, understand academic subjects, prepare for exams, summarize publicly available material, or develop new skills is well within the safe zone. The key is that you are receiving general information rather than sharing personal data.
- Brainstorming and idea generation (general): General brainstorming about publicly available topics (marketing concepts, creative campaign ideas, hypothetical business models) is relatively low-risk as long as it does not include real client names, proprietary product details, or unreleased company strategies.
- Public information and news analysis: Asking for help understanding publicly available news, policy, regulations, or publicly filed documents poses minimal privacy risk, as you are not contributing personal information.
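If you do share code, it is worth scrubbing obvious secrets before pasting. Below is a minimal, illustrative sketch of that idea; the regex patterns and placeholder labels are assumptions for demonstration, not an exhaustive scanner, and dedicated secret-scanning tools do this far more thoroughly.

```python
import re

# Minimal, illustrative patterns -- assumed for demonstration only.
# Real secret scanning should rely on a dedicated tool.
SECRET_PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "<REDACTED_API_KEY>"),    # OpenAI-style key shape
    (re.compile(r"AKIA[0-9A-Z]{16}"), "<REDACTED_AWS_KEY>"),       # AWS access key ID shape
    (re.compile(r"(?i)(password|passwd|pwd)\s*[:=]\s*\S+"), r"\1=<REDACTED>"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<REDACTED_EMAIL>"),  # email addresses
]

def scrub(text: str) -> str:
    """Replace anything matching a known secret pattern with a placeholder."""
    for pattern, replacement in SECRET_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

snippet = 'PASSWORD = "hunter2"  # contact: dev@example.com'
print(scrub(snippet))
# PASSWORD=<REDACTED>  # contact: <REDACTED_EMAIL>
```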
What’s Not OK To Share
- Full name, address, date of birth, or Social Security / national ID numbers: Once in a training pipeline, this information cannot be fully removed, even after account deletion. (A rough self-audit sketch follows this list.)
- Medical records, diagnoses, symptoms with identifying details, or mental health disclosures.
- Your lawyer’s advice, legal strategy, or anything related to ongoing litigation.
- Passwords, PINs, and login credentials: Never share a password, security question answer, or authentication token with any AI platform. Research shows AI password crackers can break 51% of common passwords within one minute.
- Confidential business information, trade secrets, and proprietary code: Company strategy documents, unreleased product plans, client lists, financial projections, M&A plans, internal pricing structures, and proprietary source code should never be pasted into consumer-facing AI chatbots.
- Other people’s personal information without their consent: Do not share third parties’ names, medical situations, financial circumstances, relationship details, or employment information without their knowledge and consent. This includes describing real disputes involving named real individuals, submitting HR materials about employees, or sharing a friend’s private situation.
- Financial account details, credit card numbers, or tax information.
- Immigration status or sensitive government documentation: For individuals in legally precarious situations (undocumented immigrants, asylum seekers, whistleblowers, and political dissidents), the stakes of data disclosure are categorically higher. Government subpoenas can compel AI companies to disclose stored conversations.
- Login information for third-party services: Avoid granting AI tools access to external accounts (email, calendar, cloud storage) unless absolutely necessary, and always review the scope of permissions requested.
- Sensitive personal relationships or private communications about real individuals.
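As a rough self-audit before hitting send, a few pattern checks can flag the most obvious identifiers. The sketch below is illustrative only; the patterns and category labels are assumptions, and no regex list can catch everything, so it complements judgment rather than replacing it.

```python
import re

# Illustrative PII checks -- assumed patterns for a rough self-audit,
# not a substitute for judgment or a real data-loss-prevention tool.
PII_CHECKS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card (16 digits)": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "date of birth": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone number": re.compile(r"\b\d{3}[-. ]\d{3}[-. ]\d{4}\b"),
}

def pii_warnings(prompt: str) -> list[str]:
    """Return the PII categories detected in a draft prompt."""
    return [label for label, pattern in PII_CHECKS.items() if pattern.search(prompt)]

draft = "My SSN is 123-45-6789 and I was born on 04/12/1991. What benefits do I qualify for?"
hits = pii_warnings(draft)
if hits:
    print("Warning: prompt appears to contain:", ", ".join(hits))
# Warning: prompt appears to contain: US SSN, date of birth
```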
Image Credits: Google Images
Sources: Reuters, Quartz, NDTV
Find the blogger: @chirali_08
This post is tagged under: ChatGPT, ChatGPT app, ChatGPT news, ChatGPT openai, ChatGPT users, Technology, ai, ai chatbots, ai chatbot information, ai chatbot safe, information, artificial intelligence, claude safe, chatgpt safe, chatgpt sensitive information, claude sensitive information
Disclaimer: We do not own any rights or copyrights to the images used; these images have been sourced from Google. If you require credits or wish to request removal, please contact us via email.