Anthropic alters user data policy, igniting privacy debate

By Storyboard18 | Aug 29, 2025 8:44 AM

Anthropic, the company behind the AI assistant Claude, is making significant changes to its user data policy, now requiring users to actively opt out if they don't want their conversations used for model training. The new policy, which takes effect on September 28, marks a major shift from Anthropic’s previous stance of not using consumer chat data for this purpose.

Previously, consumer chat data—including prompts and outputs—was automatically deleted after 30 days. Under the new rules, data from users who don't opt out will be retained for up to five years and used to train future AI models. This change applies to users of Claude Free, Pro, and Max, but not to business customers or API users.

While Anthropic says the move is designed to improve AI safety and performance, many in the industry see the shift as a strategic response to competitive pressure from rivals such as OpenAI and Google. Training large language models requires vast amounts of high-quality conversational data, and access to millions of user interactions provides a significant competitive advantage.

The policy change also highlights broader industry confusion and privacy concerns. The design of the new flow, which presents users with a prominent "Accept" button and a smaller, pre-selected toggle for data sharing, raises questions about whether users are giving true, informed consent. Such patterns have previously drawn scrutiny from the Federal Trade Commission (FTC), which has warned AI companies against burying policy changes in fine print or obscuring them with deceptive design.

The move by Anthropic comes as rival OpenAI faces its own legal battles, including a lawsuit from The New York Times that seeks to force the company to retain all consumer chat data indefinitely, a demand OpenAI has called "unnecessary" and in conflict with its privacy commitments.

First Published on Aug 29, 2025 8:50 AM
