Claude creator Anthropic has given customers using its Free, Pro, and Max plans one month to prevent the engine from storing their chats for five years by default and using them for training.
A popup will ask existing users whether they want to opt out of a new "Help improve Claude" function, and Anthropic will put a similar question to new users during app setup. Opting in extends the data retention window from 30 days to five years, or roughly 1,826 days depending on leap days. Even customers who opt out will still have their conversations stored for 30 days. After September 28, the popup will no longer appear, and users will have to adjust data retention in Claude's privacy settings.
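That five-year figure is simple calendar arithmetic: 365 × 5 days, plus one day for each leap day the span crosses. A quick sketch, using the September 28 deadline purely as an illustrative start date (Anthropic has not said which date anchors the window):

```python
from datetime import date

# Illustrative start date: the opt-out deadline mentioned above.
start = date(2025, 9, 28)
# The same calendar date five years later.
end = date(start.year + 5, start.month, start.day)

# 5 * 365 = 1,825 days, plus one leap day (29 Feb 2028) in this span.
print((end - start).days)  # → 1826
```

A five-year span crossing two leap days (e.g. starting in early 2024) would come to 1,827 days instead, hence "give or take leap years."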
Conversations that users delete will not be retained, Anthropic said, but chats flagged for objectionable content could be kept for seven years. Discussions about nuclear weapons, for example, would trigger that process.
Pro and Max users currently pay $20 and $100 a month respectively for access to Claude, but that won't buy them out of Anthropic's data grab. The new policy does not apply to commercial, educational, or government customers, however, nor to API use via Amazon Bedrock, Google Cloud's Vertex AI, and other commercial partners.
The exceptions for commercial and government contracts are notable. In the latter case, the AI biz is in the running for a potentially lucrative deal with the US General Services Administration to integrate AI into government systems and reduce the government's reliance on human staff to handle dealings with citizens.
A spokesperson declined to expand on the original statement, other than to tell The Register that the "updated retention length will only apply to new or resumed chats and coding sessions, and will allow us to better support model development and safety improvements."
It has been a busy week for Anthropic, which also launched a new Chrome extension to try to get more people using Claude for web research. Anthropic has limited the rollout to 1,000 users, but may expand the program once it has sorted out the technical issues.
Anthropic also admitted that online criminals are harnessing the AI bot to help with computer intrusions and remote worker fraud. The California biz says it has blocked one North Korean attempt to turn its AI engine to malevolent ends and is on guard for more people trying to abuse its tech. ®
Source: The Register