In short
- Anthropic has added passport and selfie verification for Claude; no other major AI chatbot currently requires the same.
- The move comes weeks after millions signed up for Claude specifically over OpenAI's surveillance deal.
- Verification data goes to Persona's servers, not Anthropic's, and will not be used to train models.
Anthropic quietly rolled out identity verification requirements for Claude today, asking certain users to hand over a government-issued photo ID and a live selfie, something its rivals do not require.
"We are introducing identity verification for a few use cases, and you may see a verification prompt when accessing certain capabilities, as part of our routine platform integrity checks, or other security and compliance measures," Anthropic said. "We only use your verification information to confirm who you are and not for any other purposes."
Users left OpenAI for Anthropic in droves in February after OpenAI signed a deal to deploy AI on Pentagon classified networks, a contract Anthropic turned down over concerns about mass surveillance and autonomous weapons. Daily signups surged, and free users were up 60% since January, Anthropic said at the time. The privacy-conscious crowd had found its home.
That crowd, it seems, may now have some paperwork to prepare if it wants to keep using Claude. Reactions so far have been largely negative, with users pointing out that this is a deliberate choice by Anthropic, not a policy or mandate imposed on it as a provider by any government.
According to the help center page, which went live on April 14, Anthropic has chosen Persona Identities as its verification partner, the same KYC infrastructure used across financial services, and requires a physical, undamaged passport, driver's license, or national identity card. Photocopies, mobile IDs, and student credentials do not count. A live selfie may also be required.
The policy isn't universal yet. Verification will trigger when accessing "certain capabilities," during "routine platform integrity checks," or as part of security and compliance measures. Anthropic hasn't said publicly which features are gated, or what user behavior might prompt a check. The company did not immediately respond to Decrypt's request for additional details.
On data handling, Anthropic draws a careful line: your ID and selfie go to Persona's servers, not Anthropic's own systems. The company says it is the data controller setting the terms, and that Persona can use the information to verify identity and improve fraud detection. The data is encrypted in transit and at rest, excluded from model training, and will not be shared with third parties for marketing, something Anthropic has been careful to guarantee since its earliest commercial policies.
Careful assurances, however, have a history of meeting careless infrastructure. An October 2025 breach at Discord exposed roughly 70,000 government IDs users had submitted for age verification. Persona is a major player in this space, but third-party custody of government documents has shown repeatedly that no third party is immune.
Tighter identity controls also fit a pattern Anthropic has been building toward. In December, the company announced classifiers to detect users who self-identify as minors. Several adult users had their accounts suspended anyway, reporting that entire project histories were wiped while they tried to appeal incorrect flags.
Accounts registered from regions Anthropic does not officially serve are also subject to restrictions, a detail that lands hardest on Chinese users accessing Claude through intermediaries, since a live selfie matched against a physical government document is hard to fake your way through.