Fresh from watching rival OpenAI stick its nose into patient records, Anthropic has decided now is the perfect moment to march Claude into US healthcare too, promising to fix medicine with yet more AI, APIs, and carefully worded reassurances about privacy.
In a blog post over the weekend, Anthropic trumpeted the launch of Claude for Healthcare alongside expanded life sciences tools, a double-barreled push to make its chatbot not just a research assistant for scientists but an actual cog in the $4 trillion-plus American healthcare machine.
If this feels less like healthcare reform and more like an AI land rush toward anything stuffed with data and VC money, you've got the gist.
Anthropic is selling Claude for Healthcare as a HIPAA-compliant way to plug its model into the plumbing of US medicine, from coverage databases and diagnostic codes to provider registries. Once wired up, Claude can help with prior authorization checks, claims appeals, medical coding, and other administrative chores that currently clog up clinicians' inboxes and sanity.
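For the curious, that plumbing presumably looks a lot like ordinary tool use through Anthropic's Messages API. The sketch below is illustrative only: the `check_prior_auth` tool, its schema, and the payer-database lookup are our own invention for the sake of the example, not Anthropic's published connector spec.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical tool describing a payer-coverage lookup. Anthropic's actual
# healthcare connectors are not public API surface, so this schema is invented.
prior_auth_tool = {
    "name": "check_prior_auth",
    "description": "Look up whether a CPT procedure code requires prior "
                   "authorization under a given insurance plan.",
    "input_schema": {
        "type": "object",
        "properties": {
            "cpt_code": {"type": "string", "description": "CPT procedure code"},
            "plan_id": {"type": "string", "description": "Payer plan identifier"},
        },
        "required": ["cpt_code", "plan_id"],
    },
}

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # any current Claude model will do
    max_tokens=1024,
    tools=[prior_auth_tool],
    messages=[{
        "role": "user",
        "content": "Does CPT 70553 (brain MRI with contrast) need prior "
                   "authorization under plan XYZ-1234?",
    }],
)

# If Claude decides to call the tool, the stop reason is "tool_use"; the
# caller runs the actual lookup and returns the result in a follow-up turn.
for block in response.content:
    if block.type == "tool_use":
        print("Claude wants:", block.name, block.input)
```

The model never touches the coverage database directly – it asks, and the hospital's own systems answer, which is presumably how the HIPAA-compliance story is meant to hold together.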
"Claude can now connect to industry-standard systems and databases to help clinicians and administrators find the data they need and generate reports more efficiently," Anthropic wrote. "The aim is to make patients' conversations with doctors more productive, and to help users stay well-informed about their health information."
The life sciences side of the announcement adds integrations with Medidata and ClinicalTrials.gov, promising to help with clinical trial planning and regulatory wrangling. Because nothing says "we're a serious AI partner for pharma" quite like rifling through clinical trial registries.
There's plenty of lofty talk about helping researchers and saving time, but the underlying logic is the same one driving most AI-for-industry plays – admin drudgery is far easier, and far more profitable, to automate than care itself.
The company is keen to emphasize that Claude won't quietly slurp up your health data to train future models: data sharing is opt-in, connectors are HIPAA-compliant, and "we do not use user health data to train models," Anthropic reassures us. That's the polite way of saying it would let hospitals, insurers, and maybe patients themselves hand over structured medical forms and records as long as lawyers and compliance teams are satisfied.
And yes, patients may get to play too. In beta, Claude can integrate with services like HealthEx, Apple HealthKit, and Android Health Connect so subscribers can ask the bot to explain their lab results or summarize their personal medical history. That'll be handy right up until someone discovers that giving a large language model access to health apps comes with all the usual "AI hallucination" caveats and eyebrow-raising liability questions – as the sketch below suggests, there's not much standing between your bloodwork and the bot's best guess.
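On the consumer side, the flow likely amounts to handing the model structured readings and asking for a plain-English gloss. A back-of-the-envelope sketch, again with invented data standing in for whatever HealthEx or HealthKit would actually supply:

```python
import anthropic

client = anthropic.Anthropic()

# Invented lab panel; Anthropic's real integration format is not public.
lab_results = {
    "hemoglobin_a1c": {"value": 5.9, "unit": "%", "reference": "4.0-5.6"},
    "ldl_cholesterol": {"value": 131, "unit": "mg/dL", "reference": "<100"},
}

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": f"Explain these lab results in plain English, flagging "
                   f"anything outside the reference range: {lab_results}",
    }],
)

print(response.content[0].text)  # the explanation your doctor didn't have time for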
Anthropic's announcement follows hot on the heels of OpenAI's ChatGPT Health ploy, which instantly raised privacy concerns by suggesting clinicians and consumers alike could feed it raw medical records and get back summaries and treatment suggestions. That gambit drew criticism from privacy advocates worried about where all that data might go, a conversation Anthropic's carefully lawyered phrasing aims to pre-empt.
So here we are: two of the biggest names in "responsible AI" now neck-deep in the US healthcare sector, promising to make sense of everything from coverage policies to clinical trial data. The claims are big, the caveats are long, and the proof, as ever, will come later. ®
Source: The Register