Opinion Anthropic CEO Dario Amodei has published a novella-length essay about the risk of superintelligent AI, something that doesn't yet exist.
It's as good an advertisement for the summarization capabilities of the company's Claude model family as you can find.
tl;dr AI presents a serious risk and intervention is required to prevent disaster, though not so much that the regulations spoil the party. Go team.
The AI threat has been a talking point among tech cognoscenti for more than a decade, and longer than that if you count sci-fi alarmism. Rewind to 2014 when Elon Musk warned: "With artificial intelligence we are summoning the demon."
You can measure Musk's concern by his investment in xAI.
AI luminary Geoffrey Hinton offered a more convincing example of concern through his resignation from Google and the doubts he expressed about his life's work in machine learning. His misgivings recently inspired AI industry insiders to try to pop the AI bubble with poisoned data.
If you're concerned about this, you may find consolation in the fact that Amodei made a prediction that has not come to pass. In March 2025, he said: "I think we'll be there in three to six months – where AI is writing 90 percent of the code." And within 12 months, he said, AI would essentially be writing all of the code. Spoiler: human developers still have jobs.
But the problem with Amodei's essay of almost 22,000 words is his insistence on framing the fraught state of the world in terms of AI. If you're a hammer, everything looks like a nail. If you're head of an AI company, it's AI everywhere, all the time.
If you're, say, on the streets of Minneapolis, or Tehran, or Kyiv, or Gaza, or Port-au-Prince, or any other area short on supplies or stability, AI probably isn't at the top of your list of threats. Nor will it be a year or three from now.
Amodei floats his cautionary tale on the back of this scenario:
Suppose a literal "country of geniuses" were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist. The analogy is not perfect, because these geniuses could have an extremely wide range of motivations and behavior, from completely pliant and obedient, to strange and alien in their motivations. But sticking with the analogy for now, suppose you were the national security advisor of a major state, responsible for assessing and responding to the situation.
The analogy is not much better than the discredited infinite monkey theorem, which posits that a sufficient number of keyboard-equipped chimps would eventually produce the works of Shakespeare. Certainly 50 million brainiacs – proxies for AI models – could get up to some mischief, but the national security advisor of a major state has more plausible and present threats to consider.
If you look at the leading causes of mortality in 2023, AI doesn't show up. The dominant category is circulatory (e.g. heart disease) at 28.5 percent, followed by neoplasms (e.g. cancer) at 22.0 percent. External causes account for 7.0 percent of the total. That includes suicide, at 2.1 percent of the total, which is actually something that AI may make worse when people try to use it to manage mental health problems.
Polling company Ipsos conducts a monthly "What Worries the World" survey, and AI doesn't make the list. When the biz last checked the global public pulse in September 2025, the top concerns were: crime and violence (32 percent); inflation (30 percent); poverty and social inequity (29 percent); unemployment (28 percent); financial/political corruption (28 percent); and, trailing far behind, coronavirus (2 percent).
AI now plays a role in some of these concerns. Investment in AI datacenters has raised utility prices and led to a shortage of DRAM. The construction of these datacenters is increasing demand for water – though Amodei contends this isn't a real problem. High capex spending may be accompanied by layoffs as companies look for ways to compensate by cutting costs. And for some occupations, AI may be capable enough to automate some portion of job requirements.
But focusing on the danger and unpredictability of AI misses the point: it's people who allow this and it's people who can manage it. This is a debate about regulation, which is presently minimal.
We can choose how much AI costs by deciding whether creative work can be captured, laundered, and resold without compensation to those who created it. We can choose whether the government should subsidize the development of these models. We can impose liability on model makers when models can be used to generate sexual abuse material or when models make material errors. We can decide not to let AI models make nuclear launch decisions.
Amodei does identify some risks that are more pressing than the theorized legion of genius models. "The thing to worry about is a level of wealth concentration that will break society," he writes, noting that Elon Musk's $700 billion net worth already exceeds the ~2 percent of GDP that John D. Rockefeller's wealth represented during the Gilded Age.
He makes that point amid speculation that the wealth generated by AI companies will lead to personal fortunes in the trillions, which is a possibility if the AI bubble doesn't collapse on itself.
But AI companies still have to prove they can turn a profit as open source models make headway. Anthropic isn't expected to become profitable until 2028. OpenAI isn't projected to turn a profit until 2030, if the company survives that long after burning "roughly 14 times as much cash as Anthropic," according to the Wall Street Journal.
Amodei's optimism about revenue potential aside, it's the money that matters. Those not blessed with Silicon Valley wealth may yet develop an aversion to billionaire-controlled tech platforms that steer public opinion and suppress regulation.
Let's not forget that much of the investment in AI followed from the belief that AI models will break Google's grip on search and advertising, which has persisted due to the lack of effective antitrust enforcement.
Amodei argues for a cautious path, one that focuses on denying China access to powerful chips.
"I do see a path to a slight moderation in AI development that is compatible with a realist view of geopolitics," he writes. "That path involves slowing down the march of autocracies towards powerful AI for a few years by denying them the resources they need to build it, namely chips and semiconductor manufacturing equipment."
His path avoids a more radical approach driven by the "public backlash against AI" that he says is brewing.
"The most constructive thing we can do today is advocate for limited rules while we learn whether or not there is evidence to support stronger ones," Amodei argues.
That doesn't sound like someone worried about the AI demon. It sounds like every business leader who wants to minimize burdensome regulations.
The fact is no one wants superintelligent AI, which by definition would make unexpected decisions. Last year, when AI agents took up all the air in the room, the goal was to constrain behavior and make agents predictable and knowable, to make them subservient rather than independent, to prevent them from deleting all your files and posting your passwords on Reddit.
And if the reported slowdown in AI model advancement persists, we'll be free to focus on more pressing problems – like preventing billionaires from drowning democracy in a flood of AI-generated misinformation and slop. ®
Source: The Register