AI is the new Oracle of Delphi. That’s bad news

NRC's editors select the best articles from The Economist for a broader perspective on international politics and economics.

Societies urgently need to confront the ethics of prediction, writes Carissa Véliz.

This article comes from The Economist.

Prediction is as old as intelligent life. In pre-industrial times, part of what made humans strong despite their physical disadvantages was an ability to foresee the behaviour of other animals, which made it easier to hunt them. In the modern world prediction continues to confer an array of competitive advantages: if you run a company, for instance, everything from choosing which businesses to enter or exit to finding the right location for operations depends in part on forecasting.

The AI age has brought a boom in prediction. Machine learning—the kind of AI most commonly used in chatbots and decision-making algorithms such as hiring software—is a prediction machine. It uses historical data to fill in blanks, whether it is predicting text or deciding whether someone will be a good employee. "Oracle" is even a technical term in machine learning: a system of perfect prediction.

Predictive algorithms are everywhere, opening and closing doors for us: deciding whether we get insurance, or a loan, or an apartment. Law-enforcement agencies use predictive policing tools that flag neighbourhoods or individuals as higher-risk. Predictive algorithms are used in the justice system to inform decisions about bail, sentencing and parole. Streaming platforms and social-media feeds rely on them to curate content.

The use of predictive technology to make decisions about people raises ethical questions that humanity has devoted worryingly little time to considering. Giving these questions more thought will lead to better decisions about when prediction should be used, and when it shouldn’t.

Predictions are, at best, educated guesses. Often, they are riddled with prejudice; a rich literature is emerging on AI's predictive biases caused by biased training data or algorithmic design flaws. Verdicts by algorithmic prediction also create Kafkaesque processes in which people cannot contest decisions because they are not based on clearly defined criteria, but rather on black-boxed pattern-matching by a machine.

At a deeper level, there is arrogance in social predictions. When we predict people’s future as if we were forecasting the weather, we are treating them as inert objects, not as agents who have a say in their future and can defy their odds.

The outputs of predictive AI might sound like a description of the world, but they are "normative": they implicitly tell us what to do. When a large language model tells me that in the future everyone will be using large language models for their business, it is encouraging me to go out there and fulfil its creators' vision of the world. In philosophical jargon, predictions are "speech acts", closer to commands than descriptions. When we give them credence and act accordingly, we are obeying them.

That leads to the ethical question of whether there might be things we shouldn't predict, even if we could. Predictions about human beings have a propensity to bend reality towards themselves. At different times in ancient Rome, authorities banned prophecies about the death of the emperor, for the simple reason that they tended to result in a murdered emperor. An AI predicting a bleak financial future for someone can push that person further into economic disadvantage by denying them credit. A prediction of future disease might push up insurance premiums, making finances tighter and causing health to worsen from stress.

It is disquieting that now that we are using prediction more than ever, we have no rules for it. There is not even much public debate about guardrails. Big questions abound. In the context of justice, for instance, should we prefer transparent and contestable criteria to statistical pattern-matching for making decisions? In insurance, is it fair to base premiums increasingly on individualised predictions, which cuts against the pooling-of-risk principle at the heart of the industry?

The lack of rules leaves anyone free to use AI to make any kind of prediction about anyone else and act accordingly, without even notifying the subject of the prediction about the prophecies that are influencing their fate. If an AI deems someone unemployable, and most companies use similar AIs to hire people, that person will not get a job. But it might be that the AI is achieving "accuracy" at the price of creating the reality it is purporting to predict. Even worse, self-fulfilling prophecies act like perfect crimes because they don't produce error signals. We will never know whether the person who didn't get a job would have been a great employee because that data doesn't exist.

It is a mistake to think about AI purely in technological terms. It is crucial to consider it from an ethical standpoint, too—not for the sake of intellectualism but because its predictive function has a concrete, and questionable, effect on how humans behave and what they can achieve. Although its methods are vastly different from those of bygone prophecies, the political function AI plays, in dressing up contestable value-laden decisions as facts, is analogous to that of history's oracles, seers and astrologers. Efforts to predict people's future go hand in hand with efforts to control it. If we know what someone will do tomorrow, there is a good chance it's because we are determining their behaviour.

Beware of prophets and predictions. Only when we accept that we don’t know what the future holds, and act accordingly, can we be sure to live in a free society.

© 2026 The Economist Newspaper Limited. All rights reserved.

Source: NRC
