
Diane Coyle highlights the risks of entrusting more decisions to machine-learning and artificial-intelligence systems

Friends of mine who work in the arts and humanities have started doing something unusual, at least for them: poring over data. This is due to the pandemic, of course. Every day, they check COVID-19 case numbers, how slowly or quickly the reproduction (R) number is declining, and how many people in our area got vaccinated the day before.

Meanwhile, social media are full of claims and counterclaims about all manner of other data. Is global poverty declining or increasing? What is the real level of US unemployment? The scrutiny, sometimes leading to tetchy arguments, results from people’s desire to cite – or challenge – the authority of data to support their position or worldview.

But in other areas where data are used, there is remarkably little focus on its reliability or interpretation. One striking example I have noticed recently concerns the “CAPTCHA” tests designed to protect websites against bots, which ask you to prove your humanity by identifying images containing common features such as boats, bicycles, or traffic lights. If your choice – even if correct – differs from that of the machine system using your selection to train an image-recognition algorithm, you will be deemed inhuman.
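
The failure mode is easy to see in a stylised sketch. Nothing below is drawn from any real CAPTCHA service, whose grading logic is not public; it simply shows how a grader that scores users against the system's own labels, rather than against the truth, will reject a correct human whenever the system itself is wrong.

    # Hypothetical sketch: a grader that checks users against the system's
    # own (possibly wrong) labels, not against what is really in each tile.
    GROUND_TRUTH = {1: "boat", 2: "bicycle", 3: "traffic light"}   # what the tiles actually show
    SYSTEM_LABELS = {1: "boat", 2: "boat", 3: "traffic light"}     # what the model currently believes

    def is_human(user_selection: dict) -> bool:
        """Accept the user only if they agree with SYSTEM_LABELS on every tile."""
        return all(user_selection.get(tile) == label
                   for tile, label in SYSTEM_LABELS.items())

    careful_user = dict(GROUND_TRUTH)      # a human who identifies every tile correctly
    print(is_human(careful_user))          # False: the system mislabels tile 2, so the human "fails"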

In this example, the machine’s error is obvious, although there is no appeal against it if you want to access the website it is guarding. But in other cases, it may not be possible to identify what conclusions either machine-learning systems or human analysts are drawing when they put more weight on data than the data can bear.

Economists are rushing to embrace the use of big data in their research, while many policymakers think artificial intelligence offers scope for greater cost-effectiveness and better policy outcomes. But before we entrust more decisions to data-based machine-learning and AI systems, we must be clear about the limitations of the data.

Already, too little attention is paid to the uncertainties inherent in economic data. Although policymakers generally appreciate that even something as basic as GDP growth is subject to large uncertainties and revisions, it seems impossible to stop people from building narratives on weak foundations.

For example, cross-country comparisons of the pandemic’s impact on national GDP are fraught with difficulty, owing to differences in economic structure and statistical methodology. But that does not stop claims about which economies are weathering the crisis better or worse.

Or consider the “true” rate of inflation. Seemingly technical disputes about how best to construct a price index mask profound distributional conflicts, such as those between borrowers and bondholders, or workers and employers.
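
A stylised two-good example, with invented figures, shows how much the formula alone can matter: a Laspeyres index weights price changes by the old basket of purchases, a Paasche index by the new one, and the gap between the two measured inflation rates is precisely what is at stake for, say, borrowers and bondholders.

    # Invented two-good economy: the choice of index formula alone
    # changes measured inflation, before any dispute about the data.
    p0 = {"rent": 100, "food": 10}   # base-period prices
    p1 = {"rent": 120, "food": 10}   # current-period prices
    q0 = {"rent": 1,   "food": 50}   # base-period quantities
    q1 = {"rent": 1,   "food": 30}   # current quantities (households cut back on food)

    laspeyres = sum(p1[g] * q0[g] for g in p0) / sum(p0[g] * q0[g] for g in p0)
    paasche   = sum(p1[g] * q1[g] for g in p0) / sum(p0[g] * q1[g] for g in p0)

    print(f"Laspeyres inflation: {laspeyres - 1:.1%}")   # 3.3%, weighting the old basket
    print(f"Paasche inflation:   {paasche - 1:.1%}")     # 5.0%, weighting the new basket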

The data we use shape our view of a complex, changing world. But data represent reality from a particular perspective. Data of the kind deployed in policy debates are rarely completely unanchored from the world they describe, but the lens they provide can be sharp or blurry – and there is no escaping the perspective they offer.

One possible reason for the current distrust of economic “expertise” is the growing gap between top-down, technical economic assessments based on familiar data series, and an alternative world of more granular data presenting the bottom-up picture. Standard economic statistics capture average experience, which ceases to be typical when people’s fortunes diverge.
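
A toy calculation, again with invented numbers, illustrates that divergence: if most incomes stagnate while a few surge, the average rises even though the typical (median) household sees no change at all.

    # Invented incomes for ten households: nine stagnate, one surges.
    import statistics

    before = [30_000] * 9 + [100_000]
    after  = [30_000] * 9 + [200_000]

    print(statistics.mean(before), statistics.mean(after))       # 37000 47000: average "growth"
    print(statistics.median(before), statistics.median(after))   # 30000 30000: the typical household gains nothing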

In general, advocates of evidence-based policy are aware of the inherent uncertainty of available data. Researchers take great care regarding sampling, the scope for error, and the limitations of the data-collection method used. But the degree of false certainty tends to increase with proximity to policy and political decision-making. Former US President Harry S. Truman is far from the only politician to have expressed impatience with economists who say, “on the one hand..., but on the other.”

But the current hunger for data-based certainty is becoming dangerous as we increasingly rely on technocratic decision procedures – including machine-learning systems – for policymaking in areas such as criminal justice, policing, and welfare. Democracies often rely on constructive ambiguity to reconcile conflicting interests, such as those regarding the distribution of returns to an asset, or to address the question of whether law-enforcement authorities should err on the side of imprisoning innocent people or letting criminals walk free. Claims to data-based authority minimise or eliminate the scope of ambiguity, with potentially significant consequences.

I am all in favour of more and better data, which have been essential to governments’ efforts to manage the pandemic. But the more we use data to make decisions, the more sensitive we must be to the fact that data paint an expert’s- or machine’s-eye view, based on categories devised by someone who is themselves a player in society’s status game. Otherwise, we will end up with decision processes just like those rogue CAPTCHA tests – insisting that a boat is a bicycle, and leaving other people with no choice but to agree.


Diane Coyle, Professor of Public Policy at the University of Cambridge, is the author, most recently, of Markets, State, and People: Economics for Public Policy. This content is © Project Syndicate, 2021, and is here with permission.
