
As algorithmic decision-making spreads across more policy areas, it is exposing social and economic inequities that were long hidden behind "official" data. Diane Coyle wants to use this new clarity in favour of 'a more just society'

By Diane Coyle*

Algorithms are as biased as the data they feed on. And all data are biased. Even “official” statistics cannot be assumed to stand for objective, eternal “facts.”

The figures that governments publish represent society as it is now, through the lens of what those assembling the data consider to be relevant and important. The categories and classifications used to make sense of the data are not neutral. Just as we measure what we see, so we tend to see only what we measure.

As algorithmic decision-making spreads to a wider range of policymaking areas, it is shedding a harsh light on the social biases that once lurked in the shadows of the data we collect. By taking existing structures and processes to their logical extremes, artificial intelligence (AI) is forcing us to confront the kind of society we have created.

The problem is not just that computers are designed to think like corporations, as my University of Cambridge colleague Jonnie Penn has argued. It is also that computers think like economists. An AI, after all, is as infallible a version of homo economicus as one can imagine. It is a rationally calculating, logically consistent, ends-oriented agent capable of achieving its desired outcomes with finite computational resources. When it comes to "maximizing utility," it is far more effective than any human.

“Utility” is to economics what “phlogiston” once was to chemistry. Early chemists hypothesized that combustible matter contained a hidden element – phlogiston – that could explain why substances changed form when they burned. Yet, try as they might, scientists never could confirm the hypothesis. They could not track down phlogiston for the same reason that economists today cannot offer a measure of actual utility.

Economists use the concept of utility to explain why people make the choices they do – what to buy, where to invest, how hard to work: everyone is trying to maximize utility in accordance with their preferences and beliefs about the world, and within the limits imposed by scarce income or resources. Despite not existing, utility is a powerful construct. It seems only natural to suppose that everyone is trying to do as well as they can for themselves.
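
To make the construct concrete, here is a minimal sketch of the textbook problem economists have in mind: an agent allocating a fixed budget across two goods to maximize utility. The log-utility function, prices, and budget below are illustrative assumptions, not anything from the article.

```python
# A toy "homo economicus": pick the bundle of two goods that maximizes
# utility subject to a budget constraint. The Cobb-Douglas utility
# U = 0.6*ln(x) + 0.4*ln(y), the prices, and the budget are all
# illustrative assumptions.
from math import log

budget, price_x, price_y = 100.0, 2.0, 5.0

def utility(x, y):
    return 0.6 * log(x) + 0.4 * log(y)

# Brute-force search over feasible bundles. (Cobb-Douglas has a
# closed-form answer; the point is the mechanical optimization.)
bundles = ((x / 10, (budget - price_x * x / 10) / price_y)
           for x in range(1, int(budget / price_x) * 10))
best = max(bundles, key=lambda b: utility(*b))
print(f"optimal bundle: x={best[0]:.1f}, y={best[1]:.1f}")
```

The search lands on spending 60% of the budget on the first good and 40% on the second, exactly the closed-form Cobb-Douglas answer; an AI is, in effect, this loop run at scale over far messier objectives.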

Moreover, economists’ notion of utility is born of classical utilitarianism, which aims to secure the greatest amount of good for the greatest number of people. Like modern economists following in the footsteps of John Stuart Mill, most of those designing algorithms are utilitarians who believe that if a “good” is known, then it can be maximized.

But this assumption can produce troubling outcomes. For example, consider how algorithms are being used to decide whether prisoners deserve parole. An important 2017 study found that algorithms far outperform humans in predicting recidivism, and could be used to reduce the "jailing rate" by more than 40% "with no increase in crime rates." In the United States, then, AIs could be used to reduce a prison population that is disproportionately black. But what happens when AIs take over the parole process and African-Americans are still being jailed at a higher rate than whites?

Highly efficient algorithmic decision-making has brought such questions to the fore, forcing us to decide precisely which outcomes should be maximized. Do we want merely to reduce the overall prison population, or should we also be concerned about fairness? Whereas politics allows for fudges and compromises to disguise such tradeoffs, computer code requires clarity.
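
That demand for clarity can be made mechanical. The sketch below uses synthetic risk scores (purely illustrative; it has no connection to any deployed parole system or real data) to show that "detain the riskiest 30% overall" and "detain the riskiest 30% of each group" are two different objective functions, and code has to commit to one.

```python
# Illustrative only: synthetic risk scores, no real system or data.
import random

random.seed(0)

# Group "B" is assigned higher scores on average, standing in for the
# effect of historically biased data feeding the risk model.
def draw_risk(group):
    r = random.random()
    return r if group == "A" else r ** 0.5  # skew B's scores upward

people = [(g, draw_risk(g))
          for g in (random.choice("AB") for _ in range(10_000))]

def detention_rate(group, threshold):
    scores = [r for g, r in people if g == group]
    return sum(r >= threshold for r in scores) / len(scores)

# Objective 1: one threshold detaining the riskiest 30% overall.
overall_t = sorted(r for _, r in people)[int(0.7 * len(people))]

# Objective 2: per-group thresholds detaining the riskiest 30% of each
# group, which equalizes detention rates by construction.
group_t = {g: sorted(r for gg, r in people if gg == g)[
               int(0.7 * sum(1 for gg, _ in people if gg == g))]
           for g in "AB"}

for g in "AB":
    print(g, f"single threshold: {detention_rate(g, overall_t):.2f}",
          f"per-group threshold: {detention_rate(g, group_t[g]):.2f}")
```

Neither objective is self-evidently the right one, which is exactly the article's point: the code refuses to let the tradeoff stay implicit.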

That demand for clarity is making it harder to ignore the structural sources of societal inequities. In the age of AI, algorithms will force us to recognize how the outcomes of past social and political conflicts have been perpetuated into the present through our use of data.

Thanks to groups such as the AI Ethics Initiative and the Partnership on AI, a broader debate about the ethics of AI has begun to emerge. But AI algorithms are of course just doing what they are coded to do. The real issue extends beyond the use of algorithmic decision-making in corporate and political governance, and strikes at the ethical foundations of our societies.

While we certainly need to debate the practical and philosophical tradeoffs of maximizing “utility” through AI, we also need to engage in self-reflection. Algorithms are posing fundamental questions about how we have organized social, political, and economic relations to date. We now must decide if we really want to encode current social arrangements into the decision-making structures of the future. Given the political fracturing currently occurring around the world, this seems like a good moment to write a new script.


* Diane Coyle is Professor of Public Policy at the University of Cambridge. This content is © Project Syndicate, 2018, and is used here with permission.


8 Comments

This sounds like heaven. I particularly like the prison example: with a little "fine-tuning" we could program AI to finally get rid of that horrendous 9-to-1 male-female prisoner imbalance!

Unlikely; men are simply more violent and greater risk-takers.

The usual academic-speak theory, used to look into something supposedly important, microscopically, until they have discussed it, written about it, or AI'd it to death. This is par for the course at tertiary level in the new millennium, right across the western world, I'm afraid. It sounds like a great topic, but the author answered her own question, as was the whole point of the article, by saying we need to re-code our software in the first place. Ask the right questions, get the right answers? Right? Not. The "right" in this instance is only true to the code-writer in the first place. Everything has a source. Even AI.
My theory says we are spending way too much time looking at screens & not enough time looking after one another. Life is only worth something when it is shared. It's no fun on your own. We need to care a bit more for our family, our neighbours & our colleagues, wherever they are. Love is the key. It is way more important than house prices, for example, but you wouldn't think so.
From what I can gather, tertiary-speak these days is so diverse, & so inclusive, that it's impossible to agree on a theory of anything - which is probably why there is so much confusion out there. In fact, if what I think is happening within the humanities & social sciences inside our western universities, pretty much across the planet, carries on for too much longer, it will continue to fatally undermine the value of traditional relationships, as it has been doing for 40-50 years now, to the point where it forces the male/female relationship into total breakdown (& despair), with the inevitable societal collapse that will follow.
If there is a cancer within our culture these days, it comes from the universities that are supposed to educate our young minds but which are sadly turning traditional thinking on its head - and we can see proof of that all around us these days, in the way we are treating one another. Our relationships suck!
If the universities really want to do something good for our culture, they could start by admitting that the nuclear family is the ultimate underwriter of our society, without which there would be no society.

Well spotted. Lowering recidivism rates sounds like a noble cause to me, buggering up the process rather less so. Address the causes of inequity, cope with the results, but don't confuse the two.

On a related topic, the latest algorithms in the finance world are those that model the processes of the earlier ones, to identify how the first-generation algorithms will react to price events. Talk about circular thinking, and runaway feedback machinery...

Imagine the first tribal leader/warlord being told about writing - this wonderful new technique for recording decisions - but that it's just a tool to be used. Ditto AI.

The trouble with "utility" is that most approaches to its use still assume perfect information, which is simply not the case.

We could go a long way to improving things by:
1) Adopting best practice from around the world. E.g. for prisoners it's not just about recidivism rates but about making sure the social structures are in place in the first instance to minimize the numbers ending up in jail. Even then, jail may not be the best treatment.
2) Using socioeconomic business-case analysis to tweak existing best practice towards even better practice.
3) Clearly there is a case for including AI in this process. AI already has better diagnostic rates for cancer than doctors. But what we need from AI is an account of how it made the decisions it did. Many AI systems are black boxes.

You nailed it. Assume you have two loved ones in prison and they are up for a parole hearing. One is accepted and the other rejected. You would demand an answer to the question "Why?". The answer 'because the AI program said so' will never be acceptable.

Excellent article. It shows up the deficiencies of classical economics, which has pervaded decision-making for a generation. The work of Kahneman and Tversky, Thaler, Ariely and others has thrown a harsh light on the notion of homo economicus.
I would, however, quibble with attributing utilitarianism to Mill, as the concept was originally formulated by Jeremy Bentham.
