AI has transformative power, but it needs to be deployed in an equitable and just manner

Are the AIs making decisions about your life fair? sorbetto/DigitalVision Vectors via Getty Images

Artificial intelligence’s capacity to process and analyse vast amounts of data has revolutionised decision-making processes, making operations in health care, finance, criminal justice and other sectors of society more efficient and, in many instances, more effective.

With this transformative power, however, comes a significant responsibility: the need to ensure that these technologies are developed and deployed in a manner that is equitable and just. In short, AI needs to be fair.

The pursuit of fairness in AI is not merely an ethical imperative but a requirement in order to foster trust, inclusivity and the responsible advancement of technology. However, ensuring that AI is fair is a major challenge. And on top of that, my research as a computer scientist who studies AI shows that attempts to ensure fairness in AI can have unintended consequences.

Why fairness in AI matters

Fairness in AI has emerged as a critical area of focus for researchers, developers and policymakers. It transcends technical achievement, touching on ethical, social and legal dimensions of the technology.

Ethically, fairness is a cornerstone of building trust and acceptance of AI systems. People need to trust that AI decisions that affect their lives – for example, hiring algorithms – are made equitably. Socially, AI systems that embody fairness can help address and mitigate historical biases – for example, those against women and minorities – fostering inclusivity. Legally, embedding fairness in AI systems helps bring those systems into alignment with anti-discrimination laws and regulations around the world.

Unfairness can stem from two primary sources: the input data and the algorithms. Research has shown that input data can perpetuate bias in various sectors of society. For example, in hiring, algorithms processing data that reflects societal prejudices or lacks diversity can perpetuate “like me” biases. These biases favour candidates who are similar to the decision-makers or those already in an organisation. When biased data is then used to train a machine learning algorithm to aid a decision-maker, the algorithm can propagate and even amplify these biases.
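
To make this mechanism concrete, here is a minimal sketch in Python. It is entirely synthetic and hypothetical, not drawn from any real hiring system: the historical hire labels carry a built-in penalty against one group, and a model trained on those labels reproduces it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical applicants: a genuine skill score plus a group attribute.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical decisions reward skill but quietly penalise group 1:
# the "like me" bias baked into the training labels.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, size=n)) > 0

# Train a model on the biased labels, as a decision aid would be.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical hire rate {hired[group == g].mean():.2f}, "
          f"model hire rate {pred[group == g].mean():.2f}")
```

The model is never instructed to discriminate; it simply learns the penalty from the labels, which is why a clean-looking pipeline can still propagate historical bias.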

Why fairness in AI is hard

Fairness is inherently subjective, influenced by cultural, social and personal perspectives. In the context of AI, researchers, developers and policymakers often translate fairness to the idea that algorithms should not perpetuate or exacerbate existing biases or inequalities.

However, measuring fairness and building it into AI systems is fraught with subjective decisions and technical difficulties. Researchers and policymakers have proposed various definitions of fairness, such as demographic parity, equality of opportunity and individual fairness.


These definitions involve different mathematical formulations and underlying philosophies. They also often conflict, highlighting the difficulty of satisfying all fairness criteria simultaneously in practice.
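
As a toy illustration (my own sketch with made-up numbers, not a formulation taken from the literature), the snippet below audits one set of predictions against two of these definitions and shows they need not agree:

```python
import numpy as np

# Eight hypothetical cases: the model's decision, the true outcome,
# and which protected group each person belongs to.
y_hat = np.array([1, 1, 0, 1, 0, 0, 1, 0])  # model's decisions
y     = np.array([1, 0, 0, 1, 1, 0, 1, 0])  # actual outcomes
g     = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # group membership

for grp in (0, 1):
    mask = g == grp
    selection_rate = y_hat[mask].mean()   # demographic parity compares these
    tpr = y_hat[mask & (y == 1)].mean()   # equality of opportunity compares these
    print(f"group {grp}: selection rate {selection_rate:.2f}, TPR {tpr:.2f}")
```

Demographic parity asks for equal selection rates across groups; equality of opportunity asks for equal true positive rates. When the groups' underlying outcome rates differ, satisfying one definition generally forces a violation of the other, which is the conflict described above.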

In addition, fairness cannot be distilled into a single metric or guideline. It encompasses a spectrum of considerations including, but not limited to, equality of opportunity, treatment and impact.

Unintended effects on fairness

The multifaceted nature of fairness means that AI systems must be scrutinized at every level of their development cycle, from the initial design and data collection phases to their final deployment and ongoing evaluation. This scrutiny reveals another layer of complexity. AI systems are seldom deployed in isolation. They are used as part of often complex and important decision-making processes, such as making recommendations about hiring or allocating funds and resources, and are subject to many constraints, including security and privacy.

Research my colleagues and I conducted shows that constraints such as computational resources, hardware types and privacy can significantly influence the fairness of AI systems. For instance, the need for computational efficiency can lead to simplifications that inadvertently overlook or misrepresent marginalized groups.

In our study on network pruning – a method to make complex machine learning models smaller and faster – we found that this process can unfairly affect certain groups. This happens because the pruning might not consider how different groups are represented in the data and by the model, leading to biased outcomes.
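
The study's full setup is not reproduced here, but a minimal sketch of this kind of audit, with a placeholder model, random stand-in data and hypothetical group labels, could use PyTorch's built-in magnitude pruning and compare per-group accuracy before and after:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)

# Placeholder classifier and data; a real audit would use a trained
# model and a labelled, group-annotated evaluation set.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
X = torch.randn(1000, 20)
y = torch.randint(0, 2, (1000,))
group = torch.randint(0, 2, (1000,))  # 0 = majority, 1 = minority

def accuracy_by_group(m):
    with torch.no_grad():
        preds = m(X).argmax(dim=1)
    return {int(g): (preds[group == g] == y[group == g]).float().mean().item()
            for g in (0, 1)}

before = accuracy_by_group(model)

# Magnitude (L1) pruning: zero out 90% of the smallest weights per layer.
for layer in model:
    if isinstance(layer, nn.Linear):
        prune.l1_unstructured(layer, name="weight", amount=0.9)

after = accuracy_by_group(model)
for g in (0, 1):
    print(f"group {g}: accuracy {before[g]:.3f} -> {after[g]:.3f}")
```

The question such an audit asks is whether the accuracy lost to pruning is shared evenly or concentrated on one group; with real models and data, it often is not.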

Similarly, privacy-preserving techniques, while crucial, can obscure the data necessary to identify and mitigate biases or disproportionally affect the outcomes for minorities. For example, when statistical agencies add noise to data to protect privacy, this can lead to unfair resource allocation because the added noise affects some groups more than others. This disproportionality can also skew decision-making processes that rely on this data, such as resource allocation for public services.
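
For a rough feel of why, here is a sketch of a Laplace-noise count query with assumed parameters (the groups, sizes and privacy budget are invented for illustration, not taken from any agency's actual mechanism):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true counts for a large and a small group.
counts = {"majority": 100_000, "minority": 500}

epsilon = 1.0          # assumed privacy budget; smaller means more noise
scale = 1.0 / epsilon  # Laplace scale for a count query (sensitivity 1)

for name, true_count in counts.items():
    noisy = true_count + rng.laplace(0.0, scale, size=10_000)  # repeated draws
    rel_error = np.mean(np.abs(noisy - true_count)) / true_count
    print(f"{name}: true count {true_count}, mean relative error {rel_error:.4%}")
```

The absolute noise is identical for both groups, but as a fraction of the count it is orders of magnitude larger for the small group, which is how equal noise can translate into unequal allocations.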

These constraints do not operate in isolation but intersect in ways that compound their impact on fairness. For instance, when privacy measures exacerbate biases in data, it can further amplify existing inequalities. This makes it important to have a comprehensive understanding and approach to both privacy and fairness for AI development.

The path forward

Making AI fair is not straightforward, and there are no one-size-fits-all solutions. It requires a process of continuous learning, adaptation and collaboration. Given that bias is pervasive in society, I believe that people working in the AI field should recognise that it’s not possible to achieve perfect fairness and instead strive for continuous improvement.

This challenge requires a commitment to rigorous research, thoughtful policymaking and ethical practice. To make it work, researchers, developers and users of AI will need to ensure that considerations of fairness are woven into all aspects of the AI pipeline, from its conception through data collection and algorithm design to deployment and beyond.


Ferdinando Fioretto, Assistant Professor of Computer Science, University of Virginia. This article is republished from The Conversation under a Creative Commons license. Read the original article.


27 Comments

Bittensor.


AI has transformative power, but it needs to be deployed in an equitable and just manner

Question that may make some uncomfortable: why do things need to be "equitable and just"? And if that's the aim, I suggest we are failing miserably.

Should "equity and fainess" apply to living standards irrespective of what effort or benefit a person offers society, or should "equity and fairness" relate to the benefit provided to society?


Should "equity and fainess" apply to living standards irrespective of what effort or benefit a person offers society, or should "equity and fairness" relate to the benefit provided to society?

You're looking at it from an either/or mindset, when it is a both/and mindset. It's a really common logical fallacy I see in comments on this site. People only see A or B - not C which is the win/win outcome. In this case, if we agree as a society that people have rights, then we also agree they have responsibilities to benefit society back. They are both/and - not divorced from each other. 


Agreed Xristina.

I think it's a great article; it's going to be an evolutionary process. The cynic in me has doubts whether "equity and fairness" can really be achieved, as I think none of us can truly get over our inherent biases.

But I concede that we should at least work towards minimising inequality and unfairness.


Agreed. As per my comment below - getting more diversity into the industry is the part that really helps. From memory there's a ton of research on this. Similar to policy work, it's easy to see when something has been built by someone who hasn't "talked to the customer"! We just need to make sure our 'customer feedback' is diverse as well.  


The existence of bias, mismeasurement and unjustifiable resultant negative effects in data sets and algorithm use is neither new nor scandalous.

Many people are familiar with the historical example of orchestras moving to blind auditions and the resulting reduction in gender bias, for one. But it's easy to find many more examples.

AI can easily replicate or magnify such issues, since it is built on historical datasets in which bias and mismeasurement exist.

 

Re living standards, interesting point. We currently don't do that well. Strangely, via policy we seem to transfer a lot of wealth from the poorer to existing wealth, and to protect wealth... regardless of contribution to society. Seems unfair to many... but we often seem to resent any provision for the poor more. I'm sure that phenomenon at different points in history has been studied well.


In the case of AI having inbuilt inequity, I agree with this article. Having spent years working in IT here and also overseas, the biases are built into the AI systems because of the mindsets of the people working in them. They tend to be white middle class guys in Aotearoa NZ (similar is well-documented in Silicon Valley etc., with a lot of handwaving about "why women won't go into STEM") and so their thinking (and consequently the systems they build) reflects that.

They can't build anything else because they aren't exposed to anything else. It is reflected in things like designing forms where ethnicity only has one option, for example. As my friend said to me about these single-option ethnicity forms, "Which part of myself do you want me to disown?"

My direct experience is that IT environments have often been hostile for women and minorities, myself included, though they are supportive of - even glorify - male-specific autism/ADHD neurodiversity traits (and a strong misogynistic undercurrent). Women are in STEM - it's often that the guys there literally don't recognise that areas such as project management, BA and UX design are also part of STEM because it involves understanding how people interact, alongside understanding how the tech works. All the women I know who started out coding got out and moved into PM work. 


They tend to be white middle class guys in Aotearoa NZ

I wonder how recently you've worked in software in NZ. Looking at the current team I work with, putting aside the offshore component, out of 20 of them, only 3 fit this description. I think this is fairly typical.


Are you based in Auckland by any chance? I'm in Wgtn usually but last year it was Palmy... the demographics are different.  


Shouldn't AI reflect the reality of the world as we live it, not some fantasy utopia? The world is unfair. Outcomes are not equitable. AI should reflect truth, not what some woke programmer thinks it should be. See Gemini producing Black Nazis, Asian knights and female popes as an example. When are we going to get tired of ideology being misrepresented as facts?

Good article on the subject here on how Google has sacrificed excellence for diversity: https://www.thefp.com/p/ex-google-employees-woke-gemini-culture-broken


The question for any cultural artifact is always: whose truth and reality is being perpetuated? That of the ones in power only?


There is one truth and one reality. Not multiple.
The perception of the truth and reality may differ, but that is not what is being discussed here.
As to the question of power and control, the above post by K.W. (re Google Gemini fiasco) points out the lunacy that ensues when people who think there are multiple truths and realities come to power. 

 


"There is one truth and one reality. Not multiple"

I disagree with this view. I'm quite certain that what I see as truth, you may see differently and vice versa.


The idea that post-modern (aka woke) ideology should be instilled into AI is one of the most insidious and dangerous concepts of our time. There are no disadvantaged groups! Shackling the most powerful intellectual tool that mankind has ever created with any orthodoxy could have disastrous effects.


The point of the article is that it is already shackled by an orthodoxy - one that is invisible to the people inside it. 

There are no disadvantaged groups! 

Multiple universities' worth of research disagrees with you, in NZ alone. E.g. this Productivity Commission report suggested that nearly 700,000 people in NZ experienced 'persistent disadvantage' in at least one domain, according to the 2013 and 2018 census surveys. https://www.productivity.govt.nz/publications/final-report-a-fair-chanc…


People can experience disadvantage, but it's false logic to attribute that to group identity. The idea that "disadvantaged groups" could even exist is antithetical to the principles adopted during the Enlightenment. We are all individuals, not groups.


So when women were not permitted jobs or education, it was all discrimination against them as individuals, not based on their having the characteristic of being women?


First wave feminism corrected that mistake a long time ago.  Modern feminism is patronising and retrograde. 


We should not instill new bias into AI, but we should be aware of existing bias in data and AI to enable us to try to counter it. That's not "woke" or communism etc., just being aware of history.

Ultimately, we also need AI to enable moral improvement rather than values lock-in.


It's already built into ChatGPT. Try some awkward early NZ history and you will see.


Most AI is already full of lies, it's painful talking to a chatbot because it always tries to gaslight you and talks to you like a child, repeating bs over and over with no valid evidence.

There are many great books by Thomas Sowell that describe exactly how the mental illness of woke ideology works and how it is based on lies.

Most people today don't read books, nothing by famous philosophers, and that lack of education really allows wokeness to take over their minds.

Nietzsche describes how many personality traits the mainstream talks about as being great are completely toxic. Academy of Ideas has great summaries on YouTube.

Why is no AI trained to read famous literature? Because today's lies are not compatible with truths studied over thousands of years.


What We Owe the Future is a good book on AI, among other issues. Excellent, thought-provoking book overall.

GPTs include massive amounts of famous literature in their datasets, FYI.


How can AI equally improve the lives of hundreds of millions of people living in poverty in Africa, Asia and South America, who have no jobs and no internet connection, compared to a working-class person in a first- or second-world country?


You actually ask that question? Answer courtesy of AI:

Improving the lives of hundreds of millions of people living in poverty in regions with limited access to jobs and the internet presents a unique set of challenges, but there are several ways AI can still make a significant impact:

  1. Agricultural Optimization: Many people in poverty-stricken regions rely on agriculture for their livelihoods. AI can be used to optimize farming practices, improve crop yields, and mitigate risks associated with weather and pests. This can lead to increased food security and income for farmers.

  2. Healthcare: AI-powered diagnostic tools can help bridge the gap in access to healthcare services in remote areas. Mobile applications and low-cost devices can provide diagnostic support, telemedicine consultations, and health education to underserved communities.

  3. Education: AI can support education initiatives by providing personalized learning experiences through mobile applications and offline resources. These tools can adapt to individual learning styles and provide access to educational content even in areas with limited connectivity.

  4. Microfinance and Banking: AI algorithms can assess creditworthiness and provide financial services to individuals who lack traditional banking access. Mobile banking and microfinance platforms can enable people to access loans, savings accounts, and insurance products, empowering them to start businesses and manage financial risks.

  5. Infrastructure Development: AI can be used to optimize infrastructure planning and development, including transportation, energy, and water management. Predictive analytics can help governments and organizations make informed decisions about resource allocation and infrastructure investments to benefit underserved communities.

  6. Natural Disaster Management: AI-powered early warning systems can help communities prepare for and respond to natural disasters such as floods, hurricanes, and droughts. By providing timely alerts and coordinating emergency responses, AI technology can save lives and mitigate the impact of disasters on vulnerable populations.

  7. Skills Training and Employment Opportunities: AI can support skills training programs tailored to the needs of local economies. Vocational training platforms can teach marketable skills such as digital literacy, craftsmanship, and entrepreneurship, helping individuals secure employment or start their own businesses.

  8. Environmental Conservation: AI can assist in monitoring and protecting natural resources, such as forests, waterways, and wildlife habitats. Remote sensing technologies and data analytics can help identify environmental threats, enforce conservation regulations, and promote sustainable land use practices.

  9. Community Empowerment and Governance: AI can facilitate community engagement and participatory decision-making processes. Digital platforms and chatbots can provide information about rights, resources, and government services, empowering citizens to advocate for their needs and hold authorities accountable.

In summary, while the challenges faced by people living in poverty in regions with limited infrastructure and connectivity are significant, AI has the potential to address many of these challenges and improve their quality of life in various ways. By focusing on solutions that are accessible, scalable, and contextually relevant, AI can help create more inclusive and equitable societies across the globe.


Nice!


Think so?

"can help bridge the gap", "can be used to optimize infrastructure planning and development", "can help governments and organizations make informed decisions", "support skills training programs tailored to the needs of local economies", "can assist in monitoring and protecting natural resources", "can facilitate community engagement and participatory decision-making processes" 

... really? 

Might sound good to some, but really is just meaningless word salad.

This one "AI algorithms can assess creditworthiness and provide financial services to individuals who lack traditional banking access" is not new to AI. The finance sharks have been selling debt to people for the ages.


Let's be realistic: whoever develops the best AI will get to be in charge and become unspeakably rich and powerful. The cost will be borne by those displaced from their jobs because they no longer have useful skills wanted by society.

AI is a huge magnifier of inequality. To think otherwise is naive and wishful woke thinking.
