Ruled by robots: People prefer AI to make decisions

July 21, 2024

A new study has revealed that people prefer Artificial Intelligence (AI) over humans when it comes to redistributive decisions.

As technology continues to integrate into various aspects of public and private decision-making, understanding public perception and satisfaction and ensuring the transparency and accountability of algorithms will be key to their acceptance and effectiveness.

The study looked into public attitudes towards algorithmic versus human decision-making and examined the impact of potential discrimination on these preferences.

An online decision experiment was used to study the preference for human or AI decision makers, where the earnings of two people could be redistributed between them after a series of tasks were performed. Over 200 participants from the UK and Germany were asked to vote on whether they wanted a human or an algorithm (AI) to make the decision that would determine how much money they earned.

Contrary to previous findings, over 60 per cent of participants chose AI over a human to decide how the earnings were redistributed. Participants favoured the algorithm irrespective of potential discrimination. This preference challenges the conventional notion that human decision-makers are preferred for decisions involving a 'moral' component such as fairness.

However, despite this preference for algorithms, when rating the decisions that were actually made, participants were less satisfied with the AI's decision and found it less 'fair' than the one made by humans.

Subjective ratings of the decisions were driven mainly by participants' own material interests and fairness ideals. Participants tolerated reasonable deviations between the actual decision and their ideals, but reacted strongly and negatively to redistribution decisions that were inconsistent with any established fairness principle.

What the researchers say: “Our research suggests that while people are open to the idea of algorithmic decision-makers, especially because of their potential for unbiased decisions, actual performance and the ability to explain how they decide play a crucial role in acceptance. Especially in moral decision-making contexts, the transparency and accountability of algorithms are vital,” the lead author told us.

“Many companies are already using AI for hiring decisions and compensation planning, and public bodies are employing AI in policing and parole strategies. Our findings suggest that, with improvements in algorithm consistency, the public may increasingly support algorithmic decision makers even in morally significant areas.

“If the right AI approach is taken, this could actually improve the acceptance of policies and managerial choices such as pay rises or bonus payments.”

So, what? Decisions by AI may be accepted by many people because they are seen as “unbiased,” but, as Alicia and I pointed out in a talk we gave for the Australian Computer Society earlier this year, AI can be just as biased as humans in decision-making.

Dr Bob Murray

Bob Murray, MBA, PhD (Clinical Psychology), is an internationally recognised expert in strategy, leadership, influencing, human motivation and behavioural change.
