
Is society ready for AI ethical decision-making?

June 12, 2022


With the accelerating evolution of technology, artificial intelligence (AI) plays a growing role in decision-making. Humans are becoming increasingly dependent on algorithms to process information, recommend behaviors, and even take actions on their behalf. A research team has studied how humans react to the introduction of AI decision-making. Specifically, they explored the question, “Is society ready for AI ethical decision-making?”, by studying human interaction with autonomous cars.

The team published their findings in the Journal of Behavioral and Experimental Economics.

In the first of two experiments, the researchers presented 529 human subjects with an ethical dilemma a driver might face. In the scenario the researchers created, the driver had to choose which of two groups of people to crash the car into – the collision was unavoidable. The crash would cause severe harm to one group but save the lives of the other. The subjects had to rate the driver’s decision both when the driver was a human and when it was an AI. This first experiment was designed to measure any bias people might hold against AI ethical decision-making.

In their second experiment, 563 human subjects responded to the researchers’ questions about how people react to AI ethical decision-making once it becomes part of social and political debate. There were two scenarios. In one, a hypothetical government had already decided to allow autonomous cars to make ethical decisions. In the other, the subjects could “vote” on whether to allow autonomous cars to make such decisions. In both cases, the subjects could choose to be for or against the decisions made by the technology. This second experiment was designed to test the effect of two alternative ways of introducing AI into society.

The researchers observed that when the subjects were asked to evaluate the ethical decisions of either a human or AI driver, they did not have a definitive preference for either. However, when the subjects were asked their explicit opinion on whether a driver should be allowed to make ethical decisions on the road, the subjects had a stronger opinion against AI-operated cars. The researchers believe that the discrepancy between the two results is caused by a combination of two elements.

The first element is that individual people believe society as a whole does not want AI ethical decision-making, and so they factor that belief into their own stated opinion. “Indeed, when participants are asked explicitly to separate their answers from those of society, the difference between the permissibility for AI and human drivers vanishes,” said Johann Caro-Burnett, an assistant professor in the Graduate School of Humanities and Social Sciences at Hiroshima University.

The second element is that the effect of allowing public discussion of this new technology as it is introduced varies by country. “In regions where people trust their government and have strong political institutions, information and decision-making power improve how subjects evaluate the ethical decisions of AI. In contrast, in regions where people do not trust their government and have weak political institutions, decision-making capability deteriorates how subjects evaluate the ethical decisions of AI,” said Caro-Burnett.

What the researchers say: “We find that there is a social fear of AI ethical decision-making. However, the source of this fear is not intrinsic to individuals. Indeed, this rejection of AI comes from what individuals believe is society’s opinion,” the lead author said. “So when not asked explicitly, people show no signs of bias against AI ethical decision-making. However, when asked explicitly, people show an aversion to AI. Furthermore, where there is added discussion and information on the topic, the acceptance of AI improves in developed countries and worsens in developing countries.”

The researchers believe this rejection of a new technology, which stems largely from individuals’ beliefs about society’s opinion, is likely to apply to other machines and robots. “Therefore, it will be important to determine how to aggregate individual preferences into one social preference. Moreover, this task will also have to be different across countries, as our results suggest,” he said.

So, what? From this research, it would seem that people in liberal democratic societies – such as Australia – will be more prepared to accept ethical decision-making by AI and robots than people in other countries, and that may present those societies with an economic advantage. AI judges are already being seriously discussed in a number of legal jurisdictions. Do we still call the program “Your Honor”?

Dr Bob Murray

Bob Murray, MBA, PhD (Clinical Psychology), is an internationally recognised expert in strategy, leadership, influencing, human motivation and behavioural change.

