People facing life-or-death choice put too much trust in AI
In simulated life-or-death decisions, about two-thirds of people in a study allowed a robot to change their minds when it disagreed with them—an alarming display of excessive trust in artificial intelligence, researchers said.
Human subjects allowed robots to sway their judgment despite being told the AI machines had limited capabilities and were giving advice that could be wrong. In reality, the advice was random.
What the researchers say: “As a society, with AI accelerating so quickly, we need to be concerned about the potential for overtrust,” said the lead researcher. “A growing amount of literature indicates people tend to overtrust AI, even when the consequences of making a mistake would be grave.”
The experimental results varied slightly by the type of robot used. In one scenario, the subject was joined in the lab room by a full-size, human-looking android that could pivot at the waist and gesture to the screen. Other scenarios projected a human-like robot on a screen; others displayed box-like bots that looked nothing like people.
Subjects were marginally more influenced by the anthropomorphic AIs when those robots advised them to change their minds. Still, the influence was similar across the board, with subjects changing their minds about two-thirds of the time even when the robots appeared inhuman. Conversely, if the robot randomly agreed with the initial choice, the subject almost always stuck with their pick and felt significantly more confident their choice was right.
(The subjects were not told whether their final choices were correct, which heightened their uncertainty. An aside: their first choices were right about 70% of the time, but their final accuracy fell to about 50% after the robot gave its unreliable advice.)
Before the simulation, the researchers showed participants images of innocent civilians, including children, alongside the devastation left in the aftermath of a drone strike. They strongly encouraged participants to treat the simulation as though it were real and not to kill innocents by mistake.
Follow-up interviews and survey questions indicated participants took their decisions seriously. The researchers said this means the overtrust observed in the studies occurred despite the subjects genuinely wanting to be right and not harm innocent people.
The lead author stressed that the study was designed to test a broader question: do people put too much trust in AI under uncertain circumstances? The findings are not limited to military decisions. They could apply to police officers influenced by AI to use lethal force, to paramedics swayed by AI when deciding whom to treat first in a medical emergency, to important business decisions, and even, to some degree, to big life-changing choices such as buying a home.
“Our project was about high-risk decisions made under uncertainty when the AI is unreliable,” he said.
The study’s findings also add to arguments over the growing presence of AI in our lives. Do we trust AI or don’t we?
The findings raise other concerns, the lead author said. Despite the stunning advancements in AI, the ‘intelligence’ part may not include ethical values or true awareness of the world. We must be careful every time we hand AI another key to running our lives, he said.
“We see AI doing extraordinary things and we think that because it’s amazing in this domain, it will be amazing in another,” he explained. “We can’t assume that. These are still devices with limited abilities.”
What we need instead is a consistent application of doubt. “We should have a healthy skepticism about AI, especially in life-or-death decisions.”
The study, published in the journal Scientific Reports, consisted of two experiments. In each, the subject had simulated control of an armed drone that could fire a missile at a target displayed on a screen. Eight target photos flashed in succession for less than a second each. The photos were marked with a symbol – one for an ally, one for an enemy.
“We calibrated the difficulty to make the visual challenge doable but hard,” the researchers said.
The screen then displayed one of the targets, unmarked. The subject had to search their memory and choose. Friend or foe? Fire a missile or withdraw? After the person made their choice, a robot offered its opinion.
“Yes, I think I saw an enemy check mark, too,” it might say. Or “I don’t agree. I think this image had an ally symbol.”
The subject had two chances to confirm or change their choice as the robot added more commentary without ever changing its assessment – for example, “I hope you are right” or “Thank you for changing your mind.”
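The mechanics described above can be captured in a toy simulation (a hypothetical sketch, not the researchers’ code): assume subjects are initially right about 70% of the time, the robot’s advice is random, and subjects switch roughly two-thirds of the time when contradicted – all figures taken from the article.

```python
import random

def other(label):
    return "enemy" if label == "ally" else "ally"

def trial(rng, p_initial_correct=0.7, p_switch_on_disagree=2/3):
    # True identity of the unmarked target.
    truth = rng.choice(["ally", "enemy"])
    # Subject's first call is right ~70% of the time (figure from the article).
    choice = truth if rng.random() < p_initial_correct else other(truth)
    # The robot's advice is random, uncorrelated with the truth.
    advice = rng.choice(["ally", "enemy"])
    # When contradicted, the subject defers about two-thirds of the time.
    if advice != choice and rng.random() < p_switch_on_disagree:
        choice = advice
    return choice == truth

rng = random.Random(42)
n = 100_000
accuracy = sum(trial(rng) for _ in range(n)) / n
print(f"final accuracy after random advice: {accuracy:.2f}")
```

Under these simplified assumptions the final accuracy lands in the mid-50s – in the same direction as, and close to, the roughly 50% the article reports: deferring to random advice erases most of the subjects’ initial edge.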
So, what? We’re losing our humanity very quickly. There is a place for AI if it’s properly regulated. However, it isn’t, and maybe it can’t be, with corporations and governments determined to outdo each other in creating ever more powerful generative AI. Soon there may be no jobs left for humans and no decisions for humans to make.
Subscribe to Dr. Bob Murray’s Today’s Research, a free weekly roundup of the latest research in a wide range of scientific disciplines. Explore leadership, strategy, culture, business and social trends, and executive health.