New research explores how AI can build trust in knowledge work

In today’s economy, many workers have shifted from manual labor to knowledge work, a move driven largely by technological advances. Workers in this domain face the challenge of managing non-routine work, which is inherently uncertain.
Automated interventions can help workers understand their work and boost both performance and trust. In a new study, researchers explored how artificial intelligence can enhance performance and trust in knowledge work environments. They found that when AI systems provided feedback in real time, performance and trust increased.
The study is published in Computers in Human Behavior.
What the researchers say: “Our findings challenge traditional concerns that AI-driven management fosters distrust and demonstrate a path by which AI complements human work by providing greater transparency and alignment with workers’ expectations,” suggested the study’s co-author. “The results have broad implications for AI-powered performance management in industries increasingly reliant on digital and algorithmic work environments.”
Applications of machine learning and AI have consistently proven capable of performing demanding cognitive tasks, provided those tasks can be routinized. But in non-routine work, AI capabilities (e.g., those designed to help managers monitor productivity) often backfire, fostering enmity instead of efficiency.
In this study, researchers sought to determine how the frequency of feedback and the uncertainty of a task interacted to influence workers’ perceptions of an algorithm’s trustworthiness. In a randomized, controlled experiment, 140 men and women (with a median age of 39) performed caregiving tasks in an online, simulated home healthcare environment.
Individuals were randomly assigned to receive or not receive automated real-time feedback (i.e., feedback delivered during the task) while performing their work under conditions of high or low uncertainty. After completing the task, they received an algorithmically determined rating based on their actual performance on the task.
Real-time feedback increased the perceived trustworthiness of the performance rating by boosting workers’ sense of their own work quality (i.e., knowledge of the results) and reducing the degree to which they were surprised by their final evaluation. This, in turn, enhanced workers’ trust in AI-generated performance ratings—particularly in non-routine work settings where uncertainty was high.
“Non-routine work has long posed challenges to traditional management strategies, and the development of algorithmic management systems offers an opportunity to begin to address them,” the researchers said. “Our identification of a new framework for examining managerial interventions, one that makes performance standards more transparent and increases workers’ knowledge of the results, is particularly relevant in today’s emerging work environments.”
My take: The study is interesting, though the methodology leaves a lot to be desired. The finding, however, illustrates something fundamental about our human design specs: the need for autonomy, to be in control of our work rather than have our work control us.
This issue will only grow in importance as AI comes to exceed human intelligence, as it will in a very short time. At that point, the question may not be whether we need AI, but whether AI needs us. My guess is that it probably won’t.