Nov 24, 2025
Study: Auditors trust adaptive AI more when uncertainty rises
- Adaptive beats static under uncertainty: When tasks are hard to predict, auditors rely more on learning algorithms.
- Framing matters: How a firm describes AI adaptability influences user trust and reliance.
- Action for firms: Be transparent about adaptability, uncertainty, and oversight; align with audit governance best practices.

Audit and accounting firms use artificial intelligence (AI) every day in their interactions with clients. It can help business professionals and their clients make smart decisions about their businesses – but because the technology is so new, AI can also be a fraught topic. Auditors vary in their comfort with AI-driven estimates – especially when the methods are opaque. A key question is how much, and when, business professionals trust information produced by AI.
Gies College of Business Professor Jenny Ulla and her coauthor Benjamin Commerford (University of Kentucky) recently published a study on this topic. Their paper, “Reliance on Algorithmic Estimates: The Joint Influence of Algorithm Adaptability and Estimation Uncertainty,” was published in The Accounting Review and examines the relationship between the use of AI in algorithms and auditors’ trust in the results.
A key capability of AI is adaptability – learning from new information and updating estimates in real time. Adaptive algorithms continuously refine their output as the information available to them changes.
“It is unclear whether accounting professionals view algorithm adaptability favorably and, therefore, prefer advice from learning algorithms over static algorithms, or vice versa,” said Ulla, an assistant professor of accountancy at Gies Business.
And that is what they set out to determine.
Adaptive vs. static algorithms in audit estimates
In their research, Ulla and Commerford examined how algorithm adaptability, an emerging technological feature, interacts with estimation uncertainty, a prominent environmental factor in accounting. Uncertainty is pervasive in accounting, and accountants regularly grapple with it when developing complex estimates and forecasts. Adaptability, by contrast, is a new feature made possible by advances in AI.
The researchers set up two experiments with two different sets of participants. In Experiment 1, experienced auditors evaluated the valuation of a patent, an intangible asset. In Experiment 2, business majors (serving as proxies for corporate managers) estimated a patent’s value.
“I have participants who are processing as accounting managers, and then I also have actual auditors,” Ulla explained. “And so, their incentives are slightly different. And so that's why I really wanted to look at both sets of participants.”
Participants saw either a learning (adaptive) algorithm or a static one; the level of estimation uncertainty also varied.
This project received a Center for Audit Quality (CAQ) grant, and so auditor recruitment efforts were conducted through that organization for the first experiment. For the second experiment, participants were recruited online and were carefully screened to be sure that they had an undergraduate degree in business, and they had to report how many business courses they had taken.
The essential research question was: to what extent will accountants and auditors trust advice from an adaptive (learning) algorithm versus advice from a static algorithm? And does the level of uncertainty affect this trust?
Ulla was responding to the growth of AI in recent years and the changes that growth was bringing to the accounting and auditing profession. Do people have faith in the AI-derived algorithms, and does the presence of uncertainty have an impact on that trust in the algorithms?
“What I was curious about was what features of AI influence the degree to which people rely on the AI system.”
When do auditors trust AI estimates?
“Accountants and auditors were more likely to trust advice from adaptive algorithms – especially in uncertain settings when a situation may be hard to predict,” said Ulla. “I wanted to show that what I am predicting generalizes – or occurs – across both sets of participants and settings.”
They found that as the uncertainty in each experimental scenario increased, participants became increasingly willing to place trust in learning algorithms over static algorithms. When the uncertainty was lower, however, the difference in trust between the learning and static algorithms was much less pronounced.
For a profession built on verification and documentation, it may come as a surprise that accountants are more willing to rely on learning algorithms, even though these systems function more like a black box than their static counterparts. For accounting and auditing firms that want to incorporate more AI-based algorithms and decision-making into their processes, Ulla believes the issue of trust is key. Firms should work proactively to ensure that auditors and accountants trust the algorithms and have confidence that they will perform as expected in new situations.
“The manner in which you, as the firm, describe or highlight your system’s capabilities can significantly influence how users perceive and rely on these AI systems,” she said. “What we found is that they don't treat the learning and static algorithm equally. There are differences, and so how you describe features of the AI system does make a difference.”
Ulla said this project is part of an ongoing stream of research projects.
“We're just trying to figure out how we can help firms and auditors integrate AI into their everyday processes,” she said. “These businesses are pouring billions of dollars into these AI systems, but how will the technology actually help accountants and auditors and people using the financial statements?”
-------------------------------------------
Jenny Ulla’s paper, “Reliance on Algorithmic Estimates: The Joint Influence of Algorithm Adaptability and Estimation Uncertainty” is available online.
DOI: 10.2308/TAR-2024-0109