Feb 26, 2026 Accountancy Faculty Research

Study: How self-affirmation helps auditors use AI-supported advice

By Tom Moone

Auditing firms are increasingly integrating artificial intelligence (AI) into their business operations. As AI-generated information becomes more widespread, it is critical that auditors be comfortable working with AI-generated input and with colleagues who use these technologies.

Companies are investing large sums in AI to improve performance and auditors’ judgments and decision-making. Some experienced auditors who lack familiarity with AI may be less inclined to accept AI-generated information, potentially putting them at odds with their firm’s AI implementation goals.

To examine how well professional auditors incorporate AI-supported advice in their decision-making, Gies Business faculty Mark Peecher and Sebastian Stirnkorb, Gies Business PhD student Isaac Yamoah, and coauthor Christian Pietsch (Erasmus University Rotterdam) studied how experienced audit professionals react when AI is introduced into their workflow. They theorized that auditors who believe AI is one of their weaker areas of competence would feel disaffirmed as a result of their firm’s new, heavy emphasis on AI know-how (i.e., upskilling).

To cope with this disaffirmation, or threat to the adequacy of their AI skills, auditors may defensively discount high-quality advice based on AI-generated content. The researchers examined whether self-affirmation can help auditors overcome this unhelpful coping mechanism, enabling them to integrate AI-supported advice more objectively into their decision-making. Their results are reported in their article, “Coping with Changing Skill Requirements: Does Disaffirmation Versus Affirmation Affect Auditors’ Reliance on AI-Supported Advice from Specialists?” which is published in Contemporary Accounting Research.

How the self-affirmation experiments worked

The researchers conducted two experiments to test their hypotheses and to examine how auditors rely on AI-generated information and their confidence in it.

The first experiment examined whether a brief self-affirmation activity would increase auditors’ willingness to welcome AI-generated information. Some auditors wrote about their strongest professional skills (self-affirmation), while others wrote about their weakest professional skills (disaffirmation). Participating auditors then worked through an audit case in which some information was explicitly described as more heavily based on an AI system. The affirmation or disaffirmation intervention was presented right before the case task; as far as the participants knew, the two tasks were completely separate and were simply given one after the other.

The second experiment mirrored the first, but added a third possible task for the auditors. Some auditors wrote about what they observed on their commute to work. This neutral task constitutes a control condition, enabling the researchers to test how self-affirmation findings compare to findings when auditors’ task-relevant knowledge is neither explicitly disaffirmed nor affirmed.

One difficulty of the experiments was finding a ready group of professional auditors to participate.

“The data collections were a major effort,” Stirnkorb said. “It is difficult to get access to auditors.”

Fortunately, the team was able to partner with several major audit firms in the Netherlands to recruit highly experienced professional auditor participants. The researchers visited several audit firms and a variety of in-person technical and summer school training events to obtain sufficient participants for the study.

The researchers collected responses using pen and paper in a controlled, in-person setting, a gold standard for this type of behavioral research because it minimizes distractions.

“We have them doing the study on a piece of paper. They’re concentrating, more focused. That’s the only thing they can work at one time,” Yamoah said.

The results were clear. Auditors who engaged in the disaffirming activity were more likely to severely discount high-quality AI-supported advice, while auditors who engaged in the self-affirmation activity were less likely to discount it. In other words, auditors’ writing about their own (non-AI) skills markedly increased their reliance on AI tools.

The strength of the results showed the power of the intervention: “The results were quite robust,” Stirnkorb said. “The effects came out really strong, and in the additional tests that we conducted, they were highly consistent.”

Their findings suggest that auditors may be predisposed to a disaffirmed state in today’s AI-rich environment; for many in the profession, it may even be the default.

“Auditors work in an environment where they are always looking for problems, and they're always looking for something that is wrong,” Stirnkorb said. “They are taught to be skeptical about everything they do.”

This skeptical approach to their profession can spill over into auditors’ views of their own professional work, where negative feelings of disaffirmation can persist.

What can audit firms do?

In their paper, the researchers describe steps firms could take to bring self-affirmation into the workplace. One approach concerns how the topic is presented: they suggest moving away from the term “upskilling” (which can imply a lack of current professional expertise) when referring to AI training, in favor of the more neutral term “skill building.” This change in language could support self-affirmation among auditors.

“I hope that [firms] could take our study and say, ‘How can we make this fit with our current practices in the audit firm, and explore various ways for implementing our novel self-affirmation intervention?'” said Yamoah.

The rapid pace of change in the AI field may also have an impact, limiting how strong and absolute experts’ recommendations can be.

“We didn't want to be prescriptive in terms of what audit firms should do, and then a year or two later, it becomes less relevant,” Yamoah said. “So we provide a framework that might be helpful, and then it's up to them [audit firms] to implement it.”

“A preliminary step, however, is for firms to recognize that disaffirmation in many of their auditors who are not themselves highly AI savvy will likely make them defensive and, thus, unreceptive to relying on AI-generated information,” Peecher said.