
Aug 17, 2020 | Business Administration | Faculty Research in Education

Does artificial intelligence bias human decision making?

As artificial intelligence becomes more prevalent in society, it’s becoming increasingly important to consider how humans and AI can work together to make the smartest decisions. We’re starting to learn more about how human bias can creep into AI algorithms, but so far we don’t know much about how bias works the other way. Can AI bots bias human decision making? According to new research from three Gies College of Business scholars, the answer is “yes.”

In their paper “Impact of Artificial Intelligence on Human Decision Making on ICO Platforms,” Gies Business professors Aravinda Garimella and Wencui Han, along with PhD student Saunak Basu and professor Alan Dennis from Indiana University, examined a unique context in which AI bots and humans perform the same task sequentially. They found that human experts were consistently influenced by the bot, regardless of whether the bot’s evaluation was accurate.

“It was very interesting to see what humans are good at and what they are bad at,” said Garimella, assistant professor of business administration. “We find that humans are good at identifying things that are overrated and calling them out. On the other hand, humans struggled to champion something good that is underrated.”

The researchers conducted the study on a leading initial coin offering (ICO) evaluation platform. An AI bot first predicted whether each ICO would succeed or fail; human experts then performed the same task and provided their own ratings. When the bot’s assessment was accurate, human judgment aligned with it. However, even when the bot was inaccurate, the experts’ evaluations still generally aligned with the bot’s.
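One way to picture this sequential setup is with a small simulation. The sketch below is purely illustrative: the accuracy figures, the `anchor_weight` parameter, and both functions are hypothetical assumptions chosen for demonstration, not quantities estimated in the paper.

```python
import random

random.seed(0)

def bot_rating(true_quality: bool, accuracy: float = 0.75) -> bool:
    """Hypothetical bot: returns the correct call with probability `accuracy`."""
    return true_quality if random.random() < accuracy else not true_quality

def human_rating(true_quality: bool, bot_call: bool,
                 own_accuracy: float = 0.70, anchor_weight: float = 0.6) -> bool:
    """Hypothetical expert who sees the bot's call first: with probability
    `anchor_weight` they simply echo the bot; otherwise they judge independently."""
    if random.random() < anchor_weight:
        return bot_call  # anchored on the bot's evaluation
    return true_quality if random.random() < own_accuracy else not true_quality

# Simulate 1,000 ICOs evaluated first by the bot, then by a human expert.
bot_wrong = 0
agree_when_bot_wrong = 0
for _ in range(1000):
    quality = random.random() < 0.5  # True = the ICO will succeed
    bot = bot_rating(quality)
    human = human_rating(quality, bot)
    if bot != quality:
        bot_wrong += 1
        if human == bot:
            agree_when_bot_wrong += 1

print(f"Expert echoed an inaccurate bot in "
      f"{agree_when_bot_wrong / bot_wrong:.0%} of those cases")
```

Under these invented parameters, the simulated expert echoes an inaccurate bot most of the time, which is the qualitative pattern the study reports.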

“AI bots use quantitative, tangible, and observable criteria. Humans are supposed to use intangible, qualitative criteria along with their intuition. If that’s true, they should be able to recognize the diamond in the rough, but they’re not able to do that consistently,” said Garimella. “This contrast is very interesting and useful for us to know as human-AI hybrids are becoming more popular. It will be important for us to learn when we can rely on humans as a complementary intelligence to machines and when we cannot.”

The study found that, in general, the bots were slightly more accurate than the human experts. The researchers separated the results into two types of errors. Type I errors occur when the bot misclassifies a low-quality project as “good”; in those cases, humans were more cautious and more likely to make independent judgments. Type II errors occur when the bot mistakenly rates a high-quality project as “bad”; there, humans overwhelmingly aligned with the bot’s evaluation. Essentially, humans are much better at identifying lemons than they are at identifying diamonds in the rough.
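To make the two error types concrete, here is a minimal sketch that tallies them over a handful of made-up projects; the sample data and variable names are hypothetical, not drawn from the study.

```python
# Illustrative only (hypothetical data, not the study's dataset):
# Type I error  = low-quality project rated "good" by the bot (false positive)
# Type II error = high-quality project rated "bad" by the bot (false negative)

projects = [
    # (true_quality, bot_rating)
    ("high", "good"),
    ("low",  "good"),   # Type I error: a lemon slips through
    ("high", "bad"),    # Type II error: a diamond in the rough is missed
    ("low",  "bad"),
]

type_1 = sum(1 for quality, rating in projects
             if quality == "low" and rating == "good")
type_2 = sum(1 for quality, rating in projects
             if quality == "high" and rating == "bad")

print(f"Type I errors (false positives): {type_1}")
print(f"Type II errors (false negatives): {type_2}")
```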

These findings could be especially applicable to the healthcare industry, where life-altering medical diagnoses can be a product of both machine analysis and human experience/intuition. For example, Google recently announced it has developed an AI system that the company believes can detect breast cancer more accurately than doctors.

“Healthcare is a major sector where uncertainty and information asymmetry exist, and understanding the role of AI in facilitating decision making is important,” said co-author Wencui Han, assistant professor of business administration at Gies. “For example, is it more important that we make sure to identify diseases, or perhaps we also need to reduce the unnecessary tests and costs due to false positives? How will and how should physicians react to the recommendations of AI? What is the best use of AI in healthcare? These are all very important questions that need to be answered.”

“We’re beginning to see AI robots being used for everything from job recruiting to medical diagnostics,” added Garimella. “Generally the argument is that we can trust the bot to be unbiased when evaluating job candidates or even ‘recommending’ medical diagnoses, but there is an important human component here too. It’s not easy for a doctor to reject or override what he/she knows is coming from an algorithm that has crunched potentially millions of rows of patient data before coming up with a recommendation.”