
Nov 17, 2025 | Business Administration | Faculty Research in Education

Platforms launch chatbots to help users ‘socialize’, but do they work?

Gies Business study provides some answers about the effectiveness of social bots.

In 2024, X (formerly Twitter) made headlines by using a large language model (LLM) to launch the chatbot Grok. Its top goal was to use the social bot to drive engagement. Before Grok, the Chinese microblogging platform Weibo launched its own social bot, CommentRobot, with similar intentions. Using Weibo as its subject, a study led by Gies College of Business professor Yang Gao provides valuable insight into how users behave when they interact with both the social bot and their peers, offering guidance to platforms on how to refine social bots.

Gao notes that before CommentRobot and Grok, bots were mainly used for functional tasks on social media, such as content moderation, not for socialization.

“Social media platforms were designed for people to talk to each other, but we wondered, when people talk to social bots, how their peers react to such human-bot interactions,” said Gao, an assistant professor of business administration.

The social bot has its own account on the platform, and users can engage with it the same way they engage with any other account holder, by mentioning it with the @ sign. Its responses are generated by an LLM. According to Gao, the bot's response rate is around 40 percent, because it may be busy responding to other users.

In addition to providing insights for users, Gao and his colleagues made recommendations to platforms about whom the bot should target for interactions. They published their study, “Does Social Bot Help Socialize? Evidence from a Microblogging Platform,” in Information Systems Research.

Gao and his team built a system to collect publicly available data from Weibo, then measured outcomes based on the number of likes and comments each post received. They used fixed-effects models and instrumental variable analysis, and supplemented those with an online experiment. In the experiment, they randomly assigned posts to three groups: posts that received no comment, posts that received a human comment, and posts that received a bot comment.
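
The article does not reproduce the study's actual specification, but as a rough illustration of what a poster fixed-effects regression of engagement on bot comments could look like, here is a minimal sketch in Python. The data are simulated and all variable names (user_id, bot_comment, followers, engagement) are hypothetical stand-ins; the study's real variables, controls, and instrumental-variable step are not shown.

```python
# Minimal sketch (not the authors' code): regress post engagement on whether
# the post received a bot comment, with poster fixed effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "user_id": rng.integers(0, 100, n),      # poster identity (fixed effect)
    "bot_comment": rng.integers(0, 2, n),    # 1 if the social bot commented
    "followers": rng.poisson(200, n),        # example control variable
})
# Simulated outcome: peer engagement (e.g., likes plus comments) on the post
df["engagement"] = (
    5 + 2 * df["bot_comment"] + 0.01 * df["followers"] + rng.normal(0, 1, n)
)

# OLS with poster fixed effects via dummies; standard errors clustered by poster
model = smf.ols("engagement ~ bot_comment + followers + C(user_id)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["user_id"]})
print(result.params["bot_comment"])  # estimated effect of receiving a bot comment
```

A positive, significant coefficient on bot_comment in a design like this is the kind of evidence the study uses to argue that bot comments increase peer engagement, with the instrumental variable analysis and the randomized experiment addressing the concern that bots simply pick posts that would have drawn engagement anyway.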

The highlights of their findings included:

  • When posters receive a comment from the social bot, they are more likely to receive engagement from peers.

    “We tried to figure out why people appreciate the bot comment and engage further with the post,” Gao said. “We found several mechanisms. One is that the bot comment is relevant most of the time. The bot can contribute some subjective opinions and even provide some emotional support. All these textual characteristics can explain the positive effect of the bot comment on the post-level engagement.”

  • There was no significant difference in further interaction between posts that received human comments and posts that received bot comments, but both drew more engagement than posts with no initial comment. “That means the social bot does help in socialization,” Gao said.
  • Users who receive bot comments are more likely to engage further with the social bot, but this did not increase their overall posting activity.

  • Platforms would be more effective if they targeted active users. “We think the platform’s strategy is not optimal,” Gao concluded. “They are still targeting those users who are less active, hoping they’ll be more active. But our findings suggest that it is not that efficient. You should put more bot comments under the posts of more active users because they will become even more active. Their peers will become more active and engage more with more active users. In that sense, you will optimize the whole engagement level at that platform.”
  • Platforms need to regulate the social bots, and governments may need to as well.

“Even as late as three years ago, computer algorithms were not that smart,” Gao said. “However, today with LLMs, AI can generate much better responses. It can provide some attractive and interesting responses and emotional support. In the meantime, if the LLM hallucinates or produces misinformation and learns some bias from your training data, then the LLM will become a bad social actor. Therefore, platforms need to do the fine-tuning; otherwise, your chatbot is more likely to produce errors.”

Gao has a tangential interest in misinformation, having had a paper recently accepted for publication in Information Systems Research, titled “Can Crowdchecking Curb Misinformation? Evidence from Community Notes.” He says that because noise or low-quality information might exist in the training data, social bots, which are also frequently used for fact-checking, might give information they believe is true but which is not. He therefore suggests that government regulations are needed to determine what can and cannot be done with a large language model on social platforms.

“The assumption is that people can realize the bot is not reliable,” Gao said. “What if the person doesn’t know the ground truth, and the person simply trusts the social bot? Perhaps the bot is right 99 out of 100 times. What if the person doesn’t realize the one mistake? The mistake could have serious consequences.”

Gao notes that, because social bots are relatively new, more research is needed to help optimize them. For instance, users don’t just talk to social bots once; they do so repeatedly. How do these human-bot relationships evolve as interactions continue? Do the relationships grow stronger? Does the bot’s reply matter for relationship bonding?

“Social media users are willing to talk to social bots,” Gao said. “They can treat social bots like their friends. The bot can provide something interesting, like opinions or emotional support.”