More chatter and less connection

MIS research into bots on X and Reddit shows they boost individual user engagement but curtail connections between people

Bots have been a fact of life on social media sites for almost a decade, but generative AI is changing and augmenting the role they play in discourse.

While it’s hard to say how gen AI bots might affect our favorite social media platforms, one indicator may be the way bots have already changed user behavior online.

Recent research by Hani Safadi, associate professor of management information systems at the Terry College of Business, has shown that social media users’ reactions to bots increase activity on the sites but stifle peer-to-peer interaction — even supplanting the role of group moderators in guiding discussion.

Safadi and his research team — Notre Dame’s John Lalor, assistant professor of IT, analytics and operations, and Nicholas Berente, professor of IT, analytics and operations — published their findings in fall 2024 in “The Effect of Bots on Human Interaction in Online Communities” in MIS Quarterly.

Recent work has identified a taxonomy of bots — a system of classifying and categorizing different types of bots based on their functionalities, behaviors and operating environments.

Bots can be very simple or very advanced. At one end of the spectrum, rules-based bots perform simple tasks based on specific guidelines. For example, the WikiTextBot account on Reddit replies to posts that contain a Wikipedia link with a summary of the Wikipedia page. The bot’s automated nature allows it to see every post on Reddit via an application programming interface to check each post against its hard-coded rule: “If the post includes a Wikipedia link, scrape the summary from the wiki page and post it as a reply.” These bots are called “reflexive” bots.
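The hard-coded rule described above can be illustrated with a minimal sketch. This is not WikiTextBot’s actual code; the function names, the regex, and the placeholder summary are all hypothetical, and a real bot would stream posts from Reddit’s API and fetch summaries from Wikipedia rather than stub them out.

```python
import re
from typing import Optional

# Hypothetical pattern for spotting an English Wikipedia link in a post.
WIKI_LINK = re.compile(r"https?://en\.wikipedia\.org/wiki/([\w()%-]+)")

def fetch_summary(article: str) -> str:
    # Placeholder: a real reflexive bot would call Wikipedia's API here.
    return f"Summary of '{article.replace('_', ' ')}'"

def reply_for(post_text: str) -> Optional[str]:
    """Apply the reflexive rule: if the post contains a Wikipedia link,
    return a summary reply; otherwise stay silent (return None)."""
    match = WIKI_LINK.search(post_text)
    if match is None:
        return None
    return fetch_summary(match.group(1))
```

The point of the sketch is the rigidity Lalor describes: the bot fires on exactly one condition and does nothing else.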

Other bots on Reddit moderate conversations in communities by, for example, deleting posts containing content that goes against community guidelines based on specifically defined rules. These are known as “supervisory” bots.
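A supervisory bot can be sketched the same way, again as a hypothetical illustration rather than any real moderation bot’s code: the banned terms and the keep/remove decision are invented for the example.

```python
# Hypothetical community guidelines expressed as hard-coded rules.
BANNED_TERMS = {"buy followers", "click this link now"}

def moderate(post_text: str) -> str:
    """Return 'remove' if the post violates a defined rule, else 'keep'."""
    lowered = post_text.lower()
    if any(term in lowered for term in BANNED_TERMS):
        return "remove"
    return "keep"
```

Unlike the reflexive bot, which adds content, this one acts on other users’ content, which is why the study treats the two categories separately.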

“While these bots are rigid because of their rules-based nature, bots can and will become more advanced as they incorporate generative AI technologies,” Lalor added. “Therefore, it’s important to understand how the presence of these bots affects human-to-human interactions in these online communities.”

Safadi and his team analyzed a collection of Reddit communities (subreddits) that experienced increased bot activity between 2005 and 2019. They mapped the social network structure of human-to-human conversations in the communities as bot activity increased.

The team noticed that as the presence of bots that generate and share content increases, there are more connections between users because each post facilitates more opportunities for users to find novel content and engage with others. But this happens at the cost of deeper human-to-human interactions.

“While humans interacted with a wider variety of other humans, their interactions involved more single posts and fewer back-and-forth discussions,” Lalor said. “If one user posts on Reddit, there is now a higher likelihood that a bot will reply or interject itself into the conversation instead of two human users engaging in a meaningful back-and-forth discussion.”

At the same time, the inclusion of bots programmed to enforce community policies led to the diminished roles of human moderators who establish and enforce community norms.

In subreddits with fewer supervisory bots, key community members would coordinate with each other and the wider community to establish and enforce norms. With automated moderation, this is less necessary, and those human members are less central to the community.

As AI technology — especially generative AI — improves, bots can be leveraged by users to create new accounts and by firms to coordinate content moderation and push higher levels of engagement on their platforms.

“It is important for firms to understand how such increased bot activity affects how humans interact with each other on these platforms, especially with regard to their mission statements — for example, Meta’s statement to ‘build the future of human connection and the technology that makes it possible,’” Lalor said. “Firms should also think about whether bots should be considered ‘users’ and how best to present any bot accounts on the platform to human users.”

(This article was originally published by Shannon Roddel at the University of Notre Dame.)