
Study Warns AI Could Supercharge Social Media Polarization
Artificial intelligence could supercharge polarization on social media, Concordia researchers and students warn, raising concerns over free speech and misinformation.
In a rush? Here are the quick facts:
- AI algorithms can spread division using only follower counts and recent posts.
- Reinforcement-learning bots adapt quickly to exploit social media vulnerabilities.
- Experts warn platforms risk either censorship or unchecked manipulation.
Although polarization on social media is nothing new, researchers and student activists at Concordia University warn that artificial intelligence could make the problem much worse.
“Instead of being shown footage of what’s happening or content from the journalists who are reporting on it, we’re instead seeing overly dramatized AI art of things we should care about politically […] It really distances people and removes accountability,” said Danna Ballantyne, external affairs and mobilization coordinator for the Concordia Student Union, as reported by The Link.
Her concerns echo new research from Concordia, where professor Rastko R. Selmic and PhD student Mohamed N. Zareer showed how reinforcement-learning bots can fuel division online. “Our goal was to understand what threshold artificial intelligence can have on polarization and social media networks, and simulate it […] to measure how this polarization and disagreement can arise,” Zareer said, as reported by The Link.
The findings suggest that algorithms don’t need private data to stir division: basic signals such as follower counts and recent posts are enough. “It’s concerning, because [while] it’s not a simple robot, it’s still an algorithm that you can create on your computer […] And when you have enough computing power, you can affect more and more networks,” Zareer explained to The Link.
This mirrors a wider body of research showing how reinforcement learning can be weaponized to push communities apart. The Concordia study used Double-Deep Q-learning and demonstrated that an adversarial AI agent can “flexibly adapt to changes within the network, allowing it to effectively exploit structural vulnerabilities and amplify divisions among users,” the researchers noted.
Double-Deep Q-learning is an AI technique in which a bot learns optimal actions through trial and error. It uses deep neural networks to handle complex problems and keeps two separate value estimates to avoid overestimating rewards. Applied to social media, such an agent can strategically spread content to increase polarization with minimal data.
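To make the idea concrete, here is a minimal sketch of the double-estimator trick at the heart of Double-Deep Q-learning, shown in its simpler tabular form (plain Double Q-learning, without the neural networks) on a hypothetical one-state toy problem. The environment, action names, and parameters below are illustrative assumptions for explanation only, not the study’s actual code or simulator.

```python
import random

# Toy illustration of the double-estimator idea behind Double(-Deep) Q-learning.
# Two Q-tables are maintained; one picks the best next action, the other
# evaluates it, which counteracts the overestimation bias of standard Q-learning.

ACTIONS = ["amplify", "stay_quiet"]  # hypothetical actions for an influence bot

def step(state, action):
    """Hypothetical one-state environment: returns (next_state, reward)."""
    reward = random.gauss(0.1, 1.0) if action == "amplify" else 0.0
    return state, reward

q_a = {a: 0.0 for a in ACTIONS}
q_b = {a: 0.0 for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # illustrative hyperparameters

state = 0
for _ in range(10_000):
    # Epsilon-greedy action selection on the combined estimate.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q_a[a] + q_b[a])
    state, reward = step(state, action)

    if random.random() < 0.5:
        # Update table A: A selects the best next action, B evaluates it.
        best = max(ACTIONS, key=lambda a: q_a[a])
        q_a[action] += alpha * (reward + gamma * q_b[best] - q_a[action])
    else:
        # Update table B: B selects, A evaluates.
        best = max(ACTIONS, key=lambda a: q_b[a])
        q_b[action] += alpha * (reward + gamma * q_a[best] - q_b[action])

print(q_a, q_b)
```

In the full Double-Deep Q-learning setup the two tables are replaced by deep neural networks, which is what lets an agent cope with a state space as large as a real social network.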
Zareer warned that policymakers face a difficult balance. “There is a fine line between monitoring and censoring and trying to control the network,” he said to The Link. Too little oversight lets bots manipulate conversations, while too much risks suppressing free speech.
Meanwhile, students like Ballantyne fear AI is erasing lived experience. “AI completely scraps that,” she said to The Link.