New study shows that reducing polarizing content decreases emotional intensity on social media


New research shows that decreasing the prominence of divisive posts on social media can significantly ease political hostility. Researchers developed a method to adjust the ranking of users' feeds, something traditionally controlled only by social media platforms, to test the impact of exposure to polarizing content.

By rearranging feeds to limit posts expressing anti-democratic views or partisan anger, the study found measurable changes in both users' emotions and their attitudes toward those with opposing political beliefs. The experiment used an open-source web tool to rerank posts in real time on X, formerly known as Twitter, for participants who gave consent.

Using social science principles and large language models, the team identified content likely to provoke polarization, such as posts promoting political violence or calls to imprison political opponents. These posts were not removed but ranked lower, making them less immediately visible and reducing the frequency with which users encountered them.
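The core idea, downranking rather than removing flagged posts, can be sketched in a few lines. This is a minimal illustration, not the study's actual implementation: the `polarization_score` callable is a stand-in for the large-language-model classifier the researchers used, and the `penalty` and `threshold` parameters are hypothetical.

```python
# Sketch of downranking (not removing) polarizing posts in a feed.
# The scorer here is a stand-in; the study used large language models
# to flag content such as calls for political violence.

def rerank_feed(posts, polarization_score, penalty=0.5, threshold=0.7):
    """Return posts reordered so flagged posts rank lower.

    posts: list of (post_id, engagement_score) tuples.
    polarization_score: callable mapping post_id -> float in [0, 1].
    Posts scoring at or above `threshold` stay in the feed but have
    their ranking score multiplied by `penalty`; nothing is removed.
    """
    def adjusted(post):
        post_id, engagement = post
        if polarization_score(post_id) >= threshold:
            return engagement * penalty
        return engagement

    return sorted(posts, key=adjusted, reverse=True)

# Toy example: post "b" is flagged (score 0.9) and drops in rank
# despite having the highest raw engagement.
scores = {"a": 0.1, "b": 0.9, "c": 0.2}
feed = [("b", 10.0), ("a", 8.0), ("c", 5.0)]
print(rerank_feed(feed, scores.get))
# [('a', 8.0), ('b', 10.0), ('c', 5.0)]
```

The key design point mirrors the study: visibility is reduced by adjusting rank, so flagged content remains reachable rather than being censored outright.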

The trial ran for ten days leading up to the 2024 U.S. presidential election. Results indicated that limiting exposure to polarizing content improved participants' feelings toward members of the opposite party and reduced negative emotions while browsing their feeds. The effects were consistent across different political affiliations, suggesting broad applicability of the approach.

Why This Matters

Contrary to the idea that social media feeds must either maximize engagement or follow a strict chronological order, the study demonstrates that intermediate strategies are possible. Feed algorithms, designed to capture attention, can influence moods, attitudes, and perceptions of others. The research emphasizes the importance of tools allowing independent studies of alternative feed strategies in realistic settings.

The findings show that large language models provide a viable method for detecting polarizing content and that platforms can use this technology to reduce the social harm caused by extreme posts.

Related Research

Research on alternative feed algorithms is expanding, though testing on live platforms remains challenging. Previous studies, such as collaborations with Meta, found that simply switching to chronological feeds had limited impact on polarization. Other initiatives, including the Prosocial Ranking Challenge at UC Berkeley, investigate ranking approaches aimed at improving social outcomes.

Advances in large language models offer enhanced ways to understand user behavior, emotions, and interactions. There is growing interest in giving users more control over feed content, with projects like the Alexandria library of pluralistic values and the Bonsai reranking system paving the way. Social media platforms, including Bluesky and X, are exploring similar approaches.

Future Directions

This study marks an initial step toward developing algorithms sensitive to their societal effects. Future research will examine long-term outcomes and explore ranking strategies that address other online well-being risks, including mental health and life satisfaction. Efforts will focus on balancing cultural context, personal values, and user choice to foster healthier social and civic interactions online.

Author: Grace Ellison
