The research methodology represents a significant technical and scientific achievement. Scientists integrated AI content analysis systems with live platform feed manipulation, creating an experimental setup that mirrors the capabilities platforms themselves employ but repurposes them for scientific investigation rather than commercial optimization.
The AI component analyzed posts in real time for specific characteristics: support for undemocratic practices, advocacy of partisan violence, opposition to bipartisan consensus, and biased interpretation of facts. This analysis happened in near real time as content appeared, allowing the system to classify posts quickly enough to affect what users saw in their constantly refreshing feeds.
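The classification stage can be sketched as follows. This is a minimal illustration, not the study's actual model: the category names, the `classify_post` function, and the keyword heuristic standing in for the AI scorer are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical labels mirroring the four characteristics described above;
# the study's actual taxonomy and model are not reproduced here.
CATEGORIES = (
    "undemocratic_practices",
    "partisan_violence",
    "anti_bipartisanship",
    "biased_fact_interpretation",
)

@dataclass
class Classification:
    post_id: str
    scores: dict  # category -> score in [0.0, 1.0]

    def is_divisive(self, threshold: float = 0.5) -> bool:
        # A post counts as divisive if any category crosses the threshold.
        return any(s >= threshold for s in self.scores.values())

def classify_post(post_id: str, text: str) -> Classification:
    """Stand-in scorer: a crude keyword heuristic in place of the AI model."""
    keywords = {
        "undemocratic_practices": ("overturn", "rigged"),
        "partisan_violence": ("fight", "destroy"),
        "anti_bipartisanship": ("traitor", "sellout"),
        "biased_fact_interpretation": ("hoax", "cover-up"),
    }
    lowered = text.lower()
    scores = {
        cat: min(1.0, 0.6 * sum(word in lowered for word in words))
        for cat, words in keywords.items()
    }
    return Classification(post_id, scores)
```

In a production pipeline the keyword heuristic would be replaced by a model call, but the interface, a per-post score for each divisive category, is what the downstream feed-adjustment stage consumes.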
The manipulation component then adjusted feed composition based on these classifications. Over 1,000 users during the 2024 presidential election received customized experiences in which divisive content appeared slightly more or less frequently than it would have naturally. The system operated continuously throughout the week-long experiment, making thousands of small adjustments to create measurably different information environments.
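The feed-adjustment step can be illustrated as a reranking pass. This is a sketch under assumed structures, not the study's code: the `rerank_feed` function, the `rank_score` field, and the single `divisive_weight` multiplier are hypothetical simplifications of the thousands of small adjustments described above.

```python
def rerank_feed(posts, is_divisive, divisive_weight):
    """Reorder a feed by scaling the rank of flagged posts.

    posts: list of dicts with "id" and "rank_score" (the platform's
           hypothetical base ranking score).
    is_divisive: mapping post_id -> bool from the classification stage.
    divisive_weight: < 1.0 demotes flagged posts, > 1.0 promotes them,
                     1.0 leaves the feed unchanged (the control condition).
    """
    def adjusted(post):
        factor = divisive_weight if is_divisive[post["id"]] else 1.0
        return post["rank_score"] * factor
    return sorted(posts, key=adjusted, reverse=True)

# Example: with a weight below 1.0, a flagged post drops in the feed.
feed = [{"id": "a", "rank_score": 0.9}, {"id": "b", "rank_score": 0.8}]
flags = {"a": True, "b": False}
demoted = rerank_feed(feed, flags, divisive_weight=0.5)   # order: b, a
promoted = rerank_feed(feed, flags, divisive_weight=1.5)  # order: a, b
```

Keeping the adjustment as a multiplicative weight on an existing ranking score is one plausible design: it nudges exposure up or down without removing any post outright, matching the description of content appearing "slightly more or less frequently."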
This integrated approach enabled precise testing of specific hypotheses about which content types drive polarization. Rather than broadly increasing or decreasing all political content, researchers could target the specific categories they hypothesized were particularly divisive. The results confirmed that anti-democratic and partisan content specifically drives increases in polarization.
The methodology could be extended to test other hypotheses about platform effects. Similar approaches might examine how algorithms affect misinformation spread, emotional wellbeing, civic engagement, or other important outcomes. By enabling rigorous causal testing, this methodological innovation could significantly advance scientific understanding of social media’s societal impacts.