OVERVIEWS OF RESEARCH ON POLARIZATION & SOCIAL MEDIA
For a bird's-eye view of the literature on social media and political dysfunction, see this document Chris co-authored with Jonathan Haidt, or this New Yorker article that describes how a range of experts reacted to this effort.
For a bird's-eye view of polarization in the United States, check out this article Chris co-authored with a team of leading scholars in Science.
To learn which depolarization strategies work, see this new article that Chris co-authored with another group of experts in Nature Human Behaviour.
REDUCING POLITICAL POLARIZATION IN THE UNITED STATES WITH A MOBILE CHAT PLATFORM
Do anonymous online conversations between people with different political views exacerbate or mitigate partisan polarization? We created a mobile chat platform to study the impact of such discussions. We recruited Republicans and Democrats in the United States to complete a survey about their political views, then randomized them into treatment conditions in which they were offered financial incentives to use our platform to discuss a contentious policy issue with an opposing partisan. We found that people who engaged in anonymous cross-party conversations about political topics exhibited substantial decreases in polarization compared with a placebo group that wrote an essay using the same conversation prompts. Moreover, these depolarizing effects were correlated with the civility of dialogue between study participants. Our findings demonstrate the potential for well-designed social media platforms to mitigate political polarization and underscore the need for a flexible platform for scientific research on social media.
PERCEIVED GENDER AND POLITICAL PERSUASION: A SOCIAL MEDIA FIELD EXPERIMENT DURING THE 2020 US DEMOCRATIC PRIMARY ELECTION
Women have less influence than men in a variety of settings. Does this result from stereotypes that depict women as less capable, or from biased interpretations of gender differences in behavior? We present a field experiment that—unbeknownst to the participants—randomized the gender of avatars assigned to Democrats using a social media platform we created to facilitate discussion about the 2020 Primary Election. We find that misrepresenting a man as a woman undermines his influence, but misrepresenting a woman as a man does not increase hers. We demonstrate that men’s higher resistance to being influenced—and gendered word use patterns—both contribute to this outcome. These findings challenge prevailing wisdom that women simply need to behave more like men to overcome gender discrimination, and suggest that narrowing the gap will require simultaneous attention to the behavior of people who identify as women and as men.
EXPOSURE TO OPPOSING VIEWS CAN INCREASE POLITICAL POLARIZATION: EVIDENCE FROM A LARGE-SCALE FIELD EXPERIMENT ON SOCIAL MEDIA
There is mounting concern that social media sites contribute to political polarization by creating "echo chambers" that insulate people from opposing views about current events. We surveyed a large sample of Democrats and Republicans who visit Twitter at least three times each week about a range of social policy issues. One week later, we randomly assigned respondents to a treatment condition in which they were offered financial incentives to follow a Twitter bot for one month that exposed them to messages produced by elected officials, organizations, and other opinion leaders with opposing political ideologies. Respondents were re-surveyed at the end of the month to measure the effect of this treatment, and at regular intervals throughout the study period to monitor treatment compliance. We find that Republicans who followed a liberal Twitter bot became substantially more conservative post-treatment, and Democrats who followed a conservative Twitter bot became slightly more liberal post-treatment. These findings have important implications for the interdisciplinary literature on political polarization as well as the emerging field of computational social science.
Read the article here. This research was funded by the National Science Foundation, the Russell Sage Foundation, the Carnegie Foundation, the Guggenheim Foundation, and Duke University.
ASSESSING THE RUSSIAN INTERNET RESEARCH AGENCY'S IMPACT ON THE POLITICAL ATTITUDES AND BEHAVIORS OF U.S. TWITTER USERS IN LATE 2017
There is widespread concern that Russia and other countries have launched social-media campaigns designed to increase political divisions in the United States. Though a growing number of studies analyze the strategy of such campaigns, it is not yet known how these efforts shaped the political attitudes and behaviors of Americans. We study this question using longitudinal data that describe the attitudes and online behaviors of 1,239 Republican and Democratic Twitter users from late 2017, merged with nonpublic data about the Russian Internet Research Agency (IRA) from Twitter. Using Bayesian regression tree models, we find no evidence that interaction with IRA accounts substantially impacted six distinct measures of political attitudes and behaviors over a one-month period. We also find that interactions with IRA accounts were most common among respondents with strong ideological homophily within their Twitter network, high interest in politics, and high frequency of Twitter usage. Together, these findings suggest that Russian trolls might have failed to sow discord because they mostly interacted with those who were already highly polarized. We conclude by discussing several important limitations of our study—especially our inability to determine whether IRA accounts influenced the 2016 presidential election—as well as its implications for future research on social media influence campaigns, political polarization, and computational social science.
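The paper uses Bayesian regression tree models; as a much simpler illustration of the underlying design—comparing attitude change between respondents who did and did not interact with IRA accounts across two survey waves—here is a difference-in-means sketch on fully simulated data. All numbers and variable names are hypothetical, not the study's data or method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for a two-wave panel: pre- and post-period attitude
# scores for respondents who did / did not interact with IRA accounts.
# Illustrative only; the paper uses Bayesian regression tree models.
n = 1239
interacted = rng.random(n) < 0.2          # hypothetical exposure indicator
pre = rng.normal(0.0, 1.0, n)             # attitude score, wave 1
post = pre + rng.normal(0.0, 0.3, n)      # wave 2: no true exposure effect

# Difference-in-differences style estimate: mean attitude change among
# exposed respondents minus mean change among the unexposed.
change = post - pre
effect = change[interacted].mean() - change[~interacted].mean()

print(round(effect, 3))  # should be near zero, as no effect was simulated
```

Because the simulation builds in no exposure effect, the estimate hovers near zero—mirroring, in toy form, the null result the paper reports.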
Read the article here. This research was funded by the National Science Foundation and Duke University.
EXPOSURE TO COMMON ENEMIES CAN INCREASE POLITICAL POLARIZATION: EVIDENCE FROM A COOPERATION EXPERIMENT WITH AUTOMATED PARTISANS
Longstanding theory indicates that the threat of a common enemy can mitigate conflict between members of rival groups. We tested this hypothesis in a pre-registered experiment where 1,670 Republicans and Democrats in the United States were asked to complete a collaborative online task with an automated agent, or "bot," that was labelled as a member of the opposing party. Prior to this task, we exposed respondents to primes about a) a common enemy (involving threats from Iran, China, and Russia); b) a patriotic event; or c) a neutral, apolitical topic. Though we observed no significant differences in the behavior of Democrats as a result of these primes, we found that Republicans—and particularly those with very strong conservative views—were significantly less likely to cooperate with Democrats when primed about a common enemy. We also observed lower rates of cooperation among Republicans who participated in our study during the 2020 Iran crisis, which occurred in the middle of our fieldwork. These findings indicate that common enemies may not reduce inter-group conflict in highly polarized societies, and contribute to a growing number of studies that find evidence of asymmetric political polarization. We conclude by discussing the implications of these findings for research in social psychology, political conflict, and the rapidly expanding field of computational social science.
Read the pre-print here. This research was funded by the Russell Sage Foundation and Duke University.
CHANNELLING HEARTS AND MINDS: ADVOCACY ORGANIZATIONS, COGNITIVE-EMOTIONAL CURRENTS, AND PUBLIC CONVERSATION
Do advocacy organizations stimulate public conversation about social problems by engaging in rational debate, or by appealing to emotions? We argue that rational and emotional styles of communication ebb and flow within public discussions about social problems due to the alternating influence of social contagion and saturation effects. These “cognitive-emotional currents” create an opportunity structure whereby advocacy organizations stimulate more conversation if they produce emotional messages after prolonged rational debate or vice versa. We test this hypothesis using automated text-analysis techniques that measure the frequency of cognitive and emotional language within two advocacy fields on Facebook over 1.5 years, and a web-based application that offered these organizations a complimentary audit of their social media outreach in return for sharing nonpublic data about themselves, their social media audiences, and the broader social context in which they interact. Time-series models reveal strong support for our hypothesis, controlling for 33 confounding factors measured by our Facebook application. We conclude by discussing the implications of our findings for future research on public deliberation, how social contagions relate to each other, and the emerging field of computational social science.
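The "cognitive-emotional currents" argument implies an interaction: emotional messages should draw more conversation when they follow stretches of mostly rational debate. A minimal sketch of that interaction structure, on fully simulated data (the paper's actual time-series models control for 33 confounds; everything below is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated daily series for one hypothetical advocacy field: the share of
# emotional language in posts and the volume of resulting comments. Built
# so emotional posts draw more conversation after stretches of mostly
# rational discussion (i.e., when the rational "current" is saturated).
days = 500
emotional = rng.random(days)                       # share of emotional language
rational_run = np.convolve(1 - emotional, np.ones(7) / 7, mode="same")
comments = 5 + 3 * emotional * rational_run + rng.normal(0, 0.5, days)

# OLS with an interaction term, analogous in spirit (only) to the
# time-series models in the paper.
X = np.column_stack([
    np.ones(days),
    emotional,
    rational_run,
    emotional * rational_run,   # emotional message after rational saturation
])
beta, *_ = np.linalg.lstsq(X, comments, rcond=None)

print(beta.round(2))  # last entry (interaction) should be clearly positive
```

A positive interaction coefficient is the signature of the hypothesized opportunity structure: neither style alone drives conversation as strongly as an emotional message arriving after prolonged rational debate.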
Read the article here. This research was funded by the National Science Foundation.
USING INTERNET SEARCH DATA TO EXAMINE THE RELATIONSHIP BETWEEN ANTI-MUSLIM AND PRO-ISIS SENTIMENT IN U.S. COUNTIES
Recent terrorist attacks by second- or third-generation immigrants in the United States and Europe indicate radicalization may result from the failure of ethnic integration—or the rise of inter-group prejudice in communities where “home-grown” extremists are raised. Yet such community-level drivers are notoriously difficult to study because public opinion surveys provide biased measures of both prejudice and radicalization. We examine the relationship between anti-Muslim and pro-ISIS internet searches in 3,099 U.S. counties between 2014 and 2016 using instrumental variable models that control for various community-level factors associated with radicalization. We find anti-Muslim searches are strongly associated with pro-ISIS searches—particularly in communities with high levels of poverty and ethnic homogeneity. Though more research is needed to verify the causal direction of this relationship, this finding suggests minority groups may be more susceptible to radicalization if they experience discrimination in settings where they are isolated and therefore highly visible—or in communities where they compete with majority group members for limited financial resources. We evaluate the validity of our measures using several other data sources and discuss the implications of our findings for the study of terrorism and inter-group relations, as well as for immigration and counter-terrorism policies.
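Instrumental variable models address the problem that an unobserved county-level factor could drive both kinds of searches. A minimal two-stage least squares sketch on simulated data shows the mechanics; the instrument, coefficients, and data below are invented for illustration and are not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated county-level data: an endogenous regressor (anti-Muslim search
# volume), an outcome (pro-ISIS search volume), and an instrument that
# shifts the regressor but affects the outcome only through it. The true
# effect is set to 0.5; an unobserved confound biases naive OLS upward.
n = 3099
instrument = rng.normal(size=n)
confound = rng.normal(size=n)                       # unobserved county factor
anti_muslim = 0.8 * instrument + confound + rng.normal(size=n)
pro_isis = 0.5 * anti_muslim + confound + rng.normal(size=n)

def two_stage_least_squares(y, x, z):
    """Classic 2SLS with one endogenous regressor and one instrument."""
    Z = np.column_stack([np.ones_like(z), z])
    # Stage 1: predict the endogenous regressor from the instrument.
    gamma, *_ = np.linalg.lstsq(Z, x, rcond=None)
    x_hat = Z @ gamma
    # Stage 2: regress the outcome on the fitted values.
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta[1]

naive = np.polyfit(anti_muslim, pro_isis, 1)[0]     # biased by the confound
iv = two_stage_least_squares(pro_isis, anti_muslim, instrument)

print(round(naive, 2), round(iv, 2))  # IV estimate should land near 0.5
```

The naive slope overstates the relationship because the confound moves both variables; the 2SLS estimate recovers the simulated effect because the instrument's variation is, by construction, unrelated to the confound.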
Read the article here. This research was funded by Duke University.
CULTURAL NETWORKS AND BRIDGES: HOW ADVOCACY ORGANIZATIONS STIMULATE PUBLIC CONVERSATION ON SOCIAL MEDIA
Social media sites are rapidly becoming one of the most important forums for public deliberation about advocacy issues. However, social scientists have not explained why some advocacy organizations produce social media messages that inspire far-ranging conversation among social media users while the vast majority receive little or no attention. I argue that advocacy organizations are more likely to inspire comments from new social media audiences if they create “cultural bridges,” or produce messages that combine conversational
themes within an advocacy field that are seldom discussed together. I use natural language processing, network analysis, and a social media application to analyze how cultural bridges shaped public discourse about autism spectrum disorders on Facebook over the course of 1.5 years, controlling for various characteristics of advocacy organizations, their social media audiences, and the broader social context in which they interact. I show that organizations that create substantial cultural bridges provoke 2.52 times more comments about their messages from new social media users than those that do not, controlling for these factors. This study thus offers a theory of cultural messaging and public deliberation, along with computational techniques for text analysis and application-based survey research.
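The "cultural bridge" idea can be made concrete with a toy scoring function: count how often pairs of themes have co-occurred in a field's prior messages, then score a new message by how rare its theme combinations are. The themes, data, and scoring rule below are invented for illustration and are not the paper's measure.

```python
from itertools import combinations

# Hypothetical message-level theme annotations for an advocacy field.
# A "cultural bridge" message combines themes that rarely co-occur in the
# field's prior conversation. Illustrative data, not the paper's.
past_messages = [
    {"treatment", "research"},
    {"treatment", "research"},
    {"parenting", "school"},
    {"parenting", "school"},
    {"research", "school"},
]

# Count how often each pair of themes has co-occurred so far.
pair_counts = {}
for themes in past_messages:
    for pair in combinations(sorted(themes), 2):
        pair_counts[pair] = pair_counts.get(pair, 0) + 1

def bridge_score(message_themes):
    """Average rarity of the message's theme pairs (higher = bigger bridge)."""
    pairs = list(combinations(sorted(message_themes), 2))
    if not pairs:
        return 0.0
    max_count = max(pair_counts.values())
    rarity = [1 - pair_counts.get(p, 0) / max_count for p in pairs]
    return sum(rarity) / len(rarity)

# A message linking seldom-combined themes scores higher than a routine one.
print(bridge_score({"treatment", "parenting"}))  # unseen pair -> 1.0
print(bridge_score({"treatment", "research"}))   # common pair -> 0.0
```

In network terms, the pair counts define a weighted co-occurrence graph over themes, and a high-scoring message spans weak or absent ties in that graph—the structural intuition behind the bridging measure.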
Read the article here. This research was funded by the National Science Foundation.
Q: Who Funds the Polarization Lab?
A: The Polarization Lab was created via a grant from the Duke Provost’s Office. Our 2018 bot study and our 2019 study on the Russian Internet Research Agency were supported by the National Science Foundation, the Russell Sage Foundation, and the Guggenheim and Carnegie Foundations. Our 2021 study analyzing the relationship between anonymity and polarization using our scientific platform for social science research was funded by Duke, and the platform was partially funded by an unrestricted grant from Facebook’s Integrity Foundational Research Program. Facebook does not provide any compensation to the directors of the lab. To learn more about the type of grant we received from Facebook, visit this link.