Law

Why There Is No Competition For Accurate Information: The Psychology Of Misinformation And The Necessity Of Content Moderation

Misinformation is misleading or erroneous information, regardless of the author’s motive. Much of the conversation surrounding misinformation has centered on malicious attempts to infect social media platforms with false information. Concerns about plainly “fake” material involve the purposeful, strategic dissemination of lies, which threatens serious consequences for public health. Although the popular emphasis on examples of explicitly fake content is understandable, it subtly diverts attention from inaccurate information that spreads without any intention to deceive. Misinformation is a serious problem. A recent literature review[1] found broad agreement that misinformation is more popular than accurate information on social media, and that its narratives frequently instill fear, worry, and mistrust in institutions. Regulators worldwide, especially in the European Union, are discussing how to tackle disinformation. A common argument against regulation is that freedom of speech ensures that the truth prevails. This notion of a marketplace of ideas is especially influential in the US, despite contradicting what psychologists have documented for decades: people are not willing (or able) to interpret information objectively.

According to one prominent theory, the inability to distinguish between accurate and fake news is driven by political considerations. For example, it has been argued that people are motivated consumers of misinformation: when confronted with politically charged content, they engage in “identity-protective cognition,” which leads them to be overly credulous toward content that is consistent with their partisan identity and overly skeptical of content that is inconsistent with it. According to a related argument, people prioritize allegiance to their political identities over truth, and hence fail to distinguish truth from falsehood in favor of simply believing ideologically concordant information. On these accounts, the major factor explaining why individuals believe fake news is the influence of political motivation on belief.[2]

A competing account holds that people frequently fail to distinguish fact from fiction not because of partisan motivation but because they fail to pause and reflect on the veracity of what they see on social media. If inattention is the problem, simple prompts that direct people’s attention to accuracy can improve the quality of news shared on social media. Interventions of this type, such as digital literacy tips, do not face the same scaling challenges as rigid fact-checking approaches and can, in fact, be paired with crowd-sourced fact-checking to maximize efficiency. When engaged, human reasoning can be a strong antidote to the allure of misinformation. Still, both strategies should be used in tandem: motivated cognition is well documented, as is the fact that most people read only the headlines of news posted on social media.[3]

Unlike markets for cellphones, laptops, or coffee mugs, in which sellers compete to offer the best quality at the lowest possible price precisely because consumers prefer those qualities, the market for misinformation flourishes because people place excessive weight on content that confirms their political priors and have limited willingness to invest time examining evidence that might challenge their picture of reality.


[1] Yuxi Wang et al., Systematic Literature Review on the Spread of Health-related Misinformation on Social Media, 240 Social Science & Medicine 112552 (2019).

[2] Gordon Pennycook & David G. Rand, The Psychology of Fake News, 25 Trends in Cognitive Sciences 388 (2021).

[3] Maksym Gabielkov et al., Social Clicks: What and Who Gets Read on Twitter?, in Proceedings of the 2016 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Science 179 (2016), https://doi.org/10.1145/2896377.2901462.