No one thinks, "I am the kind of person who is susceptible to misinformation." It is those others (stupid anti-vaxxers! arrogant liberal elites!) who are swayed by propaganda masquerading as news and bot armies pushing partisan agendas on Twitter.
But recent disinformation campaigns, especially those originating with coordinated agencies in Russia or China, have been far more sweeping and insidious than most people realize. Using memes, manipulated videos and impersonations to spark outrage and confusion, these campaigns have targets that transcend any single election or community: they aim to engineer volatility and undermine democracy itself. If we are all mentally exhausted and disagree about what is true, authoritarian networks can push their version of reality more effectively. Playing into the "us vs. them" dynamic makes everyone more vulnerable to false belief.
Instead of surrendering to the idea of a post-truth world, we must recognize this so-called information disorder as an urgent societal crisis and bring rigorous, interdisciplinary scientific research to bear on the problem. We need to understand how knowledge is transmitted online; the origins, motivations and tactics of disinformation networks, both foreign and domestic; and the exact ways even the most educated evidence seekers can unwittingly become part of an influence operation. Little is known, for instance, about how long-term exposure to disinformation affects the brain or voting behavior. To examine these connections, technology behemoths such as Facebook, Twitter and Google must make more of their data available to independent researchers (while protecting user privacy).
Research must catch up with the rapidly growing sophistication of disinformation strategies. One positive step was the January 2020 launch of the Misinformation Review, a multimedia-format journal from Harvard University's John F. Kennedy School of Government that fast-tracks its peer-review process and prioritizes articles on the real-world implications of misinformation in areas such as the media, public health and elections.
Journalists must be trained to cover deception without inadvertently entrenching it, and governments should strengthen their information agencies to fight back. Western nations can look to the Baltic states for some of the innovative ways their citizens have dealt with disinformation over the past decade: volunteer armies of civilian "elves," for example, expose the methods of Kremlin "trolls." Minority and historically oppressed communities are also familiar with ways to push back on authorities' attempts to overwrite truth. Critically, technologists should collaborate with social scientists to propose interventions, and they would be wise to imagine how attackers might thwart these tools or turn them to their own ends.
Ultimately, though, for most disinformation operations to succeed, it is regular users of the social Web who must share the videos, use the hashtags and add to the inflammatory comment threads. That means each one of us is a node on the battlefield for reality. We need to be more aware of how our emotions and biases can be exploited with precision and consider what forces might be provoking us to amplify divisive messages.
So every time you want to "like" or share a piece of content, imagine a tiny "pause" button hovering over the thumbs-up icon on Facebook or the retweet symbol on Twitter. Hit it and ask yourself: Am I responding to a meme meant to brand me as a partisan on a given issue? Have I actually read the article, or am I simply reacting to an amusing or enraging headline? Am I sharing this only to display my identity to my audience of friends and peers and get validation through likes? If so, what groups might be microtargeting me through my consumer data, political preferences and past behavior, trying to manipulate me with content that resonates?
Even if—especially if—you’re passionately aligned with or disgusted by the premise of a meme, ask yourself if sharing it is worth the risk of becoming a messenger for disinformation meant to divide people who might otherwise have much in common.
It is easy to dismiss memes as innocuous entertainment rather than powerful narrative weapons in a battle between democracy and authoritarianism. But they are among the tools of the new global information wars, and they will only grow more sophisticated as machine learning advances. If researchers can figure out what gets people to take a reflective pause, that pause may be one of the most effective ways to safeguard public discourse and reclaim freedom of thought.