Crowdsourcing judgments from 1,128 users, researchers found that online groups as small as 10 individuals could accurately determine whether or not an article was false—about as well as professional fact-checkers. Supplemented by algorithms, a system like this could be trained to identify fake news at the speed and scale at which it spreads.
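The aggregation idea here is simple: pool independent true/false ratings from a small group and take the majority verdict. The study's actual methodology is more involved, but a minimal illustrative sketch (names and data hypothetical) looks like this:

```python
# Illustrative sketch only: aggregating a small crowd's true/false ratings
# by majority vote, the basic idea behind crowd fact-checking. The actual
# study's procedure was more sophisticated than this.
from collections import Counter

def crowd_verdict(ratings):
    """Return the most common label from a group's ratings."""
    counts = Counter(ratings)
    label, _ = counts.most_common(1)[0]
    return label

# Hypothetical example: 10 raters judge one article.
ratings = ["false"] * 7 + ["true"] * 3
print(crowd_verdict(ratings))  # prints "false"
```

Because individual errors tend to be uncorrelated, even a group of 10 can converge on the right answer far more often than any single rater, which is what makes the approach scalable.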
Furthermore, open-sourcing these methods of verification so they are auditable and transparent enough to be easily understood might help ease claims of bias and censorship. An early attempt at this can be seen in Twitter’s Birdwatch, which leverages the community to flag tweets containing misinformation; the system is new and imperfect, and there are clearly ways it can be gamed (a problem for any verification system), but it’s an important first attempt.
But Who Determines Truth?
Each of these three interventions requires someone, somewhere to make a determination as to what is true or what is of high quality. This “baseline” truth is a critical piece of the puzzle, but it’s an increasingly fraught idea to address.
Controlling the narrative will always be contentious, and any system that attempts to fix disinformation will be attacked for partisan bias. Indeed, extreme partisanship is directly associated with sharing fake news. Social media seems to be especially effective at drawing partisan battle lines around more and more issues, even if the issues are not inherently partisan.
But this is a new manifestation of an age-old problem: How do we verify knowledge? And how might we do it quickly enough to be reliable? Who do we trust in society to establish truth? Here we are wading into tricky epistemological territory, but one with precedent.
Let’s look at other services we regularly use to verify facts—imperfect but powerful systems we have come to rely upon. Google and Wikipedia have, writ large, built reputations on effectively helping people find accurate information. We generally trust them, because they have systems of verification and sourcing embedded in their design.
The frictionless design of the current social web has undermined the necessary precondition to democratic functioning: shared truths.
Implicit in our three recommendations is a trust and faith in the basic journalistic process of verification. Journalism is far from perfect. The New York Times does get it wrong sometimes, just as all media entities struggle with the selective interpretation of events and with editorial influence over the tone and tenor of stories. But validated information is critical infrastructure, and social media has undermined it. Social posts are not news articles, even if they’ve come to resemble them in our news feeds. Verifying new information is a core part of any functioning democracy, and we need to recreate the friction that was previously provided by the journalistic process.
On the horizon are new technologies that will enable both decentralization and end-to-end encryption of social media—immune to any moderation. As these new tools reach scale, viral rumors will become even harder to debunk, and the supply problem of mis- and disinformation will only worsen. We should address how these tools might be designed to rebalance the flow of accurate information now, before we lose our capacity to do so.
This responsibility lands at least partially on our shoulders as individuals. We must be vigilant about identifying inaccuracies, and about finding established, reputable sources of knowledge—both academic and journalistic. Too much institutional skepticism is toxic for our shared reality. We can redouble our efforts to find ways of carefully, and compassionately, sourcing truth together. But platforms can help, and must help, tilt the design of our shared spaces towards verifiable facts.
Data-Visualizations by Tobias Rose-Stockwell