Sorry, I Lied About Fake News

A false tweet really does move faster than the truth.


Okay, this is embarrassing: The news I shared the other day, about the sharing of fake news, was fake.

That news—which, again, let’s be clear, was fake—concerned a well-known MIT study from 2018 that analyzed the spread of news stories on Twitter. Using data drawn from 3 million Twitter users from 2006 to 2017, the research team, led by Soroush Vosoughi, a computer scientist who is now at Dartmouth, found that fact-checked news stories moved differently through social networks depending on whether they were true or false. “Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth,” they wrote in their paper for the journal Science.

“False Stories Travel Way Faster Than the Truth,” read the English-language headlines (and also the ones in French, German, and Portuguese) when the paper first appeared online. In the four years since, that viral paper on virality has been cited about 5,000 times by other academic papers and mentioned in more than 500 news outlets. According to Altmetric, which computes an “attention score” for published scientific papers, the MIT study has also earned a mention in 13 Wikipedia articles and one U.S. patent.

Then, this week, an excellent feature article on the study of misinformation appeared in Science, by the reporter Kai Kupferschmidt. Buried halfway through was an intriguing tidbit: The MIT study had failed to account for a bias in its selection of news stories, the article claimed. When different researchers reanalyzed the data last year, controlling for that bias, they found no effect—“the difference between the speed and reach of false news and true news disappeared.” So the landmark paper had been … completely wrong?

It was more bewildering than that: When I looked up the reanalysis in question, I found that it had mostly been ignored. Written by Jonas Juul, of Cornell University, and Johan Ugander, of Stanford, and published in November 2021, it has accumulated just six citations in the research literature. Altmetric suggests that it was covered by six news outlets, while not a single Wikipedia article or U.S. patent has referenced its findings. In other words, Vosoughi et al.’s fake news about fake news had traveled much further, deeper, and more quickly than the truth.

This was just the sort of thing I love: The science of misinformation is rife with mind-bending anecdotes in which a major theory of “post-truth” gets struck down by better data, then draws a last, ironic breath. In 2016, when a pair of young political scientists wrote a paper that cast doubt on the “backfire effect,” which claims that correcting falsehoods only makes them stronger, at first they couldn’t get it published. (The field was reluctant to acknowledge their correction.) The same pattern has repeated several times since: In academic echo chambers, it seems, no one really wants to hear that echo chambers don’t exist.

And here we were again. “I love this so much,” I wrote on Twitter on Thursday morning, above a screenshot of the Science story.

My tweet began to spread around the world. “Mehr Ironie geht nicht” (“It doesn’t get more ironic than this”), one user wrote above it. “La smentita si sta diffondendo molto più lentamente dello studio fallace” (“The correction is spreading much more slowly than the flawed study”), another posted. I don’t speak German or Italian, but I could tell I’d struck a nerve. Retweets and likes gathered by the hundreds.

But then, wait a second—I was wrong. Within a few hours of my post, Kupferschmidt tweeted that he’d made a mistake. Later in the afternoon, he wrote a careful mea culpa, and Science issued a correction. It seemed that Kupferschmidt had misinterpreted the work from Juul and Ugander: As a matter of fact, the MIT study hadn’t been debunked at all.

By the time I spoke with Juul on Thursday night, I knew I owed him an apology. He’d only just logged onto Twitter and seen the pileup of lies about his work. “Something similar happened when we first published the paper,” he told me. Mistakes were made—even by fellow scientists. Indeed, every time he gives a talk about it, he has to disabuse listeners of the same false inference. “It happens almost every time that I present the results,” he told me.

He walked me through the paper’s findings—what it really said. First off, when he reproduced the work from the team at MIT, using the same data set, he’d found the same result: Fake news did reach more people than the truth, on average, and it did so while spreading deeper, faster, and more broadly through layers of connections. But Juul figured these four qualities—further, faster, deeper, broader—might not really be distinct: Maybe fake news is simply more “infectious” than the truth, meaning that each person who sees a fake-news story is more likely to share it. As a result, more infectious stories would tend to make their way to more people overall. That greater reach—the further quality—seemed fundamental, from Juul’s perspective. The other qualities that the MIT paper had attributed to fake news—its faster, deeper, broader movement through Twitter—might simply be an outgrowth of this more basic fact. So Juul and Ugander reanalyzed the data, this time controlling for each news story’s total reach—and, voilà, they were right.
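For readers who want a concrete picture of what “controlling for reach” means, here is a toy simulation in Python. It is only a sketch of the general idea, not Juul and Ugander’s actual analysis: the branching-process model, the sharing probabilities, and the fan-out are all illustrative assumptions. It shows how a more “infectious” story tends to spread deeper simply because it reaches more people, and how that gap shrinks once you compare cascades of the same size.

```python
# Illustrative sketch only -- not Juul and Ugander's actual method.
# Retweet cascades are modeled as simple branching processes with two
# "infectiousness" levels (a stand-in for false vs. true stories). We ask:
# does the depth gap survive once cascades of equal size are compared?
# Every parameter below is an assumption chosen for illustration.
import random
from collections import defaultdict

def simulate_cascade(p_share, fanout=3, max_size=2000):
    """Grow one cascade: each exposed user shares with probability p_share,
    exposing `fanout` followers in turn. Returns (size, depth)."""
    frontier = [0]               # depths of nodes still exposing followers
    size, depth = 1, 0
    while frontier and size < max_size:
        nxt = []
        for d in frontier:
            for _ in range(fanout):
                if random.random() < p_share:
                    size += 1
                    depth = max(depth, d + 1)
                    nxt.append(d + 1)
        frontier = nxt
    return size, depth

def summarize(p_share, n=20000):
    """Mean depth overall, and mean depth bucketed by cascade size (reach)."""
    cascades = [simulate_cascade(p_share) for _ in range(n)]
    overall = sum(d for _, d in cascades) / n
    by_size = defaultdict(list)
    for s, d in cascades:
        by_size[s].append(d)
    return overall, {s: sum(ds) / len(ds) for s, ds in by_size.items()}

random.seed(0)
false_mean, false_by_size = summarize(p_share=0.32)  # more "infectious" story
true_mean, true_by_size = summarize(p_share=0.25)    # less "infectious" story

print("Unconditional mean depth:", round(false_mean, 2), "vs", round(true_mean, 2))
print("Matched on reach (size -> mean depth):")
for s in sorted(set(false_by_size) & set(true_by_size))[:8]:
    print(f"  size {s}: {false_by_size[s]:.2f} vs {true_by_size[s]:.2f}")
```

In this toy setup, the unconditional comparison makes the more infectious story look deeper and broader, but much of that difference reflects the fact that its cascades are simply bigger; matching cascades by size is the rough analogue of the control Juul and Ugander applied.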

So fake news does spread further than the truth, according to Juul and Ugander’s study; but the other ways in which it moves across the network look the same. What does that mean in practice? First and foremost, you can’t identify a simple fingerprint for lies on social media and teach a computer to spot it. (Some researchers have tried and failed to build these sorts of automated fact-checkers, based on the work from MIT.)

But if Juul’s paper has been misunderstood, he told me, so, too, was the study that it reexamined. The Vosoughi et al. paper arrived in March 2018, at a moment when its dire warnings matched the public mood. Three weeks earlier, the Justice Department had indicted 13 Russians and three organizations for waging “information warfare” against the U.S. Less than two weeks later, The Guardian and The New York Times published stories about the leak of more than 50 million Facebook users’ private data to Cambridge Analytica. Fake news was a foreign plot. Fake news elected Donald Trump. Fake news infected all of our social networks. Fake news was now a superbug, and here, from MIT, was scientific proof.

As this hyped-up coverage multiplied, Deb Roy, one of the study’s co-authors, tweeted a warning that the scope of his research had been “over-interpreted.” The findings applied most clearly to a very small subset of fake-news stories on Twitter, he said: those that had been deemed worthy of a formal fact-check and had been judged false by six specific fact-checking organizations. Yet much of the coverage assumed that the same conclusions could reliably be drawn about all fake news, and Roy’s message did little to stop the spread of that exaggeration. Here’s a quote from The Guardian the very next day: “Lies spread six times faster than the truth on Twitter.”

Now, with signs that Russia may be losing its latest information war, perhaps psychic needs have changed. Misinformation is still a mortal threat, but U.S. news consumers may be past the peak of fake-news panic. We may even have an appetite for scientific “proof” that all those fake-news fears were unfounded.

When I told Juul that I was sorry for my tweet, he responded with a gracious scoff. “It’s completely human,” he said. The science is the science, and the truth can go only so far. In just the time that we’d been talking, my false post about his work had been shared another 28 times.

Daniel Engber is a senior editor at The Atlantic.