With the rampant proliferation of fictitious stories presented as legitimate news, new methods of sorting real information from fake stories have become increasingly important, especially as the real-world consequences of manipulating information on social media grow more apparent. New efforts are being made toward this end, from Facebook's attempt to tackle the problem on its own site to academic research, including a recent study that traced how fake news spreads much faster on social media than real news.

The study, conducted at the Massachusetts Institute of Technology, used news stories that had been either verified or debunked by six prominent fact-checking organizations: Snopes, PolitiFact, FactCheck, Truth or Fiction, Hoax Slayer, and Urban Legends. The researchers then searched Twitter's archive for mentions of each story, determining for each mention whether it was an original tweet or a reply to another posting. This allowed them to track how information propagates through Twitter and to trace each story back to its originating tweet.
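To make that tracing step concrete, here is a minimal sketch in Python, assuming a simplified, hypothetical tweet record with a single `parent_id` link (the real Twitter archive exposes separate reply and retweet metadata); it walks each mention back to its originating tweet, the same kind of bookkeeping the researchers describe.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified tweet record: the real Twitter archive exposes
# richer metadata (separate reply and retweet IDs, timestamps, author info).
@dataclass
class Tweet:
    tweet_id: str
    story_id: str                    # which fact-checked story the tweet mentions
    parent_id: Optional[str] = None  # None for an original tweet, else the tweet it echoes

def trace_to_origin(tweet: Tweet, index: dict) -> Tweet:
    """Follow reply/retweet links upward until reaching the originating tweet."""
    current = tweet
    while current.parent_id is not None and current.parent_id in index:
        current = index[current.parent_id]
    return current

# Toy data: one original tweet about a story and two downstream mentions of it.
tweets = [
    Tweet("t1", "story-42"),
    Tweet("t2", "story-42", parent_id="t1"),
    Tweet("t3", "story-42", parent_id="t2"),
]
index = {t.tweet_id: t for t in tweets}

for t in tweets:
    origin = trace_to_origin(t, index)
    kind = "original" if t.parent_id is None else "reply/retweet"
    print(f"{t.tweet_id}: {kind}, traces back to {origin.tweet_id}")
```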

The resulting database included 126,000 stories, tweeted 4.5 million times by 3 million different accounts. The researchers found that legitimate news stories rarely spread beyond 1,000 people, while false stories could fan out to as many as 100,000 accounts. This was apparently due not to the personal influence of the individual tweeters, but to the novelty of the stories themselves.

"Novel information is thought to be more valuable than redundant information," explains study co-author Sinan Aral, a professor of management at MIT. "People who spread novel information gain social status because they’re thought to be ‘in the know’ or to have inside information."

Aral's team verified this by analyzing the emotional content of the stories and found that the false stories were indeed crafted to provoke shock and disgust. A bot-detection algorithm was also employed to separate artificial propagation from genuine human sharing, and it showed that bots spread true and false stories at roughly the same rate.
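As an illustration only, the sketch below scores text against a tiny hand-made emotion lexicon; the study itself relied on established emotion lexicons and separate bot-detection models, neither of which is reproduced here.

```python
# Toy emotion scoring to illustrate the kind of analysis described above; the
# word list and categories are invented for this example, not taken from the study.
EMOTION_LEXICON = {
    "shocking": "surprise", "unbelievable": "surprise", "secret": "surprise",
    "disgusting": "disgust", "outrage": "disgust",
    "confirmed": "trust", "official": "trust", "report": "trust",
}

def emotion_profile(text: str) -> dict:
    """Count lexicon hits per emotion category in a piece of text."""
    counts: dict = {}
    for word in text.lower().split():
        emotion = EMOTION_LEXICON.get(word.strip(".,!?"))
        if emotion:
            counts[emotion] = counts.get(emotion, 0) + 1
    return counts

print(emotion_profile("Shocking secret report reveals disgusting outrage!"))
# {'surprise': 2, 'trust': 1, 'disgust': 2}
```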

"So bots could not explain this massive difference in the diffusion of true and false news we’re finding in our data," continues Aral, "it’s humans that are responsible."

Aral and his team plan to build on these findings to develop methods for stemming the spread of false rumors, similar to Facebook's efforts to tackle the problem. In the meantime, it is increasingly essential for the rest of us to be mindful of the information we consume and to question its veracity, especially when it seems to confirm our own worldview.
