Facebook has introduced measures to curb fake news, but unfortunately, those measures appear to be emboldening its spread, with users worldwide flipping a middle finger to the censor.
Facebook has a fake news problem.
Granted, it’s a problem that the Book itself is quite aware of, and has at least made some attempts at tackling. It’s just that those attempts are causing more harm than good.
Currently, when you see a news article that doesn’t seem quite right, say an article from Breitbart, you will often see it accompanied by a flag that tells you the tale being told is “disputed”.
Trouble is, as The Guardian reports, this flagging process is not consistent, with some already flagged stories continuing to circulate without a warning.
Worse still, traffic to fake news stories has actually increased since Facebook rolled out the new tag.
The feature, introduced last month, is part of a partnership between the social media giant and external fact-checking sites, including Snopes.com, ABC News, and PolitiFact.
When a user tries to share a shady story, an alert will pop up, informing them that what they’re about to post has been disputed.
The user is then linked to further information about the fact-checking feature, and calmly – if not passive-aggressively – is told that “sometimes people share fake news without knowing it.”
Should the user be ballsy enough to still post the story, the link is supposed to appear in the news feeds of other users with a large, shaming banner that says “disputed”, before listing the organisations that have flagged it as questionable.
It’s an idea that looks good on paper and seems noble in its intention – a reduction in the visibility of fake news, which Facebook has gained a reputation for rapidly spreading through its news feed algorithm.
But, as The Guardian article continues, in many cases the warning appears to be applied too late, after a story has already gone viral, or is in fact having the opposite effect.
For example, a story about how thousands of Irish were brought to the US as slaves – something you think they would at least touch on briefly in history class – was published by website Newport Buzz, and flagged as untrue by Snopes.com and the Associated Press.
The site’s editor, however, says that traffic to the story actually saw a significant increase after Facebook applied its warning.
“A bunch of conservative groups grabbed this and said, ‘hey, they are trying to silence this blog – share, share, share,’” explains Christian Winthrop. “With Facebook trying to throttle it and say, ‘don’t share it,’ it actually had the opposite effect.”
While Facebook hasn’t provided any actual data on the number of articles that have been flagged as disputed, or what effect on traffic the flag system has, a spokesman told The Guardian that the tag “does lead to a decrease in traffic and shares.”
Robert Shooltz runs a site with the snappy name of RealNewsRightNow, which he argues is more satire than fake news.
He says that a flag on one of his stories – an eerily believable tale on how Trump wants to bring back military drafting – “had absolutely no effect.”
So, why is it that Facebook, even as it tries its darnedest, seems powerless in the crusade against the most alternative of facts?
The issue is not in its methodical tagging of suspicious stories, nor in the resources it uses in the fact-checking process. Rather, according to sociologists and psychologists, it is a problem of trust.
The fake news flagging system is effective if users trust both Facebook itself and the fact-checking services it has partnered with.
If that trust is lacking, arguments made about the inaccuracy of a story can actually convince someone of the opposite, even when the source has facts and evidence to support its argument.
This psychological “boomerang effect” means that Facebook and Snopes flagging something as untrue can make a user more likely to believe it, not less. As actor James Woods, known for his right-leaning commentary on Twitter, said recently, “the fact that Facebook and Snopes ‘dispute’ a story is the best endorsement a story could have.”
Indeed, there are arguments that it is nigh on impossible for the constant output of fake news sewage to be halted by technology alone – or even by the presentation of facts – because the proliferation of alternative facts is being driven by human nature and a clash of cultures.
This is not a free pass, however, for social media to drop its method of flagging suspicious links.
While it may not prevent the proliferation of fake news, it may at the very least make a few users more likely to read an article with a healthy amount of skepticism.
More than anything, what can help prevent fake news is education; education on the importance of a strong, independent news network, and on what exactly fact-checking entails.
It’s easy to shrug off fake news as a minor annoyance, or even something to laugh at, but for a generation raised on social media and constantly plugged in to technology, the inability to tell the real from the fake could pose some major, lifelong issues.
What seems like a silly article today about unreported terrorist attacks in Sweden can turn into unchecked Islamophobia in a teenager just forming their world view.
What starts as a distrust of news networks turns into an inability to recognise very real, potentially very dangerous reporting on public safety, on political events, or on life-changing medical breakthroughs.
Fake news might be a buzzword, but that doesn’t mean it’s something you can just hope will go away.