Not long ago, social networks were fun but somewhat inconsequential: digital playgrounds for sharing ramen photos and connecting with friends over cat memes. Then they evolved into a springboard for democracy, helping usher in movements like Occupy Wall Street. More recently, social media has emerged as an insidious danger, a tool to divide and conquer democracy.
In the 2016 U.S. election, social networks, most notably Facebook, were weaponized to spread fake news and propel the Republican party into the White House. This fake news mostly took the form of Facebook ads and articles that painted the Democratic National Committee as evil and corrupt. Much of this content turned out to come from accounts based in Eastern Europe or from Russian websites hosted on U.S. servers.
The Danger of Content
Social media is essentially billions of pieces of content published and shared every day. Maybe it's the sheer volume, or maybe it's habit, but users typically scan their feeds and share with a single tap instead of engaging with content the way they would a Sunday newspaper or the evening news.
Social networks and tech leaders are culpable in the proliferation of misinformation because they built revenue-generating platforms that depend on endless sharing: the activity must keep flowing so it can be packaged and sold to advertisers.
What Can Tech Companies Do?
Judging by the recent net neutrality debate, politicians are poorly equipped to police the web; they seem too deep in the pockets of corporations to act in good faith. The internet should be a free market where people can share opinions and content, and they need the ability to do so freely to maintain a network's integrity.
Leaders in the tech industry should combat this issue rather than wait for government action. They have access to the technology and create the policies, so they must work to regulate the veracity of content. Facebook recently announced a strategy to deal with malicious, non-factual news, which includes better detection, third-party verification, and warnings on stories that are unverified or known to be false. We will have to wait to gauge the effectiveness of this strategy, but at the very least it could prompt other networks to take responsibility for their content and focus on truth rather than advertising dollars.
Most importantly, it's on us, the users of these networks, to be the solution. We need to create, read and curate more discriminately. If users ignore or report false or malicious content, that content will eventually become insignificant. The best way to kill misinformation is to keep scrolling past it in your newsfeed. It's the only way to ensure it drifts off into the digital ether.