Facebook bans voting misinformation ahead of November midterms

The social media giant will ban false information about voting requirements and fake reports of violence or long lines at polling stations in the run-up to the US midterm elections, a senior company official said.

The ban on false and misleading information, announced by Facebook executives on Monday, comes six weeks after Sen. Ron Wyden (D-OR) asked the company's Chief Operating Officer, Sheryl Sandberg, how Facebook would counter tactics that aim to mislead users and potentially prevent them from voting, Reuters reported.

The world’s largest online social network, with more than 1.5 billion daily users, also said it would ban “misrepresentations about how to vote, such as claims that you can vote by text message, and statements about whether a vote will be counted.”

Facebook executives said the new policies would be enforced by moderators applying the company’s community standards, an approach they argued would be more effective than removing every misleading post outright.

“We don't believe we should remove things from Facebook that are shared by authentic people if they don't violate those community standards, even if they are false,” said Tessa Lyons, product manager for Facebook's News Feed feature, which shows users what their friends are sharing.

Facebook has previously come under fire over alleged Russian involvement in the 2016 US presidential election via social media posts. Although little evidence has been found that such posts had much influence on the election, Facebook has been criticized for not acting against dishonest accounts and misleading information on its platform. Its policies have undergone several changes since then.

At one point, Facebook executives even discussed banning all political advertising on the platform. The company rejected that idea: product managers were loath to leave advertising dollars on the table, and executives argued that blocking political ads would not help the fight against fake news but would instead favor wealthier campaigners who could afford more traditional and expensive advertising, such as TV spots. Instead, the company agreed to check political ad buyers for proof of national residency and to keep a public record of such buyers.

“Without a clear and transparent policy to curb the deliberate spread of false information that applies across platforms, we will continue to be vulnerable,” said Graham Brookie, head of the Atlantic Council’s Digital Forensic Research Lab.

Other social media companies, including Reddit and Twitter, have also launched their own efforts to keep misinformation off their platforms.

In early October, Twitter announced that “as platform manipulation tactics continue to evolve, we are updating and expanding our rules to better reflect how we identify fake accounts, and what types of inauthentic activity violate our guidelines,” noting that it would continue removing accounts that spread false information about voting or misrepresent themselves as members of political parties.


