Twitter Is Making It Easier To Report Voting Misinformation In A Well-Timed Move
In today's society, few things are free from the mark of social media. While many of us still view it as a way to see what our pals are up to or share pictures of our lattes, it's also the way some of us get our news updates, connect globally, or even buy products. But in recent years, social media may also have influenced the news the public consumes, and even elections. In light of that, Twitter is taking action on voting misinformation ahead of the 2020 election.
Currently, if you're scrolling through Twitter and see a tweet you find offensive, the platform gives you the option to report it. But in an April 24 statement on the company's blog, Twitter said it is adding a new option to that reporting function in order to bring down the hammer on "deliberate attempts to mislead voters." Previously, when reporting a tweet, a user could choose from one of three reasons: "I'm not interested in this tweet," "It's suspicious or spam," or "It's abusive or harmful." Now Twitter is adding a fourth option: "It's misleading about voting." The new option will launch for the 2019 Lok Sabha elections in India and the European Union elections, and will roll out to other global elections throughout the year, per the blog post. In an email to Elite Daily, a spokesperson for Twitter said, "We're exploring this for critical elections outside the United States, and we'll provide an update on 2020 if and when we have one."
According to Twitter's statement, there are three main types of content that can be construed as being used for the "purpose of manipulating or interfering in elections," and therefore in violation of the platform's guidelines.
Here they are:
Misleading information about how to vote or register to vote (for example, that you can vote by Tweet, text message, email, or phone call); misleading information about requirements for voting, including identification requirements; and misleading statements or information about the official, announced date or time of an election.
Twitter's move toward ridding the platform of misinformation, especially about elections, is an important one. Following the 2016 presidential election, The New York Times investigated how Russian operatives might have used Facebook and Twitter to influence the election. It found thousands of fake Twitter accounts connected to Russia that were responsible for spreading anti-Clinton messages. The publication reported that many of the fake accounts were automated bots capable of sending identical messages seconds apart, always in the alphabetical order of their names. Some of the bots also used hashtags such as #WarAgainstDemocrats more than 1,700 times.
Similarly, in September 2017, Facebook said it had deleted several hundred accounts it believed were created by a Kremlin-linked Russian company that bought $100,000 in Facebook ads on topics such as race, gay rights, gun control, and immigration during and after the 2016 election, according to The New York Times.
Twitter and Facebook have been trying to crack down on the issue since the election, but it remains a problem. According to Engadget, researchers at Stanford University and New York University (NYU) examined Facebook and Twitter engagements with 570 websites that had been called out for producing fake news stories between 2015 and 2018. The study found that interactions with false news climbed on both Facebook and Twitter in the run-up to the 2016 presidential election, then fell by nearly 50 percent on Facebook after the election while continuing to increase on Twitter. Comparatively, engagement rates for major news sites stayed roughly the same throughout. Despite the decline and the efforts to combat false news being shared online, as of July 2018 Facebook still saw around 70 million engagements per month with the fake news sites in the Stanford and NYU sample, according to Engadget.
Maybe with Twitter's new reporting feature, the numbers will continue to go down — especially before the 2020 election.