Political Misinformation and Social Media

Overview

The second decade of the twenty-first century is likely to be remembered as a decade in which political misinformation and "fake news" flourished. The ubiquitous nature of social media provided fertile ground for the spread of misinformation, falsehoods, and misleading statements. Political communication was made more difficult by the expanding chasm between liberals and conservatives and the sharp divide on issues such as abortion, gun control, education, and the environment. According to the Pew Research Center, only 5 percent of Americans regularly used any form of social media in 2005. Six years later, that number had risen to 50 percent, and by 2019, 72 percent of Americans used social media. By 2017, seven out of ten Americans regularly signed on to social media forums such as Facebook, YouTube, Twitter, Instagram, Snapchat, Pinterest, Reddit, LinkedIn, and Tumblr. Users report that they use social media for access to the news, for sharing information, viewpoints, and photographs with family and friends, and for entertainment.

The spread of political misinformation on social media has been accelerated by the use of bots, computer programs designed to automatically repeat designated actions. In 2022, Twitter estimated that up to 12 percent of its 192 million daily active users were bots. Limiting the spread of political misinformation is made more difficult by the fact that such speech is generally protected by the First Amendment's guarantee of free speech and by Section 230 of the Communications Decency Act, which shields social media companies from liability for content posted by their users, including fake news. In a 2020 Pew Research Center survey, 64 percent of Americans said that social media had a mostly negative effect on how things were going in the United States.
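To make concrete what "automatically repeat designated actions" looks like, the sketch below outlines the core loop of a simple amplification bot. It is purely illustrative: the function names, hashtags, and timing values are hypothetical stand-ins rather than any real platform's API.

    import random
    import time

    TARGET_HASHTAGS = ["#election", "#voterfraud"]  # topics the operator wants amplified

    def fetch_matching_posts(hashtag):
        """Placeholder for a platform search call; a real bot would query an API here."""
        return []

    def reshare(post):
        """Placeholder for a platform repost/retweet call."""
        print(f"resharing: {post}")

    def run_bot():
        while True:
            for tag in TARGET_HASHTAGS:
                for post in fetch_matching_posts(tag):
                    reshare(post)  # repeat the designated action on every match
            # randomized pauses let one operator run many accounts while
            # crudely mimicking human timing
            time.sleep(random.uniform(30, 90))

Because such a loop never tires, a handful of operators can generate message volume that looks like broad grassroots support.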

Misinformation and falsehoods are often seen as more appealing than the truth because they are novel and more likely to arouse strong emotions such as anger or resentment. Some observers argue that the worst element in the ubiquity of political misinformation on social media is that many Americans no longer trust the sources that they relied on in the past to tell them the truth. Scholars have found that people are more likely to trust information, even when it is false or misleading, if it agrees with their existing opinions, a tendency known as confirmation bias. People also tend to cluster into like-minded groups on social media, creating what experts call the echo chamber effect and resulting in constant reinforcement of existing beliefs and the spread of inaccurate information.

Social media has increasingly become a source of news, and 67 percent of Americans report that at least some of their knowledge of news stories is based on what they read on social media. However, studies have indicated that fewer than one person in four actually trusts social media to provide accurate news. By contrast, another study, undertaken by the Associated Press and the NORC Center for Public Affairs Research, revealed that social media users were more likely to trust fake news stories generated by celebrities than they were to trust Associated Press stories. The Pew Research Center reports that 67 percent of Americans acknowledge the role that fake news plays in spreading misinformation on social media. A study by Jennifer Swift (2018) of 4,151 adults revealed that in 2017 only 20 percent of respondents trusted national news organizations "a lot" and 52 percent trusted them "some." Respondents were more likely to trust local news organizations (25% and 60%, respectively) and less likely to trust social media (5% and 33%, respectively).

Despite the reluctance of Donald Trump and his advisors to acknowledge Russian interference in the 2016 presidential election, Facebook, Twitter, and Google all admitted to accepting election-related advertising from Russians. In October 2017, Trump tweeted a response to those admissions: "Keep hearing about 'tiny' amount of money spent on Facebook ads. What about billions of dollars on Fake News on CNN, ABC, NBC, and CBS?" Special Counsel Robert Mueller announced indictments against thirteen Russian agents in early 2018. Trump's apparently reflexive strategy of countering any critical news source gave rise to the rallying cry "fake news." From the standpoint of the "mainstream media," Fox News, and in particular its political commentators, has contributed most to political misinformation by refusing to air information critical of Trump. On March 20, 2018, Ralph Peters, a retired lieutenant colonel in the United States Army and a conservative expert on Russian intelligence, reported that he was leaving his position with the network because of its willingness to serve as "a mere propaganda machine for a destructive and ethically ruinous administration."

False information, such as the administration's claim that more than three million illegal votes were cast in the 2016 election, is easily spread on social media platforms, where credulous users pass it along. Political scientists from Princeton University, Dartmouth College, and the University of Exeter traced links to such rumors, learning that one in four visitors to sites reporting such news had originally followed a Facebook link. The study also demonstrated that consumers of fake news were also consumers of hard news. The group least likely to be attracted to fake news stories were those who were politically "savvy." Misinformation continued to spread over social media during the 2020 election, including renewed claims of illegal or fraudulent voting. Trump used his Twitter account to spread such claims, criticizing mail-in voting, which reached record highs in 2020 because of the COVID-19 pandemic, and calling it a scam.

Because there are no filters on news reporting on social media and sensational headlines make the most irresistible "clickbait," falsehoods and misinformation spread quickly. There are so many rumors that fact-checking sites, such as PolitiFact, Snopes, and FactCheck.org, are kept busy attempting to discover the truth behind false or misleading information. Even when they find information that challenges the veracity of rumors, their findings are less likely than the rumors themselves to be distributed among social media users. Scholars have identified three elements common to misinformation: those who encounter it are likely to believe it; media systems make little or no effort to block it; and it is resource intensive to counter false claims (Southwell, Thorson & Sheble, 2018).

Brooke Borel (2018) suggests that social media users who wish to avoid fake news pay attention to the origin of a source, arguing that information is more likely to be reliable when it originates with academics, experts, and eyewitnesses and when it is reported in peer-reviewed publications, academic journals, and online academic databases. Borel recommends checking bylines on stories and looking up writers' credentials. She suggests that the absence of an author byline on a posted story may provide a clue that the information contained therein is questionable. Checking for corrections to a story and examining other reports on the same story may also help consumers learn the truth behind a rumor. Since photographs may easily be manipulated or altered, they should not automatically be accepted as valid. Seeking out the opposing side of an issue may also shed light on the truth.


Further Insights

Experts in a variety of fields are working together to discover better tools for countering the spread of misinformation. Filippo Menczer, an Indiana University specialist in cognitive and computer science, started the Truthy Project at the Observatory on Social Media, a joint project of the university's Network Science Institute and the Center for Complex Networks and Systems. The Duke Tech and Check Cooperative of Duke University is using a $1.2 million grant from the John S. and James L. Knight Foundation, the Facebook Journalism Project, and the Craig Newmark Foundation to develop improved fact-checking tools. Its ClaimBuster tool, which uses an algorithm to differentiate truth from falsehood, is able to scan information and compare it instantaneously with a database of reliable information. ClaimBuster was built by Chengkai Li, a computer scientist at the University of Texas at Arlington. To test the program's ability to discern truth, Li had someone else select seven false stories, including a highly politicized May 2015 article on climate change that claimed that NASA had released information stating that polar ice had not retreated since 1979. ClaimBuster did not pick up on the false information expressed in the claim that climate change is believed to result from a combination of natural and human influences.

Truth Goggles is dedicated to discovering the truth behind online rumors and working toward making factual information more easily accepted by viewers with biased viewpoints. Brooke Borel, the author of The Chicago Guide to Fact-Checking, hosts a podcast for journalists and other interested parties to teach them fact-checking techniques. Borel (2018) maintains that computerized methods of fact-checking may ultimately fail because they lack a human's ability to detect the nuance, irony, and sarcasm found within misinformation. By 2020, social media sites had found little they could do to prevent the spread of political misinformation by users; however, Facebook had been able to reduce the spread of misinformation by about 75 percent by altering its advertising system to intercept fake news stories.
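The matching step described above, comparing a statement against a database of reliable information, can be sketched in a few lines. This is not ClaimBuster's actual implementation; it simply illustrates the idea using Python's standard library and an invented mini-database of fact-checked claims.

    from difflib import SequenceMatcher

    # Hypothetical database of claims that human fact-checkers have already rated.
    FACT_CHECKS = {
        "polar ice has not retreated since 1979": "false",
        "mail-in ballots were counted twice in 2020": "false",
    }

    def similarity(a, b):
        """Rough text similarity between 0 and 1."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def check_claim(claim, threshold=0.6):
        """Return the closest known claim and its verdict, if similar enough."""
        best = max(FACT_CHECKS, key=lambda known: similarity(claim, known))
        if similarity(claim, best) >= threshold:
            return best, FACT_CHECKS[best]
        return None  # no confident match; route to a human fact-checker

    print(check_claim("NASA says polar ice has not retreated since 1979"))

Real systems replace this crude string similarity with trained models that first score how "check-worthy" a sentence is, which is one reason subtler falsehoods, like the climate claim in Li's test, can slip through.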

In 2014, Adrien Friggeri, Lada Adamic, Dean Eckles, and Justin Cheng studied social media rumors, finding that it took six times as long for the truth to reach 1,500 users as it did for misinformation and falsehoods to travel similar channels. They also learned that the cascades spreading falsehoods ran twenty times deeper than those spreading the truth. They preferred the term "false news" over "fake news," arguing that the latter term had been so overused that it was no longer viable for differentiating truth from fiction. Within this context, they examined texts, photographs, and web links that were used to spread rumors on social media. The peak periods for views of false news stories were the end of 2013, 2015, and 2016. They reported that false news was 70 percent more likely than actual news to be retweeted. Their findings were presented in 2014 at the International Conference on Weblogs and Social Media held by the Association for the Advancement of Artificial Intelligence, inspiring further study. Those findings were confirmed in 2019 by researchers from MIT, who also found that some misinformation came from politicians themselves in an effort to win votes.

In the March 9, 2018, issue of Science, Soroush Vosoughi, Deb Roy, and Sinan Aral reported on their unprecedented study of 126,000 contested stories found on Twitter between September 2006 and December 2016. The 126,000 Twitter stories used for the study were retweeted (shared by other Twitter users) a total of 4.5 million times. While false stories about business, natural disasters, terrorism, science and technology, and entertainment were common, they found that politics was the top area of misinformation and fake news. Their findings echoed the Friggeri study that inspired them, demonstrating that falsehoods generally spread both faster and deeper than the truth.
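The "depth" these studies measure is the length of the longest reshare chain in a cascade: each reshare records the post it was copied from, forming a tree rooted at the original post. The minimal sketch below computes that metric over invented toy data.

    # Each reshare points back to its parent post; None marks the original.
    reshares = {
        "p1": None,   # original post
        "p2": "p1",
        "p3": "p1",
        "p4": "p2",
        "p5": "p4",
    }

    def depth(post):
        """Number of hops from the original post to this reshare."""
        hops = 0
        while reshares[post] is not None:
            post = reshares[post]
            hops += 1
        return hops

    cascade_depth = max(depth(p) for p in reshares)
    print(cascade_depth)  # 3, via the chain p1 -> p2 -> p4 -> p5

A false story that "tunnels deeper" is one whose longest chain of person-to-person reshares is longer, as opposed to one simply rebroadcast many times directly from the original post.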

While some false stories may serve to reinforce negative opinions or cause amusement, others may do real damage. For example, following an April 2013 rumor that President Barack Obama had been seriously injured in a White House explosion, the stock market briefly lost an estimated $130 billion in value. Evidence is plentiful that Russian bots were employed in 2016 and have been used since to influence American public opinion. Russian bots have been particularly active after crisis events such as the shootings at a Las Vegas concert and at a Parkland, Florida, high school. However, Vosoughi and his colleagues reported that in their study, bots were just as likely to spread truth as falsehoods. Other damaging misinformation can come from politicians themselves; several studies found that President Trump fueled about 38 percent of the misinformation about the 2020 coronavirus (COVID-19) pandemic. That misinformation included false cures, which sparked a shortage of certain medications, and false assurances that the pandemic was being contained.

Some critics have questioned the findings of the Vosoughi study, particularly its conclusions about the behavior of bots. The use of Russian bots to spread political misinformation has been documented by scholars and news media in different countries. In 2017, the Alliance for Securing Democracy and the German Marshall Fund launched an investigation into the use of Russian bots on Twitter, identifying them as a major source of misinformation aimed at sowing discord. On February 19, 2018 ("After Florida School Shooting, Russian 'Bot' Army Pounced"), The New York Times reported that Russians had used YouTube conspiracy videos, fake interest groups on Facebook, and bots on Twitter to engage in a constant campaign of misdirection. In a 2018 study, Jennifer Swift reviewed millions of comments written in response to fake emails. She found that 23 million comments could be traced back to a mere sixty users. Some 444,938 of those comments had originated in Russia. Information linking the behavior of peaceful protesters with the violent behavior of the alt-right after the violence in Charlottesville, Virginia, in August 2017 was also traced back to Russia.

Issues

Politicians are easy targets for the spread of misinformation on social media. For example, Rachel Ehrenberg (2012) found that during the 2010 midterm elections, two accounts that raised red flags were set up on Twitter within ten minutes of each other. The @PeaceKaren_25 account tweeted 10,000 pro-Republican messages that were then retweeted by the @HopeMarie_25 account, from which no original tweets were ever posted. Since no other activity was evident, both accounts were assumed to be bots. Misinformation designed to discredit Hillary Clinton was ubiquitous during the 2016 presidential election. In the fall of 2016, the twenty most popular quasi-news sites included hoax sites, hyper-partisan sites, and blogs that generated 7,367,000 stories, reactions, and comments. Seventeen of those twenty sites were either pro-Trump or anti-Clinton (Timmer, 2017). Among the most common fake stories on these sites were claims that Hillary Clinton as secretary of state had sold weapons to ISIS, that she had been disqualified from running for president, and that the FBI had received millions of dollars from the Clinton Foundation. When those stories were traced to their sources, scholars and fact-checkers found that many originated in other countries, including Macedonia, where individuals were paid thousands of dollars a month to post false news stories on American social media sites. A good deal of political misinformation appearing in 2016 was later traced to members of Trump's campaign team and to Russian agents and bots.

Facebook has developed political profiles of its users based on the posts they "like." Political designations range from Very Liberal on the left to Very Conservative on the right. Facebook also tracks users' interest in American politics and how they feel about such issues as immigration, national security, equality, and the environment. That information may be sold to campaign managers, advertisers, fundraisers, and political extremists (Sunstein, 2018). On Twitter, the hashtags users respond to or retweet provide similar clues. In March 2018, the extent of misdirection during the campaign was made more glaring when news broke that Cambridge Analytica, a Trump-allied organization partially under the leadership of alt-right Trump ally Steve Bannon, had illegally used the personal information of more than eighty million Facebook users to discredit Clinton and promote Trump. Investigations were launched in the United States and in Britain, where Cambridge Analytica was also accused of interfering with the Brexit vote that led to the United Kingdom's withdrawal from the European Union.

Supporters of politicians may also promote the misleading notion that their favorites are more popular than they actually are. According to information gathered by the Oxford Internet Institute at the University of Oxford, in July 2012, Mitt Romney's Twitter account grew by 14,000 followers over a two-day period. A random sample of those new followers revealed that 10 percent had fewer than two followers each, and 27 percent of them had only one follower or none at all. Again, the assumption was that the new followers were bots.
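Analyses like this one typically sample an account's newest followers and measure the share that have almost no audience of their own. The sketch below, using invented account records and an arbitrary cutoff, shows the shape of that heuristic; real investigations combine many more signals, such as account age and tweet history.

    # Invented sample of newly gained followers.
    new_followers = [
        {"handle": "user_a", "followers": 0},
        {"handle": "user_b", "followers": 1},
        {"handle": "user_c", "followers": 350},
    ]

    def share_suspicious(accounts, max_followers=1):
        """Fraction of sampled accounts with at most max_followers followers."""
        flagged = [a for a in accounts if a["followers"] <= max_followers]
        return len(flagged) / len(accounts)

    print(f"{share_suspicious(new_followers):.0%} of sampled followers look bot-like")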

Political groups have also used social media to spread misinformation about lesser-known politicians. In 2010, a special election in Massachusetts to fill the seat left vacant by the death of Senator Ted Kennedy pitted Democrat Martha Coakley against Republican Scott Brown. In response to Coakley's lead, an Iowa-based group of conservatives launched a "Twitter bomb" designed to defeat her. That same group had been found to be the source of the "swift boat" rumor that helped defeat Democrat John Kerry in the 2004 presidential election. After the Massachusetts election, Rachel Ehrenberg documented the details of the Coakley smear campaign, discovering 234,697 tweets over the four-day period leading up to the election. Some 1,112 of those tweets were requests for signatures on a petition protesting Coakley's alleged record of discrimination, a record that proved to be false. Through retweeting, those tweets reached 60,000 people.

Misinformation about politicians continued into the 2020 election season, with both Democrats and Republicans spreading misinformation. Commonly spread misinformation at that time included false reports of voter fraud and intentionally miscounted mail-in votes. Much of it was started by a group of twenty-five right-wing commentators, including President Trump's son Donald Trump Jr., who together accounted for nearly 29 percent of voter fraud misinformation.

Misinformation was also predicted to play a role in the 2024 presidential election. Experts predicted that artificial intelligence (AI) would worsen the situation by generating persuasive false content, leaving people unsure of what to believe.

Bibliography

Auxier, B. (2020, October 15). 64% of Americans say social media have a mostly negative effect on the way things are going in the U.S. today. Pew Research Center. www.pewresearch.org/fact-tank/2020/10/15/64-of-americans-say-social-media-have-a-mostly-negative-effect-on-the-way-things-are-going-in-the-u-s-today/

Bode, L., & Vraga, E. K. (2015). In related news, that was wrong: The correction of misinformation through related stories functionality in social media. Journal of Communication, 65(4), 619–638.

Borel, B. (2018). Last year there were 8,164 fake news stories. Popular Science, 290(2), 64–124.

Brown, S. (2020, October 5). MIT Sloan research about social media, misinformation, and elections. MIT Sloan School of Management. mitsloan.mit.edu/ideas-made-to-matter/mit-sloan-research-about-social-media-misinformation-and-elections

Ehrenberg, R. (2012). Social media sway. Science News, 182(8), 22–25.

Frenkel, S. (2020, November 23). Meet the top election misinformation 'superspreaders.' The New York Times. www.nytimes.com/2020/11/23/technology/election-misinformation.html

Friggeri, A., Adamic, L., Eckles, D., & Cheng, J. (2014). Rumor cascades. In Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media. Ann Arbor, MI: AAAI.

Highfield, T. (2016). Social media and everyday politics. Malden, MA: Polity.

Littlewood, J. (2021, October 8). Whistleblower confirms what we knew: Facebook blowing it on political disinformation. USA Today. www.usatoday.com/story/opinion/2021/10/08/facebook-twitter-political-disinformation/6026677001/

Perach, R. (2024, January 17). Some people who share fake news on social media actually think they're helping the world. The Conversation. theconversation.com/some-people-who-share-fake-news-on-social-media-actually-think-theyre-helping-the-world-215623

Southwell, B. G., Thorson, E., & Sheble, L. (Eds.). (2018). Misinformation and mass audiences. Austin: University of Texas Press.

Sunstein, C. R. (2018). #Republic: Divided democracy in the age of social media. Princeton, NJ: Princeton University Press.

Swift, J. (2018). Dishonest acts: As more and more misinformation spreads online, can trust ever be restored? Editor and Publisher, 151(1), 44–49.

Timmer, J. (2017). Fighting falsity: Fake news, Facebook, and the first amendment. Cardozo Arts and Entertainment Law Journal, 35(3), 669–705.

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151.