Deepfake
Deepfake technology represents a significant advancement in artificial intelligence, enabling users to create highly realistic, fabricated audio, images, and videos. This technology primarily utilizes generative adversarial networks (GANs), which involve two neural networks working against each other to produce convincing synthetic content. The term "deepfake" combines "deep learning," the class of AI involved, with "fake," reflecting the technology's dual nature of innovation and deception. Since its inception around 2014, deepfake technology has rapidly gained accessibility, leading to a dramatic increase in the number of deepfake videos online, many of which are of a pornographic nature.
Concerns have emerged regarding the potential misuse of deepfake technology for cyberbullying, misinformation, and political manipulation, particularly in the context of elections. Many experts warn that deepfakes could become key tools in disinformation campaigns, affecting public perception and voter behavior. Notably, legislative efforts have begun to address the challenges posed by deepfakes, especially regarding nonconsensual explicit images, particularly those involving minors. As governments and cybersecurity entities race to develop detection technologies, the ongoing debate about regulation and accountability continues to unfold. With the rise of deepfakes, society faces complex ethical and legal dilemmas about privacy, consent, and the integrity of information.
The term deepfake refers to emerging technology that allows computer users to create fabricated but highly convincing sounds, static images, and moving pictures. Deepfake technologies are assisted by advanced artificial intelligence (AI), mostly from a class of AI known as generative adversarial networks (GANs). Using sophisticated algorithms, GANs manipulate user-supplied input to generate sounds, images, and videos, resulting in strikingly realistic simulated content. The word "deepfake" is a blend of "deep learning," the type of advanced AI involved, and "fake."
As the underlying technologies continue to advance and improve, experts have voiced concerns that deepfake technology heralds the impending arrival of a dangerous virtual landscape. Its potential criminal and political applications are particularly worrisome to many observers. The consensus among experts is that deepfake technology is likely to introduce unprecedented complications to the problem of “fake news” through artificially manufactured but believable video clips featuring politicians and other high-profile public figures.
![Computer scientist and machine learning expert Ian Goodfellow. Ian Goodfellow [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)] rsspencyclopedia-20191011-13-176439.jpg](https://imageserver.ebscohost.com/img/embimages/ers/sp/embedded/rsspencyclopedia-20191011-13-176439.jpg?ephost1=dGJyMNHX8kSepq84xNvgOLCmsE2epq5Srqa4SK6WxWXS)
![Nancy Pelosi, Speaker of the House of Representatives, is one of many politicians victimized by altered videos that were shared virally. United States House of Representatives [Public domain] rsspencyclopedia-20191011-13-176465.jpg](https://imageserver.ebscohost.com/img/embimages/ers/sp/embedded/rsspencyclopedia-20191011-13-176465.jpg?ephost1=dGJyMNHX8kSepq84xNvgOLCmsE2epq5Srqa4SK6WxWXS)
Background
The invention of deepfake technology is generally credited to Ian Goodfellow, a machine-learning expert who created his first GAN-powered deepfakes in 2014 while he was a PhD student at the University of Montreal. Goodfellow went on to work as a research scientist at Google before joining Apple in March 2019, where he accepted a role as the company’s director of machine learning. He was named to the Massachusetts Institute of Technology’s “35 Innovators Under 35” in 2017 and ranked among Foreign Policy magazine’s “100 Global Thinkers” in 2019.
Deepfake technology relies on algorithms, which are systematized sequences of programming instructions that tell a computer how to handle a complex task. In particular, it uses advanced AI-powered GAN processes that push the limits of conventional algorithms. Most algorithms are focused on simply sorting or classifying data, while GANs use multiple algorithms that try to "trick" each other into categorizing a manufactured sound, image, or video as real. Specifically, a GAN pits two networks against each other in the roles of "generator" and "discriminator": the generator draws on user input to manufacture a fake sound clip, image, or video clip, while the discriminator compares the simulated content against the authentic input. GANs are capable of testing simulated content against millions of evaluative parameters quickly, allowing users to generate fake sounds, images, and videos realistic enough to convince viewers of their authenticity.
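The generator-versus-discriminator dynamic can be sketched in a few lines of code. The toy example below is purely illustrative, assuming a hypothetical one-parameter generator and a logistic discriminator on one-dimensional data rather than any real deepfake system: the discriminator learns to score authentic samples higher than fakes, while the generator learns to shift its output until the discriminator can no longer tell the two apart.

```python
import numpy as np

rng = np.random.default_rng(0)

REAL_MEAN = 2.0  # the "authentic input": samples drawn from N(2, 1)

def discriminate(x, w):
    """Logistic score: estimated probability that x is authentic."""
    return 1.0 / (1.0 + np.exp(-w * x))

def generate(z, b):
    """Shift random noise toward the authentic distribution."""
    return z + b

w, b, lr = 0.1, 0.0, 0.05
for _ in range(2000):
    real = rng.normal(REAL_MEAN, 1.0, 32)
    fake = generate(rng.normal(0.0, 1.0, 32), b)

    # Discriminator step: increase log D(real) + log(1 - D(fake)),
    # i.e., label authentic input 1 and simulated content 0.
    grad_w = np.mean((1 - discriminate(real, w)) * real) \
             - np.mean(discriminate(fake, w) * fake)
    w += lr * grad_w

    # Generator step: increase log D(fake), i.e., "trick" the
    # discriminator into scoring fakes as authentic.
    fake = generate(rng.normal(0.0, 1.0, 32), b)
    grad_b = np.mean(w * (1 - discriminate(fake, w)))
    b += lr * grad_b

# After training, generated samples cluster near the authentic mean.
print(f"generator shift: {b:.2f} (authentic mean: {REAL_MEAN})")
```

Real deepfake systems apply this same adversarial loop to deep convolutional networks with millions of parameters operating on images, audio, or video frames rather than a single scalar.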
One definitive aspect of deepfake technology is that it does not require much initial input to create a believable result. A Popular Mechanics article from August 2019 noted that GANs only need a few images to generate output that appears genuine to the untrained eye.
Deepfake software is openly available for download on the Internet, enabling any user with the requisite computer skills to use it to produce fake audio, image, and video content. According to a BBC News report published in October 2019, which drew on data supplied by the cybersecurity firm Deeptrace, approximately 15,000 deepfake videos were online at that time, marking a sharp rise from the nearly 8,000 the firm counted in December 2018. Observers and analysts believe that amateur computing hobbyists are responsible for a large majority of existing deepfake content and emphasize that deepfake production is a worldwide phenomenon. The BBC report also noted that Deeptrace’s analysis found 96 percent of deepfakes to be pornographic in nature, with most simulated videos superimposing the likenesses of famous actresses onto the bodies of adult performers.
Deeptrace’s report also addressed claims that deepfake videos were used in recent political campaigns in Malaysia and Gabon. According to the firm, allegations that deepfake videos were used to influence voters in both countries did not withstand scrutiny and could be dismissed as false. Deeptrace did note that deepfake videos have the potential to be weaponized for political purposes. However, as of the report’s October 2019 publication date, the firm’s expert analysts believed the most pressing current threat came from the technology’s potential misuse as a cybercriminal and cyberbullying tactic.
E-commerce entrepreneurs moved quickly in their bid to monetize deepfake technology. According to the Deeptrace review, the four top-ranking adult websites featuring deepfake videos generated approximately 134 million views between February 2018 and the report’s finalization in the autumn of 2019. Software developers have also created mobile apps that allow smartphone users to create deepfakes, with the website of one such app attracting a massive spike in traffic after media sources published unfavorable reports about it. The website’s owners voluntarily took down the site and discontinued distribution of the app in the controversy’s immediate aftermath.
A different analysis, conducted in February 2019 by a collaborative group known as Witness Media Lab, reported that current deepfake technologies require a significant level of specialized knowledge to use effectively. However, Witness Media Lab researchers also stated that the end-user landscape was changing quickly, with increasingly advanced deepfake technologies requiring less and less user skill. Witness Media Lab’s conclusions matched the Deeptrace analysis: both organizations agreed that the production of simulated, personalized, and highly sophisticated fake content represents a pressing threat to individual users, particularly girls and women, who face the risk of having their likenesses superimposed on explicit adult videos for the purposes of phishing, extortion, harassment, and cyberbullying.
Deepfakes Today
According to Home Security Heroes, an online security research group, between 2019 and 2023, the number of deepfake videos circulating online had increased 550 percent, to a total of just over 95,800 in 2023. Of these, 98 percent were pornographic. Though members of Congress had introduced several anti-AI pornography bills by 2024, none of them had made it out of committee.
Between early 2023 and 2024, about two dozen states introduced legislation to prevent the creation and circulation of nonconsensual sexually explicit deepfake images, also called deepfake nude images, of individuals under the age of eighteen, which started to proliferate in schools in the 2020s as AI apps made it easier to create and distribute such images on a large scale. South Dakota, Louisiana, and Washington all enacted laws that criminalize the possession, production, and/or distribution of deepfake nude images. Lawmakers and experts on child protection urged passage of deepfake nude laws, citing the permanent damage victims may suffer as well as the fact that existing legislation did not specifically ban sexually exploitative AI apps or the deepfake images that incorporate identifiable images of real people. Debate arose over what such bills should cover, who should be held responsible, and whether offenders would face civil liability, criminal charges, or both. Some legislation sought to allow victims to sue individual perpetrators, while those who advocated for survivors of sexual assault argued that AI nudification app developers should be held liable. Other experts suggested that deepfake nude images of real teenagers may not be considered child sexual abuse material in all jurisdictions unless the images can be legally proven to depict sexually explicit conduct or a lewd depiction of an individual's genitalia.
With authorities on AI agreeing that the technology poses new dangers in the virtual environment, many experts believe that it is only a matter of time before deepfakes become a central part of political disinformation campaigns and cyberwarfare. Some experts believe that deepfakes will almost certainly be weaponized during the 2024 US presidential election cycle in a bid to influence voters. In June 2023, the presidential campaign of Republican candidate Ron DeSantis, the governor of Florida, released a political attack ad that included AI-generated images of the leading candidate, former president Donald Trump, hugging Dr. Anthony Fauci, who had become a target of anti-vaxxers during the COVID-19 pandemic.
US government agencies and other cybersecurity stakeholders were actively working to develop technology capable of recognizing deepfakes in what some observers have described as a kind of “virtual arms race” meant to limit or prevent deepfakes from exerting a disruptive or damaging influence on society. By 2024, six states had laws that sought to prevent deepfake election interference, while legislation in seven other states had stalled. Some state legislation required synthetic media disclosures for AI-generated political images, while other legislation banned such images, or combined a ban with an exemption for images that included a disclosure.
In February 2024, the Federal Communications Commission unanimously adopted a declaratory ruling that AI-generated voices were "artificial" and therefore illegal to use in robocalls under the Telephone Consumer Protection Act. The ruling was in response to an increase in scamming incidents involving AI voice cloning. Scammers used voice cloning to impersonate victims' loved ones, celebrities, or politicians, to ask for money, charity donations, personal information, or votes.
Bibliography
Cellan-Jones, Rory. “Deepfake Videos ‘Double In Nine Months.’” BBC News, 7 Oct. 2019, www.bbc.com/news/technology-49961089. Accessed 6 Nov. 2019.
Chen, Angela. “Three Threats Posed by Deepfakes that Technology Won’t Solve.” MIT Technology Review, 2 Oct. 2019, www.technologyreview.com/s/614446/deepfake-technology-detection-disinformation-harassment-revenge-porn-law/. Accessed 6 Nov. 2019.
"Deep-Fake Audio and Video Links Make Robocalls and Scam Texts Harder to Spot." Federal Communications Commission, 8 Feb. 2024, www.fcc.gov/consumers/guides/deep-fake-audio-and-video-links-make-robocalls-and-scam-texts-harder-spot. Accessed 20 May 2024.
Fitzgerald, Madyson. "States Race to Restrict Deepfake Porn as It Becomes Easier to Create." Stateline, 10 Apr. 2024, stateline.org/2024/04/10/states-race-to-restrict-deepfake-porn-as-it-becomes-easier-to-create/. Accessed 20 May 2024.
Gregory, Sam, and Eric French. “How Do We Work Together to Detect AI-Manipulated Media?” Witness Media Lab, 2019, lab.witness.org/projects/osint-digital-forensics/. Accessed 6 Nov. 2019.
Libby, Kristina. “This Bill Hader Deepfake Video Is Amazing. It’s Also Terrifying for Our Future.” Popular Mechanics, 13 Aug. 2019, www.popularmechanics.com/technology/security/a28691128/deepfake-technology/. Accessed 6 Nov. 2019.
Porup, J.M. “How and Why Deepfake Videos Work—And What Is At Risk.” CSO, 10 Apr. 2019, www.csoonline.com/article/3293002/deepfake-videos-how-and-why-they-work.html. Accessed 6 Nov. 2019.
Shao, Grace. “What ‘Deepfakes’ Are and How They May Be Dangerous.” CNBC, 13 Oct. 2019, www.cnbc.com/2019/10/14/what-is-deepfake-and-how-it-might-be-dangerous.html. Accessed 6 Nov. 2019.
Simonite, Tom. “Prepare for the Deepfake Era of Web Video.” Wired, 6 Oct. 2019, www.wired.com/story/prepare-deepfake-era-web-video/. Accessed 6 Nov. 2019.
Singer, Natasha. "Spurred by Teen Girls, States Move to Ban Deepfake Nudes." The New York Times, 22 Apr. 2024, www.nytimes.com/2024/04/22/technology/deepfake-ai-nudes-high-school-laws.html. Accessed 20 May 2024.
“What Is a Deepfake?” The Economist, 7 Aug. 2019, www.economist.com/the-economist-explains/2019/08/07/what-is-a-deepfake. Accessed 6 Nov. 2019.