Artificial Intelligence Cold War
The "Artificial Intelligence Cold War" refers to the competitive landscape among nations, primarily the United States, China, and Russia, as they develop artificial intelligence (AI) technologies for military applications. The term draws a parallel to the historical Cold War, which was characterized by an arms race aimed at deterring attacks through the threat of mutual destruction. In this context, nations are increasingly leveraging AI for potential cyberattacks, disinformation campaigns, and disruptions to critical infrastructure, and the stakes are comparably high. While discussion of an AI arms race has intensified, evidence of an active military AI race remains largely speculative.
Notably, the U.S. established the National Security Commission on Artificial Intelligence (NSCAI) to address national security implications, highlighting concerns about being unprepared for competition with China. Meanwhile, China's aggressive investment in AI, fueled by national plans to become a global leader by 2030, raises alarms about its growing influence and control over technology. Russia, although facing challenges in technological development, excels in disinformation tactics and cyber warfare.
The implications of this emerging cold war extend beyond military engagements, affecting global alliances, data privacy, and ethical considerations in AI deployment. As tensions rise, the possibility of AI-driven asymmetric warfare emerges, where conflicts may become lopsided due to the advanced capabilities of one party. This evolving landscape necessitates careful scrutiny of AI's role in warfare and its broader societal impacts.
Overview
The artificial intelligence cold war refers to rivalry among nations developing artificial intelligence (AI) capabilities that could be used to attack and cripple one another. The term cold war references the Cold War between the United States and the Soviet Union following World War II, an era when both countries engaged in an arms race as a means of deterring the other from attacking. Experts suggest an AI arms race, or the development of military uses of AI by the United States, China, and Russia, would amount to a second Cold War. Potential attacks could involve disinformation campaigns or sow chaos by targeting critical infrastructure, satellite security, or software supply chains. While a great deal of discussion and speculation about an AI cold war has occurred, no conclusive evidence shows that an AI military race is underway.
Although AI warfare is generally viewed as a future likelihood, the United Nations (UN) reported that in 2020 an autonomous drone, or lethal autonomous weapons system (LAWS), was used in combat in Libya. The organization could not determine whether it was used to kill.
In the 2020s, most scrutiny and discussion of an AI cold war focused on the United States and China; many analysts discounted Russia's likelihood of excelling in technological development. This assessment stemmed in part from sanctions levied against Russia after it annexed Crimea in 2014. Other factors included demographic analyses showing Russia's population declining and a brain drain as educated Russians pursued careers offering greater opportunity and pay abroad. Moscow devoted only a fraction of the technology funding that China and the United States allocated, but Russia excelled in disinformation campaigns, and AI had the potential to boost that capacity.
The US Congress created the National Security Commission on Artificial Intelligence (NSCAI) in 2018 to make recommendations about developing AI and related technologies in the interests of national security and defense. The independent commission's March 2021 report concluded that the country was unprepared for attack by, or competition with, China and recommended that AI technologies be integrated into all areas of combat. This stance contrasted with discussions across the Atlantic, where, at about the same time, European authorities focused on legal guidelines to ensure that AI use was ethical and secure. The European Parliament advised nations to ensure that military systems do not substitute AI for human decision-making. Thousands of AI and robotics researchers strenuously opposed permitting AI to decide when to kill.


Cold War Theory
The primary focus of the first Cold War, which began in 1947 and ended in 1991 with the breakup of the Soviet Union, was the buildup of defenses and weaponry. This buildup rested on the principle of mutual assured destruction (MAD): if one superpower launched a nuclear attack on another, the aggressor would be destroyed by a nuclear counterattack. In short, both would be destroyed. This principle was believed to deter attacks. The United States built tens of thousands of nuclear weapons, and long after the Cold War ended, in the 2020s, the country continued to spend $70 billion a year to maintain them. A second cold war likely would involve cyberattacks on critical infrastructure, disinformation campaigns, satellite attacks, and threats to software supply chains.
Much discussion about an AI cold war dates to the late 2010s. The US government published several reports about AI, laying out the technology's benefits and risks and recommending government responses to advances in the private sector. Among the recommendations were suggestions to invest in machine learning development and to explore ways to mitigate the job losses that increased automation would cause. In China, science and technology policy advisers had for some time been developing a national plan for AI. They viewed these US papers, produced under the administration of President Barack Obama, as an indication that the United States was developing an AI strategy, and they increased their efforts.
Chinese public interest in AI surged in early 2016 when an AI system faced off against a world champion Go player in Seoul, South Korea, and won. Go is a board game created in China more than three thousand years ago but little known in the West. The AI, AlphaGo, was developed by DeepMind, a division of the American tech company Alphabet, which also owns Google. About 280 million people in China watched the match and were struck by the implications: an AI from a country where few people play Go had defeated a South Korean champion. The stakes rose in early 2017 when AlphaGo defeated a Chinese Go master at the Future of Go Summit in China.
Several weeks after the summit, China's central government published its Next Generation Artificial Intelligence Development Plan, which laid out the government's intention to become a world leader in AI by 2030. Branches of the central government and local governments around the country followed with complementary plans based on Beijing's stated goals. To reach these goals, the government enlisted Chinese tech companies. Alibaba, an online retailer comparable to Amazon, was tasked with developing AI for a new Special Economic Zone. Alibaba had already been collecting data from street cameras in the city of Hangzhou and using AI to control signal lights so traffic flowed as efficiently as possible; the company set to work designing AI into the infrastructure of the new Special Economic Zone city. In October 2017, President Xi Jinping publicly stated his plans for the future of the Communist Party, with AI, big data, and the Internet at the heart of his vision for making China an even bigger player in the world economy.
Meanwhile, the US government under President Donald Trump gave scant attention to AI technology and moved the earlier AI reports to an archived website. In March 2017, Treasury Secretary Steven Mnuchin said it would be fifty to one hundred years before humans lost jobs to AI. However, the Pentagon pressured the administration to fund a government commission to study AI, and the possibility of an AI cold war arms race soon emerged in international discussions.
AI development and application are frequently tied to power. Social media has shown that those who control technology wield tremendous power over what information is shared and how. It has also demonstrated the value of data collection and how data can be used to manipulate and control individuals and populations.

China's Police Cloud System, for example, monitors seven categories of people for the government. It tracks the movements of individuals, including activists and ethnic minorities, in public areas and is designed to predict their actions. The government can integrate vast amounts of personal data with this surveillance system, such as whether individuals have stayed at the same hotel, and can also access medical records, grocery orders, and academic records. The system can flag patterns that police might not recognize as unusual, such as a person frequently visiting a hotel near their home. China uses this information to evaluate individuals and potentially limit their movements: the government issues social credit scores to citizens and can use them to withhold services such as high-speed Internet or to prevent people from booking flights. While US laws limit government access to personal information, this is not the case in China or Russia. Vladimir Putin, the president of Russia, has said that whoever controls AI will control the world, and under his leadership Russia has worked to develop cyberwarfare and disinformation methods.
Some consequences of the AI cold war have already emerged. As during the twentieth-century Cold War, countries that lack their own technology have begun to choose sides, and their choices of project partners signal their alignment. Pakistan, for example, partnered with Chinese companies to install a fiber-optic cable between China and Pakistan and to install surveillance cameras in cities in the name of public safety. China benefits twice: it is paid for these systems and gains access to troves of data.
Experts say recent incidents highlight the importance of vigilance and continued research. Adversaries have already probed and exposed weaknesses in US infrastructure and security; vulnerable targets include the power grid and water supplies. Analysts say some types of attack, such as hacking satellites, could qualify as acts of war. Control of satellites could enable threat actors to scramble geospatial data or sabotage systems such as air traffic control, banking, cloud storage, and power grids. Experts warn that these and similar acts could destabilize the geopolitical order and create conditions similar to those that preceded the First and Second World Wars.
Digital disinformation campaigns have become the tool of choice for actors wishing to spread propaganda. AI can alter images and videos to create so-called deepfakes. When Russia attacked Ukraine in early 2022, some actors produced deepfake videos that attempted to persuade Ukrainian troops to surrender. Individuals often have trouble recognizing these as fake, and experts say deepfakes may in time become nearly impossible to distinguish from genuine videos and images.
Further Insights
Computers learn to make independent decisions using AI: a machine must learn rules and be exposed to data, such as ways to strategize in chess. AI is a machine's ability to learn, plan, reason, and be creative. The two types of AI are software and embodied. AI software includes facial recognition systems, search engines, and virtual assistants; for example, AI offers personalized suggestions based on an individual's previous searches. Embodied AI includes autonomous vehicles, drones, robots, and Internet-connected household appliances. Countries that invest in AI technologies become more efficient and therefore more prosperous. For this and other reasons, the world's superpowers have pursued superiority in AI development.
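The learning loop described above, in which a machine is exposed to labeled data and then makes its own decisions about new inputs, can be sketched with a minimal nearest-neighbor classifier. This is a simplified illustration with hypothetical features and labels, not a depiction of any specific military or commercial system:

```python
from math import dist

# Toy training data the "machine" is exposed to: (feature vector, label) pairs.
# Features are hypothetical: (hours of strategy study, games played per week).
training_data = [
    ((1.0, 2.0), "novice"),
    ((1.5, 1.8), "novice"),
    ((8.0, 9.0), "expert"),
    ((9.0, 8.5), "expert"),
]

def classify(point):
    """Decide a label for a new point by copying the nearest training example."""
    nearest = min(training_data, key=lambda example: dist(example[0], point))
    return nearest[1]

# The system was never told an explicit rule for "novice" vs. "expert";
# it generalizes from the examples it has seen.
print(classify((1.2, 2.1)))  # near the novice cluster
print(classify((8.5, 9.2)))  # near the expert cluster
```

Real AI systems replace this single distance comparison with models trained on vast datasets, but the principle is the same: decisions emerge from exposure to data rather than from hand-written rules.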
Military systems and law enforcement have used some AI applications for some time. AI facial recognition software, for example, is used for surveillance in some cities and countries; China uses such software and has sold the technology to other countries. This use of AI to surveil the general population is often viewed as intrusive and a violation of human rights. The US military, like those of other countries, has used AI in a variety of roles for years. The public learned in 2017 about the US military's Project Maven, an object recognition program, and robots have been added to security patrols at military facilities. Analysts expect near-future uses of autonomous systems to focus on reconnaissance work to avoid endangering troops. The US Department of Defense has long used AI to analyze footage collected by drones and has moved on to doing the same with satellite images. In early 2022, the department used AI to analyze publicly available imagery to aid Ukraine after Russia attacked the country. It simultaneously studied the performance of that analysis to refine the software and was developing new modeling systems to anticipate what future warfare will look like and how it will progress. The poor showing of Russia's military and the resulting sanctions suggested its AI development would slow, so officials primarily examined China's potential.
Some AI applications in warfare will likely involve rapid analysis of data to discover an opponent's military strengths, discern weak points, and track and predict troop movements. AI could help determine the best times and means to attack, or when to withdraw, and could analyze movements to assess, for example, whether a truck might be attacking or is likely to contain explosives. These applications are expected to lead to fully autonomous fighting systems. Some analysts suggest that AI systems could also be programmed to use only necessary force, which could shield many civilians from violence.
Analysts predict that future conflicts may be unbalanced because of AI capabilities. Asymmetric warfare is conflict between forces whose military power is so unequal that they cannot attack one another in the same way. Many countries have experienced asymmetric warfare, typically involving a conventional army fighting a guerrilla force; suicide bombings and other forms of terrorism are also examples. In AI-driven asymmetric warfare (ADAW), one party would have at its disposal vastly superior, rapid intelligence gathering and analysis.
Bibliography
Bendett, Samuel. “Russia’s Artificial Intelligence Boom May Not Survive the War.” Center for a New American Security, 15 Apr. 2022, www.cnas.org/publications/commentary/russias-artificial-intelligence-boom-may-not-survive-the-war. Accessed 6 June 2022.
Heath, Ryan. “Artificial Intelligence Cold War on the Horizon.” Politico, 16 Oct. 2020, www.politico.com/news/2020/10/16/artificial-intelligence-cold-war-on-the-horizon-429714. Accessed 6 June 2022.
Manson, K. “US Has Already Lost AI Fight to China, Says Ex-Pentagon Software Chief.” Financial Times, 10 Oct. 2021, www.ft.com/content/f939db9a-40af-4bd1-b67d-10492535f8e0. Accessed 6 June 2022.
Piper, Steve. “Four Critical Risks to Watch as Experts Predict a Cyber Cold War.” Forbes, 23 May 2022, www.forbes.com/sites/forbestechcouncil/2022/05/23/four-critical-risks-to-watch-as-experts-predict-a-cyber-cold-war/. Accessed 6 June 2022.
Polyakova, Alina. “Weapons of the Weak: Russia and AI-Driven Asymmetric Warfare.” Brookings Institution, 15 Nov. 2018, www.brookings.edu/research/weapons-of-the-weak-russia-and-ai-driven-asymmetric-warfare/. Accessed 6 June 2022.
Raska, Michael, Katarzyna Zysk, and Ian Bowers, editors. The Fourth Industrial Revolution: Security Challenges, Emerging Technologies, and Military Implications. Routledge, 2022.
Thompson, Nicholas, and Ian Bremmer. “The AI Cold War That Threatens Us All.” Wired, 23 Oct. 2018, www.wired.com/story/ai-cold-war-china-could-doom-us-all/. Accessed 6 June 2022.
Tucker, Patrick. “AI Is Already Learning from Russia’s War in Ukraine, DOD Says.” Defense One, 21 Apr. 2022, www.defenseone.com/technology/2022/04/ai-already-learning-russias-war-ukraine-dod-says/365978/. Accessed 6 June 2022.