Superintelligence
Superintelligence refers to a theoretical form of artificial intelligence (AI) that would possess cognitive abilities far surpassing those of the most intelligent humans. The concept has gained traction as AI and machine learning have advanced, raising important ethical questions about the technology's future implications. Central to discussions about superintelligence are theories like the singularity, which suggests that AI could rapidly evolve beyond human control, potentially leading to scenarios in which machines dominate their creators.
Current AI systems, often categorized as "weak" or "narrow" AI, excel at specific tasks but lack the generalized intelligence associated with superintelligence. The development of such advanced AI would require breakthroughs in computing algorithms and hardware, with some experts projecting this could occur within the next century. Theoretical models suggest that superintelligence might mimic human neural networks, enabling machines to think and process information similarly to humans.
However, the potential emergence of superintelligence raises critical questions about safety and control. Experts have differing views on whether it could pose a threat to humanity, with some arguing that it might see humans as a threat to its existence, while others believe safeguards could be implemented to ensure a cooperative relationship. Ultimately, the conversation around superintelligence is multifaceted, encompassing both technological possibilities and ethical dilemmas.
Superintelligence is a theoretical concept in psychology, philosophy, and technology that posits the possible future existence of artificial intelligence (AI) with cognitive capabilities that vastly exceed the limits of the human intellect. The notion of superintelligence has garnered increased attention from the scientific community in recent years as AI and machine learning have rapidly advanced. It has emerged as a key ethical consideration in AI circles, particularly because of its association with two conjectured outcomes of AI technology: the singularity and the runaway reaction model. The singularity theory posits that AI could cross a developmental threshold beyond which it would begin to evolve at a very rapid and uncontrollable pace. This could, in turn, lead to a runaway reaction in which AI reaches a level that allows it to dominate and subjugate its human creators.
![The Amazon Echo interfaces with virtual assistant Alexa. Piyush Maru [CC BY-SA 4.0 (creativecommons.org/licenses/by-sa/4.0)]](https://imageserver.ebscohost.com/img/embimages/ers/sp/embedded/rsspencyclopedia-20190201-201-174264.jpg?ephost1=dGJyMNHX8kSepq84xNvgOLCmsE2epq5Srqa4SK6WxWXS)
![Gordon Moore, businessman, semiconductor pioneer, founder of Intel Corporation, and author of Moore's Law. Science History Institute [CC BY-SA 3.0 (creativecommons.org/licenses/by-sa/3.0)]](https://imageserver.ebscohost.com/img/embimages/ers/sp/embedded/rsspencyclopedia-20190201-201-174403.jpg?ephost1=dGJyMNHX8kSepq84xNvgOLCmsE2epq5Srqa4SK6WxWXS)
Background
Conceptually, AI refers to the capability of a computer or a computer-controlled machine to mimic human cognition with regard to problem-solving, decision-making, reasoning, and perception. Theoretical models of AI development also acknowledge the possibility for machines to gain introspective forms of cognition such as self-reflection and self-correction. Both current and theorized forms of AI rely on computer algorithms and hardware technologies capable of storing and processing large volumes of data, that is, large amounts of digital information held in a given database.
While continued advancements in computing algorithms and hardware technologies have vastly boosted AI capabilities in the first decades of the twenty-first century, current types of machine learning are still often referred to as “weak AI,” “applied AI,” or “narrow AI.” These terms describe AI technologies with programmed confines that limit their capabilities to a relatively small range of highly specific tasks. Common examples include automated security and surveillance systems, so-called “smart” appliances, and digital virtual assistants like Apple’s Siri, Amazon’s Alexa, and Google Assistant. By the mid-2020s, weak AI technologies had become ubiquitous across many fields, appearing in customer service chatbots, in recommendation systems on sites like Netflix and Amazon, and in gaming, medical, and financial applications. While AI-powered machines can perform programmed tasks at a level that equals or exceeds human capabilities, they lack the capacity to evolve into what is known as “strong AI” or “true AI,” a term functionally synonymous with superintelligence.
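The task-specific character of narrow AI can be made concrete with a minimal sketch in Python. Everything below, including the catalog, the tags, and the use of Jaccard similarity, is invented for illustration; real recommendation systems rely on learned models and far richer data, but the point stands that such a system does exactly one job and nothing else.

```python
# A toy "narrow AI" recommender: it performs one specific task
# (suggesting titles similar to a viewing history) and nothing else.
# All data and names here are invented for illustration.

def jaccard(a: set, b: set) -> float:
    """Similarity of two sets: size of the overlap over size of the union."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Each title is described by a set of tags, a stand-in for the rich
# learned feature vectors a real recommendation system would use.
catalog = {
    "Space Saga":    {"sci-fi", "adventure", "series"},
    "Robot Dreams":  {"sci-fi", "drama"},
    "Bake-Off Live": {"reality", "cooking"},
}

def recommend(watched: set, top_n: int = 1) -> list:
    """Rank unwatched titles by tag overlap with the user's history."""
    history_tags = set().union(*(catalog[t] for t in watched))
    candidates = [t for t in catalog if t not in watched]
    candidates.sort(key=lambda t: jaccard(catalog[t], history_tags),
                    reverse=True)
    return candidates[:top_n]

print(recommend({"Space Saga"}))  # ['Robot Dreams']
```

However capable at this one task, the system has no mechanism for doing anything outside it, which is precisely the "programmed confine" the terms above describe.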
Though superintelligence remains the domain of science fiction, some experts believe it could theoretically become possible by the first half of the twenty-second century. Achieving it would require highly advanced computing algorithms and data-processing capabilities that far exceed current technological limits. Notably, the future development of strong AI would also demand deliberate effort on the part of scientists to create it.
Unlike weak AI, strong AI would be a generalized form of artificial intelligence purposely built to imitate human consciousness using a combination of computer hardware and software. Experts note that computer hardware could theoretically deliver superior cognitive functionality compared to the human brain; a computer can store more information and process it more quickly than a human, for example. Over the long term, machine superintelligence would also need to master self-replication, though some observers believe this would be a relatively easy obstacle for strong AI to overcome.
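The storage-and-speed comparison can be illustrated with a back-of-envelope estimate. The figures below are commonly cited order-of-magnitude values only, and equating synaptic events with floating-point operations is a loose analogy at best; treat this as a rough sketch under stated assumptions, not an established equivalence.

```python
# Back-of-envelope comparison of brain and hardware throughput, using
# commonly cited order-of-magnitude estimates. The true figures are
# debated; these numbers are assumptions for illustration only.

NEURONS            = 8.6e10  # ~86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 1e3    # low-end estimate of synapses per neuron
MAX_FIRING_HZ       = 1e2    # neurons fire at most ~100 times per second

brain_events_per_sec = NEURONS * SYNAPSES_PER_NEURON * MAX_FIRING_HZ

GPU_FLOPS = 1e14  # assumed order of magnitude for a modern accelerator

print(f"Brain (rough estimate): {brain_events_per_sec:.1e} synaptic events/s")
print(f"Single accelerator:     {GPU_FLOPS:.1e} floating-point ops/s")
```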
Overview
Prevailing models of superintelligence posit that computers and machines could one day possess computing frameworks with structures similar to the neural networks found in the human brain. Scientists are already investigating such architectures, noting that human neural networks essentially consist of a series of inputs drawn from sensory perception and outputs that produce thoughts, ideas, and conclusions. Continued refinement of this model and the algorithms that guide it, along with technological advances that increase computing power, form the theoretical basis for machine superintelligence. The current consensus among experts is that, given the current rate of technological advancement, the hardware and software technologies required to build strong AI will almost certainly exist one day. The main question is not whether superintelligence might become possible but whether humans will endeavor to build it.
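The input-to-output picture described above can be sketched in a few lines of Python. The layer sizes, random weights, and tanh nonlinearity below are arbitrary choices made for illustration; a real system would learn its weights from data rather than drawing them at random.

```python
import numpy as np

# A minimal feedforward neural-network sketch: "sensory" inputs flow
# through weighted connections to produce outputs, loosely mirroring
# the inputs-to-conclusions description above.

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One layer: a weighted sum of inputs followed by a nonlinearity."""
    return np.tanh(x @ w + b)

# 4 "sensory" inputs -> 8 hidden units -> 2 "conclusion" outputs.
# Weights are random here; training would adjust them.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

sensory_input = rng.normal(size=4)     # stand-in for perception
hidden = layer(sensory_input, w1, b1)  # intermediate representation
output = layer(hidden, w2, b2)         # the network's "conclusion"
print(output)
```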
To meet the definitional requirements of superintelligence, a machine would have to possess a general intelligence that vastly exceeds the capabilities of the smartest human mind. Such an intelligence could manifest in many ways: as a single computer or robotic entity with vast computing power, a group of devices connected to form a network, synthetic constructions designed to resemble organic brain tissue, or some other as-yet-unconceived form.
A concept known as Moore’s law provides a potential timeline for the development of machine superintelligence, assuming scientists continue to pursue it. Named after technology entrepreneur Gordon Moore, who first proposed the model in 1965, Moore’s law holds that the number of transistors that fit on a computer chip, and with it the chip’s processing power, doubles roughly every two years. A widely cited variant of the law puts the doubling time for overall processing power at eighteen months. However, Moore’s law remains subject to cost limitations and other practical hindrances that suggest it cannot hold indefinitely. Philosopher Nick Bostrom, who is considered one of the world’s preeminent experts on the potential dangers of superintelligence, has stated that Moore’s law would have to hold true for at least another century for computing power to match the capabilities of the neural networks of the human brain.
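Bostrom's century-long requirement can be restated as simple compound-growth arithmetic. The sketch below compounds the two doubling periods mentioned above over one hundred years; the doubling times come from the text, and the output is an illustration of exponential growth, not a forecast.

```python
# How much processing power grows if Moore's-law-style doubling holds
# for a century, using the two doubling periods cited above.

def growth_factor(years: float, doubling_years: float) -> float:
    """Total multiplicative growth after repeated doublings."""
    return 2 ** (years / doubling_years)

for doubling in (2.0, 1.5):  # every two years vs. every eighteen months
    factor = growth_factor(100, doubling)
    print(f"Doubling every {doubling} years for a century: "
          f"~{factor:.1e}x increase")

# Doubling every 2.0 years for a century: ~1.1e+15x increase
# Doubling every 1.5 years for a century: ~1.2e+20x increase
```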
Even if superintelligence were developed before its human creators found a way to guarantee its safety, expert opinion is divided as to whether it would evolve to endanger human beings. Some believe that once strong AI crosses the singularity and a runaway reaction occurs, it would inevitably come to consider human beings the primary threat to its continued existence; this theory holds that the AI would then seek to dominate, subdue, or even destroy humans. Others think such a possibility could be neutralized by programming strong AI to recognize limits or objectives that guarantee its continued cooperation and harmonious coexistence with humans. A related viewpoint holds that the amount of electrical energy and physical machinery required for a unified group of strong AI machines to mount a credible existential threat to human beings would be infeasibly large.
Bibliography
Agar, Nicholas. “Don’t Worry About Superintelligence.” Journal of Evolution and Technology, vol. 26, no. 1, Feb. 2016, pp. 73–82.
Baum, Seth. “Countering Superintelligence Misinformation.” Global Catastrophic Risk Institute, 1 Oct. 2018, gcrinstitute.org/countering-superintelligence-misinformation. Accessed 4 Feb. 2025.
Bostrom, Nick. “How Long Before Superintelligence?” Linguistic and Philosophical Investigations, vol. 5, no. 1, 2006, pp. 11–30.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2016.
Felten, Ed. “Multiple Intelligences, and Superintelligence.” Freedom to Tinker, 4 May 2017, freedom-to-tinker.com/2017/05/04/multiple-intelligences-and-superintelligence. Accessed 4 Feb. 2025.
Ray, Amit. Compassionate Superintelligence AI 5.0. Inner Light Publishers, 2018.
Schneider, Susan. Science Fiction and Philosophy: From Time Travel to Superintelligence. 2nd ed., Wiley-Blackwell, 2016.
Snyder-Beattie, Andrew, and Daniel Dewey. “Explainer: What Is Superintelligence?” The Conversation, 18 July 2014, theconversation.com/explainer-what-is-superintelligence-29175. Accessed 4 Feb. 2025.
“Understanding the Different Types of Artificial Intelligence.” IBM, 12 Oct. 2023, www.ibm.com/think/topics/artificial-intelligence-types. Accessed 4 Feb. 2025.