Technological singularity

The technological singularity is the theoretical concept that the accelerating growth of technology will one day overwhelm human civilization. Adherents of the idea believe that the rapid advancements in artificial intelligence in the twenty-first century will eventually result in humans either merging with technology or being replaced by it. Variations of the technological singularity include the development of computers that surpass human intelligence, a computer that becomes self-aware and can program itself, and the physical merger of biological and machine life. Skeptics argue that creating machine intelligence at such a level is unlikely or impossible, as is endowing a machine with true consciousness. The concept was first touched upon in the 1950s and later applied to computers in the 1990s. The term singularity originated in the field of astrophysics, where it refers to the region at the center of a black hole where gravitational forces become infinite.


Background

Computers are electronic machines that perform various functions, depending on the programming they receive. In most cases, even highly advanced systems are dependent on the instructions they receive from humans. Artificial intelligence is a branch of computer engineering that seeks to program computers with the ability to simulate human intelligence. In this context, intelligence is defined as the ability to learn by acquiring information, reasoning, and self-correction.

The term artificial intelligence (AI) was first used in the 1950s and can refer to everything from automated computer operations to robotics. AI is generally divided into two categories. Weak AI is a program designed to perform a particular task. Automated personal assistants such as Amazon's Alexa or Apple's Siri are examples of weak AI. These devices recognize a user's commands and carry out their functions.

Strong AI is a system that displays general cognitive abilities, such as working out its own solution to unfamiliar tasks. To determine whether a machine can think like a human, scientists employ the Turing Test, a method developed in the 1950s by British mathematician Alan Turing. In the basic test, a computer and a human receive specific questions to answer; if the computer can mimic a human response enough to fool a human questioner at least half the time, it passes the test.
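The pass criterion described above can be made concrete with a toy sketch of the imitation game. Everything here is illustrative: the responder and judge functions are hypothetical stand-ins, and the "fooled at least half the time" threshold follows the description in this article rather than Turing's original paper.

```python
import random

def imitation_game(judge, human, machine, questions, trials=100):
    """Toy version of the Turing test's imitation game.

    Each trial, the judge sees two answers to the same question, one
    from the human and one from the machine, in random order, and
    guesses which position (0 or 1) holds the machine. The machine
    passes if the judge is fooled on at least half of the trials.
    """
    fooled = 0
    for _ in range(trials):
        question = random.choice(questions)
        pair = [("human", human(question)), ("machine", machine(question))]
        random.shuffle(pair)
        answers = [text for _, text in pair]
        guess = judge(question, answers)  # index the judge believes is the machine
        if pair[guess][0] != "machine":
            fooled += 1
    return fooled / trials >= 0.5

# Hypothetical responders: this machine gives itself away with canned text.
human_reply = lambda q: "Hmm, let me think about that."
machine_reply = lambda q: "QUERY RECEIVED. PROCESSING."

# A judge that recognizes the canned reply is never fooled, so the machine fails.
sharp_judge = lambda q, answers: answers.index("QUERY RECEIVED. PROCESSING.")
print(imitation_game(sharp_judge, human_reply, machine_reply, ["Any question"]))  # False
```

A machine that produced answers indistinguishable from the human's would leave the judge guessing at chance, fooling it roughly half the time and passing under this criterion.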

Other scientists categorize AI systems into four types, ranging from strategizing computers to self-aware machines. Reactive machines can identify and analyze a situation and decide on their own actions. For example, in 1997, the IBM computer Deep Blue was able to defeat chess grandmaster Garry Kasparov by analyzing his moves and employing a sound strategy. Limited memory AI systems examine past experiences to formulate future decisions. Theory of mind AI is a computer that has developed its own beliefs and desires. Self-aware AI is a machine that has developed its own consciousness.

In 2024, an article in IEEE Spectrum reported on a study claiming that the large language models (LLMs) that power OpenAI's ChatGPT could perform theory-of-mind tasks at a human level, which would suggest that LLMs had reached the theory of mind stage of AI. However, some researchers were not convinced that the study's findings were accurate. According to the article, ChatGPT only matched the performance of a six-year-old child, and critics of the study's methods found in follow-up experiments that the LLMs were arriving at correct answers through shortcuts rather than through true theory-of-mind reasoning. As of 2024, self-aware AI did not exist.

Overview

In 1958, mathematician Stanislaw Ulam, recounting a conversation with fellow mathematician John von Neumann, used the term singularity to refer to a future point at which technology would change human existence. In 1993, science fiction writer Vernor Vinge attached the term to computers and artificial intelligence. Because the concept of the technological singularity was only conjecture as of 2017, scholars and computer experts disagreed on what form, if any, it would take.

In 2005, author and computer engineer Ray Kurzweil defined the technological singularity as the moment machine intelligence exceeds human intelligence. Kurzweil predicted that AI would be capable of passing a valid Turing test and achieving human-like intelligence by 2029. In 2022, a Google AI chatbot was reported to have passed the Turing test, though many observers argued that this revealed flaws in the test rather than human-level machine intelligence. Kurzweil also predicted that human civilization would reach the technological singularity in 2045. He believed the singularity would be beneficial to humanity, for example by creating the ability to implant technology in the brain as a way to increase human intellect. His vision of the future is a world in which humans and intelligent machines join in a cooperative relationship.

Other views on the technological singularity predict a future in which humans can upload their consciousness into computers or in which microscopic machines enter the body to cure or eliminate disease. Some theorists worry that the singularity would pose a threat to humanity as humans become too reliant on technology. Because computer capability has historically tended to double roughly every eighteen months to two years, theorists foresee a point at which technological advancements occur too rapidly for civilization to keep up. The resulting crisis could have serious effects on the stability of civilization.
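The doubling claim above is simple compound growth: after t years, capability has multiplied by 2^(t/d), where d is the doubling period. A minimal sketch of the arithmetic (the doubling periods chosen here are illustrative assumptions, not data from the article):

```python
def capability_multiplier(years: float, doubling_period_years: float) -> float:
    """How many times capability multiplies after `years`,
    assuming it doubles once every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# Illustrative only: growth over 30 years under two assumed doubling periods.
print(capability_multiplier(30, 2.0))  # doubling every 2 years -> 2**15 = 32768x
print(capability_multiplier(30, 1.5))  # doubling every 18 months -> 2**20 = 1048576x
```

The point theorists draw from this is less the exact exponent than its shape: a small change in the doubling period compounds into orders-of-magnitude differences within a few decades.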

Extreme interpretations of the singularity warn of self-aware, intelligent robots taking over the daily lives of human beings or even pushing humanity into extinction. While few believe these changes will resemble the bleak apocalyptic landscapes of science-fiction films and novels, some theories see a gradual shift in human evolution toward a machine-based future.

Despite the attention the concept of the technological singularity receives, many experts doubt that such an event could ever become reality. Some computer engineers doubt that the modern technological boom can be sustained, foreseeing a time when humans reach the physical limits of what can be accomplished with computer hardware. Others claim that even if human technology eventually creates self-aware computers, the computers would be individual machines, just as each human is an individual person. Because they would be constructed by humans, they also would have limitations and would not be all-powerful computers linked together in a world-controlling network.

Skeptics of the technological singularity also argue that it is impossible for humans to create a machine that exceeds true human intelligence. The ability of the human brain to reason, learn, and grow from experience is part of humanity's genetic makeup, honed by millions of years of biological evolution. Even the most advanced AI could never mimic that genetic evolution and, therefore, could never surpass human intelligence.

Bibliography

Galeon, Dom, and Christianna Reedy. "Kurzweil Claims That the Singularity Will Happen by 2045." Futurism, 5 Oct. 2017, futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045/. Accessed 4 Dec. 2017.

Hodson, Hal. "Visions of the Singularity: How Smart Can AI Get?" New Scientist, 22 Mar. 2016, www.newscientist.com/article/mg22930661-800-vision-of-singularity-questions-ai-intellect/. Accessed 4 Dec. 2017.

Kaku, Michio. "The Technological Singularity and Merging with Machines." Big Think, bigthink.com/dr-kakus-universe/the-technological-singularity-and-merging-with-machines. Accessed 5 Dec. 2017.

Kurzweil, Ray. The Singularity Is Near. Penguin Books, 2006.

Oremus, Will. "Google's AI Passed a Famous Test, and Showed How the Test Is Broken." The Washington Post, 17 June 2022, www.washingtonpost.com/technology/2022/06/17/google-ai-lamda-turing-test/. Accessed 20 Nov. 2024.

Rouse, Margaret. "AI (Artificial Intelligence)." TechTarget, searchcio.techtarget.com/definition/AI. Accessed 4 Dec. 2017.

Sandberg, Anders. "An Overview of Models of Technological Singularity." Oxford University, agi-conf.org/2010/wp-content/uploads/2009/06/agi10singmodels2.pdf. Accessed 4 Dec. 2017.

Shanahan, Murray. The Technological Singularity. MIT P, 2017.

Strickland, Eliza. "AI Outperforms Humans in Theory of Mind Tests." IEEE Spectrum, 20 May 2024, spectrum.ieee.org/theory-of-mind-ai. Accessed 20 Nov. 2024.

Strickland, Jonathan. "What's the Technological Singularity?" HowStuffWorks, electronics.howstuffworks.com/gadgets/high-tech-gadgets/technological-singularity.htm. Accessed 4 Dec. 2017.