Artificial general intelligence (AGI)

Artificial general intelligence (AGI)—also known as strong artificial intelligence (AI)—is a hypothetical form of AI that could meet or exceed the intellect of human beings. Although artificial general intelligence had yet to be realized as of 2023, a surge in research that began at the start of the twenty-first century has moved the research community closer. Debate has also grown over whether artificial general intelligence can be achieved and, more importantly, whether it should be. While some researchers have noted the potential benefits of AGI, others have pointed to the unknown threats of computers that have their own “mind” and can function fully outside of human interaction.


Background

The notion of artificial intelligence has been around since the 1950s. Following World War II (1939–1945), researchers realized that computers could do more than just crunch numbers. They could also manipulate symbols. This breakthrough led to what is known as weak AI, or artificial narrow intelligence. Weak AI does not assume that computers have or need human-level intelligence to produce useful outcomes. Unlike artificial general intelligence, in which computers would be able to “think” in ways similar to a human brain, artificial narrow intelligence is traditionally restricted to the completion of one or more specific tasks.

While the idea of supercomputers has been a topic of fantasy and science fiction for decades, early developers’ expectation that a computer’s intelligence would exceed that of humans within a few decades went unfulfilled. As it turned out, developing a computer that could “think” in ways similar to the human mind was a much more daunting task than first assumed. Nonetheless, cutting-edge AI research companies such as OpenAI, Anthropic, and DeepMind have pushed the envelope, creating machines that some argue are close to achieving AGI. However, other computer scientists remain skeptical that artificial general intelligence can ever be realized.

Overview

Intelligent systems, such as the human brain, are commonly characterized by a number of traits, including the ability to reason. Reasoning encompasses actions such as solving puzzles, acting strategically, and making decisions in the midst of uncertainty. Other hallmarks include representing knowledge and common sense, making plans, learning new things, communicating, using imagination, and achieving autonomy. While some computer systems meet these fundamentals to varying degrees, most researchers as of 2023 agreed that no computerized system had achieved a level of human-like intelligence.

Artificial narrow intelligence (weak AI) has been shown to outshine humans on specific, limited tasks. Weak AI includes, for example, chess- and game-playing systems, chatbots, self-driving cars, and smart assistants. In 2011, IBM’s computer Watson beat two former Jeopardy champions. In 1997, IBM’s Deep Blue became the first computer system to defeat a reigning world chess champion, Garry Kasparov, in a match. By the late 2010s, other chess-playing systems, such as Stockfish and AlphaZero, had improved to such an extent that they were unbeatable by human players and could only reasonably compete against each other. However, these systems are limited to the games they were trained on; they do not perform other functions. Similarly, OpenAI’s large language model GPT-4 can create human-like, conversational text, but it cannot play high-level chess.

Unlike artificial narrow intelligence, whose tasks, however complex, must be programmed to function correctly, AGI is the yet-unrealized attempt to create a system that thinks and acts with the autonomy, or freedom, of a human being. Tech billionaire Elon Musk has predicted the advent of AGI as early as 2029; however, most experts anticipate that AGI will not be fully developed until 2060 or beyond. A few experts believe it will never happen.

AI systems are fed massive amounts of information known as big data, and the data are run through the system millions of times during training. Through large language models, neural networks, machine learning, and deep learning, the system processes the data and learns from it. For example, a system can learn to recognize tone when generating natural language, to propose solutions to medical problems, or to spot errors in computer code. The goal of AGI is to extend the system’s ability to adapt to new conditions or changed circumstances, just as a human may reconsider a previous decision and creatively problem-solve.
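
The core of this training process can be sketched in a few lines of Python. The toy example below illustrates only the general idea, that repeated passes over data gradually adjust a model’s parameters to reduce error; the dataset, learning rate, and single-parameter model are invented for illustration, and no production system works at this scale.

    # A toy sketch of machine learning: gradient descent on a one-parameter
    # model. Real systems have billions of parameters and use specialized
    # frameworks; only the principle of repeated adjustment is shown here.
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x and targets y (y = 2x)

    w = 0.0      # the model's single trainable parameter
    lr = 0.01    # learning rate: how far to adjust w after each error

    for epoch in range(1000):      # many passes over the same data
        for x, y in data:
            error = w * x - y      # how far the prediction is from the target
            w -= lr * error * x    # nudge w in the direction that reduces error

    print(f"learned w = {w:.3f}")  # converges toward 2.0 as training proceeds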

Top AGI research companies in the 2020s included, among others, OpenAI, Google DeepMind, and Anthropic. OpenAI’s premier product is GPT-4 (Generative Pre-trained Transformer 4), the fourth iteration of the GPT-n series, released on March 14, 2023. Like its predecessors, GPT-4 can converse with the user as well as, among other things, write song lyrics, poetry, and fables. However, this version of the large language model can also accept and process both text and images. As such, it can describe images, summarize text from screenshots, and answer exam questions based on diagrams.
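
A text-plus-image request of the kind described above might look like the following sketch, which uses the OpenAI Python SDK. The model name and image URL are placeholders, and the availability of image input and the exact interface have changed over time, so this should be read as an approximation rather than a definitive recipe.

    # A hedged sketch of sending text plus an image to a GPT-4-class model
    # via the OpenAI Python SDK (openai>=1.0). Model name and URL are
    # placeholders, not guaranteed values.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any GPT-4-class model with vision support
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this diagram."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/diagram.png"}},
            ],
        }],
    )
    print(response.choices[0].message.content)  # the model's description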

Microsoft, which invested $10 billion in OpenAI, created controversy following the release of GPT-4 by publishing a paper titled “Sparks of Artificial General Intelligence.” The authors argue that because GPT-4 was trained on an unprecedented amount of data and can solve complex problems without any special prompting, it exhibits, at least in a preliminary or limited way, artificial general intelligence. Pushback came from OpenAI executives themselves, who dismissed the claim and suggested true artificial general intelligence was still years, if not decades, away.

GPT-4 is the software engine for OpenAI’s chatbot ChatGPT. As of July 2023, ChatGPT had more than 100 million users. Another OpenAI product, DALL-E, can use a text description to create original, realistic images and art. OpenAI’s stated goal is the pursuit of artificial general intelligence.

Google’s next-generation large language model system, PaLM, can complete advanced reasoning tasks such as coding and math, classification and question answering, language translation, and natural language generation. Google’s subsidiary DeepMind garnered widespread media attention in 2020 with its system AlphaFold, which went on to predict the structure of virtually every protein in the human genome; in 2022, its predictions were expanded to cover virtually every known protein in nature. That same year, the company unveiled AlphaCode, which can write computer code at the level of an average human competitive programmer. DeepMind has also delved into game-playing with models that best humans at Go (AlphaGo and its successors) and StarCraft II (AlphaStar). In 2023, Google merged its Google Brain research division with DeepMind to focus more intently on the pursuit of artificial general intelligence.

Anthropic, which released the latest version of its large language model, Claude 2, in July 2023, was in the process of developing a deliberately deceptive version of Claude, known as the Decepticon. A central challenge of AI, and especially AGI, is ensuring that the computer arrives at correct answers. If a model can be constructed so that the system lies, researchers can work backward to figure out how to prevent the deceit. However, researchers remain unsure whether AI systems even have the ability to deceive, and as of 2023 the project remained ongoing.

Despite the recent influx of funding to the AI industry, much of it in pursuit of artificial general intelligence, the idea of artificial general intelligence has its critics, whose objections run along two lines: whether AGI is actually possible and whether, if possible, AGI presents an existential threat to humankind. A well-known argument against the feasibility of artificial general intelligence is the “Chinese room argument,” developed by American philosopher John Searle in 1980. Searle presented a simple thought experiment in which he is in a room, or box, and a number of Chinese speakers are outside the room. Searle-in-the-box has no knowledge of the Chinese language, but he is fluent in English. Those outside the box write questions in Chinese on cards and slip the cards through a slot to Searle; this represents input. Searle uses a rulebook to produce Chinese characters based on what is on each card and slips his answer back through the slot; this represents output. Although Searle gives the correct answers to the questions, he nonetheless has understood nothing. To him, both the input and the output are mere “squiggle-squoggles.”
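
Searle’s rulebook can be parodied in a few lines of code. The sketch below is a deliberately trivial illustration, not a serious model: a lookup table, invented here for the example, maps input strings to output strings and produces correct answers with no representation of meaning anywhere in the program.

    # A toy Chinese room: the "rulebook" is a lookup table invented for
    # illustration. Correct outputs emerge without any understanding.
    rulebook = {
        "你好吗？": "我很好。",          # "How are you?" -> "I am fine."
        "你叫什么名字？": "我叫约翰。",  # "What is your name?" -> "My name is John."
    }

    def searle_in_the_box(card: str) -> str:
        # Match the incoming squiggles to the prescribed squoggles; nothing
        # in this function models what the symbols mean.
        return rulebook.get(card, "我不明白。")  # default: "I do not understand."

    print(searle_in_the_box("你好吗？"))  # a correct answer, zero comprehension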

Thus, Searle argues that although computers may arrive at the correct answer, they cannot understand it, just as he had no understanding of the Chinese. A computer can only do what he did: move squiggle-squoggles about. Developers of next-generation large language models have answered this objection primarily with scale, training on so much data that there is little a computer has not encountered or been trained for. Most AI researchers argue from the perspective of a steady upward build of AI technology that will continue to advance unabated. Nonetheless, Searle would counter that no matter how much a computer can output, it can never understand that output, which limits the ability to develop true artificial general intelligence.

The threat of AGI to the future of humanity has long been considered, and as AGI perhaps edges closer to reality, those concerns have grown louder. In 2023, hundreds of researchers, tech executives, and academics, including Elon Musk and Apple co-founder Steve Wozniak, signed public statements warning that AI could, in the worst case, lead to the annihilation of humanity. Consider the argument laid out simply by philosopher David Chalmers in 2010: if an AI is created that equals human intelligence, that AI could create a superior AI+, which in turn could create a still-superior AI++, and so on, such that at some point AI would exceed human intelligence, perhaps to the point that computers no longer need humans at all.
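
As a purely illustrative caricature of that recursion, the loop below reduces “intelligence” to a single number that each generation multiplies by a fixed factor, assumptions the actual argument does not make; it shows only how repeated self-improvement runs away from the human baseline.

    # A caricature of Chalmers's AI -> AI+ -> AI++ recursion. Treating
    # intelligence as one number and growth as a fixed 1.5x factor are
    # assumptions made purely for illustration.
    intelligence = 1.0   # human level, by stipulation
    generation = 0
    while intelligence < 100.0:       # stop once far beyond human level
        intelligence *= 1.5           # each AI designs a smarter successor
        generation += 1
        print(f"AI{'+' * generation}: {intelligence:.1f}x human baseline")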

This doomsday scenario is entirely speculative, and its likelihood remains unknown. Could artificial general intelligence have malicious intent, as depicted in many science fiction movies? The argument has been made that because a computer system cannot have good intentions, neither can it have bad intentions. Yet the concern is that an amoral (without morals) computer may bring about unintended consequences. Questions also remain as to whether programmers can remove their own intent, preconceived notions, and prejudices from the coding, and whether tech executives, needing a return on their very large investments, may allow the market to drive the programming in ways most beneficial to them. In the worst-case market scenario, AI would do all or most jobs, creating mass unemployment.

AI made significant advances during the first two decades of the twenty-first century. AI technology has entered the popular consumer marketplace through personal assistants such as Amazon’s Alexa, Google Assistant, and Apple’s Siri, and the number of devices and apps that use AI is consistently growing. On the medical front, AI is being used to improve the diagnosis of diseases, aid the design of new medications, and assist in surgery. However, the bridge from weak AI to AGI has yet to be crossed. Whether it ever will be and, if so, when remain open questions.

Bibliography

Bubeck, Sébastien, Varun Chandrasekaran, Ronen Eldan, et al. “Sparks of Artificial General Intelligence: Early Experiments with GPT-4.” arXiv, 13 Apr. 2023, arxiv.org/abs/2303.12712. Accessed 7 Aug. 2023.

Fjelland, Ragnar. “Why General Artificial Intelligence Will Not Be Realized.” Humanities and Social Sciences Communications, vol. 7, no. 10, 2020, doi.org/10.1057/s41599-020-0494-4. Accessed 7 Aug. 2023.

Heaven, Will Douglas. “Artificial General Intelligence: Are We Close, and Does It Even Make Sense to Try?” MIT Technology Review, 15 Oct. 2020, www.technologyreview.com/2020/10/15/1010461/artificial-general-intelligence-robots-ai-agi-deepmind-google-openai/. Accessed 7 Aug. 2023.

Metz, Cade. “‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead.” The New York Times, 1 May 2023, www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html. Accessed 7 Aug. 2023.

Morozov, Evgeny. “The True Threat of Artificial Intelligence.” The New York Times, 30 June 2023, www.nytimes.com/2023/06/30/opinion/artificial-intelligence-danger.html. Accessed 7 Aug. 2023.

Naysmith, Caleb. “6 Jobs Artificial Intelligence Is Already Replacing and How Investors Can Capitalize on It.” Benzinga, 7 Feb. 2023, www.benzinga.com/news/23/02/30766478/6-jobs-artificial-intelligence-is-already-replacing-how-investors-can-capitalize-on-it. Accessed 7 Aug. 2023.

Naudé, Wim, and Nicola Dimitri. “The Race for an Artificial General Intelligence: Implications for Public Policy.” AI & Society, vol. 35, 2020, pp. 367–379, doi.org/10.1007/s00146-019-00887-x. Accessed 7 Aug. 2023.

Shevlin, Henry, Karina Vold, Matthew Crosby, and Marta Halina. “The Limits of Machine Intelligence.” EMBO Reports, vol. 20, 2019, doi.org/10.15252/embr.201949177. Accessed 7 Aug. 2023.

Wiggers, Kyle. “DeepMind’s New AI Can Perform over 600 Tasks, from Playing Games to Controlling Robots.” TechCrunch, 13 May 2022, techcrunch.com/2022/05/13/deepminds-new-ai-can-perform-over-600-tasks-from-playing-games-to-controlling-robots/. Accessed 7 Aug. 2023.

Xiang, Chloe. “Microsoft Now Claims GPT-4 Shows ‘Sparks’ of General Intelligence.” Vice, 24 Mar. 2023, www.vice.com/en/article/g5ypex/microsoft-now-claims-gpt-4-shows-sparks-of-general-intelligence. Accessed 7 Aug. 2023.