Artificial intelligence
Artificial Intelligence (AI) refers to the creation and use of systems, programs, and machines capable of exhibiting human-like intelligence through functions such as reasoning, learning, and knowledge representation. This expansive field includes subareas like voice and image recognition, natural language processing, robotics, and expert systems, each utilizing various programming techniques. Definitions of AI can differ across disciplines; for computer scientists, it involves developing intelligent programs, while engineers focus on building machines that perform tasks traditionally done by humans. Cognitive scientists study AI to better model and understand human intelligence.
The history of AI dates back to the mid-20th century, with significant milestones including the invention of the Lisp programming language and the development of early expert systems. Modern AI applications are diverse, ranging from intelligent tutor systems and automated theorem provers to gaming and healthcare technologies. As AI continues to evolve, ethical considerations surrounding its use, such as privacy and automation's impact on employment, become increasingly pertinent. The future of AI promises both advancements in technology and important discussions about moral responsibility and regulation in society.
Summary
Artificial intelligence is the design, implementation, and use of programs, machines, and systems that exhibit human intelligence. Its most important activities are knowledge representation, reasoning, and learning. Artificial intelligence encompasses a number of important subareas, including voice recognition, image identification, natural language processing, expert systems, neural networks, planning, robotics, and intelligent agents. Several important programming techniques have been enhanced by artificial intelligence researchers, including classical search, probabilistic search, and logic programming.
Definition and Basic Principles
Artificial intelligence is a broad field of study, and definitions of the field vary by discipline. For computer scientists, artificial intelligence refers to the development of programs that exhibit intelligent behavior. The programs can engage in intelligent planning (timing traffic lights), translate natural languages (converting a Chinese website into English), act like an expert (selecting the best wine for dinner), or perform many other tasks. For engineers, artificial intelligence refers to building machines that perform actions often done by humans. The machines can be simple, like a computer vision system embedded in an ATM (automated teller machine). They can also be more complex, such as a robotic rover sent to Mars. They can be extremely complex, for example, an automated factory that builds an exercise machine with little human intervention. For cognitive scientists, artificial intelligence refers to building models of human intelligence to better understand human behavior. In the early days of artificial intelligence, most models of human intelligence were symbolic and closely related to cognitive psychology and philosophy. The basic idea was that regions of the brain perform complex reasoning by processing symbols. Later, many models of human cognition were developed to mirror the operation of the brain as an electrochemical computer. This started with the simple Perceptron, an artificial neural network introduced by Frank Rosenblatt in 1957 and critically analyzed by Marvin Minsky and Seymour Papert in 1969. These efforts progressed to the backpropagation algorithm described by David E. Rumelhart and James L. McClelland in 1986. The culmination was a large number of supervised and unsupervised learning algorithms.

When defining artificial intelligence, it is important to remember that the programs, machines, and models developed by computer scientists, engineers, and cognitive scientists do not actually have human intelligence. They only exhibit intelligent behavior. This can be difficult to remember because artificially intelligent systems often contain large numbers of facts, such as weather information for New York City. They can also contain complex reasoning patterns, such as the reasoning needed to prove a geometric theorem from axioms. Another possibility is complex knowledge, such as an understanding of all the rules required to build an automobile. Last might be the ability to learn, such as a neural network learning to recognize cancer cells. Scientists continue to look for better models of the brain and human intelligence.
Background and History
Although the concept of artificial intelligence probably has existed since antiquity, the term was first used by American scientist John McCarthy at a conference held at Dartmouth College in 1956. Between 1955 and 1956, the first artificial intelligence program, Logic Theorist, was developed in IPL (Information Processing Language). In 1958, McCarthy invented Lisp, a programming language that improved on IPL. Syntactic Structures (1957), a book about the structure of natural language by American linguist Noam Chomsky, made natural language processing into an area of study within artificial intelligence. In the next few years, numerous researchers began to study artificial intelligence, laying the foundation for many later applications, such as general problem solvers, intelligent machines, and expert systems.
In the 1960s and early 1970s, Edward Feigenbaum and other scientists at Stanford University built two early expert systems: DENDRAL, which classified chemicals, and MYCIN, which identified diseases. These early expert systems were cumbersome to modify because they had hard-coded rules. By the late 1970s, the OPS expert system shell, with variable rule sets, had been developed; Digital Equipment Corporation used it to build some of the first commercial expert systems. In addition to expert systems, neural networks became an important area of artificial intelligence in the 1970s and 1980s. Frank Rosenblatt introduced the Perceptron in 1957, but it was Perceptrons: An Introduction to Computational Geometry (1969) by Minsky and Seymour Papert and the two-volume Parallel Distributed Processing: Explorations in the Microstructure of Cognition (1986) by Rumelhart, McClelland, and the PDP Research Group, that really defined the field of neural networks. Development of artificial intelligence has continued, with game theory, speech recognition, robotics, and autonomous agents being some of the best-known examples.
How It Works
The first activity of artificial intelligence is to understand how multiple facts interconnect to form knowledge and to represent that knowledge in a machine-understandable form. The next task is to understand and document a reasoning process for arriving at a conclusion. The final component of artificial intelligence is to add, whenever possible, a learning process that enhances the knowledge of a system. Machine learning, the method by which generative chatbots such as ChatGPT develop their knowledge, is one example of this process.
Knowledge Representation. Facts are simple pieces of information that can be seen as either true or false, although in fuzzy logic, there are levels of truth. When facts are organized, they become information. When information is well understood, over time, it becomes knowledge. To use knowledge in artificial intelligence, especially when writing programs, it has to be represented in some concrete fashion. Initially, most of those developing artificial intelligence programs saw knowledge as represented symbolically, and their early knowledge representations were symbolic. Semantic nets, directed graphs of facts with added semantic content, were highly successful representations used in many of the early artificial intelligence programs. Later, the nodes of the semantic nets were expanded to contain more information, and the resulting knowledge representation was referred to as frames. Frame representation of knowledge was very similar to object-oriented data representation, including a theory of inheritance.
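As a rough illustration of these two representations, the following Python sketch stores a few facts as a tiny semantic net (a list of labeled edges) and as frames with an inheritance link; the animal facts and slot names are illustrative assumptions rather than examples from any particular system.

```python
# A tiny semantic net: each edge is (subject, relation, object).
semantic_net = [
    ("canary", "is-a", "bird"),
    ("bird", "has-part", "wings"),
    ("bird", "is-a", "animal"),
]

def related(subject, relation):
    """Return all objects linked to a subject by a given relation."""
    return [o for s, r, o in semantic_net if s == subject and r == relation]

# Frames: named slots plus a parent link that supplies inherited values.
frames = {
    "bird":   {"parent": None,   "locomotion": "flies", "covering": "feathers"},
    "canary": {"parent": "bird", "color": "yellow"},
}

def slot_value(frame, slot):
    """Look up a slot, walking up the inheritance chain if needed."""
    while frame is not None:
        if slot in frames[frame]:
            return frames[frame][slot]
        frame = frames[frame].get("parent")
    return None

print(related("bird", "has-part"))         # ['wings']
print(slot_value("canary", "locomotion"))  # 'flies', inherited from the bird frame
```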
Another popular way to represent knowledge in artificial intelligence is as logical expressions. English mathematician George Boole represented knowledge as Boolean expressions in the 1800s. British mathematicians Bertrand Russell and Alfred North Whitehead expanded this to quantified expressions in 1910. French computer scientist Alain Colmerauer incorporated it into logic programming with the programming language Prolog in the 1970s. The knowledge of a rule-based expert system is embedded in the if-then rules of the system, and because each if-then rule has a Boolean representation, it can be seen as a form of logical knowledge representation.
Neural networks model the human neural system and use this model to represent knowledge. The brain is an electrochemical system that stores its knowledge in synapses. As electrochemical signals pass through a synapse, they modify it, resulting in the acquisition of knowledge. In the neural network model, synapses are represented by the weights of a weight matrix, and knowledge is added to the system by modifying the weights.
Reasoning. Reasoning is the process of determining new information from known information. Artificial intelligence systems add reasoning soon after they have developed a method of knowledge representation. If knowledge is represented in semantic nets, then most reasoning involves some type of tree search. One popular reasoning technique is to traverse a decision tree, in which the reasoning is represented by a path taken through the tree. Tree searches of general semantic nets can be very time-consuming and have led to many advancements in tree-search algorithms, such as placing bounds on the depth of search and backtracking.
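The following is a minimal Python sketch of a depth-bounded tree search with backtracking over a small graph of concept links; the graph, depth limit, and node names are illustrative assumptions.

```python
def bounded_search(graph, start, goal, limit):
    """Depth-first search that backtracks and never goes deeper than `limit`."""
    def dfs(node, path):
        if node == goal:
            return path
        if len(path) > limit:          # depth bound: give up on this branch
            return None
        for neighbor in graph.get(node, []):
            if neighbor not in path:   # avoid revisiting nodes (cycles)
                result = dfs(neighbor, path + [neighbor])
                if result is not None:
                    return result
        return None                    # dead end: backtrack
    return dfs(start, [start])

# Illustrative graph of concept links.
graph = {"penguin": ["bird"], "bird": ["animal"], "animal": ["living-thing"]}
print(bounded_search(graph, "penguin", "living-thing", limit=4))
# ['penguin', 'bird', 'animal', 'living-thing']
```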
Reasoning in logic programming usually follows an inference technique embodied in first-order predicate calculus. Some inference engines, such as that of Prolog, use a back-chaining technique to reason from a result, such as a geometry theorem, to its antecedents, the axioms, and also show how the reasoning process led to the conclusion. Other inference engines, such as that of the expert system shell CLIPS, use a forward-chaining inference engine to see what facts can be derived from a set of known facts.
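A minimal back-chaining sketch in Python (not the actual Prolog or CLIPS inference engine) illustrates the idea: to prove a goal, the program either finds it among the known facts or finds a rule whose consequent matches the goal and then recursively proves that rule's antecedents. The geometry-flavored rules and facts are illustrative assumptions.

```python
# Rules map a consequent to lists of antecedents that must all hold.
rules = {
    "rectangle": [["parallelogram", "right-angle"]],
    "parallelogram": [["quadrilateral", "opposite-sides-parallel"]],
}
facts = {"quadrilateral", "opposite-sides-parallel", "right-angle"}

def prove(goal):
    """Backward chaining: reason from the goal back toward known facts."""
    if goal in facts:
        return True
    for antecedents in rules.get(goal, []):
        if all(prove(a) for a in antecedents):
            return True
    return False

print(prove("rectangle"))  # True: derived from the facts (axioms) above
```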
Neural networks, such as those trained by backpropagation, have an especially simple reasoning algorithm. The knowledge of the neural network is represented as a matrix of synaptic connections, possibly quite sparse. The information to be evaluated by the neural network is represented as an input vector of the appropriate size, and the reasoning process is to multiply the connection matrix by the input vector to obtain the conclusion as an output vector.
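This reasoning step can be sketched in a few lines of Python with NumPy; the 2-by-3 weight matrix and the input vector below are arbitrary illustrative values.

```python
import numpy as np

# Synaptic connections stored as a weight matrix (2 outputs, 3 inputs).
weights = np.array([[0.2, -0.5, 1.0],
                    [0.7,  0.1, -0.3]])

# The information to be evaluated, encoded as an input vector.
x = np.array([1.0, 0.0, 0.5])

# Reasoning: multiply the connection matrix by the input vector.
output = weights @ x
print(output)  # [0.7  0.55], the network's conclusion as an output vector
```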
Learning. Learning in an artificial intelligence system involves modifying or adding to its knowledge. For both semantic net and logic programming systems, learning is accomplished by adding or modifying the semantic nets or logic rules, respectively. Although much effort has gone into developing learning algorithms for these systems, all of them, to date, have used ad hoc methods and experienced limited success. Neural networks, on the other hand, have been very successful at developing learning algorithms. Backpropagation is a robust supervised learning algorithm in which the system learns from a set of training pairs using gradient-descent optimization, while numerous unsupervised learning algorithms learn by studying the clustering of the input vectors.
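The following Python sketch shows supervised learning by gradient descent on a single linear neuron, far simpler than full backpropagation but built on the same idea of repeatedly nudging weights to reduce the error on a set of training pairs; the training data are made-up values.

```python
import numpy as np

# Training pairs: input vectors and their target outputs (illustrative only).
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
targets = np.array([1.0, 0.0, 1.0])

weights = np.zeros(2)
learning_rate = 0.1

for epoch in range(200):
    for x, t in zip(X, targets):
        prediction = weights @ x
        error = prediction - t
        # Gradient descent: move the weights opposite the error gradient.
        weights -= learning_rate * error * x

print(weights)  # approximately [0., 1.]: the neuron now reproduces the targets
```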
Applications and Products
There are many important applications of artificial intelligence, ranging from computer games to programs designed to prove theorems in mathematics. This section contains a sample of both theoretical and practical applications.
Expert Systems. One of the most successful areas of artificial intelligence is expert systems. Literally thousands of expert systems are being used to help both experts and novices make decisions. For example, in the 1990s, Dell developed a simple expert system that allowed shoppers to configure a computer as they wished. In the 2010s, a visit to the Dell website offered a customer much more than a simple configuration program. Based on the customer's answers to some rather general questions, dozens of small expert systems suggested what computer to buy. The Dell site was not unique in its use of expert systems to guide customers' choices. Insurance companies, automobile companies, and many others have used expert systems to assist customers in making decisions.
There are several categories of expert systems, but by far the most popular are the rule-based expert systems. Most rule-based expert systems are created with an expert system shell. The first successful rule-based expert system shell was the OPS5 used by Digital Equipment Corporation (DEC), and the most popular modern systems are CLIPS, developed by the National Aeronautics and Space Administration (NASA) in 1985, and its Java clone, Jess, developed at Sandia National Laboratories in 1995. All rule-based expert systems have a similar architecture, and the shells make it fairly easy to create an expert system as soon as a knowledge engineer gathers the knowledge from a domain expert. The most important component of a rule-based expert system is its knowledge base of rules. Each rule consists of an if-then statement with multiple antecedents, multiple consequents, and possibly a rule certainty factor. The antecedents of a rule are statements that can be true or false and depend on facts that are either introduced into the system by a user or derived as the result of a rule being fired. For example, a fact could be red-wine and a simple rule could be if (red-wine) then (it-tastes-good). The expert system also has an inference engine that can apply multiple rules in an orderly fashion so that the expert system can draw conclusions by applying its rules to a set of facts introduced by a user. Although it is not absolutely required, most rule-based expert systems have a user-friendly interface and an explanation facility to justify their reasoning.
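A minimal forward-chaining sketch in Python, in the spirit of a rule-based shell but not modeled on CLIPS or Jess themselves, shows how rules with antecedents, a consequent, and a certainty factor can be fired repeatedly against a set of facts; the wine-themed rules and the simple way certainties are combined are illustrative assumptions.

```python
# Each rule: antecedents that must all hold, a consequent, and a certainty factor.
rules = [
    {"if": {"red-wine"}, "then": "it-tastes-good", "cf": 0.8},
    {"if": {"it-tastes-good", "guests-arriving"}, "then": "serve-it", "cf": 0.9},
]

def forward_chain(initial_facts):
    """Fire rules until no new conclusions appear, tracking rough certainties."""
    certainty = {fact: 1.0 for fact in initial_facts}
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule["if"].issubset(certainty) and rule["then"] not in certainty:
                # Conclusion certainty: the rule's factor times its weakest antecedent.
                weakest = min(certainty[a] for a in rule["if"])
                certainty[rule["then"]] = rule["cf"] * weakest
                changed = True
    return certainty

print(forward_chain({"red-wine", "guests-arriving"}))
# it-tastes-good is derived with certainty 0.8, and serve-it with 0.72
```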
Theorem Provers. Most theorems in mathematics can be expressed in first-order predicate calculus. For any particular area, such as synthetic geometry or group theory, all provable theorems can be derived from a set of axioms. Mathematicians have written programs to automatically prove theorems since the 1950s. These theorem provers either start with the axioms and apply an inference technique, or start with the theorem and work backward to see how it can be derived from axioms. Resolution, the inference technique on which Prolog is built, is a well-known automated technique that can be used to prove theorems, but there are many others. For Resolution, the user starts with the theorem, converts it to a normal form, and then mechanically builds reverse decision trees to prove the theorem. If a reverse decision tree whose leaf nodes are all axioms is found, then a proof of the theorem has been discovered.
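A propositional Python sketch of the refutation idea behind resolution: the theorem is negated, everything is kept in clause form, and clauses are resolved until the empty clause (a contradiction) appears, which proves the theorem. Real resolution provers work in full first-order logic with unification; the axioms below are illustrative assumptions.

```python
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return every clause obtained by cancelling one complementary literal pair."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

def proves(axioms, theorem):
    """Refutation: add the negated theorem and search for the empty clause."""
    clauses = set(axioms) | {frozenset([negate(theorem)])}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True          # empty clause: contradiction found
                new.add(frozenset(r))
        if new <= clauses:
            return False                 # nothing new can be derived: no proof
        clauses |= new

# Axioms: P, and P implies Q (written as the clause {~P, Q}). Theorem: Q.
axioms = [frozenset(["P"]), frozenset(["~P", "Q"])]
print(proves(axioms, "Q"))  # True
```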
Gödel’s incompleteness theorem (proved by Austrian-born American mathematician Kurt Gödel) shows that it may not be possible to automatically prove an arbitrary theorem in systems as complex as the natural numbers. For simpler systems, such as group theory, automated theorem proving works if the user's computer can generate all reverse trees or a suitable subset of trees that can yield a proof in a reasonable amount of time. Efforts have been made to develop theorem provers for higher-order logic than first-order predicate calculus, but these have not been very successful.
Computer scientists have spent considerable time trying to develop an automated technique for proving the correctness of programs, that is, showing that any valid input to a program produces a valid output. This is generally done by producing a consistent model and mapping the program to the model. The first such model was described by English mathematician Alan Turing in 1936 and is now called a Turing machine. A formal system that is rich enough to serve as a model for a typical programming language, such as C++, must support higher-order logic to capture the arguments and parameters of subprograms. Lambda calculus, denotational semantics, von Neumann machines, finite state machines, and other systems have been proposed to provide a model onto which all programs of a language can be mapped. Some of these do capture many programs, but devising a practical automated method of verifying the correctness of programs has proven difficult.
Intelligent Tutor Systems. Almost every field of study has many intelligent tutor systems available to assist students in learning. Sometimes, the tutor system is integrated into a package. For example, in Microsoft Office, an embedded intelligent helper provides popup help boxes to a user when it detects the need for assistance and full-length tutorials if it detects more help is needed. In addition to the intelligent tutors embedded in programs as part of a context-sensitive help system, there are a vast number of stand-alone tutoring systems in use.
The first stand-alone intelligent tutor was SCHOLAR, developed by J. R. Carbonell in 1970. It used semantic nets to represent knowledge about South American geography, provided a user interface to support asking questions, and was successful enough to demonstrate that it was possible for a computer program to tutor students. At about the same time, the University of Illinois developed its PLATO computer-aided instruction system, which provided a general language for developing intelligent tutors with touch-sensitive screens. One of the most famous PLATO lessons was a biology tutorial on evolution. Of the thousands of modern intelligent tutors, SHERLOCK, a training environment for electronic troubleshooting, and PUMP, a system designed to help learn algebra, are typical.
Electronic Games. Electronic games have been played since the invention of the cathode-ray tube for television. In the 1980s, games such as Solitaire, Pac-Man, and Pong for personal computers became almost as popular as the stand-alone game platforms. In the 2010s, multiuser Internet games were enjoyed by young and old alike, and game playing on mobile devices became an important application. In all of these electronic games, the user competes with one or more intelligent agents embedded in the game, and the creation of these intelligent agents uses considerable artificial intelligence. When creating an intelligent agent that will compete with a user or, as in Solitaire, just react to the user, a programmer has to embed the game knowledge into the program. For example, in chess, the programmer would need to capture a representation of the chessboard and its possible configurations. The programmer also would need to add reasoning procedures to the game; for example, there would have to be procedures to move each individual chess piece on the board. Finally, and most important for game programming, the programmer would need to add one or more strategic decision modules to provide the intelligent agent with a strategy for winning. In many cases, the strategy would be driven by probability; for example, the next move might be a pawn, one space forward, because that yields the best probability of winning. A heuristic strategy is also possible; for example, the next move might be a rook because it may trick the opponent into a bad series of moves.
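A strategic decision module of this kind is often built on minimax search, sketched below in Python for a tiny made-up game tree; the moves and heuristic scores are illustrative assumptions and are far simpler than anything used in real chess programs.

```python
def minimax(state, maximizing, game_tree, scores):
    """Value of a state, assuming both players play optimally."""
    if state not in game_tree:              # leaf: return its heuristic score
        return scores[state]
    values = [minimax(child, not maximizing, game_tree, scores)
              for child in game_tree[state]]
    return max(values) if maximizing else min(values)

# Illustrative two-ply game tree with heuristic scores at the leaves.
game_tree = {"start": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": -2, "b2": 9}

# The agent picks the move whose worst-case outcome is best.
best = max(game_tree["start"],
           key=lambda move: minimax(move, False, game_tree, scores))
print(best)  # 'a': its worst case (3) beats move b's worst case (-2)
```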
Careers and Course Work
A major in computer science is the most common way to prepare for a career in artificial intelligence. One needs substantial coursework in mathematics, philosophy, and psychology as a background for this degree. For many of the more interesting jobs in artificial intelligence, one needs a master's or doctoral degree. Most universities teach courses in artificial intelligence, neural networks, or expert systems, and many have courses in all three. Although artificial intelligence is usually taught in computer science, it is also taught in mathematics, philosophy, psychology, and electrical engineering. Taking a strong minor in any field is advisable for someone seeking a career in artificial intelligence because the discipline is often applied to another field.
Those seeking careers in artificial intelligence generally take a position as a systems analyst or programmer. They work for a wide range of companies, including those developing business, mathematics, medical, and voice recognition applications. Those obtaining an advanced degree often take jobs in industrial, government, or university laboratories developing new areas of artificial intelligence.
Social Context, Ethics, and Future Prospects
Since artificial intelligence was defined by McCarthy in 1956, the discipline has had a number of ups and downs, but its future looks strong. Almost every commercial program has a help system, and increasingly, these help systems have a major artificial intelligence component. Health care is another area poised to make major use of artificial intelligence to improve the quality and reliability of the care provided, as well as to reduce its cost by providing expert advice on best practices in health care. Smartphones and other digital devices employ artificial intelligence for an array of applications, syncing the activities and requirements of their users.
Ethical questions have been raised about trying to build a machine that exhibits human intelligence. Many of the early researchers in artificial intelligence were interested in cognitive psychology and built symbolic models of intelligence that were considered unethical by some. Later, many artificial intelligence researchers developed neural models of intelligence that were not always deemed ethical. The social and ethical issues of artificial intelligence are nicely represented by HAL, the Heuristically programmed ALgorithmic computer, in Stanley Kubrick's 1968 film 2001: A Space Odyssey, which first works well with humans, then acts violently toward them, and is in the end deactivated.
Another important ethical question posed by artificial intelligence is the appropriateness of developing programs to collect information about users of a program. Intelligent agents are often embedded in websites to collect information about those using the site, generally without the permission of those using the website, and many experts question whether this should be done.
In the mid-to-late 2010s, fully autonomous self-driving cars were developed and tested in the United States. In 2018, an Uber self-driving car hit and killed a pedestrian in Tempe, Arizona. There was a safety driver at the wheel of the car, which was in self-driving mode at the time of the accident. While the accident led Uber to suspend its driverless-car testing program for a time, by the next year testing had resumed, initially at a smaller scale. Even before the accident occurred, ethicists had raised questions regarding collision avoidance programming and moral and legal responsibility. By mid-2020, companies such as Tesla had continued devoting resources to developing fully autonomous vehicles for eventual widespread use, but despite sustained technological advancements concerning artificial intelligence, projections about getting driverless cars on the road by that point had not been met. Commentators noted that though some related technology had been incorporated into cars in use, such as automatic braking, object sensitivity, and lane detection, a lack of training data meant that the proper technology had still not been perfected for a car to be able to drive on its own reliably. Still, Waymo, which had begun implementing a commercial self-driving ride-hailing service in the Phoenix area run through a smartphone application, had made efforts to further expand the service by 2020, and companies such as Waymo and Apple continued to improve their self-driving cars in the following years. In 2021, the company Starship announced that it had made two million successful deliveries with its delivery robots, while Alibaba had made one million. During the same year, three companies in China began deploying robotaxis without safety drivers.
As more complex AI is created and imbued with general, humanlike intelligence (instead of concentrated intelligence in a single area, such as Deep Blue and chess), it will run into moral requirements as humans do. According to researchers Nick Bostrom and Eliezer Yudkowsky, if an AI is given "cognitive work" to do that has a social aspect, the AI inherits the social requirements of these interactions. The AI then needs to be imbued with a sense of morality to interact in these situations. Bostrom has also theorized that if an AI has humanlike intelligence and agency, it will need to be considered both a person and a moral entity. There is also the potential for the development of superhuman intelligence in AI, which would breed superhuman morality. The questions of intelligence and morality and who is given personhood are some of the most significant issues to be considered as AI advances. Additionally, as AI technology, such as facial recognition and data algorithms, continued evolving and played even larger roles in society, some worried about a progressive erosion of privacy.
By 2021, following the declaration of the coronavirus disease 2019 (COVID-19) pandemic in early 2020, some healthcare facilities had been experimenting more with incorporating artificial intelligence models and algorithms into their treatment and monitoring processes in an effort to cope with the surge of illness, particularly when little was still known about the novel coronavirus and the disease it caused. However, debates still existed around such use of artificial intelligence in clinical settings, and some argued that there were unsolved ethical issues and that the data used by algorithms would need consistent updating, meaning that they should be used cautiously and not fully relied upon.
In 2022, AI research group OpenAI released ChatGPT, a chatbot powered by generative AI capable of generating complex responses to user prompts. In the months following its release, ChatGPT went viral for its potential real-world applications, including uses in business, research, and the classroom. However, several flaws were soon identified with the program, including potential factual unreliability in its responses. In January 2023, NBC News reported that the New York City Department of Education had banned the use of ChatGPT in the classroom due to the program's potential impact on student learning. Users discovered that ChatGPT could be used to write essays, solve complex problems, and generate computer code, among other uses, which caused fears among many that students could use the program to complete their coursework. The following month, OpenAI announced ChatGPT Plus, a subscription to an enhanced version of the chatbot with expanded features. By the start of 2024, ChatGPT had roughly 180.5 million users and a number of new capabilities, including image analysis and generation of voice responses. In May of that year, OpenAI further improved ChatGPT's capabilities with the launch of an updated model called GPT-4o, which promised faster results and improved text and audio capabilities, including a new conversational voice called "Sky." However, the company encountered controversy following its release of the update. Actor Scarlett Johansson accused OpenAI of using a likeness of her voice in its new voice assistant despite reportedly telling the company multiple times that she was not interested. In response, OpenAI removed the voice and hired a legal team to manage its growing legal challenges, which included intellectual property issues regarding the information ChatGPT drew on for its machine learning and the use of ChatGPT for plagiarism or other forms of academic dishonesty.
As access to different types of AI grew, AI executives and critics alike warned that AI could have a negative effect on job security and urged the government to begin regulating the AI industry. According to a 2023 estimate by Goldman Sachs, as reported by the New York Times, generative AI could automate activities equivalent to 300 million full-time jobs globally. In particular, experts worried that AI could eventually automate so many of the tasks in fields like administrative and clerical support, customer service, and technology that certain roles would become obsolete, displacing workers.
In addition, the safety of creative roles has also been questioned, as film and television companies like Netflix have begun experimenting with AI to use a person's likeness to create new scenes, change dialogue, resurrect deceased actors, or generate scripts. In May 2023, the Writers Guild of America (WGA) went on strike during a labor dispute with the Alliance of Motion Picture and Television Producers; one of its demands was that AI could be used only as a tool for research or to facilitate script ideas and could not be used to replace scriptwriters. The resolution of that strike in September 2023 resulted in writers securing a number of protections against AI replacement. The actors' union SAG-AFTRA also went on strike that year and won some similar protections.
Amid growing calls for government regulation of AI and the rapid progression of AI technology capabilities during the early 2020s, some government officials around the world took early steps toward regulation. For example, in October 2023, United States President Joe Biden signed an executive order which included a number of provisions establishing safeguards on AI use and development. The order mandated that AI developers share safety test results and other data with the federal government. It also directed the National Institute of Standards and Technology to set up guidelines to ensure the safety and security of AI technologies. However, the Biden administration considered the executive order to be a preliminary step toward a wider regulatory system for AI and called on lawmakers in the US and around the world to take further action.
One of the most human of activities is the creation and dissemination of music. In the mid-2020s, music became a prominent example of an area where ethical boundaries for AI needed to be established, as well as an area where AI stood to enable new kinds of creative work. Nonetheless, while innovation is a celebrated aspect of music history, machine-originated artistry has left many questioning its authenticity.
AI also has the capability to learn, mimic, and reproduce the creative artistry of humans in ways that enable counterfeit productions. This is analogous to a rogue painter creating fake artwork that passes for the work of an authentic human genius. In March 2024, the US state of Tennessee passed the ELVIS Act (Ensuring Likeness Voice and Image Security). The legislation's name is a tribute to Elvis Presley (1935-1977), one of the first rock-and-roll artists to attain global fame. The ELVIS Act makes it illegal to use AI to replicate a human artist's voice without permission.
Meanwhile, researchers continued to make advances in understanding how large language models (LLMs) work; despite having created them, companies like OpenAI did not fully understand how advanced LLMs such as ChatGPT operate. In May 2024, Anthropic, creator of the generative AI program Claude, revealed that researchers at the company had mapped part of its LLM, revealing important insights into how such programs work and how they might be made more useful and safer. Another development meant to make AI less dangerous was introduced the same month when several major tech companies, including Microsoft and OpenAI, agreed to implement an AI "kill switch" to ensure the safety of their AI models. The international agreement was made at the Seoul AI Safety Summit.
Bibliography
Basl, John. “The Ethics of Creating Artificial Consciousness.” American Philosophical Association Newsletters: Philosophy and Computers, vol. 13, no. 1, 2013, pp. 25–30. Philosophers Index with Full Text. Accessed 25 Feb. 2015.
Berlatsky, Noah. Artificial Intelligence. Greenhaven Press, 2011.
Boak, Josh, and Matt O'Brien. "Biden Wants to Move Fast on AI Safeguards and Signs an Executive Order to Address His Concerns." AP News, 30 Oct. 2023, apnews.com/article/biden-ai-artificial-intelligence-executive-order-cb86162000d894f238f28ac029005059. Accessed 16 July 2024.
Bostrom, Nick, and Eliezer Yudkowsky. “The Ethics of Artificial Intelligence.” Machine Intelligence Research Institute. MIRI. Accessed 23 Sept. 2016.
Goldberg, Emma. "A.I.’s Threat to Jobs Prompts Question of Who Protects Workers." The New York Times, 24 May 2023, www.nytimes.com/2023/05/23/business/jobs-protections-artificial-intelligence.html. Accessed 16 July 2024.
Hight, Jewly. “AI Music Isn’t Going Away. Here Are 4 Big Questions about What’s Next.” NPR, 25 Apr. 2024. Accessed 16 July 2024.
Lee, Timothy B. “Why It’s Time for Uber to Get Out of the Self-Driving Car Business.” Ars Technica, Condé Nast, 27 Mar. 2018, arstechnica.com/cars/2018/03/ubers-self-driving-car-project-is-struggling-the-company-should-sell-it/. Accessed 16 July 2024.
Metz, Cade. "OpenAI to Offer New Version of ChatGPT for a $20 Monthly Fee." The New York Times, 1 Feb. 2023, www.nytimes.com/2023/02/01/technology/openai-chatgpt-plus-subscription.html. Accessed 16 July 2024.
Morrison, Jim. "How Doctors Are Using Artificial Intelligence to Battle Covid-19." Smithsonian Magazine, 5 Mar. 2021, www.smithsonianmag.com/science-nature/how-doctors-are-using-artificial-intelligence-battle-covid-19-180977124/. Accessed 16 July 2024.
Piper, Kelsey. "It's 2020. Where Are Our Self-Driving Cars?" Vox, 28 Feb. 2020, www.vox.com/future-perfect/2020/2/14/21063487/self-driving-cars-autonomous-vehicles-waymo-cruise-uber. Accessed 16 July 2024.
Roeloffs, Mary Whitfill. “Artists Slam AI Developers for Using Music without Permission in Letter Signed by Kacey Musgraves, Billie Eilish and More.” Forbes, 2 Apr. 2024, www.forbes.com/sites/maryroeloffs/2024/04/02/artists-slam-ai-developers-for-using-music-without-permission-in-letter-signed-by-kacey-musgraves-billie-eilish-and-more/?sh=6bca8ea33d97. Accessed 16 July 2024.
Rosenberg, Scott. "Anthropic Scientists Map a Language Model's Brain." Axios, 24 May 2024, www.axios.com/2024/05/24/ai-llms-anthropic-research. Accessed 16 July 2024.
Rosenblatt, Kalhan. "ChatGPT Banned from New York City Public Schools' Devices and Networks." NBC News, 5 Jan. 2023, www.nbcnews.com/tech/tech-news/new-york-city-public-schools-ban-chatgpt-devices-networks-rcna64446. Accessed 6 Mar. 2023.
Rumelhart, David E., James L. McClelland, and the PDP Research Group. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Rpt. 2 vols. MIT P, 1989.
Russell, Stuart, and Peter Norvig. Artificial Intelligence: A Modern Approach. 3rd ed., Prentice Hall, 2010.
Templeton, Brad. "Self-Driving Cars 2021: Year in Review." Forbes, 3 Jan. 2022, www.forbes.com/sites/bradtempleton/2022/01/03/self-driving-cars-2021-year-in-review/?sh=4aa85563773b. Accessed 16 July 2024.
Watercutter, Angela. "The Hollywood Strikes Stopped AI From Taking Your Job. But for How Long?" Wired, 25 Dec. 2023, www.wired.com/story/hollywood-saved-your-job-from-ai-2023-will-it-last/. Accessed 16 July 2024.