Amplification

Type of physical science: Electromagnetism, Electrical circuits, Classical physics

Field of study: Electromagnetism

Amplification is the process of enhancing the power of a signal while maintaining its essential integrity against internal and external influences such as noise and distortion. A strict definition of amplification entails that the form of the signal—electrical, optical, thermal, acoustical, magnetic, mechanical, hydraulic—be the same at both the input and output. If the forms differ, the process is referred to as transduction.


Overview

"Amplification" refers to the process of enhancing the strength of any physical quantity, such as the tiny voltage generated by a microphone, or the small control current that ignites the engines of a propulsion system, or the feeble light of a twinkling star as it hits the human eye. What is being amplified is an input signal, the amplified version of which then constitutes an output signal. When the physical system carrying out the amplification process involves input and output signals that are in the same form, such as current, voltage, displacement, or light, the system is said to be an "amplifier." If the signals differ—for example, the input may be the current flowing through a loudspeaker, while the output is sound—then the system is referred to as a "transducer." A transducer at the source or signal-generation end serves as a "sensor," while the output transducer is called an "actuator." In between the sensor and the actuator, most physical systems incorporate an amplifier to magnify the signal. This demarcation between an amplifier and a transducer also helps one specify for an amplifier a dimensionless "gain" or "amplification factor," which is defined as the ratio of output-signal strength to input-signal strength.

Devices or systems that are distinguished as amplifiers must also cause an overall increase in signal power (energy per unit time) or energy from input to output—that is, the ratio of output power to input power, or "power gain," must be greater than unity. This stipulation is essential in defining true amplification, since it is possible to obtain a gain in one element of the signal (current or displacement) at the expense of a complementary element (voltage or force). An electrical transformer increases either the current or voltage, but not the product of the two (which represents electrical power); hence, a transformer is not an amplifier. In a similar vein, the fabled Archimedean lever that can lift the world is also not an amplifier. The excess signal power is not created by the amplifier but is simply drawn from a local energy source, or "bias," powering the amplifier. Note that the energy source need not always be external; it is now possible to integrate the elements of an electrochemical battery within a silicon-chip amplifier. Thus, a necessary but not sufficient condition of amplification is the presence of an energy source, or "pump," in the system in addition to the signals.
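The transformer argument can be sketched numerically. In this minimal example (all values invented for illustration), an ideal transformer steps the voltage up, yet the power gain remains exactly unity, so it fails the defining test of an amplifier:

```python
def ideal_transformer(v_in, i_in, turns_ratio):
    """Ideal transformer: voltage scales up by the turns ratio while
    current scales down by the same factor, conserving power."""
    return v_in * turns_ratio, i_in / turns_ratio

v_out, i_out = ideal_transformer(v_in=10.0, i_in=2.0, turns_ratio=2.0)

voltage_gain = v_out / 10.0                   # 2.0: looks like amplification
power_gain = (v_out * i_out) / (10.0 * 2.0)   # 1.0: no true power gain
```

The voltage gain of 2 is bought entirely at the expense of current, which is why the product of the two (power) is unchanged.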

The signal-conversion part of a system, as noted earlier, is readily handled by sensors and actuators at the input and output ends, so all that is needed is a "generic" amplifier in between to satisfy the overall system specifications. The question is, what physical entity is chosen to carry out this amplification—for example, mechanical displacement, fluid flow, light, or electrical current? The electron, the flow of which constitutes electric current, is the preeminent choice for two reasons: It has an extremely low mass, and it has a (negative) charge. The low mass means that the electron has low inertia and so will respond to extremely fast (or high-frequency) signals, while its charge implies that a simple voltage source or battery is all that is needed to act as a pump. The dominance of "electronics" in amplification essentially arises from these two facets—enormous speed and ease of control. In contrast, a "fluidic" system of amplification would entail a bulky hydraulic pump with complex sets of valves; even more important, such a system could handle only slowly varying signals. Incidentally, photons (quanta of light) can in principle operate at even higher speeds than electrons, but such photonic amplification systems lack the simplicity and versatility of electronic amplifiers.

In view of the above discussion, it may be seen that a complete measure of an amplifier cannot simply be its gain, since speed is also equally important. The maximum frequency that may be handled by an amplifier without a drop in its response or gain is called the "bandwidth"; the bandwidth (measured in hertz) is on the order of the inverse of the switching (off-to-on or on-to-off) time of the signal. According to communication theory, for example, the signal bandwidth is a direct measure of the amount of information transmitted or processed. Thus, a true figure of merit for an amplifier is the "gain-bandwidth product," which then forms a basis for comparing the performance of different systems.
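As a brief numerical sketch (the figures are invented), two amplifiers with very different gains and bandwidths can share the same figure of merit:

```python
def gain_bandwidth_product(gain, bandwidth_hz):
    """Figure of merit for comparing amplifier performance."""
    return gain * bandwidth_hz

# A slow, high-gain stage and a fast, low-gain stage can be
# equivalent by this measure:
slow_high_gain = gain_bandwidth_product(gain=1000, bandwidth_hz=10_000)
fast_low_gain = gain_bandwidth_product(gain=10, bandwidth_hz=1_000_000)
# both products equal 10_000_000
```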

An electrical circuit composed solely of "passive" circuit elements—resistors (which impede current flow), capacitors (which store charge), and inductors (which store magnetic field)—cannot be an amplifier, or an "active circuit," since the power gain in such a circuit will always be less than unity. Instead, an "active device" such as a transistor or the nearly extinct vacuum tube is needed to create an electronic amplifier. The ability to amplify signals is of surprisingly recent vintage; it was only in 1907 that Lee de Forest invented the vacuum triode, the first electronic amplifying device. It consisted of an evacuated tube with a heated cathode that emits electrons, an anode with a positive voltage to collect the negatively charged electrons, and a wire mesh called a "control grid" in between the two. By letting the input signal control the (retarding) voltage applied to the grid (just as a sluice gate would control water flow in a channel), a much larger anode current is controlled, thereby causing signal amplification. This seemingly simple device and its variations ushered in the electronic age, with rapid developments in radio, radar, telemetry, industrial control, avionics, and early computing.

Vacuum-based electronics remained supreme for nearly six decades, despite strong limitations such as bulkiness, the need for a vacuum and high supply voltages, high operating temperatures, and low reliability. The first challenge to the vacuum tube occurred in 1948, when a group of scientists studying the newly discovered solids known as semiconductors came upon an amplifier that seemed to eliminate all the disadvantages of the vacuum device. While studying the properties of the semiconductor germanium, John Bardeen, Walter Brattain, and William Shockley, working at Bell Labs in Murray Hill, New Jersey, discovered the "transistor" effect. Their transistor is also a three-terminal device with an "emitter" (of electrons), a "base" (into which the electrons are injected by the input signal), and a "collector" (which collects most of the injected electrons). Amplification occurs in a transistor because the (electron) current is transferred from a low-resistance emitter-base input circuit to a high-resistance collector-base output circuit. (The term "transistor" is derived from "transfer resistor.") Unlike in a vacuum amplifier, in a transistor the electrons never leave the solid material; thus, transistorized devices became known as "solid-state" devices. The transistor turned out to be a revolutionary invention that won the 1956 Nobel Prize in Physics for its inventors.

It was soon realized that the semiconductor silicon (apart from being abundant, as its source is common sand) had more attractive properties than germanium, and silicon became the preferred material for use in solid-state technology. An important consequence of the transistor was the miniaturization of many electronic systems, which resulted in a mushrooming of new applications—in computing and telecommunications in particular—that had previously been thought impossible. Also, the early, so-called bipolar transistor was soon eclipsed by the Metal-Oxide-Semiconductor Field Effect Transistor (MOSFET), in which an input signal applied to a "gate" terminal, separated from the silicon by an insulating oxide layer, induces a conducting "channel" that controls the output current flowing between the "source" and "drain" terminals. Thus the MOSFET is also a three-terminal amplifier, with the gate as input, the drain as output, and the source as the common terminal.

The basic operating principle in all the amplifying devices noted above is surprisingly similar. The key concepts behind amplification are "memory," or information storage, and "transit." The information that is stored is basically a record of the previous presence of an input signal. This record is interrogated repeatedly, and at the end of each interrogation, the result is transmitted to the output. Thus, the input signal is sampled and held in memory for infinitesimal periods of time, and during this memory period, an electron completes repeated transits around the external circuit. The gain of the amplifier thus is the number of trips made around the circuit by a given unit of information before the memory disappears—that is, the ratio of the "memory time" to the "transit time." Applying this concept to the bipolar transistor, for example, the current gain is simply the ratio of the injected carrier lifetime ("memory time") in the base region to the "transit time" across the base from the emitter to the collector. More exhaustive calculations confirm this to be true. Further, the bandwidth of the transistor is the inverse of the carrier lifetime, so that the gain-bandwidth product then is merely the inverse of the transit time.
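The memory/transit picture can be put into numbers. In this sketch the carrier lifetime and transit time are invented, illustrative values, not measured device parameters:

```python
# Illustrative values: a carrier lifetime ("memory time") one hundred
# times longer than the emitter-to-collector transit time.
transit_time_s = 1e-9
memory_time_s = 100 * transit_time_s

current_gain = memory_time_s / transit_time_s   # trips per stored carrier
bandwidth_hz = 1 / memory_time_s                # inverse of the lifetime
gbw = current_gain * bandwidth_hz               # works out to 1 / transit time
```

Note how the lifetime cancels in the product: the gain-bandwidth product depends only on the transit time, which is why shrinking devices (shortening the transit) improves this figure of merit.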

There are also other means of obtaining amplification. The use of "breakdown" of a semiconductor junction to generate additional electrons in proportion to the input signal by the so-called avalanche multiplication process is one. The use of a nonlinear reactive element such as an inductor or capacitor is another (the "parametric amplifier"). The latter is an attractive technique for amplifying high-frequency signals with very low added noise, since it does not involve use of an electrical resistor, the principal source of noise (because of the random thermal motion of electrons within a resistor) in a circuit. Parametric amplifiers are used in satellite receivers.

The natural outgrowth of miniaturization was the development of monolithic integration, in which all circuit elements, including transistors, are fabricated on a single silicon chip a few square centimeters in area. The continuing trend has been to reduce the dimensions of the device, including the gate length of the MOSFET. The electron transit time is directly proportional to the gate length, so the progressive reduction (to below a millionth of a meter) of this parameter translates to increased gain-bandwidth product. Correspondingly, the packing density also goes up; the 1996 Intel Pentium Pro chip contained a mind-boggling 5.5 million transistors. This process has resulted in the ever-improving performance of computers, with increasingly higher speeds and computational capabilities.
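The scaling argument can be sketched as well. Assuming, purely for illustration, a fixed carrier velocity, the transit time is proportional to gate length, so halving the gate length doubles the gain-bandwidth product:

```python
carrier_velocity = 1e5   # m/s; an assumed, illustrative carrier velocity

def gain_bandwidth_hz(gate_length_m):
    """Gain-bandwidth product as the inverse of the gate transit time."""
    transit_s = gate_length_m / carrier_velocity
    return 1.0 / transit_s

gbw_one_micron = gain_bandwidth_hz(1e-6)
gbw_half_micron = gain_bandwidth_hz(0.5e-6)   # twice gbw_one_micron
```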

Applications

Even before the advent of the microchip, electronic amplification had become an essential operation in countless industrial, commercial, and household applications. An amplifier may contain one or more of the passive circuit elements—resistors, capacitors, and inductors—but it must have at least one active element such as a transistor to obtain the requisite power gain. Amplification in any system typically proceeds in several stages, with each gain stage operating at the same or a different frequency range. Most stages provide voltage or current amplification, with the final "power amplifier" providing much of the power gain. For example, the basic radio receiver contains several stages of amplification, covering a wide range of power as well as frequency. The feeble signal (of variable frequency, corresponding to different stations) received by the antenna is first amplified at the "radio frequency" (RF), then converted to a fixed "intermediate frequency" (IF) and amplified further before being detected and amplified to deliver the high-power "audio frequency" (AF) signal into the loudspeakers. The situation in a television is qualitatively similar, except that the information content—a continuously varying spatial image—is much higher, requiring a much wider bandwidth. A television also needs two sets of amplifiers—one to handle the audio and the other to handle the video signal.
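The overall gain of such a multistage chain is the product of the individual stage gains, or equivalently their sum in decibels. A minimal sketch with invented power gains for the RF, IF, and AF stages:

```python
import math

stage_power_gains = [100, 1000, 50]   # illustrative RF, IF, AF power gains

total_gain = math.prod(stage_power_gains)     # product of the stage gains
total_gain_db = 10 * math.log10(total_gain)   # roughly 67 dB
```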

In applications such as radio, television, and stereo systems, the fidelity of signal amplification is of great importance. Signal fidelity is impaired mainly by "noise" and distortion. Electronic noise is any random source of current or voltage that interferes with the information-bearing signal. It can arise both during transmission (over the air or through a cable, for example) and during amplification. Any amplifier adds its own noise, so that the signal-to-noise ratio (SNR) at the output will always be less than at the input. A measure of the quality of an amplifier is the "noise figure," which is defined as the ratio of SNR at the input to that at the output. The noise figure of any practical amplifier will be greater than the ideal value of unity. Distortion is the corruption of the signal by the amplifier as a result of its "nonlinear" response, particularly at high voltage and power levels such as in the output stage; distortion makes the output signal deviate in shape from that of the source. Thus, the minimum and maximum levels of signals that can be handled by an amplifying system are determined by noise and distortion, respectively. These define the "dynamic range" of an amplifier. An amplifier can do nothing to offset the noise coming with the signal, but careful design using low-noise components (or lowering the component temperature) will minimize the noise contribution of the amplifier itself. Distortion is controlled by circuit techniques, including the use of "feedback," a powerful concept in circuits and systems.
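The noise-figure definition translates directly into a small sketch; the SNR values below are invented for illustration:

```python
def noise_figure(snr_in, snr_out):
    """Noise figure: input SNR divided by output SNR.
    Any practical amplifier adds noise, so the result exceeds unity."""
    return snr_in / snr_out

# An amplifier that degrades a 1000:1 input SNR to 800:1 at its output:
f = noise_figure(snr_in=1000.0, snr_out=800.0)   # 1.25
```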

Telecommunications through open channels (such as microwave relays) as well as closed ones (such as buried or undersea fiber-optic cable) require periodic amplification of the attenuated signals to replenish them on their way to their destinations. This is done in "repeaters," which are amplifiers with added detectors and emitters to perform the necessary microwave-electrical or optical-electrical signal conversion. The TAT-9 undersea optical-fiber system installed in 1991 linking North America with Europe can carry eighty thousand voice channels and has repeaters spaced every seventy-five miles. A noteworthy recent development is the "doped optical fiber amplifier," which can amplify the optical signal directly without conversion to an electrical signal; the local source of energy is a laser "pump."
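A simple link-budget sketch shows what repeater spacing implies. The per-mile loss below is an assumed figure for illustration, not a TAT-9 specification; only the seventy-five-mile spacing comes from the text:

```python
fiber_loss_db_per_mile = 0.35   # assumed cable attenuation (illustrative)
repeater_spacing_miles = 75     # spacing cited for the TAT-9 system

# Signal power lost over one span, which each repeater must restore:
span_loss_db = fiber_loss_db_per_mile * repeater_spacing_miles
repeater_gain_db = span_loss_db
```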

Not all applications of amplification require the output to be a linear replica of the input. For example, in controlling a lamp dimmer, what is needed is a simple correspondence between the control input (turning of a knob or continual pressing of a button) and the electrical power fed to the lamp; the input and output wave shapes are inconsequential. Numerous industrial control systems use such amplifiers, which are classed as nonlinear amplifiers.

A particularly handy form of amplifier, available in modular form on a silicon chip, is the so-called operational amplifier or "op amp." It is a high-gain (a million times or more), high-input-resistance amplifier with only input, output, and power-supply terminals. Originally developed for performing "analog" computation, it has now become an extremely versatile building block with which numerous different circuit functions (such as integration, summation, and multiplication) can be realized, usually with feedback. Extremely simple to use, op amps enable circuit construction with only a modicum of familiarity with electronics.
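The central op-amp idea can be sketched numerically: with negative feedback, the closed-loop gain is set almost entirely by two external resistors rather than by the enormous open-loop gain. The configuration is the standard non-inverting one; the resistor values are illustrative:

```python
def noninverting_gain(open_loop_gain, r_feedback, r_ground):
    """Closed-loop gain of the non-inverting configuration:
    A / (1 + A*beta), where beta is the feedback fraction."""
    beta = r_ground / (r_ground + r_feedback)
    return open_loop_gain / (1 + open_loop_gain * beta)

# With a million-fold open-loop gain, the result is essentially
# 1 + r_feedback / r_ground = 100, independent of the exact open-loop gain:
g = noninverting_gain(open_loop_gain=1e6, r_feedback=99e3, r_ground=1e3)
```

This insensitivity to the open-loop gain is precisely why feedback makes the op amp such a predictable building block.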

An amplifier is also the essential part of an oscillator, which is a source of sinusoidal voltage or current of fixed amplitude and frequency. The oscillator is a constitutive element of many electronic systems used to carry out signal generation, conversion, or processing. In these applications, the amplifier is used in a feedback configuration, but in a destabilizing or regenerative way to provide a "positive" feedback. In effect, the gain of these amplifiers tends to infinity, resulting in an output "signal" without any input. The oscillator thus becomes a waveform source.
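The regenerative limit can be sketched with the closed-loop gain formula for positive feedback, A / (1 - A*beta), which diverges as the loop gain A*beta approaches unity (the values below are illustrative):

```python
def regenerative_gain(a, beta):
    """Closed-loop gain with positive feedback: diverges as a*beta -> 1."""
    return a / (1 - a * beta)

g_half = regenerative_gain(a=10, beta=0.05)    # loop gain 0.50 -> gain 20
g_near = regenerative_gain(a=10, beta=0.099)   # loop gain 0.99 -> gain ~1000
```

At a loop gain of exactly unity the denominator vanishes: the circuit sustains an output with no input at all, which is the oscillator condition described above.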

Context

Amplification has physical significance well beyond the applications cited above. For example, the basic nature of amplification is used in designing systems for automatic pattern recognition. Recognition of a visual image—say, a small part of an aerial photograph containing perhaps a hundred million resolvable image points—is carried out by the human eye with ease, whereas this process would strain the capabilities of even the highest-speed computers. This is because the human eye solves this problem by handling the data more slowly, but in parallel. Here again, the energy of the absorbed photon must be multiplied a millionfold before it can trigger a nerve pulse. The ingenious amplifier that nature has designed remains a puzzle. Parallel processing, however, is a technique that has been adopted in the design of recent ultra-high-speed supercomputers.

The process of photography may also be thought of in terms of the interaction of a single photon with a molecule that initiates a chemical reaction, which then triggers a chain reaction involving numerous nearby molecules—again, a process of signal amplification. In an electrochemical cell with two platinum electrodes immersed in a normally nonconducting solution, no current flows when a voltage is applied. However, if a small amount of iodine is introduced in the solution (the input), it causes reactions at the platinum electrodes, causing a current to flow. The amount of current (the output) is proportional to the amount of iodine, and the current continues to flow until the iodine is removed. The natural decay of the memorized input signal is very slow—a period of days—so that almost infinite amplification can be obtained in this liquid-state amplifier. The penalty, however, is that the process would require almost infinite time.

Amplification is also inherent to lasers (a term derived from the acronym for "light amplification by stimulated emission of radiation"), in which an internal optical gain is essential to offset losses and thereby obtain emission of coherent radiation. Laser amplification occurs by forcing a higher occupancy of electrons in upper-energy levels (in an atom or a solid) than in the equilibrium lower-energy levels. This "population inversion" is achieved with a local power source (pump) and results in a synchronized, stimulated light emission.

Amplification is closely related to the phenomenon of "negative resistance" seen in a number of solid-state device structures. Whereas the usual positive resistance dissipates power, a negative resistance "generates" power (with the assistance of a local source); thus there is amplification of signal power. Quantum-mechanical tunneling across a very narrow barrier is responsible for negative resistance in devices such as the tunnel diode. Since electron tunneling is extremely fast, these devices can operate at frequencies up to several gigahertz (billions of hertz).

Cutting-edge amplifiers based on quantum effects include the single-electron transistor, which uses "quantum dots" (collections of hundreds of atoms each) synthesized by spontaneous self-assembly, and resonant-tunneling transistors, in which tunneling occurs only at well-defined electron energies.

Principal terms

bandwidth: the range of frequency over which a system offers a substantially flat response or gain; bandwidth has the same units as frequency

frequency: the number of oscillations occurring during a unit of time; the unit of frequency is hertz (Hz), defined as one oscillation per second

gain: the dimensionless ratio of strength (typically, voltage, current, or power) of an output signal to that of the input signal

noise: any of the naturally occurring fluctuations in physical measurables (such as voltage or current) that interfere with signals

photon: the quantum of electromagnetic radiation, with energy inversely related to the wavelength

signal: any time-dependent, information-bearing entity such as voltage, current, or displacement

Bibliography

Hawkins, J. K. IEEE Student Journal. Vol. 8. New York: IEEE, 1970. An original perspective on the basic principles underlying the phenomenon of amplification. The relationship between memory and transit in a device, as well as the role of amplification in human vision and pattern recognition, are discussed.

Pierce, John R., and A. Michael Noll. Signals. New York: Scientific American Library, 1990. This is a superbly written survey of the science of telecommunications for the interested layperson. The basic principles of communications, the mathematical concepts as well as the physical systems doing the job, are covered elegantly, with ample illustrations and appropriate historical notes. The first author was a distinguished scientist at the famed Bell Labs and witnessed firsthand many a technical development that has ushered in miracles of late-twentieth-century information technology.

Shea, R. F., ed. Amplifier Handbook. New York: McGraw-Hill, 1966. An enormous compendium of information about amplifiers. While this work was published before the microchip revolution, the advances in amplifiers have been essentially evolutionary; thus, this handbook still covers the essential principles relating to amplification.

Turton, R. The Quantum Dot. New York: Oxford University Press, 1995. A highly descriptive, nonmathematical introduction to solid-state theory and applications. Both established and futuristic microelectronic devices are discussed with simple illustrations.

By S. Ashok