Signal Processing
Signal processing is a field that involves the analysis, manipulation, and interpretation of signals, which are physical variables that convey information and can vary over time, space, or other dimensions. Signals can be one-dimensional, such as sound waves or temperature readings, and are typically affected by noise, which can obscure or distort the original information. The discipline encompasses both analogue and digital signal processing, with digital methods offering advantages in speed, reliability, and flexibility. Digital signal processing often requires converting analogue signals into discrete formats through a process called sampling.
Key techniques in signal processing include filtering, which separates desired signals from noise based on frequency characteristics, and convolution, which combines two signals to determine how one modifies or responds to the other. The Fourier transform is a fundamental tool that breaks down signals into their constituent frequency components, aiding analysis and interpretation. Applications of signal processing span various fields, including telecommunications, biomedical engineering, and geophysics, facilitating tasks such as signal detection, restoration, and characterization. The evolution of digital signal processing has been significantly enhanced by advances in computing technology, leading to faster and more efficient methods of handling complex signal data across numerous practical applications.
Subject Terms
Signal Processing
Type of physical science: Mathematical methods
Field of study: Signal processing
Signal processing comprises statistical and model-based methods for detecting, characterizing, enhancing, and extracting information from signals in the presence of noise.


Overview
Any physical variable that carries energy or represents a message, pattern, or piece of information in a communication or measurement system, and that varies with time, space, and/or other independent variables, can be called a "signal." Measurements of temperature versus atmospheric height, sound pressure versus time in speech, blood velocity as a function of location in an artery, and electromagnetic radio waves and Morse code versus time are all examples of (one-dimensional) signals. Signal processing comprises the interdisciplinary methodology and technologies concerned with the mathematical and numerical manipulation of data and signals, incorporating several data-processing stages to detect and measure the presence, source, characteristics, and information content of diverse signals.
In contrast to purely analogue or optical signal processing, the stages of the most common forms of digital signal processing are implemented in hardware and software using simple operations such as electronic storage and delay, addition, subtraction, and multiplication to extract and estimate a wide variety of important signal measures and information types. Digital signal processing offers lower cost and greater speed, reliability, flexibility, modularity, and accuracy than other signal-processing techniques.
Most signals in science and engineering are innately analogue--that is, functions of a continuous variable, such as wave heights at a harbor buoy over time, defined for every instant and taking on any value in a continuous interval (Amin, Amax). Other signal types, such as telegraph outputs, are intrinsically digital, while still others, such as radar and television signals, have both analogue and digital components. For digital processing of analogue signals, it is necessary first to employ analogue-to-digital (A/D) conversion. Discrete signals are signals that, either intrinsically or after A/D conversion, are defined only for a particular set of time instants or sampling occurrences. Selecting specific values of an analogue signal at discrete time instants is termed sampling. Examples of discrete signals include hourly temperatures at a given location and a record of whether eyes are open or shut versus time. The first illustrates a sampled-data signal, in which an intrinsically continuous range of amplitude values is quantized by the measuring process into a series of finite steps; the second illustrates a signal that is intrinsically discrete (that is, binary).
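The two steps just described, sampling at discrete instants and quantizing each sampled amplitude into a finite number of steps, can be sketched briefly in Python with NumPy. The 5-hertz sine wave, the 50-sample-per-second rate, and the 16 quantization levels below are illustrative assumptions rather than values drawn from the text.

    import numpy as np

    # Illustrative assumptions: a 5-hertz "analogue" sine wave sampled 50 times per second
    fs = 50.0                                   # sampling rate (samples per second)
    t = np.arange(0.0, 1.0, 1.0 / fs)           # discrete sampling instants over one second
    sampled = np.sin(2.0 * np.pi * 5.0 * t)     # samples of the continuous signal

    # Quantize the sampled amplitudes into a finite number of steps (here 16 levels)
    a_min, a_max, levels = -1.0, 1.0, 16
    step = (a_max - a_min) / levels
    quantized = a_min + step * np.round((sampled - a_min) / step)

    print(t[:3])          # first few sampling instants
    print(quantized[:3])  # corresponding quantized amplitudes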
The information-related characteristics of continuous and discrete signals can be any of a broad spectrum of features, such as signal shapes, time durations, amplitudes, and other physically measurable and statistical properties. Most signals are contaminated by or mixed with various kinds of noise or other unwanted signals, arising either before or after signal generation, propagation, and reception. Noise can be considered as the more or less jumbled background or extra (additive) features that distort, degrade, or otherwise complicate the original signal. Examples of different types of noise include radio-wave static from lightning discharges, as well as electronic scrambling, encryption, and jamming countermeasures in military radar and sonar.
An important conceptual as well as operational model for digital signal processing is that of the so-called generalized communication system, developed primarily by Norbert Wiener and Claude Shannon. In this model, an information "source" generator transmits a signal, together with source- and signal-generated noise, through a "channel" that contributes additional noise, to a final "receiver" array having varying degrees of signal reliability and noise. A system can be defined as a physical device or software algorithm that carries out a particular signal-processing operation or stage; to pass a signal through a system is to have "processed" it. The digital signal-processing system may be a small microprocessor or a smaller hard-wired digital processor, programmed to perform the desired operations on an input signal. The specific information content of any signal can be quantified using mathematical measures of signal properties such as amplitude, phase, velocity, direction, polarization, bandwidth, or coherency. Measured signals--even from many well-known physical and physiological processes--are generally best modeled, however, as products of one or more random processes in a generalized communication system. Especially when an exact functional relationship is unknown or too complicated for practical use, such signal measures are frequently formulated in terms of the theory of probability and statistics.
Shannon's probabilistic information theory, presented in THE MATHEMATICAL THEORY OF COMMUNICATION (1949), established a quantitative measure of the amount of information carried in a message signal. The degree to which messages are unpredictable is taken as the measure of information, or entropy, of the source. If the source produces n messages with equal probabilities, the amount of information is defined as log2 n bits per message. Shannon's characterization of a message's information rate does not deal with the specific meaning or content of the messages themselves; his information rate is a measure of how difficult it is to transmit a message without error. In this connection, Shannon proved a fundamental theorem, which states that if the source rate is less than or equal to the channel capacity, messages from a source can in principle be transmitted over the channel without error. The Nyquist sampling theorem states that any continuous signal containing no significant frequency components above f cycles per second can in principle be recovered from its sampled version if the sampling interval is less than 1/(2f) seconds. Another important reciprocal relation is that between the time duration and frequency bandwidth of a signal.
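Both quantities mentioned here reduce to simple arithmetic, as the following minimal Python sketch shows; the eight-message source, the 4,000-hertz band limit, and the sampling interval are assumed values chosen only for illustration.

    import math

    # Shannon's measure for a source of n equally probable messages: log2(n) bits per message
    n_messages = 8
    bits_per_message = math.log2(n_messages)          # 3.0 bits

    # Nyquist criterion: a signal with no components above f_max cycles per second
    # can be recovered if the sampling interval is shorter than 1/(2 * f_max) seconds
    f_max = 4000.0                                    # assumed band limit in hertz
    sampling_interval = 1.0 / 10000.0                 # assumed sampling interval in seconds
    print(bits_per_message, sampling_interval < 1.0 / (2.0 * f_max))   # 3.0 True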
The two basic tools of digital signal processing are linear convolution and the fast Fourier transform (FFT). Convolution is the "folding" of one function into another: a time-reversed version of one function is shifted past the other, the two are multiplied point by point, and the products are summed. The FFT is the computationally fast form of the discrete Fourier transform, itself the approximate Fourier representation of signals that are discrete and of finite duration in both the time and the frequency domains. The FFT readily permits time/phase shifting, conjugation, convolution, multiplication, addition, and many other computational building blocks of more complicated digital signal-processing operations.
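The equivalence of direct linear convolution and convolution computed through the FFT (after zero-padding both sequences to the full output length) can be checked with a short Python/NumPy sketch; the two short sequences below are arbitrary illustrative choices.

    import numpy as np

    f = np.array([1.0, 2.0, 3.0, 4.0])      # illustrative signal
    g = np.array([0.5, 0.25, 0.25])         # illustrative second signal (e.g., an impulse response)

    # Direct linear convolution: time-reverse one signal, shift, multiply, and sum
    direct = np.convolve(f, g)

    # The same result via the FFT: zero-pad to the full output length,
    # multiply the two spectra, and transform back
    n = len(f) + len(g) - 1
    via_fft = np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n)

    print(np.allclose(direct, via_fft))     # True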
Mixing and filtering are other basic linear signal-processing procedures. Mixing is the addition of two or more signal waveforms. Filtering, by contrast, implies separating one signal from another on the basis of differences in their frequency spectra or some other signal property. Here a filter is the frequency-selective component implicit in a very wide variety of practical signal processors. For example, a signal having a nonzero average level frequently has this "d.c." level removed by filtering. Filters are generally characterized by their impulse response or transfer function. The impulse response is defined as the system's or filter's response to a unit impulse (the Dirac delta function, which is zero at every time except t = 0).
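A brief Python/NumPy sketch of two of these ideas follows: removing a nonzero "d.c." level by subtracting the mean, and obtaining a filter's impulse response by applying it to a unit impulse. The five-point signal and the three-point averaging filter are assumed examples, not filters specified in the text.

    import numpy as np

    x = np.array([3.0, 4.0, 5.0, 4.0, 3.0])     # illustrative signal with a nonzero average ("d.c.") level
    x_no_dc = x - x.mean()                      # d.c. level removed

    # Impulse response of an assumed three-point averaging filter, obtained by
    # applying the filter to a unit impulse (1 at t = 0, 0 at every other sample)
    impulse = np.zeros(8)
    impulse[0] = 1.0
    h = np.convolve(impulse, np.ones(3) / 3.0)[:len(impulse)]

    print(round(x_no_dc.mean(), 12))   # 0.0
    print(h[:4])                       # [0.3333 0.3333 0.3333 0.    ]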
A key problem is that of initially recognizing, or detecting, a signal pattern that has become mixed or buried in noise. Signal recovery means the extraction of a given signal from noise when the signal waveshape is unknown. Signal detection is extraction where the signal waveshape and/or noise properties are already known. Signal prediction means forecasting a random signal on the basis of its history and its known statistical properties. Improvement of the signal-to-noise ratio depends strongly on the frequency range or bandwidth of the noise; if it is the same as that of the signal, there will generally be no improvement. The improvement in signal-to-noise ratio (SNR) is defined as the ratio of peak signal output to peak signal input, multiplied by the ratio of root-mean-square noise input to root-mean-square noise output.
A simple band-pass filter can improve the SNR by passing a particular band of frequencies and rejecting those outside the band. It is frequently helpful in signal detection and analysis to smooth the raw data by reducing rapid fluctuations caused by noise, thereby revealing basic underlying trends. A common smoothing technique from statistics is to estimate a moving average. Each smoothed data value is computed as the average of a number of previous raw-data values, and the process is repeated sample by sample throughout the entire record. Such a computation is said to be "non-recursive," because each output sample is calculated only from input values, never from previous output values.
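A minimal Python/NumPy sketch of such a non-recursive moving average appears below; the five-sample window and the noisy sinusoidal test record are illustrative assumptions.

    import numpy as np

    def moving_average(x, window=5):
        # Non-recursive smoother: each output is the mean of the current and previous inputs
        out = np.empty(len(x))
        for i in range(len(x)):
            start = max(0, i - window + 1)
            out[i] = x[start:i + 1].mean()
        return out

    rng = np.random.default_rng(0)
    raw = np.sin(np.linspace(0.0, 4.0 * np.pi, 200)) + 0.3 * rng.standard_normal(200)
    smoothed = moving_average(raw, window=5)     # rapid noise fluctuations are reduced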
Wide and narrow band signals and noise indicate whether signal or noise energies are distributed over a wide or narrow band of frequencies. Given the history, present value, and autocorrelation or spectral properties of a random signal, it is possible to define, for example, a Wiener or Kalman filter that, on average, gives the best possible prediction of the signal's next value in the sense of minimizing the mean square error between the desired filter output (pure signal) and its actual output (signal plus noise). Wiener filters are basically nonrecursive filters, whereas Kalman filters employ recursive techniques to achieve the same ends; a filter that calculates a new output value using one or more previous outputs is called "recursive." The design of SNR optimum filters for processing signals in noise is designated "signal estimation."
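The recursive idea can be illustrated with a sketch far simpler than a Wiener or Kalman filter: a one-pole smoother whose new output is computed from the previous output and the current input. The smoothing coefficient and the noisy constant-level test signal are assumptions made only for illustration.

    import numpy as np

    def recursive_smoother(x, alpha=0.1):
        # Recursive (one-pole) filter: y[n] = (1 - alpha) * y[n - 1] + alpha * x[n]
        y = np.empty(len(x))
        y[0] = x[0]
        for n in range(1, len(x)):
            y[n] = (1.0 - alpha) * y[n - 1] + alpha * x[n]
        return y

    rng = np.random.default_rng(1)
    noisy = 1.0 + 0.5 * rng.standard_normal(500)     # constant signal plus noise
    print(round(recursive_smoother(noisy)[-1], 2))   # settles near 1.0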
Two of the earliest and most widespread ways to describe a signal quantitatively are to specify, analytically or graphically, how the measured signal varies with its physical parameters, and to identify the number and amplitude of its simpler component waveforms--its frequency spectrum. "Frequency" is the number of complete cycles of a signal per unit time. "Phase" is given by the time at which a waveform changes from a negative to a positive value. The frequency spectrum of a signal is obtained by graphing the maximum amplitudes of all of its frequency components. If a signal is a single sine wave, its frequency spectrum has a single spike at that frequency. More generally, for a multifrequency pulse, the spectrum comprises a series of many different spikes. The best-known example of this general technique is the Fourier series and Fourier transform representations, in which any repetitive or aperiodic signal, respectively, can be broken down into an additive or integral weighted summation of harmonically related sinusoidal components. In this way, a practical signal, whatever its shape, can always be analyzed into, or synthesized from, a set of sine and cosine components with appropriate amplitudes at each frequency. Thus, the magnitude and angle of the complex output values of the discrete or fast Fourier transform of a signal give its amplitude and phase spectra, respectively. In practice, approximation of a signal by a finite number of frequency components gives a statistical "best fit" in the least-squares sense.
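A short Python/NumPy sketch makes the last point concrete: the amplitude and phase spectra are obtained from the magnitude and angle of the FFT's complex output. The 10-hertz test sinusoid, its 45-degree phase offset, and the 128-sample-per-second rate are illustrative assumptions.

    import numpy as np

    fs = 128.0
    t = np.arange(0.0, 1.0, 1.0 / fs)
    x = np.cos(2.0 * np.pi * 10.0 * t + np.pi / 4.0)   # assumed 10 Hz sinusoid with a 45-degree phase offset

    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    amplitude = 2.0 * np.abs(X) / len(x)               # amplitude spectrum
    phase = np.angle(X)                                 # phase spectrum, in radians

    k = int(np.argmax(amplitude))
    print(freqs[k], round(amplitude[k], 3), round(phase[k], 3))   # 10.0 1.0 0.785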
Applications
Signal processing is an interdisciplinary collection of tools in increasingly broad use across nearly all fields of physical, engineering, medical, and other sciences. Digital signal processing, or time-series analysis, involves the extraction of a flow of messages or information from a background of unwanted signals and random noise, and the study of both their statistical distribution and their relationships in time. These analyses involve the use of Fourier series and integral transforms; auto- and cross-correlation; amplitude-, phase-, and power-spectral estimates; and frequency and predictive filters, among other computations.
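Two of the computations listed, the autocorrelation function and a raw (periodogram) power-spectral estimate, can be sketched as follows in Python with NumPy; the 5-hertz tone buried in noise is an assumed test signal.

    import numpy as np

    rng = np.random.default_rng(2)
    fs = 100.0
    t = np.arange(0.0, 10.0, 1.0 / fs)
    x = np.sin(2.0 * np.pi * 5.0 * t) + rng.standard_normal(len(t))   # assumed 5 Hz tone in noise

    # Autocorrelation (normalized so that the zero-lag value is 1)
    xc = x - x.mean()
    acf = np.correlate(xc, xc, mode='full')[len(xc) - 1:]
    acf = acf / acf[0]

    # Periodogram: squared FFT magnitude as a raw power-spectral estimate
    psd = np.abs(np.fft.rfft(xc)) ** 2 / len(xc)
    freqs = np.fft.rfftfreq(len(xc), 1.0 / fs)
    print(freqs[np.argmax(psd)])   # approximately 5.0 Hz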
System design and the modification of sensors require knowledge of the signal's characteristics. Signal detection is the identification and extraction of a signal in the presence of noise, such as distinguishing an airplane's true radar blip from a background of bird flocks and ground clutter.
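A minimal sketch of detection with a known waveshape, in the spirit of a matched filter, correlates the received record against the known pulse and picks the largest peak. The pulse shape, its position at sample 120, and the noise level in this Python/NumPy example are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    template = np.array([0.0, 2.0, 4.0, 6.0, 4.0, 2.0, 0.0])   # assumed known pulse shape

    received = rng.standard_normal(200)                        # background noise ("clutter")
    received[120:120 + len(template)] += template              # bury the known pulse at sample 120

    # Correlate the received record against the known waveshape and pick the largest peak
    detection = np.correlate(received, template, mode='valid')
    print(int(np.argmax(detection)))                           # near 120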
Signal characterization seeks to extract specific features or information of special interest, such as the instantaneous heart rate from an electrocardiogram. Signal restoration involves the enhancement of signals that have been degraded by recording, storage, and/or transmission stages; well-known examples include removing low- and high-frequency noise from old wax-cylinder recordings of speech and music.
A typical signal-processing sequence for measured time series in physical oceanography, geophysics, hydrology, sonar, and radar applications might include interpolation, decimation, demultiplexing, inverse filtering/deconvolution, and spectral estimation. Demultiplexing sorts multiple-sensor data into a coherent multichannel sequence. Reformatting data from its original format to that of the processing computer is an important translation-like operation. Gain recovery is sometimes applied to compensate for amplitude reduction or clipping in recorders with a limited dynamic range. Bandpass filtering removes those low, middle, or high frequencies most strongly corrupted by noise, provided that doing so causes only minor loss of signal.
White noise is a stationary time series with zero autocorrelation at all nonzero lags--that is, a flat frequency spectrum. Noise from many physical processes can depend strongly on the wavelengths or frequencies of the signal in question; long-wavelength (red) and short-wavelength (blue) noise is common in many meteorologic and geophysical phenomena.
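The defining properties of white noise can be checked numerically, as in the following Python/NumPy sketch: the estimated autocorrelation is 1 at zero lag and close to 0 at every other lag, and the power is spread roughly evenly across frequency. The record length and random seed are arbitrary.

    import numpy as np

    rng = np.random.default_rng(4)
    w = rng.standard_normal(4096)                  # one realization of white noise
    w = w - w.mean()

    # Autocorrelation: 1 at zero lag, near 0 at nonzero lags
    acf = np.correlate(w, w, mode='full')[len(w) - 1:] / np.dot(w, w)
    print(round(acf[0], 3), round(float(np.max(np.abs(acf[1:100]))), 3))

    # Power is spread evenly across frequency: low and high halves of the spectrum carry similar energy
    spectrum = np.abs(np.fft.rfft(w)) ** 2
    half = len(spectrum) // 2
    print(round(spectrum[:half].mean() / spectrum[half:].mean(), 2))   # close to 1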
Another problem is stationarity. A stationary time series has statistical properties that do not change with time or spatial location, so that a finite-length segment represents the same processes as an infinite-length time series. A large class of statistically stationary processes, called ergodic processes, are defined as those whose local properties equate with their global properties (for example, the average heights of waves in one part of a lake will be the same as the average heights of waves in another part of the same lake).
Context
Although the basic mathematical principles of statistical correlation, Fourier, and periodogram analysis had been available since the mid-nineteenth century, computationally useful forms of these and other signal-processing techniques date only from the late World War II era. Here the publications of Hurewicz (1947) on Laplace transforms for digital filtering and of Norbert Wiener (1949) on Fourier methods for the extrapolation, interpolation, and smoothing of time series were signal accomplishments.
Signal-processing theory accompanied the development of advanced radio, radar, telephone, and seismologic-acoustic transceivers, employing electronic circuits and particularly digital computing. The early works of Blackman and Tukey (1958) on power spectral estimation in electrical engineering and of Enders Robinson (1962) on least-squares and other optimum filters in radar and seismology were among the first texts explicitly addressing "signal processing" as an identifiable methodology. Some of the earliest physical-science applications of power spectral analysis to one- and two-dimensional time series occurred in oceanography, in studying the spatial and temporal regularities of sea surf caused by local and distant storms. Two-dimensional cross-correlation and power spectral functions were also widely employed to detect and characterize the spatial and temporal trends of bathymetric, gravimetric, magnetic, and ocean-wave patterns. Cross-correlations were determined by transforming each matrix of two-dimensional field data with a fast Fourier transform algorithm, filtering out d.c. components, and computing spectral measures. Also important was early seismologic work in studying the long-period free oscillations of the earth.
The increasing availability of digital hardware, computers, microprocessors, and hard-wired digital signal-processing electronics has created new possibilities for signal recovery and analysis. Probably the most notable developments in DSP are application-specific integrated circuits, in which programming flexibility is traded for processing speed--for example, in seismic, radar, and image processing. Over the past thirty years, digital signal processing has evolved rapidly in many areas of science and engineering. This rapid development is largely the result of major advances in digital computer technology and in integrated-circuit design and fabrication. Medium- and very-large-scale integration (MSI/VLSI) of electronic circuits such as gates, flip-flops, multiplexers, A/D converters, and especially microprocessors has aided the development of smaller, more powerful, faster, and cheaper digital computers and special-purpose DSP hardware, especially for high-speed real-time signal processing. VLSI typically refers to chips having more than 100,000 component devices. Once microprocessor development placed a complete signal-processing system on a single IC chip, the resulting higher speed and improved reliability opened numerous new signal-processing applications.
Other developments include the trend toward "parallel" computational processing, which identifies the serial computational bottlenecks restricting data throughput and substitutes a number of simultaneous calculations. A second, related trend is toward so-called distributed signal processing, in which, instead of being concentrated in a single location or device, processing power is shared among a number of peripheral microprocessors or application-specific integrated circuits. However, particularly at the high-frequency end (the megahertz and gigahertz bands), great reliance is still placed on analogue filters and amplifiers.
General real-time signal processing requires between 2 million and 4 million operations per second, a predominantly hardware-dependent requirement. Perhaps the most significant opportunities as of the early 1990's are the efforts to develop more highly structured VLSI chips in ever-smaller circuit packages, with a minimum number of output pins and increasingly dense layout.
Principal terms:
CONVOLUTION: the summation operation computed by time-reversing and shifting signal f(t) with respect to signal g(t), multiplying the signal values at each discrete point, and summing the products
CORRELATION FUNCTION: a statistical measure of the tendency of two random signals to vary together in a correlated fashion; auto- and cross-correlation functions are respectively measures of the similarity of a signal with itself, and with some other signal
FOURIER ANALYSIS: mathematical operations permitting transformation between the time-domain representation f(t) and the frequency-domain representation F(ω); the Fourier series summation represents complex periodic signals as a sum of trigonometric series; the discrete/fast Fourier transform is similarly defined for aperiodic signals
FREQUENCY SPECTRUM: a plot of the Fourier series coefficients of a signal, the result of the discrete Fourier transform; it shows the harmonic frequency components of which a given signal is composed
LINEARITY: the term applied to any system exhibiting additive and multiplicative scaling, where superposition holds
NOISE: any unwanted interfering signal, usually additive
PHASE RESPONSE: the fractional part of a waveform period through which the time variable has moved; mathematically, the additive constant in the argument of a trigonometric function, measured in degrees or radians
RANDOM SIGNALS: signals whose amplitude and phase are not specifiable or predictable in time, generally arising from random source processes and/or noise
SAMPLING: the operation converting signals from continuous-time to discrete-time forms; mathematically modeled as multiplication of an input signal by a uniform train or comb of delta impulse functions
SUPERPOSITION: the principle stating that the output response of a linear system resulting from several simultaneous inputs is equal to the sum of its separate responses
Bibliography
Ackroyd, M. H. DIGITAL FILTERS. London: Butterworth, 1973. A classic reference that explores the theory and use of low, band, and high pass filters. Contains many examples of representative signal and noise from physics and several engineering disciplines.
Cadzow, J. A. FOUNDATIONS OF DIGITAL SIGNAL PROCESSING AND DATA ANALYSIS. New York: Macmillan, 1987. Presents a broad interdisciplinary view and formulation of digital signal-processing techniques, considering the fundamental statistical as well as physical relationships between many DSP concepts.
Champeney, D. C. FOURIER TRANSFORMS AND THEIR PHYSICAL APPLICATIONS. London: Academic Press, 1973. At a somewhat higher technical level, this book progressively explores the mathematical formulations, properties, and development of the continuous and discrete Fourier transform as used in radar, NMR, and other imaging applications.
Davenport, W. B. PROBABILITY AND RANDOM PROCESSES. New York: McGraw-Hill, 1970. In addition to providing a general-audience introduction to probability and statistics, this book includes some of the broader mathematical aspects of DSP theory and image processing.
Graupe, D. IDENTIFICATION OF SYSTEMS. New York: R. E. Krieger, 1976. A more detailed and specialized monograph, treating digital signal-processing methods useful in determining the frequency/impulse response and higher-order characteristics of a wide variety of electronic circuits and analogous physical systems.
Jackson, Leland B. SIGNALS, SYSTEMS, AND TRANSFORMS. Reading, Mass.: Addison-Wesley, 1991. A general and elementary introduction to the mathematics and basic concepts of digital signal processing in one-dimensional random processes, relating probabilistic and frequency domain data-processing operations.
Kanasewich, E. R. TIME SEQUENCE ANALYSIS IN GEOPHYSICS. Edmonton: University of Alberta Press, 1981. In its fifth edition since 1968, this book gives perhaps the most comprehensive description and demonstration of digital signal-processing methods and data types as used in seismic and other means of oil exploration and underground nuclear explosion detection.
Lynn, Paul A. AN INTRODUCTION TO THE ANALYSIS AND PROCESSING OF SIGNALS. New York: Hemisphere/Taylor & Francis, 1989. A very accessible introductory synopsis of digital signal processing, combining hardware and software descriptions in many real-world examples, with a minimum of mathematics.
Rowland, J. R. LINEAR CONTROL SYSTEMS: MODELING, ANALYSIS, AND DESIGN. New York: Wiley, 1986. This book looks at signal processing from the viewpoint of the information transmission systems of which it is a part. Gives an excellent analysis of basic concepts such as information, channel capacity, signal distortion, transfer function, inverse filtering, and both forward and inverse models.
Taub, H., and D. L. Schilling. PRINCIPLES OF COMMUNICATION SYSTEMS. New York: McGraw-Hill, 1986. Presents an interesting historical as well as technical discussion of a variety of signal-processing procedures employed in scientific and engineering communications systems.