Acoustics
Acoustics is the scientific study of sound, encompassing the production, transmission, and effects of vibrations in various media. When sound travels through air at frequencies between 18 Hz and 18,000 Hz, it is recognized as audible sound. This field extends to include the behavior of sound in solids, liquids, and the psychological impacts of sound on humans. Acoustics is highly interdisciplinary, drawing principles from physics, engineering, psychology, and even music, making it integral to many aspects of human life, from communication to art.
The science of acoustics has evolved over centuries, with roots tracing back to ancient civilizations that explored sound in relation to music and speech. Modern applications span a wide array of technologies, including ultrasonic cleaning, medical diagnostics, and audio engineering. Key areas within acoustics include ultrasonics, psychological and physiological acoustics, speech acoustics, musical acoustics, and underwater sound.
Despite the advancements in technology, challenges such as environmental noise pollution and hearing loss remain prevalent in contemporary society. The emergence of innovative materials, like acoustic metamaterials, signifies ongoing progress in acoustics, aiming to further refine sound manipulation and control. Overall, acoustics plays a vital role in enriching human experiences and addressing societal needs.
Summary
Acoustics is the science dealing with the production, transmission, and effects of vibration in material media. If the medium is air and the vibration frequency is between 18 and 18,000 hertz (Hz), the vibration is termed "sound." Acoustics is also used in a broader context to describe sounds in solids and underwater and structure-borne sounds. Because mechanical vibrations, whether natural or human-induced, have accompanied humans through the long course of human evolution, acoustics is among the most interdisciplinary sciences. For humans, hearing is a very important sense, and the ability to vocalize greatly facilitates communication and social interaction. Sound can have profound psychological effects; music may soothe or relax a troubled mind, and noise can induce anxiety and hypertension.
Definition and Basic Principles
The words "acoustics" and "phonics" evolved from ancient Greek roots for hearing and speaking, respectively. Thus, acoustics began with human communication, making it one of the oldest if not the most basic of sciences. Because acoustics is ubiquitous in human endeavors, it is perhaps the broadest and most interdisciplinary of sciences, and its most profound contributions have occurred when it is commingled with an independent field. The interdisciplinary nature of acoustics has often consigned it to a subsidiary role as a minor subdivision of mechanics, hydrodynamics, or electrical engineering. Certainly, the various technical aspects of acoustics could be parceled out to larger and better-established divisions of science, but then acoustics would lose its unique strengths and its source of dynamic creativity. The main difference between acoustics and more self-sufficient branches of science is that acoustics depends on physical laws developed in and borrowed from other fields. Therefore, the primary task of acoustics is to take these divergent principles and integrate them into a coherent whole in order to understand, measure, and control vibration phenomena.

The Acoustical Society of America subdivides acoustics into fifteen main areas. Among the most important are ultrasonics, which examines high-frequency waves not audible to humans; psychological acoustics, which studies how sound is perceived in the brain; physiological acoustics, which focuses on human and animal hearing mechanisms; speech acoustics, which studies the human vocal apparatus and the auditory sounds used to convey messages; musical acoustics, which involves the physics of musical instruments; underwater sound, which examines the production and propagation of sound in liquids; and noise, which concentrates on the control and suppression of unwanted sound. Other important areas of applied acoustics are architectural acoustics (including the acoustical design of concert halls and sound-reinforcement systems) and audio engineering (recording and reproducing sound).
Background and History
Acoustics arguably originated with human communication and music. The caves in which prehistoric humans displayed their most elaborate paintings have resonances easily excited by the human voice, and stalactites emit musical tones when struck or rubbed with a stick. Paleolithic societies constructed flutes of bird bone, used animal horns to produce drones, and employed rattles and scrapers to provide rhythm.
In the sixth century Before the Common Era (BCE), Pythagoras was the first to correlate musical sounds and mathematics by relating consonant musical intervals to simple ratios of integers. In the fourth century BCE, Aristotle deduced that the medium that carries a sound must be compressed by the sounding body. The third century BCE philosopher Chrysippus correctly depicted the propagation of sound waves with an expanding spherical pattern. In the first century BCE, the Roman architect and engineer Marcus Vitruvius Pollio explained the acoustical characteristics of Greek theaters. When the Roman civilization declined in the fourth century, scientific inquiry in the West essentially ceased for the next millennium.
In the seventeenth century, modern experimental acoustics originated when the Italian mathematician Galileo explained resonance as well as musical consonance and dissonance. Theoretical acoustics began with Sir Isaac Newton's derivation of an expression for the velocity of sound. Because Newton assumed that sound propagates isothermally, his formula yielded a value considerably lower than the experimental result; a more rigorous adiabatic derivation by Pierre-Simon Laplace in 1816 obtained an equation yielding values in complete agreement with experiment.
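The gap between Newton's and Laplace's results can be checked numerically. The following is a minimal sketch, assuming standard textbook values for air at 0 °C (pressure, density, and heat-capacity ratio) that are not given in the text:

```python
import math

# Speed of sound in air at 0 deg C: Newton's isothermal formula vs.
# Laplace's adiabatic correction. All values are illustrative assumptions.
P = 101_325.0    # atmospheric pressure, Pa
rho = 1.293      # density of air at 0 deg C, kg/m^3
gamma = 1.4      # ratio of specific heats for air

c_newton = math.sqrt(P / rho)            # isothermal: too low
c_laplace = math.sqrt(gamma * P / rho)   # adiabatic: matches experiment

print(round(c_newton), round(c_laplace))  # 280 331 (m/s)
```

The measured speed of sound in air at 0 °C is about 331 m/s, so the adiabatic value agrees with experiment while the isothermal value falls roughly 16 percent short.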
During the eighteenth century, many famous mathematicians studied vibration. In 1700, French mathematician Joseph Sauveur observed that strings vibrate in sections consisting of stationary nodes located between vigorously vibrating antinodes. He noted that these vibrations have frequencies that are integer multiples, or harmonics, of the lowest frequency. Sauveur deduced that a vibrating string could simultaneously produce the sounds of several harmonics. In 1755, Daniel Bernoulli proved that this resultant vibration was the independent algebraic sum of the various harmonics. In 1750, Jean le Rond d'Alembert used calculus to obtain the wave equation for a vibrating string. By the end of the eighteenth century, the basic experimental results and theoretical underpinnings of acoustics were extant and in reasonable agreement. Nonetheless, it was not until the following century that theory and a concomitant advance of technology led to the evolution of the major divisions of acoustics.
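Sauveur's and Bernoulli's observations can be sketched in a few lines: each harmonic is an integer multiple of the fundamental, and the string's total motion is the sum of its modes. The 110 Hz fundamental below is an illustrative assumption, not a value from the text:

```python
import math

# Harmonics of a vibrating string: integer multiples of the fundamental.
fundamental = 110.0  # Hz (assumed for the example)
harmonics = [n * fundamental for n in range(1, 6)]
print(harmonics)  # [110.0, 220.0, 330.0, 440.0, 550.0]

def displacement(t, amplitudes):
    # Bernoulli's superposition: the resultant vibration is the
    # independent algebraic sum of the harmonic modes.
    return sum(a * math.sin(2 * math.pi * n * fundamental * t)
               for n, a in enumerate(amplitudes, start=1))
```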
Although mathematical theory is central to all acoustics, the two major divisions, physical and applied acoustics, evolved from the central theoretical core. In the late nineteenth century, Hermann von Helmholtz and Lord Rayleigh (John William Strutt), two polymaths, developed the theoretical aspects. Helmholtz's contributions to acoustics were primarily in explaining the physiological aspects of the ear. Rayleigh, a well-educated, wealthy English baron, synthesized virtually all previous knowledge of acoustics and also formulated an appreciable corpus of experiment and theory.
Experiments by Georg Simon Ohm indicated that all musical tones arise from simple harmonic vibrations of definite frequency. He noted the constituent components determining the sound quality. Helmholtz's studies of instruments and Rayleigh's work contributed to the nascent area of musical acoustics. In addition, Helmholtz's knowledge of ear physiology shaped the field that was to become physiological acoustics.
Underwater acoustics commenced with theories developed by the nineteenth-century mathematician Siméon-Denis Poisson, but further development had to await the invention of underwater transducers in the next century.
Two important nineteenth-century inventions, the telephone (patented in 1876) and the mechanical phonograph (invented in 1877), commingled and evolved into twentieth-century audio acoustics when they were united with electronics. Products where sound production and reception combined included microphones, loudspeakers, radios, talking motion pictures, high-fidelity stereo systems, and public sound-reinforcement systems. Improved instrumentation for the study of speech and hearing stimulated the areas of physiological and psychological acoustics. Ultrasonic devices are now routinely used for medical diagnosis and therapy, as well as for burglar alarms and rodent repellants. Underwater transducers are employed to detect and measure moving objects. Audio engineering technology has transformed music performance as well as sound reproduction. Virtually no area of human activity has remained unaffected by continually evolving technology based on acoustics.
How It Works
Ultrasonics. Dog whistles, which can be heard by dogs but not by humans, can generate ultrasonic frequencies of about 25 kilohertz (kHz). Two types of transducers, magnetostrictive and piezoelectric, are used to generate higher frequencies and greater power. Magnetostrictive devices convert magnetic energy into ultrasound by subjecting ferric material (iron or nickel) to a strong oscillating magnetic field. The field causes the material to alternately expand and contract, thus creating sound waves of the same frequency as that of the field. The resulting sound waves have frequencies between 20 and 50 kHz and several thousand watts of power. Such transducers operate at the mechanical resonance frequency, where the energy transfer is most efficient.
Piezoelectric transducers convert electric energy into ultrasound by applying an oscillating electric field to a piezoelectric crystal (such as quartz). These transducers, which work in liquids or air, can generate frequencies in the megahertz region with considerable power. In addition to natural crystals, ceramic piezoelectric materials, which can be fabricated into any desired shape, have been developed.
Physiological and Psychological Acoustics. Physiological acoustics studies auditory responses of the ear and its associated neural pathways. Psychological acoustics studies the subjective perception of sounds as mediated by human auditory physiology. Mechanical, electrical, optical, radiological, or biochemical techniques are used to study neural responses to various aural stimuli. Because these techniques are typically invasive, experiments are performed on animals with auditory systems similar to the human system. In contrast, psychological acoustic studies are noninvasive and typically use human subjects.
A primary objective of psychological acoustics is to define the psychological correlates to the physical parameters of sound waves. Sound waves in air may be characterized by three physical parameters: frequency, intensity, and spectrum. When a sound wave impinges on the ear, the pressure variations in the air are transformed by the middle ear to mechanical vibrations in the inner ear. The cochlea then decomposes the sound into its constituent frequencies and transforms these into neural action potentials. These travel to the brain, where the sound is perceived. Frequency is perceived as pitch, the intensity level as loudness, and the spectrum determines the timbre, or tone quality, of a note.
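The intensity level that the ear perceives as loudness is conventionally expressed in decibels relative to the threshold of hearing. A minimal sketch, using the standard reference intensity (an assumed value, not stated in the text):

```python
import math

# Sound intensity level in decibels, the physical correlate of loudness.
I0 = 1e-12  # W/m^2, standard threshold-of-hearing reference intensity

def intensity_level_db(intensity):
    # Each tenfold increase in intensity adds 10 dB.
    return 10 * math.log10(intensity / I0)

print(intensity_level_db(1e-12))  # 0.0 dB: threshold of hearing
print(intensity_level_db(1e-2))   # 100.0 dB: roughly a loud concert
```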
Another psychoacoustic effect is masking. When a person listens to a noisy version of recorded music, the noise virtually disappears if the music is being enjoyed. This ability of the brain to selectively listen has had important applications in digitally recorded music. When the sounds are digitally compressed, such as in MP3 (MPEG-1 audio layer 3) systems, the brain compensates for the loss of information. Thus, one experiences higher fidelity sound than the stored content would imply. Also, the brain creates information when the incoming signal is masked or nonexistent, producing a psychoacoustic phantom effect. This phantom effect is particularly prevalent when heightened perceptions are imperative, as when danger is lurking.
Psychoacoustic studies have determined that the frequency range of hearing is from 20 to about 20,000 Hz for young people, and the upper limit progressively decreases with age. The rate at which hearing acuity declines depends on several factors, not the least of which is lifetime exposure to loud sounds. These progressively deteriorate the hair cells of the cochlea. Moderate hearing loss can be compensated for by a hearing aid; severe loss requires a cochlear implant.
Speech Acoustics. Also known as acoustic phonetics, speech acoustics deals with speech production and recognition. The scientific study of speech began with Thomas Alva Edison's phonograph, which allowed a speech signal to be recorded and stored for later analysis. Replaying the same short speech segment several times using consecutive filters passing through a limited range of frequencies creates a spectrogram. This visualizes the spectral properties of vowels and consonants. During the first half of the twentieth century, Bell Telephone Laboratories invested considerable time and resources in the systematic understanding of all aspects of speech, including vocal tract resonances, voice quality, and prosodic features of speech. For the first time, electric circuit theory was applied to speech acoustics, and analog electric circuits were used to investigate synthetic speech.
Musical Acoustics. Musical acoustics combines music, craftsmanship, auditory science, and vibration physics. It is used to analyze musical instruments to better understand how the instruments are crafted. It also seeks to understand the physical principles of their tone production and why each instrument has a unique timbre. Musical instruments are studied by analyzing their tones and then creating computer models to synthesize these sounds. When the sounds can be recreated with minimal software complications, a synthesizer featuring realistic orchestral tones may be constructed. The second method of study is to assemble an instrument or modify an existing instrument to perform testing so that the effects of various modifications may be gauged.
Underwater Sound. Also known as hydroacoustics, this field uses frequencies between 10 Hz and 1 megahertz (MHz). Although the origin of hydroacoustics can be traced back to Rayleigh, the deployment of submarines in World War I provided the impetus for the rapid development of underwater listening devices. These included hydrophones and sonar (sound navigation and ranging). Sonar is the acoustic equivalent of radar. Pulses of sound are emitted, and the echoes are processed to extract information about submerged objects. When the speed of underwater sound is known, the reflection time for a pulse determines the distance to an object. If the object is moving, its speed of approach or recession is deduced from the frequency shift of the reflection, or the Doppler effect. Returning pulses have a higher frequency when the object approaches and lower frequency when it moves away.
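The two sonar calculations described above—range from round-trip echo time and approach speed from the Doppler shift—can be sketched as follows. The speed of sound in seawater is a typical assumed value, not one given in the text:

```python
# Sonar sketch: range from echo time, speed from Doppler shift.
c = 1500.0  # m/s, typical speed of sound in seawater (assumed)

def echo_distance(round_trip_time):
    # The pulse travels to the target and back, so halve the path length.
    return c * round_trip_time / 2

def doppler_speed(f_emitted, f_received):
    # For a reflection off a moving target, shift ~ 2 * v * f / c, so
    # v ~ c * (f_received - f_emitted) / (2 * f_emitted).
    # Positive result: target approaching (higher received frequency).
    return c * (f_received - f_emitted) / (2 * f_emitted)

print(echo_distance(2.0))            # 1500.0 m to the target
print(doppler_speed(10000, 10100))   # 7.5 m/s, approaching
```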
Noise. Noise may be defined as an intermittent or random oscillation with multiple frequency components. From a psychological standpoint, noise is simply any unwanted sound. Noise can adversely affect human health and well-being by inducing stress, interfering with sleep, increasing heart rate, raising blood pressure, modifying hormone secretion, and even inducing depression. The environmental effects of noise are no less severe. The vibrations caused by large, fast-moving vehicles on irregular road surfaces can cause adjacent buildings to vibrate to an extent that is intolerable to the buildings' inhabitants, even without structural damage. Machinery noise in industry is a serious problem because continuous exposure to loud sounds will induce hearing loss. In apartment buildings, noise transmitted through walls is always problematic; the goal is to obtain adequate sound insulation using lightweight construction materials.
Traffic noise, both external and internal, is ubiquitous in modern life. The first line of defense is to reduce noise at its source by improving engine enclosures, mufflers, and tires. The next method, used primarily when interstate highways are adjacent to residential areas, is to block the noise by the construction of concrete barriers or the planting of sound-absorbing vegetation. Internal automobile noise has been greatly abated by designing more aerodynamically efficient vehicles to reduce air turbulence, using better sound isolation materials, and improving vibration isolation.
Aircraft noise, particularly in the vicinity of airports, is a serious problem exacerbated by the fact that as modern airplanes have become more powerful, the noise they generate has risen concomitantly. The noise radiated by jet engines is reduced by two structural modifications. Acoustic linings are placed around the moving parts to absorb the high frequencies caused by jet whine and turbulence, but this modification is limited by size and weight constraints. The second modification is to reduce the number of rotor blades and stator vanes, but this is somewhat inhibited by the desired power output. Special noise problems occur when aircraft travel at supersonic speeds (faster than the speed of sound), as this propagates a large pressure wave toward the ground that is experienced as an explosion. The unexpected sonic boom startles people, breaks windows, and damages houses. Sonic booms have been known to destroy rock structures in national parks. Because of these concerns, commercial aircraft are prohibited from flying at supersonic speeds over land areas.
Construction equipment (such as earthmoving machines) creates high noise levels both internally and externally. When the cabs of these machines are not closed, the only feasible manner of protecting operators' hearing is by using earplugs. By carefully designing an enclosed cabin, structural vibration can be reduced, and sound leaks made less significant, thus quieting the operator's environment. Although manufacturers are attempting to reduce the external noise, it is a daunting task because quieter components, such as the rubber tractor treads occasionally used to replace metal, are often not as durable.
Applications and Products
Ultrasonics. High-intensity ultrasonic applications include ultrasonic cleaning, mixing, welding, drilling, and various chemical processes. Ultrasonic cleaners use waves in the 150 to 400 kHz range on items (such as jewelry, watches, lenses, and surgical instruments) placed in an appropriate solution. Ultrasonic cleaners have proven to be particularly effective in cleaning surgical devices. This is because they loosen contaminants by aggressive agitation, irrespective of an instrument's size or shape. Also, disassembly is not required. Ultrasonic waves are effective in cleaning most metals and alloys, as well as wood, plastic, rubber, and cloth. Ultrasonic waves are used to emulsify two nonmiscible liquids, such as oil and water, by forming the liquids into finely dispersed particles that then remain in homogeneous suspension. Many paints, cosmetics, and foods are emulsions formed by this process.
Although aluminum cannot be soldered by conventional means, two surfaces subjected to intense ultrasonic vibration will bond—without the application of heat—in a strong and precise weld. Ultrasonic drilling is effective where conventional drilling is problematic, for instance, drilling square holes in glass. The drill bit, a transducer with the required shape and size, is used with an abrasive slurry that chips away the material when the suspended powder oscillates. Some of the chemical applications of ultrasonics are in the atomization of liquids, in electroplating, and as a catalyst in chemical reactions.
Low-intensity ultrasonic waves are used for nondestructive probing to locate flaws in materials for which complete reliability is mandatory, such as spacecraft components and nuclear reactor vessels. When an ultrasonic transducer emits a pulse of energy into the test object, flaws reflect the wave and are detected. Because objects subjected to stress emit ultrasonic waves, these signals may be used to interpret the condition of the material as it is increasingly stressed. Another application is ultrasonic emission testing, in which the ultrasound emitted by porous rock is recorded as natural gas is pumped into cavities formed by the rock. This determines the maximum pressure these natural holding tanks can withstand.
Low-intensity ultrasonics is used for medical diagnostics in two different applications. First, ultrasonic waves penetrate body tissues but are reflected by moving internal organs, such as the heart. The frequency of waves reflected from a moving structure is Doppler-shifted, thus causing beats with the original wave, which can be heard. This procedure is particularly useful for performing fetal examinations on a pregnant woman. Because sound waves are not electromagnetic, they will not harm the fetus. The second application is to create a sonogram image of the body's interior. A complete cross-sectional image may be produced by superimposing the images scanned by successive ultrasonic waves passing through different regions. This ultrasonography procedure, unlike an X-ray, displays all the tissues in the cross-section and also avoids any danger posed by the radiation involved in X-ray imaging.
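The audible beats described above arise because the Doppler-shifted reflection mixes with the original wave, producing a beat at the shift frequency. A sketch with illustrative assumed values (typical tissue sound speed and a 2 MHz probe, neither stated in the text):

```python
# Doppler ultrasound sketch: a reflection from a structure moving at
# speed v is shifted by ~ 2 * v * f0 / c; mixing it with the original
# wave yields audible beats at that shift frequency.
c_tissue = 1540.0   # m/s, typical speed of sound in soft tissue (assumed)
f0 = 2.0e6          # Hz, emitted ultrasonic frequency (assumed)

def beat_frequency(v_structure):
    # Round-trip Doppler shift for a moving reflector.
    return 2 * v_structure * f0 / c_tissue

print(round(beat_frequency(0.1)))  # ~260 Hz: an easily audible tone
```

A structure moving at only 10 cm/s thus turns a 2 MHz ultrasonic wave into a beat tone well within the audible range.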
Physiological and Psychological Acoustics. Because the ear is a nonlinear system, it produces beat tones that are the sum and difference of two frequencies. For example, if two sinusoidal frequencies of 100 and 150 Hz simultaneously arrive at the ear, the brain will, in addition to these two tones, create tones of 250 and 50 Hz (sum and difference, respectively). Thus, although a small speaker cannot reproduce the fundamental frequencies of bass tones, the difference between the harmonics of that pitch will recreate the missing fundamentals in the listener's brain.
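The combination tones and the missing-fundamental effect described above can be sketched directly:

```python
# Combination tones from the ear's nonlinearity: for two input
# frequencies, the brain also creates their sum and difference,
# as in the 100/150 Hz example above.
def combination_tones(f1, f2):
    return (f1 + f2, abs(f1 - f2))

print(combination_tones(100, 150))  # (250, 50)

# Missing-fundamental effect: consecutive harmonics of a 100 Hz tone
# differ by 100 Hz, so the brain reconstructs the absent fundamental
# even when a small speaker cannot reproduce it.
harmonics = [200, 300, 400]
differences = {b - a for a, b in zip(harmonics, harmonics[1:])}
print(differences)  # {100}
```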
As twentieth-century technology progressed, environmental noise increased. Lifetime exposure to loud sounds, both commercial and recreational, has created an epidemic of hearing loss. This is most noticeable in older people because the effects are cumulative. Wearing a hearing aid fitted adjacent to or inside the ear canal is an effective means of counteracting this handicap. The device consists of one or several microphones, which create electric signals that are amplified and transduced into sound waves redirected back into the ear. More sophisticated hearing aids incorporate an integrated circuit to control volume, either manually or automatically. They can also switch to volume contours designed for various listening environments, such as conversations on the telephone or where excessive background noise is present.
Speech Acoustics. With the advent of the computer age, speech synthesis moved to digital processing. This was either by bandwidth compression of stored speech or by using a speech synthesizer. The synthesizer reads a text and then produces the appropriate phonemes on demand from their basic acoustic parameters, such as the vibration frequency of the vocal cords and the frequencies and amplitudes of the vowel formants. This method of generating speech is considerably more efficient in terms of data storage than archiving a dictionary of prerecorded phrases.
Another important, and probably the most difficult, area of speech acoustics is the machine recognition of spoken language. As machine speech recognition programs became sufficiently advanced, computers were able to listen to a sentence in any reasonable dialect and produce a printed text of the utterance. Two basic recognition strategies exist, one dealing with words spoken in isolation and the other with continuous speech. In both cases, it was necessary to program the computer to recognize the speech of different people through a training program. Because recognition of continuous speech is considerably more difficult than the identification of isolated words, very sophisticated pattern-matching models were employed. Speech recognition technologies advanced greatly in the twenty-first century. Applications were included on smartphones and computers along with virtual personal assistants, such as Apple's Siri and Amazon's Alexa. Such systems were able to process full sentences, enact commands, and even respond with synthesized speech in a realistic manner.
Musical Acoustics. The importance of musical acoustics to manufacturers of quality instruments is apparent. During the last decades of the twentieth century, research led to vastly improved French horns, organ pipes, orchestral strings, and the creation of an entirely new family of violins. Acoustics opened up the possibilities of advanced technologies such as software instruments and other innovative forms of music-making and recording.
Underwater Sound. Applications for underwater acoustics include devices for underwater communication by acoustic means, remote control devices, underwater navigation and positioning systems, acoustic thermometers to measure ocean temperature, and echo sounders to locate schools of fish or other biota. Low-frequency devices came to be used to explore the seabed for seismic research.
Although primitive measuring devices were developed in the 1920s, a decade later, sonar systems began incorporating piezoelectric transducers to increase their accuracy. These improved systems and their increasingly more sophisticated progeny became essential for the submarine warfare of World War II. After the war, theoretical advances in underwater acoustics coupled with computer technology raised sonar systems to ever more sophisticated levels.
Noise. One system for abating unwanted sound is active noise control. The first successful application of active noise control was noise-canceling headphones. These reduced unwanted sound by using microphones placed in proximity to the ear to record the incoming noise. Electronic circuitry then generated a signal exactly opposite to the incoming sound, which was reproduced in the earphones, thus canceling the noise by destructive interference. This system enabled listeners to enjoy music without having to use excessive volume levels to mask outside noise and allowed people to sleep in noisy vehicles such as airplanes.
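The cancellation principle behind such headphones is destructive interference: emitting the exact inverse of the incoming noise sums to silence. A toy discrete-sample sketch (not a real-time implementation, which must also contend with latency and microphone placement):

```python
import math

# Active noise control by destructive interference: a 100 Hz noise tone
# sampled at 8 kHz, canceled by its phase-inverted copy.
noise = [math.sin(2 * math.pi * 100 * t / 8000) for t in range(8000)]
anti_noise = [-s for s in noise]               # exact inverse signal
residual = [n + a for n, a in zip(noise, anti_noise)]

print(max(abs(r) for r in residual))  # 0.0: complete cancellation
```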
Because active noise suppression is more effective with low frequencies, most commercial systems rely on soundproofing the earphone to attenuate high frequencies. To effectively cancel high frequencies, the microphone and emitter had to be situated adjacent to the user's eardrum, but this was not technically feasible. Active noise control was also considered as a means of controlling low-frequency airport noise.
Careers and Course Work
Career opportunities occur in academia (teaching and research), industry, and national laboratories. Academic positions dedicated to acoustics are few, as are qualified applicants. Most graduates of acoustics programs find employment in research-based industries where the acoustical aspects of products are important; others work for government laboratories.
Although the subfields of acoustics are integrated into multiple disciplines, most aspects of acoustics can be learned by obtaining a broad background in a scientific or technological field, such as physics, engineering, meteorology, geology, or oceanography. Physics probably provides the best training for almost any area of acoustics. An electrical engineering major is useful for signal processing and synthetic speech research, and a mechanical engineering background is requisite for comprehending vibration. Training in biology is expedient for physiological acoustic research, and psychology coursework provides essential background for psychological acoustics. Architects often employ acoustical consultants to advise on the proper acoustical design of concert halls, auditoriums, or conference rooms. Acoustical consultants also assist with noise reduction problems and help design soundproofing structures for rooms. Although a background in architecture is not a prerequisite for becoming this type of acoustical consultant, engineering or physics is.
Acoustics is not typically a university major; therefore, specialized knowledge is best acquired at the graduate level. Many electrical engineering departments have at least one undergraduate course in acoustics, but most physics departments do not. Nevertheless, a firm foundation in classical mechanics (through physics programs) or a mechanical engineering vibration course will provide, along with numerous courses in mathematics, sufficient underpinning for successful graduate study in acoustics.
Social Context and Future Prospects
Acoustics affects virtually every aspect of modern life; its contributions to societal needs are incalculable. Ultrasonic waves clean objects, are routinely employed to probe matter, and are used in medical diagnosis. Cochlear implants restore people's ability to hear, and active noise control helps provide quieter listening environments. New concert halls are routinely designed with excellent acoustical properties, and vastly improved or entirely new musical instruments have made their debut. Infrasound from earthquakes is used to study the composition of Earth's mantle, and sonar is essential to locate submarines and aquatic life. Sound waves are used to explore the effects of structural vibrations. Automatic speech recognition devices and hearing aid technology are constantly improving.
Many societal problems related to acoustics remain to be tackled. The technological advances that made modern life possible have also resulted in more people with hearing loss. Environmental noise is ubiquitous and increasing despite efforts to design quieter machinery and pains taken to contain unwanted sound or to isolate it from people. Also, although medical technology has been able to help many hearing- and speech-impaired people, other individuals still lack appropriate treatments. For example, although voice generators exist, there is considerable room for improvement.
In the 2020s, a new field of acoustical study emerged, designated acoustic metamaterials, or AAMs. AAMs are human-made materials specifically crafted to produce acoustic behavior that does not occur in nature. Because of their physical properties, sound and elastic waves can be tailored as necessary to perform desired functions and controlled to minute specifications.
Bibliography
Bass, Henry E., and William J. Cavanaugh, eds. ASA at Seventy-Five. Acoustical Soc. of Amer., 2004.
Beyer, Robert T. Sounds of Our Times: Two Hundred Years of Acoustics. Springer, 1999.
Crocker, Malcolm J., ed. The Encyclopedia of Acoustics. 4 vols. Wiley, 1997.
Everest, F. Alton, and Ken C. Pohlmann. Master Handbook of Acoustics. 6th ed. McGraw, 2015.
Aydın, Gülcan, and Sait Eren San. “Breaking the Limits of Acoustic Science: A Review of Acoustic Metamaterials.” Materials Science and Engineering, July 2024, www.sciencedirect.com/science/article/pii/S0921510724002137. Accessed 1 June 2024.
Kistovich, Anatoly, et al. Ocean Acoustics. Springer, 2021.
Pierce, Allan D. Acoustics: An Introduction to Its Physical Principles and Applications. 3rd ed. Springer, 2019.
Rossing, Thomas, and Neville Fletcher. Principles of Vibration and Sound. 2nd ed. Springer, 2004.
Rumsey, Francis, and Tim McCormick. Sound and Recording: An Introduction. 7th ed. Elsevier, 2014.
Strong, William J., and George R. Plitnik. Music, Speech, Audio. 3rd ed. BYU Acad., 2007.
Swift, Gregory. “Thermoacoustic Engines and Refrigerators.” Physics Today 48.7 (1995): 22–28. Print.
“What is Acoustics?” Brigham Young University, 2024, acoustics.byu.edu/what-is. Accessed 1 June 2024.