Audio engineering
Audio engineering is the discipline focused on the capture, enhancement, and reproduction of sound, particularly in music and multimedia contexts. It combines artistic creativity with scientific principles and technical proficiency, requiring an understanding of sound physics and familiarity with sophisticated recording equipment and software. Audio engineers play key roles in various sectors, including music production, film and television soundtracks, video game audio, live broadcasting, and advertising, among others. Their work involves running recording sessions, manipulating audio signals, and ensuring high-quality sound output for both live and recorded media.
The field has evolved significantly since its inception with the phonograph in the late 19th century, transitioning from analog methods to digital technologies in the 1980s and beyond. Audio engineers utilize diverse tools, such as microphones and mixing consoles, to capture sound effectively and may also engage in mastering and compressing audio for distribution. The profession is supported by a range of educational programs, from technical certificates to university degrees, catering to both formal training and self-taught individuals. As technology advances, audio engineering continues to adapt, with emerging trends like artificial intelligence impacting the way sound is produced and processed, presenting both opportunities and ethical considerations for the industry.
Summary
Audio engineering is the capture, enhancement, and reproduction of sounds. It requires an aesthetic appreciation of music and sound quality, a scientific understanding of sound physics, and a technical familiarity with recording equipment and computer software. This applied science is essential to the music industry, film, television, video game production, live television, radio broadcasting, and advertising. In addition, it contributes to educational services for the visually impaired and to forensic evidence analysis.
Definition and Basic Principles
Audio engineering, also known as sound engineering and audio technology, is the recording, manipulation, and reproduction of sound, especially music. Audio engineers run recording sessions, operate the equipment, and collaborate on the finished product, acting simultaneously as technicians, scientists, and creative advisers. They work in recording studios, producing instrumental and vocal music recordings; film and television soundtracks, syncing, and sound effects; music and voice-overs for radio and television commercials; and music and sound effects for video games.

A recording studio is a specialized environment designed to capture sounds accurately for enhancement and reproduction. The acoustic sounds produced by instruments and vocalists are picked up by strategically placed microphones and transmitted as analog electrical signals to recording equipment, where they may be converted into digital data. Signals may be modified by the use of a mixing console, also called a mixing board or sound board. This device changes the characteristics and balance of the input, which may be coming from multiple microphones or signals recorded in different sessions. The final product then undergoes mastering for commercial reproduction and compression for distribution in a digital format.
Audio engineers are not acoustic engineers. Acoustic engineers are graduates of formal university engineering programs who work with architects and interior designers to plan and install audio systems for large venues such as churches, school auditoriums, and concert halls. Audio engineers are engineers in the sense that they devise creative solutions to complex sound challenges and oversee their implementation.
Background and History
Sound capture and reproduction, and thus audio engineering, began with the invention of the phonograph by Thomas Alva Edison in 1877. Sound was originally recorded on cylinders, the earliest wrapped in tin foil and later ones coated in wax. By 1910, cylinders had been replaced by discs, which held longer recordings, were somewhat louder, and could be mass-produced more economically. The discs were spun on a turntable at standard speeds, initially 78 revolutions per minute (rpm); later, larger discs played at 33 rpm and smaller discs at 45 rpm. Discs were made first of shellac and later of vinyl, and they were played with needles (styli) made of industrial diamond, which held a point.
Concurrently, RCA was creating microphones that improved recorded sound quality. In the 1940s, sound began to be recorded on magnetic tape and could be reproduced in stereo and as mixed multiple tracks. Digital technology appeared in the 1980s, and by the turn of the century, digital recordings were being produced with computer technology. Using data compression, digital audio recordings can replicate the original music with high quality in a format that requires less data storage (computer memory). MP3 is one such format and is popular for portable consumer music systems.
How It Works
Sound
Sound refers to the waves of pressure a vibrating object emits through air or water. The three most meaningful characteristics of sound waves are wavelength, amplitude, and frequency. The wavelength is the distance between equivalent points on consecutive waves, such as peak to peak. A short wavelength means that more waves pass a given point per second, resulting in a higher-pitched sound. The amplitude is the strength of the wave; the greater the amplitude, the greater the volume (loudness). The frequency is the number of wave cycles that occur in one second. The greater the frequency, the higher the pitch, because the sound source is vibrating more quickly.
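These relationships can be worked out numerically: because the speed of sound is fixed in a given medium (about 343 meters per second in air at room temperature), frequency and wavelength are inversely related. A minimal sketch:

```python
SPEED_OF_SOUND = 343.0  # meters per second in air at roughly 20 degrees Celsius

def wavelength(frequency_hz: float) -> float:
    """Wavelength in meters for a sound wave of the given frequency."""
    return SPEED_OF_SOUND / frequency_hz

def frequency(wavelength_m: float) -> float:
    """Frequency in hertz for a sound wave of the given wavelength."""
    return SPEED_OF_SOUND / wavelength_m

# A higher-pitched note has a shorter wavelength:
print(round(wavelength(440.0), 3))  # A above middle C
print(round(wavelength(880.0), 3))  # one octave higher, half the wavelength
```

Doubling the frequency halves the wavelength, which is why an octave jump corresponds to a wave exactly half as long.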
Hearing
Hearing is the ability to receive, sense, and decipher sounds. To hear, the ear must direct the sound waves inside, sense the sound vibrations, and translate the sensations into neurological impulses that the brain can recognize. The outer ear funnels sound into the ear canal. It also helps the brain determine the direction from which the sound is coming.
When the sound waves reach the ear canal, they vibrate against the eardrum. These vibrations are amplified by the eardrum's movement against three tiny bones located behind it: the malleus, incus, and stapes. The stapes rests against the cochlea, and when it transmits the sound, it creates waves in the fluid of the cochlea.
The cochlea is a coiled, fluid-filled organ that contains some 30,000 hair cells of different lengths that resonate at different frequencies. Vibrations of these hair cells trigger complex electrical patterns that are transmitted along the auditory nerve to the brain, where they are interpreted.
The frequency of sound waves is measured in hertz (Hz). Humans have a hearing range of roughly 20 to 20,000 Hz. The frequency of a sound wave is perceived as its musical pitch, and pitches are often referred to as musical notes, such as middle C.
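The mapping from notes to frequencies follows a simple rule in the equal-temperament tuning standard on modern instruments: each semitone step multiplies the frequency by the twelfth root of two. A short sketch, taking the common reference of A4 = 440 Hz:

```python
def note_frequency(semitones_from_a4: int) -> float:
    """Frequency in Hz of the note n semitones above (or below) A4 = 440 Hz."""
    return 440.0 * (2.0 ** (semitones_from_a4 / 12.0))

# Middle C (C4) lies 9 semitones below A4:
middle_c = note_frequency(-9)
print(round(middle_c, 2))  # roughly 261.63 Hz
```

Twelve semitones up gives exactly double the frequency, i.e., one octave.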
The relative loudness of a sound compared with the threshold of human hearing is measured in decibels (dB). Conversation is usually conducted at 40 to 60 dB, while a car passing at 10 meters may be 80 to 90 dB, a jet engine 100 meters away may be 110 to 140 dB, and a rifle fired 1 meter away is 150 dB. Long-term, though not necessarily continuous, exposure to sounds greater than 85 dB may cause hearing loss.
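These decibel figures follow from a logarithmic formula: the sound pressure level in dB is 20·log10(p/p0), where p0 = 20 micropascals is the nominal threshold of human hearing. A small sketch:

```python
import math

P_REF = 20e-6  # reference pressure (20 micropascals), threshold of hearing

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level in decibels relative to the hearing threshold."""
    return 20.0 * math.log10(pressure_pa / P_REF)

def pressure_from_db(db: float) -> float:
    """Inverse: the sound pressure in pascals corresponding to a dB level."""
    return P_REF * (10.0 ** (db / 20.0))

# Every 20 dB increase corresponds to a tenfold increase in sound pressure:
print(round(pressure_from_db(60) / pressure_from_db(40), 6))
```

The logarithmic scale is why a 150 dB rifle shot is not "twice" a 75 dB conversation but millions of times more intense.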
The human ear can discern between two musical instruments playing the same note at the same volume by the recognition of a sound characteristic called timbre. Often described by adjectives such as bright versus dark, smooth versus harsh, and regular versus random or erratic, timbre is often what distinguishes music from noise.
Sound Capture
Transducers are devices that change energy from one form into another. A microphone changes acoustical signals into electrical signals, while a speaker changes electrical signals into acoustical signals. The source of the incoming electrical signals may be immediate, such as a microphone or electrical musical instrument, or it may be a recording, such as a compact disc or MP3 file.
Microphones come in many varieties, such as dynamic, ribbon, condenser, parabolic, and lavaliere. They also vary by their polar patterns, that is, their area of sensitivity to sounds coming in from different directions relative to the receiving membrane. They may be omnidirectional (sensitive to sounds coming from all directions), unidirectional (intended for directed sound reception), or cardioid (having a heart-shaped area of sensitivity). The choices of variety, polar pattern, and placement affect the quality and quantity of sound capture.
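Polar patterns can also be modeled mathematically; an ideal cardioid, for example, has a relative sensitivity of (1 + cos θ)/2 at angle θ from the front of the capsule. A brief sketch of that idealized model:

```python
import math

def cardioid_sensitivity(angle_degrees: float) -> float:
    """Relative sensitivity (0 to 1) of an ideal cardioid mic at a given angle.

    0 degrees is directly in front of the capsule; 180 degrees is behind it.
    """
    theta = math.radians(angle_degrees)
    return (1.0 + math.cos(theta)) / 2.0

print(cardioid_sensitivity(0))    # full sensitivity in front
print(cardioid_sensitivity(90))   # half sensitivity at the sides
print(round(cardioid_sensitivity(180), 10))  # near-total rejection behind
```

This rejection of sound from the rear is what makes cardioid microphones useful on stages, where monitor speakers behind the performer would otherwise feed back.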
Signal Processing
The auditory electrical signal from a microphone is relatively weak, so it must be amplified before the sound can be deliberately modified through signal processing. Incoming sound may be modified in its analogue form or converted to digital data before alteration. Sound mixing is the process of blending sounds from multiple sources into a desired end product. It often starts with finding a balance between vocal and instrumental music or dissimilar instruments so that one does not overshadow the other. It involves the creation of stereo or surround sound from the placement of sound in the sound field to simulate directionality (left, center, or right). Equalizing adjusts the bass and treble frequency ranges. Effects such as reverberation may be added to create dimension. Signals may undergo gating and compression to remove unwanted noise and extraneous data selectively.
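At its core, mixing is a weighted, sample-by-sample sum of signals followed by level control. The sketch below illustrates the idea with two synthetic sine-wave "tracks" standing in for separate channels; the frequencies and gain values are arbitrary illustrations, not settings from any real session:

```python
import math

SAMPLE_RATE = 44100  # samples per second (CD quality)

def sine_track(freq_hz: float, seconds: float, amplitude: float = 1.0):
    """Generate a sine wave as a list of floating-point samples."""
    n = int(SAMPLE_RATE * seconds)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

def mix(tracks_with_gains):
    """Blend several tracks, each scaled by its own gain, sample by sample."""
    length = min(len(t) for t, _ in tracks_with_gains)
    return [sum(t[i] * g for t, g in tracks_with_gains) for i in range(length)]

def normalize(samples, peak=1.0):
    """Scale the mix so its loudest sample sits at the target peak level."""
    loudest = max(abs(s) for s in samples)
    return [s * peak / loudest for s in samples]

# A 440 Hz "vocal" kept prominent, a 220 Hz "instrument" pulled back:
mixed = normalize(mix([(sine_track(440, 0.1), 0.8),
                       (sine_track(220, 0.1), 0.4)]))
print(max(abs(s) for s in mixed))  # peaks at 1.0 after normalization
```

Real mixing consoles and digital audio workstations layer equalization, panning, and effects on top of this basic weighted sum, but the gain-and-blend structure is the same.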
Sound Output
Auditory electrical signals may then be sent to speakers, where they are converted into acoustical signals to be heard by a live audience. Digital signals may be broadcast in real time over the Internet as streaming audio. Otherwise, the processed signals may be stored for future reproduction and distribution. Analogue signals may be stored on magnetic tape. Digital signals may be stored on a compact disc or subjected to MP3 encoding for storage on a computer or personal music player.
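The storage trade-offs behind these format choices are easy to estimate: uncompressed audio size equals sample rate × bit depth × channels × duration. The sketch below compares CD-quality audio with a typical MP3 bit rate (128 kbps is used here purely as an illustrative figure):

```python
def pcm_bytes(seconds: float, sample_rate=44100, bits=16, channels=2) -> float:
    """Uncompressed (PCM) audio size in bytes; defaults are CD quality."""
    return sample_rate * (bits / 8) * channels * seconds

def mp3_bytes(seconds: float, kbps: int = 128) -> float:
    """Approximate MP3 size in bytes at a constant bit rate."""
    return kbps * 1000 / 8 * seconds

one_minute = 60
print(round(pcm_bytes(one_minute) / 1e6, 2))  # about 10.58 MB uncompressed
print(round(mp3_bytes(one_minute) / 1e6, 2))  # about 0.96 MB as MP3
```

The roughly eleven-to-one reduction is what made MP3 practical for early portable players with limited memory.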
Applications and Products
Instrumental and Vocal Music
As specialists in the capture, enhancement, and reproduction of sound, audio engineers are crucial to successful recording sessions. They collaborate with producers and performers to realize a shared artistic vision. They determine the choice and placement of microphones and closely scrutinize the parameters of the incoming signals to collect sufficient data with which to work. They manage the scheduling of studio sessions to keep all participants working efficiently, especially when multiple tracks are being recorded and mixed at different times, and they deliver a finished product of the highest possible quality.
Audio engineers are also responsible for the restoration of classic recordings that would otherwise be lost. They rescue the raw data that was captured in the first recording, strengthen the sound while preserving the style of the original period, and return it to audiences in a contemporary format.
Because musicians go on concert tours, audio engineers accompany them to provide optimum live sound quality in each different venue. They conduct sound checks before performances and make adjustments for conditions such as wind on outdoor stages.
Film and Television
In the recording studio, audio engineers oversee the production of music soundtracks for films and television shows. Unlike songs that stand alone, the music must be carefully synchronized to the action of the film. It must also swell and ebb with precision to arouse audience emotion.
Foley recording is the production of sound effects that are inserted into videos after they are filmed to add realism and dramatic tension. Foley effects can be synchronized efficiently to video footage because they are performed in real time rather than adapted from stock recordings. Foley artists must also create sounds that do not exist in reality and so would not be cataloged in a prerecorded audio library.
Live Broadcasting
Audio engineers may be seen sitting at mixing consoles or computers monitoring and adjusting the audio input and output quality at church services, lectures, theatrical performances, and events held in large auditoriums. They may similarly be found as part of a broadcasting team at live sporting events held outdoors, such as football or baseball games, golf tournaments, and the Olympic Games.
Radio and Television Commercials
Audio engineers are instrumental in the production of radio and television commercials, not only for their recording and sound-processing skills but also for their production skills. Because advertising time is sold in specific brief allotments, engineers must encourage the actors to perform at an accelerated pace and later edit the audio to fit within the time allowed. They may also be asked to recruit or audition competent musicians and voice actors to meet the client's needs.
Video Games
The skills of audio engineers enhance the production of popular video games. In addition to providing sound effects such as explosions and gunfire, engineers must create appropriate imaginary sounds such as spaceships landing, ambient sound effects such as slot machine bells and crowd murmurs, and realistic situational sounds, such as footsteps going from grass to gravel. In some cases, they may be called on to provide minor character voices or record spoken instructions.
Forensic Evidence Analysis
Police may seek the assistance of an experienced audio engineer to remove unnecessary background noise from covert recordings of suspected criminals and to make voiceprint comparisons with known exemplars. Voiceprints, vocal qualities that can be demonstrated on a sound spectrograph, are personal because each person's oral and pharyngeal anatomy is distinctive. They are not, however, unique like fingerprints, because children often sound like their parents and share similar voiceprints. Research has shown that the error rates of misidentifying suspects (false positives) and improperly eliminating suspects (false negatives) are acceptably low.
Audio Books
Recordings of books originated in 1932 under the auspices of the American Foundation for the Blind as educational tools for the visually impaired. Books were recorded on shellac discs and played on a turntable. Books on audio cassettes emerged about twenty years later, and audio books subsequently became available on CDs and portable digital music devices. Audio engineers are responsible for processing the audio signal to optimize the clarity of human speech and for editing numerous recitations into one continuous, flawless performance.
Careers and Course Work
Technical and vocational schools offer diploma and certificate programs in audio engineering along with internships for hands-on experience. The Musicians Institute in Los Angeles, for example, offers a certificate in audio engineering with subspecialties such as postproduction or live-sound production. Other schools, such as the Conservatory of Recording Arts and Sciences in Gilbert, Arizona, provide certification for demonstrated proficiency in software and equipment related to audio engineering.
Some universities, such as Indiana University, offer a Bachelor of Arts in music with a concentration in the music industry and a track within that concentration in sound engineering. Texas State University School of Music offers a Bachelor of Science in sound recording technology. The Peabody Institute at Johns Hopkins University offers a Bachelor of Music in recording arts and sciences and a Master of Arts in audio sciences.
Many audio engineers are self-taught. Audio engineering is a multifaceted discipline, requiring a creative appreciation of music and sound quality, a scientific understanding of sound-wave physics, and precise technical familiarity with recording equipment and computer software.
Careers vary with expertise and experience. An assistant audio engineer is typically responsible for the setup and breakdown of a recording session, including the placement of microphones. A staff engineer records the sound. A mixing engineer coordinates multiple recording tracks to produce the desired effect. A mastering engineer adds the finishing touches to the final product and compresses it for mass duplication. A chief engineer works with the record producer, making the technical decisions that help achieve the artistic vision for the project.
Social Context and Future Prospects
Audio engineering is a combination of technology, science, and art, and advancements will come in all three areas. On the technical front, classic recordings, especially those made before 1920, continue to be found, researched, and digitally restored. Improved transducer materials are being sought, and new computer software applications for signal processing are being developed. Surround sound is being refined to accompany three-dimensional television programs and films as well as video games.
Scientific research into psychoacoustics, the study of sound perception, is expanding. The eventual understanding of how music affects a person's brain will advance the field of music therapy, which seems to touch every facet of a person's being to restore and maintain health. Researchers are also exploring the connections between sound characteristics and the perceptions of timbre and spatial placement and between these perceived attributes and listening preference.
The artistic manipulation of sound is broadening the definition of music and musical instruments. Computer-mediated music has inspired the creation of mobile phone and laptop orchestras. Music enhancement by selectively masking undesired frequencies of instruments and highlighting others is introducing new sound combinations previously not experienced.
In the 2020s, artificial intelligence (AI) made a profound entry into the audio recording business. In some cases, AI has been used to enhance existing processes, such as voice-to-text transcription. In other instances, its use has been controversial, because AI can be employed for purposes perceived as positive as well as negative. AI can analyze enormous quantities of data to perform pattern analysis, learning to reproduce an audio style that originated elsewhere, whether the voice of a famous person or the playing style of a musician, and then applying that style in an entirely different creation. This may be done without the consent or knowledge of the original audio source. Such counterfeit productions have been termed “deepfakes,” and they have the potential to mislead unknowing or unsuspecting audiences.
Bibliographies
Dittmar, Tim. Audio Engineering 101: A Beginner's Guide to Music Production. Waltham: Focal, 2012. Print.
Eisenberg, Wick. "The Science of the Sound of Music." Hub, 7 Dec. 2021, hub.jhu.edu/2021/12/07/acoustics-course-engineering-peabody. Accessed 10 Feb. 2020.
Friedman, Dan. Sound Advice: Voiceover from an Audio Engineer's Perspective. Bloomington: AuthorHouse, 2010. Print.
Hampton, Dave. So, You're an Audio Engineer: Well, Here's the Other Stuff You Need to Know. Parker: Outskirts, 2005. Print.
Hampton, Dave. The Business of Audio Engineering. 2nd ed. New York: Hal Leonard, 2013. Print.
“How to apply AI effectively for Audio Engineering.” HogoNext, 8 Apr. 2024, hogonext.com/how-to-apply-ai-effectively-for-audio-engineering. Accessed 3 June 2024.
McIntyre, Hugh. "Is Audio Engineering a Good Career? How to Become an Audio Engineer." CareersinMusic.com, 27 Aug. 2021, www.careersinmusic.com/audio-engineering. Accessed 10 Feb. 2022.
Michael-B. “Unleashing the Power of AI: Exploring AI-Assisted Audio Production.” TrackinSolo, 13 June 2023, trackinsolo.com/ai-assisted-audio-production. Accessed 3 June 2024.
Powell, John. How Music Works: The Science and Psychology of Beautiful Sounds, from Beethoven to the Beatles and Beyond. New York: Little, 2010. Print.
Ricci, Benjamin. “The Pros, Cons, and Future of Artificial Intelligence in Music.” Performermag, 23 Apr. 2020, performermag.com/band-management/the-pros-cons-and-future-of-artificial-intelligence-in-music. Accessed 3 June 2024.
Talbot-Smith, Michael. Sound Engineering Explained. 2nd ed. Woburn: Focal, 2001. Print.
Talbot-Smith, Michael, ed. Audio Engineer's Reference Book. 2nd ed. Woburn: Focal, 1999. Print.