Mathematics and cinematography

Summary: A variety of mathematics, including signal processing and geometry, is required for making movies, with applications in camerawork, lighting, and sound.

It takes many people with many different talents to make a movie. Some of the required talents are highly technical, so these filmmakers must have a working knowledge of various mathematical principles to employ the tools of their trade. A sampling of these areas includes camerawork, sound recording, and special effects. Signal processing, a branch of applied mathematics, is necessary both during the production of a film (for the selection of filters and of set dressings of acceptable visual frequency) and in postproduction, where dialogue must be made understandable in the sound track. In addition, the shooting of a scene itself, with its restrictions on space, desired camera angles, and lighting needs, often becomes a problem in geometry. Physical phenomena and their interactions can increasingly be modeled using mathematics. Mathematicians such as Tony DeRose, who won a 2006 scientific and technical Academy Award for his work on surface representations, play an increasingly important role in producing modern special effects.


Camera Work

In the shooting of a scene, the number of variables is considerable, and those directing the operation of a camera have many decisions to make. Considerations include viewing angles, shutter speeds, lens selection, current lighting, and the format of the film. These decisions become far more complicated when working with miniatures, where the aim is to fool the eye of the viewer into believing the miniature is a real, full-sized object. The choice of the camera itself, which has many parameters, also has a significant effect on the look of the film.

The operation of the camera depends a great deal on the lighting of the set. The f-stop, used on cameras for many years, is the ratio of the focal length of the lens to the diameter of the entrance pupil, and it serves to control the quantity of light reaching the film. However, because some of the light entering the lens never reaches the film plane, being lost to reflection and absorption within the lens elements, more modern cinema cameras use the T-stop calibration, a measure of the amount of light actually transmitted to the film plane. If no light were lost to these optical factors, the two values would be identical. Both measures are used extensively: the f-stop for depth-of-field calculations and the T-stop for light transmission.
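
As a rough illustration of the two calibrations, the Python sketch below computes both for a hypothetical lens; the focal length, pupil diameter, and transmittance are assumed values, and the key relationship is that the T-stop equals the f-stop divided by the square root of the lens transmittance.

```python
# Sketch of the f-stop/T-stop relationship; all lens values are assumptions.
import math

focal_length_mm = 50.0     # hypothetical focal length of the lens
pupil_diameter_mm = 25.0   # hypothetical diameter of the entrance pupil
transmittance = 0.85       # hypothetical fraction of light the lens passes

# f-stop: the purely geometric ratio of focal length to entrance pupil diameter.
f_stop = focal_length_mm / pupil_diameter_mm

# T-stop: the f-stop corrected for light lost inside the lens, so that it
# measures the light actually reaching the film plane.
t_stop = f_stop / math.sqrt(transmittance)

print(f"f-stop: f/{f_stop:.1f}")   # f/2.0
print(f"T-stop: T{t_stop:.2f}")    # T2.17 -- equals the f-stop only if no light is lost
```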

The gaffer (the crew boss responsible for planning the lighting) uses a variety of tools to light a scene so that it can be recorded with the desired viewing window, shutter speed, and camera angles, as well as to satisfy various aesthetic considerations. One such tool is the inverse square law, which states that the illumination a subject receives from a single light source is inversely proportional to the square of its distance from that source. Using this law, a small light placed close to the subject puts less light on the background, if that is desired, while a larger light placed farther away covers a larger area at a similar light level. The lighting also determines the T-stop to be used on the camera, so light placements must be planned carefully and light output levels must be known exactly.
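
A minimal sketch of how the inverse square law guides these choices, using made-up intensities and distances, follows.

```python
# Inverse square law sketch; intensities and distances are made-up numbers.
def illumination(source_intensity: float, distance_m: float) -> float:
    """Light falling on a surface is inversely proportional to distance squared."""
    return source_intensity / distance_m ** 2

# A small light close to the actor lights the subject but leaves the background dim.
print(illumination(100.0, 1.0))    # subject at 1 m    -> 100.0
print(illumination(100.0, 4.0))    # background at 4 m -> 6.25

# A larger light farther away gives the subject the same level
# while lighting a much wider area more evenly.
print(illumination(1600.0, 4.0))   # subject at 4 m    -> 100.0
print(illumination(1600.0, 7.0))   # background at 7 m -> about 32.7
```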

One calculation the camera operator must constantly make is determining the depth of field. A lens can focus on only one distance at a time, so, strictly speaking, the foreground and background of a scene are never both in focus at once; in fact, only one plane through the scene is in exact focus at any given moment. However, objects near that distance will not appear blurry, because the human eye cannot perceive the imperfection within a certain range of the point of focus. The interval of distances in which all objects are acceptably focused is called the “depth of field.” To determine it, one must first find the hyperfocal distance, the smallest focus distance such that all objects from half that distance to infinity are in acceptable focus. The hyperfocal distance can be approximated algebraically from the focal length and f-stop setting of the lens, with a parameter known as the “circle of confusion” setting the standard for acceptable focus. Finally, the near and far limits of the depth of field can be determined with the equations

Dn = HS/(H + S) and Df = HS/(H − S),

where Dn and Df are the near and far limits of the depth of field, S is the distance from the camera to the subject, and H is the hyperfocal distance. These formulas are simplified versions of the full depth-of-field equations, which have an interesting geometric derivation.
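
These quantities can be computed directly. The short Python sketch below uses illustrative values (a 35 mm lens at f/4, a 0.03 mm circle of confusion, and a subject 3 meters away, none of which come from this article) together with the common approximation for the hyperfocal distance, H ≈ f²/(N × c) + f.

```python
# Depth-of-field sketch using the simplified formulas above.
# Lens parameters (35 mm, f/4, c = 0.03 mm) and the 3 m subject distance are assumed.

def hyperfocal_mm(focal_length_mm: float, f_stop: float, coc_mm: float) -> float:
    """Common approximation: H = f^2 / (N * c) + f, with c the circle of confusion."""
    return focal_length_mm ** 2 / (f_stop * coc_mm) + focal_length_mm

def dof_limits_mm(H: float, S: float) -> tuple[float, float]:
    """Near and far limits Dn = HS/(H + S), Df = HS/(H - S); the far limit
    becomes infinite once the subject sits at or beyond the hyperfocal distance."""
    near = H * S / (H + S)
    far = H * S / (H - S) if S < H else float("inf")
    return near, far

H = hyperfocal_mm(35.0, 4.0, 0.03)    # about 10.24 m for these assumed values
near, far = dof_limits_mm(H, 3000.0)  # subject focused at 3 m
print(f"hyperfocal distance: {H / 1000:.2f} m")
print(f"acceptable focus from {near / 1000:.2f} m to {far / 1000:.2f} m")
```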

Audio and Visual Signal Processing

The production sound mixer is in charge of recording the sound and dialogue for a film. Typically, crew members are hired to operate microphones, often mounted at the end of long boom poles, to capture the various sounds on the set, while wireless microphones attached to the actors record dialogue. Sound effects are recorded separately, as is the score. During postproduction, unwanted noise must be filtered out of the recordings, the dialogue must be made understandable, and the effects, score, and dialogue must be mixed together meaningfully.

To remove background noise, the audio signal (composed of sound waves) is decomposed using a Fourier transform, so that it is expressed as a sum of simpler trigonometric components. These components are then analyzed to isolate frequencies corresponding to unwanted artifacts in the recording, such as the sound of wind buffeting the microphone. Broadband background noise is suppressed by discarding the Fourier components whose amplitude falls below a certain level. Finally, by reversing the transform, a cleaner, more filmworthy audio signal is obtained.
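
The following Python sketch, using NumPy and a synthetic one-second recording rather than real production audio, illustrates this transform-threshold-invert workflow.

```python
# Transform-threshold-invert sketch with a synthetic recording (NumPy only).
import numpy as np

rate = 48_000                                  # audio samples per second
t = np.arange(rate) / rate                     # one second of sound
dialogue = np.sin(2 * np.pi * 220 * t)         # stand-in for the wanted signal
noise = 0.05 * np.random.randn(rate)           # broadband background noise
recording = dialogue + noise

spectrum = np.fft.rfft(recording)              # decompose into trigonometric components
threshold = 0.1 * np.abs(spectrum).max()       # keep only strong components
spectrum[np.abs(spectrum) < threshold] = 0.0   # discard low-amplitude "noise" components
cleaned = np.fft.irfft(spectrum, n=recording.size)  # reverse the transform

# The residual error is far smaller than the noise that was added.
print(np.abs(cleaned - dialogue).max())
```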

Processing must also be done to the visual signal. Video cameras record at a “frame rate,” the frequency with which the camera produces images. These images are recorded as discrete signals, which are then reconstructed on film. If objects of a high visual frequency appear in a scene, a loud tie, for example, then the image on film will experience aliasing, causing visual distortion or artifacts. To avoid this, the set designer or costumer must avoid objects above a certain visual frequency, called the “Nyquist frequency,” which is half the frame rate. If images of high-visual-frequency objects are desired, then an antialiasing filter must be used, such as a low-pass filter, which passes the low-frequency content of the image while reducing the amplitude of the high-frequency content. Filmmakers have many filters that can be used to capture a wide variety of objects in a scene, depending on the mix of visual frequencies present.
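
A small Python example, with an assumed 24-frame-per-second camera and made-up flicker frequencies, shows why a visual frequency above the Nyquist frequency is indistinguishable, once sampled, from a lower-frequency alias.

```python
# Aliasing sketch: a 24 fps camera cannot distinguish a 20 Hz flicker
# from its 4 Hz alias, because 20 Hz exceeds the 12 Hz Nyquist frequency.
import numpy as np

frame_rate = 24.0                    # frames per second (assumed)
nyquist = frame_rate / 2             # 12 Hz
print(f"Nyquist frequency: {nyquist} Hz")

frames = np.arange(48) / frame_rate  # two seconds of sampling instants
flicker_20hz = np.cos(2 * np.pi * 20.0 * frames)  # above the Nyquist frequency
alias_4hz = np.cos(2 * np.pi * 4.0 * frames)      # |20 - 24| = 4 Hz alias

# Sampled at 24 fps, the two are identical frame for frame.
print(np.allclose(flicker_20hz, alias_4hz))       # True
```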

Bibliography

Burum, Stephen, ed. American Cinematographer Manual. Hollywood, CA: ASC Press, 2007.

Haunsperger, Deanna, and Steve Kennedy. “Math Makes the Movies.” Math Horizons 9 (November 2001).

McAdams, A., S. Osher, and J. Teran. “Crashing Waves, Awesome Explosions, Turbulent Smoke, and Beyond: Applied Mathematics and Scientific Computing in the Visual Effects Industry.” Notices of the American Mathematical Society 57, no. 5 (2010).