Measurements of length

Summary: The challenge of measuring lengths spurred numerous mathematical developments.

The origin of length measurements certainly predates any recorded history. One can imagine a hunter in the Pleistocene making arrows whose length only marginally exceeds the draw length of his bow, or perhaps measuring spear-throwing distance so that when hunting, throws are not wasted on animals out of range. The introduction of new technologies invariably increased the demand on the range and precision of measuring abilities. To build a house, beams need to be cut to specific lengths and notched at nearly exact positions. To build a cart or any wheeled object, lengths need to be gauged with remarkable precision in order for the wheel to have the freedom to rotate while still having weight-bearing support in all directions.


As older technologies were improved and new inventions arose, the terms and mathematics of length measurement were forced to keep pace. In order to convey the perception of length without having to give an example—indicating, for example, the width of a farming field to a friend, or the height of a horse to a potential buyer—it quickly became useful to adopt certain units (agreed-upon conventions for fixed lengths that could be used for reference when desired). Many of these units originated from roughly constant measurements of parts of the body, simply because this turned every person into a walking measuring stick. The foot and the hand are perhaps the most obvious examples of body-related units of measure. In fact, the width of the palm of the hand is roughly 4 inches (including the thumb when closed against the palm), and is still used today to indicate the height of horses. The inch was originally the width of a thumb. The cubit was perhaps the first standardized unit of length and was defined to be the length from the elbow to the tip of one's longest finger. There is some evidence indicating that the yard was defined by King Henry I to be the distance from the tip of the king's nose to the end of his outstretched thumb.

History of Standardized Measures

The adoption of widespread and official standardization began, as far as is known, in Europe during the reign of Richard the Lion-Hearted in the late twelfth century. At this time it was decreed that, “Throughout the realm there shall be the same yard of the same size and it should be of iron.” During the reign of Edward I, in the late thirteenth century, additional terms were created:

It is remembered that the Iron Ulna of our Lord the King contains three feet and no more; and the foot must contain 12 inches, measured by the correct measure of this kind of ulna; that is to say, one thirty-sixth part [of] the said ulna makes one inch, neither more nor less… . It is ordained that three grains of barley, dry and round make an inch, twelve inches make a foot; three feet make an ulna; five and a half ulna makes a perch (rod); and forty perches in length and four perches in breadth make an acre.

This quest for standardization lasted through multiple revisions of terms and new techniques for representing the meter or the yard. In fact, measurements of weight and time evolved in very similar ways with similar revisions. These efforts occasionally reached giant proportions. In 1791, after a protracted debate over the most natural and elegant way to define these units of length, the French National Assembly decided that the meter should be defined as one ten-millionth of one-quarter of the circumference of the Earth. Using geometric techniques, the French had already estimated this distance to be very close to the previously held definition of the meter. France then dispatched surveyors to measure this distance more exactly. Although the surveyors often encountered hostility, occasionally being arrested as French spies, in 1799 the project was completed and a platinum bar representing the definition of the meter was created and stored in a safe location.

As technology improved, so did the definitions of units of length. In the mid-twentieth century, the meter was redefined using the wavelength of light emitted by krypton-86 atoms. This definition, although much more complicated, had the enormous advantage that the meter could now be reproduced almost exactly by any laboratory with sufficiently advanced equipment. No longer was the definition of the meter something that lived in isolation, requiring careful guarding. Once the laser was invented in 1960, it became practical to redefine the meter in terms of the speed of light, often considered the ultimate physical constant. Thus the meter became, precisely, the distance traveled by light in a vacuum during 1/299,792,458 of a second—a definition that continues to be used at the beginning of the twenty-first century. This definition, of course, raises the question of exactly how a second is defined.
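
Because the speed of light is fixed exactly by this definition, the arithmetic behind it is exact as well. A minimal sketch using Python's exact rational arithmetic (the variable names are illustrative):

```python
from fractions import Fraction

# Speed of light in a vacuum, exact by definition of the SI meter.
C = 299_792_458  # meters per second

# Distance light travels in 1/299,792,458 of a second:
t = Fraction(1, 299_792_458)  # seconds, kept as an exact fraction
distance_m = C * t
print(distance_m)  # 1 -- exactly one meter, with no rounding error
```

Using `Fraction` rather than floating point makes the point of the definition visible: the product is exactly 1, not merely close to it.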

Other Considerations

During this time, however, there were many more complicated considerations than simply how to define a unit of length. Once the units of length were defined, it was invaluable to have the ability to calculate the lengths of objects that seemed difficult to measure or to predict the lengths of objects that did not yet exist. When building a house, the builder must decide first how wide and how deep and how high the house is to be, and then the builder will cut down trees of the right size, and trim them down to obtain the needed logs. But how can the builder be sure that a tree is of the right size? If a builder is planning to cut down a tall tree, it is usually impractical to climb it just for the sake of measurement. Because of this impracticality, people adopted several clever methods for estimating the height of a tree without having to leave the ground.

Native Americans had a particularly clever tool-free method. They would bend over and look through their legs at the tree. Then they would walk away from the tree and repeat this process until they found a point where they could just barely see the top of the tree. It turns out that the distance from this point to the base of the tree is almost exactly the height of the tree (provided the measurer is an average-sized person). The reason this technique works is that a person of average build looking between his or her legs sees upward at about a 45-degree angle. Geometric principles indicate that since a tree makes roughly a 90-degree angle with the ground, the measurer, the base of the tree, and the top of the tree form a 45-45-90 triangle. Such a triangle has legs of equal length, which means that the height of the tree is equal to the distance from the base to where the measurer is standing. The Native Americans probably did not think about it in these exact terms, of course; they most likely discovered this trick by trial and error. Nonetheless, the ability to make such calculations is extremely important when it comes to building large structures.
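
The between-the-legs trick is the special case of a general angle-based estimate: height ≈ distance × tan(sight angle), which equals the distance itself when the angle is 45 degrees. A small sketch under that assumption (the function name is illustrative):

```python
import math

def tree_height(distance_to_base, sight_angle_deg=45.0):
    """Estimate a tree's height from the upward viewing angle to its top.

    With a 45-degree sight line (the between-the-legs trick),
    tan(45 degrees) = 1, so the height simply equals the distance.
    """
    return distance_to_base * math.tan(math.radians(sight_angle_deg))

# Standing 20 meters from the base with a 45-degree sight line,
# the tree is about 20 meters tall:
print(round(tree_height(20.0), 6))
```

The default 45-degree angle reproduces the tool-free method; any other measured angle generalizes it.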

Measuring Triangles

Builders have known since ancient times that triangular supports are very effective in structures that need to hold weight. A natural question, then, is how long to make the triangular piece. Say a person is building a simple box to stand on. The box will be 1 meter wide, 1 meter deep, and 1 meter tall. If the person builds just the box, there is a danger it will collapse when stood upon, so triangular supports are included. Specifically, this person decides to build each of the four “wall” sides as a square with a single diagonal piece added to form two triangles. Since each of the squares is 1 meter tall and 1 meter wide, how long should the single piece of wood be so that it can join opposite corners of the square?

Pythagoras, a Greek philosopher and mathematician who lived around 500 b.c.e., developed a simple formula, the Pythagorean Theorem, that can be used to answer the question. His formula states that for a triangle in which one of the angles measures 90 degrees, and a, b, and c are the side lengths, where c represents the hypotenuse (the side across from the 90-degree angle), then a² + b² = c². Using this formula, the square can be imagined as two triangles, each of which has a 90-degree angle, and it can be seen that c², the length of the hypotenuse squared, must be equal to 1² + 1² = 2. Therefore, the length of the piece of wood needed is √2 meters, which is approximately 1.4 meters.
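
The brace calculation can be checked directly from the theorem. A minimal sketch (the helper name is illustrative):

```python
import math

def hypotenuse(a, b):
    # Pythagorean Theorem: c squared = a squared + b squared
    return math.sqrt(a * a + b * b)

# Diagonal brace for a 1 m x 1 m square panel:
brace = hypotenuse(1.0, 1.0)
print(round(brace, 4))  # 1.4142 -- about 1.4 meters
```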

When Pythagoras answered this question, it provided a huge boost to the ability to manufacture precisely engineered constructs. But practically and mathematically, it also raised additional questions. How can one determine the length of the triangular piece if the triangle does not possess a 90-degree angle? What are the properties of triangles that are best at supporting weight? The mathematical field of trigonometry (from the Greek words trigonon, meaning “triangle” and metron, meaning “measure”) was invented to answer these questions.
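
For the first question, the standard trigonometric tool is the law of cosines, which gives the third side of any triangle from two sides and the angle between them, and which reduces to the Pythagorean Theorem when that angle is 90 degrees. A brief sketch (the function name is illustrative):

```python
import math

def third_side(a, b, gamma_deg):
    """Law of cosines: the length of the side opposite angle gamma,
    given the other two side lengths a and b."""
    gamma = math.radians(gamma_deg)
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(gamma))

# With gamma = 90 degrees this is just the Pythagorean Theorem:
print(round(third_side(1.0, 1.0, 90.0), 4))   # 1.4142
# A flatter 120-degree brace needs a longer piece:
print(round(third_side(1.0, 1.0, 120.0), 4))  # 1.7321
```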

Measuring Curves

The ancient Greeks were obsessed with geometry and incorporated it into many nonmathematical aspects of their lives. This incorporation led to the aesthetic, if intellectually dubious, concept of sacred geometry that infuses some spiritual movements. Perhaps the single most prevalent concept in geometry is that of a circle. The concept of relating various geometric shapes to circles can be seen anywhere, from images showing triangles (or more complicated shapes) connecting to a circle in various ways to Leonardo Da Vinci’s drawing of the Vitruvian Man. Although the circle relates heavily to the measurements computed via trigonometry, it is the source of a different and altogether more elusive measurement of length—the length of curves.

Of course when one thinks of the measurement of length, one generally considers the length of straight line segments. But even thousands of years ago, it was obvious to some mathematicians that it made sense to ask about the length of a curve. It is easy enough to draw a circle, take a piece of string, mold it to nearly the same shape as the circle, cut it off at the right point, and then remove the string, lay it in a straight line, and measure it. This measurement gives, roughly, what is called the “arc length” of the circle. To attempt to compute this length using pure mathematics, mathematicians would undergo a tedious process where they would approximate the curve using dozens, or hundreds, of small line segments. Then they would measure each tiny line segment and add up the results to get, again, roughly, the arc length of the curve. It was through this process that mathematicians discovered the remarkable fact that—although circles could be made with very large or very small arc lengths—for any circle, its arc length (also called “circumference” when referring to a circle) divided by its diameter was always a fixed number. This fixed number is π, which is approximately equal to 3.14. This fact was known to the ancient Egyptians, who, like the Greeks, had a penchant for incorporating mathematical references into culture, literature, and architecture. In fact, the Great Pyramid at Giza was built with a perimeter of 1760 cubits and a height of 280 cubits. 1760÷280 is almost exactly equal to 2π.
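
The segment-by-segment process described above is easy to reproduce numerically: approximate the circle by many short chords, sum their lengths, and watch the ratio of that sum to the diameter settle toward π as the number of segments grows. A sketch (the function name is illustrative):

```python
import math

def circumference_by_segments(radius, n):
    """Approximate a circle's circumference with n straight chords."""
    total = 0.0
    for i in range(n):
        a0 = 2 * math.pi * i / n
        a1 = 2 * math.pi * (i + 1) / n
        x0, y0 = radius * math.cos(a0), radius * math.sin(a0)
        x1, y1 = radius * math.cos(a1), radius * math.sin(a1)
        total += math.hypot(x1 - x0, y1 - y0)  # length of one chord
    return total

radius = 5.0
for n in (6, 24, 96, 1000):
    approx = circumference_by_segments(radius, n)
    # circumference divided by diameter approaches pi:
    print(n, round(approx / (2 * radius), 5))
```

With 96 segments this reproduces roughly the accuracy Archimedes achieved by hand with inscribed polygons.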

Calculus

In part because of the appeal of discoveries made about the arc length of circles and in part because of the practical applications, more research was done into calculating the arc lengths of curves and, in general, other abstruse quantities, such as the area enclosed by a curve, or how wind would change the velocity of a balloon. The bulk of the theory necessary to make these computations is the mathematical field of calculus, coinvented by Isaac Newton and Gottfried Wilhelm Leibniz.

The fundamental concept of calculus is to break up an object into a very large number of very small pieces and then to put those pieces back together again. The advantage calculus has over the cumbersome approach used by earlier mathematicians (breaking up a curve into many individual line segments, measured individually) is twofold. First, instead of using a large number of small pieces, calculus uses infinitely many infinitely small pieces; the error therefore becomes infinitely small, and the methods of calculus yield exactly the correct answer rather than an approximation. Second, calculus incorporates many methods that simplify these calculations involving infinity. These techniques are simple enough that many high school and college students routinely master the subject. However, a lingering flaw in calculus after Newton and Leibniz's development was that the notions of “infinitely small” and “infinitely big” were vague and never precisely defined.
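
The difference between the two approaches can be seen numerically: chord-by-chord approximations of an arc length creep toward the exact value that calculus delivers in closed form. A sketch for the curve y = x² on the interval [0, 1], whose arc length has a known closed form (the names are illustrative):

```python
import math

def f(x):
    return x * x  # the curve y = x^2

def polyline_length(n):
    """Arc length of y = x^2 on [0, 1], approximated by n chords."""
    total = 0.0
    for i in range(n):
        x0, x1 = i / n, (i + 1) / n
        total += math.hypot(x1 - x0, f(x1) - f(x0))
    return total

# Calculus gives the exact limit in closed form:
# integral of sqrt(1 + (2x)^2) dx on [0, 1]
exact = math.sqrt(5) / 2 + math.asinh(2) / 4

for n in (10, 100, 1000):
    # the gap between approximation and exact value shrinks toward 0
    print(n, round(exact - polyline_length(n), 10))
```

The chords always undershoot slightly, because a straight segment is the shortest path between its endpoints; the calculus limit removes that error entirely.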

Mathematician Augustin-Louis Cauchy, in the early nineteenth century, created precise definitions of these elusive concepts. An infinitely small quantity was defined to be a sequence of numbers that gets arbitrarily close to zero; for example, 1, 1/2, 1/3, 1/4, 1/5,… . To make this even more precise, Cauchy pointed out that this sequence has a special property. To illustrate, pick as small a positive number as you can, for example, 1/1,000,000. Draw a circle of that small radius around the point 0. At some point along the sequence, all the terms past that point will lie inside that circle—in other words, all the terms past that point will have distance from 0 less than the specified number. Even if you chose a new number, much smaller than the first one you picked, that would still be true—you would simply have to traverse farther along the sequence before finding that special point. This property is called convergence, and it clearly relies heavily on the notion of distance for its definition.
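
Cauchy's property can be demonstrated directly for the sequence 1, 1/2, 1/3, …: for any chosen small number (the radius of the circle around 0), there is an index past which every term lies inside the circle. A sketch (the function name is illustrative):

```python
def first_index_inside(epsilon):
    """For the sequence a_n = 1/n, find the first index past which
    every later term lies within epsilon of 0."""
    n = 1
    while 1.0 / n >= epsilon:
        n += 1
    return n  # for all m >= n, the distance |1/m - 0| < epsilon

# Shrinking the chosen radius just pushes the threshold further out:
for eps in (0.1, 0.01, 0.001):
    print(eps, first_index_inside(eps))
```

Since 1/n is decreasing, once one term is inside the circle all later terms are too, which is exactly the convergence property described above.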

In the late 1800s and early 1900s, there was a large amount of work done in improving the techniques and perfecting the details of calculus. The mathematical operation that allowed one to find the area enclosed by a curve was called an “integral.” One of the more important improvements to calculus, created by Henri Lebesgue in 1901, was the Lebesgue integral, a concept that extended and strengthened the original idea and allowed the development of more robust mathematical machinery. An interesting feature of this new integral was that it was so general it could be applied to curves that in some sense couldn’t even be graphed. Soon after the development of the Lebesgue integral, a mathematician named Maurice Fréchet, impressed by the generality of Lebesgue’s integral, invented metric spaces.

A metric space is a very general idea: one begins with a collection of objects about which absolutely nothing is known, except that the distances between them can be measured. This idea turned out to be enormously powerful because it captured the precise definitions made by Cauchy while at the same time being so general that it could apply to almost any mathematical system that people wished to study. The genius of the idea was the realization that so much of the complicated mathematics then being done relied, again and again, on one idea: measuring distance. Cauchy defined an infinitely small quantity to be a sequence of numbers that becomes arbitrarily small; this definition generalizes to a sequence of arbitrary objects, where the distances between them become arbitrarily small. Metric spaces quickly permeated all areas of mathematics, and metric space theory remains one of the foundational components of analysis, the branch of mathematics used most heavily by scientists.
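
A minimal illustration of the idea, assuming nothing about the points except a distance function (all names are illustrative): two different metrics on the plane, and a convergence check that works identically for both.

```python
import math

# A metric space is just a set of points plus a distance function d
# satisfying: d(x, y) >= 0; d(x, y) == 0 exactly when x == y;
# symmetry d(x, y) == d(y, x); and the triangle inequality
# d(x, z) <= d(x, y) + d(y, z).

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def tail_within(seq, limit, d, epsilon):
    """Cauchy-style convergence check on a finite prefix: do the
    final terms all lie within epsilon of the limit, under metric d?"""
    return all(d(x, limit) < epsilon for x in seq[-3:])

# The sequence (1/n, 1/n) approaches (0, 0) under either metric:
points = [(1 / n, 1 / n) for n in range(1, 1001)]
print(tail_within(points, (0.0, 0.0), euclidean, 0.01))  # True
print(tail_within(points, (0.0, 0.0), manhattan, 0.01))  # True
```

The convergence check never looks inside the points themselves; it uses only the distance function, which is exactly the abstraction Fréchet's definition isolates.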
