Parallel processing

Summary: Parallel processing reduces the run-time of a computation by dividing the work among two or more processors or cores that operate simultaneously.

In computing, parallel processing is the practice of performing multiple operations or tasks simultaneously on two or more processing cores. Ideally, this arrangement reduces the overall run-time of a computer program because the workload is shared among a number of processing engines: central processing units (CPUs) or cores. In practice, it is often difficult to distribute the instructions of a program so that each core operates continuously and efficiently without interfering with the others. Parallel processing differs from multitasking, in which a single CPU core gives the appearance of simultaneously executing instructions from several different programs by rapidly switching between them, or interleaving their instructions.

Modern computers typically include multi-core processor chips with two or four cores. The most advanced supercomputers of the early twenty-first century may have thousands of multi-core CPU nodes organized as a cluster of single-processor computers and connected by a special-purpose, high-speed fiber communication network. It is also possible to perform parallel processing by connecting computers over a local area network, or even across the Internet, but the comparatively slow communication between nodes forces the individual processing elements to work predominantly in isolation.

Parallel processing requires data to be shared among processors, which leads to the concept of “shared memory,” in which multiple processing cores work with the same physical memory. In large computer clusters, the memory is usually distributed across the nodes, with each node storing its own part of the full problem. Data are exchanged between nodes using message-passing software such as the Message Passing Interface (MPI).
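To make the message-passing model concrete, the following is a minimal sketch in Python using the mpi4py binding to MPI; the choice of binding, the suggested file name, and the example workload (a distributed sum of squares) are illustrative assumptions rather than anything prescribed here. Each process (rank) holds only its own slice of the problem, and the partial results are combined with a single reduction.

```python
# Minimal message-passing sketch using mpi4py (illustrative; run with,
# for example: mpirun -n 4 python sum_mpi.py).
# Each rank holds only its own slice of the data (distributed memory),
# and the partial results are combined on rank 0 by a reduction.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this process's identifier (0 .. size-1)
size = comm.Get_size()          # total number of processes

N = 1_000_000                   # size of the full (conceptual) problem

# Each rank works on its own contiguous slice of the index range.
start = rank * N // size
stop = (rank + 1) * N // size
local_sum = sum(i * i for i in range(start, stop))

# Message passing: combine the partial sums on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum of squares below {N}: {total}")
```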


Amdahl’s Law and Gustafson’s Law

The speed-up gained through parallelization of a program would ideally be linear; for example, doubling the number of processing elements should halve the run-time. However, very few parallel algorithms achieve this target. Most parallel programs attain a near-linear speed-up for small numbers of processing elements, but as the number of processors grows, adding further cores provides negligible benefit.

The potential speed-up of an algorithm on a parallel computing platform is given by Amdahl’s law, originally formulated by Gene Amdahl in the 1960s. A large mathematical or engineering problem will typically consist of several parallelizable parts and several non-parallelizable parts. The overall speed-up attainable through parallelization is limited by the size of the non-parallelizable portion of the program and is given by the equation

S = 1 / (1 − P)

where S is the speed-up of the program (as a factor of its original sequential runtime), and P is the fraction that is parallelizable. Amdahl’s law assumes the size of the problem is fixed and that the relative proportion of the sequential section is independent of the number of processors. For example, if the sequential portion of a program is 10% of the run-time (P=0.9), no more than a 10-times speed-up could be obtained, regardless of how many processors are added. This characteristic puts an upper limit on the usefulness of adding more parallel execution units.
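As a quick numerical check of this limit, the short Python sketch below evaluates the finite-processor form of Amdahl’s law, S = 1 / ((1 − P) + P/N), which tends to 1 / (1 − P) as the number of processors N grows; the function name is ours, chosen only for illustration.

```python
# Illustrative evaluation of Amdahl's law for N processors:
#   S = 1 / ((1 - P) + P / N), which approaches 1 / (1 - P) as N grows.
def amdahl_speedup(parallel_fraction, num_processors):
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / num_processors)

# With P = 0.9 (a 10% sequential portion), the speed-up saturates near 10x,
# no matter how many processors are added.
for n in (2, 4, 16, 256, 1_000_000):
    print(f"N = {n:>9}: speed-up = {amdahl_speedup(0.9, n):.2f}")
```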

Gustafson’s law is closely related to Amdahl’s law, but it makes less restrictive assumptions about the problem; in particular, it allows the size of the problem to grow with the number of processors. It can be formulated algebraically as

S(P) = P − a(P − 1)

where P is the number of processors, S is the speed-up, and a is the non-parallelizable proportion of the process.
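The same kind of sanity check can be made for Gustafson’s law, using the formula as given above; again the function name is ours, and the chosen values of a and P are only examples.

```python
# Illustrative evaluation of Gustafson's law: S(P) = P - a(P - 1),
# where P is the number of processors and a the non-parallelizable fraction.
def gustafson_speedup(num_processors, serial_fraction):
    p, a = num_processors, serial_fraction
    return p - a * (p - 1)

# Unlike Amdahl's fixed-size picture, the scaled speed-up keeps growing
# almost linearly as processors are added when a is small.
for p in (2, 4, 16, 256):
    print(f"P = {p:>3}, a = 0.1: scaled speed-up = {gustafson_speedup(p, 0.1):.1f}")
```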

Applications

Parallel computing is used in a broad range of fields, including mathematics, engineering, meteorology, bioinformatics, economics, and finance. These applications, however, typically involve performing one or more of a small set of highly parallelizable operations, such as sparse or dense linear algebra, spectral methods, n-body problems, or Monte Carlo simulations. Frequently, the first step in exploiting the power of parallel processing is to express a problem in terms of these basic parallelizable building blocks.
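As an illustration of one of these building blocks, the sketch below parallelizes a Monte Carlo estimate of π across local CPU cores using Python’s multiprocessing module; the batch sizes and batch count are arbitrary choices made for the example. The sample batches are completely independent, which is exactly what makes this method so easy to parallelize.

```python
# Parallel Monte Carlo estimate of pi: independent batches of random samples
# are farmed out to a pool of worker processes, and the hit counts are summed.
import random
from multiprocessing import Pool

def count_hits(num_samples):
    """Count random points in the unit square that fall inside the unit circle."""
    rng = random.Random()
    hits = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    batches = [250_000] * 8      # eight independent batches of samples
    with Pool() as pool:         # one worker per available core by default
        hits = sum(pool.map(count_hits, batches))
    total_samples = sum(batches)
    print(f"pi is approximately {4.0 * hits / total_samples:.4f}")
```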

Parallel processing plays a large part in many aspects of everyday life, such as weather prediction, stock market prediction, and the design of cars and aircraft. As parallel computers become larger and faster, it becomes feasible to solve larger problems that previously took too long to run on a single computer.
