Statistical Quality Control

To be successful in today's global marketplace, companies need to keep a constant eye on the quality of their products and services. Quality control draws on tools from both descriptive statistics and inferential statistics in the continuing pursuit of quality. Quality control charts are a family of simple graphing procedures that help quality control engineers and managers monitor processes and determine whether or not they are in control. In addition, inferential statistics can help the quality control engineer make inferences from the data that can be applied to refining business processes so that they better meet quality goals. Statistical approaches to quality control, however, are not without problems, and continuing development of new tools is needed. One example of an approach to statistical quality control is Six Sigma, which attempts to keep processes within specification 99.99966 percent of the time (3.4 defects per million opportunities).

The globalization of many businesses has brought with it a concomitant concern about quality. Foreign countries with lower wage structures can often produce goods more cheaply than they can be produced in the United States. Although stories of the recall of foreign-made goods often reach the front page, the quiet march of superior products made in other countries does not. Japan, for example, had a reputation for shoddy workmanship before the Second World War, but its manufacturers are now known for the excellence of their automobiles and electronics. For many applications, it is no longer possible to say that "made in the USA" implies a better product. Lower quality costs the organization not only in terms of customer goodwill and loyalty, but also in terms of costs for scrap and rework. It has been estimated that in some companies, scrap and rework costs run as high as 20 to 30 percent of sales, an unacceptable burden in a time of intense competition.

Total Quality Management

In reaction to increased competition at home and abroad, companies have come to realize that to be successful in the global marketplace, they need to keep a constant eye on the quality of their products and services. There are simply too many competitors eagerly waiting to gain a larger market share for a company to be able to rest on its laurels and ignore continuing quality. The 1980s brought with them an increased concern with quality and saw the development of the concept of "total quality management," a management strategy that attempts to continually increase the quality of goods and services as well as customer satisfaction by raising awareness of quality concerns across the organization. The rallying cries for US businesses became "Quality is everyone's job" and "Do it right the first time."

Kaizen

One of the key points of the total quality management approach is kaizen, the Japanese concept of continuously searching for incremental improvement. This is considered by some observers to be the most important difference between US and Japanese businesses. Kaizen is achieved by integrating research and development efforts with production facilities and getting the underlying business processes "right." The key to doing this is the application of statistical methods and tools in the search for better processes and improved quality. It is a truism that if productivity and quality are to improve, current processes must change. Statistics provides a rational basis on which to make these changes and addresses the questions of what data need to be collected and how they should be analyzed in order to give quality control engineers and managers the information they need to continuously improve the quality of goods and services.

Descriptive Statistics & Quality Control

Descriptive Statistics Tools

Descriptive statistics tools used in quality control include histograms, Pareto diagrams, scatter plots, and other graphs.

  • Histograms are a type of vertical bar chart that graphs frequencies of objects within various classes on the y axis against the classes on the x axis. Frequencies are graphed as a series of rectangles.
  • A Pareto diagram is a vertical bar chart that graphs the number of each type of defect for a product or service in order of magnitude (from greatest to least). These charts are used to display the most common types of defects in ranked order of occurrence. Pareto charts are often shown with a cumulative percentage line graph to show more easily the total percentage of defects accounted for by the various types (a minimal sketch of such a chart appears after this list).
  • Another type of graph commonly used in quality control is the scatter plot. This type of diagram graphically depicts two-variable numerical data so that the relationship between the variables can be examined. For example, one might want to know the relationship between number of defects observed in a given month and the cost of the loss of quality to the company. The two values (number and cost) could be graphed on a two-dimensional graph so that one could better understand the relationship.
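
As referenced above, the sketch below gives one way a Pareto chart with a cumulative percentage line might be produced in Python using the widely available matplotlib library; the defect categories and counts are hypothetical and chosen only for illustration.

    # Hypothetical defect counts for illustration only
    import matplotlib.pyplot as plt

    defects = {"solder bridge": 120, "missing part": 80, "misalignment": 45,
               "scratch": 30, "wrong label": 15}

    # Rank defect categories from greatest to least frequency
    ranked = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)
    labels = [k for k, _ in ranked]
    counts = [v for _, v in ranked]

    # Cumulative percentage of all defects accounted for
    total = sum(counts)
    cumulative = []
    running = 0
    for c in counts:
        running += c
        cumulative.append(100 * running / total)

    fig, ax1 = plt.subplots()
    ax1.bar(labels, counts)                 # Pareto bars (frequencies)
    ax1.set_ylabel("Number of defects")

    ax2 = ax1.twinx()                       # second axis for percentages
    ax2.plot(labels, cumulative, marker="o")
    ax2.set_ylabel("Cumulative percentage")
    ax2.set_ylim(0, 110)

    plt.title("Pareto diagram of defect types (hypothetical data)")
    plt.tight_layout()
    plt.show()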

Quality Control

Quality control engineering is most frequently concerned with the quality of goods produced on a production line. With today's high technology equipment and emphasis on automation, it would be tempting to assume that production lines would repeatedly produce quality products without adjustment. Unfortunately, this assumption flies in the face of the laws of physics and of probability. No matter how automated a process or how advanced the technology used to control quality, errors -- in the form of defects and waste -- continue to creep in. Sometimes these errors are due to "noise," random variability that occurs naturally. For example, the amount or quality of ore produced from a mine varies naturally from day to day. Changes in quality or quantity can affect the inputs into the production line (e.g., lower quality ore may result in greater breakage of the widgets which were produced using it). Other errors, however, are due to problems with the process, equipment, materials, or humans working the line. It is the task of the quality control engineer to examine the process for ways that it can be continually improved in order to increase the quality of the product.

Measuring Statistical Control

One of the ways quality control engineers deal with this situation is through the use of quality control charts. These are simple graphing procedures that help quality control engineers and managers monitor processes and determine whether or not they are in control. Quality control charts are based on two statistical ideas. First, random noise occurs naturally in any process. Second, within a random process there is a certain amount of regularity. For normally distributed noise, for example, only about five percent of the time (i.e., roughly one occurrence in 20) will a variable differ from its mean by more than two standard deviations; the short sketch below illustrates this figure. Within these parameters, a process is said to be within statistical control if it performs within the limits of its capability.
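
As a rough illustration of this regularity, the short sketch below uses the normal distribution from scipy.stats to compute the proportion of values expected to fall more than two (or three) standard deviations from the mean; it assumes normally distributed noise, which will not hold for every process.

    from scipy.stats import norm

    # Probability that a normally distributed value falls more than
    # k standard deviations from its mean (both tails combined)
    for k in (2, 3):
        tail = 2 * norm.sf(k)   # sf = survival function, P(Z > k)
        print(f"beyond {k} sigma: {tail:.4f} (about 1 in {round(1 / tail)})")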

Types of Quality Control Charts

Shewhart Control Charts

Quality control charts (also called Shewhart control charts after their originator) are one way to examine whether or not a process is within statistical control. There are two categories of control charts:

  • Control charts for measurements
  • Control charts for compliance

The X-bar chart (so named for x̄, the mathematical symbol for the arithmetic mean) is a chart of the means of some characteristic of the product (e.g., acceptability of solder joints) in small random samples taken from the production line over time. As shown in Figure 2, these means are plotted over time on a chart that contains a center line (i.e., the mean for the process) and upper and lower control limits; a minimal computational sketch follows the figure. The center line is the arithmetic mean of the means of the samples. The upper control limit is three standard deviations above the center line and the lower control limit is three standard deviations below the center line. If all the points plotted on the chart fall between the upper and lower control limits, the process is considered to be in control. However, if a computed sample mean falls outside the control limits, the process is considered to be out of control and is stopped so that an assignable cause can be determined. Sometimes assignable causes are merely easily explained, passing phenomena that are unlikely to occur again. Other times, however, assignable causes are more serious and require corrective action (e.g., replacing a defective part or machine, retraining employees, switching suppliers). In addition to X-bar charts, which track processes by examining the means of samples, quality control charts include R charts, which track the range; p charts, which track the proportion of defective products; c charts, which track the number of defects; and s charts, which track the sample standard deviation.

[Figure 2. X-bar control chart with center line and upper and lower control limits.]
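
The following is a minimal computational sketch of the X-bar chart logic described above, written in Python with NumPy. The measurements are simulated, and the control limits are estimated from the standard deviation of the sample means for simplicity; in practice the limits are usually derived from the average range or average sample standard deviation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical measurements: 20 samples of 5 items each
    samples = rng.normal(loc=10.0, scale=0.2, size=(20, 5))

    sample_means = samples.mean(axis=1)

    # Center line and 3-sigma control limits for the sample means
    # (simplified estimate based on the spread of the means themselves)
    center = sample_means.mean()
    sigma_means = sample_means.std(ddof=1)
    ucl = center + 3 * sigma_means
    lcl = center - 3 * sigma_means

    print(f"center line = {center:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")

    # Flag any sample mean outside the control limits as a signal
    # that an assignable cause should be investigated.
    for i, m in enumerate(sample_means, start=1):
        if m > ucl or m < lcl:
            print(f"sample {i}: mean {m:.3f} is out of control")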

Multivariate Charting Methods

Although Shewhart control charts are the most commonly used charting method in quality control, more sophisticated methods are available. Multivariate charting methods allow the quality control engineer to monitor several related variables simultaneously. There are also methods for charting individual measurements rather than samples (e.g., moving average charts, exponentially weighted moving average charts) and cumulative sum (CUSUM) methods that are more sensitive than Shewhart control charts for detecting small, consistent changes; a sketch of one such chart follows.
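
As an illustration of one such alternative, the sketch below implements a basic exponentially weighted moving average (EWMA) chart for individual measurements. The smoothing constant, target value, process standard deviation, and data are hypothetical choices made only for demonstration.

    import math

    def ewma_chart(values, target, sigma, lam=0.2, width=3.0):
        """Return (ewma, lcl, ucl) triples for an EWMA control chart."""
        z = target          # the EWMA statistic starts at the process target
        results = []
        for t, x in enumerate(values, start=1):
            z = lam * x + (1 - lam) * z
            # Time-varying control limits for the EWMA statistic
            spread = width * sigma * math.sqrt(
                (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * t)))
            results.append((z, target - spread, target + spread))
        return results

    # Hypothetical individual measurements with a small upward drift
    data = [10.0, 10.1, 9.9, 10.2, 10.3, 10.4, 10.5, 10.6]
    for t, (z, lcl, ucl) in enumerate(ewma_chart(data, target=10.0, sigma=0.2), 1):
        flag = " <-- out of control" if not (lcl <= z <= ucl) else ""
        print(f"t={t}: EWMA={z:.3f}  limits=({lcl:.3f}, {ucl:.3f}){flag}")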

Inferential Statistics & Quality Control

Although descriptive statistics can be very helpful for organizing and presenting data, inferential statistics help the quality control engineer make inferences from the data that can be applied to refining business processes so that they better meet quality goals.

Experimental Research

Experimental research entails the testing of different processes in order to determine which one better meets the quality goals predetermined by the organization while concomitantly meeting cost and budgetary demands. There are various ways to test a research hypothesis, depending on the nature and type of the data collected and the assumptions that can be made (or not made) about the underlying distribution and its parameters (i.e., the mean and standard deviation).

  • T-tests. T-tests comprise one class of statistical tests that can be used to analyze quality control data. This type of statistical technique is used to analyze the mean of a single population or to compare the means of two different populations (see the sketch after this list).
  • Z Statistics. In other situations where one wishes to compare the means of two populations, typically when the population standard deviations are known or the samples are large, a z statistic may be used.
  • ANOVA. Another frequently used technique for analyzing quality control data is analysis of variance (ANOVA). This family of techniques is used to analyze the joint and separate effects of multiple independent variables on a single dependent variable and to determine the statistical significance of those effects.
  • Multiple Regression. Quality control engineers also use multiple regression techniques, which allow them to build mathematical models for predicting one or more variables from knowledge of other variables.
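
As a sketch of the first technique in the list above, the following example uses scipy.stats to compare the mean fill weight produced under two hypothetical process settings; the data and the 0.05 significance level are assumptions made purely for illustration.

    from scipy import stats

    # Hypothetical fill weights (grams) under two process settings
    setting_a = [502.1, 498.7, 500.3, 501.0, 499.5, 500.8, 501.6, 498.9]
    setting_b = [503.9, 504.5, 502.8, 505.1, 503.2, 504.0, 502.5, 504.8]

    # Two-sample t-test on the difference between the two means
    t_stat, p_value = stats.ttest_ind(setting_a, setting_b)

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Reject the null hypothesis: the settings differ in mean fill weight.")
    else:
        print("No statistically significant difference detected.")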

Challenges of Using Statistical Techniques

Statistical approaches to quality control, however, are not without their problems. It should be borne in mind that although statistics can be invaluable in assisting quality control efforts, nonstatistical methods are also useful and should not be dismissed. Statistical methods can also be difficult to use and interpret correctly. Some of these difficulties can be resolved through education, but more transparent methodologies and greater consistency of terminology are also needed. In addition, statistical process control and management is occasionally used as a method to demonstrate quality to third parties such as clients and government agencies. As a result, the temptation always exists to manipulate methods in order to make a process seem under control when in fact it is not. Finally, advances in technology and the changing business environment call for advances in statistical quality control methods. New methods that are both flexible and transparent are needed to help today's companies gain or maintain competitive advantages through quality.

Applications

The Six Sigma Process

The Six Sigma process is a spin-off of the Total Quality Management strategy that attempts to continually increase the quality of goods and services as well as customer satisfaction through raising awareness of quality concerns across the organization. The term "six sigma" refers to the number of standard deviations (symbolized by the Greek letter sigma, σ) a data point lies from the middle of the normal curve. In terms of quality control, six sigma signifies the degree to which a product reaches its quality goal. At this level, a product is reaching its quality goal 99.99966 percent of the time, or has only 3.4 defects per million opportunities. The Six Sigma program attempts to reduce costs by making changes before defects or problems occur. As part of the Six Sigma program, employees and managers are trained in relevant skills, including statistical analysis, project management, and problem-solving methodology, so that they can use these skills to reduce defects in their products. Most organizations that have implemented Six Sigma programs report increased profitability resulting from lower production costs, from doing things correctly the first time, and from not having to redo work previously done.

Goal of Six Sigma

Six sigma is a difficult goal to reach, but it is consistent with the philosophy of kaizen. The goal of six sigma was chosen for a number of reasons. First, the three sigma goal (i.e., 99.73 percent) that was previously the standard is unacceptable in many situations. For example, in the pharmaceutical industry, a three sigma target would mean that 0.27 percent of the time the product would not meet production specifications. Although this may seem like a small number, 0.27 percent translates to 2,700 out of every million prescriptions for that drug being filled with drugs that had either too much or too little of the active ingredient. Similarly, three sigma in the airline industry would mean that 2,700 unsatisfactory landings out of every million are acceptable. Although the difference between three sigma and six sigma sounds small on paper, in situations such as these it can literally make the difference between life and death. A second reason that six sigma was adopted as the goal for quality control of processes is that it spurs companies to work harder to improve quality in their processes, thereby helping them rise above their competition. This level of quality can also help a company attain a world-class reputation for excellence and become competitive not only locally or regionally, but globally.

The implementation of Six Sigma programs begins with determining, for each component, the number of opportunities for performing an operation successfully (or, equivalently, for a defect to occur). The more complex a product, the more opportunities exist for defects to occur. The number of defects that occur is then related to the corresponding sigma level of the normal distribution to determine how close the process is to the six sigma goal; a minimal sketch of this calculation appears below. Six Sigma programs are data driven and require an investment of time, money, and human capital to gather, store, and analyze data.
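
The sketch below illustrates one way this calculation might be carried out in Python, converting a hypothetical defect count into defects per million opportunities (DPMO) and then into an approximate sigma level. The 1.5-sigma shift added at the end is the adjustment conventionally built into Six Sigma tables and is an assumption of this illustration.

    from scipy.stats import norm

    # Hypothetical inspection results
    units_inspected = 25_000
    opportunities_per_unit = 12      # distinct ways each unit could be defective
    defects_observed = 45

    # Defects per million opportunities
    dpmo = defects_observed / (units_inspected * opportunities_per_unit) * 1_000_000

    # Sigma level: the normal quantile of the defect-free proportion,
    # plus the conventional 1.5-sigma shift used in Six Sigma tables.
    sigma_level = norm.ppf(1 - dpmo / 1_000_000) + 1.5

    print(f"DPMO = {dpmo:.1f}, approximate sigma level = {sigma_level:.2f}")

    # Check: the 3.4-defects-per-million benchmark corresponds to six sigma.
    print(f"3.4 DPMO -> sigma level {norm.ppf(1 - 3.4e-6) + 1.5:.2f}")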

Terms & Concepts

Business Process: Any of a number of linked activities that transform an input to the organization into an output that is delivered to the customer. Business processes include management processes, operational processes (e.g., purchasing, manufacturing, marketing), and supporting processes (e.g., accounting, human resources).

Control Charts: A family of quality control charting techniques that help determine whether or not a process is under control. X-bar charts keep track of processes by examining the means of samples, R charts keep track of the range, p charts track the proportion of defective products, c charts track the number of defects, and s charts examine the sample standard deviation. Also called Shewhart control charts after their originator.

Descriptive Statistics: A subset of mathematical statistics that describes and summarizes data.

Globalization: Globalization is the process of businesses or technologies spreading across the world. This creates an interconnected, global marketplace operating outside constraints of time zone or national boundary. Although globalization means an expanded marketplace, products are typically adapted to fit the specific needs of each locality or culture to which they are marketed.

Hypothesis: An empirically-testable declaration that certain variables and their corresponding measures are related in a specific way proposed by a theory.

Inferential Statistics: A subset of mathematical statistics used in the analysis and interpretation of data. Inferential statistics are used to make inferences such as drawing conclusions about a population from a sample and in decision making.

Kaizen: The Japanese concept of continuously searching for incremental improvement. This is considered by some observers to be the most important difference between US and Japanese businesses.

Mean: An arithmetically derived measure of central tendency in which the sum of the values of all the data points is divided by the number of data points.

Quality Control: A set of procedures or processes that help ensure that products or services comply with predefined quality criteria or otherwise meet the requirements of the client or customer. Quality control activities include the collection and statistical analysis of data to determine whether the process includes systematic (i.e., nonrandom) variation in quality. Quality control activities include monitoring and inspecting products or services in relation to predefined specifications or quality standards, determining the cause of variation, and developing and implementing changes to help meet target quality goals.

Sample: A subset of a population. A random sample is a sample that is chosen at random from the larger population with the assumption that such samples tend to reflect the characteristics of the larger population.

Six Sigma (6σ): An approach to improving quality. The term "six sigma" is a statistical term referring to the degree to which a product reaches its quality goal. At six sigma, a product is reaching its quality goal 99.99966 percent of the time, or has only 3.4 defects per million opportunities. The six sigma system was originally developed by Motorola.

Standard Deviation: A measure of variability that describes how far the typical score in a distribution is from the mean of the distribution. The standard deviation is obtained by determining the deviation of each score from the mean (i.e., subtracting the mean from the score), squaring the deviations (i.e., multiplying them by themselves), adding the squared deviations, dividing by the total number of scores, and taking the square root of the result. The larger the standard deviation, the more spread out the scores are around the mean.
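
Expressed as a formula (using σ for the standard deviation and x̄ for the mean of the N scores, consistent with the procedure just described):

    \sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2}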

Statistics: A branch of mathematics that deals with the analysis and interpretation of data. Mathematical statistics provides the theoretical underpinnings for various applied statistical disciplines, including business statistics, in which data are analyzed to find answers to quantifiable questions. Applied statistics uses these techniques to solve real world problems.

Total Quality Management (TQM): A management strategy that attempts to continually increase the quality of goods and services as well as customer satisfaction through raising awareness of quality concerns across the organization.

Bibliography

Black, K. (2006). Business statistics for contemporary decision making (4th ed.). New York: John Wiley & Sons.

Gutiérrez, L., Bustinza, O. F., & Molina, V. (2012). Six sigma, absorptive capacity and organisational learning orientation. International Journal of Production Research, 50(3), 661-675. Retrieved November 27, 2013 from EBSCO Online Database Business Source Premier. http://search.ebscohost.com/login.aspx?direct=true&db=buh&AN=74279949

John, P. W. (1990). Statistical methods in engineering and quality assurance. New York: John Wiley & Sons.

Kumar, M., Antony, J., & Tiwari, M. K. (2011). Six Sigma implementation framework for SMEs -- a roadmap to manage and sustain the change. International Journal of Production Research, 49(18), 5449-5467. Retrieved November 27, 2013 from EBSCO Online Database Business Source Premier. http://search.ebscohost.com/login.aspx?direct=true&db=buh&AN=63634570

Sans, W. (2013). A critical review of statistical methods used in quality control. Economic Quality Control, 27(2), 97-142. Retrieved November 27, 2013 from EBSCO Online Database Business Source Premier. http://search.ebscohost.com/login.aspx?direct=true&db=buh&AN=85862366

Wood, M. (2001). Statistical process monitoring in the twenty-first century. In J. Antony (Ed.), Understanding, managing and implementing quality (pp. 103-119). London: Routledge. Retrieved August 21, 2007, from EBSCO Online Database Business Source Complete. http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=17441552&site=ehost-live

Suggested Reading

Antony, J. (2001). Understanding, managing and implementing quality. London: Routledge.

Beckford, J. (2002). Quality. London: Routledge.

Caulcutt, R. (2001). Why is Six Sigma so successful? Journal of Applied Statistics, 28(3/4), 301-306. Retrieved August 21, 2007, from EBSCO Online Database Business Source Complete. http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=4395731&site=ehost-live

Coleman, S. Y., Arunakumar, G., Foldvary, F., & Feltham, R. (2001). SPC as a tool for creating a successful business measurement framework. Journal of Applied Statistics, 28(3/4), 325-334. Retrieved August 21, 2007, from EBSCO Online Database Business Source Complete. http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=4395729&site=ehost-live

Conklin, J. D. (2013). Roundabout estimation. Quality Progress, 4(5), 54-56. Retrieved November 27, 2013 from EBSCO Online Database Business Source Premier. http://search.ebscohost.com/login.aspx?direct=true&db=buh&AN=87468388

de Mast, J. (2006). Six Sigma and competitive advantage. Total Quality Management & Business Excellence, 17(4), 455-464. Retrieved August 21, 2007, from EBSCO Online Database Business Source Complete. http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=20379915&site=ehost-live

Leitnaker, M. G. & Cooper, A. (2005). Using statistical thinking and designed experiments to understand process operation. Quality Engineering, 17(2), 279-289. Retrieved August 21, 2007, from EBSCO Online Database Business Source Complete. http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=17003962&site=ehost-live

Yang, M., Wu, Z., Lee, K., & Khoo, M. C. (2012). The X control chart for monitoring process shifts in mean and variance. International Journal of Production Research, 50(3), 893-907. Retrieved November 27, 2013 from EBSCO Online Database Business Source Premier. http://search.ebscohost.com/login.aspx?direct=true&db=buh&AN=74279946

Essay by Ruth A. Wienclaw, PhD

Dr. Ruth A. Wienclaw holds a doctorate in industrial/organizational psychology with a specialization in organization development from the University of Memphis. She is the owner of a small business that works with organizations in both the public and private sectors, consulting on matters of strategic planning, training, and human/systems integration.