Six Sigma - Definition, Methodology, Origin and More

The term Six Sigma

Sigma (the lower-case Greek letter σ) is used to represent the standard deviation (a measure of variation) of a population; the lower-case Latin letter s denotes an estimate based on a sample. The term "six sigma process" comes from the notion that if one has six standard deviations between the mean of a process and the nearest specification limit, practically no items will fail to meet the specifications. This is the basis of the Process Capability Study, often used by quality professionals. The term "Six Sigma" has its roots in this tool, rather than in simple process standard deviation, which is also measured in sigmas. Criticism of the tool itself, and of the way the term was derived from it, often sparks criticism of Six Sigma.

The widely accepted definition of a six sigma process is one that produces 3.4 defective parts per million opportunities (DPMO). A process that is normally distributed will have 3.4 parts per million beyond a point that is 4.5 standard deviations above or below the mean (one-sided Capability Study). This implies that 3.4 DPMO corresponds to 4.5 sigmas, not six as the process name would imply. This can be confirmed by running a Capability Study in QuikSigma or Minitab on data with a mean of 0, a standard deviation of 1, and an upper specification limit of 4.5. The 1.5 sigmas added to the name Six Sigma are arbitrary; they are called the "1.5 sigma shift" (SBTI Black Belt material, ca. 1998). Dr. Donald Wheeler dismisses the 1.5 sigma shift as "goofy".
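Outside of QuikSigma or Minitab, the same check can be made by evaluating the normal tail probability directly. A minimal sketch in Python, assuming SciPy is available:

    import scipy.stats as st

    # One-sided tail area beyond 4.5 standard deviations for a
    # standard normal distribution (mean 0, standard deviation 1).
    tail_probability = st.norm.sf(4.5)

    # Convert the tail probability to defects per million opportunities.
    dpmo = tail_probability * 1_000_000
    print(f"DPMO at 4.5 sigma: {dpmo:.1f}")  # ~3.4

    # For contrast, a true six sigma tail is vanishingly small.
    print(f"DPMO at 6.0 sigma: {st.norm.sf(6.0) * 1_000_000:.4f}")  # ~0.001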

In a Capability Study, sigma refers to the number of standard deviations between the process mean and the nearest specification limit, rather than to the standard deviation of the process, which is also measured in "sigmas". As the process standard deviation goes up, or the mean of the process moves away from the center of the tolerance, the Process Capability sigma number goes down, because fewer standard deviations will then fit between the mean and the nearest specification limit (see Cpk Index). The notion that, in the long term, processes usually do not perform as well as they do in the short term is correct; it implies that a Process Capability sigma based on long-term data should be less than or equal to an estimate based on short-term data. However, the original use of the 1.5 sigma shift, as shown below, implicitly assumes the opposite.
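As a concrete illustration of that bookkeeping, here is a minimal sketch in Python (the process numbers are hypothetical) of how the capability sigma level and the related Cpk index fall as the mean moves off center or the standard deviation grows:

    # Hypothetical process: tolerance 90..110, mean drifted off center.
    usl, lsl = 110.0, 90.0      # upper / lower specification limits
    mean, sigma = 103.0, 2.0    # estimated process mean and standard deviation

    # Capability sigma level: standard deviations between the mean
    # and the NEAREST specification limit.
    sigma_level = min(usl - mean, mean - lsl) / sigma

    # Cpk expresses the same distance in units of 3 sigma.
    cpk = sigma_level / 3.0

    print(f"sigma level = {sigma_level:.2f}, Cpk = {cpk:.2f}")
    # A larger sigma or a more off-center mean lowers both numbers.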

As sample size increases, the estimate of the standard deviation converges much more slowly than the estimate of the mean (see confidence interval). Even with a few dozen samples, the estimate of the standard deviation drags an alarming amount of uncertainty into the Capability Study calculations. It follows that estimates of defect rates can be greatly influenced by uncertainty in the estimate of the standard deviation, and that the defective-parts-per-million estimates produced by Capability Studies often ought not to be taken too literally.
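To see how much that uncertainty matters, one can put a confidence interval around the sigma estimate via the chi-squared distribution and propagate both ends into the defect-rate figure. A rough sketch, assuming SciPy and made-up numbers:

    import scipy.stats as st

    n = 30          # sample size
    s = 1.0         # sample standard deviation
    usl = 4.5       # distance from the mean to the specification limit
    alpha = 0.05

    # 95% confidence interval for the true sigma, from the chi-squared
    # distribution of (n - 1) * s^2 / sigma^2.
    lo = s * ((n - 1) / st.chi2.ppf(1 - alpha / 2, n - 1)) ** 0.5
    hi = s * ((n - 1) / st.chi2.ppf(alpha / 2, n - 1)) ** 0.5

    # The resulting DPMO estimates span several orders of magnitude.
    for sigma in (lo, s, hi):
        dpmo = st.norm.sf(usl / sigma) * 1e6
        print(f"sigma = {sigma:.3f} -> DPMO = {dpmo:.2f}")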

Estimates of the number of defective parts per million produced also depend on knowing something about the shape of the distribution from which the samples are drawn. Unfortunately, there is no means of proving that data belong to any particular distribution; one can only assume normality, based on finding no evidence to the contrary. Estimating defective parts per million down into the hundreds or tens of units on the basis of such an assumption is wishful thinking, since actual defects often arise from the very departures from normality that have been assumed not to exist.
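That one-sided logic is easy to demonstrate with a normality test such as Shapiro-Wilk: the test can reject normality, but passing it only means that no evidence against normality was found. A minimal sketch on simulated data:

    import numpy as np
    import scipy.stats as st

    rng = np.random.default_rng(0)
    sample = rng.normal(loc=0.0, scale=1.0, size=50)

    stat, p_value = st.shapiro(sample)
    if p_value < 0.05:
        print("Normality rejected; capability math on this data is suspect.")
    else:
        # Not proof of normality -- only absence of contrary evidence, and
        # the extreme tails that drive PPM estimates remain unobserved.
        print(f"No evidence against normality (p = {p_value:.2f}).")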

While the particulars of the methodology were originally formulated by Bill Smith at Motorola in 1986, Six Sigma was heavily inspired by six preceding decades of quality improvement methodologies such as quality control, TQM, and Zero Defects. Like its predecessors, Six Sigma asserts the following:

  • Continuous efforts to reduce variation in process outputs are key to business success.
  • Manufacturing and business processes can be measured, analyzed, improved, and controlled.
  • Achieving sustained quality improvement requires commitment from the entire organization, particularly from top-level management.

In addition to Motorola, companies that adopted Six Sigma early on and continue to practice it today include Honeywell International (previously known as AlliedSignal) and General Electric (where it was introduced by Jack Welch).

Recently, some practitioners have used the TRIZ methodology for problem solving and product design as part of a Six Sigma approach.

Origin

Bill Smith did not really "invent" Six Sigma in the 1980s; rather, he applied methodologies that had been developed since the 1920s by luminaries like Shewhart, Deming, Juran, Ishikawa, Ohno, Shingo, Taguchi, and Shainin. All tools used in Six Sigma programs are actually a subset of the Quality Engineering discipline and can be considered part of the ASQ Certified Quality Engineer body of knowledge. The goal of Six Sigma, then, is to use the old tools in concert, for a greater effect than a sum-of-parts approach.
The use of "Black Belts" as itinerant change agents is controversial, as it has created a cottage industry of training and certification and relieves management of accountability for change. Pre-Six Sigma implementations, exemplified by the Toyota Production System and Japan's industrial ascension, simply used the technical talent at hand—Design, Manufacturing and Quality Engineers, Toolmakers, Maintenance and Production workers—to optimize processes.

Methodology

Six Sigma methodology consists of the following five steps, known by the acronym DMAIC:

  • Define the process improvement goals that are consistent with customer demands and enterprise strategy.
  • Measure the current process and collect relevant data for future comparison.
  • Analyze the data to verify cause-and-effect relationships among factors. Determine what the relationships are, and attempt to ensure that all factors have been considered.
  • Improve or optimize the process based upon the analysis, using techniques like Design of Experiments (a small sketch follows this list).
  • Control to ensure that any variances are corrected before they result in defects. Set up pilot runs to establish process capability, transition to production, and thereafter continuously measure the process and institute control mechanisms.
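As referenced in the Improve step above, here is a minimal sketch of one basic Design of Experiments building block, a two-level full factorial design; the factor names are hypothetical:

    from itertools import product

    # Hypothetical factors for a moulding process, each at two levels (-1 / +1).
    factors = ["temperature", "pressure", "cure_time"]

    # Full factorial design: every combination of factor levels (2^3 = 8 runs).
    design = list(product((-1, +1), repeat=len(factors)))

    for run, levels in enumerate(design, start=1):
        settings = ", ".join(f"{f}={lvl:+d}" for f, lvl in zip(factors, levels))
        print(f"run {run}: {settings}")

    # Measuring a response at each run lets main effects be estimated as the
    # difference between the mean response at +1 and at -1 for each factor.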

The ±1.5 Sigma Drift

The ±1.5σ drift is a drift of the process mean that is assumed to occur in all processes. If a product is manufactured to a target of 100 mm using a process capable of delivering σ = 1 mm performance, over time a ±1.5σ drift may cause the long-term process mean to range from 98.5 to 101.5 mm. This could be of significance to customers.

The ±1.5σ shift was introduced by Mikel Harry. Harry referred to a 1975 paper by Evans on tolerancing (how the overall error in an assembly is affected by the errors in its components), "Statistical Tolerancing: The State of the Art. Part 3. Shifts and Drifts". Evans in turn refers to a 1962 paper by Bender, "Benderizing Tolerances – A Simple Practical Probability Method for Handling Tolerances for Limit Stack Ups". Bender looked at the classical situation of a stack of disks and how the overall error in the size of the stack relates to the errors in the individual disks. Based on "probability, approximations and experience", Bender suggests inflating the classical root-sum-of-squares combination of the component errors by a factor of 1.5 to allow for shifts and drifts:

    σ_stack = 1.5 × √(σ₁² + σ₂² + … + σₙ²)
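A minimal sketch of that "Benderizing" rule applied to a hypothetical stack of four disks:

    import math

    # Hypothetical standard deviations (mm) of four disks in a stack.
    disk_sigmas = [0.10, 0.05, 0.08, 0.12]

    # Classical root-sum-of-squares combination of independent errors.
    rss = math.sqrt(sum(s ** 2 for s in disk_sigmas))

    # Bender's suggestion: inflate the RSS estimate by 1.5 to allow for
    # shifts and drifts in the component means.
    benderized = 1.5 * rss

    print(f"RSS sigma = {rss:.3f} mm, Benderized sigma = {benderized:.3f} mm")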

[Figure: a run chart depicting a +1.5σ drift in a 6σ process. USL and LSL are the upper and lower specification limits; UNL and LNL are the upper and lower natural tolerance limits.]

Harry then took this a step further. Supposing that there is a process in which 5 samples are taken every half hour and plotted on a control chart, Harry considered the "instantaneous" initial 5 samples as "short term" (Harry's n = 5) and the samples throughout the day as "long term" (Harry's g = 50 points). Due to random variation in the first 5 points, the mean of the initial sample differs from the overall mean. Using the equation above, Harry derived a relationship between short-term and long-term capability, producing a capability shift or "Z shift" of 1.5. Over time, the original meanings of "short term" and "long term" have shifted, so that "long term" now refers to a drifting process mean.
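The distinction can be made concrete numerically. The sketch below simulates Harry's sampling scheme (g = 50 subgroups of n = 5), with an artificial between-subgroup wander added, and compares a short-term sigma estimated within subgroups against the long-term sigma of all data pooled. It illustrates the definitions rather than reproducing Harry's particular derivation of 1.5:

    import numpy as np

    rng = np.random.default_rng(1)
    g, n = 50, 5          # 50 subgroups of 5 samples each (Harry's g and n)
    usl = 6.0             # specification limit, in units of the true sigma

    # Simulate a process whose mean wanders slightly from subgroup to subgroup.
    subgroup_means = rng.normal(0.0, 0.5, size=g)
    data = rng.normal(subgroup_means[:, None], 1.0, size=(g, n))

    # Short-term sigma: pooled within-subgroup variation only.
    short_term = np.sqrt(np.mean(data.var(axis=1, ddof=1)))
    # Long-term sigma: all points pooled, so between-subgroup drift is included.
    long_term = data.std(ddof=1)

    z_short = (usl - data.mean()) / short_term
    z_long = (usl - data.mean()) / long_term
    print(f"Z shift = {z_short - z_long:.2f}")  # capability lost to drift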

Harry has clung tenaciously to the "1.5", but over the years its derivation has been modified. In a recent note, Harry wrote, "We employed the value of 1.5 since no other empirical information was available at the time of reporting." In other words, 1.5 has now become an empirical rather than a theoretical value. Harry softened this further by stating "... the 1.5 constant would not be needed as an approximation". Interestingly, 1.5σ is exactly one half of the commonly accepted natural tolerance limit of 3σ.

Despite this, industry is resigned to the belief that it is impossible to keep processes on target and that process means will inevitably drift by ±1.5σ. In other words, if a process has a target value of 0.0, specification limits at 6σ, and natural tolerance limits of ±3σ, over the long term the mean may drift to +1.5 (or -1.5).

In truth, any process where the mean changes by 1.5σ, or any other statistically significant amount, is not in statistical control. Such a change can often be detected by a trend on a control chart. A process that is not in control is not predictable. It may begin to produce defects, no matter where specification limits have been set.
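A minimal sketch of that detection logic: an X-bar chart on simulated subgroup means, with a +1.5σ mean shift introduced halfway through:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 5                                    # subgroup size
    limit = 3.0 / np.sqrt(n)                 # 3-sigma limits for subgroup means

    # 25 in-control subgroups, then 25 subgroups after a +1.5 sigma mean shift.
    means = np.r_[rng.normal(0.0, 1.0, (25, n)).mean(axis=1),
                  rng.normal(1.5, 1.0, (25, n)).mean(axis=1)]

    for i, m in enumerate(means, start=1):
        if abs(m) > limit:
            print(f"subgroup {i}: mean {m:+.2f} outside control limits")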
