# Six Sigma, Monte Carlo Simulation, and Kaizen for Outsourcing

### Conclusion

Flexibility in sourcing may be controversial, but it is probably here to stay. The commercial reality is that outsourced tasks, from manufacturing and IT to facilities, investment management, and even research and analysis, are moving up the value chain.

Outsourcing forces organizations to reassess core competencies continually and evaluate options in relation to noncore activities. Because they often focus on cost cutting, organizations may not take the time to work through the issues involved, including potential corporate structures and locations.

Outsourcing enables companies to find partners who are already in the business they want to be in. It allows them to fill a gap in their suite of services and take specific services to market. Instead of waiting for mergers and acquisitions to help them make these gains, companies are turning to outsourcing to reshape themselves, shedding the processes and operations that no longer distinguish them competitively and tapping into other providers' hubs of expertise for the skills and services they need. IBM, for example, sold its PC hardware business to Chinese PC-maker Lenovo in 2004 and now outsources its PC procurement services to Lenovo. At the same time, Lenovo outsources marketing and sales support to IBM.

Finally, getting back to Six Sigma, the cost savings and quality improvements that have resulted from Six Sigma corporate implementations are significant. Motorola has reported $17 billion in savings since implementation in the mid-1980s. Lockheed Martin, GE, Honeywell, and many others have experienced tremendous benefits from Six Sigma.

### Appendix 1: The Terms Standard Deviation (σ), Mean (µ), and Six Sigma

Sigma (the lower-case Greek letter σ) is used to represent standard deviation (a measure of variation) of a population. The term "six sigma process" comes from the notion that if one has six standard deviations between the mean (indicated by lower-case Greek letter µ) of a process and the nearest specification limit, there will be practically no items that fail to meet the specifications. This is based on the calculation method employed in a Process Capability Study, often used by quality professionals. The term "Six Sigma" has its roots in this tool.


**Figure 7:** Normal distributions plus upper and lower specification limits

In a Capability Study, the number of standard deviations between the process mean and the nearest specification limit is given in sigma units. As the process standard deviation goes up, or the mean of the process moves away from the center of the tolerance, the Process Capability sigma number goes down, because fewer standard deviations will then fit between the mean and the nearest specification limit (see the process capability index, Cpk).
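The relationship just described can be illustrated with a short Python sketch. The function name `cpk` and the example numbers are my own for illustration; this is not @RISK's implementation.

```python
def cpk(mean, stdev, lsl, usl):
    """Process capability index: distance from the process mean to the
    nearest specification limit, in units of 3 standard deviations."""
    return min(usl - mean, mean - lsl) / (3.0 * stdev)

# A centered process with limits 6 standard deviations away on each
# side is a "six sigma" process: Cpk = 6 / 3 = 2.0.
centered = cpk(mean=10.0, stdev=1.0, lsl=4.0, usl=16.0)

# Moving the mean off-center shrinks the distance to the nearest
# limit, so the capability number goes down.
shifted = cpk(mean=11.0, stdev=1.0, lsl=4.0, usl=16.0)
```

With the mean shifted to 11, the nearest limit is only 5 standard deviations away, so the capability drops from 2.0 to 5/3.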

Experience has shown that in the long term, processes usually do not perform as well as they do in the short term. As a result, the number of sigmas that will fit between the process mean and the nearest specification limit is likely to drop over time, compared to an initial short-term study. To account for this real-life increase in process variation over time, an empirically based 1.5 sigma shift is introduced into the calculation. According to this idea, a process that fits six sigmas between the process mean and the nearest specification limit in a short-term study will in the long term only fit 4.5 sigmas—either because the process mean is likely to move over time, or because the long-term standard deviation of the process is likely to be greater than that observed in the short term, or both.

Hence the widely accepted definition of a six sigma process is one that produces 3.4 defective parts per million opportunities (DPMO). This is based on the fact that a process that is normally distributed will have 3.4 parts per million beyond a point that is 4.5 standard deviations above or below the mean (one-sided Capability Study). So, the 3.4 DPMO of a "Six Sigma" process in fact corresponds to 4.5 sigmas, namely 6 sigmas minus the 1.5 sigma shift introduced to account for long-term variation. This is designed to prevent overestimation of real-life process capability.
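The 3.4 DPMO figure can be reproduced directly from the normal distribution. The following Python sketch (the function name `dpmo` is illustrative, not part of @RISK) applies the 1.5 sigma shift and computes the one-sided tail area:

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities: the one-sided normal tail
    area beyond (sigma_level - shift) standard deviations, scaled
    to a million opportunities."""
    z = sigma_level - shift
    tail = 0.5 * math.erfc(z / math.sqrt(2.0))  # P(X > z) for X ~ N(0, 1)
    return tail * 1_000_000

six_sigma = dpmo(6.0)    # about 3.4 DPMO, matching the definition above
three_sigma = dpmo(3.0)  # roughly 66,800 DPMO
```

Setting `shift=0` gives the short-term view: an unshifted six sigma process would show only about 0.001 DPMO, which is why the 1.5 sigma shift matters so much in practice.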

The allowance for the 1.5 (or any other) sigma shift can be inserted in the bottom text box of the @RISK Output Properties dialog shown in Figure 8.

**Figure 8:** Dialog for defining a RiskSixSigma property function within a RiskOutput function

The important points regarding the repeatability of a process are the following:

- Any process can be called a six-sigma process, depending on the accepted upper and lower limits of variability.
- The term six sigma alone means very little. It must be accompanied by an indication of the limits within which the process will deliver six-sigma repeatability.
- To improve the repeatability of a process from, say, three sigma to six sigma without changing the limits, you must halve the standard deviation of the process.


*This article was originally published on June 11, 2008*