Six Sigma, Monte Carlo Simulation, and Kaizen for Outsourcing


Figure 1: Monte Carlo simulation to analyze thousands of different possible outcomes

Six Sigma refers to having six standard deviations between the process mean and the closest specification limit or service level. That translates to fewer than 3.4 failures per one million opportunities, as explained in Appendix 1.

Now, look at how you might use Six Sigma with Monte Carlo simulation when outsourcing a critical component for precision machining.

The Model

Component length for each vendor is described by @RISK distribution functions. These cells are also defined as @RISK outputs with RiskSixSigma functions, which enable you to calculate Cpm, an index that measures a process’s ability to conform to a target length, for each vendor, as well as to generate distribution graphs of the component lengths with specification markers. The RiskSixSigma functions contain the USL, LSL, and Target value of 66.6 mm, with a tolerance of only +/- 0.1 mm.

Appendix 2 contains a discussion of RiskOutput and RiskSixSigma.

Table 1: Component specifications

In this model, the component length for Vendor 1 is described by a Pert distribution:

=RiskOutput(,,,RiskSixSigma(B30,D30,C30,0,6))+RiskPert(66.4,66.6,66.7)

the component length for Vendor 2 is described by a Normal distribution:

=RiskOutput(,,,RiskSixSigma(B30,D30,C30,0,6))+RiskNormal(66.6,0.0706)

and the component length for Vendor 3 is described by a Truncated Normal distribution:

=RiskOutput(,,,RiskSixSigma(B30,D30,C30,0,6))+RiskNormal(66.58,0.05,RiskTruncate(66.5,66.7))

Each output contains RiskSixSigma properties and references cells B30, D30, and C30 in Table 1, as shown in the upper-left corner of Figure 2.
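If you want to prototype the same three-vendor comparison outside of Excel, the following minimal Python sketch (using NumPy and SciPy rather than @RISK, a substitution of convenience, not the article's tooling) samples the same three distributions and computes Cpm for each vendor. The PERT parameters, means, standard deviations, and truncation bounds come from the formulas above; the pert() helper and everything else are illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N = 100_000                      # iterations
LSL, USL, T = 66.5, 66.7, 66.6   # spec limits and target (mm): 66.6 +/- 0.1

def pert(a, m, b, size):
    # Sample a PERT(min, most likely, max) via its Beta representation.
    alpha = 1 + 4 * (m - a) / (b - a)
    beta = 1 + 4 * (b - m) / (b - a)
    return a + (b - a) * rng.beta(alpha, beta, size)

# Vendor component lengths, mirroring the three @RISK formulas above
v1 = pert(66.4, 66.6, 66.7, N)                        # Pert distribution
v2 = rng.normal(66.6, 0.0706, N)                      # Normal distribution
lo, hi = (66.5 - 66.58) / 0.05, (66.7 - 66.58) / 0.05
v3 = stats.truncnorm.rvs(lo, hi, loc=66.58, scale=0.05,
                         size=N, random_state=rng)    # Truncated Normal

def cpm(x):
    # Cpm = (USL - LSL) / (6 * sqrt(sigma^2 + (mu - T)^2))
    return (USL - LSL) / (6 * np.sqrt(x.var() + (x.mean() - T) ** 2))

for name, x in (("Vendor 1", v1), ("Vendor 2", v2), ("Vendor 3", v3)):
    out_of_spec = np.mean((x < LSL) | (x > USL))
    print(f"{name}: mean={x.mean():.4f} mm  Cpm={cpm(x):.3f}  "
          f"out of spec={out_of_spec:.2%}")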

@RISK provides a Function Properties window that can be used to enter a RiskSixSigma property function into a RiskOutput function, as shown in Figure 8. This window has a tab titled Six Sigma that has entries for the arguments to the RiskSixSigma function. Access the RiskOutput Function Properties window by clicking on the properties button in the @RISK Add Output window.

Figure 2: Six Sigma Monte Carlo simulation in @RISK

After simulating, you see in Figure 2 that Vendor 1 has the lowest real unit cost. The simulated mean of each vendor’s unit cost is also displayed, using a RiskMean function. Finally, Cpm, defined below in the Quantitative Methods Employed in Six Sigma section, is calculated for the component length of each vendor.

Table 2: Key output cells used in the outsourcing decision

The difference between some of the values in Table 2 and the corresponding values in Figure 2 is explained by the fact that the results of different simulations, especially when a limited number of iterations is used, will vary. However, as suggested in Figure 3, you can set the number of iterations before you run a simulation. As the number of iterations in a given simulation is increased, the variation between different simulations will usually decrease, as the sketch below illustrates.

Figure 3: One of several windows for real-time monitoring of a simulation
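The effect is easy to demonstrate outside @RISK. The short Python sketch below (an illustration, not part of the article's model) repeatedly simulates the mean of Vendor 2's Normal(66.6, 0.0706) component length and shows the run-to-run spread shrinking as the iteration count grows.

import numpy as np

rng = np.random.default_rng(0)
for n in (100, 1_000, 10_000, 100_000):
    # Run 20 independent "simulations" of n iterations each
    means = [rng.normal(66.6, 0.0706, n).mean() for _ in range(20)]
    print(f"{n:>7} iterations: spread of 20 simulated means = "
          f"{max(means) - min(means):.5f}")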

The variation in product from Vendor 1 is illustrated in Figure 4.

Figure 4: Component length output for Vendor 1 described by Pert distribution functions

You now have information on cost and quality to form a more efficient ordering strategy. A next step might be to analyze how to further reduce costs; for example, by using a Kaizen event to reduce internal inspection time.

Kaizen

The word Kaizen comes from the Japanese “Kai” meaning change and “Zen” meaning good, and is commonly translated as “change for the better.” It is a Japanese philosophy that focuses on continuous improvement throughout all aspects of life. When applied to the workplace, Kaizen activities continually improve all functions of a business, from manufacturing to management and from the CEO to the assembly-line workers. By improving standardized activities and processes, Kaizen aims to eliminate waste. Kaizen was first implemented in several Japanese businesses, including Toyota, during the country’s recovery after World War II, and has since spread to businesses throughout the world.

Kaizen is based on making changes anywhere that improvements can be made. Western philosophy may be summarized as, “if it ain’t broke, don’t fix it.” The Kaizen philosophy is to “do it better, make it better, and improve it even if it isn’t broken, because if we don’t, we can’t compete with those who do.”

Quantitative Methods Employed in Six Sigma

Assuming normal distributions, look at some quantitative methods employed in your use of Six Sigma. A brief discussion of mean (µ) and standard deviation (σ) used in this section is given in Appendix 1.

First, consider Process Capability (Cp):

Cp = (USL − LSL) / (6σ), where

USL = the upper specification limit, a value below which performance of a product or process is acceptable

and

LSL = the lower specification limit, a value above which performance of a product or process is acceptable.

Figure 5a: Cp for different USL-LSL to 6σ ratios

The capability index Cp measures the ratio of the width of a specification to the width of a process. It tells you your process’s ability to meet specification, provided you get the process output to average right in the center of the specification. If your natural tolerance is exactly equal to your specification width, Cp = 1 and the process is said to be “potentially minimally capable” of meeting specifications. “Potentially” because it might be terribly off target and making 100-percent scrap, but with variation equal to the specification width. More commonly, companies have a goal of getting all their capability indexes up to at least 1.33.

Many processes have only one limit: either an upper or a lower specification limit. These are sometimes called “one-sided specs.” A Cp cannot be calculated for one-sided specs.

Figure 5b: Normal (Gaussian) curve showing mean and standard deviation

The normal distribution curve (the bell curve in Figures 5a and 5b) is also called Gaussian. A consistent inaccuracy will displace the curve to the left or right of the nominal value, while a perfectly accurate machine will result in a curve centered on the nominal. Repeatability, on the other hand, is related to the gradient of the curve either side of the peak value; a steep, narrow curve implies high repeatability. If the machine were found to be repeatable but inaccurate, this would result in a narrow curve displaced to the left or right of the nominal. As a priority, machine users need to be sure of adequate repeatability. If this can be established, the cause of a consistent inaccuracy can be identified and remedied.

Figure 5c: Small standard deviation does not guarantee conforming to specifications

Consider the possibilities of accuracy versus repeatability. Suppose you measure the offset error 10 times and plot the 10 points on a target chart (refer to Figure 5c). Case 1 in this diagram shows a highly repeatable machine because all measurements are tightly clustered and on target.

The average spread of the measurements around their mean, known as the standard deviation, is small. However, a small standard deviation does not guarantee an accurate machine. Case 2 shows a very repeatable machine that is not very accurate.

As the bulk of the measurements are clustered more closely around the target, the standard deviation becomes smaller and the bell curve will become narrower.

Much more useful is the Process Capability Index (Cpk), which measures your ability to conform to specification. If the process happens to be one that follows the normal distribution, then

Cpk = min(CpU, CpL), where

CpU = (USL − µ) / (3σ)

and

CpL = (µ − LSL) / (3σ)

You’ll go out of business pretty quickly if you don’t at least conform to specification most of the time.

A simple modification of the Cp formula allows you to penalize that index for being off target by the square of the deviation from Target, where Target is customer defined.

Cpm = (USL − LSL) / (6√(σ² + (µ − T)²)), where T is the target value.

The equation is based on the reduction of variation from the target value as the guiding principle of quality improvement, an approach championed by Taguchi.

Cpm is used when the target value is not the center of the specification spread. From this equation, note that variation from target is expressed as two components; namely, process variability (σ) and process centering (µ – T).

Whatever losses you incur due to variation will be at a minimum when your output is on target, and not much greater when you are fairly close to it.

Cpm is an index that measures a process’s ability to conform to target. If the process is on target, and if the target is in the middle of the specification limits, Cp = Cpk = Cpm. But if this is not the case, Cp ≥ Cpk ≥ Cpm. If the process is off target, Cpm < 1.
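The relationships among the three indexes are easy to check numerically. This Python sketch (illustrative; the data and function names are mine, not the article’s) computes Cp, Cpk, and Cpm from the formulas above for a process running slightly off target, and reproduces the Cp ≥ Cpk ≥ Cpm ordering:

import numpy as np

def capability(x, lsl, usl, target):
    mu, sigma = x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6 * sigma)            # spec width vs. process width
    cpk = min((usl - mu) / (3 * sigma),       # min(CpU, CpL)
              (mu - lsl) / (3 * sigma))
    cpm = (usl - lsl) / (6 * np.sqrt(sigma**2 + (mu - target)**2))
    return cp, cpk, cpm

rng = np.random.default_rng(1)
x = rng.normal(66.62, 0.02, 10_000)           # process running above target
cp, cpk, cpm = capability(x, 66.5, 66.7, 66.6)
print(f"Cp={cp:.2f}  Cpk={cpk:.2f}  Cpm={cpm:.2f}")  # Cp > Cpk > Cpm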

Six Sigma Statistics Functions

A set of @RISK statistics functions returns desired Six Sigma statistics on a simulation output. For example, the function RiskCPM(A10) returns the Cpm value for the simulation output in Cell A10. These functions are updated in real time as a simulation runs. They are similar to the standard @RISK statistics functions (such as RiskMean) in that they calculate statistics on simulation results; however, they calculate statistics commonly required in Six Sigma models. These functions, a few of which are given in Figure 6, can be used anywhere in spreadsheet cells and formulas in your model.

Figure 6: Six Sigma statistics functions (partial list) that are available for simulations such as the one depicted in Figure 2

Conclusion

Flexibility in sourcing may be controversial, but it is probably here to stay. The commercial reality is that outsourced tasks, from manufacturing to IT and from facilities to investment management, and even research and analysis, are moving up the value chain.

Outsourcing forces organizations to reassess core competencies continually and evaluate options in relation to noncore activities. Because they often focus on cost cutting, organizations may not take the time to work through the issues involved, including potential corporate structures and locations.

Outsourcing enables companies to find partners who are already in the business they want to be in. It allows them to fill a gap in their suite of services and take specific services to market. Instead of waiting for mergers and acquisitions to help them make these gains, companies are turning to outsourcing to reshape themselves, shedding the processes and operations that no longer distinguish them competitively and tapping into other providers’ hubs of expertise for the skills and services they need. IBM, for example, sold its PC hardware business to Chinese PC-maker Lenovo in 2004 and now outsources its PC procurement services to Lenovo. At the same time, Lenovo outsources marketing and sales support to IBM.

Finally, getting back to Six Sigma, the cost savings and quality improvements that have resulted from corporate Six Sigma implementations are significant. Motorola has reported $17 billion in savings since implementing Six Sigma in the mid-1980s. Lockheed Martin, GE, Honeywell, and many others have experienced tremendous benefits from Six Sigma.

Appendix 1: The Terms Standard Deviation (σ), Mean (µ), and Six Sigma

Figure 7: Normal distributions plus upper and lower specification limits

In a Capability Study, the number of standard deviations between the process mean and the nearest specification limit is given in sigma units. As the process standard deviation goes up, or the mean of the process moves away from the center of the tolerance, fewer standard deviations will fit between the mean and the nearest specification limit, so the Process Capability sigma number goes down.

Experience has shown that in the long term, processes usually do not perform as well as they do in the short term. As a result, the number of sigmas that will fit between the process mean and the nearest specification limit is likely to drop over time, compared to an initial short-term study. To account for this real-life increase in process variation over time, an empirically based 1.5 sigma shift is introduced into the calculation. According to this idea, a process that fits six sigmas between the process mean and the nearest specification limit in a short-term study will in the long term fit only 4.5 sigmas, either because the process mean is likely to move over time, or because the long-term standard deviation of the process is likely to be greater than that observed in the short term, or both.

Hence the widely accepted definition of a six sigma process is one that produces 3.4 defective parts per million opportunities (DPMO). This is based on the fact that a process that is normally distributed will have 3.4 parts per million beyond a point that is 4.5 standard deviations above or below the mean (one-sided Capability Study). So, the 3.4 DPMO of a “Six Sigma” process in fact corresponds to 4.5 sigmas, namely 6 sigmas minus the 1.5 sigma shift introduced to account for long-term variation. This is designed to prevent overestimation of real-life process capability.
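You can verify the arithmetic with a one-sided normal tail calculation. This small Python check (an aside, not from the article) uses scipy.stats.norm:

from scipy.stats import norm

for k in (3.0, 4.5, 6.0):
    # One-sided tail area beyond k standard deviations, in parts per million
    print(f"{k} sigma (one-sided): {norm.sf(k) * 1e6:.3f} DPMO")
# 4.5 sigma yields ~3.4 DPMO, the quoted "Six Sigma" figure after the
# 1.5 sigma shift; a full 6 sigma with no shift would be ~0.001 DPMO.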

The allowance for the 1.5 (or any other) sigma shift can be inserted in the bottom text box of the @RISK Output Properties dialog shown in Figure 8.

Figure 8: Dialog for defining a RiskSixSigma property function within a RiskOutput function

The important points regarding the repeatability of a process are the following:

  • Any process can be called a six-sigma process, depending on the accepted upper and lower limits of variability.
  • The term six sigma alone means very little. It must be accompanied by an indication of the limits within which the process will deliver six-sigma repeatability.
  • To improve the repeatability of a process from, say, three sigma to six sigma without changing the limits, you must halve the standard deviation of the process.

Appendix 2: RiskSixSigma Property Function

In an @RISK simulation, the RiskOutput function identifies a cell in a spreadsheet as a simulation output. A distribution of possible outcomes is generated for every output cell selected. These probability distributions are created by collecting the values calculated for a cell for each iteration of a simulation.

When Six Sigma statistics are to be calculated for an output, the RiskSixSigma property function is entered as an argument to the RiskOutput function. This property function specifies the lower specification limit, upper specification limit, target value, long term shift, and the number of standard deviations for the six sigma calculations for an output.

RiskOutput(“Part Diameter”,,RiskSixSigma(.88,.95,.915,1.5,6))

specifies an LSL of 0.88, a USL of 0.95, a target value of 0.915, a long term shift of 1.5, and a number of standard deviations of 6 for the output Part Diameter. You also can use cell referencing in the RiskSixSigma property function.

When @RISK detects a RiskSixSigma property function in an output, it automatically displays the available Six Sigma statistics on the simulation results for the output in the Results Summary window and adds markers for the entered LSL, USL, and Target values to graphs of simulation results for the output (refer to Figures 2 and 4).

Figure 9: RiskOutput description, examples and guidelines

Appendix 3: The @RISK Developer’s Kit

The @RISK Developer’s Kit (RDK) is a risk analysis programming toolkit. The RDK allows you to build Monte Carlo simulation models using Windows and .NET programming languages, such as C, C#, C++, Visual Basic, or Visual Basic .NET. Unfortunately, as of the writing of this article, the RDK doesn’t support Six Sigma. However, I’ve been assured that an upcoming release of this development tool will.

Unlike the Excel version of @RISK, the RDK does not require a spreadsheet to run simulations. This means user models can be larger and execute faster. All simulation results can be accessed programmatically, directly in the developer’s application. Two powerful charting engines—that of @RISK and Microsoft Excel with its extensive customization capabilities—can be used to generate graphs of simulation results.

RDK applications can be run in a desktop, network server, or web environment. The RDK fully supports multithreading to allow the development of scalable web applications. Models built using the RDK can run simulations and generate results entirely in the user’s program or the @RISK Interface can be used to display reports and graphs on simulation results.

Why Use the RDK?

For many users, the spreadsheet is the preferred risk modeling environment. However, many times an application written in a standard programming language needs Monte Carlo simulation or distribution fitting capabilities. The application will have its own user interface, variables, and model calculations. In addition, a developer may want to allow users to access the application over a corporate network or over the Internet.

The RDK allows custom applications such as these to run simulations and generate result graphs and statistics. Applications written with the RDK often will run faster and can contain larger numbers of distributions and outputs when compared with models written in spreadsheet versions of @RISK. This is because RDK applications are compiled and do not contain additional routines executed during spreadsheet recalculation.

On-line Real-time Notification

With the RDK, you also could perform trend analysis and have one or more events, such as a point outside sigma limits, trigger an email or other alert to help isolate the causes of poor performance. Or, you could continuously calculate Cpm or another index on historical data to initiate an alert.
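The RDK’s own interfaces aren’t shown in this article, so the following Python sketch is only a generic illustration of the alerting idea: flag any observation outside the µ ± 3σ limits and send an email. The server name, addresses, and limits are all placeholders.

import smtplib
from email.message import EmailMessage

MU, SIGMA = 66.6, 0.0706           # assumed process mean and std. deviation
SMTP_HOST = "smtp.example.com"     # placeholder mail server

def check_and_alert(value):
    # Trigger an alert for any point outside the 3-sigma control limits.
    if abs(value - MU) > 3 * SIGMA:
        msg = EmailMessage()
        msg["Subject"] = f"Out-of-control point: {value:.4f}"
        msg["From"] = "monitor@example.com"
        msg["To"] = "quality-team@example.com"
        msg.set_content(
            f"Observed {value:.4f}, outside {MU} +/- {3 * SIGMA:.4f}.")
        with smtplib.SMTP(SMTP_HOST) as server:
            server.send_message(msg)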

Distribute Custom Solutions Over the Web

The RDK allows you to streamline the distribution of your risk analysis solutions through enterprise-wide web deployment. Server-based risk analysis models—such as corporate financial models, engineering applications, and financial planning tools—can be accessed over the Internet from any browser, allowing users to enter model parameters and inputs, run simulations, and view results and graphs. Model structure, logic, and RDK simulation libraries are stored on the server, ensuring consistency for all end-users and removing local installation and support issues.

Appendix 4: Time-Related Data

Although the normal distribution is best known, time-related data frequently takes the form of a Weibull distribution.

Figure 10: A Weibull distribution with parameters suitable for describing time-related data

Here’s an example of a process where the data is time related: In a given organization, the IT department and the rest of the business might agree to set the times to fix laptops in their service level agreement (SLA). The IT department will want to measure whether the target times are met, to report to the business about its performance.

  • Process definition: Under the SLA, customers will be helped in less than one hour with all laptop problems they experience.
  • Measurements: 30 laptop problems will be independently timed from the moment a customer reports the problem to the moment the customer agrees that it has been fixed.
  • Upper specification limit (USL): 60 minutes
  • Lower specification limit (LSL): 0 minutes (a boundary)

Does the process have to be Six Sigma? The answer is usually no, it does not. It depends entirely on the service level agreement.
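To see how such an SLA process might be judged, here is a brief Python sketch. The Weibull shape and scale values are hypothetical, chosen only to illustrate the calculation; they do not come from the article.

import numpy as np

rng = np.random.default_rng(7)
shape, scale = 1.5, 25.0                     # assumed Weibull parameters
times = scale * rng.weibull(shape, 100_000)  # simulated fix times (minutes)

breaches = np.mean(times > 60)               # fraction beyond the 60-min USL
print(f"mean fix time: {times.mean():.1f} min")
print(f"SLA breaches (>60 min): {breaches:.2%} "
      f"({breaches * 1e6:,.0f} DPMO)")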


About the Author

Marcia Gulesian is an IT strategist, hands-on practitioner, and advocate for business-driven architectures. She has served as software developer, project manager, CTO, and CIO. Marcia is the author of well over 100 feature articles on IT, its economics, and its management.
