
Sunday, April 29, 2007

Six Sigma Answer to Material Shortages

One of Six Sigma's strengths is its facility for revealing causes and solutions that run contrary to our initial assumptions. When a persistent condition resists all attempts at improvement, or when an obvious fix to a newly discovered problem turns out to be lacking, a methodical approach like Six Sigma's can uncover even the most unlikely of causes and deliver results.

In the following case study the continuous improvement team was in for just such a surprise. Conventional wisdom was wrong, and the path the team started down hid unexpected complexities.

Definition

Overall performance on the XYZ Pump Garage program was poor. Future customer orders would not have been forthcoming without substantial improvements in quality and delivery.

  • On-time delivery was 80% vs. >99% goal.
  • Direct labor overtime was running 15% vs. a goal of zero.
  • Field reported defects were found in 50% of system shipments vs. 0.5% goal.
  • Project margin was approximately 22% vs. a 33% goal.

A process improvement team was formed with members from Customer Service, Manufacturing, Production Control, Engineering, Operations, and Purchasing.

Measurement

The initial majority team consensus was that the program's poor on-time delivery was the result of material shortages due to understaffing in Purchasing. Hiring more buyers seemed the probable solution. The team also suspected that the field defects were principally a result of poorly trained assembly staff.

The team began daily monitoring of data for number of daily kit shortages, overdue suppliers, and daily purchasing workload based on Material Requirements Planning (MRP) demands. Field personnel were interviewed for detailed descriptions of field defect rework.

Briefly summarized, the data showed:

  • Typical labor overtime occurred near the end of the manufacturing process.
  • 100% of all kits were issued with shortages.
  • The key suppliers were >3 days late 50% of the time.
  • The MRP system was posting material demands inside the material lead times!
  • The requested delivery dates for material in the MRP system did not match well with project ship dates!
  • A majority of customer-reported defects appeared to be the result of incomplete or incorrect manufacturing documentation.
Analysis

From these data the team drew two conclusions:

  1. Overtime was being worked to make up time lost to late material deliveries.
  2. Understaffing in Purchasing was not the problem! An army of buyers would not result in on-time material when the MRP 'buy' signal came too late or not at all. The team's true analysis problem was to understand why the MRP system was giving wrong signals. The team decided to focus on one specific sales order line item that exemplified the problem set for a typical system.

What they found:

  1. The sales order was coded incorrectly in a fashion that would generate several MRP problems.
  2. Item master attributes were not properly populated for many of the material items that had MRP problems.
  3. Customer engineering change orders (ECO) had been accepted without renegotiating product delivery dates with the customer to allow time for ECO implementation, including new material delivery.
  4. A check of other customer order line items showed similar problems.

Two further conclusions followed:

  1. Customer ECO information was not being properly transmitted and propagated throughout the organization, resulting in out-of-date manufacturing instructions and field defects.
  2. The problems would not have occurred if program participants had properly followed the procedures and work instructions documented in the Quality Management System.

Improvement

The improvements we implemented can be summed up in one word: training. The company had grown significantly during the past year and while all employees had received training, it had sometimes been rushed or had not been completely absorbed by the new personnel. Mandatory training was scheduled immediately for all Customer Service, Engineering, and Operations personnel on the documented procedures for sales order entry, customer engineering change orders, creating item masters, and creating engineering masters. Retraining took 7 working days with approximately 30 personnel participating.

ERP data for all active purchase orders was audited for the most common errors the team had recently discovered. This process required 5 working days.

New delivery dates were negotiated with the customer's buyer based on the new solid data foundation. This was difficult, but fortunately the customer's buyer is a mature personality with a long-term partnership attitude.

Results:

  • Within 4 weeks material shortages had improved considerably.
  • On-time delivery reached 100%.
  • Overtime labor became negligible.
  • After 8 weeks, no field defects had been found in the 6 systems shipped during the prior 5 weeks.
  • Margin has improved to 28%, but this needs further investigation.
  • Teamwork between organizations improved as a result of greater appreciation for the needs and complexities of their respective jobs.

Control

On-time delivery, customer field defects, and margin remain the bottom-line metrics for process control on the XYZ program. Most importantly, as a result of the XYZ team findings, a new continuous improvement team was formed: the Enterprise Resource Planning Data Integrity Team (EDIT). EDIT is tasked with developing a set of strategies and process control tools to ensure there are no repeats of the XYZ difficulties on other programs.

Implications

This single Six Sigma project thus had far-reaching implications for the XYZ Pump Garage program. First, in fulfilling the immediate purpose of improving our performance, we achieved customer retention for the near future. On a broader level, we also seized an opportunity to enhance our overall long-term approach to improvement. The value of reaching beyond obvious solutions having been so dramatically reinforced, we created a new continuous improvement team charged with making the pursuit of quality a more proactive endeavor.

Friday, April 27, 2007

The Cause and Effect Diagram (a.k.a. Fishbone)

By Kerri Simon

When utilizing a team approach to problem solving, there are often many opinions as to the problem's root cause. One way to capture these different ideas and stimulate the team's brainstorming on root causes is the cause and effect diagram, commonly called a fishbone. The fishbone will help to visually display the many potential causes for a specific problem or effect. It is particularly useful in a group setting and for situations in which little quantitative data is available for analysis.

The fishbone has an ancillary benefit as well. Because people by nature often like to get right to determining what to do about a problem, this can help bring out a more thorough exploration of the issues behind the problem - which will lead to a more robust solution.

To construct a fishbone, start with stating the problem in the form of a question, such as 'Why is the help desk's abandon rate so high?' Framing it as a 'why' question will help in brainstorming, as each root cause idea should answer the question. The team should agree on the statement of the problem and then place this question in a box at the 'head' of the fishbone.

The rest of the fishbone then consists of one line drawn across the page, attached to the problem statement, and several lines, or 'bones,' coming out vertically from the main line. These branches are labeled with different categories. The categories you use are up to you to decide. There are a few standard choices:

Table 1: Fishbone Suggested Categories

Service Industries (The 4 Ps):
  • Policies
  • Procedures
  • People
  • Plant/Technology

Manufacturing Industries (The 6 Ms):
  • Machines
  • Methods
  • Materials
  • Measurements
  • Mother Nature (Environment)
  • Manpower (People)

Process Steps (for example):
  • Determine Customers
  • Advertise Product
  • Incent Purchase
  • Sell Product
  • Ship Product
  • Provide Upgrade

You should feel free to modify the categories for your project and subject matter.

Once you have the branches labeled, begin brainstorming possible causes and attach them to the appropriate branches. For each cause identified, continue to ask 'why does that happen?' and attach that information as another bone of the category branch. This will help get you to the true drivers of a problem.

Once you have the fishbone completed, you are well on your way to understanding the root causes of your problem. It would be advisable to have your team prioritize in some manner the key causes identified on the fishbone. If necessary, you may also want to validate these prioritized few causes with a larger audience.
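
The branching structure described above can also be captured as plain data, which is handy when a whiteboard drawing needs to be recorded. A minimal sketch; the problem statement, categories, causes, and "why?" chains below are hypothetical examples, not a real analysis:

```python
# A fishbone as nested data: each category branch holds causes, and
# each cause can hold deeper "why does that happen?" answers.
fishbone = {
    "problem": "Why is the help desk's abandon rate so high?",
    "branches": {
        "People": {"Understaffed at peak hours": ["Schedules set monthly"]},
        "Procedures": {"No callback option": []},
        "Plant/Technology": {"Old phone system": ["Budget frozen"]},
        "Policies": {},
    },
}

def fishbone_lines(fb):
    """Render the fishbone as an indented tree of causes and sub-causes."""
    lines = [fb["problem"]]
    for category, causes in fb["branches"].items():
        lines.append("  " + category)
        for cause, whys in causes.items():
            lines.append("    - " + cause)
            for why in whys:
                lines.append("      - why? " + why)
    return lines

print("\n".join(fishbone_lines(fishbone)))
```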

Six Sigma Case Study: Defect Reduction in the Service Sector

by Chris Bott

This case study discusses the effective use of Six Sigma tools to improve our plastic issuance processes. It will take you through a project American Express completed, "Eliminate Non-received Renewal Credit Cards." This analysis demonstrates how we applied Six Sigma techniques to reduce the defect rate with ongoing dollar savings.

Define and Measure the Problem
(Data has been masked to protect confidentiality.)

  • On average (in 1999), American Express received 1,000 returned renewal cards each month.
  • 65% (650) were returned because card members had changed their addresses without telling us.
  • The U.S. Post Office calls these forwardable addresses. Please note: Amex does not currently notify a card member when we receive a returned plastic card.

Analyze the Data

We applied various Six Sigma tools to identify Vital Xs, or the root causes of the defect. The use of Chi Square indicated the following:

  • By type of card/plastic: We isolated significant differences in the causes of returned plastics among product types. Optima, our revolving card product, had the highest incidence of defects but was not significantly different in the percentage of defects from the other card types.
  • Issuance reason: Renewals had far and away the highest defect rate in the three areas in which we issue plastic—replacement, renewal, and new accounts.
  • Validated reason for return: Because we suffered scope creep early in the project, it was important to confirm what our initial data was telling us. After testing the five reasons for returns, returns with "forwardable" addresses were overwhelmingly the largest percentage and quantity of returns.
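
A Chi Square comparison of this kind can be sketched by hand. The contingency counts below are invented for illustration, not the masked Amex figures:

```python
def chi_square(observed):
    """Chi-square statistic for a 2-D contingency table (list of rows).
    Expected counts come from the usual row-total * col-total / total rule."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical returns: rows = card type, columns = (forwardable, other)
observed = [[30, 10],   # e.g. one product type
            [20, 40]]   # e.g. another product type
print(round(chi_square(observed), 2))  # → 16.67
```

A large statistic (relative to the chi-square critical value for the table's degrees of freedom) suggests the return reason is not independent of card type.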

Improve the Process

An experimental pilot was run on all renewal files issued. This "bumping" against the "National Change of Address" service was implemented on all renewal cards in mid August. Due to the strict file matching criteria, this solution will impact 33% of the remaining population (or 333 cards monthly).

As a result of a successful pilot, we were able to reduce the defect rate to 44.5% of its baseline, from 13,552 to 6,036 defects per million, reflecting annual savings of $1,228. Figure 1 outlines the combined test results.
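
The sigma levels in Figure 1 follow from the DPMO figures under the conventional 1.5-sigma shift. A minimal sketch using only the Python standard library:

```python
from statistics import NormalDist

def sigma_level(dpmo, shift=1.5):
    """Short-term sigma level from long-term DPMO, applying the
    conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + shift

print(round(sigma_level(13552), 2))  # baseline        → 3.71
print(round(sigma_level(6036), 2))   # after the pilot → 4.01
```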

Fig. 1 Combined Test Results

  Non-Received Renewal Credit Cards   Baseline    Test Results
  ---------------------------------   --------    ------------
  Defect rate                         1.35%       0.6%
  DPMO                                13,552      6,036
  COPQ                                $3,360      --
  Total annual savings                --          $1,228
  Sigma level                         3.71        4.01

Control the Process

To ensure that we perform within the acceptable limits on an ongoing basis, it is important to monitor the new process. To achieve "control" status, we will be using the p chart, a tool that tracks proportions of returns over time.

In addition, our vendor has constructed reporting, which gives us the ability to monitor the defect rate on a monthly basis. The report will tell us if any credit cards that were "bumped" against the "National Change of Address" database were returned to our warehouse.

Impact on Customer Satisfaction

Using the "National Change of Address" will enable over 1,200 card members to get their credit cards. Prior to this implementation, these card members would have never received their cards automatically. Revenue and customer satisfaction will undoubtedly increase.

Regression and Correlation Analysis

As you develop Cause & Effect diagrams based on data, you may wish to examine the degree of correlation between variables. A statistical measurement of correlation can be calculated using the least squares method to quantify the strength of the relationship between two variables. The output of that calculation is the Correlation Coefficient, or (r), which ranges between -1 and 1. A value of 1 indicates perfect positive correlation - as one variable increases, the second increases in a linear fashion. Likewise, a value of -1 indicates perfect negative correlation - as one variable increases, the second decreases. A value of zero indicates zero correlation.
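
The Correlation Coefficient described above can be computed by hand from the least-squares sums. A minimal sketch; the gap and effort numbers below are invented to show a negative trend, not the actual study data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Correlation coefficient (r) between two variables,
    from the least-squares cross-product and variance sums."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

# Invented seal-gap (mm) vs. door closing-effort data:
gap    = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
effort = [9.0, 8.2, 7.5, 7.1, 6.0, 5.5]
print(round(pearson_r(gap, effort), 2))  # negative r: effort falls as gap grows
```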

Before calculating the Correlation Coefficient, the first step is to construct a scatter diagram. Most spreadsheets, including Excel, can handle this task. Looking at the scatter diagram will give you a broad understanding of the correlation. Following is a scatter plot chart example based on an automobile manufacturer. In this case, the process improvement team is analyzing door closing efforts to understand what the causes could be. The Y-axis represents the width of the gap between the sealing flange of a car door and the sealing flange on the body - a measure of how tight the door is set to the body. The fishbone diagram indicated that variability in the seal gap could be a cause of variability in door closing efforts.

In this case, you can see a pattern in the data indicating a negative correlation (negative slope) between the two variables. In fact, the Correlation Coefficient is -0.78, indicating a strong negative relationship.

Simple Regression Analysis

While Correlation Analysis assumes no causal relationship between variables, Regression Analysis assumes that one variable is dependent upon (A) another single independent variable (Simple Regression), or (B) multiple independent variables (Multiple Regression). Regression plots a line of best fit to the data using the least-squares method. You can see an example below of linear regression using the same car door scatter plot:

You can see that the data is clustered closely around the line, and that the line has a downward slope. The strong negative correlation is expressed by two related statistics: the r value, as stated before, is -0.78; the r2 value is therefore 0.61. R2, called the Coefficient of Determination, expresses how much of the variability in the dependent variable is explained by variability in the independent variable. You may find that a non-linear equation such as an exponential or power function provides a better fit, and a higher r2, than a linear equation.
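
The line of best fit and its r2 can be sketched in a few lines of code. The toy data here is invented and perfectly linear, so the fitted slope, intercept, and r2 come out exact:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

def r_squared(xs, ys):
    """Coefficient of Determination: share of y's variability
    explained by the fitted line."""
    slope, intercept = fit_line(xs, ys)
    my = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# A perfectly linear toy set recovers slope 2, intercept 1, r2 = 1:
print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))   # → (2.0, 1.0)
print(r_squared([0, 1, 2, 3], [1, 3, 5, 7]))  # → 1.0
```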

Multiple Regression Analysis
Multiple Regression Analysis uses a similar methodology as Simple Regression, but includes more than one independent variable. Econometric models are a good example, where the dependent variable of GNP may be analyzed in terms of multiple independent variables, such as interest rates, productivity growth, government spending, savings rates, consumer confidence, etc.
Many times historical data is used in multiple regression in an attempt to identify the most significant inputs to a process. The benefit of this type of analysis is that it can be done very quickly and relatively simply. However, there are several potential pitfalls:


The data may be inconsistent due to different measurement systems, calibration drift, different operators, or recording errors.

The range of the variables may be very limited, and can give a false indication of low correlation. For example, a process may have temperature controls because temperature has been found in the past to have an impact on the output. Using historical temperature data may therefore indicate low significance because the range of temperature is already controlled in tight tolerance.

There may be a time lag that influences the relationship - for example, temperature may be much more critical at an early point in the process than at a later point, or vice-versa. There also may be inventory effects that must be taken into account to make sure that all measurements are taken at a consistent point in the process.
Once again, it is critical to remember that correlation is not causality. As stated by Box, Hunter and Hunter: "Broadly speaking, to find out what happens when you change something, it is necessary to change it. To safely infer causality the experimenter cannot rely on natural happenings to choose the design for him; he must choose the design for himself and, in particular, must introduce randomization to break the links with possible lurking variables".1
Returning to our example of door closing efforts, you will recall that the door seal gap had an r2 of 0.61. Using multiple regression, and adding the additional variable "door weatherstrip durometer" (softness), the r2 rises to 0.66. So the durometer of the door weatherstrip added some explaining power, but minimal. Analyzed individually, durometer had much lower correlation with door closing efforts - only 0.41. This analysis was based on historical data, so as previously noted, the regression analysis only tells us what did have an impact on door efforts, not what could have an impact. If the range of durometer measurements was greater, we might have seen a stronger relationship with door closing efforts, and more variability in the output.
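
Multiple regression of the kind described above can be sketched via the normal equations (X'X)b = X'y, solved with plain Gaussian elimination. The two predictors and responses below are invented to lie on an exact plane (not the door-gap data), so the recovered coefficients are checkable:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def multiple_regression(x1, x2, y):
    """Coefficients (b0, b1, b2) for y = b0 + b1*x1 + b2*x2,
    via the normal equations."""
    X = [[1.0, a, b] for a, b in zip(x1, x2)]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
    return solve(XtX, Xty)

x1 = [0, 1, 2, 0, 1, 2]
x2 = [0, 0, 1, 1, 2, 2]
y  = [1 + 2*a + 3*b for a, b in zip(x1, x2)]  # exact plane, for checking
print([round(c, 6) for c in multiple_regression(x1, x2, y)])  # → [1.0, 2.0, 3.0]
```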

Thursday, April 26, 2007

(Illustration) To find out the potential X's

To find out the potential X's

Instructor : OK, we know where we're starting from and where we're going, so now the real detective work starts. What are the variables that are preventing us from reaching our goal? Remember, we're looking for the Vital X's that can influence our Y, what the customer cares about.

Instructor : So we now know where our process currently is and what our eventual goal will be. Let's look at the learning objectives for Step 6. By the time you finish this section, you will be able to:
> Explain the purpose of a hypothesis test
> Define the terms random sample, null hypothesis, alternative hypothesis, Type I error, Type II error, p-value, and confidence interval
> Identify and explain the tools (for each data type) that can be used to analyze the influence of the X's on a Y (1- and 2-sample T tests, homogeneity of variance, 1-way ANOVA (analysis of variance), scatter plots, and simple regression)
> Evaluate the results of a T test
> Identify potential sources of variation in a process, which is the primary reason for this step in the overall D M A I C process

Instructor : Hypothesis testing provides a statistical comparison of two samples and of one sample to the population.
Instructor : It provides an objective basis for concluding whether there is a difference between them.
Instructor : There are a number of reasons to do hypothesis testing.

Instructor : There are a number of reasons why hypothesis testing is a fundamental concept in executing a DMAIC project.
Instructor : First of all, we need to determine whether there is a significant difference between processes. In those cases, the formal hypothesis test will indicate objectively whether or not there is a difference, thus leading all parties to come to the same conclusions and make the same decisions in a collaborative manner.
Instructor : Once we have identified these factors and made adjustments for improvement, we need a way to validate that improvement.
Instructor : Finally, we need to identify factors which impact the Mean or standard deviation of the process.

Instructor : A hypothesis is a statement of assumptions. You can make a hypothesis about virtually anything and then statistically test it. For every hypothesis there is an alternate hypothesis. Hypothesis testing is done by applying a number of statistical tools to the data to compare alternative explanations, which we call the Null Hypothesis and the Alternative Hypothesis. In order to proceed, we must be very clear concerning definitions.
Instructor : A Null Hypothesis, often referred to as H-sub zero, is a statement of status quo. It presumes no change. If you are comparing two product lines, processing machines, or other industrial processes, a null hypothesis claims that there will be no difference if observed over time.
Instructor : Let's take as a Null Hypothesis, the following statement:
Instructor : GE's nut removal time is equal to the competitor's nut removal time. Remember, this is the hypothesis which we are looking to reject.
Instructor : An Alternative Hypothesis, often referred to as H-sub-A, is a statement of difference. It is often a statement of something we want to prove. An example would be that if we observe the production of a given part with two different kinds of machine tool, there will be a significant difference over time.
Instructor : GE's nut removal time is not equal to the competitor's nut removal time.
Instructor : We now have our hypotheses defined, but before we actually start the testing, we should look at the risks associated with this process.

Instructor : There are a number of other tools and terms which address various aspects of hypothesis testing depending on the circumstances involved.
Instructor : The risks associated with hypothesis testing are divided into two categories of potential errors.
Instructor : A Type I error is rejecting the Null Hypothesis when the Null Hypothesis is true.
Instructor : A Type II error is failing to reject the Null Hypothesis when in fact the Null Hypothesis is false.
Instructor : Alpha expresses the probability of committing a type one error
Instructor : While beta expresses the probability of committing a type two error.
Instructor : In the United States, criminal courts operate under an assumption of innocence. So a person accused of a crime is considered innocent until they have been proven guilty to the satisfaction of the appropriate judge or jury.
Instructor : So we can use the statement "The defendant is innocent" as a null hypothesis.
Instructor : It follows that the alternative must be "The defendant is guilty."
Instructor : Convicting an innocent person would be a Type I, or alpha, error.
Instructor : While freeing someone who is truly guilty would be a Type II, or beta, error.
Instructor : Another way to look at the two types of error is in this format. In this case we are looking at whether or not to accept and stock finished goods
Instructor : The Null Hypothesis states the usual status quo; the products meet specifications and are accepted and stocked
Instructor : The Alternative Hypothesis states that the finished goods fail to meet specifications and are rejected.
Instructor : If you reject goods when in fact, the goods meet specifications, you are committing a Type I error.
Instructor : If you stock goods, when in fact they do not meet specifications, you are committing a Type II error.
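
The alpha risk just described can be made concrete with a small simulation: when the Null Hypothesis really is true, a test at the 5% level will still reject it about 5% of the time. This sketch simulates z-tests (normal population with known standard deviation, an assumption made for simplicity) on samples drawn under the null:

```python
import random
random.seed(42)

def z_test_rejects(n=30, alpha_z=1.96):
    """Draw one sample under the null (mean 0, sd 1, sd known) and
    report whether |z| exceeds the two-sided 5% critical value."""
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(sample) / n) / (1 / n ** 0.5)
    return abs(z) > alpha_z

trials = 2000
rate = sum(z_test_rejects() for _ in range(trials)) / trials
print(rate)  # close to 0.05: the Type I error rate we accepted
```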

Instructor : There are five basic steps to hypothesis testing
>The first step is to confirm that the samples you test are representative of the process, and that they have normal distribution.
>Then you must lay out clear statements of your Null and Alternative Hypotheses.
>Next you must determine which test to use. Some choices include the T Test and the Analysis of Variance (ANOVA) for differences in means, and the Homogeneity of Variance test for differences in standard deviations.
>Fourth, you run the test.
>Finally, you interpret the results.
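
The five steps above can be sketched end to end with a 2-sample comparison of means. For simplicity this uses the large-sample normal approximation to the T test, and the removal-time numbers are invented, not GE or competitor data:

```python
from statistics import NormalDist, mean, variance

def two_sample_p(a, b):
    """Two-sided p-value for H0: the two population means are equal
    (large-sample normal approximation to the 2-sample T test)."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

ours = [4.1, 3.9, 4.0, 4.2, 3.8] * 6   # step 1: representative samples
them = [3.5, 3.6, 3.4, 3.5, 3.5] * 6   #         (invented removal times)
# steps 2-3: H0 "means are equal" vs. Ha "means differ"; test chosen above
p = two_sample_p(ours, them)           # step 4: run the test
print(p < 0.05)  # step 5: interpret - reject H0 at the 5% level → True
```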

Instructor : The hypothesis test will provide the statistical basis for rejecting or failing to reject the Null Hypothesis.
Instructor : The test yields a probability (called a P Value) that indicates the risk of making a Type One error if you reject the Null Hypothesis. You will use it to decide whether or not to reject the Null Hypothesis.
Instructor : The Null Hypothesis is assumed true unless otherwise shown. This is the "innocent until proven guilty" statement.
Instructor : The critical value that answers this question in this case is the p value.
Instructor : The p value in this case is less than zero point zero five, indicating a less than five percent chance of making a type one error.
Instructor : This means that the Null Hypothesis is rejected and we accept the alternative hypothesis. Our next task is to identify potential variations, including differences between our process and that of our competitor.
Instructor : In order to identify the source of variation, we must look at all of the possible areas that may provide differences between the two processes and be the cause of the variation. A Fishbone diagram is often a good starting point.
Instructor : We place the title Removal time too long in the title box of the fishbone diagram. Let's see what categories we can define for potential causes of variation that lead to this result.
Instructor : Some areas we can look into include the tools used in the process,
Instructor : The various factors associated with the people doing the work
> The environment in which the work is done
> The methods used to remove the nuts, including heating them when they won't come off.
> And the type of nut used. There is one extra space in case we think of any other categories, but this seems to cover everything necessary for this process.

Instructor : First, let's look at the factors associated with the people. There are four items listed under this title.

Instructor : Here is our completed fishbone diagram. If we think about something that we can test easily and which may make a significant difference, what would you pick? You won't find out until the Improve phase.
Instructor : Other tools, such as the F M E A and the Q F D, could have been used to identify potential causes as well. Different tools may be appropriate for different circumstances. However you do this brainstorming exercise, it will help your planning on either the experimentation or the data mining which will complete the search for potential Vital X's. Some variables may be discarded at this point if historical data exists to justify that action. If the effect of a variable is unknown, it should be added to the list of potential Vital X's.
Instructor : However you discover them, some of these potential causes produce continuous data and some produce discrete data. There are different tools for handling the two data types.

Instructor : Besides the 2-sample T-Test, which we have recently used, there is also a 1-sample T-Test. There are a number of other tools for hypothesis testing with continuous data. These tools can be used to determine which X has what level of impact on Y.
Instructor : The Homogeneity of Variance test determines if the variances between two populations are the same.
Instructor : The ANOVA, or Analysis of Variance, test allows you to compare the centering of multiple populations or samples.
Instructor : Scatter plots and simple regression allow you to assess the relationship between two variables.
Instructor : There are two primary tools used for discrete data. The Chi-Square analysis and Logistic Regression are both useful in dealing with discrete items. Logistic regression allows you to investigate the relationship between a categorical response variable (a discrete type of data) and one or more predictors. The resulting prediction functions are often used for optimizing the process. When both the response variables and predictors are discrete, the Chi-Square test allows one to investigate their relationship. More information on these can be found in the Resources section.

Instructor : We have now identified many potential sources of variation, but we still have to discover the Vital Xs which will lead us to our solution.

Instructor : It's really easy to get caught up in the details of the process, so it's time to step back and look at what we've done in this step.

Instructor : Well, here we are at the end of step six, and the end of the Analyze phase. Before I turn you back over to Master, let's recap what we accomplished here in step six.
Instructor : We examined the purpose of Hypothesis Testing,
Instructor : We identified and explained the tools used in this step,
Instructor : We performed a two-sample T-test
Instructor : and evaluated the results of the T test used to compare groups of data.
Instructor : Finally, we used a cause and effect, or fishbone, diagram to brainstorm possible sources of variation. This will be the starting point for the next phase.
Instructor : The statistical tools which we use allow us to analyze historical data in order to reduce the number of variables to investigate in the Improve Phase. You'll get to see them at work in the next section of this course.
Instructor : You've done a great job with the Analyze phase and it's been a pleasure being your instructor. However, my job is done and I'm going to turn things back over to Master so he can take you to Improve.

(Illustration) Determine our performance objectives

Determine our performance objectives

Instructor : Now that we know where we are starting from, we can begin to determine our performance objective, or where we want to wind up at the end of the project.

Instructor : Remember that in Step Four we determined our current process performance, as well as a number of other factors. Here in Step Five we'll use some of the results calculated in Step Four to determine what the end results of this Six Sigma project will be. After you have finished with this section, you will be able to:
>Define benchmark, entitlement, and baseline
>and explain how you determine a performance objective.

Instructor : As we said in the previous section, when you're dealing with improvement you must understand where you are starting.
Instructor : The starting point is called the Baseline. It is the point at which measurement of improvement will begin.
Instructor : The Process Entitlement is the best that can be accomplished with current technology. The usefulness of this concept is that it will indicate whether or not you need to consider new technology in order to complete your project.
Instructor : Technology is an extremely broad term in this context. It can include new tools, parts, materials, skills available, equipment available, communications infrastructure, or even an accounting system.

Instructor : While the baseline identifies where you are and the entitlement shows what can be expected with current technology
Instructor : The Benchmark is the current best practice.
Instructor : Benchmarking is defined as a process for identification of best-in-class practices and standards for comparison against internal practices.

Instructor : Benchmarking is a continual search for the best practices, methods, and processes. The aim is to adapt the best features of these "benchmarks" no matter where they originate and use them to make our own processes and products the "best of the best."
There are three primary sources of best practices...
...The top performers within GE...
...Top performers among our competitors within the industry
...And top performers in a similar situation in any industry.
Instructor : All of these sources should be examined whenever possible.
Instructor : There are three keys to success in benchmarking.
Instructor : Consider all organizations, not just corporations;
Instructor : Look at all sectors, including government, non-profit, and hybrid organizations. Corporations don't have all the answers.
Instructor : And look at both domestic and international organizations.

Instructor : The overall DPMO for the nut removal process is one hundred thirty eight thousand, three hundred and eighty defects per million opportunities. So how much should you look to reduce this value for your project? There are new corporate guidelines for Six Sigma certification. Please check with your business Quality Leader or Master Black Belt for help understanding the requirements in your business.
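
The DPMO arithmetic behind that figure is simple; as a quick check, the defect and opportunity counts below are invented to reproduce the 138,380 quoted above:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical counts chosen to match the quoted value:
print(round(dpmo(13838, 100000, 1)))  # → 138380
```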

Instructor :
Let's assume for the moment that your guidelines say to reduce the DPMO by ninety percent to start. If the actual D P M O is over one hundred thirty eight thousand, how feasible is it to consider reducing by over ninety percent to near fourteen thousand?

Instructor : One way to get a quick "reality check" is to look at the short-term P P M or D P M O. This is a close approximation of the process entitlement. As you can see, this value is under twenty three hundred; reaching a level of fourteen thousand should be quite do-able. In fact, we could even think about approaching five thousand or less, according to this report.
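The feasibility arithmetic just described can be checked in a few lines. A minimal sketch in Python, using the figures quoted in this lesson (a baseline DPMO of 138,380, the 90% reduction guideline, and a short-term DPMO near 2,300 as a proxy for entitlement):

```python
# Feasibility check for the 90% reduction guideline, using the
# Rockledge nut-removal figures quoted in the lesson.
baseline_dpmo = 138_380      # long-term DPMO from the process report
entitlement_dpmo = 2_300     # short-term DPMO, a proxy for entitlement

target_dpmo = baseline_dpmo * (1 - 0.90)   # 90% reduction target

# The target is achievable with current technology only if it is
# no more aggressive than the entitlement.
feasible = target_dpmo >= entitlement_dpmo

print(round(target_dpmo))    # 13838 -- "near fourteen thousand"
print(feasible)              # True
```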

Instructor : However, we have another value we need to look at before we can truly set our performance objective. This involves comparing our process to that of a competitor through benchmarking.
Instructor : We were able to acquire data from a competitor as part of our benchmarking process.
Instructor : According to our limited data, our competitor has been able to show a long term defect rate of one thousand, four hundred and six defects per million opportunities. The goal should be to at least catch up, and if possible exceed, competitor performance. The long term goal is always to achieve six sigma performance.
Instructor : So we have tentatively set our performance objective for the removal process at a long term D P M O of less than fourteen hundred and six.

Instructor : Our current process is not very competitive,
Instructor : and even our process entitlement falls short of our goal. This implies that we may need a radical change in our process, including the implementation of new technology, to meet our goal.
Instructor : Through discussion, the team decided to set the goal of reducing the DPMO from 138,380 to no more than about 1,400, with the intention of reducing it further so that we can reach six sigma and delight our customer.

Instructor : We still have the issue that our benchmarking study defines a performance objective that exceeds our process entitlement. What does this mean?


Instructor : In this step we used Benchmark, Entitlement, and Baseline, in the context of business strategy, to assist in our goal setting.
Instructor : The team set the project objective in consultation with black belts and master black belts.
Instructor : In this case, exceeding the competitor's process is the minimum goal.
Instructor : Improvement is measured by reduction of the actual D P M O number. This is the metric that will determine the success of your project.
Instructor : So where do we stand with our Rockledge case?
Instructor : Based on a discussion of the process entitlement and the competitor benchmark, we intend to reduce the long term D P M O of the nut removal process from over one hundred thirty eight thousand to about fourteen hundred defects per million opportunities.
Instructor : Because the performance objective D P M O is less than our estimated process entitlement, we will look for a change in process technology, which could include new tools, parts, or other similar factors.
Instructor : After looking at data for both the installation and removal processes, we have decided to concentrate our efforts on improving the removal process.
Instructor : Well, you've done it again. You've finished step five. We've only got one step left in the Analyze phase, so if you're ready, click Next and we'll go on. If you decide to leave now and come back later, you can go right to Step six.

(Illustration) Evaluating current process capability

Evaluating current process capability

Master: In the Analyze phase, you'll answer three critical questions leading to the actions needed to improve the process.
First of all, you have to establish process capability. This means finding out "Where are we now?" or, in other words, "How good is what you're doing?"
Master:
Next, we have to define a performance objective to decide "Where are we going?" That's when we figure out what our goals are for the project.
Master: Finally, you have to identify variation sources that could keep you from achieving the goal. That's all we need to do here.

Master: Instructor is our resident expert in all aspects of the Analyze phase. She'll take you through it from baseline to benchmark, hitting everything in between. I'll see you for a wrap-up after the two of you are done.
Master:
It's all yours Instructor...

Instructor : Hi, I'll be your instructor for the Analyze phase in this course. As a Master Black Belt, I'm very familiar with all phases of D M A I C, but I have to admit that I really like the Analyze phase. It's great to pull meaningful information out of the raw data collected in Measure and then use that information to decide where you will be going with the process. I know you're excited about it too.

Instructor: Welcome to Step four, the beginning of Analyze. You cannot set a measurable goal without a clear understanding of where you are. Here in step four, we'll determine that starting point.

Instructor: In step four, we need to accomplish a number of things. They all involve determining the capability of our process. Let's look at the learning objectives for this step. By the time you finish this lesson, you will be able to:
>Explain the key statistical measures used to determine variation in a process.
>Define short term and long term process capability and how to calculate various factors such as Z Bench, Z Long Term, Z Short Term and D P M O.

>Evaluate the results of the process capability report
>Explain and evaluate the results of the product capability report for discrete data, and
>Describe the different types of yield. While that seems like a lot to cover, you'll find that it all hangs together as a means to answer the single question we have in this step: "Where are we starting from in our improvement effort?"

Instructor : You cannot plan a journey unless you know your starting point. In step four we will determine our starting point from which we will set our goals. Our primary measure for determining this value is the sigma level. The sigma level is calculated through statistical analysis of the collected data.

Statistics is about organizing data, presenting it in a variety of formats, and drawing meaningful, quantitative conclusions from it. We have collected the data and will now use a variety of statistical tools to draw conclusions.

Instructor: If you look at the headings, you will see some familiar labels: the P P M, which also corresponds to the Defects per Million Opportunities; the Z Shift; and the Z Bench.
Instructor:
The data shown in the middle part of the report show these factors for each characteristic tested. If there were multiple characteristics, there would be more than one line here.
Instructor: The bottom line shows the total for all characteristics, which in this case is the same as the one characteristic tested here.

Instructor : Variation is the enemy. You will often hear that phrase in connection with six sigma activities. What does this mean?
Instructor :
Variation is deviation from the expected norm. The more variation in a process or product, the less predictable the outcome. So how do we measure variation?
Instructor : Range, the width of the Distribution, and the Standard Deviation all indicate the level of variability in a process. Higher variance numbers, a larger standard deviation, or a broader distribution, all indicate high degrees of variation in the process.
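As a quick illustration of these measures, here is a sketch using Python's standard library; the sample values are invented for the example:

```python
import statistics

# Hypothetical bolt-removal times in minutes; values invented for
# this illustration only.
times = [8.4, 8.5, 8.6, 8.5, 8.3, 8.7, 8.5]

data_range = max(times) - min(times)   # width of the data
stdev = statistics.stdev(times)        # sample standard deviation

print(round(data_range, 2))   # 0.4
print(round(stdev, 3))        # 0.129
```

A wider range or a larger standard deviation would signal more variation in the same way the lesson describes.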


Instructor : If your data follows a normal distribution, approximately sixty eight percent of all values will lie within one standard deviation of the mean.
Instructor : So a lower standard deviation implies less variation in a process and its outputs, and more predictability in the process, which in turn implies better control of the process. Let's see how it works.

Instructor : We start with a simple histogram with a normal curve overlaid
Instructor :
First draw a vertical line from the X axis to the curve at the mean.
Instructor :
Then draw two vertical lines from the axis to the curve. One is at one standard deviation lower than the mean
Instructor :
and the other is one standard deviation higher
Instructor :
Sixty eight percent of all the data falls between the two lines.
Instructor :
As you have seen, the standard deviation directly relates to the ability to predict our arrival time.
Instructor :
In a very similar way, the standard deviation also relates to our ability to meet requirements.
Instructor :
If the requirements set limits at one standard deviation away from the target, sixty eight percent of the time the process will produce outcomes that meet the customer requirements. In this case, the capability of the process is not good.
Instructor :
If the requirements are two standard deviations away from the target, the process outcome will meet the requirements ninety five point four percent of the time.
Instructor :
Better yet, a process with a smaller standard deviation, so that the previous requirements are three standard deviations away from the target, will produce acceptable outcomes ninety nine point seven percent of the time and produce undesired or unacceptable results only zero point three percent of the time.
Instructor : Therefore, the more standard deviations that can fit between the requirements, the better our ability to predict the outcome, and the better the chance that the outcome will meet customer requirements. This concept is extended in the Six Sigma methodology to evaluate the ability of a process to meet requirements. We will discuss this concept in detail later.
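The coverage figures quoted above (sixty eight, ninety five point four, and ninety nine point seven percent) follow from the normal distribution and can be verified with a short Python sketch:

```python
import math

def within_k_sigma(k: float) -> float:
    """P(|Z| < k) for a normal distribution: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(k, round(within_k_sigma(k) * 100, 1))
# 1 68.3
# 2 95.4
# 3 99.7
```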


Instructor : There are several related terms which you must comprehend before you can continue. The first is Process Capability.
Instructor :
The process capability measures the ability of your process to meet customer requirements. It takes the customer viewpoint and applies it to an analysis of your process.
Instructor :
If the area under the curve beyond the U S L is ten percent of the total area under the curve, it represents a ten percent chance of a nonconformance in the process. This indicates that one out of every ten times, the process is likely to produce a defect from the customer perspective. This technique essentially provides a measure of the capability of the process to meet customer requirements. We have a rigorous system to calculate this capability in six sigma practice.
Instructor :
The customer acceptable variability is defined by a specification. When we compare our process capability to the requirements of the specification, there are a number of values which may describe the results. While a number of these capability indices are generated, General Electric uses the Z bench, often called the sigma level, which can be presented as either a short term Z, called Z sub S T, or a long term Z, called Z sub L T. The Z values are the common communication language on process capability.
Instructor : But what do these Z values have to do with our process?

Instructor : As we discussed previously, the percentage of area outside the specification limits represents the probability to produce an outcome that does not meet the customer requirements. This is the defect rate.
Instructor :
Given the mean, standard deviation and specification limits, you can use mathematical manipulation to calculate the defect rate. This can be very cumbersome at times.
Instructor :
Luckily, every normal curve can be normalized by setting the mean to zero and constructing a scale in standard deviation units. This standardized structure then allows us to use one simple table to determine the defect rate by looking at a parameter called the Z score.
Instructor :
The Z score is calculated by subtracting the mean from the specification limit and then dividing by the standard deviation. In other words, the Z score measures the distance between the mean and the specification limit in standard deviation units.
Instructor :
Once the Z-score is calculated based on this simple formula, one can easily resort to the single-tail Z table to determine the defect rate or the probability of producing an outcome that is outside the specification limits. Let me show you how this works. The mean, in our actual measurement units, is eight point five. Our standard deviation is zero point one. Now that we have the actual measurement scale matched up with the standard deviation scale, we can proceed to calculate our scores.
Instructor
: This is the Z table.
Instructor :
Before we leave this table, there are two more important items to know. The DPMO, or defects per million opportunities, is obtained simply by multiplying the defect rate P(d) by 1,000,000. Both DPMO and P(d) are important parameters in calculating the six sigma level. The Z table can also be used in a reverse manner: for a given DPMO, one can identify the corresponding Z score by reversing the procedure that we just practiced. For example,
Instructor :
What is the Z score for a DPMO of 93,400? (DPMO=93400 shown on the graph)
Instructor :
One should divide the DPMO by 1,000,000 to obtain a P(d) of 0.0934. (phasing out DPMO and phasing in P(d)=0.0934)
Instructor :
Locate the P(d) of 0.0934 on the table. (move the magnifying glass at random on the chart and then settle on the right number 0.0934)
Instructor :
Read the Z score by adding the headings of the top row and the left column corresponding to this P(d); the corresponding Z score is 1.32. Finding the Z score for a given DPMO comes in handy when we start estimating the process capability in sigma level.
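Both directions of the Z table lookup can be reproduced numerically. A sketch using Python's statistics.NormalDist (available in Python 3.8+) in place of the printed table:

```python
from statistics import NormalDist

std_normal = NormalDist()    # standard normal: mean 0, sigma 1

# Forward lookup: Z score -> single-tail defect rate P(d).
z = 1.32
p_d = 1 - std_normal.cdf(z)          # upper-tail area beyond z
print(round(p_d, 4))                 # 0.0934, as in the table example

# Reverse lookup: DPMO -> Z score.
dpmo = 93_400
p_rev = dpmo / 1_000_000             # convert DPMO back to a probability
z_rev = -std_normal.inv_cdf(p_rev)   # Z with p_rev in the upper tail
print(round(z_rev, 2))               # 1.32
```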
Instructor :
Now that you have learned about the Z score and the Z table, let's see how this applies to calculating process capability in the six sigma methodology.
Instructor :
The graph shows the distribution of all possible outcomes for a given process. The mean and standard deviation of the outcome distribution are 8.5 and 0.1, respectively.
Instructor :
The lower and upper specification limits are 8.2 and 8.7, respectively. To calculate the process capability of this process, we need to calculate the Z score against the upper and lower specification limits, Z sub LSL and Z sub USL, using the Z score formula that we learned previously.
Instructor :
The Z sub LSL is calculated by subtracting the lower specification limit, which in this case is eight point two, from the mean of eight point five, and dividing the result by the standard deviation of zero point one. This results in a Z sub L S L of three.
Instructor :
Similarly, we subtract the mean from the upper specification limit and divide by the standard deviation to provide a value of two for the Z sub U S L.
Instructor :
Now we must look at the areas under the curve outside of the upper and lower spec limits. Those areas define the probability of the process producing a defect. By calculating those values we can determine the Z bench for the process. Z bench is simply the Z score corresponding to the total P(d).
Instructor :
Here is the Z table again. Let's determine the defect rate, or P parenthesis D, for the Z sub L S L of 3.0.
Instructor :
Move down the left column until you reach the entry three point zero.
Instructor :
Then move over one column to the right, under the column heading zero point zero. The entry in that column is one point three five E minus zero three, or zero point one three five percent. This indicates a P parenthesis d, or probability of a defect, of zero point one three five percent from falling below the lower spec limit.
Instructor : So we have our value for the lower spec limit. Next, we need to determine P parenthesis D for a Z sub U S L of two point zero.
Instructor :
Move to the Z value of two point zero on the left hand column, and move over one column to read the P parenthesis D for the upper specification limit.
Instructor :
This number is two point two eight times ten to the power of minus two, or zero point zero two two eight.
Instructor :
If we add those two numbers together, we get a value of point zero two four one five, or two point four two times ten to the power of minus two, which is the probability of producing a defect by violating either the lower or upper specification limit. This is the overall probability of producing a defect in this process.
Instructor :
Remember the conversion of P(d) to DPMO: this P(d) corresponds to a DPMO of 24,200, obtained by simply multiplying 0.0242 by 1,000,000. To complete the calculation, we need to figure out the Z score corresponding to this P(d) of 0.0242.
Instructor :
The closest we can find is two point four times ten to the minus two.
Instructor
: Moving to the left, we find the first part of our Z bench is one point nine zero
Instructor
: While moving up, we find the rest of it is point zero seven.
Instructor : So our Z bench for this process is one point nine seven, or about a two-sigma process and the corresponding DPMO is 24200.
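The whole Z bench walk-through (mean 8.5, standard deviation 0.1, spec limits 8.2 and 8.7) can be sketched end to end in Python, with statistics.NormalDist standing in for the Z table. Note that exact computation gives a DPMO near 24,100 and a Z bench near 1.98; the coarser printed table rounds these to 24,200 and 1.97:

```python
from statistics import NormalDist

mean, stdev = 8.5, 0.1
lsl, usl = 8.2, 8.7
std_normal = NormalDist()

z_lsl = (mean - lsl) / stdev           # 3.0
z_usl = (usl - mean) / stdev           # 2.0

# Tail areas outside each spec limit give the defect probabilities.
p_low = std_normal.cdf(-z_lsl)         # ~0.00135
p_high = 1 - std_normal.cdf(z_usl)     # ~0.0228
p_total = p_low + p_high               # ~0.0242

dpmo = p_total * 1_000_000             # ~24,100 (table rounds to 24,200)
z_bench = -std_normal.inv_cdf(p_total) # ~1.98 (table rounds to 1.97)

print(round(dpmo), round(z_bench, 2))
```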

Instructor : Before we continue, there are four terms with which we need to become familiar.
Instructor :
The Short Term Z, or Z sub S T, indicates the best performance that may be expected from a given process. This is the Z bench based on short-term data. It is an optimized Z score for the process.
Instructor :
The Long Term Z, or Z sub L T, indicates the longer-term performance of the process. This is the Z bench based on long-term data. It takes into account the expected drift of process performance over time.
Instructor :
We will look at two general categories of causes for our performance deficiencies. The first is Common Cause. Common Cause is a source of variation which is random, usually associated with the "trivial many" process input variables, and which will not produce a highly predictable change in the process output response (dependent variable).
Instructor : A Special Cause is a process input variable that can be identified and that contributes in an observable manner to non-random shifts in process mean and/or standard deviation.

Instructor : Let us review the process.
>The upper and lower limits are given by the specification
>Performance data is then analyzed to provide Z scores for both the upper and lower limit.
> The probability for a defect per unit is determined from the Z score and the Z table
> Adding the probability of defect values together and using the Z table in a reverse manner from the previous step allows you to determine the Z bench for the process.
>If you only have a single spec limit, the Z score for that limit becomes your Z bench.

Instructor : Now that we have determined that the data supports determination of process capability, we'll use MiniTab to generate the report.
Instructor :
First select the Six Sigma Menu from within MiniTab and then select the Process Report.
Instructor
: Upon selecting Process Report, the system displays this menu
Instructor :
Since there is no lower specification limit, leave that field blank. Enter thirty into the Upper Spec Limit field and fifteen for the Target.
Instructor :
Since the data we want to use is in one column, you will click on the Single Column button and place the cursor in the first field. Then click on C2 which is the column holding the data, in the left window.
Instructor :
Now click on Select and the selected column's name appears in the Single Column field, place the cursor in the Subgroup size field and click on C3, which is the column holding the data group id's. We collected samples from eight different maintenance cycles over a full year. Since the eight cycles covered the entire gamut of possible situations, the entire data set may be considered as long term data, while each of the eight sub groups is considered short term.
Instructor :
When you click on Select, that field name appears in the Subgroup size field. Now click on OK, and the report will be generated. Click on the Next button to see the results.

Instructor : By default, the Process Report command generates two report screens. Report 2 includes a great deal of information which may assist you in understanding the process. Let's take a quick look at this chart.

Instructor : This is the process capability report for nut removal.
Instructor :
It contains the X-bar and S charts, and other information, which will be discussed in more detail in the Control phase, as well as a number of factors which may be used to determine the process capability.
Instructor : The numerical results of this analysis are printed in a table to the right of the charts. This table contains valuable information on both long term and short term process capability.

Instructor : While various segments of the organization utilize different numbers, two of the primary values from the Process Capability report are the
Instructor :
Z Bench for both Short Term (Z sub S T) and Long Term (Z sub L T) process capability. These indicate some of the most critical factors you address in this phase.
Instructor : The difference between the Z sub S T and the Z sub L T is the Z shift. While the Z sub short term reflects the capability of the process, the Z shift indicates how well the process is controlled. A process which is poorly controlled will display a larger Z shift. A Z shift of one point five is considered typical for a manufacturing process, over the long term. For additional information on the Capability Indices, look for the topic in the Resources section of the left hand menu.

Instructor : The first Z value is designated as Z short term, or Z sub S T.
Instructor :
It is calculated by subtracting the Target, or T, from one of the Specification Limits, represented here by SL.
Instructor : That value is divided by the short term standard deviation. The result will be the short term Z for that specification limit.
Instructor :
The Z sub S T describes how precise the process is at any given moment in time. It is the value used when referring to the "SIGMA" of a process.
Instructor :
It represents the true potential of the process technology to meet the given performance specification(s);
Instructor :
It shows what the process can do if everything is controlled to such an extent that only background noise, also known as common cause variation, is present.
Instructor :
This metric assumes the data were gathered in accordance with the principles and spirit of a "rational sampling" plan. As indicated by the phrase short-term, the standard deviation is estimated based on the outcome of samples within a relatively short time span.
Instructor :
The second Z value is designated as Z long term, or Z sub L T.
Instructor :
To calculate Z sub L T, you subtract the mean from the specification limit
Instructor :
and then divide the result by the long-term standard deviation to get the long term Z score.
Instructor :
The Z sub L T describes the sustained reproducibility of a process. Because of this, it is also called "long-term capability."
Instructor :
This value is used to estimate the long-term process D P M O or the P P M as shown in the six sigma process report.
Instructor :
It reflects the influence of special cause variation, dynamic nonrandom process centering error, and any static offset present in the process mean. From this perspective, it considers all of the "vital few" sources of manufacturing error.
Instructor :
It is a measure of how well the process is controlled (over many cycles) when compared to Z sub S T.
Instructor : This metric assumes the data were gathered in accordance with the principles and spirit of a "rational sampling" plan. As indicated by the phrase long-term, the standard deviation is estimated based on outcomes sampled over a relatively longer time span.

Instructor: The Executive Summary provides a simplified view of the results. It provides not only the critical Z scores that will be used to make decisions in this phase, but also a clear picture of the process capabilities.
Instructor: Let's take a closer look at the process performance curves. The Process Performance graphs provide overall visual representation of the data.

Instructor: In this graph the vertical line represents the upper specification limit for disassembling each bolt.
Instructor:
The dotted line represents the short term process capability,
Instructor:
while the solid line represents the long term process capability.
Instructor: The reason for the shift between the short term and the long term graphs is the expected drift of the process over time. As you can see, the defect rate is expected to increase as the process shifts.

Instructor : Z sub S T is based on the analysis of subgroups. Z sub S T is calculated based on the variation within each subgroup, assuming the process mean is centered at the target. Over time, the process mean may shift and variation may widen due to so-called special cause variation.
Instructor :
Therefore, the long-term capability Z sub L T reflects the ability to maintain Z sub S T over a long period of time. A process that lacks control will show a large Z shift.
Instructor :
Commonly, if Z Shift is greater than 1.5, this is a definite indication of a control problem.
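The Z shift check described here reduces to a subtraction and a comparison against the one point five rule of thumb. A minimal sketch with illustrative values:

```python
def z_shift(z_st: float, z_lt: float) -> float:
    """Z shift: the gap between short-term and long-term capability."""
    return z_st - z_lt

# Illustrative values, not from the case data.
shift = z_shift(3.8, 2.0)
print(round(shift, 1))    # 1.8
print(shift > 1.5)        # True -> indicates a control problem
```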
Instructor:
The advantage of this analysis is to give a high level perspective on the direction of improvements and its associated difficulties. New technology often means inventing a new process or a new product, which is perhaps more inherently challenging than optimizing current technology. This summary chart puts it into perspective.
Instructor:
The Z sub S T is on the X axis,
Instructor :
With the zone of average technology indicated as between three and four point five sigmas.
Instructor:
while the Z Shift is on the Y axis.
Instructor :
With the zone of typical control indicated by a Z shift of between one and two. The results are indicated by quadrants.
Instructor:
Quadrant A shows poor control and poor technology
Instructor:
Quadrant B shows good technology, but a poorly controlled process.
Instructor:
Quadrant C shows good control, but not good technology,
Instructor: while Quadrant D is where we want to be; with good control of good technology.
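The quadrant logic can be sketched as a small function. The cut points used here (Z sub S T of 4.5 for good technology, Z shift of 1.5 for good control) are assumptions drawn from the zones described above, not values stated in the lesson:

```python
def quadrant(z_st: float, shift: float,
             tech_cut: float = 4.5, control_cut: float = 1.5) -> str:
    """Classify a process on the technology/control chart.

    The cut points are illustrative assumptions, not course-given values.
    """
    good_tech = z_st >= tech_cut
    good_control = shift <= control_cut
    if good_tech and good_control:
        return "D"   # good control of good technology
    if good_tech:
        return "B"   # good technology, poorly controlled
    if good_control:
        return "C"   # good control, but not good technology
    return "A"       # poor control and poor technology

print(quadrant(5.0, 1.0))   # D
print(quadrant(3.0, 2.5))   # A
```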

Instructor: The final area we will look at on the Process Capability Executive Summary is the Process Benchmark box. This area provides two critical pieces of information which will be used in the next step. Notice that two of the values are prominent, while the other two are faded out. In Six Sigma practice, Z sub S T and the long term D P M O are reported by convention. When only long term data is available, a Z sub L T is calculated and a Z Shift of one point five is applied to report a Z sub S T.
Instructor:
In the sigma row, what MiniTab calls the Potential, or short term, sigma is the same as your Z sub S T. The term potential refers to the result of calculating Z sub S T with the process centering technique. Process centering assumes the mean can be shifted to the targeted value and that the standard deviation is only associated with background variation. The Z sub S T calculation may actually represent the process entitlement if new technology is required to meet project objectives.
Instructor:
The ppm row is the same as Defects Per Million Opportunities, or D P M O.

Instructor: We now have all the information we need about the case to move on to step five, but we need to take a look at another tool that we use when the process capability report is not applicable.

Instructor : The Six Sigma Product Report is used when we deal with discrete data, rather than continuous data.
Instructor :
There are three key pieces of information needed to generate a product report.
Instructor :
An Opportunity is anything you measure or test. It can be the size of a hole, the length of a pipe, or the thickness of a sheet. If it is a measurement we choose to make to address a C T Q, we can consider it an opportunity. There are often multiple opportunities for a defect per unit.
Instructor : A Defect is any non-conformity in a product. Units refers to the number of items, parts, subassemblies, assemblies, or systems inspected or tested.

Instructor: Manufacturing organizations have always needed a way to measure the overall quantity and quality of products leaving the factory in order to quantify the overall productivity of the organization. Quality systems have used the concept of yield since the early days of the industrial revolution.
Instructor:
In general, yield refers to a passing rate for completed parts that are free of defects as measured against customer specifications.
Instructor:
However, the way we measure defects and calculate yield can give strikingly different results.
Instructor: Let's examine three different kinds of yield.

Instructor: Classical yield is the number of defect-free parts for the whole process divided by the total number of parts inspected.
Instructor:
It does not show which of many potential sub processes may be causing specific problems.
Instructor: It also does not take into account rework which may take place within a complex process. Therefore, valuable data is lost.

Instructor: First Time Yield is the number of defect-free parts divided by the total number of parts inspected for the first time.
Instructor: This is a better yield estimate to use in driving improvement.

Instructor: Throughput Yield is the percentage of units that pass through an operation without any defects at any stage. This is the best overall yield estimate to use in driving improvement to a process.
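To see how the three yields diverge on the same process, here is a sketch with invented step data; the point is that classical yield hides rework while throughput yield exposes it:

```python
# Hypothetical three-step process: (units entering, first-pass good).
# Rework keeps the unit count at 100 between steps; data invented.
steps = [
    (100, 90),   # step 1
    (100, 80),   # step 2
    (100, 95),   # step 3 (final inspection)
]

units_started = 100
defect_free_at_end = 92    # after rework, 92 of 100 ship defect-free

# Classical yield: final good units over total; rework is invisible.
classical = defect_free_at_end / units_started

# First time yield at final inspection: good-first-time over total.
first_time = steps[-1][1] / steps[-1][0]

# Throughput (rolled) yield: product of each step's first-pass yield.
throughput = 1.0
for total, good in steps:
    throughput *= good / total

print(classical, first_time, round(throughput, 3))   # 0.92 0.95 0.684
```

The throughput figure is far lower than the other two, which is exactly why it is the best yield to drive improvement.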

Instructor: In this step we examined a number of topics. These included:
> Key statistical measures used to determine variation
> The values generated for continuous and discrete data
> Visual displays of data, including histograms, run charts, and other displays produced by MiniTab
> The concepts of process capability, variation, and yield
> and the Process Capability Report, the Product Report for discrete data, and the L1 Spreadsheet.
Instructor:
So where are we now with the Rockledge Case? Well, we have now defined the process capabilities.
Instructor:
We looked at removal and installation as two different processes and generated information for each of them.
Instructor:
So we have now established our baselines for each process.
Instructor:
The Removal process seems to have some problems,
Instructor:
While the installation process is in much better shape, with the DPMO value far less than one.
Instructor: Congratulations. You have completed Analyze Phase, Step 4. You have clearly defined the capability of your process.

Instructor : Now that we know where we are starting from, we can begin to determine our performance objectives, or where we want to wind up at the end of the project. So in this step we're going to determine our performance objective. Click Next and we'll look at the learning objectives for this step.

Instructor : Remember that in Step Four we determined our current process performance, as well as a number of other factors. Here in Step Five we'll use some of the results calculated in Step Four to determine what the end results of this Six Sigma project will be. After you have finished with this section, you will be able to:
Instructor : Define benchmark, entitlement, and baseline
Instructor : And Explain how you determine a performance objective. So, when you're ready, click Next and we'll move on.

Instructor : As we said in the previous section, when you're dealing with improvement you must understand where you are starting.
Instructor : The starting point is called the Baseline. It is the point at which measurement of improvement will begin.
Instructor : The Process Entitlement is the best that can be accomplished with current technology. The usefulness of this concept is that it will indicate whether or not you need to consider new technology in order to complete your project. Technology is a fairly broad term in this context, and can include new tools, parts, and other key resources.
Instructor : It can consist of materials, skills available, equipment available, communications infrastructure, or even an accounting system. It's an extremely broad term. Click Next and we'll look at our third term.

Instructor : While the baseline identifies where you are and the entitlement shows what can be expected with current technology
Instructor : The Benchmark is the current best practice.
Instructor : Benchmarking is defined as a process for identification of best-in-class practices and standards for comparison against internal practices. Click Next and we'll take a more in-depth look at this critical activity.

Instructor : Benchmarking is a continual search for the best practices, methods, and processes. The aim is to adapt the best features of these "benchmarks" no matter where they originate and use them to make our own processes and products the "best of the best."
Instructor : There are three primary sources of best practices...
Instructor : ...The top performers within GE...
Instructor : ...Top performers among our competitors within the industry
Instructor : ...And top performers in a similar situation in any industry.
Instructor : All of these sources should be examined whenever possible.
Instructor : There are three keys to success in benchmarking.
Instructor : Consider all organizations, not just corporations;
Instructor : Look at all sectors, including government, non-profit, and hybrid organizations. Corporations don't have all the answers.
Instructor : And look at both domestic and international organizations.

Instructor : The overall DPMO for the nut removal process is one hundred thirty eight thousand, three hundred and eighty defects per million opportunities. So how much should you look to reduce this value for your project? There are new corporate guidelines for Six Sigma certification. Please check with your business Quality Leader or Master Black Belt for help understanding the requirements in your business.

Instructor :
Let's assume for the moment that your guidelines say to reduce the DPMO by ninety percent to start. If the actual D P M O is over one hundred thirty eight thousand, how feasible is it to consider reducing by over ninety percent to near fourteen thousand?
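The feasibility arithmetic here is just a 90% reduction of the baseline DPMO quoted in the case:

```python
# Quick feasibility check: a 90% reduction of the current long-term DPMO
# of 138,380 for the nut removal process.
baseline_dpmo = 138_380
target_dpmo = baseline_dpmo * (1 - 0.90)
print(f"90% reduction target: {target_dpmo:,.0f} DPMO")  # 13,838
```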

Instructor :
One way to get a quick "reality check" is to look at the short-term P P M or D P M O. This is a close approximation of the process entitlement. As you can see, this value is under twenty three hundred; reaching a level of fourteen thousand should be quite do-able. In fact, we could even think about approaching five thousand or less, according to this report. Click on the underlined text if you wish to see the glossary entry for Process Entitlement.
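As an aside, the relationship between long-term and short-term performance can be sketched with the conventional 1.5-sigma shift. This is only the textbook conversion, not the Minitab report itself; the case's actual short-term value (under 2,300) comes from the within-subgroup variation in the capability report, so the numbers will not match exactly.

```python
# Illustrative only: the conventional 1.5-sigma shift between long-term
# and short-term DPMO. Not a substitute for the actual capability report.
from statistics import NormalDist

nd = NormalDist()

def short_term_dpmo(long_term_dpmo: float, shift: float = 1.5) -> float:
    """Approximate short-term DPMO implied by a long-term DPMO."""
    z_long = nd.inv_cdf(1 - long_term_dpmo / 1_000_000)
    z_short = z_long + shift  # conventional 1.5-sigma shift
    return (1 - nd.cdf(z_short)) * 1_000_000

# Applied to the case's long-term DPMO of 138,380:
print(round(short_term_dpmo(138_380)))
```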

Instructor :
However, we have another value we need to look at before we can truly set our performance objective. This involves comparing our process to that of a competitor through benchmarking.

Instructor : We were able to acquire data from a competitor as part of our benchmarking process.
Instructor : According to our limited data, our competitor has been able to show a long term defect rate of one thousand, four hundred and six defects per million opportunities. The goal should be to at least catch up with, and if possible exceed, competitor performance. The long term goal is always to achieve six sigma performance.
Instructor : So we have tentatively set our performance objective for the removal process at a long term D P M O of less than fourteen hundred and six. Click Next to continue.

Instructor : Our current process is not very competitive,
Instructor : and even our process entitlement falls short of our goal.
Instructor : This implies that we may need a radical change in our process, including the implementation of new technology, to meet our goal.
Instructor : Through a discussion with the team, we have set the goal of reducing the DPMO from 138,380 to no more than 1,406, with the intention of reducing it further so that we can reach six sigma and delight our customer.
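The goal-setting logic above can be summarized in a short sketch. The 90%-reduction guideline and competitor benchmark come from the case; the entitlement figure of roughly 2,300 is the approximate short-term value quoted from the capability report.

```python
# Sketch of the goal-setting logic: take the tougher (lower-DPMO) of the
# guideline target and the competitor benchmark, then compare against
# entitlement to see whether new technology is likely needed.
baseline_dpmo = 138_380
guideline_dpmo = baseline_dpmo * 0.10   # 90% reduction guideline -> 13,838
benchmark_dpmo = 1_406                  # competitor's long-term DPMO
entitlement_dpmo = 2_300                # approx. short-term value from report

objective_dpmo = min(guideline_dpmo, benchmark_dpmo)
# Objective more demanding than entitlement => current technology can't get there
needs_new_technology = objective_dpmo < entitlement_dpmo

print(objective_dpmo, needs_new_technology)  # 1406 True
```

This is exactly the situation the instructor describes: the benchmark-driven objective is tighter than the process entitlement, so a technology change is on the table.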

Instructor : We still have the issue that our benchmarking study defines a performance objective that exceeds our process entitlement. What does this mean? Here's a hint: the answer is in the glossary definition of process entitlement. When you have selected your choice, click Done to continue.

Instructor : In this step we used Benchmark, Entitlement, and Baseline in the context of business strategy to assist in our goal setting.
Instructor : The team set the project objective in consultation with black belts and master black belts.
Instructor : In this case, exceeding the competitor's process is the minimum goal.
Instructor : Improvement is measured by reduction of the actual D P M O number. This is the metric that will determine the success of your project.
Instructor : Remember to check out all of the resources that provide additional information on the topics of this step. When you click Next, we'll review where we stand with the Rockledge case.
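Since DPMO reduction is the metric that determines project success, it helps to remember how the number is built. A minimal sketch with invented counts (the case only quotes the final DPMO figures, not the underlying tallies):

```python
# Defects per million opportunities. Counts below are invented for
# illustration; they merely land near the case's baseline figure.
def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """DPMO = defects / (units * opportunities per unit) * 1,000,000."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# e.g. 83 defects observed across 100 units with 6 opportunities each:
print(round(dpmo(83, 100, 6)))  # 138333
```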

Instructor : So where do we stand with our Rockledge case?
Instructor : Following a discussion of the process entitlement and the competitor benchmark, we intend to reduce the long term D P M O of the nut removal process from over one hundred thirty eight thousand to about fourteen hundred defects per million opportunities.
Instructor : Because the performance objective D P M O is less than our estimated process entitlement, we will look for a change in process technology, which could include new tools, parts, or other similar factors.
Instructor : After looking at data for both the installation and removal processes, we have decided to concentrate our efforts on improving the removal process.
Instructor : Well, you've done it again. You've finished step five. We've only got one step left in the Analyze phase, so if you're ready, click Next and we'll go on. If you decide to leave now and come back later, you can go right to Step six. Don't forget to check all the extras in the Tools, Resources, and Glossary menus to the left. See you in Step six.