Six Sigma - Black Belt

TABLE OF CONTENTS
1. Enterprise-Wide Deployment
1.1. Enterprise-Wide View
1.2. Leadership
2. Organizational Process Management and Measures
2.1. Impact on Stakeholders
2.2. CTx Requirements
2.3. Benchmarking
2.4. Business Performance Measures
2.5. Financial Measures
3. Team Management
3.1. Team Formation
3.2. Team Facilitation
3.3. Team Dynamics
3.4. Time Management
3.5. Decision-Making Tools
3.6. Management and Planning Tools
3.7. Team Performance Evaluation and Reward
4. Define Phase
4.1. Voice of the Customer
4.2. Project Charter
4.3. Project Tracking
5. Measure Phase
5.1. Process Characteristics
5.2. Data Collection
5.3. Measurement Systems
5.4. Statistics
5.5. Probability
5.6. Process Capability
6. Analyze Phase
6.1. Modeling and Measuring Relationships Between Variables
6.2. Hypothesis Testing
6.3. FMEA
6.4. Other Analysis Methods
7. Improve Phase
7.1. Design of Experiments (DOE)
7.2. Waste Elimination
7.3. Cycle-Time Reduction
7.4. Kaizen and Kaizen Blitz
7.5. Theory of Constraints (TOC)
7.6. Implementation
7.7. Risk Analysis and Mitigation
8. Control Phase
8.1. Statistical Process Control (SPC)
8.2. Other Control Tools
8.3. Maintain Control
8.4. Sustain Improvements
9. Design for Six Sigma (DFSS)
9.1. Common Design Methodologies
9.2. DFX
9.3. Robust Design and Process
9.4. Special Design Tools
1. ENTERPRISE-WIDE DEPLOYMENT
A Six Sigma project is applicable across the enterprise because of its data-driven nature. Six Sigma is deployed on the basis of results from data analysis, which affect different activities in order to reach an optimum state. An enterprise-wide view should be established for better implementation of Six Sigma.
1.1. Enterprise-Wide View
Six Sigma is a results-focused approach to quality. It is also a measurement technique that lowers defects, which converts into cost savings and competitive advantage.
Sigma (σ) is a mathematical symbol representing one standard deviation from the average or mean. Most control charts set their limits at ±3σ, but Six Sigma extends three more standard deviations. At the Six Sigma level, there are only 3.4 defective parts per million (PPM); a Six Sigma process operates at a 99.9997% quality level.
History of Continuous Improvement
Continuous improvement involves constantly identifying and eliminating the causes that prevent
a system or process from functioning at its optimum level. The concept of continuous
improvement originated in Japan in the 1970s. It was adopted in many countries, including
U.S.A., in the early 1980s. Continuous improvement, and the customer satisfaction that follows from it, is the principle on which Lean manufacturing is built; when this principle is combined with the just-in-time technique, the result is Lean manufacturing. Continuous
improvement helps an organization to add value to its products and services by reducing defects,
mistakes, etc. and to maximize its potential. As continuous improvement requires constant
ongoing efforts, it is essential that the top management takes a long term view and commits itself
for its implementation.
Continuous improvement enables organizations to identify and rectify problems as and when they
occur. Thus, it ensures smooth functioning of the processes. Many modern quality improvement
models or tools like control charts, sampling methods, process capability measures, value
analysis, design of experiments, etc. have been influenced by the concept of continuous
improvement.
The history of Six Sigma encompasses various events which shaped its formation and spread. Six Sigma has evolved over time; it is more than just a quality system like TQM or ISO. The key events in its evolution are as follows:
Carl Friedrich Gauss (1777-1855) introduced the concept of the normal curve.
Walter Shewhart, in the 1920s, showed that three sigma from the mean is the point where a process requires correction.
Following the defeat of Japan in World War II, America sent leading experts including Dr.
W. Edwards Deming to encourage the nation to rebuild. Leveraging his experience in
reducing waste in U.S. war manufacture, he offered his advice to struggling emerging
industries.
By the mid-1950s, he was a regular visitor to Japan. He taught Japanese businesses to
concentrate their attention on processes rather than results; concentrate the efforts of
everyone in the organization on continually reducing imperfection at every stage of the
process. By the 1970s many Japanese organizations had embraced Deming's advice. Most
notable is Toyota which spawned several improvement practices including JIT and TQM.
Western firms showed little interest until the late 1970s and early 1980s. By then the success
of Japanese companies caused other firms to begin to re-examine their own approaches and
Kaizen began to emerge in the U.S.
Many measurement standards (Zero Defects, etc.) later came on the scene but credit for
coining the term “Six Sigma” goes to a Motorola engineer named Bill Smith. (“Six Sigma” is
also a registered trademark of Motorola). Bill Smith, along with Mikel Harry from Motorola,
had written and codified a research report on the new quality management system that
emphasized the interdependence between a product’s performance in the market and the
adjustments required at the manufacturing point.
Motorola, under the direction of Chairman Bob Galvin, used statistical tools to identify and
eliminate variation. From Bill Smith’s yield theory in 1984, Motorola developed Six Sigma
as a key business initiative in 1987.
Various models and tools emerged, which are:
Kaizen – It refers to any improvement, one-time or continuous, large or small.
TQM – Total Quality Management, the organization-wide management of quality, consisting of 14 principles.
PDCA Cycle – W. Edwards Deming's Plan-Do-Check-Act cycle.
Lean Manufacturing – It focuses on the elimination of waste or "muda" and includes tools such as Value Stream Mapping, the Five S's, Kanban and Poka-Yoke.
JIT – Just in Time, i.e., catering to the needs of the customer as they occur.
Six Sigma – It is designed to improve processes and eliminate defects; it includes the DMAIC and DMADV models inspired by PDCA.
Dr. W. Edwards Deming developed 14 points on Quality Management, a core concept for implementing total quality management; they are a set of management practices to help companies increase their quality and productivity. The 14 points are:
Create constancy of purpose for improving products and services.
Adopt the new philosophy.
Cease dependence on inspection to achieve quality.
End the practice of awarding business on price alone; instead, minimize total cost by working
with a single supplier.
Improve constantly and forever every process for planning, production and service.
Institute training on the job.
Adopt and institute leadership.
Drive out fear.
Break down barriers between staff areas.
Eliminate slogans, exhortations and targets for the workforce.
Eliminate numerical quotas for the workforce and numerical goals for management.
Remove barriers that rob people of pride of workmanship, and eliminate the annual rating or
merit system.
Institute a vigorous program of education and self-improvement for everyone.
Put everybody in the company to work accomplishing the transformation.
Value and Foundation of Six Sigma
The Six Sigma concept was developed at Motorola in the 1980s. Six Sigma can be viewed as a
philosophy, a technique, or a goal.
Philosophy - Customer-focused breakthrough improvement in processes
Technique - Comprehensive set of statistical tools and methodologies
Goal - Reduce variation, minimize defects, shorten the cycle time, improve yield, enhance
customer satisfaction, and boost the bottom line
Six Sigma is not just about quality improvement but also about providing better value to customers, investors and employees. Six Sigma is a business initiative, a way of doing business that improves quality and productivity, increases competitiveness and reduces cost. By controlling the amount of variation within the allowable upper and lower limits of a process, the frequency of out-of-control conditions is reduced. Making Six Sigma a part of doing business reduces errors,
identifies and corrects deviations in processes and impacts the success of the organization. Six
Sigma is a process of asking questions that lead to tangible and quantifiable answers that
ultimately produce profitable results. There are four groups of quality costs, which are
External failure cost - warranty claims, service cost
Internal failure cost - the costs of labor, material associated with scrapped parts and rework
Cost of appraisal and inspection - the costs of materials for samples, test equipment, inspection labor, quality audits, etc.
Cost related to improving poor quality - quality planning, process planning, process control,
and training.
Most companies operate at about the three sigma level, at which the cost of quality consumes 25-40% of annual revenue. Thus, if a company can improve its quality by one sigma level, its net income increases substantially, by approximately 10 percent.
Furthermore, when the level of process complexity increases (e.g., the output of one sub-process feeds the input of another sub-process), the rolled throughput yield of the process decreases, the final outgoing quality level declines, and the cost of quality increases. Project
teams with well-defined projects improve the company's profits.
Mathematical Six Sigma - The term 'Six Sigma' is drawn from the statistical discipline of process capability studies. Sigma, represented by the Greek letter 'σ', stands for standard deviation from the mean. 'Six Sigma' represents six standard deviations from the mean. This implies that if a company produces 1,000,000 parts/units and its processes are at the Six Sigma level, no more than 3.4 defects will result. However, if the processes are at the three sigma level, the company ends up with as many as 66,807 defects for every 1,000,000 parts/units produced.
The table below shows the number of defects observed for every 1,000,000 parts produced (also
referred to as defects per million opportunities or DPMO).
Sigma Level      Defects per million opportunities (DPMO)
Two Sigma        308,537 DPMO
Three Sigma      66,807 DPMO
Four Sigma       6,210 DPMO
Five Sigma       233 DPMO
Six Sigma        3.4 DPMO
The process standard deviation (σ) should be so small that 12σ (±6σ) fit within the customer-specified limits. Then, even if the process drifts from the target, it still delivers results that meet the customer requirements. A few terms used are:
USL – The upper specification limit for a performance standard; deviation above it is a defect.
LSL – The lower specification limit for a performance standard; deviation below it is a defect.
Target – Ideally, this is the middle point between the USL and LSL.
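To illustrate these terms, here is a short Python sketch with hypothetical specification limits; any measurement outside the [LSL, USL] range counts as a defect:

    # Hypothetical spec limits around a nominal dimension of 10.0 units
    LSL, USL = 9.5, 10.5
    TARGET = (LSL + USL) / 2  # ideally the midpoint, here 10.0

    measurements = [9.7, 10.1, 10.6, 9.4, 10.0]
    defects = [m for m in measurements if m < LSL or m > USL]
    print(defects)  # [10.6, 9.4] fall outside the specification limits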
The Six Sigma approach is to find out the root causes of the problem, symbolically represented by Y = f(X). Here, Y represents the problem that occurs due to cause(s) X.
Y                        x1, x2, x3, ..., xn
Dependent                Independent
Customer related output  Input-process
Effect                   Cause
Symptom                  Problem
Monitor                  Control
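As a purely illustrative Python sketch of Y = f(X), the hypothetical transfer function below relates controllable inputs (the x's) to the monitored output (Y); the input names and coefficients are invented for the example:

    def process_output(temperature, pressure, speed):
        # Hypothetical transfer function Y = f(x1, x2, x3)
        return 2.0 * temperature - 0.5 * pressure + 0.1 * speed

    # The x's are controlled; the resulting Y is monitored.
    print(process_output(temperature=180, pressure=30, speed=120))  # 357.0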
Benefits of Six Sigma
Continuous defect reduction in products and services
Enhanced customer satisfaction
Performance dashboards and metrics
Process sustenance
Project based improvement, with visible milestones
Sustainable competitive edge
Helpful in making right decisions
Value and Foundation of Lean
Lean manufacturing traces back to the late 18th century, when Eli Whitney developed the idea of interchangeable parts. Henry Ford, in the early 20th century, arranged all the elements of a manufacturing system into a continuous flow, and the Toyota Production System (TPS) by Toyota combined all these processes and techniques into what is now called lean manufacturing.
Lean manufacturing focuses on lean philosophy which is about elimination of waste in all forms
at the workplace. Specific lean methods include just-in-time inventory management, Kanban
scheduling systems and 5S workplace organization.
Manufacturing industries in these ever-competitive times face immense pressure to minimize turnaround time or cycle time, provide greater product variety and maintain product quality with
the most economical output.
Hence, a system is needed which results in continual improvement and simultaneous growth of bottom-line profits without extreme initial costs or increased administrative costs. Lean manufacturing systems, when implemented, can serve this purpose by easing manufacturing challenges.
Lean manufacturing reduces waste by focusing on teams of well-informed employees, clean and organized workspaces, flow in every system, pull-based systems that adapt to consumer demand, and reduced lead times.
Lean and Six Sigma Integration
Six Sigma and Lean have been combined, and concepts such as "Lean-Six Sigma" have emerged, as process improvement needs both to get better results. Both the Lean and the Six Sigma
methodologies have proven to achieve dramatic improvements in cost, quality, and time by
focusing on process performance. As Six Sigma focuses on reducing variation and improving
process yield by following a problem-solving approach using statistical tools, Lean is primarily
concerned with eliminating waste and improving flow by following the Lean principles and a
defined approach to implement each of these principles.
Six Sigma eliminates defects but does not optimize process flow, while the Lean principles exclude the advanced statistical tools often required to achieve the process capabilities needed to be truly 'lean'. Hence, the two methods are considered complementary, and many firms are looking for an approach that combines both methodologies into an integrated system or improvement roadmap.
Business Processes and Systems
A business process is a group of tasks which result in a specific service or product for customers.
It can be visualized with a flowchart or a process matrix. Business processes are fundamental to
every company’s performance. Understanding and optimizing the business process is the aim of
six sigma. It is a series of actions, changes, or functions bringing about a result. A business has
various core functions or processes like Sales, Marketing, Engineering, Production and Customer
Service.
[Figures: a flowchart to change a bulb, and a process matrix]
Dissecting and truly understanding the root causes of process performance is critical to effective process improvement, which can be accomplished by Six Sigma. Each process has the three elements of inputs, process and outputs that affect its function. A business process is a collection
of related activities that produce something of value to the organization, its stakeholders or its
customers. Processes are definable portions of a system or subsystem that consist of a number of
individual elements, actions, or steps.
Having a standard model such as DMAIC (Define-Measure-Analyze-Improve-Control) makes
process improvement and optimization much easier by providing the teams with an easy
roadmap. This disciplined, structured, rigorous approach consists of steps which are linked
logically to the previous step and to the next step. It is not enough for organizations to treat
process improvement as one-time or periodic events. A sustaining focus on process management
and continuous improvement is the key.
Types of Processes - Processes can be classified as management processes, operational
processes and supporting processes.
Management processes - These processes administer the operation of a system. Some
examples of management processes are planning, corporate governance, etc.
Operational processes - These processes create the primary value stream for the customers.
Hence, they are also called ‘core business processes’. Some examples of operational
processes are purchasing of raw materials, manufacturing of goods, rendering of services,
marketing, etc.
Supporting processes - These processes support the core business processes of the
organization. Some examples of supporting processes are accounting, technical support, etc.
These processes can be divided into many sub-processes that play their intended roles to
successfully complete the respective head processes.
Business System - A business system is a group of business processes which combine to form a
single and identifiable unit of business for a common mission. It is composed of processes,
which in turn are composed of sub-processes and which are further composed of individual
tasks.
A business system is a system that implements a process or a set of processes. Much like a computer system, it ensures that all the processes operate smoothly, without delays or lack of resources. The basic
aim of a business system is to ensure that the processes, products, and services are subjected to
continuous improvement. To ensure that continuous improvement takes place in processes,
products, and services, a business system must provide scope for collection and analysis of data
from processes and other reliable sources.
It is important to have an appropriate business system in place and to ensure that the relevant processes under the system are well documented. The documentation of the processes must be done in such a
way that every task, activity, and their sequence are taken into account for proper execution as
planned for in the business system.
Each process has its endpoints defined by inputs and outputs that are monitored, and their measurements are used for its optimization. Measurements within the process are used to
effectively control the process.
Lean and Six Sigma Applications
Lean and Six Sigma, when combined, make a process improvement project much richer than either of them taken individually. Six Sigma has its origins in the application of statistical methods in an industrial context, whereas Lean has its origins in Japanese quality concepts of waste removal.
An integrated approach of Six Sigma and Lean principles to process improvement should include implementing value stream mapping to build a pipeline of projects for applying Six Sigma or Lean.
Recent applications of Lean and Six Sigma in health care attempt to improve the health care
delivery by making project deliverables more discrete and measurable, retaining a strong
customer focus, quantifying results, and attempting to deliver specific quality improvements
within a designated time frame.
Programs utilizing Lean approaches resulted in substantially reduced turnaround time for
pathologist reports from an anatomical pathology lab and Lean-facilitated improvements
included reducing IV backlog in the pharmacy, reducing the time needed to perform glucose
checks on patients, decreasing time to enter new medication orders and complete chart entries,
and streamlining electronic payment for large vendor accounts.
An integrated Lean and Six Sigma approach also led to reducing the complexity of hiring part-time clinical staff, optimizing operating room scheduling by designing a new pre-surgical admissions process, and developing a new work planning system to expedite completion of equipment maintenance requests. The UK's National Health Service adopted a variety of Lean strategies, including redesigning the number of steps, and hence the time, needed for collection and processing of blood samples at its hospitals.
Lean and Six Sigma methodologies are well suited for application to laboratory settings because
of the inherent need for statistical precision and quality control in laboratory testing and
measurement activities, as well as the highly repetitive nature of laboratory work.
To ensure success of Lean and Six Sigma implementations, it is always preferable that a group
representing the top management oversees the implementation. This group will identify and rank
the difficulties that come in the way of efficient implementation, and assemble teams to solve
them on a priority basis. This group is responsible for training, supporting, recognizing, and rewarding the teams involved in the Lean and Six Sigma applications. The various areas in which Six Sigma and Lean applications can be implemented are given below:
A services organization can make use of Six Sigma and Lean for various purposes like
determining ideal lead time, meeting tight schedules, etc.
A manufacturing organization can use Six Sigma and Lean for various purposes like
reducing cycle time on assembly lines, improving productivity, etc.
1.2. Leadership
Effective and efficient leadership plays a vital role in implementation and management of six
sigma projects. Successful implementation of Six Sigma projects requires the top leadership's
commitment. Six Sigma focuses on cross-functional and enterprise-wide processes, thus
requiring leadership and support from the executive staff. Various facets of leadership are
discussed in this chapter.
Enterprise Leadership Responsibilities
The most essential characteristic of a six sigma project leader is of being a problem solver. A
leader should have the expertise and skills to identify and remove any problems or bottlenecks which hamper the smooth functioning of the team.
A leader should be knowledgeable and tactical in resource allotment to balance team dynamics
and synergies to achieve the objectives of the project. Team dynamics is needed for teams to
succeed and is a part of teamwork. Team dynamics is built by harmonious relationships within
the team and outside of it. Healthy relationships are built by establishing communication channels, which enable the team to accept and adopt changes or new ideas in unison. The crucial responsibilities of leadership are to allocate resources for problem identification, solution, and correction for the future.
Organizational Roadblocks
Various external and internal factors in the organization stall Six Sigma projects; these are called organizational roadblocks.
Some of the common internal roadblocks are
Structure and culture of the organization
Policies of the organization
Stakeholder resistance
Some of the common external roadblocks are
Rules and regulations imposed by the government
Market conditions
Customer acceptance trends
Usually, internal roadblocks are the greater obstacles to the implementation of Six Sigma, as it imposes new and radical changes within the organization. Different organizations have different structures, and the level of resistance to the implementation of Six Sigma depends on the type of organizational structure. For example, a rigid, centralized organizational structure is more resistant to the changes required to implement a Six Sigma project.
Various techniques can be used to overcome organizational roadblocks, which are:
Modify organizational structure and culture - One of the simplest techniques to overcome
organizational roadblocks is to modify the organizational structure and culture to an extent
that the roadblocks cease to exist.
Improve infrastructure - Effective communication infrastructure can nullify the effects of
organizational roadblocks. Proper and efficient communication channels ensure smooth flow
of information so that ambiguities within and outside the team are resolved.
Provide training on change management - Providing adequate training on change
management at managerial levels will help influence mindsets to regard change as an
organizational necessity and accept it.
Change management
Managing changes has become an integral part of a project leader’s job. It has taken precedence
over many other aspects of project management. Change management helps organizations to
rework their organizational structures, objectives and tactical and strategic approaches to doing
business in step with changing times, evolving technologies, heightened customer expectations,
and rapidly transforming political, social, and cultural trends.
The change management process primarily aims at changing existing mindsets by drawing upon the
principles of industrial psychology. The changed minds, in turn, can be trusted to bring about the
required organizational transformation.
Some of the characteristics of ineffective change management are as follows
Inadequate resources - Non-allocation of adequate resources necessary to implement the
assigned changes is a major constraint to manage change.
Improper communication - Information on changes to be implemented is not communicated
or is miscommunicated to the concerned personnel. This can result in anxiety and unrest
among the employees who may resist these well-intentioned changes fearing threats to job
security.
Some of the characteristics of effective change management are as follows
For change management to be effective, it is always preferable that it is entrusted to
executives from senior management.
The need for change, and both its positive and negative implications, must be explained by
the management in order to make the employees realize the importance of change.
The change agents within the organization have to be identified and used to smoothen the
change management process. Similarly, strategies to overcome the resistors to change have to
be planned and implemented.
Kaizen Events
The broad objectives of the organization must be aligned with its long term strategies. One of the
techniques that an organization can use to align its objectives with long term strategies is ‘hoshin
planning’. Hoshin planning helps an organization to develop its business plan and deploy the
same across the organization in order to reach the set goals.
Project selection is a testimony to a leader’s role in successfully aligning the broad objectives of
the organization with its long term strategies. A project selection committee or group can be
formed to screen and select projects. It can include Champions, Master Black Belts, Black Belts,
and important executive supporters.
The project selection committee sets the criteria to select the projects. The project selection
criteria are framed on the basis of the key factors that define the business case and business need
of an organization. After selecting the projects, the project selection committee matches the
projects selected with teams assigned to execute them.
The projects assigned to a Six Sigma team will generally be characterized by extensive data
analysis, use of design of experiments etc. whereas those involving process improvement without
the use of Six Sigma techniques will be assigned to lean manufacturing teams using kaizen tools.
The projects assigned to a kaizen team will be distinctly different from the Six Sigma projects.
The kaizen projects that aim to produce a new product design will generally adhere to the
guidelines for Design for Six Sigma. Kaizen projects using Lean principles are normally
undertaken to bring swift improvement and reduce waste in the organization.
Six Sigma Roles and Responsibilities
Roles and responsibilities are defined before a Six Sigma program is implemented. The various roles are outlined below.
Champion
Sets and maintains broad goals for improvement projects in area of responsibility
Owns the process
Coaches and approves changes, if needed, in direction or scope of a project
Finds (and negotiates) resources for projects
Represents the team to the Leadership group and serves as its advocate
Helps smooth out issues and overlaps
Works with Process Owners to ensure a smooth handoff at the conclusion of the project
Regular reviews with Process Owner on key process inputs and outputs
Uses DMAIC tools in everyday problem solving
Process Owner
Maximizes high level process performance
Launches and sponsors improvement efforts
Tracks financial benefit of project
Understands key process inputs and outputs and their relationship to other processes
Key driver to achieve Six Sigma levels of quality, efficiency and flexibility for this
process
Uses DMAIC tools in everyday problem solving
Participates on GB/BB teams
Team Member
Participates with project leader (GB or BB)
Provides expertise on the process being addressed
Performs action items and tasks as identified
Uses DMAIC tools in everyday problem solving
Subject matter expert (SME)
Green Belt (GB)
Leads and/or participates on Six Sigma project teams
Identifies project opportunities within their organization
Knows and applies Six Sigma methodologies and tools appropriately
Black Belt (BB)
Proficient in Six Sigma tools and their application
Leads/supports high-impact, bottom-line projects full-time
Directly supports MBB’s culture change activities
Mentors and coaches Green Belts to optimize functioning of Six Sigma teams
Facilitates, communicates, and teaches
Looks for applicability of tools and methods to areas outside of current focus
Supports Process Owners and Champions
Master Black Belt (MBB)
Owns Six Sigma deployment plan and project results for their organization
Responsible for BB certification
Supervisor for DMAIC BBs; may be supervisor for DFSS BBs
Influences senior management and Champions to support organizational engagement
Leads culture change – communicates Six Sigma methodology and tools
Supports Champions in managing projects and project prioritization
Ensures that project progress check, gate review, and closing processes meet corporate
requirements and meet division needs
Communicates, teaches, and coaches
Coach
Some businesses have coaches who support the GBs and others coach the BBs.
Trains Green Belts with help from BBs and MBB
Coaches BBs and GBs in proper use of tools for project success
Is a consulting resource for project teams
2. ORGANIZATIONAL PROCESS MANAGEMENT AND MEASURES
Stakeholders are the entities which have an interest in the process or the business; they include suppliers, customers, employees and investors. Similarly, the process stakeholders include the process operators, executives, managers, suppliers, customers and supporting staff such as logistics personnel. The interests of stakeholders may also vary with time.
2.1. Impact on Stakeholders
Stakeholders are affected by the implementation of Six Sigma projects: changes in process inputs modify requirements for suppliers, procedural changes affect how operators and managers work and monitor the process, and altered process outputs affect customers.
2.2. CTx Requirements
A Six Sigma project may impact any or all of the process stakeholders, depending on the change envisioned under optimization. Hence, the CTx (Critical to x) concept was developed to understand the areas of influence on the customer. The 'x' in CTx can refer to different focus attributes like quality, safety, delivery, etc. A few important 'x' attributes are discussed below.
Critical-to-Quality (CTQ) – It focuses on physical characteristics of the product like weight, size, etc. Customers may specify tolerance levels for the product to comply with.
Critical-to-Cost (CTC) – It aims at the cost impact to the customer. A physical characteristic of the product may be taken as CTC to achieve the impact on cost due to a specific physical value.
Critical-to-Process (CTP) – It includes the inputs to the key business processes, for example the temperature of paint before it is applied to a car's body in an automobile company.
Critical-to-Safety (CTS) – It enlists the safety levels for the product or process required by the customer to feel safe in using the product.
Critical-to-Delivery (CTD) – The customer clearly states the delivery-related timelines to be met regarding the product or service, like pizza delivery timelines.
2.3. Benchmarking
Benchmarking refers to the process of identifying "best practice" in relation to the present processes. It measures products, services and processes against those of organizations which are recognized leaders in them. It provides insights into present deficiencies relative to similar organizations, thus identifying the areas, systems, or processes for improvement. It is usually classified into two major types:
Technical benchmarking – It is conducted by design personnel to assess the capabilities of products or services against those of competitors.
Competitive benchmarking – It is usually done by external entities to compare an organization with the leading competition for critically important attributes or functions.
2.4. Business Performance Measures
Business performance can mean many different things. Improving the business performance is
accomplished by a comprehensive, systemic approach to managing accountability on an
organizational as well as individual basis. Business performance measurement has a variety of
uses and it is measured to
Monitor and control
Drive improvement
Maximize the effectiveness of the improvement effort
Achieve alignment with organizational goals and objectives
Reward and discipline
Different frameworks and reference models are used for measuring business performance; these usually include the balanced scorecard and KPIs.
The Balanced Scorecard
It is the most widely used business performance measurement framework, introduced by Robert
S. Kaplan and David P. Norton in 1992. Balanced scorecards initially focused on finding a way to report on leading indicators of a business's health; they were later refocused on measures that directly relate to the firm's strategy. Usually the balanced scorecard is broken down into four sections, called perspectives:
The financial perspective - The strategy for growth, profitability and risk from the
shareholder’s perspective. It focuses on the ability to provide financial profitability and
stability for private organizations or cost-efficiency/effectiveness for public organizations.
The customer perspective - The strategy for creating value and differentiation from the
perspective of the customer. It focuses on the ability to provide quality goods and services,
delivery effectiveness, and customer satisfaction
The internal business perspective - The strategic priorities for various business processes that
create customer and shareholder satisfaction. It aims for internal processes that lead to
“financial” goals
The learning and growth perspective - The priorities to create a climate that supports
organizational change, innovation and growth. It targets the ability of employees, technology
tools and effects of change to support organizational goals.
The Balanced Scorecard is needed due to various factors:
Focus on traditional financial accounting measures such as ROA, ROE, EPS gives
misleading signals to executives with regards to quality and innovation. It is important to
look at the means used to achieve outcomes such as ROA, not just focus on the outcomes
themselves.
Executive performance needs to be judged on success at meeting a mix of both financial and
non-financial measures to effectively operate a business.
Some non-financial measures are drivers of financial outcome measures which give
managers more control to take corrective actions quickly.
Too many measures, such as hundreds of possible cost accounting index measures, can
confuse and distract an executive from focusing on important strategic priorities. The
balanced scorecard disciplines an executive to focus on several important measures that drive
the strategy.
KPI
Key Performance Indicators (KPIs) are quantifiable measurements which are fixed as targets for the critical success factors of an organization. KPIs, being tied to organizational goals, vary depending on the organization; examples include:
The percentage of its income that comes from return customers
Number of students graduating from a college
Percentage of customer calls answered in the first minute
The effectiveness of marketing campaigns at generating increased revenue and sales
Irrespective of the KPI selected, it should be in sync with the organization's goals and should be
measurable. KPIs are applied at multiple levels across the company as, high-level KPIs focus on
the overall performance of the company and lower-level KPIs on departmental processes. The most widely followed way to evaluate the relevance of a KPI is to use the SMART criteria, which expand to specific, measurable, attainable, relevant, and time-bound:
Is the objective Specific?
Can progress towards the goal be Measured?
Is the goal realistically Attainable?
How Relevant is the goal to the organization?
What is the Time-frame for achieving the goal?
Performance dashboards gather the data needed to measure KPIs in one place, making it easy to see whether the KPIs are succeeding in their purpose.
2.5. Financial Measures
Financial performance is a key underlying foundation of an organization. The success of a Six Sigma project is assessed by the value the project creates. Some of the important financial measures that organizations use to assess or forecast financial performance are discussed below.
Revenue Growth
Revenue growth refers to the estimated rise in income that is accrued by implementing a project.
It is computed by deducting the cost from the gross income. It can be shown either in terms of
percentage per year or dollars per year.
Market Share
Market share refers to the share of an organization in the total sales of a particular product or
service by the all the organizations in a given market. The market share measure becomes
prominent in the times of economic crisis as it helps an organization to forecast its short term and
long term progress. For instance, if an organization has low sales but high market share for a
product during market slowdown, it can be sure that its sales will increase in the future.
Cost of Quality (COQ)
Cost of quality is the sum of various costs: appraisal costs, prevention costs, external
failure costs, and internal failure costs. It is generally believed that investing in prevention of
failure will decrease the cost of quality as failure costs and appraisal costs will be reduced.
Understanding cost of quality helps organizations to develop quality conformance as a useful
strategic business tool that improves their product, services & brand image. This is vital in
achieving the objectives of a successful organisation.
COQ is primarily used to understand, analyze & improve the quality performance. COQ can be
used by shop floor personnel as well as a management measure. It can also be used as a standard
measure to study an organisation’s performance vis-à-vis another similar organisation and can be
used as a benchmarking index.
The various costs which constitute cost of quality are
Appraisal cost is the cost incurred because of inspecting the processes. The cost associated
with checking and testing to find out whether it has been done first time right.
Prevention cost is the cost incurred because of carrying out activities to prevent failures. The
cost associated with planning, training and writing procedures associated with doing it first
time right.
External failure cost is the cost incurred because of the failure that occurred when the
customer used the product.
Internal failure cost is the cost incurred because of the failures within the organization.
Examples of the various costs are
Prevention - Training Programme, Preventive Maintenance
Appraisal - Depreciation of Test/ Measuring Equipment, Inspection Contracts
Internal Failure - Scrap, Rework, Downtime, Overtime
External Failure - Warranty, Allowances, Customer Returns, Customer Complaints, Product
Liability, Lawsuits, Lost Sales
Identifying COQ can have several benefits:
It provides a standard measure across the organisation & also inter-organisation
It builds awareness of the importance of quality
It identifies improvement opportunities
Being a cost measure, it is useful at shop floor as well as at management level
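Because COQ is simply the sum of the four cost categories, it can be totalled directly; the Python sketch below uses figures assumed purely for illustration:

    # Assumed annual cost figures per category (illustrative only)
    coq = {
        "prevention": 12_000,        # training, preventive maintenance
        "appraisal": 30_000,         # inspection, test equipment
        "internal_failure": 55_000,  # scrap, rework, downtime
        "external_failure": 80_000,  # warranty, returns, complaints
    }
    print(sum(coq.values()))  # 177000

The dominance of failure costs in such breakdowns is the usual argument for investing more in prevention.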
Net present value (NPV)
The net present value (NPV) of an amount to be received in the future is given by the following formula, which uses the time value of money to appraise long-term projects:
P = A (1 + i)^-n
where P is the net present value, A is the money to be received 'n' years from the present time, and 'i' is the annual rate of interest expressed as a decimal (if i is 10%, then 0.10 is used to calculate the NPV). This measure is used to select the project with the maximum net present value. The time value of money is already taken into account while calculating NPV. As an example:
Assume that Rs. 2400 will be available in five years. What is the NPV of that money if the annual interest rate is given as 9%? Solution - Substitute A as Rs. 2400, n as 5 and i as 0.09 in the formula. We have P = 2400(1 + 0.09)^-5 = Rs. 1559.84. Thus, Rs. 1559.84 invested at 9% for five years will be worth Rs. 2400.
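The worked example can be checked with a short Python sketch of the single-amount discounting formula above (a sketch only, not a full multi-cash-flow NPV):

    def present_value(amount, annual_rate, years):
        # P = A * (1 + i)**(-n)
        return amount / (1 + annual_rate) ** years

    print(round(present_value(2400, 0.09, 5), 2))  # 1559.84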
Another example: there are 2 projects. Project A has an NPV of Rs. 1,000 and will be completed
in 5 years. Project B has a NPV of Rs. 800 and will be completed in 1 year. Which project to
select? Solution - Project A will be selected. The fact that project B has a lesser duration than
project A does not matter because time is already taken into account in NPV calculations.
Return on investment (ROI)
Usually, ROI is used to estimate the organization’s potential to utilize its resources to produce
revenues. It is one of the most common financial measures. It is computed as
ROI = Net income / Total investment x 100%
where net income includes the money that is earned or expected to be earned by the project and money that is saved by avoiding certain costs, and investment refers to the money needed to carry out the project. Another ROI approach is to calculate the 'amortization' time, i.e., the time in which the organization can get back the money invested in the project; this is also known as the 'payback period'.
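A minimal Python sketch of the two ROI views just described, with illustrative figures:

    def roi_percent(net_income, total_investment):
        # ROI = net income / total investment x 100%
        return net_income / total_investment * 100

    def payback_years(total_investment, annual_net_income):
        # Amortization time: years needed to recover the money invested
        return total_investment / annual_net_income

    print(roi_percent(50_000, 200_000))    # 25.0 (%)
    print(payback_years(200_000, 50_000))  # 4.0 (years)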
Cost-benefit analysis
Cost-Benefit Analysis is done before the project is initiated. It is conducted in the following way
List the anticipated benefits of the project.
State the benefits in financial terms and specify the time limits.
List the costs involved in undertaking the project.
Compute whether the benefits exceed costs and decide whether to implement the project or
not. At times, the management may decide to implement a project even when the costs
exceed the benefits, for instance, projects that provide social benefits, projects that are
prestigious to the organization, etc.
Some of the indicators used in cost-benefit analysis are Present Value of benefits (PVB), Present
Value of Costs (PVC), NPV (NPV = PVB - PVC), and Benefit-Cost Ratio (BCR).
The benefit-cost ratio (BCR) is calculated using the formula:
BCR = Benefits (payback or revenue) / Costs
BCR can be used as a project selection criterion; a project with a higher BCR is preferable when selecting projects. As an example:
There are 2 projects. Project A has an investment of $ 500,000 and a BCR of 2.5. Project B has
an investment of $ 300,000 and a BCR of 1.5. Using the Benefit Cost Ratio criterion, which
project to be selected?
Solution - Project A will be selected, as it has the higher BCR. The fact that project B has a lower investment than project A does not impact the selection.
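A small Python sketch of this BCR selection rule, using the figures from the example above:

    projects = {
        "A": {"investment": 500_000, "bcr": 2.5},
        "B": {"investment": 300_000, "bcr": 1.5},
    }
    # Select on BCR alone; investment size does not enter the criterion.
    best = max(projects, key=lambda name: projects[name]["bcr"])
    print(best)  # A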
Payback Period
The payback period is the number of years required for an organization to recapture an initial investment. As a project selection criterion, the project with the shorter payback period is selected. For example:
There are 2 projects. Project A has an investment of $ 500,000 and payback period of 3 years.
Project B has an investment of $ 300,000 and payback period of 5 years. Using the payback
period criterion, which project to select?
Solution - Project A will be selected. The fact that project B has a lower investment than project A does not impact the selection.
Margin
Margin is calculated as the difference between incomes and costs.
3. TEAM MANAGEMENT
A team is a group of people but every group is not a team. A team is different from a group in
the sense that it is usually small and exists for a relatively long period of time, until the objective for
which it is formed is accomplished. A team must, ideally, consist of members who possess
multifarious skills to efficiently handle various types of tasks. These skills should match job
responsibilities and tasks that are to be carried out. The skills of the team members can differ
according to the nature, scope and size of the project.
3.1. Team Formation
The purpose of forming a team is to improve the internal and external efficiencies of the
company. This is done through the efforts of the team members to improve quality, methods, and
productivity.
Management supports the team process by
Ensuring a constancy of purpose
Reinforcing positive results,
Sharing business results
Giving people a sense of mission
Developing a realistic and integrated plan
Providing direction and support
Team Types and Constraints
A team can generally be classified as ‘formal’ or ‘informal’.
Formal team – It is a team formed to accomplish a particular objective or a particular set of
objectives. The objective of the team formation is called the ‘mission’ or ‘statement of
purpose’. It may consist of a charter, list of team members, letter of authorization and support
from the management.
Informal team – This type of team will not have the documents that a formal team will have.
But an informal team has versatile membership, as its members can be changed as per the requirements of the task at hand.
A team can also be classified into following types depending on a given situation and constraints
that prohibit the formation of either formal or informal teams, as
Virtual team - A virtual team is usually formed to overcome the constraint of geographical
locations which separate members. Some of the characteristics of a virtual team are as follows:
It consists of members who live in different places and who may never meet one another
during the course of accomplishment of the goal of the team.
In a virtual team, the members make use of different technologies like telephone, internet,
etc. to coordinate within the team for the achievement of the common goal.
Process Improvement Team - It is formed to discover the modifications required in a particular
process in order to improve it. It consists of members who belong to various groups that will be
affected by the proposed changes, thus making it cross functional in nature.
Self-directed and work group teams - These teams have wide-ranging goals that are ongoing and repetitive, which requires them to carry out activities on a daily basis. They are usually formed to make decisions on matters such as safety, personnel, maintenance, quality, etc.
Team Roles
A team performs optimally when all the members are assigned appropriate roles and they
understand their roles in terms of the overall functioning of the team. Some of the major team
roles and responsibilities are as follows:
Team leader - The team leader motivates, guides, and helps the team stay focused. The team leader heads and conducts team meetings and supervises the performance of the team. The team leader also documents and administers the activities of the team, divides the work among members, and monitors their work.
Sponsor - They define the scope and goals of the project and provide the essential resources
required to achieve the preset goals. They also monitor and control the team and its
activities through the team leader.
Facilitator - They facilitate the team members in expressing their ideas and at times head the
team meetings. They also aid the team leaders in keeping the team focused and aid the team in making decisions on matters of high importance. They provide assistance to the team in overcoming substandard performance, if any. They also assist in avoiding and resolving
conflicts.
Coach - Coach coordinates with the team leader and the facilitator to help the team function
smoothly. Coach also assists the team members in fulfilling their obligations by supplying
required resources.
Team member - They participate and share their views in team meetings and use their expertise to accomplish the tasks assigned to them. Team members carry out the tasks as per the schedule.
Team Member Selection
For a team to achieve its broad goals it must have members who are experts in the critical
spheres of the project. Some of the major factors that influence the selection of team members
are
Ideal combination of required skills - A team must consist of members with varied skills.
Varied behavioral styles - Apart from varied skills, the presence of different personalities or
behavioral styles too is a factor to be considered during team member selection.
Optimal number of members - Ideally a team must consist of five to eight members.
Basic teamwork attributes - All the members must possess at least elementary teamwork
training.
Adaptability - Members must possess behavioral flexibility.
Schedule - Team members must be available during the project timeframe. They must be
able to dedicate the amount of time and energy the project requires.
Launching Teams
Teamwork can achieve goals that individual efforts cannot achieve. So, for a group of people to
produce teamwork, certain conditions have to be met before launching the team:
The goals should be explicitly stated and directly related to the project work.
Appropriate training on team dynamics must be provided to the team members.
A well planned schedule must be prepared. The team must not be made to wait for specific
work to be assigned after the project is underway.
The team must be made aware about what they are authorized to do. They must also be
assured of total support from the management.
It must be ensured that there is balanced participation of members in the carrying out of the
project.
There must be a sponsor who has a vested interest in the success of the project.
Proper communication channels must be created.
3.2. Team Facilitation
The team leader and/or facilitator must understand group dynamics. Facilitators are useful in
assisting a group in the following ways
Identifying members of the group that need training or skill building
Avoiding team impasses
Providing feedback on group effectiveness
Summarizing points made by the group
Balancing group member activity
Helping to secure resources that the team needs
Providing an outside neutral perspective
Clarifying points of view on issues
Keeping the team on track with the process
Helping with interpersonal difficulties that may arise
Focusing on progress
Assessing the change process
Assessing cultural barriers (attitudes, personalities)
Assessing how well groups are accomplishing their purpose
Asking for feelings on sensitive issues
Helping the leader to do his/her job more easily
Coaching the leader and participants
The facilitator must avoid
Being judgmental of team members or their ideas, comments, opinions
Taking sides or becoming caught-up in the subject matter
Dominating the group discussions
Solving a problem or giving an answer
Making suggestions on the task instead of on the process
Team Motivation
Probably the most important part of management is the manager’s responsibility for motivating
the people for whom he or she is responsible. Certainly the most challenging management
responsibility is how to both sustain and increase internal motivation in the work group.
Effective managers have confidence in their subordinates and trust them to a greater degree than
do less effective leaders. A few motivational theories are discussed below.
Abraham Maslow - Maslow's theory is that individuals are motivated by lower-order needs until these are relatively satisfied; thereafter, higher-order needs must be met to sustain motivation and satisfaction.
Self-actualization needs - Maximum achievement for self-fulfillment
Esteem needs - Respect, prestige, recognition, personal mastery
Social needs - Love, affection, relationships
Safety needs - Security, protection, and stability
Physiological needs – Basic human needs; food, water, housing
Douglas McGregor - Douglas McGregor introduced new theories, Theory X and Theory Y.
McGregor contended that traditional management practices were rooted in certain basic negative
assumptions about people (Theory X)
Are fundamentally lazy, work as little as possible
Avoid responsibility, lack integrity
Are not very bright, are indifferent to organizational needs
Prefer to be directed by others
Avoid making decisions, are not interested in achievement
Theory Y contains the following important points
Physical effort in work is as natural as play.
The threat of punishment is not the only means to achieve objectives.
Man can exercise self-direction and self-control.
Commitment is a function of the rewards.
Humans can accept and seek responsibility.
Imagination, ingenuity, and creativity are widely, not narrowly, distributed.
Only a fraction of the intellectual potential of workers is utilized.
McGregor listed the following forms of motivation that would be effective for various basic
human needs.
Physical needs (food, shelter, clothing, etc.), which translate into a job paying minimum wages - Provide an opportunity to increase wages through good work.
Safety needs, the need to maintain employment even at a subsistence level - Appeal to job security; quality products satisfy the customer's needs, making jobs secure.
Social needs, the desire to be accepted as a member of a group - Appeal to employees not to let members of their work group down.
Ego needs, the need for respect, both internal and external - Appeal to an employee's pride through awards and recognition.
Self-fulfillment, self-actualization through expression and creativity - Give the employees the training and encouragement to propose creative ideas and implement them.
Frederick W. Herzberg - Herzberg proposed that motivation can be divided into two factors,
which have been referred to by a variety of names, as
Dis-satisfiers and Satisfiers
Maintenance factors and Motivators
Hygiene factors and Motivators
Extrinsic factors and Intrinsic factors
The dis-satisfiers or hygiene factors do not provide strong motivation, but do cause
dissatisfaction if they are not present. On the other hand, satisfiers, motivators, or intrinsic
factors do provide strong motivation and satisfaction when they are present.
Team Stages
Most teams go through four development stages before they become productive - forming,
storming, norming, and performing. Bruce W. Tuckman first identified the four development
stages, which are
Forming - Expectations are unclear. When a team forms, its members typically start out by
exploring the boundaries of acceptable group behavior.
Storming - Consists of conflict and resistance to the group’s task and structure. Conflict often
occurs. However, if dealt with appropriately, these stumbling blocks can be turned into
performance later. This is the most difficult stage for any team to work through.
Norming - A sense of group cohesion develops. Team members use more energy on data
collection and analysis as they begin to test theories and identify root causes. The team
develops a routine.
Performing - The team begins to work effectively and cohesively.
Two more stages are often added, which are
Adjourning - At the end of most Six Sigma projects the team disbands. Adjourning is also a very common practice in non-Six Sigma companies with regard to project teams, task forces, and ad hoc teams.
Recognition and Reward - The ultimate reason that rewards and recognition are given is to
provide positive reinforcement for good performance or correct behavior, with the expectation
that this performance will be repeated in the future. The effect of the reward will depend on the
perception of the person receiving the reward.
Recognition and rewards for teams and team members can be grouped into the following
types, as
Material items of significant value or equivalent
Material incidental value or equivalent
Intangible Items
Probably one of the best rewards is “Thank you” when it is sincerely meant.
Team Communication
Communication is a two-way process that starts with the sender. The sender should be
conveying information necessary for mission accomplishment. The sender must be proactive in
making the receiver understand the message.
Team communication skills are critical for ensuring the success of the team effort, whether the
team is charged with creating a new product, making a process improvement, or planning the
summer picnic. Strong team communication skills can help build relationships, ensure the
sharing of new ideas and best practices, and benefit team members through coaching and
counseling.
Communication barriers are influencing factors which impede or break down the continuous communication loop. They block, distort, or alter the information. By identifying the barriers and applying countermeasures, team members can communicate effectively. Barriers include
Non-assertive behavior
Task-preoccupation
Anger or frustration
Personal bias
Team diversity
Lack of confidence
Inappropriate priorities
Organizational structure
Distractions
Tunnel vision
Interruptions
Rank differences.
Steps for effective communication within a team include
Acknowledge (“Roger”) communications.
Provide information in accordance with SOPs.
Provide information when asked.
Repeat, as necessary, to ensure communication is accurately received.
Use standard terminology when communicating information.
Request and provide clarification when needed.
Ensure statements are direct and unambiguous.
Inform the appropriate individuals when the mission or plans change.
Communicate all information needed by those individuals or teams external to the team.
Use nonverbal communication appropriately.
Use proper order when communicating information.
3.3. Team Dynamics
Team dynamics refers to the force that inspires or drives a group of people to work collectively
for a common cause or objective. A genuine team is a team with different personalities and a
common identity. To achieve a common identity, good team dynamics are of immense importance.
After assembling suitable members for the team, it should be ensured that they are well aware of
their roles and responsibilities in relation to the common goal of the team.
Characteristics of Ideal Team Dynamics
The major characteristics of good team dynamics are
Presence of a recognized leader before the team begins to function.
Pre-determined common objectives and a planned path or course to achieve these objectives.
Well determined roles and responsibilities of the team members in relation to the functioning
of the team.
Presence of pre-determined conflict resolution procedures.
Established ground rules to ensure smooth functioning of the team and trouble-free
organization of meetings.
Utilization of high quality equipment and tools in the execution of the project.
Mitigation of dysfunctional behaviors.
Swift and precise initiation of the team into the execution of the project.
Barriers to Team Dynamics
The major barriers to the development of good team dynamics are
Dominating members - The leader has to subdue the dominating members and make sure that
they cooperate in making a collective effort towards the common goal.
Hesitant or non participatory members - Members who are hesitant to participate can damage
team dynamics so they must be recognized and encouraged to express themselves freely.
Decision making based on raw data or opinions - This can lose the trust of team members.
So, the data or opinions used in the decision making process must be analyzed appropriately.
Opinions of some members not taken into account - It has to be ensured that a proper consensus is arrived at by addressing the views of all the team members.
Disputes among team members - Well defined procedures on how to work towards the
common objective can avoid disputes, but at times, disputes may still occur which requires
the team to coordinate and collectively solve them.
Wavering team focus - At times, the team may lose its focus and disregard the common
goals. Many techniques like PERT, Gantt chart, etc. can be used to help the team focus.
Constraints of time - An ill planned schedule can make a team face time constraints. Tools
like brainstorming, fishbone diagram, etc. can be used to overcome this obstacle.
Remedies for Barriers
Some of the important techniques to ensure good team dynamics are
Coaching - Coaching involves helping or coaching a team member on a particular aspect. A
leader has to coach his team members to actively participate and contribute positively in the
execution of the project. The leader has to subdue dominating members from imposing
themselves and encourage hesitant members to express themselves.
Mentoring - The team leader helps the members by solving the general problems that hamper their work and keeps them focused on the common goals of the team. Mentoring focuses on the overall contribution of a team member, whereas coaching focuses on helping a team member acquire a specific skill or complete a specific task.
Intervention - A leader has to intervene when there is a conflict in the team. If the conflict is
destructive to team dynamics, he has to find solutions to it. But, if it is constructive he should
withdraw and let the team grow.
3.4. Time Management
Time is one of the most important resources that a team possesses. Time management is the key
to solving many of the organization's problems. Team time management differs from individual
time management in many aspects as the former places importance on teamwork to manage time
collectively. Effective time management requires an able team and a competent leader, who can
communicate and coordinate efficiently.
Effects of poor team time management
Poor team time management can result in
Stress on resources - Poor time management can result in disproportionate distribution of time and tasks.
Poor quality - A resource may produce poor quality output due to unrealistic timelines.
Missed deadlines - Poor time management will result either in missing of realistic deadlines
or setting of unrealistic deadlines. In both cases, work remains incomplete, which, in turn,
will put excessive pressure on resources to perform. Sustained stress damages the team
morale.
Importance of Team Time Management
Team time management is an essential technique to keep the team focused on the work on hand
with an aim to achieve the organizational objectives. The advantages of team time management
are
Prudent time management improves productivity by ensuring that appropriate personnel are
deployed to perform tasks that match their skills. This will result in producing quality output
necessary for customer satisfaction and increased profitability.
Effective time management requires proper understanding of the organization’s goals and
objectives so that the team’s time and the individual members’ time may be invested wisely
to realize those goals and objectives.
Well-planned team time management helps reduce stress on team members concurrently
ensuring that the deadlines are met.
Tools and Techniques Used in Team Time Management
Some of the tools and techniques used for effective team time management are
Analysis of past performance - This technique helps the team to avoid past errors, plan
realistically, prioritize the tasks in the order of importance, minimize wastages, and
maximize potential.
Creation of an ‘agenda committee’ - This will help a team to utilize the time spent in
meetings very effectively. The agenda committee will ensure that the agenda for the meeting
is clearly defined and the team adheres to it and does not digress, and also time is evenly
spent on each agenda item.
Gantt charts - A team can make use of Gantt charts that represent goals and milestones in
relation to the time needed to realize them.
Checklists and reviews - A team can use checklists and reviews to evaluate its performance
from time to time to make sure that it stays on course.
Time-logs - A team can use time-logs to account for the time spent on different tasks.
3.5. Decision-making tools
Decision making tools for teams which are widely used are
Brainstorming
The brainstorming technique was popularized by Alex Faickney Osborn in his 1953 book Applied Imagination. It is used as a tool to generate ideas about a particular topic and to find creative solutions to a problem.
Brainstorming Procedure - The first step in conducting brainstorming is to review the rules of brainstorming. Chief among them: all ideas should be recorded, and there is no criticism, evaluation, or discussion of ideas during the session.
The second step is to state the problem to be discussed and to ensure that all the team members understand the theme of the brainstorming session. Give enough time (one or two minutes) for the team members to think about the problem, then ask them to think creatively and generate as many ideas as possible. Record the ideas generated by the members so that everyone can review them. Take proper care that no idea is criticized and that everyone is allowed to be creative.
Brainstorming Rules – Rules to be followed for brainstorming are
Ensure that all the team members participate in the brainstorming session because the more
the ideas that are produced, the greater will be the effect of the solution.
As the brainstorming session is a discussion among various people, no distinction should be made between them, and the ideas generated by others should not be condemned.
When building on people's ideas, treat each person's idea as potentially the best, because any individual's idea may prove superior to another's.
While generating ideas, value quantity over quality. As a facilitator, you can tally the generated ideas against the team's performance.
Nominal Group Technique (NGT)
The nominal group technique was introduced by Delbecq, Van de Ven, and Gustafson in 1971. It
is a kind of brainstorming that encourages every participant to express his/her views. This
technique is used to create a ranked list of ideas. In this technique, all the participants are
requested to write their ideas anonymously and the moderator collects the written ideas and each
is voted on by the group. It helps in decision-making and organizational planning where creative
solutions are sought. It is generally carried out on a Six Sigma project to get feedback from the
team members.
NGT Procedure - All the members of the team are asked to create ideas and write them down
without discussing with others. The inputs from all members are openly displayed and each
person is asked to give more explanation about his/her feedback. Each idea is then discussed to
get clarification and evaluation. This is usually a repetitive process. Each person is allowed to
vote individually on the priority of ideas and a group decision is made based on these ratings.
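As a small illustration, the tallying step of NGT can be sketched in a few lines of Python. This is a minimal sketch with hypothetical member names and ideas, and it assumes the common convention that rank 1 is the highest priority, so the lowest total score wins.

    # Tally NGT priority votes: each member privately ranks the ideas
    # (rank 1 = highest priority); the group result is the list ordered
    # by total rank score, lowest first.
    from collections import defaultdict

    votes = {
        "member 1": ["reduce setup time", "add inspection step", "retrain operators"],
        "member 2": ["add inspection step", "reduce setup time", "retrain operators"],
        "member 3": ["reduce setup time", "retrain operators", "add inspection step"],
    }

    scores = defaultdict(int)
    for ranking in votes.values():
        for rank, idea in enumerate(ranking, start=1):
            scores[idea] += rank          # lower total = higher group priority

    for idea, total in sorted(scores.items(), key=lambda kv: kv[1]):
        print(f"{total:2d}  {idea}")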
Multi-voting
Multivoting, which is also called NGT voting or nominal prioritization, is a simple technique
used by teams to choose the most significant or highest priority item from a list with limited
discussion and difficulty. Generally it follows the brainstorming technique.
Multivoting is used when the group has a lengthy list of possibilities and wants to narrow it down to a short list for later analysis and discussion. It is applied after brainstorming for the purpose of selecting ideas.
Multivoting Procedure – The procedure for conducting multivoting (sketched in code after the list) is
Conduct a brainstorming process to create a list of ideas and record the ideas that are created
during this process. After completing this, clarify the ideas and combine them so that
everyone can easily understand. The group should not discuss the ideas at this time.
Participants will vote for the ideas that are eligible for more discussion. Here the participants
are given freedom to vote for as many ideas as they desire. Tally the vote for each item. If
any item gets the majority of votes, it is placed for the next round.
In the next level of voting, the participants can cast their vote for the remaining items in the
list.
Participants continue voting until the list is reduced to a number of ideas the group can examine as part of the decision-making or problem-solving process. The group then discusses the pros and cons of the remaining ideas.
This discussion may be completed by the group as a whole.
Conclude with appropriate actions by choosing the best option or identifying the top priorities.
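Below is a minimal Python sketch of one multivoting round. It assumes, as one common convention rather than anything mandated above, that each participant may vote for as many items as desired and that items kept for the next round are those receiving votes from more than half the participants. The participants and ideas are hypothetical.

    # One multivoting round: tally votes per idea and keep the items
    # that received votes from a majority of participants.
    ideas = ["late deliveries", "data entry errors", "machine downtime",
             "missing documentation", "unclear specs"]

    round_votes = {
        "P1": {"late deliveries", "machine downtime", "unclear specs"},
        "P2": {"late deliveries", "data entry errors"},
        "P3": {"machine downtime", "late deliveries"},
        "P4": {"unclear specs", "late deliveries", "machine downtime"},
    }

    majority = len(round_votes) / 2
    tally = {idea: sum(idea in v for v in round_votes.values()) for idea in ideas}
    kept = [idea for idea in ideas if tally[idea] > majority]

    print("tally:", tally)
    print("kept for next round:", kept)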
3.6. Management and Planning Tools
The widely used management and planning tools are
Flowchart
It is used to develop a process map. A process map is a graphical representation of a process
which displays the sequence of tasks using flowcharting symbols.
It shows the inputs, actions and outputs of a given system. Inputs are the factors of production
like land, materials, labor, equipment, and management. Actions are the way in which the inputs
are processed and value is added to the product like procedures, handling, storage, transportation,
and processing. Outputs are the finished goods or delivered services given to the customer, but output also includes unplanned and undesirable entities like scrap, rework, and pollution.
Flowchart symbols are standardized by ANSI and common symbols used are
Arrow - Process flow
Rounded rectangle or oval - Terminator, the start/stop of the process
Diamond - Decision or branching
Parallelogram - Data input or output
Rectangle - Process or action step
The flowchart gives a high-level view of a process and supports its capability analysis. The flowchart can be made either more complex or less complex.
Check Sheets
They consist of lists of items and are indicator of how often each item on the list occurs. It is also
called as confirmation check sheets. They are used for data collection process easier by prewritten descriptions of events likely to occur like ‘‘Have all inspections been performed?’’
‘‘How often does a particular problem occur?’’ ‘‘Are problems more common with part X than
with part Y?’’
It is a simple tool for process improvement and problem solving. It can also highlight items of
importance during data collection. They are an effective tool for quality improvement when used
with histograms and Pareto analysis. It is not a check list which is used to ensure that all
important steps or actions have been taken but check sheet is a tally sheet to collect data on
frequency of occurrence of defects or errors. It is of two types
Location or concentration diagram - Marks are made directly on a picture of the object. For example, before a car is submitted to a service center, its existing defects are listed by marking and writing on a diagram of the car. An online application form that highlights the erroneous section before submission is another example of this type.
Graphical or distribution check sheet - It is commonly used for collecting frequencies as tally marks so that the marks themselves visualize the distribution of the data.
Pareto charts
It is a type of bar chart in which the horizontal axis represents categories which are usually
defects, errors or sources (causes) of defects/errors. The height of the bars can represent a count
or percent of errors/defects or their impact in terms of delays, rework, cost, etc.
By arranging the bars from largest to smallest, a Pareto chart shows which categories will yield the biggest gains if addressed and which are only minor contributors to the problem. Pareto analysis is the process of ranking opportunities to determine which of many potential opportunities should be pursued first. It is used at various stages in a quality improvement program to determine which step to take next.
Pareto Chart Development – It involves the following steps; a plotting sketch follows the list
Collect data on different types or categories of problems.
Tabulate the scores.
Determine the total number of problems observed and/or the total impact. Also determine the
counts or impact for each category.
For small or infrequent problems, add them together into an "other" category
Sort the problems by frequency or by level of impact.
Draw a vertical axis and divide it into increments equal to the total number observed. Do not scale the vertical axis to the tallest bar; that can overemphasize the importance of the tall bars and lead to false conclusions
Draw bars for each category, starting with the largest and working down.
The "other" category always goes last even if it is not the shortest bar
Cause and Effect Diagram
It helps teams uncover potential root causes by providing structure to the cause identification effort. It is also called a fishbone or Ishikawa diagram. It helps ensure that new ideas are generated during brainstorming and that no major possible cause is overlooked.
It should be used for cause identification after clearly defining the problem. It is also useful as a cause-prevention tool, by brainstorming ways to maintain processes or prevent future problems.
Developing Cause and Effect Diagram – It involves the following steps
Name the problem or effect of interest. Be as specific as possible.
Write the problem at the head of a fishbone "skeleton"
Decide the major categories for causes and create the basic diagram on a flip chart or
whiteboard.
Typical categories include manpower, machines, materials, methods, measurement, and environment
Brainstorm for more detailed causes and create the diagram either by working through each
category or open brainstorming for any new input.
Write suggestions onto self-stick notes and arrange in the fishbone format, placing each idea
under the appropriate categories.
Review the diagram for completeness.
Eliminate causes that do not apply
Brainstorm for more ideas in categories that contain fewer items
Discuss the final diagram. Identify causes which are most critical for follow-up investigation.
Tree Diagram
Tree diagrams are similar to cause and effect diagrams, but a tree diagram breaks a problem down progressively into detail by partitioning the bigger problem into smaller ones. The partitioning continues to a level at which the smaller problems seem easy to solve. It is made by starting from the right and moving towards the left. It is used in quality improvement programs. Sometimes goals are placed on the left and resources on the right, and the two are then linked for achievement of the goal.
It starts with a single entity which branches into two or more, each of which branches into two or more, and so on, so it looks like a tree with a trunk and multiple branches. It is used for known issues whose specific details must be addressed to achieve an objective. It also assists in listing alternative solutions, detailing processes, and probing the root cause of a problem. It is also known as a systematic diagram, tree analysis, analytical tree, or hierarchy diagram.
Affinity Diagram
The word affinity means a ‘‘natural attraction’’ or kinship. The affinity diagram organizes ideas
into meaningful categories by recognizing their underlying similarity. It reduces data by
organizing large inputs into a smaller number of major dimensions, constructs or categories. It
organizes facts, opinions and issues into natural groups to help diagnose a complex situation or
find themes.
It helps to organize a large number of ideas and identify central themes in them. It is useful when information about a problem is not well organized and a solution beyond traditional thinking is needed.
needed. It organizes ideas from a brainstorming session in any phase of DMAIC and can find
themes and messages in customer statements gleaned from interviews, surveys, or focus groups.
Developing Affinity Diagram
Gather inputs from a brainstorming session or customer feedback.
Write each input on cards and place them randomly.
Allow people to silently start grouping the cards.
When the clustering is done, create a "header" label (on a note or card) for each group.
Write the theme on a larger self-stick note or card (the "Header") and place it at top of
cluster.
Continue until all clusters are labeled
Complete the diagram and discuss the results.
Matrix Diagram
It is also known as a matrix or matrix chart, as it uses a matrix to display information. The matrix diagram displays relationships among two, three, or four groups of information, such as the strength of the relationships among the groups and the roles played by the various groups. It helps in analyzing the correlations between groups of information and enables their systematic analysis. Six different matrix-shaped diagrams are possible: L, T, Y, X, C, and roof-shaped, depending on how many groups must be compared.
An L-shaped or roof-shaped matrix relates two groups of entities; a T-shaped, Y-shaped, or C-shaped matrix relates three groups; and an X-shaped matrix relates four groups. The matrix types are listed below
L-shaped, T-shaped, Y-shaped, and X-shaped matrix layouts
Interrelationship Digraph
Interrelationship digraphs help in organizing disparate information, usually ideas generated during brainstorming sessions. They define the ways in which ideas influence one another instead of arranging ideas into groups as affinity diagrams do.
Similar to affinity diagrams, interrelationship digraphs are developed by writing down the ideas or information on notes (such as Post-it notes), which are then placed on a large sheet of paper, and arrows are drawn between related ideas. An idea that has arrows leaving it but none entering is a root idea. By evaluating the relationships between ideas, the functioning of the system is made clear, and usually the root idea is the key to improving it.
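The root-idea rule lends itself to a short sketch: given the influence arrows, a root idea is one with outgoing arrows but no incoming arrows. The ideas and arrows below are hypothetical.

    # Find root ideas in an interrelationship digraph.
    arrows = [  # (from idea, to idea): "from" influences "to"
        ("unclear requirements", "rework"),
        ("unclear requirements", "missed deadlines"),
        ("rework", "missed deadlines"),
        ("staff turnover", "unclear requirements"),
    ]

    sources = {a for a, _ in arrows}   # ideas with arrows leaving them
    targets = {b for _, b in arrows}   # ideas with arrows entering them

    roots = sources - targets          # outgoing arrows, none incoming
    print("root ideas:", roots)        # {'staff turnover'}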
Benchmarking
Benchmarks are measures (of quality, time, or cost) that have already been achieved by others. They indicate what level of performance is possible and thus help in setting goals for one's own operations. Benchmarking is also helpful for bringing new ideas into the process, even though they are borrowed from others.
Usually the benchmarking data is sourced from surveys or interviews with industry experts, trade
or professional organizations, published articles, company tours, prior experience of current staff
or conversations.
Types of Benchmarks
Internal/Company - It establishes a baseline for external benchmarking, identifies differences within the company, and provides rapid and easy-to-adapt improvements, though opportunities for improvement are limited to the company's practices.
Direct Competition - It prioritizes areas of improvement relative to the competition and is of interest to most companies, but it often involves a limited pool of participants; thus, opportunities for improvement are limited to "known" competitive practices, and it may raise potential antitrust issues.
Industry - It provides industry trend information and is a conventional basis for quantitative and process-based comparison, though opportunities for improvement may be limited by industry paradigms.
Best-in-Class - It examines multiple industries and provides the best opportunity for identifying radically innovative practices and processes by building a brand new perspective, but it is usually difficult to identify best-in-class companies and get them to participate.
Prioritization Matrix
To prioritize is to arrange items or deal with them in order of importance. A prioritization matrix is a combination of a tree diagram and a matrix chart, used to help decision makers determine the order of importance of activities. It narrows down options by systematically comparing choices through the selection, weighting, and application of criteria.
It quickly surfaces basic disagreements, forces the team to narrow all solutions down to the best solutions, limits "hidden agendas" by bringing decision criteria to the forefront of a choice, and increases follow-through by asking for consensus after each step of the process.
Developing a prioritization matrix - It involves five simple steps, as
Determine criteria and rating scale - Determine the factors used to assess the importance of each entity. Choose factors that clearly differentiate the important from the unimportant; these are the criteria, such as the value an entity brings to the customer. Then, for each criterion, establish a rating scale to use in assessing how well a particular entity satisfies that criterion.
Establish criteria weight - Place criteria in descending order of importance and assign a
weight.
Create the matrix - List criteria down the left column and the weight and names of potential
entities across the top in an L-shaped matrix to judge the relative importance of each
criterion.
Work in teams to score entities - Review each entity and rate the entity on each of the
criteria. Next, multiply the rating for each criterion by its weight and record the weighted
value. After evaluating the entity against all of the criteria, add up the weighted values to
determine the entity’s total score.
Discuss results and prioritize list - After entities have been scored, undertake a discussion to
compare notes on results and develop a master list of prioritized entities that everyone agrees
upon.
An example rating scale for a cost criterion in a prioritization matrix: 10 is much less expensive, 5 is less expensive, 1 is the same cost, 0.2 is more expensive, and 0.1 is much more expensive. A scoring sketch follows.
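Below is a minimal sketch of the scoring arithmetic from steps four and five: each option's rating on a criterion is multiplied by the criterion weight, and the weighted values are summed into a total score. The criteria, weights, and ratings are hypothetical.

    # Weighted scoring for a prioritization matrix.
    criteria_weights = {"customer value": 5, "cost": 3, "ease of implementation": 2}

    ratings = {  # option -> rating on the agreed scale for each criterion
        "option A": {"customer value": 9, "cost": 1, "ease of implementation": 5},
        "option B": {"customer value": 5, "cost": 10, "ease of implementation": 3},
    }

    totals = {
        option: sum(criteria_weights[c] * r for c, r in scores.items())
        for option, scores in ratings.items()
    }

    for option, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{option}: {total}")    # option B: 61, option A: 58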
Focus Group
They are facilitated discussion sessions with customers that help an organization understand the Voice of the Customer (VOC). Usually they are 1-3 hour sessions with a maximum of 20 customers. A focus group facilitates better understanding of the voice of the customer and organizes the gathered data. It also enables evaluation of the feedback and channels it for further action.
Usually two types of focus groups are applied. The first is the explorative focus group, which explores the collective needs of customers and develops and evaluates concepts for new product development as sensed or demanded by the voice of the customer. The second, the experiential focus group, observes the usage of products in the market and studies what customers feel and experience about the products, learning their reasons and motivations for using them.
Online focus groups have gained importance in recent times due to widespread internet access; the discussion takes place on the internet instead of at an interview site. Online focus groups are more suited to younger age groups.
Gantt Chart
It is a graphical chart showing the relationships among the project tasks, along with time constraints. The horizontal axis of a Gantt chart shows the units of time (days, weeks, months, etc.). The vertical axis shows the activities to be completed. Bars show the estimated start time and duration of the various activities. A Gantt chart shows what has to be done (the activities) and when (the schedule); a plotting sketch is given below.
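A minimal matplotlib sketch of a Gantt chart as just described: time on the horizontal axis, activities on the vertical axis, and bars showing estimated start and duration. The activities and week numbers are hypothetical.

    import matplotlib.pyplot as plt

    tasks = [           # (activity, start week, duration in weeks)
        ("Define",   0, 2),
        ("Measure",  2, 3),
        ("Analyze",  5, 3),
        ("Improve",  8, 4),
        ("Control", 12, 2),
    ]

    fig, ax = plt.subplots()
    for i, (name, start, duration) in enumerate(tasks):
        ax.barh(i, duration, left=start)   # one bar per activity

    ax.set_yticks(range(len(tasks)))
    ax.set_yticklabels([t[0] for t in tasks])
    ax.invert_yaxis()                      # first activity at the top
    ax.set_xlabel("week")
    plt.show()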
Milestone Charts - Gantt charts are often modified in a variety of ways to provide additional
information. One common variation is milestone charts. The milestone symbol represents an
event rather than an activity; it does not consume time or resources.
CPM/PERT Chart
CPM or "Critical Path Method" - It is a tool to analyze a project and determine its duration, based on identification of the "critical path" through an activity network. Knowledge of the critical path permits project managers to determine where to focus effort if the project duration must be changed. It is a project modeling technique developed in the 1950s and is used with all forms of projects. It displays activities as nodes or circles with known activity times.
CPM is a diagram showing every step of the project as letters, with lines between letters representing the sequence in which the project steps take place. It requires a list of the activities needed to complete the project and the time (duration) each activity will take, along with the sequence and dependencies between activities. CPM lays out the longest path of planned activities to the end of the project, as well as the earliest and latest that each activity can start and finish without delaying other steps in the project. The project manager can then determine which activities in the project need to be completed before others and how long those activities can take before they delay other parts of the project. They also learn which set of activities is likely to take the longest; this is the critical path, which is also the shortest possible time period in which the project can be completed. A small worked sketch follows.
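Below is a minimal sketch of the forward and backward passes on a tiny hypothetical network of four activities. The scheduling logic is the standard CPM calculation; the network itself is invented for illustration. Activities with zero slack (earliest start equals latest start) form the critical path.

    # CPM forward/backward pass on a tiny activity network.
    durations = {"A": 3, "B": 2, "C": 4, "D": 2}
    predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
    order = ["A", "B", "C", "D"]               # topological order

    es, ef = {}, {}                            # earliest start/finish
    for act in order:
        es[act] = max((ef[p] for p in predecessors[act]), default=0)
        ef[act] = es[act] + durations[act]
    project_end = max(ef.values())

    successors = {a: [b for b in order if a in predecessors[b]] for a in order}
    lf, ls = {}, {}                            # latest finish/start
    for act in reversed(order):
        lf[act] = min((ls[s] for s in successors[act]), default=project_end)
        ls[act] = lf[act] - durations[act]

    critical = [a for a in order if ls[a] == es[a]]   # zero slack
    print("duration:", project_end, "critical path:", critical)  # 9, A-C-D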
PERT Chart - A PERT chart (program evaluation review technique) is a form of diagram for CPM that shows activities on an arrow diagram. PERT charts are simpler than CPM charts because they simply show the timing of each step of the project and the sequence of the activities. In PERT, estimates are uncertain: ranges of duration, and the probability that an activity's duration will fall into a given range, are used, whereas CPM is deterministic.
A PERT chart is a graphic representation of a project’s schedule, showing the sequence of tasks,
which tasks can be performed simultaneously, and the critical path of tasks that must be
completed on time in order for the project to meet its completion deadline. The chart can be
constructed with a variety of attributes, such as earliest and latest start dates for each task,
earliest and latest finish dates for each task, and slack time between tasks. A PERT chart can
document an entire project or a key phase of a project. The chart allows a team to avoid unrealistic timetables and schedule expectations, helps identify and shorten tasks that are bottlenecks, and focuses attention on the most critical tasks. It is most useful for planning and tracking entire projects or for scheduling and tracking the implementation phase of a planning or improvement effort.
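PERT conventionally summarizes an uncertain activity duration with a three-point estimate (optimistic O, most likely M, pessimistic P). The sketch below shows that standard beta-distribution approximation; it is the usual convention rather than anything specific to this section, and the O/M/P values are hypothetical.

    # Standard PERT three-point estimate for one activity.
    def pert_estimate(optimistic, most_likely, pessimistic):
        expected = (optimistic + 4 * most_likely + pessimistic) / 6
        std_dev = (pessimistic - optimistic) / 6
        return expected, std_dev

    e, s = pert_estimate(4, 6, 11)   # durations in days (hypothetical)
    print(f"expected: {e:.2f} days, std dev: {s:.2f} days")  # 6.50, 1.17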
Developing PERT Chart
Identify all tasks or project components - Ensure the team has knowledge of the project so
that during the brainstorming session all component tasks needed to complete the project are
captured. Document the tasks on small note cards.
Identify the first task that must be completed - Place the appropriate card at the extreme left
of the working surface.
Identify any other tasks that can be started simultaneously with task #1 - Align these tasks
either above or below task #1 on the working surface.
Identify the next task that must be completed - Select a task that must wait to begin until task
#1 (or a task that starts simultaneously with task #1) is completed. Place the appropriate card
to the right of the card showing the preceding task.
Identify any other tasks that can be started simultaneously with task #2 - Align these tasks
either above or below task #2 on the working surface.
Continue this process until all component tasks are sequenced.
Identify task durations - Reach a consensus on the most likely amount of time each task will
require for completion. Duration time is usually considered to be elapsed time for the task,
rather than actual number of hours/days spent doing the work. Document this duration time
on the appropriate task cards.
Construct the PERT chart - Number each task, draw connecting arrows, and add task
characteristics such as duration, anticipated start date, and anticipated end date.
Determine critical path - The project’s critical path includes those tasks that must start or
finish on time to avoid delays to the total project. Critical paths are typically displayed in red.
Activity Network Diagram
It charts the flow of activity between separate tasks and graphically displays interdependent
relationships between groups, steps, and tasks as they all impact a project. Bubbles, boxes, and
arrows are used to depict these activities and the links between them. It shows the sequential
relationships of activities using arrows and nodes to identify a project’s critical path. It is similar
to CPM/PERT and is also called an arrow diagram.
Developing Activity Network Diagram - Development starts with compiling a list of tasks essential for completion of the project. These tasks are then arranged in chronological order, taking inter-task dependencies into account. All tasks are placed along a progressing line, with tasks that can be done simultaneously placed on parallel paths, whereas dependent jobs are placed in a chronological line. Apply a realistic time estimate to each task, then identify the critical path.
3.7. Team Performance Evaluation and Reward
Performance evaluation for teams should be an integral part of the project, with the measurement system and criteria agreed in advance. Evaluation criteria are usually time oriented (team progress vs. schedule), objective oriented (goals achieved vs. defined), and financial (expenditure vs. budget). Self-evaluation is also a useful technique. Reward and recognition ceremonies motivate team members and usually include small gifts, tokens, certificates, mention on the company board or in the newsletter, and financial rewards.
4. DEFINE PHASE
The define phase focuses on defining the customer requirements usually by Voice of the
customer and the project charter.
4.1. Voice Of the Customer
It is the term used to describe the stated and unstated needs or requirements of the customer. It helps in listing the relative importance of features and benefits associated with the product or service, thus showing the expectations and promises that are fulfilled and unfulfilled by the product or service. Voice of the Customer (VOC) describes customers' feedback about their experiences with and expectations for the products or services.
Gathering VOC information can be done by
Direct interviews of customers like site intercepts, personal interviews, focus groups,
customer feedback forms, or structured online surveys.
Indirect interviews with representatives like sales people or customer service representatives,
who interface with the customer and report on their needs.
Conducting VOC helps to
Customize products, services, add-ons, and features to meet the needs and wants of customers
Listen to the customer: no one becomes an industry leader without doing so, and customer-perceived quality is the leading driver of business success
Maximize the company's profit, since higher market share companies have higher profits
The typical outputs of the VOC process are
Identification of customer markets and customer segments
Identification of relevant reactive and proactive sources of data
Verbal or numerical data that identify customer needs
Defined critical-to-quality requirements (CTQs)
Specifications for each CTQ
Customer Identification
It is a crucial step of the VOC process: once the end user is known, products and services can easily be tailored accordingly, which helps the company become a customer-focused company that acquires new customers easily whilst retaining existing ones.
Customers may be classified as internal or external. An internal customer is anyone in the company who is affected by the product or service as it is generated; internal customers are the employees of the company. The internal customer is usually forgotten, as organizations are focused on the external customer. Staff members should think of themselves as service providers.
External customers are not employees of the company but are impacted by it; they usually consist of end users and intermediate customers. They provide the major part of company revenues.
Customers are also categorized on basis of demography (like age groups, geographical location)
and industry types as well. Typical segmentation includes
Geography - region, county size, city
Demographic - age, sex, family size, income, occupation, race
Psychographic - compulsive, extrovert, introvert, conservative, leader, etc.
Buyer behavior - heavy user, aware of need, status, loyalty
Volume - grouping based on usage of the product (heavy, medium, light)
Marketing-factor - brand loyal customers
Product space - customer perception comparison, brand A versus brand B
External customers can be identified as business customers or consumer customers. Business customers can include for-profit and not-for-profit companies. The consumer segment consists of a large number of customers with small purchases, as against business customers.
Customer Feedback
Customer data collection for the voice of the customer is performed at various levels, usually the business, operations, and process levels. Many companies use complaints as an important way to listen to their external customers. Surveys establish a communication process of overall improvement for the internal customer.
Customer feedback comes from a growing number of channels, including in-person, phone,
comment cards, surveys, email, Web, social networking, mobile devices, and more. In addition, a number of individuals and departments within the company are collecting customer feedback, and in
a variety of formats. For example, marketing may be conducting Web-based surveys, product
development may be conducting focus groups, the contact center may be collecting customer
feedback from the support line.
The challenge this multiplicity creates is that the team is not always aware of what feedback is being captured, who is capturing it, where it is being stored, and who is responsible for following up on it. This also makes it difficult to use the information to improve customer relationships.
Customer Requirements
Various ways to successfully manage customer feedback are
Have well-defined goals and objectives: Before starting, know what business objectives are at stake, why you are collecting the data, and how the company will use it to make decisions. Also, consider the reports needed and who needs access to that information.
Get executive buy-in and internal support: Work with executive team to communicate and
share customer feedback and VOC program goals and objectives with all employees. Keep
VOC programs top-of-mind with executives and employees by including metrics in
executive dashboards and sharing positive customer feedback during company meetings.
Collect and manage customer feedback in a centralized system: Having multiple feedback
systems in separate databases is cumbersome and leads to duplication of effort. Companies
now have access to technology-driven, real-time Voice of the Customer (VOC) feedback
programs. These solutions allow businesses to continually collect customer and employee
feedback through multiple channels into a central database for analysis and immediate action.
Become a customer advocate throughout the feedback process: Be in a position to rapidly
respond to customer feedback. Keep customers informed about the ongoing status of their
issues and requests. Let customers know when company uses one of their suggestions. Help
organization resolve chronic customer complaints and concerns. Track, measure and monitor
customer feedback response times and continually work to improve them.
Communicate and share customer feedback with others: Quickly distribute real-time
customer feedback and share reports and survey data findings with others in organization —
from the c-suite to managers and employees. Openly share actionable insights with
employees and conduct post-mortem meetings to discuss what did and did not work as well
as what is needed to improve VOC program in the future.
Collect real-time, ongoing feedback: To build strong, lasting and engaging relationships with
customers, gather and respond to feedback in real-time. To accomplish this, make it easy for
customers to submit feedback at every interaction point and regularly monitor customer
needs and concerns.
Integrate customer feedback into the business: Be sure to work with other departments to
ensure that their customer feedback is incorporated into the company's strategic goals. For
example, sort through open-ended comments to see whether a customer has complimented an
individual employee. Then, make sure that the employee is recognized for providing positive
customer service.
4.2. Project Charter
A project charter is a document that gives an overview of the project by stating the project’s
goals and objectives. It states the scope, objectives, boundaries, resources, participants, and
anticipated outcomes of the project. The project charter
specifies the motives to carry out the project
explains the project objectives
provides a clear definition of accountability, team roles, and responsibilities
includes expected costs and financial benefits
determines the deadlines
identifies the main stakeholders
defines the standards on which to judge the project’s success
A project charter requires the following
Business case - It explains why the project is necessary for the organization.
Problem statement - It is the explanation of the conditions that adversely impact the business and that have to be addressed through Six Sigma projects.
Goal statement - It explains the expected outcomes from the project. This should be
established in terms of quantitative metrics.
Project scope - It describes the boundaries of the project.
Impacted stakeholders - It describes the influence on the internal and external stakeholders of the company.
Summary milestone schedule - It is the list of the major milestones, deliverables, and
scheduled end dates for delivery. It identifies whether the timelines for all the stages of
DMAIC have been set and the team members are in agreement with the milestones.
Problem Statement
The problem statement must include an accurate definition of the conditions that cause undesirable effects on the project and the organization. Problem definition is carried out in the Define phase of the DMAIC (Define, Measure, Analyze, Improve, and Control) cycle. The Define phase is crucial for the success of the subsequent phases.
Project Scope
Project scope defines the boundaries of the project. Project scope definition must be done in such
a manner that the project scope is neither too broad nor too narrow. Project scope that is too
broad can result in the teams pursuing issues that are irrelevant to the project. And a project
scope that is too narrow can restrict teams from analyzing certain relevant issues.
Tools for project scope definition - The important tools that can be used to define the project
scope are
Pareto charts (to prioritize processes)
Affinity diagrams (to depict linkages between a particular project and other projects or
processes)
Cause-and-effects diagrams (to widen the thinking and arrive at the root causes of specific
problems)
Process maps (to provide a pictorial depiction of project boundaries and relationships)
Goals and objectives
The goals and objectives for the project are developed on the basis of the project scope and
problem statement. The statements on the goals and objectives in the project charter should
adhere to the SMART principles. So, the goal statements must be
Specific (precise goals and objectives must be stated)
Measurable (the goals must be quantifiable in terms of project progress)
Achievable (the goals must be realistic)
Relevant (the goals must be specifically related to the broad goals of the organization)
Timely (the goals must be achievable within the time span allotted to the team to carry out
the project).
Project Performance Measures
Measuring the progress of the project is an essential part of project definition. Appropriate
financial and non-financial parameters must be established to assess the impact of the project on
the organization. Inappropriate project performance measures can misdirect the efforts of the
team, for instance, if cost minimization is taken as a metric to measure the performance of the
processes, the focus of the team may shift from defect and cycle time reduction to cost reduction,
which may result in poor quality processes.
4.3. Project Tracking
Assessing a project at various points during its implementation is necessary to ensure its success.
The criteria on which the project is to be assessed must be determined before the commencement
of project execution. General criteria to assess the projects are
Time-based criteria - Assesses the project on the basis of ‘schedule’. It is used to assess
whether the project is on schedule or delayed. It provides information for the implementation
of appropriate measures to speed project implementation, if required.
Objective-oriented criteria - Evaluates the project on the basis of concrete goals. It is used to
analyze whether the concrete goals of the projects are accomplished or not.
Monetary criteria - Evaluates the financial aspects of the project. Monetary criteria like
income, cost, etc. are useful to track the progress of the project.
Objectives of project tracking
The major objectives of monitoring a project are
To control quality and ensure that required quality levels are achieved.
To ensure optimum performance levels.
To monitor, control, and minimize risks.
To ensure that the project does not slide beyond its established scope.
To ensure that the project remains on schedule.
Work Breakdown Structure (WBS)
WBS is the most widely used tool for project tracking. Creating a WBS involves breaking down higher-level components into lower levels (small and manageable deliverables) acceptable to stakeholders for planning, delivery, and control. The WBS is obtained through the process of 'decomposition'. Decomposition is the process of breaking down project requirements into small and manageable deliverables that are acceptable to stakeholders for planning, delivery
Six Sigma - Black Belt
and control. Work package is a deliverable or project work component at the lowest level of each
branch of the work breakdown structure. Decomposition hierarchically breaks a deliverable down into sub-deliverables or work packages.
Project requirements should be broken down into smaller and more manageable deliverables (in a hierarchical structure) that are acceptable to stakeholders. The Work Breakdown Structure helps
project management by providing an outline of all the subdivided elements. This will enable the
organization to
Track time, cost, and performance.
Logically link objectives to company resources.
Establish schedules and procedures for status-reporting.
Initiate network construction and control planning.
Characteristics of a WBS include the following
A deliverable-oriented, hierarchically decomposed set of work components to be executed by the team
Developed by project team. Since whole team is involved, this provides team “buy-in” or
“shared ownership” and provides for better communication between project team and
stakeholders.
Contains the project scope. Work not in the WBS is outside the project scope.
First the higher-level components are decomposed, and then the next-level components are further decomposed
Decomposition of the scope of work is extended until it is acceptable to stakeholders for planning, delivery, and control
Verifying the correctness of the decomposition requires determining that the lower-level
WBS components are those that are necessary and sufficient for completion of the
corresponding higher level deliverables.
WBS components are marked with a unique identifier called a code of account identifier.
The WBS is illustrated with an example: constructing a three-floor building. The first level of decomposition is each floor of the building. The second level is, for each floor, the electrical work, plumbing, etc. At the third level, the electrical work is broken down into laying cables, switches, and electrical points. A sketch of this structure follows.
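Below is a minimal sketch of the building example as a nested structure, with each component carrying a code of account identifier. The decimal numbering scheme is a common convention assumed here for illustration.

    # The three-floor building WBS as a nested dictionary.
    wbs = {
        "1": {"name": "Floor 1", "children": {
            "1.1": {"name": "Electrical", "children": {
                "1.1.1": {"name": "Laying cables", "children": {}},
                "1.1.2": {"name": "Switches", "children": {}},
                "1.1.3": {"name": "Electrical points", "children": {}},
            }},
            "1.2": {"name": "Plumbing", "children": {}},
        }},
        "2": {"name": "Floor 2", "children": {}},
        "3": {"name": "Floor 3", "children": {}},
    }

    def print_wbs(node, indent=0):
        """Walk the WBS, printing each component with its identifier."""
        for code, item in node.items():
            print(" " * indent + f"{code} {item['name']}")
            print_wbs(item["children"], indent + 2)

    print_wbs(wbs)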
5. MEASURE PHASE
A process is a set of interrelated resources and activities which transform inputs into outputs with
the objective of adding value. The activities relating to an important process should be documented and controlled.
SIPOC is an effective tool that summarizes the inputs and outputs of one or more processes in
table form. It stands for Suppliers-Inputs-Process-Outputs-Customers. SIPOC addresses questions regarding the inputs, outputs, suppliers, and customers, such as what output is produced by the process, who provides inputs to the process, what the inputs are, what resources the process uses, and which steps add value. These questions apply to all processes, and SIPOC addresses them by putting in place a standard format.
SIPOC development is initiated with persons having knowledge of the process and then
conducting a brainstorming session to describe the problems and garner consensus for resolution.
Development of SIPOC involves identifying the process steps, then the outputs of the process, followed by the customers receiving those outputs, and then the inputs and the suppliers of the required inputs; a small sketch follows the list below. SIPOC has three typical uses
To give people who are unfamiliar with a process a high-level overview
To reacquaint people whose familiarity with a process has faded or become out-of-date due
to process changes
To help people in defining a new process
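A minimal sketch of a SIPOC table captured as a simple data structure and printed in the conventional Suppliers-Inputs-Process-Outputs-Customers order. The coffee-shop content is hypothetical.

    # A SIPOC table as a dictionary of column lists.
    sipoc = {
        "Suppliers": ["bean wholesaler", "water utility", "customer"],
        "Inputs":    ["coffee beans", "water", "order details"],
        "Process":   ["take order", "brew coffee", "serve customer"],
        "Outputs":   ["cup of coffee", "receipt"],
        "Customers": ["walk-in customer"],
    }

    for column in ["Suppliers", "Inputs", "Process", "Outputs", "Customers"]:
        print(f"{column:10s}: {', '.join(sipoc[column])}")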
5.1. Process Characteristics
When the significant process variables that affect a key process are being controlled, the process output is predictable, and costly (and imperfect) inspection may be eliminated. The characteristics of a business process include
Flow - the order of operations or activities within a process.
Resource Consumption - the materials, wages, energy, and information consumed in producing one unit of output.
Cycle Time - the total time taken to transform one unit of input into one unit of output. This is the total cycle time and not just the process time.
Effectiveness - how well the process meets its targets for cost, time, and quality.
Process Value Creation - every process either creates or destroys value. Two improvement processes may cost the same, yet one may increase revenue more, reduce expenses more, or increase customer satisfaction more, and thus create more value. Just looking at cost does not provide the total picture of value.
Input and Output Variables
Process output variables, sometimes referred to as key characteristics, are traits or features of a
part, piece of material, assembly, subsystem, or system whose variation has a significant
influence on fit, performance, reliability, manufacturability, or assembly. In short, they are
characteristics that have a big impact on efficiency and/or customer satisfaction. Variation in key
process output variables leads to lower levels of quality and reliability and, ultimately, higher
costs. Examples of key process output variables are
Slip torque of an actuator
Gap between car body and rear quarter glass panel
Cabin wind noise
Fill volume for a bottled beverage
Process input variables are process inputs that have a significant impact on the variation found in
a key process output variable. That is, if the key process input variables were controlled (e.g.
held constant), the process would produce predictable and consistent outputs. For example, if the
flatness of a clutch disk has a significant impact on the slip torque of an actuator, then clutch disk
flatness would be a key process input variable.
Process Flow Metrics
The team should select the metrics that best help visualize a process and address the specific
issues of interest. A production test flow that detects problems (or failures) in the early
operations is more effective than a similar test flow that detects the same problems in the later
stages.
Several metrics are used for manufacturing flow, including first pass yield, throughput yield and cycle time.
First Pass Yield - It is also known as throughput yield (TPY) and is defined as the number of units coming out of a process divided by the number of units going into that process over a specified period of time. Only good units with no rework are counted as coming out of an individual process, so it measures the level of rework. FPY values for all operations can be averaged together across the entire production flow to indicate how effectively the overall flow is performing.
Throughput Yield - It is the number of acceptable pieces at the end of a process divided by the number of starting pieces, with scrap and reworked units excluded from the acceptable count (so they do affect the calculation). It is used to measure a single process only.
Rolled Throughput Yield (RTY) - A yield measures the probability of a unit passing a step defect-free, and the rolled throughput yield (RTY) measures the probability of a unit passing a set of processes defect-free. It is the percentage of units that pass through several sub-processes of an entire process without a defect. The number of units without a defect is equal to the number of units that enter a process minus the number of defective units. For illustration, if the number of units given as input to a sub-process is P and the number of defective units is D, then the first-pass yield for that sub-process is FPY = (P – D)/P. After obtaining the FPY for each sub-process, multiply them all together to obtain the RTY. For example, if the yields of 4 sub-processes are 0.994, 0.987, 0.951 and 0.990, then RTY = (0.994)(0.987)(0.951)(0.990) = 0.924, or 92.4% (a computational sketch follows this list).
Cycle Time - It is the measurement of elapsed time. Cycle time can be measured at the
individual operational level or across the entire production process. For example, cycle time
for the production flow might measure the time from system configuration (the assembly of
components to create the system) to customer acceptance. Two types of cycle time are
typically generated. Static cycle time is the actual measurement of a single unit from start to
finish. Dynamic cycle time is the overall average cycle time. While static cycle time can
fluctuate depending on failures, parts availability, resource availability or even demand,
dynamic cycle time is thought to reflect more accurately the steady state of production.
Takt time is a measure of customer demand expressed in units of time and is calculated as
Takt time = Available time per shift / Demand per shift or Cycle time/Number of People
Throughput is the amount of work that passes through the system in a given time. In the
Theory of Constraints the throughput is constrained by the bottleneck process.
In a production system the Work in Queue is the work that is at a work station awaiting
processing.
In a production system the Work in Progress (or Work in Process) is the work that is in various stages of production between raw material and finished product.
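The yield and takt time formulas above can be checked with a short computational sketch. The following Python snippet is illustrative only: the unit counts and shift figures are hypothetical, and the four sub-process yields are the ones quoted in the RTY example above.

```python
# Illustrative sketch of the process flow metrics described above.
# All input figures are hypothetical examples.

def first_pass_yield(units_in, good_units_no_rework):
    """FPY: good units (with no rework) out of a process / units going in."""
    return good_units_no_rework / units_in

def rolled_throughput_yield(sub_process_yields):
    """RTY: product of the first-pass yields of all sub-processes."""
    rty = 1.0
    for y in sub_process_yields:
        rty *= y
    return rty

def takt_time(available_time_per_shift, demand_per_shift):
    """Takt time: available time per shift / customer demand per shift."""
    return available_time_per_shift / demand_per_shift

print(first_pass_yield(200, 188))                             # 0.94
print(rolled_throughput_yield([0.994, 0.987, 0.951, 0.990]))  # ~0.924, as in the text
print(takt_time(27000, 450))                                  # 60.0 seconds per unit
```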
Process Analysis Tools
Flow charts, process maps, written procedures and work instructions are tools used for process
analysis and documentation. These tools are discussed as
Flow Charts – It is also called as process map. It is useful process user and to novice for the
process or auditor. It can depict the sequence of product, containers, paperwork, operator
actions or administrative procedures. It is usually the starting point for process improvement.
Process Mapping - Depicting a process in schematic format has the advantage of making the process being described easy to visualize. Similar to a flow chart, it uses symbols, arrows and words. It is used to outline new procedures and review old procedures.
Written Procedures – These are created for most operations in advance by the appropriate
users. The procedure should be developed by those having responsibility for the process of
interest. Documenting process in the form of a procedure facilitates consistency in the
process.
Work Instructions - Procedures describe the process at a general level, while work
instructions provide details and a step-by-step sequence of activities. Flow charts may also be
used with work instructions to show relationships of process steps. Controlled copies of work
instructions are kept in the area where the activities are performed.
5.2. Data Collection
Data collection is based on the crucial aspects of what to know, from whom to know it, and what to do with the data. Factors which ensure that data is relevant to the project include
Person collecting data like team member, associate, subject matter expert, etc.
Type of Data to collect like cost, errors, ratings etc.
Time Duration like hourly, daily, batch-wise etc.
Data source like reports, observations, surveys etc.
Cost of collection
Types of data
There are two types of data, discrete and continuous.
Attribute or discrete data - It is based on counting like the number of processing errors, the
count of customer complaints, etc. Discrete data values can only be non-negative integers
such as 1, 2, 3, etc. and can be expressed as a proportion or percent (e.g., percent of x,
percent good, percent bad). It includes
Count or percentage – counts of errors or the percentage of output with errors.
Binomial data - Data can have only one of two values like yes/no or pass/fail.
Attribute-Nominal - The "data" are names or labels. Like in a company, Dept A, Dept B,
Dept C or in a shop: Machine 1, Machine 2, Machine 3
Attribute-Ordinal - The names or labels represent some value inherent in the object or
item (so there is an order to the labels) like on performance - excellent, very good, good,
fair, poor or tastes - mild, hot, very hot
Variable or continuous data - They are measured on a continuum or scale. Data values for
continuous data can be any real number: 2, 3.4691, -14.21, etc. Continuous data can be
recorded at many different points and are typically physical measurements like volume,
length, size, width, time, temperature, cost, etc. It is more powerful than attribute as it is
more precise due to decimal places which indicate accuracy levels and specificity. It is any
variable measured on a continuum or scale that can be infinitely divided.
Data are said to be discrete when they take on only a finite number of points that can be
represented by the non-negative integers. An example of discrete data is the number of defects in
a sample. Data are said to be continuous when they exist on an interval, or on several intervals.
An example of continuous data is the measurement of pH. Quality methods exist based on
probability functions for both discrete and continuous data.
Data can often be presented as variables data; for example, 10 scratches could be reported as a total scratch length of 8.37 inches. The ultimate purpose of the data collection and the type of data are the most significant factors in the decision to collect attribute or variables data.
Converting Data Types - Continuous data tend to be more precise due to decimal places, but may need to be converted into discrete data. As continuous data contain more information than discrete data, information is lost in the conversion to discrete data.
Discrete data cannot be converted to continuous data; rather than measuring how much deviation from a standard exists, the user may choose to retain the discrete data because it is easier to use. Converting variable data to attribute data may allow a quicker assessment, but the risk is that information will be lost in the conversion.
Measurement Scales
A measurement is the assignment of a numerical value to something, usually a continuous element.
Measurement is a mapping from an empirical system to a selected numerical system. The
numerical system is manipulated and the results of the manipulation are studied to help the
manager better understand the empirical system. Measured data is regarded as being better than
counted data. It is more precise and contains more information. Sometimes, data will only occur
as counted data. If the information can be obtained as either attribute or variables data, it is
generally preferable to collect variables data.
The information content of a number depends on the scale of measurement used, which also determines the types of statistical analyses that can be applied. Hence, the validity of an analysis also depends on the scale of measurement. The four measurement scales employed are nominal, ordinal, interval, and ratio, summarized below.
Nominal
Definition: Only the presence or absence of an attribute; it can only count items. Data consist of names or categories only, and no ordering scheme is possible. Central location is the mode, which is the only information available for dispersion.
Example: go/no-go, success/fail, accept/reject
Statistics: percent, proportion, chi-square tests

Ordinal
Definition: Data are arranged in some order, but differences between values cannot be determined or are meaningless. It can say that one item has more or less of an attribute than another item, and can order a set of items. Central location is the median, with percentages for dispersion.
Example: taste, attractiveness
Statistics: rank-order correlation, sign or run test

Interval
Definition: Data are arranged in order and differences can be found. However, there is no inherent starting point, and ratios are meaningless. The difference between any two successive points is equal; the scale is often treated as a ratio scale even when the assumption of equal intervals is incorrect. It supports adding, subtracting and ordering. Central location is the arithmetic mean, with standard deviation for dispersion.
Example: calendar time, temperature
Statistics: correlations, t-tests, F-tests, multiple regression

Ratio
Definition: An extension of the interval level that includes an inherent zero starting point; both differences and ratios are meaningful. A true zero point indicates the absence of an attribute. It supports adding, subtracting, multiplying and dividing. Central location is the geometric mean, with percent variation for dispersion.
Example: elapsed time, distance, weight
Statistics: t-test, F-test, correlations, multiple regression
Sampling Methods
Practically all items of a population cannot be measured, due to cost or impracticality; hence, sampling is used to obtain a representative group of items to measure. Various sampling strategies are
Random Sampling - The use of a sampling plan requires randomness in sample selection, giving every part an equal chance of being selected for the sample. The sampling sequence must be based on an independent random plan. It is the least biased of all sampling techniques; there is no subjectivity, as each member of the total population has an equal chance of being selected. Random samples can also be obtained using random number tables.
Sequential or Systematic Sampling – In it, every nth record is selected from a list of the population. Usually, these plans are ended after the number inspected has exceeded the sample size of a sampling plan. It is used for costly or destructive testing. If the list does not contain any hidden order, this strategy is just as random as random sampling.
Stratified Sampling – It selects random samples from each group or process that is different. If the population has identifiable categories, or strata, that share a common characteristic, random sampling is used to select a sufficient number of units from each stratum. Stratified sampling is often used to reduce sampling error. The resulting mix of samples can be biased if the proportions of the samples do not reflect the relative frequencies of the groups (a sketch of these strategies follows below).
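As a rough illustration of the random, systematic and stratified strategies described above, the sketch below draws samples from a hypothetical numbered population using Python's standard library; the population size, sample sizes and strata are invented for the example.

```python
import random

population = list(range(1, 101))   # hypothetical population of 100 numbered parts

# Random sampling: every part has an equal chance of selection.
random_sample = random.sample(population, 10)

# Systematic sampling: every nth record (n = 10), starting from a
# random point within the first interval.
n = 10
start = random.randrange(n)
systematic_sample = population[start::n]

# Stratified sampling: random samples drawn from each stratum,
# in proportion to the stratum's share of the population.
strata = {"machine_1": population[:60], "machine_2": population[60:]}
stratified_sample = []
for name, stratum in strata.items():
    k = round(10 * len(stratum) / len(population))   # proportional allocation
    stratified_sample.extend(random.sample(stratum, k))

print(random_sample)
print(systematic_sample)
print(stratified_sample)
```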
Sample Homogeneity - It occurs when the data chosen for a sample have similar characteristics; it concerns how similar the data are within a given sample. If data come from a variety of sources, such as several production streams or several geographical areas, the results will reflect these combined sources. The aim is homogeneous data, relating the data to a single source as far as possible, so that the influence of an input of concern on the data can be evaluated and determined. Non-homogeneous data result in errors, and a lack of homogeneity in the data will hide the sources and make root cause analysis difficult.
Sampling Distribution of Means - If the means of all possible samples are obtained and
organized, we could derive the sampling distribution of the means. The mean of the sampling
distribution of the mean is the mean of the population from which the scores were sampled.
Therefore, if a population has a mean µ, then the mean of the sampling distribution of the mean
is also µ.
Sampling Error - The sample statistics may not always be exactly the same as their
corresponding population parameters. The difference is known as the sampling error.
Collecting data
A few types of data collection methods include
Check sheets - A check sheet is a structured, well-prepared form for collecting and analyzing data, consisting of a list of items and some indication of how often each item occurs. There are several types of check sheets: confirmation check sheets for confirming whether all steps in a process have been completed, process check sheets to record the frequency of observations within a range of measurement, defect check sheets to record the observed frequency of defects, and stratified check sheets to record the observed frequency of defects by defect type and one other criterion. It is easy to use, provides a choice of observations and is good for determining frequency over time. It should be used to collect observable data when the collection is managed by the same person or at the same location in a process.
Coded data - It is used when too many digits would have to be recorded in small blocks, when capturing long sequences of digits from a single observation, or when rounding-off errors are observed while recording numbers with many digits. It is also used if numeric data is used to represent attribute data, or if the data quantity is not enough for statistical significance at the sample size. Various types of coded data collection are
Truncation coding – storing only 3, 2 or 9 for 1.0003, 1.0002, and 1.0009
Substitution coding – storing fractional observations as integers, such as expressing 32-3/8 inches as the integer 259, using 1/8 inch as the base unit
Category coding – using a code for a category, like "S" for scratch
Adding/subtracting a constant or multiplying/dividing by a factor – usually used for encoding or decoding
Automatic measurements - In it, a computer or electronic equipment performs data gathering without human intervention, such as monitoring the radioactivity level in a nuclear reactor. The equipment observes and records data for analysis and action.
5.3. Measurement Systems
In order to ensure a measurement method is accurate and producing quality results, a method
must be defined to test the measurement process as well as ensure that the process yields data
that is statistically stable.
Measurement Methods
Various terms used in measurement systems are
Measuring instruments – They are typically expensive and should be treated with care.
Measuring tools must be calibrated on a scheduled basis as well as after any suspected
damage.
Reference/Measuring Surfaces - A reference surface is the surface of a measuring tool that is
fixed. The measuring surface is movable.
Transfer Tools - Transfer tools have no reading scale, an example, is spring calipers. The
measurement is transferred to another measurement scale for direct reading.
Types of Gages used are
Attribute Gages - Attribute gages are fixed gages which are typically used to make a go/no-go decision. Examples of attribute instruments are master gages, plug gages, contour gages, thread gages, limit length gages, assembly gages, etc. Attribute data indicates only whether a product is good or bad.
Variable Gages - Variable measuring instruments provide a physical measured dimension.
Examples of variable instruments are line rules, vernier calipers, micrometers, depth
indicators, run out indicators, etc. Variable information provides a measure of the extent that
a product is good or bad, relative to specifications. Variable data is often useful for process
capability determination and may be monitored via control charts.
Attribute screens are screening tests performed on a sample with the results falling into one of
two categories, such as acceptable or not acceptable. Because the screen tests are conducted on
either the entire population of items or on a significantly large proportion of the population, the
screen test must be of a nondestructive nature.
Various gages and measuring instruments used for measurement are described below.
Gage (Gauge) Blocks - Carl Johansson developed steel blocks to establish a measurement standard that could duplicate national standards, be used in any shop, and hold accuracy within a few millionths of an inch. Gage blocks are made from high carbon or chromium alloyed steel, tungsten carbide, chromium carbide or fused quartz. They are used to set a length dimension for a transfer measurement, and for calibration of a number of other tools.
Calipers - Calipers are used to measure length. The length can be an inside dimension, outside dimension, height, or depth. Calipers are of four types: spring calipers, dial calipers, vernier calipers and digital calipers.
The Vernier Scale - Vernier scales are used on a variety of measuring instruments such as
height gages, depth gages, inside or outside vernier calipers and gear tooth verniers.
Optical Comparators - Comparators use a beam of light directed upon the part to be
inspected, and the resulting shadow is magnified and projected upon a viewing screen. The
image can then be measured by comparing it with a master chart or outline on the viewing
screen or measurements taken. To pass inspection, the shadow outline of the object must fall
within predetermined tolerance limits.
Micrometers - Micrometers, or “mics,” may be purchased with frame sizes from 0.5 inches to 48 inches. A 2" micrometer reads from 1" to 2". Most “mics” have an accuracy of 0.001" and, using a vernier scale, an accuracy of 0.0001" can be obtained.
Supermicrometers when used in temperature and humidity controlled rooms, are able to
make linear measurements to millionths of an inch.
Dial Indicators - They are mechanical instruments for measuring distance variations. Most
dial indicators amplify a contact point reading by use of an internal gear train mechanism.
Surface Plates - They are a reference plane for dimensional measurements. They are
customarily used with a toolmaker's flat, angles, parallels, V blocks and cylindrical gage
block stacks.
Ring Gages - They are used to check external cylindrical dimensions and are often in “go,
no-go” sets. A thread ring gage is used to check male threads.
Plug Gages - They are generally “go, no-go” gages, and are used to check internal
dimensions. The thread plug gage is designed exactly as the plug gage but instead of a
smooth cylinder at each end, the ends are threaded.
Pneumatic Gages - Pneumatic amplification gage types include one actuated by varying air pressure and another by varying air velocity at constant pressure. Measurements can be read to millionths of an inch.
Non-Destructive Testing (NDT) and Non-Destructive Evaluation (NDE) - NDT and NDE
techniques evaluate material properties without impairing the future usefulness of the items
being tested. The advantages of NDT techniques include the use of automation, 100%
product testing and the guarantee of internal soundness. Some NDT results are open to
interpretation and demand considerable skill on the part of the examiner.
Visual Inspection - Visual examination of product color, texture, and appearance gives
valuable information. The human eye is frequently aided by magnifying lenses or other
instrumentation. This technique is sometimes called scanning inspection.
Measurement Systems Analysis
It refers to the analysis of precision and accuracy of measurement methods. It is an experimental
and mathematical method of determining how much the variation within the measurement
process contributes to overall process variability. Characteristics contribute to the effectiveness
of a measurement method which is
Accuracy - It is the nearness of the measured result to the reference value; an unbiased true value is normally reported. It has different components:
Bias - It is the systematic difference between the average measured value and a reference
value. The reference value is an agreed standard, such as a standard traceable to a
national standards body. When applied to attribute inspection, bias refers to the ability of
the attribute inspection system to produce agreement on inspection standards. Bias is
controlled by calibration, which is the process of comparing measurements to standards.
Linearity – It is the difference in bias through measurements. How does the size of the
part affect the accuracy of the measurement method?
Stability – It is the change of bias over time and usage. How accurately does the
measurement method perform over time?
Sensitivity - The gage should be sensitive enough to detect differences in measurement as
slight as one-tenth of the total tolerance specification or process spread.
Precision - It is the ability to repeat the same measurement by the same operator at or near the same time, with nearness of the repeated measurements to one another. Its components are
Reproducibility - The reproducibility of a single gage is customarily checked by
comparing the results of different operators taken at different times. It is the variation in
the average of the measurements made by different appraisers using the same measuring
instrument when measuring the identical characteristic on the same part.
Repeatability - It is the variation in measurements obtained with one measurement
instrument when used several times by one appraiser, while measuring the identical
characteristic on the same part. Variation obtained when the measurement system is
applied repeatedly under the same conditions is usually caused by conditions inherent in
the measurement system.
Repeatability serves as the foundation that must be present in order to achieve reproducibility.
Reproducibility must be present before achieving accuracy. Precision requires that the same
measurement results are achieved for the condition of interest with the selected measurement
method.
A measurement method must first be repeatable. A user of the method must be able to repeat the
same results given multiple opportunities with the same conditions. The method must then be
reproducible. Several different users must be able to use it and achieve the same measurement
results. Finally, the measurement method must be accurate. The results the method produces
must hold up to an external standard or a true value given the condition of interest.
Measurement Systems in the Enterprise
Measurement systems are not limited to the production floor for testing purposes; they are also applied across the enterprise in different functions to measure performance, inputs and outputs, in order to achieve quality and customer satisfaction.
Performance appraisal is an effective tool within the human resource function of an organization for measuring the efficiency of employees against laid-down KPIs (Key Performance Indicators). Marketing and sales also apply measurement, using surveys to gauge customer satisfaction. The logistics, supply chain and stores departments in an organization likewise apply measurements to inventory items in order to optimize inventory.
Metrology
It is the science of measurement. The word metrology derives from two Greek words: metron (meaning measure) and logos (meaning logic). Metrology involves the following
The establishment of measurement standards that are both internationally accepted and
definable
The use of measuring equipment to correlate the extent that product and process data
conforms to specification
The regular calibration of measuring equipment, traceable to established international
standards
5.4. Statistics
Statistics is the study of the collection, organization, analysis, interpretation and presentation of
data. It deals with all aspects of data including the planning of data collection in terms of the
design of surveys and experiments.
Terminologies
Various statistics terminologies which are used extensively are
Data - facts, observations, and information that come from investigations.
Measurement data sometimes called quantitative data -- the result of using some
instrument to measure something (e.g., test score, weight);
Categorical data also referred to as frequency or qualitative data. Things are grouped
according to some common property and the number of members of the group are
recorded (e.g., males/females, vehicle type).
Variable - property of an object or event that can take on different values. For example,
college major is a variable that takes on values like mathematics, computer science, etc.
Discrete Variable - a variable with a limited number of values (e.g., gender: male/female).
Continuous Variable – It is a variable that can take on many different values, in theory,
any value between the lowest and highest points on the measurement scale.
Independent Variable - a variable that is manipulated, measured, or selected by the user
as an antecedent condition to an observed behavior. In a hypothesized cause-and-effect
relationship, the independent variable is the cause and the dependent variable is the
effect.
Dependent Variable - a variable that is not under the user's control. It is the variable that
is observed and measured in response to the independent variable.
Central Limit Theorem
The central limit theorem is the basis of many statistical procedures. The theorem states that for
sufficiently large sample sizes (n ≥ 30), regardless of the shape of the population distribution, if samples of size n are randomly drawn from a population that has a mean µ and a standard deviation σ, the sample means X̄ are approximately normally distributed. If the population is normally distributed, the sample means are normally distributed regardless of the sample size.
Hence, for sufficiently large populations, the normal distribution can be used to analyze samples
drawn from populations that are not normally distributed, or whose distribution characteristics
are unknown. The theorem states that this distribution of sample means will have the same mean
as the original distribution, the variability will be smaller than the original distribution, and it
will tend to be normally distributed.
When means are used as estimators to make inferences about a population's parameters and n ≥ 30, the estimator will be approximately normally distributed in repeated sampling. The mean and standard deviation of that sampling distribution are given as µx̄ = µ and σx̄ = σ/√n. The theorem is applicable for controlled or predictable processes: most points on the chart tend to be near the average, the curve's shape is bell-like, and the sides tend to be symmetrical. Using ±3 sigma control limits, the central limit theorem is the basis of the prediction that, if the process has not changed, a sample mean falls outside the control limits an average of only 0.27% of the time.
The theorem enables the use of smaller sample averages to evaluate any process because
distributions of sample means tend to form a normal distribution.
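A small simulation can make the theorem concrete. The sketch below is illustrative: the exponential population (a clearly non-normal, skewed distribution) and the sample size are arbitrary choices, used to show that the sample means have mean close to µ and standard deviation close to σ/√n.

```python
import random
import statistics

random.seed(1)
n = 36                 # sample size (n >= 30)
# An exponential population with rate 1 has mean mu = 1 and sd sigma = 1.

# Draw 5000 samples of size n from the skewed population
# and record each sample's mean.
sample_means = [
    statistics.mean(random.expovariate(1.0) for _ in range(n))
    for _ in range(5000)
]

print(statistics.mean(sample_means))   # close to mu = 1.0
print(statistics.stdev(sample_means))  # close to sigma / sqrt(n) = 1/6 = 0.167
```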
Descriptive Statistics
Central Tendencies - Central tendency is a measure that characterizes the central value of a
collection of data that tends to cluster somewhere between the high and low values in the data. It
refers to measurements like mean, median and mode. It is also called measures of center. It
involves plotting data in a frequency distribution which shows the general shape of the
distribution and gives a general sense of how the numbers are grouped. Several statistics can be
used to represent the "center" of the distribution.
Mean - The mean is the most common measure of central tendency. It is the ratio of the sum of the scores to the number of scores. For ungrouped data, which has not been grouped in intervals, the arithmetic mean is the sum of all the values in the population divided by the number of values in the population:
µ = (∑ Xi) / N
where µ is the arithmetic mean of the population, Xi is the ith value observed, N is the number of items in the observed population and ∑ denotes summation. For example, if the production of an item for 5 days is 500, 750, 600, 450 and 775, the arithmetic mean is µ = (500 + 750 + 600 + 450 + 775) / 5 = 615. It gives the distribution's arithmetic average and provides a reference point for relating all other data points. For grouped data, an approximation is made using the midpoints of the intervals and the frequencies of the distribution: µ ≈ ∑(fi mi) / ∑fi, where fi is the frequency and mi the midpoint of the ith interval.
Median – It divides the distribution into halves; half the values are above it and half are below it when the data are arranged in numerical order. It is also called the score at the 50th percentile of the distribution. The median location of N numbers can be found by the formula (N + 1) / 2. When N is an odd number, the formula yields an integer that represents the value in a numerically ordered distribution corresponding to the median location. For example, in the distribution of numbers (3 1 5 4 9 9 8) the median location is (7 + 1) / 2 = 4; applied to the ordered distribution (1 3 4 5 8 9 9), the value 5 is the median. If there were only 6 values (1 3 4 5 8 9), the median location would be (6 + 1) / 2 = 3.5, so the median is halfway between the 3rd and 4th scores (4 and 5), i.e. 4.5. The median is the distribution's center point or middle value, with an equal number of data points on either side. It is especially useful when the data set has extreme high or low values, and it is used with non-normal data.
Mode – It is the most frequent or common score in the distribution, or the point or value of X that corresponds to the highest point on the distribution. If the highest frequency is shared by more than one value, the distribution is said to be multimodal; with two such values it is bimodal, with peaks at two different points in the distribution. For example, in the measurements 75, 60, 65, 75, 80, 90, 75, 80, 67, the value 75 appears most frequently, so it is the mode.
Measures of Spread - Although the average value in a distribution is informative about how
scores are centered in the distribution, the mean, median, and mode lack context for interpreting
those statistics. Measures of variability provide information about the degree to which individual
scores are clustered about or deviate from the average value in a distribution.
Range - The simplest measure of variability to compute and understand is the range. The
range is the difference between the highest and lowest score in a distribution. Although it is
easy to compute, it is not often used as the sole measure of variability due to its instability.
Because it is based solely on the most extreme scores in the distribution and does not fully
reflect the pattern of variation within a distribution, the range is a very limited measure of
variability.
Inter-quartile Range (IQR) - It provides a measure of the spread of the middle 50% of the scores, defined as the 75th percentile minus the 25th percentile. The inter-quartile range plays an important role in the graphical method known as the box plot. The advantage of the IQR is that it is easy to compute and extreme scores in the distribution have much less impact; however, this strength is also a weakness, since it discards much of the data. Researchers want to study variability while eliminating scores that are likely to be accidents. The box plot allows for this distinction and is an important tool for exploring data.
Variance (σ2) - The variance is a measure based on the deviations of individual scores from the mean. Since simply summing the deviations results in a value of 0, the variance is based on the squared deviations of scores about the mean. When the deviations are squared, the rank order and relative distance of scores in the distribution are preserved while negative values are eliminated. Then, to control for the number of subjects in the distribution, the sum of the squared deviations is divided by N (population) or by N − 1 (sample), giving σ² = ∑(Xi − µ)² / N for a population and s² = ∑(Xi − X̄)² / (N − 1) for a sample. The result is the average of the sum of the squared deviations, and it is called the variance. The variance tends to be a large number, and it is also difficult to interpret because it is the square of a value.
Standard deviation (σ) - The standard deviation is defined as the positive square root of the
variance and is a measure of variability expressed in the same units as the data. The standard
deviation is very much like a mean or an "average" of these deviations. In a normal
(symmetric and mound-shaped) distribution, about two-thirds of the scores fall between +1
and -1 standard deviations from the mean and the standard deviation is approximately 1/4 of
the range in small samples (N < 30) and 1/5 to 1/6 of the range in large samples (N > 100).
Coefficient of variation (cv) - Measures of variability cannot be compared directly across different quantities, such as the standard deviation of bolt production versus that of parts availability. If the standard deviation for bolt production is 5 and for availability of parts is 7 over a given time frame, it cannot be concluded that the variability of parts availability is greater than that of bolt production. Hence, a relative measure called the coefficient of variation is used: the ratio of the standard deviation to the mean, cv = σ / µ for a population and cv = s / X̄ for a sample.
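The measures of center and spread described above can be computed directly with Python's statistics module. The sketch below reuses the 5-day production figures from the mean example and the measurement list from the mode example; everything else is hypothetical.

```python
import statistics

data = [500, 750, 600, 450, 775]     # daily production figures from the text

mean = statistics.mean(data)         # 615.0
median = statistics.median(data)     # 600
mode = statistics.mode([75, 60, 65, 75, 80, 90, 75, 80, 67])   # 75, as in the text

pvar = statistics.pvariance(data)    # population variance (divide by N): 16900
stdev = statistics.pstdev(data)      # population standard deviation: 130.0
cv = stdev / mean                    # coefficient of variation, cv = sigma / mu

data_range = max(data) - min(data)   # simplest measure of spread: 325
print(mean, median, mode, pvar, stdev, round(cv, 3), data_range)
```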
Measures of Shape - For distributions summarizing data from continuous measurement scales,
statistics can be used to describe how the distribution rises and drops.
Symmetric - Distributions that have the same shape on both sides of the center are called symmetric; a symmetric distribution with a single peak is referred to as a normal distribution.
Skewness – It refers to the degree of asymmetry in a distribution. Asymmetry often reflects extreme scores in a distribution. A positively skewed distribution has a tail extending out to the right (larger numbers), so the mean is greater than the median; the mean is sensitive to each score in the distribution and is subject to large shifts when the sample is small and contains extreme scores. A negatively skewed distribution has an extended tail pointing to the left (smaller numbers), reflecting a bunching of values in the upper part of the distribution with fewer scores at the lower end of the measurement scale.
Measures of Association – They provide information about the relatedness between variables, helping to establish the existence of a relationship between variables and its strength. They are
Covariance - It shows how the variable y reacts to a variation of the variable x. Its formula for a population is cov(X, Y) = ∑(xi − µx)(yi − µy) / N
Correlation coefficient (r) - It is a number that ranges between −1 and +1. The sign of r is the same as the sign of the covariance. When r equals −1, there is a perfect negative relationship between the variations of x and y: an increase in x leads to a proportional decrease in y. Similarly, when r equals +1, there is a perfect positive relationship: the changes in x and the changes in y are in the same direction and in the same proportion. If r is zero, there is no relationship between the variations of the two. Other values of r indicate the strength of the relationship according to how close r is to −1, 0, or +1. The formula for the population correlation coefficient is ρ = Cov(X, Y) / (σx σy)
Coefficient of determination (r2) - It measures the proportion of the changes in the dependent variable y that is explained by the independent variable x. It is the square of the correlation coefficient r and is therefore always positive, with values between zero and one. If it is zero, the variations of y are not explained by the variations of x; if it is one, the changes in y are explained fully by the changes in x; other values indicate partial explanation according to their closeness to zero or one.
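These association measures can be verified numerically from their formulas. The x and y data below are invented for the illustration; the code computes the population covariance, the correlation coefficient and the coefficient of determination.

```python
import math

x = [1, 2, 3, 4, 5]              # hypothetical independent variable
y = [2.1, 3.9, 6.2, 8.0, 9.8]    # hypothetical dependent variable
n = len(x)

mx = sum(x) / n
my = sum(y) / n

# Population covariance: cov(X, Y) = sum((xi - mx)(yi - my)) / N
cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n

# Population standard deviations of x and y.
sx = math.sqrt(sum((xi - mx) ** 2 for xi in x) / n)
sy = math.sqrt(sum((yi - my) ** 2 for yi in y) / n)

r = cov / (sx * sy)              # correlation coefficient
r2 = r ** 2                      # coefficient of determination

print(round(cov, 3), round(r, 4), round(r2, 4))   # r is close to +1 here
```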
Frequency Distributions - A distribution is the amount of potential variation in the outputs of a
process, usually expressed by its shape, mean or variance. A frequency distribution graphically
summarizes and displays the distribution of a process data set. The shape is visualized against
how closely it resembles the bell curve shape or if it is flatter or skewed to the right or left. The
frequency distribution's centrality shows the degree to which the data center on a specific value
and the amount of variation in range or variance from the center.
A frequency distribution groups data into certain categories, each category representing a subset
of the total range of the data population or sample. Frequency distributions are usually displayed
in a histogram. Size is shown on the horizontal axis (x-axis) and the frequency of each size is
shown on the vertical axis (y-axis) as a bar graph. The length of the bars is proportional to the
relative frequencies of the data falling into each category, and the width is the range of the
category. It is used to ascertain information about data like distribution type of the data.
It is developed by segmenting the range of the data into equal-sized segments (bars), determining the number of data points that fall within each segment, labeling the vertical axis with the frequency counts for each bar and the horizontal axis with the range of the response variable, and then constructing the histogram.
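The development steps just described map directly onto a few lines of code. The sketch below uses NumPy (an assumption; the standard library could do the same binning) and invented measurement data to segment the range into equal-sized bars and count the data points in each.

```python
import numpy as np

# Hypothetical measurements of the response variable.
data = [9.8, 10.1, 10.0, 9.7, 10.3, 10.2, 9.9, 10.0, 10.4, 9.6,
        10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 10.0, 9.9, 10.3, 10.0]

# Segment the range into 4 equal-sized bins and count points per bin.
counts, edges = np.histogram(data, bins=4)

# Print a crude text histogram: bin range, bar, and frequency.
for count, lo, hi in zip(counts, edges[:-1], edges[1:]):
    print(f"{lo:.2f} - {hi:.2f}: {'#' * int(count)}  ({count})")
```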
Cumulative Frequency Distribution - It is created from a frequency distribution by adding an additional column to the table called cumulative frequency; for each value, the cumulative frequency is the total frequency up to and including that value. It shows the number of data points at or below a particular value.
The cumulative distribution function, F(x), denotes the area beneath the probability density
function to the left of x.
Graphical methods
Graphical methods are effective tools for the visual evaluation of data, showing the relationships between variables. They provide a visual image of the data, complementing numerical methods for identifying patterns in the data. They include box plots, stem and leaf plots, scatter diagrams, pattern and trend analysis, histograms, normal probability distributions and Weibull distributions.
Box plot - It is also called a box-and-whisker plot or “five number summary”. It has five points
of interest, which are the quartiles, the median, and the highest and lowest values and shows how
the data are scattered within those ranges. It shows location, spread and shape of the data. It is
used for graphically showing the variation between multiple variables and the variations within
the ranges. In it, the upper and lower quartiles of the data form the ends of the box, the median
forms the centerline of the box which is also dividing the box and the minimum and maximum
data points are drawn as end points to lines that extend from the box (the whiskers). Outlier data
are represented by asterisks or diamonds outside of the minimum or maximum points. Notches
indicate variability of the median, and widths are proportional to the log of the sample size.
It is used when comparing two or more sets of data or determining significance of an apparent
difference. It is useful with a large number of data sets by providing a graphic summary of a data
set as it visually shows the center, the spread, the overall range and indicates skewness of the
distribution. It is usually used in the early stages of data analysis.
Developing a box plot involves
List the data in numerical order and compute the median.
Identify the lower and upper halves of the data and compute their medians (the lower and upper quartiles).
Compute the inter-quartile range and plot the five points on a number line (the three medians, plus the lowest and highest values).
Draw a box through the upper and lower quartile points and a vertical line through the median point.
Draw the whiskers from each end of the box to the smallest and largest values.
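A minimal sketch of the five-number summary behind a box plot, using NumPy percentiles on an invented data set; a plotting library such as matplotlib would draw the same figure directly from the raw data.

```python
import numpy as np

data = np.array([52, 57, 57, 58, 63, 66, 66, 67, 67, 68,
                 69, 70, 70, 70, 70, 72, 73, 75, 75, 76,
                 76, 78, 79, 89])          # hypothetical scores

q1, median, q3 = np.percentile(data, [25, 50, 75])
iqr = q3 - q1                              # inter-quartile range

# Points beyond 1.5 * IQR from the box are commonly flagged as outliers.
low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = data[(data < low_fence) | (data > high_fence)]

print(data.min(), q1, median, q3, data.max())   # the five plotted points
print(iqr, outliers)
```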
Stem and Leaf Plot - It separates each number into a stem (all digits but the last) and a leaf (the last digit). For example, for the numbers 45 and 59, the stems are 4 and 5, while the leaves are 5 and 9. It is easy to make and shows shape and distribution quickly. It is a compact depiction of data that can show both variable and categorical data sets. It resembles a histogram and is used to visualize the spread of a distribution and indicate around what values the data are mainly concentrated. It is composed of two parts: the stem on the left side of the graph and the leaf on the right. Data can be read directly from the diagram. It is useful for classifying and organizing data as it is collected, but all numbers should be whole numbers or of the same precision. In the example figure, most data fall between 70 and 79.
Developing Stem and Leaf Plot
Sort the given data in numerical order (ascending).
Separate the numbers into stems and leaves.
Group the numbers with the same stems.
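These steps are easy to automate. The sketch below, on invented whole-number data, sorts the values, splits each into a stem and a leaf, and groups the leaves by stem.

```python
from collections import defaultdict

data = [45, 59, 62, 67, 68, 70, 71, 72, 74, 75, 75, 77, 78, 79, 81, 83]

# Separate each sorted number into a stem (all digits but the last)
# and a leaf (the last digit), then group the leaves by stem.
stems = defaultdict(list)
for value in sorted(data):
    stems[value // 10].append(value % 10)

for stem in sorted(stems):
    print(f"{stem} | {' '.join(str(leaf) for leaf in stems[stem])}")
```

For this invented data most values fall in the 7-stem row, mirroring the 70-79 concentration mentioned in the text above.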
Histograms - A histogram shows frequencies in data as adjacent rectangles, erected over intervals, with an area proportional to the frequency of the observations in the interval. Histograms are frequency column graphs that display a static picture of process behavior and require a minimum of 50-100 data points. A histogram is characterized by the number of data points that fall within a given bar or interval, i.e. the frequency. It enables the user to visualize how the data points spread and skew, and to detect the presence of outliers. A stable, predictable process usually shows a bell-shaped histogram, although processes following other shapes, such as the exponential, lognormal, gamma, beta, Poisson, binomial or geometric, can also be stable.
The construction of a histogram starts with the division of a frequency distribution into equal
classes, and then each class is represented by a vertical bar. They are used to plot the density of
data especially of continuous data like weight or height.
Run Charts - A run chart displays how a process performs over time, with data points plotted in chronological order and connected as a line graph. It is useful for detecting variation, problem trends or patterns, since shifts are evident on run charts; for this reason it is also called a trend chart. It displays sequential data for spotting patterns and abnormalities, and is used for monitoring and communicating process performance, usually for displaying performance data over time or for showing tabulations.
Even though trends observable on a run chart might not signify deviation, as they might be within normal limits, they usually indicate a trend, shift or cycle. When a run chart exhibits seven or eight points successively up or down, a trend is clearly present in the data.
Developing Run Chart
Sequence the input data against time and order the data from lowest to highest.
Calculate the median and the range.
Make the Y-axis scale 1.5 to 2 times the range, and the X-axis scale 2 to 3 times the length of the Y-axis.
Depict the median by a dotted line.
Plot the points and connect them to form a line graph.
Scatter Diagram - A scatter diagram displays multiple XY coordinate data points that represent the relationship between two different variables on the X and Y axes. It is also called a scatter plot, X-Y graph or correlation chart. It depicts the strength of the relationship between an independent variable on the horizontal axis and a dependent variable on the vertical axis, and enables strategizing on how to control the effect of the relationship on the process.
It graphs pairs of continuous data, with one variable on each axis, showing what happens to one variable when the other variable changes. If the relationship is understood, then the dependent variable may be controlled. The relationship may show a correlation between the two variables, though correlation does not always indicate a cause-and-effect relationship. Correlation is positive when both variables move in the same direction, and negative when they move in opposite directions. The presence of correlation may be due to a cause-effect relationship, a relationship between one cause and another cause, or a relationship between one cause and two or more other causes.
It is used when determining whether two variables are related or when evaluating paired continuous data. It is also helpful for identifying potential root causes of a problem by relating two variables.
along the line, the stronger the relationship amongst them and the direction of the line indicates
whether the relationship is positive or negative. The degree of association between the two
variables is calculated by the correlation coefficient. If the points show no significant clustering,
there is probably no correlation.
Developing Scatter Diagram
Collect data for both variables.
Draw a graph with the independent variable on the horizontal axis (x) and the dependent
variable on the vertical axis (y).
For each pair of data, plot a dot (or symbol) where the x-axis value intersects the y-axis
value.
Normal Probability Plots - It is used to detect the presence of a normal bell curve, or Gaussian distribution, in the process data. The plot is defined by the mean and variance. For normally distributed data, the mean and median are very close and may be identical. The normal probability plot shows whether or not the data are distributed as a standard normal distribution: normally distributed data will follow a linear pattern. It is also called a normal test plot.
It is used when making predictions or decisions based on the data distribution and when testing the assumption of normality. Most of the data concentrate around or on the centerline, which divides the curve into two equal halves. The data are plotted against a theoretical normal distribution in such a way that the points should form an approximate straight line. Departures from this straight line indicate departures from normality.
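With SciPy and matplotlib (assumed to be available), a normal probability plot takes only a few lines; the data below are randomly generated for demonstration. probplot pairs each ordered sample value with its expected normal quantile, so normal data fall close to a straight line.

```python
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
data = rng.normal(loc=50, scale=5, size=80)   # hypothetical process data

# Ordered sample values are plotted against theoretical normal
# quantiles; a straight line indicates normality.
stats.probplot(data, dist="norm", plot=plt)
plt.title("Normal probability plot")
plt.show()
```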
Weibull Plots - It is usually used to estimate the cumulative probability that a given sample will
fail under certain conditions. The data can be used to determine a point at which a certain
number of samples will fail. Once it is known, this information can help design a process such
that no part of the sample approaches the stress limitations. It provides reasonably accurate
failure analysis and forecasts with extremely small samples by providing a simple and useful
graphical plot of the failure data.
The Weibull plot has special scales designed so that the data points will be almost linear if they follow a Weibull distribution. The Weibull distribution has three parameters, though only two may be used if the third is assumed:
α is the shape parameter
θ is the scale parameter
γ is the location parameter
Weibull plots usually chart data on the probable life of a product or process, measured in hours, miles, or any other metric that describes the time-to-failure. If complete data is available, the exact time-to-failure is known. For suspended (right-censored) data, the unit operated successfully for a known period of time and could have continued for an additional, unknown period. For interval (left-censored) data, the time-to-failure is known only within a certain range of time.
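Assuming complete (uncensored) failure data that follow a Weibull distribution, SciPy can estimate the shape and scale parameters; the failure times below are invented, and the location parameter γ is fixed at zero, the common two-parameter form.

```python
import scipy.stats as stats

# Hypothetical complete time-to-failure data, in hours.
failures = [105, 180, 260, 340, 420, 500, 610, 750, 900, 1100]

# Fit a two-parameter Weibull: location (gamma) fixed at 0.
shape, loc, scale = stats.weibull_min.fit(failures, floc=0)

# Probability that a unit fails by 300 hours under the fitted model.
p_fail_300 = stats.weibull_min.cdf(300, shape, loc=loc, scale=scale)

print(round(shape, 3), round(scale, 1), round(p_fail_300, 3))
```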
Valid Statistical Conclusions
Drawing statistical conclusions involves the use of enumerative and analytical studies, which are
Enumerative or descriptive studies describe data using mathematics and graphs and focus on the current situation; for example, a tailor taking a length measurement is obtaining quantifiable information, which is an enumerative approach. Enumerative data is data that can be counted. These studies are used to describe data, usually sample data, in terms of central tendency (median, mean and mode), variation (range and variance) and graphs of data (histograms, box plots and dot plots). Measures calculated from a sample are called statistics; the corresponding measures that describe a population are called parameters.
A statistic is a quantity derived from a sample of data for forming an opinion of a specified
parameter about the target population. A sample is used as data on every member of population
is impossible or too costly. A population is an entire group of objects that contains characteristic
of interest. A population parameter is a constant or coefficient that describes some characteristic
of a target population like mean or variance.
Analytical (Inferential) Studies - The objective of statistical inference is to draw conclusions about population characteristics based on the information contained in a sample. It uses sample data to predict or estimate what a population will do in the future. For example, a doctor taking a measurement such as blood pressure or heart rate to obtain a causal explanation for some observed phenomenon is using an analytic approach.
It entails defining the problem objective precisely; deciding whether it will be evaluated by a one- or two-tail test; formulating a null and an alternate hypothesis; selecting a test distribution and a critical value of the test statistic reflecting the degree of uncertainty that can be tolerated (the alpha or beta risk); calculating a test statistic value from the sample; and comparing the calculated value to the critical value to determine whether the null hypothesis is to be accepted or rejected. If the null is rejected, the alternate must be accepted. Thus, it involves testing hypotheses to determine the differences in population means, medians or variances between two or more groups of data and a standard, and calculating confidence intervals or prediction intervals.
5.5. Probability
Probability is a measure of the likeliness that an event will occur. Probability is used to quantify
an attitude of mind towards some proposition of whose truth we are not certain.
Concepts
Basic probability concepts and terminology is discussed below
Probability - It is the chance that something will occur, expressed as a decimal fraction or a percentage. It is the ratio of the chances favoring an event to the total number of chances for and against the event. The probability of getting a 4 when rolling a die is 1 (the count of 4s on a die) / 6 = 0.1667. Probability, then, is the number of successes divided by the total number of possible occurrences. Pr(A) is the probability of event A. The probability of any event (E) varies between 0 (no probability) and 1 (perfect probability).
Sample Space - It is the set of possible outcomes of an experiment or the set of conditions.
The sample space is often denoted by the capital letter S. Sample space outcomes are denoted
using lower-case letters (a, b, c . . .) or the actual values like for a dice, S={1,2,3,4,5,6}
Event - An event is a subset of a sample space. It is denoted by a capital letter such as A, B,
C, etc. Events have outcomes, which are denoted by lower-case letters (a, b, c . . .) or the
actual values if given like in rolling of dice, S={1,2,3,4,5,6}, then for event A if rolled dice
shows 5 so, A ={5}. The sum of the probabilities of all possible events (multiple E’s) in total
sample space (S) is equal to 1.
Independent Events - Each event is not affected by any other events for example tossing a
coin three times and it comes up "Heads" each time, the chance that the next toss will also be
a "Head" is still 1/2 as every toss is independent of earlier one.
Dependent Events - They are events which are affected by previous events. For example, drawing 2 cards from a deck reduces the population for the second card and hence its probability: the probability of drawing a King on the 1st draw is 4 out of 52, but on the 2nd draw (after a King has been drawn) it is 3 out of 51.
Simple Events - An event that cannot be decomposed is a simple event (E). The set of all
sample points for an experiment is called the sample space (S).
Compound Events - Compound events are formed by a composition of two or more events.
The two most important probability theorems are the additive and multiplicative laws.
Union of events - The union of two events is the event consisting of all outcomes contained in either of the two events. The union is denoted by the symbol U placed between the letters indicating the two events. For example, for event A={1,2} and event B={2,3} (the outcome of event A is 1 or 2 and of event B is 2 or 3), AUB = {1,2,3}.
Intersection of events - The intersection of two events is that event consisting of all outcomes
that the two events have in common. The intersection of two events can also be referred to as
the joint occurrence of events. The intersection is denoted by the symbol ∩ placed between
the letters indicating the two events like for event A={1,2} and event B={2,3} then, A∩B =
{2}
Complement - The complement of an event is the set of outcomes in the sample space that
are not in the event itself. The complement is shown by the symbol ` placed after the letter
indicating the event like for event A={1,2} and Sample space S={1,2,3,4,5,6} then
A`={3,4,5,6}
Mutually Exclusive - Mutually exclusive events have no outcomes in common; the intersection of an event and its complement contains no outcomes, i.e. it is the empty set Ø. For example, if A={1,2} and B={3,4}, then A ∩ B = Ø.
Equally Likely Outcomes - When a sample space consists of N possible outcomes, all
equally likely to occur, then the probability of each outcome is 1/N like the sample space of
all the possible outcomes in rolling a die is S = {1, 2, 3, 4, 5, 6}, all equally likely, each
outcome has a probability of 1/6 of occurring but, the probability of getting a 3, 4, or 6 is 3/6
= 0.5.
Probabilities for Independent Events or multiplication rule - When events are independent, the occurrence of one does not depend on the other events in the sample space, and the probability of two events A and B both occurring is P(A ∩ B) = P(A) x P(B). For many events the independence rule extends as P(A∩B∩C∩. . .) = P(A) x P(B) x P(C) . . . This rule is also called the multiplication rule. For example, the probability of rolling a 6 three times in a row with a die is 1/6 x 1/6 x 1/6 = 0.00463.
Probabilities for Mutually Exclusive Events or Addition Rule - Mutually exclusive events do not occur at the same time or in the same sample space and do not have any outcomes in common. Thus, for two mutually exclusive events A and B, the event A∩B = Ø and P(A∩B) = 0, so the probability of either occurring is P(AUB) = P(A) + P(B). More generally, for any events A and B, the probability of either or both occurring is P(AUB) = P(A) + P(B) – P(A∩B), which is called the addition rule. For example, if P(A) = 0.2, P(B) = 0.4, and P(A∩B) = 0.1, then P(AUB) = P(A) + P(B) - P(A∩B) = 0.2 + 0.4 - 0.1 = 0.5.
Conditional probability - It is the probability of an event given the occurrence of another
event in the sample space. The conditional probability of an event (the probability of event A
occurring given that event B has already occurred) is found as

P(A|B) = P(A∩B) / P(B)
For example, in a sample of 100 items, supplier 1 supplied 60 items (of which 4 are rejects) and
supplier 2 supplied 40 items. Let A be the event that an item is rejected and B the event that the
item is from supplier 1. Then the probability that an item is a reject given it is from supplier 1 is
P(A|B) = P(A∩B)/P(B) = (4/100)/(60/100) = 4/60 = 1/15.
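The arithmetic can be verified with a short Python sketch (the probabilities are taken from the
example above; the variable names are ours):

# Conditional probability: P(A|B) = P(A and B) / P(B)
p_A_and_B = 4 / 100    # item is a reject AND comes from supplier 1
p_B = 60 / 100         # item comes from supplier 1
p_A_given_B = p_A_and_B / p_B
print(p_A_given_B)     # 0.0667 = 1/15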
Probabilistic Distributions
Distribution - Prediction and decision-making require fitting data to distributions (such as the
normal, binomial, or Poisson). A probability distribution identifies the probability that a value
will occur within a given range, the probability that a value lesser or greater than x will occur,
or the probability that a value between x and y will occur.
A distribution describes the amount of variation in the outputs of a process, expressed by shape
(symmetry, skewness and kurtosis), average and standard deviation. For symmetrical
distributions the mean represents the central tendency of the data, but for skewed distributions
the median is the better indicator. The standard deviation provides a measure of variation from
the mean. Skewness is a measure of the location of the mode relative to the mean: if the mode is
to the mean's left (mean greater than mode), the skewness is positive; if to the right, negative;
for a symmetrical distribution, skewness is zero. Kurtosis measures the peakedness or relative
flatness of the distribution; kurtosis is higher for a taller and narrower peak.
Probability Distribution - It is a mathematical formula relating the values of a characteristic or
attribute with their probability of occurrence in the population. It depicts the possible events and
the associated probability for each of these events to occur. Probability distributions are divided as
Discrete data describe a finite set of possible occurrences for the data, like rolling a die where
the random variable can take the value 1, 2, 3, 4, 5 or 6. The most used discrete probability
distributions are the binomial, the Poisson, the geometric, and the hypergeometric
distribution.
Continuous data describe a continuum of possible occurrences that is unbroken; for example,
the distribution of body weight is a random variable with an infinite number of possible data points.
Probability Density Function - Probability distributions for continuous variables use
probability density functions (PDFs), which mathematically model the probability density
shown in a histogram; discrete variables instead have probability mass functions. PDFs employ
integrals as the summation of the area between two points when used in an equation. If a
histogram shows the relative frequencies of a series of output ranges of a random variable, then
the histogram also depicts the shape of the probability density for the random variable; hence,
the shape of the probability density function is also described as the shape of the distribution.
An example illustrates it.
Example: A fast-food chain advertises a burger weighing a quarter-kg, but it is not exactly 0.25
kg. One randomly selected burger might weigh 0.23 kg or 0.27 kg. What is the probability that a
randomly selected burger weighs between 0.20 and 0.30 kg? That is, if we let X denote the
weight of a randomly selected quarter-kg burger in kg, what is P(0.20 < X < 0.30)?
This problem is solved by using the probability density function. Imagine randomly selecting
100 burgers advertised to weigh a quarter-kg. Weighing the 100 burgers and creating a density
histogram of the resulting weights would show that most of the sampled burgers do indeed
weigh close to 0.25 kg, but some are a bit more and some a bit less. If the length of the class
interval on that density histogram is decreased, and decreased again, the intervals would
eventually get so small that we could represent the probability distribution of X, not as a density
histogram, but rather as a curve (by connecting the "dots" at the tops of the tiny rectangles).
Such a curve is denoted f(x) and is called a (continuous) probability density function. A density
histogram is defined so that the area of each rectangle equals the relative frequency of the
corresponding class, and the area of the entire histogram equals 1. Thus, finding the probability
that a continuous random variable X falls in some interval of values involves finding the area
under the curve f(x) sandwiched by the endpoints of the interval. In the case of this example, the
probability that a randomly selected burger weighs between 0.20 and 0.30 kg is the area under
f(x) between 0.20 and 0.30.
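The area itself can be computed once a distribution is assumed. The sketch below uses SciPy
and assumes, purely for illustration, that burger weights are normal with mean 0.25 kg and
standard deviation 0.02 kg (the standard deviation is our assumption, not given in the example):

from scipy.stats import norm

mu, sigma = 0.25, 0.02    # assumed mean and standard deviation (kg)
p = norm.cdf(0.30, mu, sigma) - norm.cdf(0.20, mu, sigma)
print(round(p, 4))        # 0.9876, the area under f(x) between 0.20 and 0.30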
Distribution Types - Various distributions are
Binomial - It is used in finite sampling problems when each observation has only one of two
possible outcomes, such as pass/fail.
Poisson - It is used for attribute situations in which each sample unit can have multiple
defects or failures.
Normal - It is characterized by the traditional "bell-shaped" curve, the normal distribution is
applied to many situations with continuous data that is roughly symmetrical around the mean.
Chi-square - It is used in many situations when an inference is drawn on a single variance or
when testing for goodness of fit or independence. Examples of use of this distribution include
determining the confidence interval for the standard deviation of a population or comparing
the frequency of variables.
Student's t - It is used in many situations when inferences are drawn about a single mean, or
when comparing two means, with the variance unknown.
F - It is used in situations when inferences are drawn from two variances such as whether two
population variances are different in magnitude.
Hypergeometric - It is the "true" probability distribution for sampling without replacement.
It is used in a similar manner to the binomial distribution except that the sample size is large
relative to the population. This distribution should be considered whenever the sample size
is larger than 10% of the population. The
hypergeometric distribution is the appropriate probability model for selecting a random
sample of n items from a population without replacement and is useful in the design of
acceptance-sampling plans.
Bivariate - It is created with the joint frequency distributions of modeled variables.
Exponential - It is used for instances of examining the time between failures.
Lognormal - It is used when raw data is skewed and the log of the data follows a normal
distribution. This distribution is often used for understanding failure rates or repair times.
Weibull - It is used when modeling failure rates particularly when the response of interest is
percent of failures as a function of usage (time).
Binomial Distribution - It is used to model discrete data having only two possible outcomes like
pass or fail, yes or no and which are exactly two mutually exclusive outcomes. It may be used to
find the proportion of defective units produced by a process and used when population is large –
when N> 50 with small size of sample compared to the population. The ideal situation is when
sample size (n) is less than 10% of the population (N) or n< 0.1N. The binomial distribution is
useful to find the number of defective products if the product either passes or fails a given test.
The mean, variance, and standard deviation for a binomial distribution are µ = np, σ² = npq and
σ = √(npq). The essential conditions for a binomial random variable are a fixed number of
observations (n) which are independent of each other, every trial resulting in either of the two
possible outcomes, and the probability of a success being p and the probability of a failure
being q = 1 - p.
The binomial probability distribution equation gives the probability p (the probability of
defective) of getting x defectives (number of defectives or occurrences) in a sample of n units
(or sample size) as

P(x) = [n! / (x!(n-x)!)] p^x (1-p)^(n-x)
As an example, if a product with a 1% defect rate is tested with ten sample units from the
process, then n = 10, x = 0 and p = 0.01, and the probability that there will be 0 defective
products is P(0) = (0.99)^10 ≈ 0.904.
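The same calculation can be checked with SciPy's binomial distribution; a minimal sketch:

from scipy.stats import binom

n, p = 10, 0.01
print(binom.pmf(0, n, p))    # P(X = 0) = 0.99**10, about 0.904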
Poisson Distribution - It estimates the number of instances a condition of interest occurs in a
process or population. It focuses on the probability of a number of events occurring over some
interval or continuum where µ, the average rate of the event occurring, is known; for example, a
project team may want to know the probability of finding a defective part on a manufactured
circuit board. Most frequently, this distribution is used when the condition may occur multiple
times in one sample unit and the user is interested in knowing the number of individual
characteristics found; for example, a critical attribute of a manufactured part is measured in a
random sampling of the production process with non-conforming conditions being recorded for
each sample. The collective number of failures from the sampling may be modeled using the
Poisson distribution. It can also be used
to project the number of accidents for the following year and their probable locations. The
essential condition for a random variable to follow Poisson distribution is that counts are
independent of each other and the probability that a count occurs in an interval is the same for all
intervals. The mean and the variance of the Poisson distribution are the same, and the standard
deviation is the square root of the mean; hence µ = σ² and σ = √µ = √σ².
The Poisson distribution can be an approximation to the binomial when p is equal to or less than
0.1, and the sample size n is fairly large (generally, n >= 16) by using np as the mean of the
Poisson distribution. Considering f(x) as the probability of x occurrences in the sample/interval,
λ as the mean number of counts in an interval (where λ > 0), x as the number of defects/counts in
the sample/interval and e as a constant approximately equal to 2.71828, the equation for the
Poisson distribution is

f(x) = (λ^x e^(-λ)) / x!
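As an illustration, suppose a circuit board averages λ = 1.2 defects (an assumed value, not from
the text); the Poisson probabilities of finding exactly 0 or exactly 2 defects are then:

from scipy.stats import poisson

lam = 1.2                     # assumed mean defects per board
print(poisson.pmf(0, lam))    # P(X = 0) = e**-1.2, about 0.301
print(poisson.pmf(2, lam))    # P(X = 2), about 0.217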
Normal Distribution - A distribution is said to be normal when most of the observations are
clustered around the mean. It charts a data set of which most of the data points are concentrated
around the average (mean) in a symmetrical manner, thus forming a bell-shaped curve. The
normal distribution’s shape is unique in that the most frequently occurring value is in the middle
of the range and other probabilities tail off symmetrically in both directions. The normal
distribution is used for continuous (measurement) data that is symmetric about the mean. The
graph of the normal distribution depends on the mean and the variance. When the variance is
large, the curve is short and wide; when the variance is small, the curve is tall and narrow.
The normal distribution is also called the Gaussian distribution. The standard normal
distribution is the special case whose population mean µ is zero and whose population variance
σ² equals one, where σ is the standard deviation. The normal probability density function is

f(x) = (1 / (σ√(2π))) e^(-(x-µ)² / (2σ²))

For the normal distribution, approximately 68.27% of the area under the curve lies between
µ − σ and µ + σ.
Z-transformation - The shape of the normal distribution depends on two factors, the mean and
the standard deviation. Every combination of µ and σ represents a unique normal distribution.
Based on the mean and the standard deviation, the complexity involved in the normal
distribution can be simplified by converting it into the simpler z-distribution. This process leads
to the standardized normal distribution, Z = (X − µ)/σ. Because of the complexity of the normal
distribution, the standardized normal distribution is often used instead.
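A brief sketch of the transformation, using illustrative numbers of our own choosing:

from scipy.stats import norm

x, mu, sigma = 13.0, 10.0, 2.0    # illustrative values
z = (x - mu) / sigma              # Z = (X - mu) / sigma = 1.5
print(z, norm.cdf(z))             # P(X < 13) = P(Z < 1.5), about 0.9332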
Chi-Square Distribution - The chi-square (χ2) distribution is used when testing a population
variance against a known or assumed value of the population variance. It is skewed to the right
or with a long tail toward the large values of the distribution. The overall shape of the
distribution will depend on the number of degrees of freedom in a given problem. The degrees of
freedom are 1 less than the sample size. It is formed by adding the squares of standard normal
random variables. For example, if z1, z2, ..., zn are standard normal random variables, then the
following is a chi-square random variable (statistic) with n degrees of freedom

χ² = z1² + z2² + ... + zn²

The chi-square probability density function, where v is the degrees of freedom and Γ(x) is the
gamma function, is

f(x) = x^(v/2 - 1) e^(-x/2) / (2^(v/2) Γ(v/2)), for x > 0

A χ² distribution with 6 degrees of freedom, for example, shows this right-skewed shape.
Student t Distribution - It was developed by W.S. Gosset. The t distribution is used to
determine the confidence interval of the population mean and confidence statistics when
comparing the means of sample populations, but the degrees of freedom for the problem must
be known. The degrees of freedom are 1 less than the sample size.
The student’s t distribution is a symmetrical continuous distribution and similar to the normal
distribution, but the extreme tail probabilities are larger than for the normal distribution for
sample sizes of less than 31. The shape and area of the t distribution approaches towards the
normal distribution as the sample size increases. The t distribution can be used whenever
samples are drawn from populations possessing a normal, bell-shaped distribution. There is a
family of curves, one for each sample size from n =2 to n = 31.
F Distribution - The F distribution or F-test is a tool used for assessing the ratio of independent
variances or equality of variances from two normal populations. It is used in the Analysis of
Variance (ANOVA, a technique frequently used in the Design of Experiments to test for
significant differences in variance within and between test runs).
If U and V are the sample variances of independent random samples of size n and m taken from
normally distributed populations with variances w and z, then

F = (U/w) / (V/z)

is a random variable with an F distribution with v1 = n - 1 and v2 = m - 1 degrees of freedom.
The F statistic for two samples is

F = (s1)² / (s2)²

with (s1)² the variance of the first sample (n1 - 1 degrees of freedom in the numerator) and
(s2)² the variance of the second sample (n2 - 1 degrees of freedom in the denominator), given
two random samples drawn from a normal distribution.
The shape of the F distribution is non-symmetrical and will depend on the number of degrees of
freedom associated with (s1)2 and (s2)2. The distribution for the ratio of sample variances is
skewed to the right (the large values).
Geometric Distribution - It addresses the number of trials necessary before the first success. If
the trials are repeated k times until the first success, we would have k−1 failures. If p is the
probability for a success and q the probability for a failure, the probability of the first success
occurring at the kth trial is P(k, p) = p(q)^(k−1), with mean µ = 1/p and standard deviation
σ = √(q)/p.
Hypergeometric Distribution - The hypergeometric distribution applies when the sample (n) is
a relatively large proportion of the population (n >0.1N). The hypergeometric distribution is used
when items are drawn from a population without replacement. That is, the items are not returned
to the population before the next item is drawn out. The items must fall into one of two
categories, such as good/bad or conforming/nonconforming.
The hypergeometric distribution is similar in nature to the binomial distribution, except the
sample size is large compared to the population. The hypergeometric distribution determines the
probability of exactly x defects when n items are sampled from a population of N items
containing D defects. The equation is

P(x) = [C(D, x) C(N - D, n - x)] / C(N, n)

where C(a, b) = a! / (b!(a - b)!) denotes the number of combinations, x is the number of
nonconforming units in the sample (r is sometimes used here if dealing with occurrences), D is
the number of nonconforming units in the population, N is the finite population size and n is the
sample size.
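A minimal SciPy sketch with illustrative numbers (the population of 20, 5 nonconforming units,
and sample of 4 are our assumptions):

from scipy.stats import hypergeom

N, D, n = 20, 5, 4                  # population size, nonconforming in population, sample size
# scipy's argument order is (M, n, N) = (population size, successes, sample size)
print(hypergeom.pmf(1, N, D, n))    # P(x = 1) = C(5,1)*C(15,3)/C(20,4), about 0.470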
Bivariate Distribution - When two variables are distributed jointly the resulting distribution is
a bivariate distribution. Bivariate distributions may be used with either discrete or continuous
data. The variables may be completely independent or a covariance may exist between them.
The bivariate normal distribution is a commonly used version of the bivariate distribution which
may be used when there are two random variables. Its density, as presented by Freund (1962), is

f(x,y) = [1 / (2π σ1 σ2 √(1 - ρ²))] exp{ -1/(2(1 - ρ²)) [ ((x - µ1)/σ1)² - 2ρ((x - µ1)/σ1)((y - µ2)/σ2) + ((y - µ2)/σ2)² ] }

with
-∞ < x < ∞
-∞ < y < ∞
-∞ < µ1 < ∞
-∞ < µ2 < ∞
σ1 > 0, σ2 > 0
where µ1 and µ2 are the two population means, σ1² and σ2² are the two variances, and ρ is the
correlation coefficient of the random variables.
Exponential Distribution - It is used to analyze reliability, and to model items with a constant
failure rate. The exponential distribution is related to the Poisson distribution and is used to
determine the average time between failures or the average time between a number of
occurrences. The mean and the standard deviation are µ = 1/λ and σ = 1/λ.
For example, if there is an average of 0.50 failures per hour (discrete data - Poisson
distribution), then the mean time between failures (MTBF) is 1/0.50 = 2 hours (continuous data
- exponential distribution). In general, if the number of occurrences per interval follows a
Poisson distribution with rate λ, then the time between occurrences follows an exponential
distribution with mean 1/λ, and vice versa. The exponential distribution equation is
f(x) = λ e^(-λx) = (1/µ) e^(-x/µ), x ≥ 0

where µ is the mean (also sometimes referred to as θ), λ is the failure rate, which is the same as
1/µ, and x is the x-axis value. When this equation is integrated, it results in the cumulative
probability

P(X ≤ x) = 1 − e^(-λx)
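Using the 0.50-failures-per-hour example above, a short sketch:

from scipy.stats import expon

lam = 0.50                       # failures per hour
mtbf = 1 / lam                   # mean time between failures = 2 hours
p = expon.cdf(1.0, scale=mtbf)   # P(failure within 1 hour) = 1 - e**-0.5
print(mtbf, round(p, 4))         # 2.0, 0.3935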
Lognormal Distribution - The most common transformation is made by taking the natural
logarithm, but any base logarithm, such as base 10 or base 2, may be used. It is used to model
various situations such as response time, time-to-failure data, and time-to-repair data. The
lognormal distribution is skewed right (a long right tail, with most of the data concentrated at
the lower values), and consists of the distribution of the random variable whose logarithm
follows the normal distribution.
The lognormal distribution assumes only positive values. When the data follows a lognormal
distribution, a transformation of data can be done to make the data follow a normal distribution.
Then probabilities, confidence intervals and tests of hypothesis can be conducted (if the data
follows a normal distribution). The lognormal probability density function is

f(x) = (1 / (xσ√(2π))) e^(-(ln x − µ)² / (2σ²)), x > 0

where µ is the location parameter or log mean and σ is the scale (or shape) parameter or
standard deviation of the natural logarithms of the individual values. (Figures: the lognormal
distribution, and the same data plotted after taking the natural logarithm.)
Weibull Distribution - The Weibull distribution is a widely used distribution for understanding
reliability and is similar in appearance to the lognormal. It can be used to measure time to fail,
time to repair, and material strength. The shape and dispersion of the Weibull distribution
depend on two parameters: β, the shape parameter, and θ, the scale parameter; both parameters
are greater than zero.
The Weibull distribution is one of the most widely used distributions in reliability and statistical
applications. The two-parameter and three-parameter Weibull are the common versions; the
difference is that the three-parameter Weibull distribution has a location parameter for cases
when there is some non-zero time to first failure. In general, the probabilities from a Weibull
distribution can be found from the cumulative Weibull function

F(x) = P(X ≤ x) = 1 − e^(-(x/θ)^β), x ≥ 0
where X is a random variable and x is an actual observation. The shape parameter (β) provides
the Weibull distribution with its flexibility:
If β = 1, the Weibull distribution is identical to the exponential distribution.
If β = 2, the Weibull distribution is identical to the Rayleigh distribution.
If 3 < β < 4, then the Weibull distribution approximates a normal distribution.
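A sketch using SciPy's weibull_min, with β and θ chosen purely for illustration:

from scipy.stats import weibull_min

beta, theta = 2.0, 1000.0            # assumed shape and scale (hours)
p = weibull_min.cdf(500.0, beta, scale=theta)
print(round(p, 4))                   # P(X <= 500) = 1 - e**-((500/1000)**2), about 0.2212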
5.6. Process Capability
Process capability is a predictable pattern of statistically stable behavior where the chance causes
of variation are compared to the engineering specifications. A capable process is a process whose
spread on the bell-shaped curve is narrower than the tolerance range.
Process Capability Indices
Process capability indices include Cp and Cpk, which identify the current state of the process
and provide statistical evidence for comparing after-adjustment results to the starting point.
Cp – It measures the ratio between the specification tolerance (USL - LSL) and the process
spread. A process which is normally distributed and exactly mid-way between the specification
limits yields a Cp of 1 if the spread is +/- 3 standard deviations. The usual accepted minimum
value for Cp is 1.33. Its major limitations are that it requires both an upper and a lower
specification and that it should be used only after the process is centered. It is computed as

Cp = (USL - LSL) / 6σ
It is used to identify the process's current state and measures the actual capability of a process to
operate within customer-defined specification limits; hence, it should be used when the data set
is from a controlled, continuous process. It needs standard deviation/sigma information along
with the USL and LSL specifications. Cp indicates the amount of variation in the process but
not the process's ability to align with the target.
Cpk – It measures the absolute distance of the mean to the nearest specification limit. Usually a
Cpk value between 1 (minimum) and 1.33 is desired. Its data requirements are similar to those
for Cp. Along with Cp, Cpk provides a common measurement for assigning an initial
process capability to center on specification limits. It is computed as

Cpk = min[ (USL - µ) / 3σ, (µ - LSL) / 3σ ]
Cp measures "can it fit" while Cpk measures "does it fit". If Cp = Cpk, then the process is
centered.
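A sketch computing both indices from summary statistics; the specification limits, mean, and
standard deviation below are illustrative assumptions:

usl, lsl = 12.03, 11.97    # illustrative specification limits
mu, sigma = 12.00, 0.01    # illustrative process mean and standard deviation

cp = (usl - lsl) / (6 * sigma)                 # "can it fit"
cpk = min(usl - mu, mu - lsl) / (3 * sigma)    # "does it fit"
print(cp, cpk)             # 1.0 1.0: the process is centered and minimally capable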
Cpm - It is also referred to as the Taguchi index. It is more accurate and reliable than the other
indices. It focuses on reducing the variation from a target value (T). Variation from the target T
is expressed as process variability or σ2 and process centering (µ - T), where µ= process average.
Cpm provides a common measurement assigning an initial process capability to a process for
aligning the mean of the sample to the target. It is computed as

Cpm = (USL - LSL) / (6 √(σ² + (µ - T)²))
where T is the target value, µ is the expected value and σ is the standard deviation. It is applied
when the target is not the center or mean of USL - LSL, or when establishing an initial process
capability during the Measure phase. A higher Cpm value indicates that the process output is
more likely to meet the specs and the target.
Sigma and Process Capability - When means and variances wander over time, a standard
deviation (symbolized by the Greek letter σ) is the most common way to describe how data in a
sample varies from its mean.
A Six Sigma goal is to have 99.99976% error-free work (reducing the defects to 3.4 per million).
By computing sigma and relating to a process capability index such as Ppk, it can be determined
the number of non-conformances (or failure rate) produced by the process. To compute sigma
(σ), use the following equation for a population

σ = √( Σ(x - µ)² / N )

where N is the number of items in the population, µ is the mean of the population data and x is
each data point.
Process Performance Indices
The most used process performance indices are Pp, Ppk, and Cpm which depict the present status
of the process and also act as an important tool for improvement of the process. These metrics
serve the same purpose as the process capability indices but differ in their approach.
Pp – It measures the ratio between the specification tolerance and process spread. It helps to
measure improvement over time as it signals where the process is in comparison to the
customer's specifications. It is computed as

Pp = (USL - LSL) / 6s

where s is the overall (long-term) sample standard deviation. It is used with continuous data
when the process is not in control. It depicts the amount of variation but not the alignment to the
target; for a process to be in control, the process must have only common causes for each of the
data points (no data points existing beyond the UCL or LCL).
Ppk – It measures the absolute distance of the mean to the nearest specification limit. It provides
an initial measurement to center on specification limits. It also examines variation within and
between subgroups. It is computed as

Ppk = min[ (USL - µ) / 3s, (µ - LSL) / 3s ]

It is used with continuous data when the process is not in control. It indicates the alignment to
the USL and LSL but not the amount of variation.
Short-term and long-term capability
Short-term capability is measured over a very short time period since it focuses on the machine's
ability based on design and quality of construction. By focusing on one machine with one
operator during one shift, it limits the influence of other outside long-term factors, including
operator, environmental conditions such as temperature and humidity, machine wear and
different material lots.
Thus, short-term capability can measure the machine's ability to produce parts with a specific
variability based on the customer's requirements. Short-term capability uses a limited amount of
data relative to a short time and the number of pieces produced to remove the effects of long-term components. If the machines are not capable of meeting the customer's requirements,
changes may have a limited impact on the machine's ability to produce acceptable parts.
Remember, though, that short-term capability only provides a snapshot of the situation. Since
short-term data does not contain any special cause variation (such as that found in long-term
data), short-term capability is typically rated higher.
When a process capability is determined using one operator on one shift, with one piece of
equipment, the process variation is relatively small. Control limits based on a short-term process
evaluation are closer together than control limits based on the long-term process.
A modified X-bar and R chart can be used for short runs, based on an initial 3 to 10 pieces, using
a calculated value compared with a critical value. Inflated D4 and A2 values are used to establish
control limits. Control limits are recalculated after additional groups are run.
The X and MR chart can also be used for small runs, with a limited amount of data. The X
represents individual data values, and the MR is the moving range, a measure of piece-to-piece
variability.
Process capability or Cpk values determined from either of these methods must be considered
preliminary information. As the number of data points increases, the calculated process
capability will approach the true capability.
Process Capability for non-normal data
Although effective analysis of data that is not distributed normally is possible, completing one of
the action steps below is beneficial for some projects to create a more useful data set
Divide the data into subsets according to business sub processes.
Mathematically transform the data and specification limits.
Turn the continuous data into discrete data.
The goal of a process capability calculation is to use a sample of items or transactions from a
business process to make a projection of the number of defects expected from the process in the
long term. The defect rate, expressed as DPMO (defects per million opportunities), is part of the
common language of Six Sigma. Expressing a business problem as a defect rate typically
provides a more direct way to communicate with stakeholders and members of the project team.
The process capability calculation is based on
The mean of the sample.
The standard deviation of the sample.
The known characteristics of the distribution.
The normal, or Gaussian, distribution is commonly observed in data involving a physical
measurement of some kind such as length of machined bars, weight of widgets or the average
number of manufacturing defects per week. It is less common in a transactional environment
when tracking financial information or cycle time.
The research for PCIs (Process Capability Indices) under non-normality has been grouped into
two main streams
Examination of PCIs and their performances for various underlying distributions
Construction of new generation process capability indices and/or development of new
approaches specially tailored for non-normally distributed outputs.
Although much effort has been put into these studies, there is not yet any standard approach or
standardized PCI accepted by academicians and practitioners when non-normal data is handled.
The former stream focuses on exploring the properties of different PCIs under different
conditions and provides comparisons between them and suggests some of them for specific
circumstances. These specific circumstances can be exemplified by different underlying process
distributions, one-sided or two-sided specification limits, the corresponding proportion of
non-conformity of PCIs, and so forth. The latter stream attempts to provide new approaches or new
PCIs which would be robustly applicable to non-normal data. However, there is no information
about how widely these new indices are utilized by practitioners, as some of the new indices
require rather involved statistical knowledge and might be confusing for practitioners.
The second stream of PCI research can be categorized into five groups (Shore, 1998; Kotz and
Johnson, 2002)
Data transformation methods - These approaches aim at transforming the non-normal process
data into normal process data. Several methods have been proposed for approximating
normally distributed data by using mathematical functions. Most known amongst these
methods are Johnson transformation system, which is based on derivation of the moments of
the distribution, and Box-Cox power transformation.
Development of quality control procedures for certain non-normal distributions - There are
control charts developed for log-normal and Weibull distributions (Shore, 1998). Lovelace
and Swain (2009) proposed to use midrange and ratio charts in order to track central
tendency and dispersion for log-normally distributed data and then estimate PCIs based on
empirical percentiles. However, in reality the distribution itself cannot be identified precisely,
or it requires a great amount of data for a solid identification. Combined with the unknown
parameters, which have to be estimated in order to compute PCIs, the use of quality control
charts and their respective PCIs for non-normal distributions is not highly favored by
practitioners (Shore, 1998; Kotz and Johnson, 2002).
Distribution fitting for empirical data - Distribution fitting methods use the empirical
process data, whose distribution is unknown. These methods then fit the empirical data set
with a non-normal distribution based on the parameters of the empirical distribution
(Shore, 1998).
Development of distribution-free procedures - These approaches either aim at establishing
distribution-free specification intervals or at adjusting PCIs through heuristic methods (Kotz
and Johnson, 2002). Chan et al. (1988) proposed obtaining distribution-free PCIs by using
distribution-free specification interval estimations, which are assumed to be independent of
the underlying distribution of the process. However, the construction of tolerance intervals is
derived from normal distribution intervals. Therefore, this approach was criticized by Kotz
and Johnson (1993) because of the dependency of a “distribution-free” approach on normal
distribution. A heuristic weighted variance method is proposed by Choi and Bai (1996). The
essence of weighted variance method is to divide a non-normal skewed distribution into two
different distributions such that the resulting distribution would be normally distributed with
same mean but different standard deviations. This segmentation requires no assumption of
the distribution; therefore it makes the approach distribution-free (Wu, et al., 1999).
Construction of new PCIs - Wright’s Index, Cs, has been proposed as an index which is
sensitive to skewness (Wright, 1995). The Cs index adds a skewness correction factor on the
Cpmk index by taking the skewness of the process data into account. A flexible PCI, Cjkp, is
introduced by Johnson, et al.(1994). The index is based on the Cpm index and is assumed to
be flexible because the asymmetry of a non-normal process is considered with regard to the
difference of variability below and above the target value, which is reflected upon the index
by treating the two specification limits differently (Deleryd, 1999; Wu and Swain, 2001).
Data does not always fit a normal distribution. One strategy is to make non-normal data resemble
normal data by using a transformation. A family of power transformations for positive data
values are attributed to G.E.P. Box and D.R. Cox. The Box-Cox power transformations are
given by

x(λ) = (x^λ - 1) / λ   for λ ≠ 0
x(λ) = ln(x)           for λ = 0

Given data observations x1, x2, ..., xn, select the power λ that maximizes the logarithm of the
likelihood function

L(λ) = -(n/2) ln[ Σ (xi(λ) - x̄(λ))² / n ] + (λ - 1) Σ ln(xi)

where the arithmetic mean of the transformed data is

x̄(λ) = (1/n) Σ xi(λ)
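SciPy performs the maximum-likelihood selection of λ directly; a minimal sketch on illustrative
right-skewed data:

import numpy as np
from scipy.stats import boxcox

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=0.5, size=200)   # skewed, strictly positive data

transformed, lam = boxcox(data)   # lam is the power that maximizes the log-likelihood
print(round(lam, 3))              # chosen power; a value near 0 suggests a log transform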
Process Capability for attributes data
The control chart represents the process capability, once special causes have been identified and
removed from the process. For attribute charts, capability is defined as the average proportion or
rate of nonconforming product.
For p charts, the process capability is the process average nonconforming, p̄. The proportion
conforming to specification, 1 - p̄, may be used.
For np charts, the process capability is the process average nonconforming, np̄.
For c charts, the process capability is the average number of nonconformities, c̄, in a sample
of fixed size n.
For u charts, the process capability is the average number of nonconformities per reporting
unit, ū.
The average proportion of nonconforming may be reported on a defects per million opportunities
scale by multiplying p̄ times 1,000,000.
Process Capability Studies
A process capability study attempts to quantify whether a process can consistently meet the
standards set by internal or external customers. Since this study yields a prediction, and
predictions should be made from relatively stable processes, a process capability study should
only be used in a relatively controlled and stable process environment.
Measuring capability can be challenging because it is, by definition, a point estimate. Every
process has unpredictable instability, which creates an inherent risk of estimate errors. Unless
confidence intervals are computed for the mean and standard deviation, there is no confidence
interval for capability, and therefore the risk cannot be quantified. The user must accept the risk of
variability related to instability. If the variation is due to a common cause, the output will still
form a distribution that is relatively stable as the variation is constant. In this case, a process
capability study may be completed but, if the variation is a result of a special cause, then the
output is not as stable and not as predictable. In this case, a process capability study may have
problems with its accuracy.
The objective of a process capability study is to establish a state of control over the
manufacturing process and then to maintain that state of control through time.
Study Procedure – It includes various steps, as
Select a process to study that is critical; it can be selected using several techniques, like a
Pareto analysis or a cause-and-effect diagram.
Verify or define the process parameters: verify what the process entails and its boundaries,
and gain agreement on the process's definition. Many of these steps are completed when
developing a process map.
Conduct a measurement systems analysis to ensure that the measurement methods produce
sound data.
Select a process capability analysis method like Cpk, Cp, Ppk and Pp.
Obtain the data and conduct an analysis.
Develop an estimate of the process capability. This estimate can be compared to the
standards set by internal or external customers.
After completing a process capability study, address any special causes of variation that can be
isolated. If able, eliminate the special causes that are not desirable. In some cases, a special cause
of variation may be desirable if it produces a better product or output. In that circumstance, if
possible, attempt to make the special cause a common cause to ensure the benefit is achieved
equally on all output.
Identifying Characteristics - Characteristics selected to be part of a process capability study
should meet certain requirements, as
The characteristic should be important relative to the quality of the product or process. A
process may have 15 characteristics, but only one or two should be selected for inclusion in
the process capability study.
The characteristics are Ys or outcomes to process steps that meet customer requirements. The
Ys are changed by changing the Xs or inputs.
The characteristic’s value should be adjustable.
The operating parameters that influence the characteristic should be able to be determined
and controlled.
Sometimes, the characteristic selected has a history of being the most difficult item to
control.
Identifying Specifications/Tolerances
The process specifications or tolerances are determined either by customer requirements,
industry standards, or the organization’s engineering department.
Developing Sampling Plans
If the process fits a normal distribution and is in statistical control, then the standard deviation
can be estimated from the control chart, for example as σ̂ = R̄/d2, where R̄ is the average
subgroup range and d2 is a control-chart constant that depends on the subgroup size.
For new processes, for example for a project proposal, a pilot run may be used to estimate the
process capability.
Specification Limits - Specification limits are set by the customer, and result from either
customer requirements or industry standards. The amount of variance (process spread) the
customer is willing to accept sets the specification limits. A customer wants a supplier to produce
12-inch rulers. Specifications call for an acceptable variation of +/- 0.03 inches on each side of
the target (12.00 inches). The customer is saying acceptable rulers will be from 11.97 to 12.03
inches. If the process is not meeting the customer's specification limits, two choices exist to
correct the situation:
Change the process's behavior.
Change the customer's specification (requires customer approval).
Examples of Specification Limits - Specification limits are commonly found in
Blueprints
Engineering drawings and specs
Industry standards
Self-imposed standards within a shop
Federally mandated standards (e.g., emissions controls)
Verifying Stability and Normality - If only common causes of variation are present in a
process, then the output of the process forms a distribution that is stable over time and is
predictable. If special causes of variation are present, the process output is not stable over time.
If the process is capable but not stable, stability may need to be improved to assure continued
capability. If the process is stable but not capable, we can be reasonably sure the estimate of the
lack of capability is correct, and the process must be improved to become capable. If the process
is neither stable nor capable, the lack of stability makes it difficult to estimate the level of
capability with any certainty. First, we need to reduce variation and remove special causes of
variation to improve stability so we will have reasonable estimates of the centering of the
process. Following that, we may need to re-center the process and/or further reduce process
variation.
Process Performance vs. Specification
The performance metric indices establish a controlled process, and then maintain that process
over time. Numbered values are a shortcut method indicating the quality level of a process in
parts per million (ppm). Once the status of the process is determined, the causes of variation
(based on statistical significance) may be identified. Courses of action might be to
Do nothing.
Change the specifications.
Center the process.
Reduce the variation in the Six Sigma process spread.
Accept the losses.
Process Limits - A stable process can be monitored to determine if changes that occur are due to
factors other than random variation. Such observation determines whether changes are necessary
and if any corrective actions are required. Process limits are the voice of the process based on the
variation of the products produced. The supplier collects data over time to determine the
variation in the units against the customer's specification. These data points collected over time
establish the process curve.
Having a predictable process producing 100 percent conformances is the ideal state. Day-to-day
control charts help identify assignable causes to any variations that occur. Control charts are
special types of time series charts in which control limits are calculated around the central
location, or mean, of the variable being plotted.
A process capability diagram displays both the voice of the process and the voice of the
customer. To draw one of these diagrams
Locate the mean of the distribution (X) and draw a normal curve that reflects the upper and
lower process limits (UPL, LPL) to the data.
Draw the customer specifications with the upper and lower limits for those specifications as
appropriate (USL, LSL). Note that a customer may only have a lower limit or just an upper
limit.
Process Performance Metric - It is a measure of an organization's activities and performance
and includes metrics like percentage defective which is defined as the (Total number of defective
parts)/(Total number of parts) X 100. So if there are 1,000 parts and 10 of those are defective, the
percentage of defective parts is (10/1000) X 100 = 1%. Other metrics have been discussed earlier
and are summarized as
Performance Metric and Description
Percentage Defective - What percentage of parts contain one or more defects?
Parts per Million (PPM) - What is the average number of defective parts per million? This is
the proportion defective from the "percentage defective" metric above expressed per
1,000,000 parts.
Defects per Unit (DPU) - What is the average number of defects per unit?
Defects per Opportunity (DPO) - What is the average number of defects per opportunity?
(where opportunity = number of different ways a defect can occur in a single part)
Defects per Million Opportunities (DPMO) - The defects per opportunity figure above
multiplied by 1,000,000.
Rolled Throughput Yield (RTY) - The yield stated as a percentage of the number of parts that
go through a multi-stage process without a defect.
Process Sigma - The sigma level associated with either the DPMO or PPM level above.
Cost of Poor Quality - The cost of defects: either internal (rework/scrap) or external
(warranty/product).
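These metrics chain together naturally; the sketch below uses illustrative counts and the
commonly used 1.5-sigma-shift convention for converting DPMO to a process sigma level:

from scipy.stats import norm

units, defects, opportunities = 1000, 10, 5    # illustrative counts

dpu = defects / units                          # defects per unit = 0.01
dpo = defects / (units * opportunities)        # defects per opportunity = 0.002
dpmo = dpo * 1_000_000                         # 2000
sigma_level = norm.ppf(1 - dpmo / 1_000_000) + 1.5    # about 4.38 with the 1.5 shift
print(dpu, dpmo, round(sigma_level, 2))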
6. ANALYZE PHASE
This phase is the start of the statistical analysis of the problem. It statistically reviews the
families of variation to determine which are significant contributors to the output. The
statistical analysis is done through the development of a theory, the null hypothesis; the analysis
will "fail to reject" or "reject" the theory. The families of variation and their contributions are
quantified, and relationships between variables are shown graphically and numerically to
provide the team direction for improvements. The main objectives of this phase are
Reduce the number of inputs (X’s) to a manageable number
Determine the presence of noise variables through Multi-Vari Studies
Plan first improvement activities
6.1. Modeling and Measuring Between Variables
Correlation and regression help in modeling and measuring relationships between variables.
Correlation coefficient
Correlation is a tool used with a continuous x and a continuous y. The Pearson correlation
coefficient (r) measures the linear relationship between x and y, as discussed earlier.
Causation is different from correlation: correlation is the mutual relation that exists
between two or more things, while causation is the fact that something causes an effect. The
correlation between two variables does not imply that one is a result of the other. The
correlation value ranges from -1 to 1. A value close to 1 signifies a positive relationship, with x
and y moving in the same direction; a value near -1 means they move in opposite directions;
and a value of zero means no linear relationship between x and y.
Confidence in a relationship is computed both by the correlation coefficient and by the number
of pairs in data. If there are very few pairs then the coefficient needs to be very close to 1 or –1
for it to be deemed ‘statistically significant’, but if there are many pairs then a coefficient closer
to 0 can still be considered 'highly significant'. The standard method used to measure the
'significance' of an analysis is the p-value.
For example to know the relationship between height and intelligence of people is significant, it
starts with the ‘null hypothesis’ which is a statement ‘height and intelligence of people are
unrelated’. The p-value is a number between 0 and 1 representing the probability that this data
would have arisen if the null hypothesis were true. In medical trials the null hypothesis is
typically of the form that the use of drug X to treat disease Y is no better than not using any drug.
The p-value is the probability of obtaining a test statistic result at least as extreme as the one that
was actually observed, assuming that the null hypothesis is true. Project team usually "reject the
null hypothesis" when the p-value turns out to be less than a certain significance level, often
0.05. For Pearson's correlation coefficient (r), the test statistic is t = r / √((1 - r²)/(N - 2)),
which follows a t-distribution with N - 2 degrees of freedom; the p-value is read from that
distribution.
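SciPy returns both the coefficient and its p-value in one call; the sketch below uses the five
(X, Y) pairs that appear in the regression example later in this section:

from scipy.stats import pearsonr

x = [1.00, 2.00, 3.00, 4.00, 5.00]
y = [1.00, 2.00, 1.30, 3.75, 2.25]
r, p = pearsonr(x, y)
print(round(r, 3), round(p, 3))    # r about 0.627, p about 0.258 (not significant at 0.05)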
Regression
Linear Regression - When the input and output variables are both continuous, regression and
correlation are used to examine the relationship between the two variables. Determining how
the predicted or dependent variable (the response variable, the variable to be estimated) reacts
to the variations of the predictor or independent variable (the variable that explains the change)
involves first determining any relationship between them and its importance. Regression
analysis builds a mathematical model that helps make predictions about the impact of variable
variations.
Usually, there is more than one independent variable causing variations of a dependent variable
like changes in the volume of cars sold depends on the price of the cars, the gas mileage, the
warranty, etc. But the importance of all these factors in the variation of the dependent variable
(the number of cars sold) is disproportional. Hence, the project team should concentrate on one
important factor instead of analyzing all the competing factors.
In simple linear regression, prediction of scores on one variable is done from the scores on a
second variable. The variable to predict is called the criterion variable and is referred to as Y.
The variable to base predictions on is called the predictor variable and is referred to as X. When
there is only one predictor variable, the prediction method is called simple regression. In simple
linear regression, the predictions of Y when plotted as a function of X form a straight line.
As an example, data for X and Y are listed below, showing a positive relationship between X
and Y. For predicting Y from X, the higher the value of X, the higher the prediction of Y.
X     Y
1.00  1.00
2.00  2.00
3.00  1.30
4.00  3.75
5.00  2.25
Linear regression consists of finding the best-fitting straight line through the points. The
best-fitting line is called a regression line, and it consists of the predicted score on Y for each
possible value of X. The vertical distances from the points to the regression line represent the
errors of prediction: a point lying very near the regression line has a small error of prediction,
while a point lying far from the regression line has a large error of prediction.
The error of prediction for a point is the value of the point minus the predicted value (the value
on the line). The table below shows the predicted values (Y') and the errors of prediction
(Y-Y'). For example, the first point has a Y of 1.00 and a predicted Y (called Y') of 1.21;
hence, its error of prediction is -0.21.
X     Y     Y'     Y-Y'    (Y-Y')²
1.00  1.00  1.210  -0.210  0.044
2.00  2.00  1.635   0.365  0.133
3.00  1.30  2.060  -0.760  0.578
4.00  3.75  2.485   1.265  1.600
5.00  2.25  2.910  -0.660  0.436
The most commonly-used criterion for the best-fitting line is the line that minimizes the sum of
the squared errors of prediction. That is the criterion that was used to find the line in the figure.
The last column in the above table shows the squared errors of prediction. The sum of the
squared errors of prediction shown in the above table is lower than it would be for any other
regression line.
The regression equation uses the mathematical equation for a straight line, y = b0 + b1X, where
b0 is the y-intercept (the value of y when X = 0) and b1 is the slope of the line, with the
and possesses a normal probability distribution. For calculations are based on the statistics,
assuming MX is the mean of X, MY is the mean of Y, sX is the standard deviation of X, sY is
the standard deviation of Y, and r is the correlation between X and Y, a sample data is as
MX
3
MY
2.06
sX
1.581
sY
1.072
r
0.627
The slope (b) can be calculated as b = r sY/sX and the intercept (A) as A = MY - bMX. For the
above data, b = (0.627)(1.072)/1.581 = 0.425 and A = 2.06 - (0.425)(3) = 0.785. The calculations
have all been shown in terms of sample statistics rather than population parameters. The
formulas are the same but would use the parameter values for the means, standard deviations,
and the correlation.
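The same slope and intercept fall out of a direct least-squares fit; a sketch with NumPy on the
data above:

import numpy as np

x = np.array([1.00, 2.00, 3.00, 4.00, 5.00])
y = np.array([1.00, 2.00, 1.30, 3.75, 2.25])

b, a = np.polyfit(x, y, 1)         # slope and intercept of the least-squares line
print(round(b, 3), round(a, 3))    # 0.425, 0.785: matches b = r*sY/sX and A = MY - b*MX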
Least Squares Method – In this method, for computing the values of b1 and b0, the vertical
distance between each point and the line, called the error of prediction, is used. The line that
generates the smallest errors of prediction will be the least squares regression line. The values of
b1 and b0 are computed as

b1 = Σ(xi - x̄)(yi - ȳ) / Σ(xi - x̄)²
b0 = ȳ - b1 x̄
The P-value is determined by referring to a t-distribution with n-2 degrees of freedom.
Simple Linear Regression Hypothesis Testing - Hypothesis tests can be applied to determine
whether the independent variable (x) is useful as a predictor for the dependent variable (y). The
following are the steps using the cost per transaction example for hypothesis testing in simple
regression
Determine if the conditions for the application of the test are met. There is a population
regression equation y = β0 + β1x, so that for a given value of x the prediction equation is
ŷ = b0 + b1x. Given a particular value for x, the distribution of y-values is normal. The
distributions of y-values have equal standard deviations. The y-values are independent.
Establish hypotheses.
H0: b1 = 0 (the equation is not useful as a predictor of y - cost per transaction)
Ha: b1 ≠ 0 (the equation is useful as a predictor of y - cost per transaction)
Decide on a value of alpha.
Find the critical t values. Use the t–table and find the critical values with +/- tα/2 with n – 2
df.
Calculate the value of the test statistic t, the estimated slope divided by its standard error:
t = b1 / s(b1).
Interpret the results. If the test statistic is beyond one of the critical values (greater than tα/2 or
less than -tα/2), reject the null hypothesis; otherwise, do not reject.
Multiple Linear Regression - Multiple linear regression expands the simple linear regression
model to allow for more than one independent or predictor variable. The general form of the
equation is y = b0 + b1x1 + b2x2 + ... + bnxn + e where (b0, b1, b2, ...) are the coefficients,
referred to as partial regression coefficients. The equation may be interpreted as the amount of
change in y for each unit increase in a variable xi when all other xs are held constant. The
hypotheses for multiple regression are H0: b1 = b2 = ... = bn = 0 and Ha: bi ≠ 0 for at least one i.
It is an extension of linear regression to more than one independent variable, so a higher
proportion of the variation in Y may be explained. The first-order linear model is

y = β0 + β1x1 + β2x2 + ... + βkxk + ε

and the second-order linear model (shown here for two predictors) adds squared and interaction
terms

y = β0 + β1x1 + β2x2 + β11x1² + β22x2² + β12x1x2 + ε

R², the multiple coefficient of determination, has values in the interval 0 <= R² <= 1.
Source       DF          SS          MS
Regression   k           SSR         MSR = SSR/k
Error        n - (k+1)   SSE         MSE = SSE/[n - (k+1)]
Total        n - 1       Total SS

where k is the number of predictor variables.
Multivariate tools
Multivariate analysis encompasses the simultaneous observation and analysis of more than one
outcome variable.
There are many different tools used for multivariate analysis, which are
Principal components analysis (PCA) creates a new set of orthogonal variables that contain
the same information as the original set. It rotates the axes of variation to give a new set of
orthogonal axes, ordered so that they summarize decreasing proportions of the variation.
Discriminant analysis, or canonical variate analysis, attempts to establish whether a set of
variables can be used to distinguish between two or more groups of cases.
Factor analysis is similar to PCA but allows the user to extract a specified number of
synthetic variables, fewer than the original set, leaving the remaining unexplained variation
as error. The extracted variables are known as latent variables or factors; each one may be
supposed to account for co-variation in a group of observed variables.
Multivariate analysis of variance (MANOVA) extends the analysis of variance to cover cases
where there is more than one dependent variable to be analyzed simultaneously: see also
MANCOVA.
Multivariate regression analysis attempts to determine a formula that can describe how
elements in a vector of variables respond simultaneously to changes in others. For linear
relations, regression analyses here are based on forms of the general linear model.
Correspondence analysis (CA), or reciprocal averaging, finds (like PCA) a set of synthetic
variables that summarise the original set. The underlying model assumes chi-squared
dissimilarities among records (cases).
Multidimensional scaling comprises various algorithms to determine a set of synthetic
variables that best represent the pairwise distances between records. The original method is
principal co-ordinates analysis (PCoA, based on PCA).
Clustering systems assign objects into groups (called clusters) so that objects (cases) from the
same cluster are more similar to each other than objects from different clusters.
Multivari studies
Multi-Vari Analysis is a tool that graphically displays patterns of variation. These studies are
used to identify possible X’s and/or families of variation. These families of variation can
frequently hide within a subgroup, between groups or over time.
It is a technique for viewing multiple sources of process variation. Different sources of variation
are categorized into families of related causes and quantified to reveal the largest causes.
Multi-vari analysis provides a breakdown showing, for example, that machine B on shift 1 is
causing the most variation. It won't quantify the variation, just show where it is. Multi-vari is
the perfect tool to determine where the variability is coming from in a process (lot-to-lot,
shift-to-shift, machine-to-machine, etc.), because it does not require manipulating the
independent variables (or process parameters) as with design of experiments. Because it enables
analyzing the effects of multiple factors, multi-vari analysis is widely used in six sigma projects.
Also, the effect of categorical type inputs can be displayed on a response on a multi-vari chart. It
is one of the tools used to reduce the trivial many inputs to the vital few. In other words it is used
to identify possible Xs or families of variation, such as variation within a subgroup, between
subgroups, or over time. Multi vari charts are useful for quickly identifying positional, temporal
and cyclical variation in processes.
Attributes data analysis
Multiple linear regression is applicable for continuous variables, but for discrete (attribute)
response variables the following tools are used
Binary logistic regression - Used to model a binary (two-level) response—for example, yes
or no.
Nominal (unordered) logistic regression - Used to model a multilevel response with no
ordering—for example, eye color with levels brown, green, and blue. Such a response is also
called polytomous, polychotomous, or multinomial.
Ordinal (ordered) logistic regression - Used to model an ordered response—for example,
low, medium, or high. Might also be called the ordinal multinomial logit model. Ordinal
logistic models take into account the ordered nature of the response, which can result in
simpler, more powerful models. Typical response functions that are modeled are cumulative
logits, adjacent-category logits, or continuation-ratio logits resulting in ordinal logistic
models known as the cumulative logit model, the adjacent-category logit model, and the
continuation-ratio logit model. The proportional odds model and the partial proportional odds
model are special cases of the cumulative logit model. If the spacing between levels of the
ordinal response scale is known, so that numerical scores can reasonably be assigned to the
response levels, then a mean response model can be fit.
Logistic regression can be thought of as consisting of a mathematical transformation of a standard regression model. Remember that one solution to outliers or heteroscedasticity problems is to transform X or Y or both by taking the square root or the log, etc. The transformation used in logistic regression is a transformation of the predicted scores of Y (Ŷ), which is different. The transformation in logistic regression is called the logit transformation (so logistic is sometimes referred to as a logit model). Instead of using Ŷ directly, the log of the odds, ln(p/(1−p)), is modeled.
The primary reason the logit transformation is used is that with a binary response the residuals are not normally distributed and their variance is not constant across values of X. Because Y has only two possible values, 0 and 1, the residuals have only two possible values for each X. With only two possible values, the residuals cannot be normally distributed. Moreover, the best curve to describe the relationship between X and the probability of Y is not likely to be a straight line, but rather an S-shape.
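As a minimal sketch (the burn-in data and variable names are invented, and scikit-learn is just one of several libraries that fit this model), a binary logistic regression is linear in the logit:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented data: burn-in hours (X) vs. whether the unit passed final test (1/0)
X = np.array([[1], [2], [3], [4], [5], [6], [7], [8]])
y = np.array([0, 0, 0, 1, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

# The fitted model is linear in the logit: ln(p/(1-p)) = b0 + b1*x
b0, b1 = model.intercept_[0], model.coef_[0][0]
print(f"logit(p) = {b0:.3f} + {b1:.3f}*x")
print("P(pass | x = 4.5):", model.predict_proba([[4.5]])[0, 1])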
Generalized linear models include a link function that relates the mean of the response to the
linear predictors in the model. The general form of the link function is as
g(µi) = Xi'β
Link functions enables fitting a wide variety of response models. Different link functions used
for different types of response variables.
Models                           Name                              Link Function, g(µi)
Binomial, Ordinal, Nominal       logit                             ln(µi/(1−µi))
Binomial, Ordinal                normit (probit)                   Φ−1(µi)
Binomial, Ordinal                gompit (complementary log-log)    ln(−ln(1−µi))
Poisson                          natural log                       ln(µi)
Poisson                          square root                       √µi
Poisson                          identity                          µi
Where, g(µi) is link function, µi is the mean response of the ith row, Xi is the vector of predictor
variables for the ith row, β is the vector of coefficients associated with the predictors and Φ−1(·) is
the inverse cumulative distribution function of the normal distribution
6.2. Hypothesis Testing
A hypothesis is a theory about the relationships between variables. Statistical analysis is used to
determine if the observed differences between two or more samples are due to random chance or
to true differences in the samples.
Terminology
A hypothesis is a value judgment, a statement based on an opinion about a population. It is
developed to make an inference about that population. Based on experience, a design engineer
can make a hypothesis about the performance or qualities of the products she is about to produce,
but the validity of that hypothesis must be ascertained to confirm that the products are produced
to the customer’s specifications. A test must be conducted to determine if the empirical evidence
does support the hypothesis.
Null Hypothesis - The first step consists in stating the hypothesis. It is denoted by H0, and is read "H sub zero." For example, for a claim that 20 percent of defects are traced to the CPU socket, the statement is written as H0: µ = 20%. A null hypothesis assumes no difference exists between or among the parameters being tested and is often the opposite of what is hoped to be proven through the testing. The null hypothesis is also represented by the symbol Ho.
Alternate Hypothesis - If the hypothesis is not rejected, exactly 20 percent of the defects will actually be traced to the CPU socket. But if enough statistical evidence is provided that the null hypothesis is untrue, an alternate hypothesis should be assumed to be true. That alternate hypothesis, denoted H1 (or Ha), tells what should be concluded if H0 is rejected: H1: µ ≠ 20%. An alternate hypothesis assumes that at least one difference exists between or among the parameters being tested.
Test Statistic - The decision made on whether to reject H0 or fail to reject it depends on the
information provided by the sample taken from the population being studied. The objective here
is to generate a single number that will be compared to H0 for rejection. That number is called
the test statistic. To test the mean µ, the Z formula is used when the sample size is greater than 30,
Z = (x̄ − µ0) / (σ/√n)
and the t formula is used when the samples are smaller,
t = (x̄ − µ0) / (s/√n)
where x̄ is the sample mean, σ is the population standard deviation, s is the sample standard deviation and n is the sample size.
The level of risk - It addresses the risk of failing to reject a hypothesis when it is actually false,
or rejecting a hypothesis when it is actually true.
Type I error (False Positive) – It occurs when one rejects the null hypothesis when it is true. The probability of a type I error is the level of significance of the test of hypothesis, and is denoted by alpha (α). Usually a one-tailed test of hypothesis is used when one talks about type I error or alpha error.
Type II error (False Negative) – It occurs when one fails to reject the null hypothesis when the alternative hypothesis is true. The probability of a type II error is denoted by beta (β). One cannot evaluate the probability of a type II error when the alternative hypothesis is of the form µ > 180, but often the alternative hypothesis is a competing hypothesis of the form: the mean of the alternative population is 300 with a standard deviation of 30, in which case one can calculate the probability of a type II error.
Decision Rule Determination - The decision rule determines the conditions under which the
null hypothesis is rejected or not. The critical value is the dividing point between the area where
H0 is rejected and the area where it is assumed to be true.
Decision Making - Only two decisions are considered: either the null hypothesis is rejected or it is not. The decision to reject a null hypothesis or not depends on the level of significance, which often varies between 0.01 and 0.10. Even when we fail to reject the null hypothesis, we never say "we accept the null hypothesis," because failing to reject a null hypothesis that was assumed true does not equate to proving its validity.
Testing for a Population Mean - When the sample size is greater than 30 and σ is known, the Z formula given above, Z = (x̄ − µ0)/(σ/√n), can be used to test a null hypothesis about the mean.
Phrasing - In hypothesis testing, the phrase "to accept" the null hypothesis is not typically used. In statistical terms, the Six Sigma Black Belt can reject the null hypothesis, thus accepting the alternate hypothesis, or fail to reject the null hypothesis. This phrasing is similar to a jury's stating that the defendant is not guilty, not that the defendant is innocent.
One Tail Test - In a one-tailed t-test, all the area associated with α is placed in either one tail or the other. Selection of the tail depends upon the direction (+ or −) in which the results of the experiment would fall if they came out as expected. The selection of the tail must be made before the experiment is conducted and analyzed.
Two Tail Test - If a null hypothesis is established to test whether a population shift has
occurred, in either direction, then a two tail test is required. The allowable α error is generally
divided into two equal parts.
Statistical vs Practical Significance
Practical Significance - Practical significance is the amount of difference, change or
improvement that will add practical, economic or technical value to an organization.
Statistical Significance - Statistical significance is the magnitude of difference or change
required to distinguish between a true difference, change or improvement and one that could
have occurred by chance. The larger the sample size, the more likely the observed difference is
close to the actual difference.
For a project to succeed, both practical and statistical improvements are required. It is possible to find a difference to be statistically significant but not of practical significance. Because of the limitations of cost, risk, timing, etc., the project team cannot implement practical solutions for all statistically significant Xs. Determining practical significance in a Six Sigma project is not the responsibility of the Black Belt alone. The project team needs to collaborate with others such as the project sponsor and finance manager to help determine the return on investment (ROI) associated with the project objective.
Sample Size
It has been assumed that the sample size (n) for hypothesis testing has been given and that the
critical value of the test statistic will be determined based on the α error that can be tolerated.
The sample size (n) needed for hypothesis testing depends on the desired type I (α) and type II
(β ) risk, the minimum value to be detected between the population means (µ - µ0) and the
variation in the characteristic being measured (S or σ).
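A common large-sample formula combining these quantities for a two-sided test of a mean is n = ((Z(1−α/2) + Z(1−β))σ/δ)², where δ is the minimum shift to detect. A minimal sketch, assuming this formula:

from math import ceil
from scipy.stats import norm

def sample_size_mean(alpha, beta, delta, sigma):
    # n = ((Z(1-alpha/2) + Z(1-beta)) * sigma / delta)^2, rounded up
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(1 - beta)
    return ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

# Detect a shift of 2 units when sigma = 5, with alpha = 0.05 and beta = 0.10
print(sample_size_mean(0.05, 0.10, delta=2, sigma=5))  # 66 with these inputs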
Point and Interval Estimates
Estimation refers to the process by which one makes inferences about a population, based on
information obtained from a sample. An estimate of a population parameter may be expressed in
two ways, as
Point estimate - A point estimate of a population parameter is a single value of a statistic. For example, the sample mean x̄ is a point estimate of the population mean µ. Similarly, the sample proportion p is a point estimate of the population proportion P.
Interval estimate - An interval estimate is defined by two numbers between which a population parameter is said to lie. For example, a < µ < b is an interval estimate of the population mean µ. It indicates that the population mean is greater than a but less than b.
Unbiased Estimator - When the mean of the sampling distribution of a statistic is equal to a
population parameter, that statistic is said to be an unbiased estimator of the parameter. If the
average estimate of several random samples equals the population parameter, the estimate is
unbiased. For example, if credit card holders in a city were repetitively random sampled and
questioned what their account balances were as of a specific date, the average of the results
across all samples would equal the population parameter. If however, only credit card holders in
one neighborhood were sampled, the average of the sample estimates would be a biased
estimator of all account balances for the city and would not equal the population parameter.
Efficient Estimator - It is an estimator that estimates the quantity of interest in some “best
possible” manner. The notion of “best possible” relies upon the choice of a particular loss
function — the function which quantifies the relative degree of undesirability of estimation
errors of different magnitudes. The most common choice of the loss function is quadratic,
resulting in the mean squared error criterion of optimality.
Prediction Interval - It is an estimate of an interval in which future observations will fall, with a
certain probability, given what has already been observed. Prediction intervals are often used in
regression analysis.
Tests for means, variances and proportions
Confidence Intervals for the Mean - The confidence interval for the mean for continuous data with large samples is
x̄ ± Z(α/2) σ/√n
where Z(α/2) is the normal distribution value for a desired confidence level. If a relatively small sample is used (n < 30) then the t distribution must be used. The confidence interval for the mean for continuous data with small samples is
x̄ ± t(α/2) s/√n
The t distribution value for a desired confidence level, t(α/2), uses (n − 1) degrees of freedom.
Confidence Intervals for Variation - The confidence interval for the variance is based on the Chi-Square distribution. The formula is
(n − 1)S²/χ²(α/2) ≤ σ² ≤ (n − 1)S²/χ²(1−α/2)
where S² is the point estimate of variance and χ²(α/2), χ²(1−α/2) are the chi-square table values for (n − 1) degrees of freedom.
Confidence Intervals for Proportion - For large sample sizes, with n(p) and n(1−p) greater than or equal to 4 or 5, the normal distribution can be used to calculate a confidence interval for a proportion. The following formula is used
p ± Z(α/2) √(p(1−p)/n)
where Z(α/2) is the appropriate confidence level from a Z table.
Population Variance - The confidence interval equation is the chi-square interval given above, (n − 1)S²/χ²(α/2) ≤ σ² ≤ (n − 1)S²/χ²(1−α/2).
Population Standard Deviation - The equation is the square root of the variance interval, √((n − 1)S²/χ²(α/2)) ≤ σ ≤ √((n − 1)S²/χ²(1−α/2)).
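These intervals can be computed directly; a minimal sketch with invented measurement data:

import numpy as np
from scipy import stats

data = np.array([4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.9, 5.1])  # invented measurements
n, xbar, s = len(data), data.mean(), data.std(ddof=1)
conf = 0.95

# Small-sample CI for the mean: x-bar +/- t(alpha/2) * s / sqrt(n)
t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
print("mean CI:", (xbar - t * s / np.sqrt(n), xbar + t * s / np.sqrt(n)))

# CI for the variance from the chi-square distribution
lo = (n - 1) * s**2 / stats.chi2.ppf(1 - (1 - conf) / 2, df=n - 1)
hi = (n - 1) * s**2 / stats.chi2.ppf((1 - conf) / 2, df=n - 1)
print("variance CI:", (lo, hi))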
The statistical tests for means usually are
One-sample Z-test: for a population mean (large sample)
Two-sample Z-test: for comparing two population means (large samples)
One-sample T-test: single mean (one sample versus a historic mean or target value)
Two-sample T-test: multiple means (a sample from each of two categories)
One-Sample Z-Test for Population Mean - The One-sample Z-test for population mean is used when a large sample (n ≥ 30) is taken from a population and we want to compare the mean of the population to some claimed value. This test assumes the population standard deviation is known or can be reasonably estimated by the sample standard deviation and uses the Z distribution. The null hypothesis is Ho: µ = µ0, where µ0 is the claimed value compared to the sample.
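A minimal sketch of this test (the helper function and the simulated sample are hypothetical):

import numpy as np
from scipy.stats import norm

def one_sample_z(data, mu0, sigma):
    # Two-sided one-sample Z-test; sigma is the known population standard deviation
    data = np.asarray(data)
    z = (data.mean() - mu0) / (sigma / np.sqrt(len(data)))
    p = 2 * norm.sf(abs(z))  # two-tailed p-value
    return z, p

rng = np.random.default_rng(1)
sample = rng.normal(loc=102, scale=10, size=40)  # simulated large sample
z, p = one_sample_z(sample, mu0=100, sigma=10)
print(f"z = {z:.2f}, p = {p:.4f}")  # reject Ho at alpha = 0.05 if p < 0.05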
Two-Sample Z-Test for Population Mean - The Two-sample Z-test for population mean is used after taking two large samples (n ≥ 30) from two different populations in order to compare them. This test uses the Z-table and assumes the population standard deviations are known or can be estimated by the sample standard deviations. The null hypothesis is Ho: µ1 = µ2.
One-Sample T-test - The One-sample T-test is used when a small sample (n < 30) is taken from a population and the mean of the population is to be compared to some claimed value. This test assumes the population standard deviation is unknown and uses the t distribution. The null hypothesis is Ho: µ = µ0, where µ0 is the claimed value compared to the sample. The test statistic is
t = (x̄ − µ0) / (s/√n)
where x̄ is the sample mean, s is the sample standard deviation and n is the sample size.
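SciPy provides this test directly; the cycle-time data below are invented:

import numpy as np
from scipy import stats

cycle_times = np.array([4.2, 3.8, 4.5, 4.1, 3.9, 4.4, 4.0, 4.3])  # invented, n < 30
t_stat, p_value = stats.ttest_1samp(cycle_times, popmean=4.0)  # Ho: mu = 4.0
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")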
Two-Sample T-test - The Two-sample T-test is used when two small samples (n < 30) are taken from two different populations and compared. There are two forms of this test: assumption of equal variances and assumption of unequal variances. The null hypothesis is Ho: µ1 = µ2. The test statistic with the assumption of equal variances is
t = (x̄1 − x̄2) / (Sp √(1/n1 + 1/n2))
where the pooled variance is
Sp² = ((n1 − 1)s1² + (n2 − 1)s2²) / (n1 + n2 − 2)
With the assumption of unequal variances, the test statistic is
t = (x̄1 − x̄2) / √(s1²/n1 + s2²/n2)
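Both forms are available in SciPy through the equal_var flag; the samples below are invented:

import numpy as np
from scipy import stats

line_a = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2])
line_b = np.array([10.6, 10.4, 10.8, 10.5, 10.7, 10.3])

# equal_var=True pools the variances; False gives the unequal-variance (Welch) form
t_eq, p_eq = stats.ttest_ind(line_a, line_b, equal_var=True)
t_uneq, p_uneq = stats.ttest_ind(line_a, line_b, equal_var=False)
print(f"pooled: t = {t_eq:.2f}, p = {p_eq:.4f}")
print(f"Welch : t = {t_uneq:.2f}, p = {p_uneq:.4f}")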
Analysis of Variance (ANOVA)
Sometimes it is essential to compare three or more population means at once with the
assumptions as the variance is the same for all factor treatments or levels, the individual
measurements within each treatment are normally distributed and the error term is considered a
normally and independently distributed random effect. With analysis of variance, the variations
in response measurement are partitioned into components that reflect the effects of one or more
independent variables. The variability of a set of measurements is proportional to the sum of squares of deviations used to calculate the variance.
Analysis of variance partitions the sum of squares of deviations of individual measurements from
the grand mean (called the total sum of squares) into parts: the sum of squares of treatment
means plus a remainder which is termed the experimental or random error.
ANOVA is a technique to determine if there are statistically significant differences among group means by analyzing group variances. An ANOVA is an analysis technique that evaluates the importance of several factors of a set of data by subdividing the variation into component parts. ANOVA tests determine whether the means are different, not which of the means are different: Ho: µ1 = µ2 = µ3 and Ha: at least one of the group means is different from the others.
ANOVA extends the Two-sample t-test for testing the equality of two population means to a
more general null hypothesis of comparing the equality of more than two means, versus them not
all being equal.
One-Way ANOVA
Terms used in ANOVA
Degrees of Freedom (df) - The number of independent conclusions that can be drawn from
the data.
SSFactor - It measures the variation of each group mean to the overall mean across all
groups.
SSError - It measures the variation of each observation within each factor level to the mean
of the level.
Mean Square Error (MSE) - It is SSError/df and is an estimate of the within-treatment variance.
F-test statistic - The ratio of the variance between treatments to the variance within treatments = MSFactor/MSE. If F is near 1, then the treatment means are no different (the p-value is large).
P-value - It is the smallest level of significance that would lead to rejection of the null
hypothesis (Ho). If α = 0.05 and the p-value ≤ 0.05, then reject the null hypothesis and
conclude that there is a significant difference and if α = 0.05 and the p-value > 0.05, then fail
to reject the null hypothesis and conclude that there is not a significant difference.
One-way ANOVA is used to determine whether data from three or more populations formed by the treatment options of a single-factor designed experiment indicate that the population means are different. The assumptions in using One-way ANOVA are: all samples are random samples from their respective populations and are independent, distributions of outputs for all treatment levels follow the normal distribution, and variances are equal (homogeneity of variances).
Steps for computing one-way ANOVA are
Establish the hypotheses. Ho: µ1= µ2= µ3 and Ha: At least one of the group means is
different from the others.
Calculate the test statistic: calculate the average of each group (for example, each call center) and the overall average of all the samples.
Calculate SSFactor as Σ nj (x̄j − x̄)², where x̄j is the mean of group j, nj is its sample size and x̄ is the overall (grand) mean.
Calculate SSError as Σj Σi (xij − x̄j)².
Calculate SSTotal as Σj Σi (xij − x̄)² = SSFactor + SSError.
Construct the ANOVA table, in which degrees of freedom (df) are calculated for the group, error and total sums of squares.
Determine the critical value. Fcritical is taken from the F distribution table.
Draw the statistical conclusion. If Fcalc < Fcritical, fail to reject the null hypothesis; if Fcalc > Fcritical, reject the null hypothesis.
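A minimal sketch of these steps using SciPy, with invented call-center data (SciPy already reports the p-value, so the Fcritical comparison is shown only to mirror the steps above):

from scipy import stats

# Invented call-handling times (minutes) for three call centers
center_1 = [5.2, 4.8, 5.5, 5.0, 4.9]
center_2 = [5.9, 6.1, 5.7, 6.0, 5.8]
center_3 = [5.1, 5.3, 4.9, 5.2, 5.0]

f_calc, p_value = stats.f_oneway(center_1, center_2, center_3)
f_critical = stats.f.ppf(0.95, dfn=2, dfd=12)  # df: k - 1 = 2, N - k = 12
print(f"F = {f_calc:.2f}, Fcritical = {f_critical:.2f}, p = {p_value:.4f}")
# Reject Ho if Fcalc > Fcritical (equivalently, p < alpha)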
Chi Square Tests
Usually the objective of the project team is not to find the mean of a population but rather to determine the level of variation of the output; for example, to know how much variation the production process exhibits about the target in order to see what adjustments are needed to reach a defect-free process.
If the means of all possible samples are obtained and organized, the sampling distribution of the means can be derived; similarly, the sampling distribution of the variances can be obtained. While the distribution of the means follows a normal distribution when the population is normally distributed or when the samples are greater than 30, the distribution of the variance follows a Chi square (χ2) distribution. The sample variance is computed as
s² = Σ(x − x̄)² / (n − 1)
Then the χ2 formula for a single variance is given as
χ² = (n − 1)s² / σ²
The shape of the χ2 distribution resembles the normal curve but it is not symmetrical, and its shape depends on the degrees of freedom. The χ2 formula can be rearranged to find σ2. The value σ2, with n − 1 degrees of freedom, will be within the interval
(n − 1)s²/χ²(α/2) ≤ σ² ≤ (n − 1)s²/χ²(1−α/2)
The chi-square test compares the observed values to the expected values to determine if they are statistically different when the data being analyzed do not satisfy the t-test assumptions. The chi-square goodness-of-fit test is a non-parametric test which compares the expected frequencies to the actual or observed frequencies. The formula for the test is
χ² = Σ (fa − fe)² / fe
with fe as the expected frequency and fa as the actual frequency. The degrees of freedom are given as df = k − 1. Chi-square cannot be negative because it is the square of a number. If it is equal to zero, all the compared categories would be identical; therefore chi-square is a one-tailed distribution. For an example in which parts are changed in a process, the null and alternate hypotheses would be H0: the distribution of the quality of the products after the parts were changed is the same as before the parts were changed, and H1: the distribution of the quality of the products after the parts were changed is different than it was before they were changed.
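A minimal sketch of the goodness-of-fit comparison, with invented defect counts by category:

from scipy.stats import chisquare

# Invented counts observed after the parts were changed, against the
# expected counts based on the distribution before the change
observed = [18, 22, 30, 30]
expected = [25, 25, 25, 25]

chi2, p = chisquare(f_obs=observed, f_exp=expected)  # df = k - 1 = 3
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
# Reject H0 (same distribution as before) if p < alpha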
Contingency Tables
A two-way classification table (rows and columns) containing original frequencies can be
analyzed to determine whether the two variables (classifications) are independent or have
significant association. It is a type of table in a matrix format that displays the (multivariate)
frequency distribution of the variables. The Chi Square will test whether there is dependency
between the two classifications. In addition, a contingency coefficient (correlation) can be
calculated. If the Chi Square test shows a significant dependency, the contingency coefficient
shows the strength of the correlation.
A measure of the difference found between observed and expected frequencies is supplied by the statistic chi-square, χ². If χ² = 0, observed and theoretical frequencies agree exactly. If χ² > 0, they do not agree exactly. The larger the value of χ², the greater the discrepancy between observed and theoretical frequencies. Contingency tables are very similar to goodness-of-fit tests.
An example is taken to explain. Suppose that we have two variables, sex (male or female) and
handedness (right- or left-handed). Further suppose that 100 individuals are randomly sampled
from a very large population as part of a study of sex differences in handedness. A contingency
table can be created to display the numbers of individuals who are male and right-handed, male
and left-handed, female and right-handed, and female and left-handed. Such a contingency table
is shown below.
          Right-handed   Left-handed   Total
Males          43             9          52
Females        44             4          48
Totals         87            13         100
The numbers of the males, females, and right- and left-handed individuals are called marginal
totals. The grand total, i.e., the total number of individuals represented in the contingency table,
is the number in the bottom right corner.
The table allows us to see at a glance that the proportion of men who are right-handed is about
the same as the proportion of women who are right-handed although the proportions are not
identical. The significance of the difference between the two proportions can be assessed with a
variety of statistical tests including Pearson's chi-squared test, the G-test, Fisher's exact test, and
Barnard's test, provided the entries in the table represent individuals randomly sampled from the
population about which we want to draw a conclusion. If the proportions of individuals in the
different columns vary significantly between rows (or vice versa), we say that there is a
contingency between the two variables. In other words, the two variables are not independent. If
there is no contingency, we say that the two variables are independent.
The example above is the simplest kind of contingency table, a table in which each variable has
only two levels; this is called a 2 x 2 contingency table. In principle, any number of rows and
columns may be used. There may also be more than two variables, but higher order contingency
tables are difficult to represent on paper. The relation between ordinal variables, or between
ordinal and categorical variables, may also be represented in contingency tables, although such a
practice is rare.
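The independence test for the handedness table above can be sketched with SciPy, which also returns the expected frequencies under independence:

import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[43, 9],    # males: right-, left-handed
                  [44, 4]])   # females: right-, left-handed

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
print("expected counts under independence:\n", expected)
# A small p-value would indicate the two classifications are not independent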
Non-Parametric Tests
The term parametric implies that an underlying distribution is assumed for the population, while
non-parametric makes no assumptions regarding the population distribution; hence often called
“distribution-free” tests.
The advantage of parametric testing is that, if its assumptions are met, the probability (power, or 1 − β) of rejecting the null hypothesis when it is false (the correct decision) is higher than the power of a corresponding non-parametric test with equal sample sizes. Similarly, the advantage of non-parametric testing is that the test results are more robust against violation of the assumptions. Therefore, if assumptions are violated for a test based upon a parametric model, the conclusions based on parametric test significance levels (alpha risk) may be more misleading than conclusions based upon non-parametric test significance levels. According to "Nonparametric Statistics: An Introduction" by Jean D. Gibbons:
Use non-parametric tests if any of the following conditions are true
The data are counts or frequencies of different types of outcomes.
The data are measured on a nominal scale.
The data are measured on an ordinal scale.
The assumptions required for the validity of the corresponding parametric procedure are not
met or cannot be verified.
The shape of the distribution from which the sample is drawn is unknown.
The sample size is small.
The measurements are imprecise.
There are outliers in the data making the median more representative than the mean.
And use parametric tests when both of the following are true
The data are collected and analyzed using an interval or ratio scale of measurement.
All of the assumptions required for the validity of that parametric procedure can be verified.
The Mood's Median Test
Mood’s Median Test performs a hypothesis test of the equality of population medians in a one-way design. The test is robust against outliers and errors in data, and is particularly appropriate in
the preliminary stages of analysis. The median test determines whether k independent groups
have either been drawn from the same population or from populations with equal medians.
Find the combined median for all scores in the k groups. Replace each score by a plus if the
score is larger than the combined median and by a minus if it is smaller than the combined
median. If any scores fall at the combined median, the scores may be assigned to the plus and
minus groups by designating a plus to those scores which exceed the combined median and a
minus to those which fall at the combined median or below. Next set up a Chi Square “k x 2”
table with the frequencies of pluses and minuses in each of the k groups.
The Mood's Median Test is used to determine whether there is sufficient evidence to conclude that samples are drawn from populations with different medians. The test statistic used is the chi-square test statistic.
Examples for the usage of the Mood’s median test include:
Comparing the medians of manufacturing cycle time (Y) of three different production lines
(X = Lines A, B and C)
Comparing the medians of the monthly satisfaction ratings (Y) of six customers (X) over the
last two years
Comparing the medians of the number of calls per week (Y) at a service hotline separated by
four different call types (X = complaint, technical question, positive feedback or product
info) over the last six months
The Mood's median test is a non-parametric test to compare two or more independent samples. It
is an alternative method to ANOVA. However, unlike ANOVA, it does not assume normality in
the samples, and so is useful when comparing medians where normality is questionable. In this
test, the null hypothesis is that the medians are the same. The alternative hypothesis is that at
least two of the samples' medians are different. To perform a Mood's median test
find the median of the combined data set
find the number of values in each sample greater than the median and form a contingency
table:
                                     A    B    C    Total
Greater than the median
Less than or equal to the median
Total
find the expected value for each cell as
expected = (row total × column total) / grand total
find the chi-square value as
χ² = Σ (observed − expected)² / expected
As an example, you are the manager of the mortgage department in a bank. You have three
officers processing mortgage applications. You collected data on cycle time to process
applications for the last 2 months and you want to assess if the three officers have the same
processing speed.
For a desired α = 0.05, since p = 0 < α, we reject H0. Therefore, we conclude that there is significant evidence that the median cycle times for the three officers are different.
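SciPy implements this test directly; the officer cycle times below are invented and will not reproduce the p-value quoted above:

from scipy.stats import median_test

# Invented mortgage-processing cycle times (days) for three officers
officer_a = [12, 14, 11, 13, 15, 12, 14]
officer_b = [18, 17, 19, 16, 18, 20, 17]
officer_c = [13, 12, 14, 13, 11, 12, 14]

stat, p, grand_median, table = median_test(officer_a, officer_b, officer_c)
print(f"chi-square = {stat:.2f}, p = {p:.4f}, grand median = {grand_median}")
# Reject H0 (equal medians) if p < alpha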
Levene's Test
Used to compare two or more variances, Levene's Test is appropriate for continuous data that may not be normally distributed, testing for homogeneity of variances across a set of k samples.
It is an inferential statistic used to assess the equality of variances for a variable calculated for
two or more groups. Some common statistical procedures assume that variances of the
populations from which different samples are drawn are equal. Levene's test assesses this
assumption. It tests the null hypothesis that the population variances are equal (called
homogeneity of variance or homoscedasticity). If the resulting P-value of Levene's test is less
than some critical value (typically 0.05), the obtained differences in sample variances are
unlikely to have occurred based on random sampling from a population with equal variances.
Thus, the null hypothesis of equal variances is rejected and it is concluded that there is a
difference between the variances in the population.
Levene’s test is used to test the null hypothesis that multiple population variances are equal.
Levene’s test determines whether a set of k samples have equal variances. Equal variances across
samples are called homogeneity of variances.
The Levene test is less sensitive than the Bartlett test to departures from normality. If there is strong evidence that the data do in fact come from a normal, or approximately normal, distribution, then Bartlett's test has better performance.
Levene’s variance test is more robust against departures from normality than the F test. When
there are just two sets of data, the Levene procedure is to
Determine the mean.
Calculate the deviation of each observation from the mean.
Let Z equal the square of the deviation from the mean.
Apply the t test of two means to the Z data.
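A minimal sketch using SciPy's implementation (center='median' gives the outlier-robust variant); the supplier data are invented:

from scipy.stats import levene

supplier_1 = [2.1, 2.3, 2.2, 2.4, 2.2, 2.1]
supplier_2 = [2.0, 2.6, 1.9, 2.7, 2.3, 1.8]

stat, p = levene(supplier_1, supplier_2, center='median')
print(f"W = {stat:.2f}, p = {p:.4f}")
# Reject H0 (equal variances) if p < alpha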
Kruskal-Wallis One-Way Analysis of Variance by Ranks
This is a test of independent samples. The measurements may be continuous data but the
underlying distribution is either unknown or known to be non-normal. As an example,
with three samples of sizes 8, 6 and 7:
G = Σ (RankSum)²/n = 693.781 + 495.042 + 1486.286 = 2675.109, and N = 8 + 6 + 7 = 21
The significance statistic is H, which is distributed as chi square: H = [12/(N(N + 1))] G − 3(N + 1) = (12/462)(2675.109) − 66 ≈ 3.48. Tie values are included in the calculation; let t = the number of tied values in each tied set, then T = t³ − t for that set. Let k = the number of sample sets, so DF = k − 1 = 3 − 1 = 2. Let α = 0.05; the critical chi square is χ²(0.05, 2) = 5.99. H is less than the critical chi square; therefore, the null hypothesis of equality of sample means cannot be rejected.
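A minimal sketch with invented samples of the same sizes as the example (8, 6 and 7); SciPy applies the tie correction automatically:

from scipy.stats import kruskal, chi2

sample_1 = [8.5, 9.1, 7.8, 8.9, 9.4, 8.2, 8.8, 9.0]
sample_2 = [7.9, 8.1, 7.5, 8.0, 7.7, 8.3]
sample_3 = [9.6, 9.9, 9.2, 10.1, 9.8, 9.5, 10.0]

h_stat, p = kruskal(sample_1, sample_2, sample_3)
critical = chi2.ppf(0.95, df=2)  # k - 1 = 2 degrees of freedom
print(f"H = {h_stat:.2f}, critical chi-square = {critical:.2f}, p = {p:.4f}")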
Mann-Whitney U Test
When there are ordinal measurements, the Mann-Whitney U test may be used to test whether two
independent groups have been drawn from the same population. This is a powerful
nonparametric test and is an alternative to the t test when the normality of the population is either
unknown or believed to be non normal.
Consider two populations, A and B. The null hypothesis, H0, is that A and B have the same
frequency distribution with the same shape and spread (the same median). An alternative
hypothesis, H1, is that A is larger than B, a directional hypothesis. We may accept H1 if the
probability is greater than 0.5 that a score from A is larger than a score from B. That is, if a is
one observation from population A, and b is one observation from population B, then H1 is that
P (a > b) > 0.5.
If the evidence from the data supports H1, this implies that the bulk of population A is higher than the bulk of population B. If we wished instead to test whether B is statistically larger than A, then H1 is that P(a > b) < 0.5. For a two-tailed test, that is, for a prediction of differences which does not state direction, H1 would be that P(a > b) ≠ 0.5 (the medians are not the same).
If there are n1 observations from population A, and n2 observations from population B, rank all
(n1 + n2) observations in ascending order. Ties receive the average of their rank number. The
data sets should be selected so that n1 < n2. Calculate the sum of observation ranks for
population A, and designate the total as Ra, and the sum of observation ranks for population B,
and designate the total as Rb.
Calculate the U statistics as Ua = n1n2 + n1(n1 + 1)/2 − Ra and Ub = n1n2 + n2(n2 + 1)/2 − Rb, and take U as the smaller of Ua and Ub. For n2 ≤ 20, Mann-Whitney tables are used to determine the probability, based on the U, n1, and n2 values. If n2 > 20, the distribution of U rapidly approaches the normal distribution, with mean µU = n1n2/2 and standard deviation σU = √(n1n2(n1 + n2 + 1)/12), so Z = (U − µU)/σU applies.
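A minimal sketch with invented ordinal ratings; SciPy chooses between the exact tables and the normal approximation internally:

from scipy.stats import mannwhitneyu

group_a = [3, 4, 2, 5, 4, 3, 4, 5]
group_b = [2, 1, 3, 2, 2, 3, 1]

u_stat, p = mannwhitneyu(group_a, group_b, alternative='two-sided')
print(f"U = {u_stat}, p = {p:.4f}")
# Reject H0 (same distribution) if p < alpha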
6.3. FMEA
Failure Mode and Effects Analysis (FMEA) is an industrial-strength risk assessment tool which gives a step-by-step approach to recognizing and evaluating all potential failures in the design, manufacturing or assembly process of a product. FMEA identifies actions that could eliminate or reduce the chances of the occurrence of a potential failure and tracks the corrective actions and their effects. It also documents the entire decision process.
FMEA can be viewed as an assessment tool, as it is used to diagnose opportunities, or as a prevention tool, as it is used to prevent high-level risks.
Purpose of FMEA
The purpose of FMEA is to recognize and evaluate the failure and the effects that failure has on
the system and take actions to eliminate or reduce failures, starting with the highest-priority
failures. FMEA also aims to reduce the time and cost of the operation by improving the
teamwork and promoting accountability.
Elements of FMEA
FMEA consists of following elements, as
Severity (S) - Severity is the worst potential outcome of a failure, determined by the degree of injury, system damage, etc. Severity is the impact of failure; numbers from 1 to 10 are assigned to each failure effect, where 1 refers to a failure with no or slight effect and 10 refers to a failure with the most critical effect. The range of Severity is 1 ≤ S ≤ 10. The following table gives the different values in the Severity scale and the corresponding effects.
Rank   Effect
10     Hazardous without warning
9      Hazardous with warning
8      Very High
7      High
6      Moderate
5      Low
4      Very Low
3      Minor
2      Very Minor
1      None
Occurrence (O) - Occurrence refers to the number of times a cause of a failure will occur. Occurrence is considered to be a design weakness; numbers from 1 to 10 are assigned to each failure, where 1 refers to a failure which is unlikely to occur and 10 refers to a failure which is most likely to occur. The range of Occurrence is 1 ≤ O ≤ 10. The following table represents the different values in the occurrence scale and the corresponding effect.
Rank   Effect
10     Very High (1 in 2)
9      Very High (1 in 3)
8      High (1 in 8)
7      High (1 in 20)
6      Moderate (1 in 80)
5      Moderate (1 in 400)
4      Moderate (1 in 2,000)
3      Low (1 in 15,000)
2      Low (1 in 150,000)
1      Remote (1 in 1,500,000)
Detection (D) - Detection refers to the ability of designed checks and inspections to detect and remove defects or failure modes. Numbers from 1 to 10 are assigned to each failure effect, where 1 refers to a failure which is easy to detect and 10 refers to a failure which is almost certain to escape detection. The range of Detection is 1 ≤ D ≤ 10. The following table represents the different values in the detection scale and the corresponding effect.
Rank   Effect
10     Absolute Uncertainty
9      Very Remote
8      Remote
7      Very Low
6      Low
5      Moderate
4      Moderately High
3      High
2      Very High
1      Almost Certain
Important FMEA Terms
Risk Priority Number (RPN) - RPN is the product of Severity (S), Occurrence (O) and Detection (D). The range of RPN is 1 ≤ RPN ≤ 1000. It is calculated as
RPN = Severity x Occurrence x Detection
Failure effect - Failure effect refers to the consequence of a failure mode on the part of the
product/process as perceived by the internal and external customers
Failure mode - Failure mode refers to the manner in which a component, subsystem, system, process, etc. could potentially fail to meet the design intent.
Failure cause - Defects in plan, process, quality, etc. that result in a failure or initiate a
process that leads to failure.
Important Measures
The RPN has to be calculated for the entire process.
Highest priority is given to the failure with the highest RPN, since the higher the RPN value, the higher the risk involved.
Failures with lower RPN values should still be considered in some cases, because a failure may be less severe and easier to detect and yet occur very often.
FMEA Procedure
The basic procedures involved in FMEA are listed below
The first step in the FMEA methodology is to describe the product/process and its functions.
Create a block diagram which includes all major components of the product/process. Connect
the blocks logically.
Identify the failure modes in terms of component, subsystem, system, process, etc.
Identify the effects of failure mode on the product/process as perceived by the internal and
external customers.
Assign severity - Numbers from 1 to 10 can be used to rank the severity of the effects.
Brainstorm the root causes of the failure - the sources for each failure mode have to be
discovered and recorded.
Occurrence - This is followed by entering the probability factor, which is the numerical
weight assigned to each failure mode effect cause, indicating ‘the probability that the failure
mode effect cause may occur’.
The next step is to identify the current controls. Current controls are the mechanisms that detect failure mode causes or prevent them from occurring before the customer gets access to the product or service.
Detection - Likelihood of detection has to be ascertained in this step. Detection refers to the
probability that the current controls will detect the cause of the failure mode before the
customer gets access to the product.
Calculate RPN - After ascertaining the likelihood of detection, the Risk Priority Number
(RPN) has to be created and reviewed. RPN is the product obtained after multiplying
severity, probability, and detection ratings.
RPN = (Severity) x (Probability) x (Detection)
The RPN is used to rank the items that require supplementary quality planning or corrective
measures.
Recommend actions - This is followed by determination of recommended actions to deal
with potential failures that have a high RPN.
Document changes of RPN - After the recommended actions are implemented, the
implemented actions have to be documented. This can be used to reassess the severity,
probability, and detection and review the revised RPNs. This is done to explore the
possibility of the requirement of supplementary actions.
Review Periodically - Finally, the FMEA has to be updated as and when the design or process changes, the assessment changes or new information becomes known. FMEA is an ongoing activity.
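The RPN calculation and ranking step can be sketched in plain Python; the failure modes and ratings below are invented:

# Invented failure modes with Severity, Occurrence and Detection ratings
failure_modes = [
    {"mode": "Seal leaks",       "S": 8, "O": 3, "D": 4},
    {"mode": "Label misprinted", "S": 3, "O": 7, "D": 2},
    {"mode": "Connector cracks", "S": 9, "O": 2, "D": 6},
]

for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]  # RPN = Severity x Occurrence x Detection

# Highest RPN first: these failure modes get corrective-action priority
for fm in sorted(failure_modes, key=lambda f: f["RPN"], reverse=True):
    print(f'{fm["mode"]:<18} RPN = {fm["RPN"]}')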
FMEA Types
There are various types of FMEAs available for organizations. They can make use of these
FMEAs in accordance with their requirements. The major types of FMEAs are as follows
Design FMEA (DFMEA) - DFMEA mainly concentrates on identifying weaknesses in the design of a product or its components that may cause the failure of the total system while the product is in service. It highlights the areas of improvement and helps to improve system safety, in order of priority, by eliminating unsafe conditions. DFMEA is mainly used to analyze product or component designs.
Process FMEA (PFMEA) - PFMEA is mainly used to assess transactional processes. PFMEA helps to identify the deficiencies of a process in the early stages of production. It gives an organized and systematized approach to reducing potential process deficiencies in accordance with their priority, and helps to improve the future process by taking the necessary actions to reduce deficiencies.
FMEA Advantages
FMEA captures the collective knowledge of the team. This will direct the total efforts of the
team toward a common goal.
FMEA improves the quality, reliability, and safety of the process as it identifies the possible
failure modes. Thus, it enables the personnel to plan for the future while it remedies the
present hindrances.
FMEA helps to identify design qualities that are responsible for failures and minimizes or
eliminates them from the system. Thus, it allows for creation of a logical structured process.
FMEA cuts down process improvement time and cost as it optimizes the ability to transmit
structured information from one project to another. Thus, it drives the qualities of
repeatability and reproducibility across the system.
FMEA records and monitors the activities aimed at reducing the potential risks in the design.
This helps in the expansion of corporate database and leads to the success of future products
as well.
FMEA helps to identify critical-to-quality characteristics (CTQs) as it evaluates the requirements obtained from the customer or other participants in the design process.
As FMEA is constantly updated with anticipated failure modes, it provides a baseline for the
future design. Thus, it not only provides historical records but also helps in establishing the
future baseline.
FMEA evaluates the functions and the form of products and processes. It provides safety
factors to make sure that the design succeeds and keeps crucial elements of the project from
slipping away. Thus, it protects the customer against product or process failure and helps to
increase customer satisfaction and safety.
FMEA Disadvantages
The major disadvantages for an organization due to usage of FMEA are as
The FMEA is limited to the experience of previous failures because FMEA is purely
dependent on the team members who analyze the product failures. So, if a failure mode is not
identified then the organization may have to seek external help which will increase the costs.
If FMEA is used only as a top-down tool, the probability of identifying minor failure modes
in a system is remote. So, it may ignore the minor failure modes which in the course of time
may develop into a major failure mode.
If FMEA is used only as a top-down tool, it will be able to identify most of the major and
minor causes of failure modes in the system. But, at times, it will not be able to identify some
of the complex failure modes that comprise manifold failures within a subsystem. Thus, it
will not be able to report probable failure intervals of particular failure modes up to the upper
level subsystem or system.
Another drawback of using FMEA is that the multiplication of the severity, occurrence, and
detection rankings may result in rank reversals. ‘Rank reversal’ means a serious failure mode
is attributed a lower RPN, whereas a less severe failure mode is given a higher RPN. ‘Rank
reversals’ may result in the organization facing major problems in the current and future
scenarios.
6.4. Other Analysis Methods
Various other analysis methods are discussed.
Gap Analysis
It is the comparison of actual performance with potential performance. If a company or
organization does not make the best use of current resources, or forgoes investment in capital or
technology, it may produce or perform below its potential. This concept is similar to an
economy's being below the production possibilities frontier.
Gap analysis identifies gaps between the optimized allocation and integration of the inputs
(resources), and the current allocation level. This reveals areas that can be improved. Gap
analysis involves determining, documenting, and approving the difference between business
requirements and current capabilities. Gap analysis naturally flows from benchmarking and other
assessments. Once the general expectation of performance in the industry is understood, it is
possible to compare that expectation with the company's current level of performance. This
comparison becomes the gap analysis. Such analysis can be performed at the strategic or
operational level of an organization.
Gap analysis is a formal study of what a business is doing currently and where it wants to go in
the future. It can be conducted, in different perspectives, as
Organization (e.g., Human Resources)
Business direction
Business processes
Information technology
An example illustrates the gap analysis process, as
Identify Future State - First, identify the objectives that you need to achieve. This gives you
your future state – the "place" where you want to be once you've completed your project.
Future State: Answer 90 per cent of calls within 2 minutes.
Current Situation: –
Next Actions/Proposals: –
Analyze Current Situation - For each of your objectives, analyze your current situation. To
do this, consider the following questions: Who has the knowledge that you need? Who will
you need to speak with to get a good picture of your current situation? Is the information in
people's heads, or is it documented somewhere? What's the best way to get this information?
By using brainstorming workshops? Through one-to-one interviews? By reviewing
documents? By observing project activities such as design workshops? Or in some other
way?
Future State: Answer 90 per cent of calls within 2 minutes.
Current Situation: Approximately 50 per cent of calls are answered within 2 minutes.
Next Actions/Proposals: –
Next Actions/Proposals
Identify How to Bridge the Gap - Once you know your future state and your current
situation, you can think about what you need to do to bridge the gap and reach your project's
objectives.
Future State: Answer 90 per cent of calls within 2 minutes.
Current Situation: Approximately 50 per cent of calls are answered within 2 minutes.
Next Actions/Proposals: Develop a call volume reporting/queue modeling system to ensure that there are enough staff during busy periods. Recruit any additional people needed. Develop a system that allows callers to book a call back during busy periods.
Root Cause Analysis
It is a method of problem solving that tries to identify the root causes of faults or problems. A
root cause is a cause that once removed from the problem fault sequence, prevents the final
undesirable event from recurring. A causal factor is a factor that affects an event's outcome, but
is not a root cause. Though removing a causal factor can benefit an outcome, it does not prevent
its recurrence for certain.
RCA arose in the 1950s as a formal study following the introduction of Kepner-Tregoe Analysis, which had limitations in the highly complex arena of rocket design, development and launch in the United States by the National Aeronautics and Space Administration (NASA).
RCA practice solves problems by attempting to identify and correct the root causes of events, as
opposed to simply addressing their symptoms. Focusing correction on root causes has the goal of
preventing problem recurrence. RCFA (Root Cause Failure Analysis) recognizes that complete
prevention of recurrence by one corrective action is not always possible. RCA is an iterative
process and a tool of continuous improvement.
RCA is typically used as a reactive method of identifying event(s) causes, revealing problems
and solving them. Analysis is done after an event has occurred.
By repeatedly asking the question ‘why?' (use five as a rule of thumb), you can peel away the
layers of an issue, just like the layers of an onion, which can lead you to the root cause of a
problem. The reason for a problem can often lead into another question; you may need to ask the
question fewer or more than five times before you get to the origin of a problem. The steps to
complete the root cause analysis are
Write down the specific problem. Writing it down helps you formalise the problem and
describe it accurately. It also helps a team focus on the same problem
Use brainstorming to ask why the problem occurs then, write the answer down below
If this answer doesn't identify the source of the problem, ask ‘why?' again and write that
answer down
Loop back to step three until the team agrees that they have identified the problem's root
cause. Again, this may take fewer or more than five ‘whys?'
The 5 Whys technique is true to this tradition, and it is most effective when the answers come
from people who have hands-on experience of the process being examined. It is remarkably
simple: when a problem occurs, you uncover its nature and source by asking "why" no fewer
than five times. As an example, for a problem when client is refusing to pay for the leaflets you
printed for them.
Why? The delivery was late, so the leaflets couldn't be used.
Why? The job took longer than we anticipated.
Why? We ran out of printer ink.
Why? The ink was all used up on a big, last-minute order.
Why? We didn't have enough in stock, and we couldn't order it in quickly enough.
Counter-measure: We need to find a supplier who can deliver ink at very short notice.
Benefits of root cause analysis
Helps you to identify the root causes of a problem
Helps you to determine the relationship between different root causes of a problem
It is one of the simplest analysis tools as it's easy to complete without statistical analysis
It is easy to learn and apply
Fishbone Diagram - The 5 Whys can be used individually or as a part of the fishbone (also
known as the cause and effect or Ishikawa) diagram. The fishbone diagram helps exploring all
potential or real causes that result in a single defect or failure. Once all inputs are established on
the fishbone, you can use the 5 Whys technique to drill down to the root causes.
Fault tree analysis (FTA) - It is a top down, deductive failure analysis in which an undesired
state of a system is analyzed using Boolean logic to combine a series of lower-level events. This
analysis method is mainly used in the fields of safety engineering and reliability engineering to
understand how systems can fail, to identify the best ways to reduce risk or to determine (or get a
feeling for) event rates of a safety accident or a particular system level (functional) failure.
Fault trees are built using gates and events (blocks). The two most commonly used gates in a
fault tree are the AND and OR gates. As an example, consider two events (or blocks) comprising
a Top Event (or a system). If occurrence of either event causes the top event to occur, then these
events (blocks) are connected using an OR gate. Alternatively, if both events need to occur to
cause the top event to occur, they are connected by an AND gate. As a visualization example, consider the simple case of a system comprised of two components, A and B, where a failure of either component causes system failure. The system reliability block diagram (RBD) is made up of two blocks in series, and the corresponding fault tree connects the two component failure events to the top event with an OR gate.
Name of Gate   Description
AND            The output event occurs if all input events occur.
OR             The output event occurs if at least one of the input events occurs.
Taking an example, an inspection of a system reveals that any of the following failures will
cause the system to fail
Failure of components 1 and 2.
Failure of components 3 and 4.
Failure of components 1 and 5 and 4.
Failure of components 2 and 5 and 3.
In probability terminology it can be denoted as (1 And 2) Or (3 And 4) Or (1 And 5 And 4) Or (2 And 5 And 3). The corresponding fault tree connects these AND combinations to the top event through an OR gate.
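Assuming, purely for illustration, that every component fails independently with probability 0.05, the top-event probability for this tree can be computed exactly by enumerating all component states:

from itertools import product

def top_event(c):
    # (1 and 2) or (3 and 4) or (1 and 5 and 4) or (2 and 5 and 3);
    # c[i] is True if component i has failed
    return ((c[1] and c[2]) or (c[3] and c[4]) or
            (c[1] and c[5] and c[4]) or (c[2] and c[5] and c[3]))

p_fail = 0.05  # assumed identical, independent failure probability

p_top = 0.0
for states in product([False, True], repeat=5):
    c = dict(zip(range(1, 6), states))
    if top_event(c):
        prob = 1.0
        for failed in states:
            prob *= p_fail if failed else (1 - p_fail)
        p_top += prob
print(f"P(top event) = {p_top:.6f}")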
Waste Analysis
There are seven types of waste that are found in a manufacturing process which are
Overproduction - A protective or "just in case" mindset usually results in overproduction. Producing more than the customer requires is waste. It causes other wastes like inventory costs, manpower and conveyance to deal with excess product, consumption of raw materials and installation of excess capacity.
Needless Inventory - Inventory at any point is a no value-add as it ties up financial resources
of the company and is exposed to the risk of damage, obsolescence, spoilage, and quality
issues. It also needs space and other resources for proper management and tracking. Large
inventories also cover up process deficiencies like equipment problems or poor work
practices.
Defects - Defects and broken equipment result in defective products and subsequently customer dissatisfaction, which need more resources to resolve. Shipping damage is also counted as a defect. Any process, product, or service that fails to meet specifications is also a waste.
Non-value Processing – Also called over-processing, it wastes resources in production through unnecessary movement and time. Any processing that does not add value to the product is waste, like in-process protective packaging. It is primarily due to extra or unnecessary manufacturing steps, using older and outdated methods, or not having standard work plans.
Excess Motion - Unnecessary motion, which is also a waste, occurs due to poor workflow, poor layout, poor housekeeping, inconsistent and undocumented work methods or a lack of standardized procedures, or even a deficiency in employee training. It is usually hidden as it is not easily evident, but careful observation and communication with workers can highlight it.
Transport and Handling – It focuses on shipping damage and includes pallets not being properly stretch-wrapped (wasted material), a truck not loaded to use floor space efficiently, or waste in handling, setting up or fixing a wrapping machine. Material should be shipped directly from the vendor to the location in the assembly line where it will be used, also called point-of-use storage (POUS).
Waiting - These are wastages in time, usually due to broken machinery, lack of trained staff,
shortages of materials, inefficient planning, and waiting for material, information, equipment,
tools, etc. It leads to slowed production, delayed shipments, and even missed deadlines.
There are other types of waste found elsewhere, which are
Confusion – It is due to misinformation.
Underutilization of available employees (their skills and knowledge) and facilities.
7. IMPROVE PHASE
This phase requires understanding the KPIVs that are causing the effect; it helps determine the relationships and contributions of these key variables to the project "Y" and leads to optimal improvement ideas. All the hard work done previously lacks merit unless there is implementation.
7.1. Design of Experiments (DOE)
It is a method of varying a number of input factors simultaneously in a planned manner, so that their individual and combined effects on the output can be identified. Whereas most experiments address only one factor at a time, the Design of Experiments (DOE) method focuses on multiple factors at one time and develops well-designed efforts to identify which process changes yield the best possible results for sustained improvement. It provides data that illustrate the significance to the output of input variables acting alone or interacting with one another. DOE advantages include: multiple factors are evaluated simultaneously, input factors can be controlled to make the output insensitive to noise factors, experiments highlight the important factors, there is confidence in the conclusions drawn, the factors can easily be set at their optimum levels, and quality and reliability can be improved without cost increase, or cost savings can be achieved.
Terminology
Basic DOE terms are
Factor - A predictor variable that is varied with the intent of assessing its effect on a response
variable. Most often referred to as an "input variable."
Factor Level - It is a specific setting for a factor. In DOE, levels are frequently set as high
and low for each factor. A potential setting, value or assignment of a factor of the value of
the predictor variable like, if the factor is time, then the low level may be 10 minutes and the
high level may be 30 minutes.
Response variable - A variable representing the outcome of an experiment. The response is
often referred to as the output or dependent variable.
Treatment - The specific setting of factor levels for an experimental unit. For example, a
level of temperature at 65° C and a level of time at 45 minutes describe a treatment as it
relates to an output of yield.
Experimental error - An error from an experiment reveals variation in the outcome of
identical tests. The variation in the response variable beyond that accounted for by the
factors, blocks, or other assignable sources while conducting an experiment.
Experimental run - A single performance of an experiment for a specific set of treatment
conditions.
Experimental unit - The smallest entity receiving a particular treatment, subsequently
yielding a value of the response variable.
Predictor Variable - A variable that can contribute to the explanation of the outcome of an
experiment. Also known as an independent variable.
Repeated Measures - The measurement of a response variable more than once under similar conditions. Repeated measures allow one to determine the inherent variability in the measurement system. Repeated measures are also known as "duplication" or "repetition."
Replicate - A single repetition of the experiment.
Replication - Performance of an experiment more than once for a given set of predictor
variables. Each of the repetitions of the experiment is called a "replicate." Replication differs
from repeated measures in that it is a repeat of the entire experiment for a given set of
predictor variables, not just repeat of measurements of the same experiment.
Replication increases the precision of the estimates of the effects in an experiment.
Replication is more effective when all elements contributing to the experimental error are
included. In some cases replication may be limited to repeated measures under essentially the
same conditions. In other cases, replication may be deliberately different, though similar, in
order to make the results more general.
Repetition - When an experiment is conducted more than once, repetition describes this event
when the factors are not reset. Subsequent test trials are run again but not necessarily under
the same conditions.
Blocking - When structuring fractional factorial experimental test trials, blocking is used to
account for variables that the experimenter wishes to avoid. A block may be a dummy factor
which doesn’t interact with the real factors.
Box-Behnken - When full second-order polynomial models are to be used in response
surface studies of three or more factors, Box-Behnken designs are often very efficient. They
are highly fractional, three-level factorial designs.
Confounded - When the effects of two factors are not separable. For example, in a half-fraction design with three factors, A is confounded with BC, B with AC, and C with AB.
Correlation Coefficient (r) - A number between -1 and 1 that indicates the degree of linear
relationship between two sets of numbers. Zero (0) indicates no linear relationship.
Covariates - Things which change during an experiment which had not been planned to
change, such as temperature or humidity. Randomize the test order to alleviate this problem.
Record the value of the covariate for possible use in regression analysis.
Degrees of Freedom - The term used is DOF, DF, df or v. The number of measurements that
are independently available for estimating a population parameter.
EVOP - It stands for evolutionary operation, a term that describes the way sequential
experimental designs can be made to adapt to system behavior by learning from present
results and predicting future treatments for better response.
First-order - It refers to the power to which a factor appears in a model. If "X1" represents a factor and "B1" is its factor effect, then the model Y = B0 + B1X1 + B2X2 is first-order in both X1 and X2.
Fractional - An adjective that means fewer experiments than the full design.
Full Factorial - It describes experimental designs which contain all combinations of all levels
of all factors. No possible treatment combinations are omitted.
Interaction - It occurs when the effect of one input factor on the output depends upon the
level of another input factor.
Level - A specific setting of an input factor; for example, three levels of a heat treatment may be 100°C, 120°C and 150°C.
Main Effect- An estimate of the effect of a factor independent of any other factors.
Mixture Experiments - They are experiments in which the variables are expressed as
proportions of the whole and sum to 1.0.
Nested Experiments - An experimental design in which all trials are not fully randomized.
Optimization - It involves finding the treatment combination that gives the most desired response. Optimization can be maximization or minimization.
Orthogonal - A design is orthogonal if the main and interaction effects in the design can be estimated without confounding the other main effects or interactions.
Paired Comparison - The basis of a technique for treating data so as to ignore sample-to-sample variability and focus more clearly on variability caused by a specific factor effect. Only differences in response for each sample are tested because sample-to-sample differences are irrelevant.
Fixed Effects Model - If the treatment levels are specifically chosen by the experimenter,
then conclusions reached will only apply to those levels.
Random Effects Model – If the treatment levels are randomly chosen from a population of
many possible treatment levels, then conclusions reached can be extended to all treatment
levels in the population.
Residual Error (ε) or (E) - The difference between the observed value and the predicted value for that result, based on an empirically determined model. It can be variation in outcomes of virtually identical test conditions.
Residuals - The difference between experimental responses and predicted model values.
Resolution - A measure of the degree of confounding in a fractional factorial design. For example, in a resolution III design no main effects are confounded with each other, but main effects are confounded with two-factor interaction effects.
Response Surface Methodology (RSM) - A response surface is the graph of a system response plotted against system factors. RSM employs experimental design to discover the "shape" of the response surface and uses geometric concepts to take advantage of the relationships.
Design Principles
In using DOE it is essential to identify and define critical concepts, which includes
Randomization - This is an essential component of any experiment that is going to have validity. If you are doing a comparative experiment with two treatments, a treatment and a control for instance, you need to include in your experimental process the assignment of those treatments to the experimental units by some random process.
Replication - Replication reduces the standard error of the sample mean (the square root of the estimated variance of the sample mean), which determines the width of the confidence interval. Our estimates of the mean become less variable as the sample size increases. Replication is the basic issue behind every method we will use in order to get a handle on how precise our estimates are at the end. We always want to estimate or control the uncertainty in our results. We achieve this estimate through replication.
Blocking - A technique to include in the experiment other factors which contribute to undesirable variation; creative use of various blocking techniques can control sources of variation and reduce error variance. For example, age and gender are factors which contribute to variability and make it difficult to assess systematic effects. By using these as blocking factors, you can both avoid biases that might occur due to differences in the allocation of subjects to the treatments, and account for some of the noise in the experiment.
Multi-factor Designs - This is contrary to the classical scientific method, where everything is held constant except one factor, which is varied. The one-factor-at-a-time method is a very inefficient way of making scientific advances. It is much better to design an experiment that simultaneously includes multiple factors that may affect the outcome. These may be blocking factors which deal with nuisance variables, or they may help you understand the interactions or relationships between the factors that influence the response.
Confounding - It is usually avoided but in building complex experiments we sometimes can
use confounding to our advantage. We will confound things we are not interested in order to
have more efficient experiments for the things we are interested in. This will come up in
multiple factor experiments later on. We may be interested in main effects but not
interactions so we will confound the interactions in this way in order to reduce the sample
size, and thus the cost of the experiment, but still have good information on the main effects.
Power - One minus the probability of a Type II error (1-β). A higher power is associated with a higher probability of finding a statistically significant difference. Lack of power usually occurs with smaller sample sizes.
The Beta Risk (i.e., Type II Error or Consumer's Risk) is the probability of failing to reject the null hypothesis when there is a significant difference (i.e., a product is passed as meeting the acceptable quality level when in fact the product is bad). Typically, β = 0.10. This means there is a 90% (1-β) probability of rejecting the null hypothesis when it is false (the correct decision). Also, the power of the sampling plan is defined as 1-β; hence the smaller the β, the larger the power. A short sketch following this list illustrates how power increases with sample size.
Sample Size - The number of sampling units in a sample. Determining sample size is a
critical decision in any experiment design. Generally, if the experimenter is interested in
detecting small effects, more replicates are required than if the experimenter is interested in
detecting large effects. Increasing the sample size decreases the margin of error and improves
the precision of the estimate.
Balanced Design - A design where all treatment combinations have the same number of
observations. If replication in a design exists, it would be balanced only if the replication was
consistent across all the treatment combinations. In other words, the number of replicates of
each treatment combination is the same.
Order - The order of an experiment refers to the chronological sequence of steps to an
experiment. The trials from an experiment should be carried out in a random run order. In
experimental design, one of the underlying assumptions is that the observed responses should
be independent of one another (i.e., the observations are independently distributed). By
randomizing the experiment, we reduce bias that could result by running the experiment in a
“logical” order.
Interaction effect - An interaction effect exists when the apparent influence of one factor on the response variable depends upon the level of one or more other factors. The existence of an interaction effect means that the factors cannot be changed independently of each other.
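As a minimal illustration of the power and sample-size relationship described above (added here, not part of the source text), the Python sketch below uses the normal approximation for detecting a mean shift, assuming the process standard deviation is known; the shift, sigma and alpha values are illustrative assumptions.

# Power of detecting a mean shift, normal approximation with known sigma.
from statistics import NormalDist

def power_of_test(shift, sigma, n, alpha=0.05):
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)      # rejection boundary, two-sided test
    z_shift = shift * n ** 0.5 / sigma      # standardized size of the shift
    return 1 - nd.cdf(z_crit - z_shift)     # probability of detecting the shift

for n in (5, 10, 30):
    print(n, round(power_of_test(shift=1.0, sigma=2.0, n=n), 3))

As n grows, power rises (and β falls), which is why detecting small effects requires more replicates.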
Design considerations
Number of Factors   Comparative Objective                   Screening Objective                        Response Surface Objective
1                   1-factor completely randomized design   -                                          -
2-4                 Randomized block design                 Full or fractional factorial               Central composite or Box-Behnken
5 or more           Randomized block design                 Fractional factorial or Plackett-Burman    Screen first to reduce number of factors
Planning Experiments
Planning the experiment is probably the most important task in the Improve phase when using
DOE. For planning to be done well, some experts estimate that 10-25% of the project time should be devoted to planning and organizing the experiments.
The purpose of DOE is to create an observable event from which data may be extracted and
decisions made about the best methods to improve the process. DOE may be used most
effectively in the following situations
Identifying factors that produce a specific response or outcome
Selecting between alternative approaches to effect the best outcome
In DOE, a full factorial design combines levels for each factor with levels for all other factors. This basic design ensures that all combinations are used, but if there are many factors, this design may take too much time or be too costly to implement. In such cases, a fractional factorial design is selected, as the number of runs is smaller with fewer treatments.
For example, a four-factor factorial experiment studies the effects on a golf score using four
factors, each with two levels. The factors (and levels) could be: type of driver (regular or
oversized), type of ball construction (balata-covered or three-piece), type of beverage (water or
beer) and mode of travel (walking or riding). To run a full factorial design experiment, 16 runs
would be required.
For a fractional factorial, only 8 runs would be required. Thus, if time and funding only permit 8 rounds of golf, the fractional factorial design will provide good information about the main effects of the four factors as well as some information about how these factors interact.
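The following Python sketch (an added illustration, not part of the source) enumerates the 16 runs of the golf example's 2^4 full factorial and one possible half-fraction; the defining relation D = ABC used to pick the 8 fractional runs is an assumption chosen for illustration.

# Enumerate a 2^4 full factorial and a half-fraction with D = A*B*C.
from itertools import product

factors = ["driver", "ball", "beverage", "travel"]

full = list(product((-1, +1), repeat=4))            # all 16 treatment combinations
half = [run for run in full
        if run[0] * run[1] * run[2] == run[3]]      # keep runs where D = A*B*C

print(len(full), "full-factorial runs;", len(half), "half-fraction runs")
for run in half:
    print(dict(zip(factors, run)))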
The project team decides the exact steps to follow in the Improve phase. Steps to include in the Improve phase may actually be identified in the Measure and Analyze phases and should be noted to expedite later planning in the Improve phase. Planning the experiment(s) to be conducted in the Improve phase using DOE involves the following steps
Establish experiment objectives - Objectives differ per project, but the designs typically fall
into three categories to support different objectives
Screening – used to identify which factors are most important.
Characterization – used to quantify the relationships and interaction between several
factors.
Optimization – used to develop a more precise understanding of just one or two variables.
Identify factors to be considered
Label both input variables (x factors) and output variables (y responses) in the experiment.
Use information collected in prior phases to assist in the identification process.
Finalize an experiment design
Select a design for the experiment.
Choose a design type (full factorial, fractional factorial, or others) that meets the
experiment’s objectives.
Determine how the factors are measured.
Consider the resources needed and determine whether a practice run or pilot experiment
may be needed.
Run the experiment
Run the experiment and collect the data. Place initial data in the results column of a
design array, a graphical representation of the experiment factors and results.
Minimize chance for human error by carefully planning where human error could occur
and allow for the possibility in the planning process.
Randomize the runs to reduce confounding.
Document the results as needed depending on the experiment.
Analyze the results of the experiment
Review the results of the experiment(s).
Examine the relationships among input variables (factors) acting together and with
regards to the output variable(s) (responses).
Make decisions on next steps
Based on the results, determine next steps.
Are additional runs of the experiment needed?
Do the levels need to be modified prior to conducting the experiment again?
If the results point to an optimal solution, implement the factors and the levels of choice
and look at the Control phase to sustain the desired improvements.
Other Considerations - The DOE planning phase may include other considerations for the project team, such as
Iterative process - One large experiment does not normally reveal enough information to
make final decisions. Several iterations may be necessary so that the proper decisions may be
made and the proper value settings verified.
Measurement methods - Ensure that measurement methods are checked out prior to the
experiment to avoid errors or variations from the measurement method itself. Review
measurement systems analysis to ensure methods have been reviewed and instruments
calibrated as needed, etc.
Process control and stability - The results from an experiment are more accurate if the
process in question is relatively stable.
Inference space - If the inference space is narrow, then the experiment is focused on a subset
of a larger process – such as one specific machine, one operator, one shift, or one production
line. With a narrowed or focused inference space, the chance for “noise” (variation in output
from factors not directly related to inputs) is much reduced. If the inference is broad, the
focus is on the entire process and the chances for noise impacting the results are much
greater.
One-factor experiments
It involves only one factor or input variable. In a one-factor experiment the project team selects a starting point, or baseline set of levels for each factor, then successively varies each factor over its range with the other factors held constant at the baseline level. After each factor has been
tested, it is then easy to compare the results and conclude which factor most likely provides the
optimal results.
Often, one-factor experiments are used when the critical factor has been determined through
prior analysis or when testing all factors is too costly or not practical. In these cases, a one-factor
experiment allows the project team to focus on the one critical factor that can have the greatest
impact on the response variable.
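A minimal Python sketch of this one-factor-at-a-time procedure follows (an added illustration); the factors, levels and response function are hypothetical stand-ins for real experimental runs.

# One-factor-at-a-time: vary each factor over its range while holding the
# others at baseline. The response function is purely hypothetical.
baseline = {"temp": 100, "time": 30, "pressure": 5}
ranges = {"temp": [90, 100, 110], "time": [20, 30, 40], "pressure": [4, 5, 6]}

def response(settings):                      # stand-in for a physical run
    return -(settings["temp"] - 105) ** 2 - 2 * (settings["time"] - 35) ** 2

for factor, levels in ranges.items():
    for level in levels:
        trial = dict(baseline, **{factor: level})   # change one factor only
        print(factor, level, response(trial))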
Two-level fractional factorial experiments
The following seven step procedure will be followed
Select a process
Identify the output factors of concern
Identify the input factors and levels to be investigated
Select a design
Conduct the experiment under the predetermined conditions
Collect the data (relative to the identified outputs)
Analyze the data and draw conclusions
Randomized Block Experiments
When focusing on just one factor in multiple treatments, it is important to maintain all other
conditions as constant as possible. Since the number of tests to ensure constant conditions might
be too large to practically implement, an experiment may be divided into blocks. These blocks
represent planned groups that exhibit homogeneous characteristics. A randomized block
experiment limits each group in the experiment to exactly one and only one measurement per
treatment. For example, if an experiment is going to cover two shifts, then bias may emerge
based on the shift during which the test was conducted. A randomized block plan might measure
each item on each shift to reduce the chance for bias.
A randomized block experiment would arbitrarily select the runs to be performed during each shift. For example, suppose coolant temperature is the factor most difficult to adjust, but one that may in part reflect the impact of the change in shift; the best approach
would be to randomly select runs to be performed during each shift. The random selection might
put runs 1, 4, 5 and 8 in the first shift and runs 2, 3, 6 and 7 in the second shift. Another approach
that may be used to nullify the impact of the shift change would be to do the first three replicates
of each run during the first shift and the remaining two replicates of each run during the second
shift.
Latin Square Designs
A Latin square design involves three factors in which the combination of the levels of any one of
them and the levels of the other two appear once and only once. A Latin square design is often
used to reduce the impact of two blocking factors by balancing out their contributions. A Latin
square plan is useful to allow for two sources of non-homogeneity in the conditions affecting test
results. A third variable, the experimental treatment, is then applied to the source variables in a
balanced fashion. A basic assumption is that these block factors do not interact with the factor of
interest or with each other. This design is particularly useful when the assumptions are valid for
minimizing the amount of experimentation. The Latin square design has two limitations which
are
The number of rows, columns, and treatments must all be the same (in other words, designs
may be 3X3X3, 4X4X4, 5X5X5, etc.).
Interactions between row and column factors are not measured
An example of a Latin square design (3X3) follows.
Three aircraft with three different engine configurations are used to evaluate the maximum
altitude when flown by three different pilots (A, B, and C). In this case, the two constant sources
are the aircraft (1, 2, and 3) and the engine configuration (I, II and III). The third variable – the
pilots – is the experimental treatment and is applied to the source variables (aircraft and engine).
Notice that the condition of interest is the maximum altitude each of the pilots can attain, not the
interaction between aircraft or engine configuration. For example, if the data shows that pilot A
attains consistently higher altitudes in each of the aircraft/engine configurations, then the skills
and techniques of that pilot are the ones to be modeled. This is also an example of a fractional
factorial as only nine of the 27 possible combinations are tested in the experiment.
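Because the original figure of the square is not reproduced here, the Python sketch below (an added illustration) builds one valid 3X3 arrangement by cyclic shifts, so each pilot appears exactly once per aircraft (row) and once per engine configuration (column).

# Build a 3x3 Latin square by cyclic shifts of the pilot list.
pilots = ["A", "B", "C"]
square = [[pilots[(row + col) % 3] for col in range(3)] for row in range(3)]

print("Engine:      I   II  III")
for aircraft, assignments in enumerate(square, start=1):
    print(f"Aircraft {aircraft}:  " + "   ".join(assignments))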
Full factorial experiments
A full factorial is an experimental design which contains all levels of all factors. No possible
treatments are omitted. A fractional factorial is a balanced experimental design which contains
fewer than all combinations of all levels of all factors. For 3 factors at two levels, the full factorial consists of all 8 (2^3) combinations, while the half fraction consists of only 4 of them, chosen so that each column of the design matrix contains an equal number of plus and minus signs.
Fractional Factorial Experiments
A fractional factorial experimental design consists of a subset (fraction) of the factorial design.
Typically, the fraction is a simple proportion of the full set of possible treatment combinations.
For example, half-fractions, quarter-fractions, and so forth are common. While fractional
factorial designs require fewer runs, some degree of confounding occurs. These designs are often referred to as 2^k factorials, with k referring to the number of factors and 2 the number of levels. Using this nomenclature, a full factorial may be represented as 2^k and a half-fraction fractional factorial as 2^(k-1), representing the subset of combinations. There are many possible fractional factorial designs, and they can be represented in general as 2^(k-p), where p is the number of independent generators.
Taguchi Designs
The Taguchi approach to experiments emphasizes two items
Reduce process variation which reduces the loss to society in general.
Use a proper development approach to reduce process variation. It involves
Identify a parameter that improves a characteristic of performance.
Identify a less expensive alternative design, material, or method that provides the same level of quality at a lower cost.
7.2. Waste Elimination
The core philosophy of lean manufacturing is waste elimination. The focus is not quicker or greater production but the elimination of waste of any kind that adds no value. Companies that merely follow the philosophy of "do it faster, do it better" hide the symptoms of the problems which hamper quicker and better production. Lean manufacturing reduces costs and increases productivity by addressing the root of the problem: eliminating the "muda". Muda is a Japanese term meaning "waste"; since lean manufacturing is a Japanese management philosophy, Japanese terms and concepts are used extensively. Commonly used waste elimination techniques include
Pull System – It is the technique of producing parts according to the customer's demand. It contrasts with a Push System, in which products are built to stock as per sales forecast, without firm customer orders.
Kanban – It is a method for maintaining an orderly flow of material. Kanban cards are used
to indicate material order points, how much material is needed, from where the material is
ordered, and to where it should be delivered.
Work Cells – The technique of arranging operations and people in a cell (U-shaped, etc.)
instead of a straight assembly line for better utilization of people and improved
communication.
Total Productive Maintenance – It focuses on proactive and progressive maintenance of equipment by utilizing the knowledge of operators, equipment vendors, engineering and support persons to optimize machine performance, drastically reducing breakdowns and unscheduled and scheduled downtime, which results in improved utilization, higher throughput, and better product quality.
Total Quality Management – It is a management system for continuous improvement in all
areas of a company's operation. It is applicable to every operation of the organization and
involves employees.
Quick Changeover (or SMED - Single Minute Exchange of Dies) – It is the technique for reducing the changeover time needed to switch a process from manufacturing one specific product to another. It enables flexibility in final product offerings and also addresses smaller batch sizes.
5S or Workplace Organization – It is a systematic method for organizing and standardizing
the workplace and is applicable to every function in an organization.
Visual Controls – They provide an immediate understanding (usually thirty seconds) of a
condition or situation like what’s happening with regards to production schedule, backlog,
workflow, inventory levels, resource utilization, and quality. It includes kanban cards, lights,
color-coded tools, lines delineating work areas and product flow, etc.
7.3. Cycle-time Reduction
Cycle Time (CT) is the time taken to complete the corresponding process. Changeover Time (C/O) is the time involved in changing from one model to another. Uptime (UT) is the actual operating time divided by the available time (AT); with changeover time it is calculated as
UT = (AT-C/O)/AT
Ideal Cycle Time is the minimum cycle time that a process can be expected to achieve in optimal circumstances. It is sometimes called Design Cycle Time, Theoretical Cycle Time or Nameplate Capacity.
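A quick numeric sketch of the uptime formula (added illustration; the times are assumed, not from the text):

# UT = (AT - C/O) / AT with illustrative shift figures.
available_time = 480           # minutes in a shift (assumed)
changeover_time = 36           # minutes lost to changeover (assumed)

uptime = (available_time - changeover_time) / available_time
print(f"UT = {uptime:.2%}")    # (480 - 36) / 480 = 92.50%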
Techniques used to reduce cycle time are
Facility analysis – Determine the gap between current state and a state desired
5-S - It focuses on waste removal
TPM – Begin Total Productive Maintenance early
Value Stream Mapping – Determine the waste across the entire system
Process mapping – A more detailed map of each process
Takt time – Determine need to produce on all processes, equipment
Overall equipment effectiveness (OEE) and six losses – Determine the losses on all processes
and equipment
Line balance – Use, if necessary, with takt time and OEE
SMED – Push setup times down to reduce cycle time, batch quantity and lower costs
Pull/continuous-flow analysis – Utilize kanban and supermarkets
Cellular manufacturing/layout and flow improvement – Analyze facility and each process
Develop standardized operations – Concurrently with SMED, line balance, flow, layouts
Kaizen – Continue improving operations, giving priority to bottlenecks within the system
List of steps to reduce cycle time are
Form team (mix of lean manufacturing and relevant business experience)
Develop communication and feedback channel for everyone
Meet with everyone and explain the initiative
Begin to train all employees
Analyze quality at the source application – Poor quality stopped at the source
Implement error-proofing ideas
Takt Time
Scheduling production such that the rate of manufacturing synchronizes with the rate of customer demand is called Takt Time production. Processes must be able to scale to takt time, the rate of customer demand. For example, if takt time is 10 minutes, processes should be able to run at one unit every 10 minutes. Operation cycle times must be balanced (equal) to the takt time. Uneven work times will create waiting time and overproduction.
Takt time is a measure of customer demand expressed in units of time and is calculated as follows
Takt time = Available work-time per shift / Customer demand per shift
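A quick numeric sketch of this formula (added illustration; the figures are assumed and chosen to reproduce the 10-minute example used above):

# Takt time = available work-time per shift / customer demand per shift.
available_minutes = 450        # 8-hour shift minus breaks (assumed)
demand_per_shift = 45          # units the customer requires per shift (assumed)

takt_time = available_minutes / demand_per_shift
print(f"Takt time = {takt_time} minutes per unit")   # 10.0 minutes per unit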
Continuous-Flow Manufacturing
It refers to the concept of moving one work piece at a time between operations within a work cell, uninterrupted in time, sequence, substance or extent. Continuous flow is the method of production in which operators or machines work on single units and pass them along to the next process when requested. The most common example of continuous flow is the assembly line, where an operator at each station works on a unit and all of this work-in-process (one unit per operator or automatic machine) moves in synchronization to the next station.
One-piece Flow is also known by other names such as Make-one, Move-one, Single-piece Flow, Continuous Flow and Flow Manufacturing.
In a typical MRP batch-and-queue manufacturing environment, parts move from functional area to functional area in batches, and each processing step or set of processing steps is controlled independently by a schedule. There is little relationship between each manufacturing step and the steps immediately upstream or downstream. This results in
Large amounts of scrap when a defect is found, because of large batches of WIP
Long manufacturing lead time
Poor on-time delivery and/or lots of finished goods inventory to compensate
Large amounts of WIP
When we achieve connected flow, there is a relationship between processing steps: that relationship is either a pull system, such as a supermarket or FIFO lane, or a direct link (continuous flow). Continuous flow is the ideal method for creating connected flow because product is moved from step to step with essentially no waiting (zero WIP).
Various benefits accrue to using the continuous-flow production system, which include
Reduced waste in waiting, inventory and transportation
Less overhead in managing because:
It is more stable and predictable.
Lead times are reduced.
More responsive to customer needs, particularly as they change volume or mix
Continuous flow describes the sequence of product or transactional activities through a process one unit at a time. In contrast, batch processing creates a large number of products or works on a large number of transactions at one time, sending them together as a group through each operational step. In continuous flow, the focus is on the product or the transactional process, rather than on the waiting, transporting, and storage of either. Continuous-flow methods need short changeover times and are conducive to a pull system.
Implementation
The first step in implementing a continuous-flow cell is to decide which products or product families will go into the cells, and to determine the type of cell: product-focused or mixed model. For product-focused cells to work correctly, demand needs to be high enough for an individual product. For mixed-model cells to work, changeover times must be kept short; a general rule of thumb is that changeover time must be less than one takt time.
The next step is to calculate takt time for the set of products that will go into the cell. Takt time is
a measure of customer demand expressed in units of time and is calculated as follows
Takt time = Available work-time per shift / Customer demand per shift
Next, determine the work elements and time required for making one piece. List each step and its associated time in detail. Time each step separately several times and use the lowest repeatable time. Then, determine if the equipment to be used within the cell can meet takt time.
Considerations here include changeover times, load and unload times, and downtime.
The next step is to create a lean layout. Using the principles of 5-S (eliminating those items that are not needed and locating all items/equipment/materials that are needed at their points of use in the proper sequence), design a layout. Space between processes within a continuous-flow cell must be limited to eliminate motion waste and to prevent unwanted WIP accumulation. U-shaped cells are generally best; however, if this is impossible due to factory floor limitations, other shapes will do.
Finally, balance the cell and create standardized work for each operator within the cell.
Determine how many operators are needed to meet takt time and then split the work between
operators. Use the following equation
Number of operators = Total work content / Takt time
In most cases, an "inconvenient" remainder term will result (e.g., the user will end up with Number of Operators = 4.4 or 2.3 or 3.6 instead of 2.0, 3.0, or 4.0). If there is a remainder term, it may be
necessary to kaizen the process and reduce the work content. Other possibilities include moving
operations to the supplying process to balance the line.
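A small sketch of this calculation (added illustration; the work content figure is assumed, chosen to reproduce the 4.4 case mentioned above):

# Number of operators = total work content / takt time.
total_work_content = 44        # minutes of manual work per unit (assumed)
takt_time = 10                 # minutes per unit, from customer demand

operators = total_work_content / takt_time
print(f"Number of operators = {operators}")   # 4.4, an "inconvenient" remainder

The 0.4 remainder is the signal to kaizen the process, reduce work content, or rebalance operations as described above.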
Reducing Changeover Time
Reducing changeover time is like adding capacity, increasing profitability and can help most
manufacturers gain a competitive edge. Imagine a pit crew changing the tires on a race car. Team
members pride themselves on reducing changeover by even tenths of a second because it means
that their driver is on the road faster and in a better position to win. The same philosophy applies
to manufacturing – the quicker you are producing the next scheduled product, the more
competitive you are.
Single minute exchange of die (SMED) is a concept originated by Dr. Shigeo Shingo, a Japanese thought leader who helped evolve the Toyota Production System. It is a widely used technique for setup time reduction. Dr. Shingo is considered the world's leading expert on improving the manufacturing process and was popularly known as "Dr. Improvement" in Japan.
The SMED system is a simple yet powerful process. Dr. Shingo's main prerequisite is to use Scientific Thinking. By this term he means that one needs to keep an open mind and personally watch a process to learn what is happening.
SMED is based on the concept of muda (waste) removal: a reduction in the time taken to change over from one die to another is a saving of non-value-adding process time. Lowering the changeover time also results in lower inventories due to shorter runs; thus a saving in inventory is also realized. SMED is also often called Quick Changeover (QCO). Its development involved intensive study and improvement of setup operations in many factories and industries. It was facilitated by the recognition that set-up operations can be categorized as
Internal Setup - Machine must stop to perform the operation
External Setup - Machine can be kept running whilst operations performed
Shigeo Shingo recognizes eight techniques that should be considered in implementing SMED.
Separate Internal from External setup operations
Convert Internal to External setup
Standardize function, not shape
Use functional clamps or eliminate fasteners altogether
Use intermediate jigs
Adopt parallel operations
Eliminate adjustments
Mechanization
Basic steps in a SMED setup process are as follows
Preparation, gathering tools, materials etc. Searching for misplaced/lost tools, waiting for
tooling, obtaining process instructions, etc.
Exchanging, removing old parts/tools, etc. Removal of previous tooling and
mounting/placement of next job’s tooling
Positioning, repositioning, settings and calibration. Loosening and tightening fasteners for
tooling, loading materials into position. Calibrating tooling, establishing initial control
settings
Gauging, trial runs, and making adjustments. This category typically makes up the majority
of setup time due to re-calibrations, missing or broken tooling/fixtures, incorrect materials
being used. Additional measurements of parts until settings are “correct”
7.4. Kaizen and kaizen blitz
Kaizen (Ky 'zen) is a Japanese term meaning continuous improvement, with 'Kai' sometimes rendered as continuous and 'zen' as improvement. Kaizen is also commonly translated with 'Kai' meaning change and 'zen' meaning good, or for the better.
Kaizen is a system which involves every employee of the organization, from senior management to the lowest-ranking employee. Everyone is encouraged to come up with small improvement suggestions on a regular basis, so it is continuous rather than a monthly or yearly activity. Companies that have implemented Kaizen may receive 60 to 70 suggestions per employee per year, which are written down, shared and implemented.
In most cases the ideas are not for major changes; Kaizen focuses on making small changes that improve productivity, safety and effectiveness while reducing waste on a regular basis. Suggestions are also not limited to a specific area like production or marketing; changes can be made anywhere improvement is needed. The Kaizen philosophy is to "do it better, make it better, and improve it even if it isn't broken, because if we don't, we can't compete with those who do."
Kaizen encompasses many continuous improvement components like Quality circles,
automation, suggestion systems, just-in-time delivery, Kanban and 5S. Kaizen involves setting
standards and then continually improving those standards with providing needed training to
achieve the standards and maintain them on an on-going basis.
Gemba - Gemba is a Japanese word for 'real place,' where the value-adding activities to satisfy the customer are carried out. The Gemba can be where the product is developed, produced or sold. In the service sectors, Gemba is where the customers come into contact with the services offered. Gemba is important to Kaizen because most managers prefer their desks and thus come into contact with reality only through reports or meetings.
Gembutsu is a Japanese word for the actual physical or tangible things, such as out-of-order equipment or scrap. If a machine is down or a client is complaining, the machine itself is the gembutsu: go to the Gemba and have a good look at it. By looking at the machine and asking the question "why" several times, the reason for the breakdown can often be found on the spot.
Kaizen Steps
The Kaizen process follows the steps listed below
Define the problem – Defining the problem is the first activity to undertake when initiating the Kaizen process.
Gemba walk and document the current situation – This involves making observations and conducting meetings so as to gather information and identify inefficiencies in the present processes, especially at the Gemba, the places where the value-adding processes take place.
Visualize the ideal situation – Develop an ideal blueprint for the future situation that is achievable by implementing Kaizen.
Define measurement targets – After finalizing the blueprint for the ideal solution to the present inefficiencies, set measurable targets so as to quantify the gains from the Kaizen implementation.
Brainstorm solutions to the problem – Brainstorming helps in listing possible solutions whose implementation will close the gap between the current and the ideal situation.
Develop Kaizen plan
Implement plan
Measure, record and compare results to targets
Prepare summary documents
Create short term action plan, on-going standards and sustaining plan
7.5. Theory of constraints (TOC)
It is a methodology for identifying the most important limiting factor (i.e. constraint) that stands
in the way of achieving a goal and then systematically improving that constraint until it is no
longer the limiting factor. It was first published in The Goal by Eliyahu M. Goldratt and Jeff Cox
in 1984. TOC conceptually models the manufacturing system as a chain, and advocates focusing
on its weakest link. Goldratt defines a five-step process that a change agent can use to strengthen
the weakest link, or links, which includes
Identify the System Constraint - The part of a system that constitutes its weakest link can be
either physical or a policy.
Decide How to Exploit the Constraint - Goldratt instructs the change agent to obtain as much
capability as possible from a constraining component, without undergoing expensive
changes.
Subordinate Everything Else - The non-constraint components of the system must be
adjusted to a "setting" that will enable the constraint to operate at maximum effectiveness.
Once this has been done, the overall system is evaluated to determine if the constraint has
shifted to another component. If the constraint has been eliminated, the change agent jumps
to step five.
Elevate the Constraint - "Elevating" the constraint refers to taking whatever action is
necessary to eliminate the constraint. This step is only considered if steps two and three have
not been successful. Major changes to the existing system are considered at this step.
Return to Step One, But Beware of "Inertia"
The process of delivering a product or service is very much like a chain; each resource and function are linked. It takes only one element in the system to fail to cause the entire system to fail. In order to improve the system, we must optimize the weakest link: the constraint, or drum. All other resources are subordinated to that. In scheduling terms, we
Develop a detailed schedule for the drum resource
Add buffers to protect the performance of that resource
Synchronize the schedule of all other resources to the drum schedule
Drum Buffer Rope
Drum Buffer Rope (DBR) is a planning and scheduling solution derived from the Theory of
Constraints (ToC). The fundamental assumption of DBR is that within any plant there is one or a
limited number of scarce resources which control the overall output of that plant. This is the
“drum”, which sets the pace for all other resources.
In order to maximize the output of the system, planning and execution behaviors are focused on exploiting the drum, protecting it against disruption through the use of "time buffers", and synchronizing or subordinating all other resources and decisions to the activity of the drum through a mechanism that is akin to a "rope".
The buffer is a period of time to protect the drum resource from problems that occur upstream of the drum operation. Its effect is to resynchronize the work as it flows through the plant. The buffer compensates for process variation and makes DBR schedules very stable, immune to most problems. It has the additional effect of eliminating the need for 100% accurate
data for scheduling. It allows the user to produce a “good enough” schedule that will generate
superior results over almost every other scheduling method.
Since the buffer aggregates variation, it also allows the plant to be operated with much lower levels of work in process, producing dramatic reductions in production lead times and generating a lot of cash that was tied up in inventory. The "extra" capacity at the non-constraints helps, too.
Since the plant is not overloaded with work it cannot do, the resources can “catch up” when
problems strike, without affecting the drum or global throughput. After the drum has been
scheduled, material release and shipping are connected to it, using the buffer offset. Material is
released at the same rate as the drum can consume it. Orders are shipped at the rate of drum
production.
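As a minimal sketch of the buffer-offset logic just described (added illustration; the dates and buffer lengths are assumptions), material release and shipping can be derived directly from the drum schedule:

# Derive the material release ("rope") and ship dates from the drum schedule.
from datetime import date, timedelta

drum_start = date(2024, 6, 10)            # when the constraint works the order (assumed)
constraint_buffer = timedelta(days=3)     # protects the drum from upstream delays
shipping_buffer = timedelta(days=2)       # protects the committed ship date

material_release = drum_start - constraint_buffer
ship_date = drum_start + shipping_buffer

print("Release material:", material_release)
print("Ship order:     ", ship_date)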
DBR Scheduling Algorithm - The process of scheduling the factory first focuses on the primary
objective of the facility, to ship to committed delivery date. Thus we first find the due date of the
order, and add a shipping buffer to create an “ideal” finish date with confidence.
From this planned finish date, the order is backward scheduled to identify an “ideal” time to
work on the drum resource, a “latest due by” (LBD) date.
All orders are scheduled to fit on the drum using two passes; first, by assigning all batches an
ideal placement on the drum schedule.
When the batch does not fit, i.e., there is another occupying its space, the batch is scheduled
earlier in time so the order due date is not violated. This may result in some jobs starting before
today, and not all jobs may be ready to start at the drum resource.
The drum is then forward scheduled to resolve these conflicts, and potentially late jobs are identified.
After the drum is scheduled, the operations after the drum are scheduled forward in time from the
drum completion date. Then, the jobs feeding the drum are backward scheduled from the start of
the resource buffer.
7.6. Implementation
Implementation is the realization of an application, or execution of a plan, idea, model, design,
specification, standard, algorithm, or policy.
You cannot start planning for implementation while you are actually implementing; many of these activities need to be completed ahead of time. The major steps associated with implementation are
Prepare the infrastructure. It is important that the characteristics of the production
environment be accounted for before initiating implementation. When you are ready for
implementation, the production infrastructure needs to be in place.
Coordinate with the organizations involved in implementation. This usually involves communicating with the client.
Implement training. Many solutions require users to attend training or more informal coaching sessions so that end users can become acquainted with the new solution.
Implement new processes and procedures. The solution is finally implemented; all procedures are followed and implementation takes place as per the plan worked out earlier.
Perform final verification. Test the implemented solution to ensure everything is working as expected; achievements against the stated goals also need to be measured.
Monitor the solution. Usually the six sigma project team will spend some period of time
monitoring the implemented solution. If there are problems that come up immediately after
implementation, the project team should address and fix them.
After the root causes have been identified, various criteria are chosen to assess the effectiveness of the candidate solutions; this can be accomplished by assigning a weight value to each criterion against all identified solutions. Various approaches that provide actionable insights are
Pilot experiment - A pilot experiment, also called a pilot study, is a small scale preliminary
study conducted in order to evaluate feasibility, time, cost, adverse events, and effect size
(statistical variability) in an attempt to predict an appropriate sample size and improve upon
the study design prior to performance of a full-scale research project. Pilot studies, therefore,
may not be appropriate for case studies.
Pilot experiments are frequently carried out before large-scale quantitative research, in an
attempt to avoid time and money being wasted on an inadequately designed project. A pilot
study is usually carried out on members of the relevant population, but not on those who will
form part of the final sample. This is because it may influence the later behaviour of research
subjects if they have already been involved in the research.
Simulation is the imitation of the operation of a real-world process or system over time. The
act of simulating something first requires that a model be developed; this model represents
the key characteristics or behaviors/functions of the selected physical or abstract system or
process. The model represents the system itself, whereas the simulation represents the
operation of the system over time.
Simulation is used in many contexts, such as simulation of technology for performance
optimization, safety engineering, testing, training, education, and video games. Often,
computer experiments are used to study simulation models. Simulation is also used with
scientific modeling of natural systems or human systems to gain insight into their
functioning. Simulation can be used to show the eventual real effects of alternative
conditions and courses of action. Simulation is also used when the real system cannot be
engaged, because it may not be accessible, or it may be dangerous or unacceptable to engage,
or it is being designed but not yet built, or it may simply not exist.
Key issues in simulation include acquisition of valid source information about the relevant
selection of key characteristics and behaviours, the use of simplifying approximations and
assumptions within the simulation, and fidelity and validity of the simulation outcomes.
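To make the idea concrete, here is a minimal Monte Carlo sketch (added illustration; the two process steps, their distributions and parameters are assumptions, not data from the text):

# Simulate total cycle time of a two-step process to compare alternatives
# before changing the real system.
import random

def simulate_cycle_time(runs=10_000):
    totals = []
    for _ in range(runs):
        step1 = random.gauss(12.0, 1.5)     # machining time, minutes (assumed)
        step2 = random.gauss(8.0, 1.0)      # inspection time, minutes (assumed)
        totals.append(step1 + step2)
    return sum(totals) / len(totals)

print(f"Mean simulated cycle time: {simulate_cycle_time():.1f} minutes")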
Model, also referred to as a physical model, is a smaller or larger physical copy of an object.
The object being modeled may be small (for example, an atom) or large (for example, the
Solar System).
The geometry of the model and the object it represents are often similar in the sense that one
is a rescaling of the other; in such cases the scale is an important characteristic. However, in
many cases the similarity is only approximate or even intentionally distorted. Sometimes the
distortion is systematic with e.g. a fixed scale horizontally and a larger fixed scale vertically
when modeling topography of a large area (as opposed to a model of a smaller mountain
region, which may well use the same scale horizontally and vertically, and show the true
slopes).
A prototype is an early sample, model, or release of a product built to test a concept or
process or to act as a thing to be replicated or learned from. It is a term used in a variety of
contexts, including semantics, design, electronics, and software programming. A prototype is
designed to test and trial a new design to enhance precision by system analysts and users.
Prototyping serves to provide specifications for a real, working system rather than a
theoretical one.
There is no general agreement on what constitutes a "prototype" and the word is often used
interchangeably with the word "model" which can cause confusion. In general, "prototypes"
fall into five basic categories:
Proof-of-Principle Prototype (Model)
Form Study Prototype (Model)
User Experience Prototype (Model).
Visual Prototype (Model)
Functional Prototype (Model)
Communication Plan
A communication plan is a road map for getting your message across to your audience. The plan is an essential management tool. Spending time planning your approach will improve your ability to achieve the desired outcome.
It explains how to convey the right message, from the right communicator, to the right audience,
through the right channel, at the right time. It addresses the six basic elements of
communications: communicator, message, communication channel, feedback mechanism,
receiver/audience, and time frame. A communication plan includes
“Who” - the target audiences
“What” – the key messages that are trying to be articulated
“When” – timing, it will specify the appropriate time of delivery for each message
“Why” – the desired outcomes
“How” - the communication vehicle (how the message will be delivered)
“By whom” - the sender (determining who will deliver the information and how he or she is
chosen)
7.7. Risk Analysis and Mitigation
Risk is an uncertain event or condition that, if it occurs, has a positive or a negative effect on a
project's objectives. There are two things to consider: first, uncertainty, since users do not yet know whether the event will occur but are planning for it; and second, the event can be positive or negative.
As an example, let’s consider a Six Sigma project that dealt with a sales process. Certainly the
team considered the impact of low sales results, lost customers, etc. But did they consider
dramatically increased sales — far above what the target was?
Risk management refers to the process involved in analysing and effectively mitigating the
amount of risk identified within the company which can be supported by the tools of Six Sigma.
Six Sigma risk management can provide a structured approach for a company to manage and
accept particular risks for achieving strategic opportunities.
Feasibility Study
The feasibility study is an evaluation and analysis of the potential of a proposed project which is
based on extensive investigation and research to support the process of decision making.
Feasibility studies aim to objectively and rationally uncover the strengths and weaknesses of an
existing business or proposed venture, opportunities and threats present in the environment, the
resources required to carry through, and ultimately the prospects for success. In its simplest
terms, the two criteria to judge feasibility are cost required and value to be attained.
Risk Analysis
Risk analysis is a technique used to identify and assess factors that may jeopardize the success of
a project or achieving a goal. This technique also helps to define preventive measures to reduce
the probability of these factors from occurring and identify countermeasures to successfully deal
with these constraints when they develop to avert possible negative effects on the
competitiveness of the company.
Expected value or profit is used for calculating risk. Suppose random variable X can take value x1 with probability p1, value x2 with probability p2, and so on, up to value xk with probability pk. Then the expectation of this random variable X is defined as
E(X) = x1p1 + x2p2 + ... + xkpk
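A small Python sketch of this calculation (added illustration; the profit outcomes and probabilities are hypothetical):

# E(X) = sum of each outcome times its probability.
outcomes = [(-50_000, 0.2),    # (profit, probability) if the risk materializes
            (10_000, 0.5),
            (40_000, 0.3)]

expected_value = sum(x * p for x, p in outcomes)
print(f"E(X) = {expected_value}")    # -10000 + 5000 + 12000 = 7000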
SWOT
SWOT is an acronym for strengths, weaknesses, opportunities and threats; a SWOT analysis is a way of evaluating how these internal and external factors affect an organization.
SWOT analysis requires that a comprehensive appraisal of internal and external situations be
undertaken before suitable strategic options can be determined. Good strategies are built on the
strengths of a company and on exploiting opportunities.
Strengths and Weaknesses - A strength is something that the company is good at doing. Strengths that a company may enjoy include engineering expertise, technical patents, a skilled workforce, a solid financial position and a reputation for quality.
A weakness is something that the firm lacks or is a condition that puts it at a disadvantage.
Analysis of weaknesses should cover the following key areas
An evaluation of each subunit of business
The status of tracking or control systems for the critical success indicators.
An indication of the company’s level of creativity, risk taking and competitive approach.
An assessment of the resources available to implement plans.
An analysis of the current organizational culture and of the company’s way of doing
business.
Opportunities and Threats - The firm must be able to assess the external environment in
preparation for challenges. The external environment can include assessment of the following
Economic environment
Socio-political environment
Social environment
Technological environment
Competitive environment
A firm’s external world will provide opportunities and threats. The strategy must match up with
Opportunities suited to the firm’s capabilities
Defenses against external threats
Changes to the external environment
PEST
PEST analysis (Political, Economic, Social and Technological analysis) describes a framework of macro-environmental factors used in the environmental scanning component of strategic management. The basic PEST analysis includes four factors
Political factors are basically to what degree the government intervenes in the economy.
Specifically, political factors include areas such as tax policy, labor law, environmental law,
trade restrictions, tariffs, and political stability. Political factors may also include goods and
services which the government wants to provide or be provided (merit goods) and those that
the government does not want to be provided (demerit goods). Furthermore, governments
have great influence on the health, education, and infrastructure of a nation.
Economic factors include economic growth, interest rates, exchange rates and the inflation
rate. These factors have major impacts on how businesses operate and make decisions. For
example, interest rates affect a firm's cost of capital and therefore to what extent a business
grows and expands. Exchange rates affect the costs of exporting goods and the supply and
price of imported goods in an economy.
Social factors include the cultural aspects and include health consciousness, population
growth rate, age distribution, career attitudes and emphasis on safety. Trends in social factors
affect the demand for a company's products and how that company operates. For example, an
aging population may imply a smaller and less-willing workforce (thus increasing the cost of
labor). Furthermore, companies may change various management strategies to adapt to these
social trends (such as recruiting older workers).
Technological factors include technological aspects such as R&D activity, automation,
technology incentives and the rate of technological change. They can determine barriers to
entry, minimum efficient production level and influence outsourcing decisions. Furthermore,
technological shifts can affect costs, quality, and lead to innovation.
8. CONTROL PHASE
The Control phase focuses on how to select, construct, interpret and apply critical aspects of
statistical process control, lean tools and MSA.
8.1. Statistical Process Control (SPC)
Statistical process control (SPC) is a technique for applying statistical analysis to measure,
monitor, and control processes.
Objectives
The major component of SPC is the use of control charting methods. Pioneered by Walter Shewhart in the 1920s and later enhanced by W. Edwards Deming, statistical process control (SPC) is a statistical method for measuring, monitoring, controlling, and improving a process. The basic rule of SPC is to leave the variations from common causes to chance, but to identify and eliminate special causes. Since all processes are subject to variation, SPC relies on statistical evidence instead of on intuition.
SPC focuses on optimizing continuous improvement by using statistical tools for analyzing data,
making inferences about process behavior, and then making appropriate decisions. Variation is
defined as "a change in the process data; a characteristic or a function that results from some
cause." Statistical process control begins with the recognition that all processes contain variation.
No matter how consistent the production appears to be, measurement of the process data will
indicate a level of dispersion or variability in the data. The management and improvement of
variation are at the very heart of the strategy of statistical process control.
The basic assumption made in SPC is that all processes are subject to variation. This variation
may be classified as one of two types, random or chance cause variation and assignable cause
variation. Benefits of statistical process control include the ability to monitor a stable process and
identify if changes occur that are due to factors other than random variation. When assignable
cause variation does occur, the statistical analysis facilitates identification of the source so that it
may be eliminated. The objectives of statistical process control are to determine process
capability, monitor processes and identify whether the process is operating as expected or
whether the process has changed and corrective action is required.
The objectives of SPC are
Using the data generated by the process, called the “voice of the process,” to inform the Six
Sigma Black Belt and team members when intervention is or is not required.
Reducing variation, increasing knowledge about a process and steering the process in the
desired way.
Detecting quickly the occurrence of special causes of process shifts so that investigation of
the process and corrective action may be undertaken before many nonconforming (defective)
units are manufactured.
SPC accrues various benefits as it
Monitor processes for maintaining control
Detect special causes
Serve as decision-making aids
Reduce the need for inspection
Increase product consistency
Improve product quality
Decrease scrap and rework
Increase production output
Streamline processes
Interpretation of control charts may be used as a predictive tool to indicate when changes are
required prior to production of out of tolerance material. As an example, in a machining
operation, tool wear can cause gradual increases or decreases in a part dimension. Observation of
a trend in the affected dimension allows the operator to replace the worn tool before defective
parts are manufactured. An additional benefit of control charts is to monitor continuous
improvement efforts. When process changes are made which reduce variation, the control chart
can be used to determine if the changes were effective. Costs associated with SPC include the
selection of the variable(s) or attribute(s) to monitor, setting up the control charts and data
collection system, training of operators, and investigation and correction when data values fall
outside control limits.
The basic rule of SPC is that variation from common causes (controlled) should be left to
chance, but special causes (uncontrolled) should be identified and eliminated. Shewhart called
the causes "chance" and "assignable" respectively; however, the terms common and special are
more frequently used today.
Selection of Variables
The risk of charting many parameters is that the operator will spend so much time and effort
completing the charts, that the actual process becomes secondary. When a change does occur, it
will most likely be overlooked. In the ideal case, one process parameter is the most critical, and
is indicative of the process as a whole. Some specifications identify this as a critical to quality
(CTQ) characteristic. CTQ may also be identified as a key characteristic.
Key process input variables (KPIVs) may be analyzed to determine the degree of their effect on a
process. For some processes, an input variable such as temperature may be so significant that
control charting is mandated. Key process output variables (KPOVs) are candidates both for
determining process capability and process monitoring using control charting.
Design of experiments (DOE) and analysis of variance (ANOVA) methods may also be used to
identify variable(s) that are most significant to process control.
As a result of the Improve phase of the DMAIC process, the project team has implemented
improvements to the variables or inputs (Xs) in the process causing variation in the output (Y).
Once these improvements are in place, it is important to monitor the process. Select statistically
and practically significant variables for monitoring that are critical to quality (CTQ) when
establishing control charts. It is possible to monitor multiple variables using separate control
charts.
Common causes - Common causes are sources of process variation that are inherent in a process
over time. A process that has only common causes operating is said to be in statistical control. A
common cause is sometimes referred to as a "chance cause" or "random cause". Examples
include variation in raw material, variation in ambient temperature and humidity, variation in
electrical or pneumatic sources, variation within equipment (worn bearings) or variation in the
input data.
Special causes - Special causes or assignable causes are sources of process variation (other than
inherent process variation) periodically disrupting the process. A process that has special causes
operating is said to lack statistical control. Examples include tool wear, large changes in raw
materials or broken equipment.
Type I SPC Error - It occurs when we treat a behavior as a special cause when no change has
occurred in the process. It is also referred to as "over control".
Type II SPC Error - It occurs when we do not treat a behavior as a special cause when in fact it
is a special cause. It is also referred to as "under control".
Defect - An undesirable result on a product; also known as "a nonconformity".
Defective - An entire unit failing to meet specifications; also known as "a nonconformance".
Rational sub-grouping
Rational sub-grouping is a subset defined by a specific factor. Within a rational subgroup,
variation is assumed to come from conditions producing random effects; between subgroups,
rational sub-grouping identifies and separates variation due to special causes. Rational
subgroups are our attempt to be sure that we are asking the right questions about the data.
Selecting the appropriate control chart to use depends on the subgroups.
A control chart provides a statistical test to determine if the variation from sample to sample is
consistent with the average variation within the sample. Generally, subgroups are selected in a
way that makes each subgroup as homogeneous as possible. This provides the maximum
opportunity for estimating expected variation from one subgroup to another. In production
control charting, it is very important to maintain the order of production. Data from a charted
process, which shows out-of-control conditions, may be mixed to create new X̄-R charts which
demonstrate remarkable control. By mixing, chance causes are substituted for the original
assignable causes as a basis for the differences among subgroups.
Sub-grouping Schemes - Where order of production is used as a basis for sub-grouping, two
fundamentally different approaches are possible: the subgroup may consist of product all
produced as nearly as possible at one time, or the subgroup may consist of product intended to
be representative of all the production over a given period of time.
The second method is sometimes preferred where one of the purposes of the control chart is to
influence decisions on acceptance of product. In most cases, more useful information will be
obtained from five subgroups of 5 than from one subgroup of 25. In large subgroups, such as 25,
there is likely to be too much opportunity for a process change within the subgroup.
The steps for sub-grouping are
Select the measurement
Identify the best data to track.
Focus on the vital few, not the trivial many.
Select the best data for a few charts.
Produce elements of the subgroup under closely similar, essentially identical conditions.
Identify number of subgroups
Establishing rational subgroups is important for dividing observations.
Compute statistics for each subgroup separately before plotting on the control chart.
Aim for a minimal chance of variation within each subgroup.
Sources of Variability - The long term variation in a product will, for convenience, be termed
the product (or process) spread. One of the objectives of control charting is to markedly reduce
the lot-to-lot variability. The distribution of products flowing from different streams may
produce variabilities greater than those of individual streams. It may be necessary to analyze
each stream-to-stream entity separately. Another main objective of control charting is to reduce
time-to-time variation.
Physical inspection measurements taken at different points on a given unit are referred to as
within-piece variability. Another source of variability is the piece-to-piece variation. Often, the
inherent error of measurement is significant. This error consists of both human and equipment
components. The remaining variability is referred to as the inherent process capability.
For example, the project team desires to monitor a process that manufactures PET (plastic) bottles
for the beverage industry. The bottles are injection-molded on a multi-cavity carousel. The
particular carousel contains 4 cavities and the team initially decides to take 3 bottles from each
cavity each hour and measure a critical characteristic.
Option 1 - Every hour, take 3 samples (subgroups) of 4 bottles (n= 4) at random. Plot the
process (on one chart).
Option 2 - Every hour, take 3 samples (subgroups) of 4 bottles (n= 4) or one bottle from each
cavity. Plot the process on one chart.
Option 3 - Every hour, take 4 samples (subgroups) and 3 bottles (n= 3) with each sample
from a different cavity. Plot each cavity on separate charts.
Control Chart Selection
Control charts are the most powerful tools to analyze variation in most processes - either
manufacturing or administrative. Control charts were originated by Walter Shewhart in 1931
with a publication called Economic Control of Quality of Manufactured Product.
Originated by Walter Shewhart, control charts are a type of graph for studying how a process
changes over time. By comparing data points to a central line average, with an upper control
limit (UCL) and lower control limit (LCL), users can note variation, track common causes, and
seek special causes. Alternative names are "statistical process control charts" and "Shewhart
charts". Run charts display data measures over time without the central line average and the
limits.
Control charts using variables data are line graphs that display a dynamic picture of process
behavior. Control charts for attributes data require 25 or more subgroups to calculate the control
limits. A process which is under statistical control is characterized by plot points that do not
exceed the upper or lower control limits. When a process is in control, it is predictable.
Control charts have various benefits: the addition of calculated control limits facilitates the
detection of special or assignable causes of variation; the current process can be displayed and
compared to the improved process by identifying shifts in either average or variation; and, since
every process varies within predictable limits, identifying assignable causes and addressing them
will save money.
Control charts are used to control ongoing processes by finding and correcting problems as they
occur, to predict the expected range of outcomes from a process, to determine if a process is in
statistical control, to differentiate variation due to non-routine events (special causes) from
common-cause variation, and to determine whether a quality improvement effort should aim to
prevent specific problems or to make fundamental process changes.
Types of control charts - Different types of control charts exist depending on the measurement
used and two basic categories are
Variable charts - These are constructed from variable data (data that consists of measurements
like weight, length, etc.). Variable data contains more information than data that simply qualifies
or counts something. Consequently, variable charts are some of the most powerful tools in
quality improvement. Samples are taken in subgroups of 2-10 at predetermined intervals, with
the statistic (mean, range, or standard deviation) calculated and recorded on the chart.
Various types of variable charts are
X̄-R Charts (when data is readily available)
Run Charts (limited single-point data)
MX̄-MR Charts (moving average/moving range)
X-MR Charts (I-MR, individual moving range)
X̄-S Charts (when sigma is readily available)
Median Charts
Short Run Charts
Attribute charts - These use attribute data (data that counts items, such as the number of rejects
or the number of errors). Control charts based on attribute data are generally less powerful
and sometimes more difficult to interpret than variable charts. Samples are taken from lots of
material where the number of defective units in the sample are counted (for p and np-charts)
or the number of individual defects are counted for a defined unit (c and u-charts). Various
types of attribute charts are
p Charts (for defectives - sample size varies)
np Charts (for defectives - sample size fixed)
c Charts (for defects - sample size fixed)
u Charts (for defects - sample size varies)
The structure of both types of control charts is similar, but the statistical construction of the
control limits is quite different due to the differences in the distributions in each.
X̄-R Chart – The X̄ and R (average and range) chart is the most widely used by companies as
they implement statistical process control. These charts are very useful because they are
sensitive enough to detect early signals of process drift or target shift. Its main advantages are
that it is easy to construct and interpret, that it provides the information needed to perform
process capability studies, that it can be used when a process can be sufficiently monitored by
collecting variable data in small subgroups, and that it can be sensitive to process changes and
provide early warning, giving an opportunity to act before the situation worsens. But it has the
disadvantage of only being usable when data can be collected in subgroups.
The CL is determined by averaging the X̄s as X̿ = (X̄1 + X̄2 + ... + X̄n)/n, where n is the number of
samples. The UCL and the LCL are UCL = X̿ + 3σ, CL = X̿ and LCL = X̿ − 3σ. The mean
range and the standard deviation for normally distributed data are linked as σ = R̄/d2, where the
constant d2 is a function of n. Various terms are used in this type of chart, which include
n - Sample size (subgroup size)
X - A reading (the data)
X̄ - Average of readings in a sample
X̿ - Average of all the X̄s. It is the value of the central line on the X̄ chart.
R - The range: the difference between the largest and smallest value in each sample.
R̄ - Average of all the Rs. It is the value of the central line on the R chart.
UCL/LCL - Upper and lower control limits. The control boundaries for 99.73% of the
population. They are not specification limits.
Steps for constructing X - R Charts
Determine the sample size (n = 3, 4, or 5) and the frequency of sampling.
Collect 20 to 25 sets of time-sequenced samples.
Calculate the average X̄ for each set of samples.
Calculate the range R for each set of samples.
Calculate X̿ (the average of all the X̄s). This is the center line of the X̄ chart.
Calculate R̄ (the average of all the Rs). This is the center line of the R chart.
Calculate the control limits as UCL = X̿ + A2R̄ and LCL = X̿ − A2R̄ for the X̄ chart, and
UCL = D4R̄ and LCL = D3R̄ for the R chart (A2, D3 and D4 are table constants based on the
subgroup size).
Plot the data and interpret the chart for special or assignable causes.
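A minimal sketch in Python of the X̄-R calculations above, using hypothetical measurement data and the standard table constants for a subgroup size of n = 5 (A2 = 0.577, D3 = 0, D4 = 2.114):

subgroups = [
    [5.02, 4.98, 5.01, 5.00, 4.99],   # each row is one time-sequenced sample of n = 5
    [5.03, 5.00, 4.97, 5.01, 5.02],
    [4.99, 5.01, 5.00, 4.98, 5.03],
]  # in practice, collect 20 to 25 sets

A2, D3, D4 = 0.577, 0.0, 2.114        # table constants for n = 5

xbars = [sum(s) / len(s) for s in subgroups]    # X-bar for each sample
ranges = [max(s) - min(s) for s in subgroups]   # R for each sample

x_dbar = sum(xbars) / len(xbars)      # X-double-bar: center line of the X-bar chart
r_bar = sum(ranges) / len(ranges)     # R-bar: center line of the R chart

ucl_x, lcl_x = x_dbar + A2 * r_bar, x_dbar - A2 * r_bar   # X-bar chart limits
ucl_r, lcl_r = D4 * r_bar, D3 * r_bar                     # R chart limits

print(f"X-bar chart: CL={x_dbar:.4f} UCL={ucl_x:.4f} LCL={lcl_x:.4f}")
print(f"R chart:     CL={r_bar:.4f} UCL={ucl_r:.4f} LCL={lcl_r:.4f}")

Plot points falling outside these limits would then be investigated for assignable causes.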
X-Bar and Sigma Charts - X̄ and sigma (S) charts are often used for increased
sensitivity to variation (especially when larger sample sizes are used). The sample standard
deviation (S) formula is

S = √( Σ(Xi − X̄)² / (n − 1) )

The X̄ chart is constructed in the same way as described earlier, except that the average sample
standard deviation (S̄) is used for the control limit calculations via the following formulas

UCL = X̿ + A3S̄ and LCL = X̿ − A3S̄ (for the X̄ chart)
UCL = B4S̄ and LCL = B3S̄ (for the S chart)

The estimated standard deviation (σ̂), called sigma hat, can be calculated by σ̂ = S̄/c4.

The A3, B3, B4 and c4 factors are based on sample size and are obtained from tables. The X̄
and S (average and standard deviation) chart is complex and not used extensively. Its advantages
are that, when the subgroup sizes are fairly large (greater than 10), it is often beneficial to
consider the average and standard deviation chart, since using the range as the measure of
dispersion may not yield a good estimate of process variability, and that it may be used when
more sensitivity in detecting a process shift is desired, as in the case where the product being
manufactured is quite expensive and any change in the process could either cause quality
problems or add unnecessary costs. Its disadvantages are that it may issue false signals at a
much higher rate than other types of control charts and that it is complex to construct and use.
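A minimal sketch in Python of the X̄-S calculations, assuming hypothetical data and the standard table constants for a subgroup size of n = 10 (A3 = 0.975, B3 = 0.284, B4 = 1.716, c4 = 0.9727):

import math

subgroups = [
    [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1, 10.0, 9.9],
    [10.0, 10.2, 9.9, 10.1, 9.8, 10.0, 10.1, 9.9, 10.2, 10.0],
]

A3, B3, B4, c4 = 0.975, 0.284, 1.716, 0.9727   # table constants for n = 10

def sample_std(s):                    # S = sqrt(sum((x - xbar)^2) / (n - 1))
    xbar = sum(s) / len(s)
    return math.sqrt(sum((x - xbar) ** 2 for x in s) / (len(s) - 1))

x_dbar = sum(sum(s) / len(s) for s in subgroups) / len(subgroups)  # grand average
s_bar = sum(sample_std(s) for s in subgroups) / len(subgroups)     # average std dev

ucl_x, lcl_x = x_dbar + A3 * s_bar, x_dbar - A3 * s_bar   # X-bar chart limits
ucl_s, lcl_s = B4 * s_bar, B3 * s_bar                     # S chart limits
sigma_hat = s_bar / c4                                    # estimated process sigma

print(ucl_x, lcl_x, ucl_s, lcl_s, sigma_hat)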
Median (X-tilde and R) Chart - The median control chart, or X̃ and R chart, is calculated using
the same formulas as the X̄ and R chart. The median control chart differs from the average
and range chart in that it is easier to use and requires fewer calculations, because the median of
the sample is plotted rather than the average. Typically, arithmetic ease is the advantage of using
a median chart. It is easy to use and shows the process variation, the median and the spread, but
it has the disadvantages of being less efficient (exhibiting more variation) than the X̄ and R
chart and of making trends and other anomalies in the range difficult to detect.
Moving Range - Depending on the type of data available and the situation, various control
charts may be applicable. Given the unknowns of future projects and situations, the project team
may prefer to use the individual and moving range (X-MR, I-MR) control chart. The project
team often use this chart with limited data, such as when production rates are slow, testing costs
are very high, or there is a high level of uncertainty relative to future projects. It has also found
use where data are plentiful, such as in the case of automatic testing of every unit where no basis
exists for establishing subgroups.
MX̄-MR (moving average-moving range) charts are a variation of the X̄-R chart used where data
is less readily available. There are several construction techniques; the one most sensitive to
change is n = 3. Control limits are calculated using the X̄-R formulas and factors.
X-MR Chart - The individual and moving range chart (X-MR, I-MR) is applicable when the
sample size used for process monitoring is n= 1. X-MR charts are used in various applications
like the early stages of a process when one is not quite sure of the structure of the process data,
when analyzing every unit, slow production rates with long intervals between observations, when
differences in measurements are too small to create an objective difference, when measurements
differ only because of laboratory or analysis error or when taking multiple measurements on the
same unit (as thickness measurements on different places of a sheet).
Control charts plotting individual readings and a moving range may be used for short runs and in
the case of destructive testing. X-MR charts are also known as I-MR, individual moving range
charts. The formulas are

UCL = X̄ + 2.66·MR̄ and LCL = X̄ − 2.66·MR̄ (for the individuals chart, where 2.66 = 3/d2
with d2 = 1.128 for a moving range of n = 2)
UCL = 3.267·MR̄ and LCL = 0 (for the moving range chart)

The control limits for the range chart are calculated exactly as for the X̄-R chart. The X-MR
chart is the only control chart which may have specification limits shown. MX̄-MR charts with
n = 3 are recommended by the authors when information is limited.
An X-MR (individuals and moving range) chart is useful because it is made with individual
measures (when the subgroup size is one). The X-MR chart is applicable to many different
situations, since there are many scenarios where the most obvious subgroup size is one (monthly
data, etc.).
It has the advantages of being useful even in a situation with small amounts of data, being easy
to construct, and being useful in the early stages of a new process when not much is known
about the structure of the data, but it has the disadvantage that it cannot discern between
common cause and special cause variation.
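A minimal sketch in Python of the X-MR limit calculations, using hypothetical individual readings and the standard n = 2 moving-range constants (2.66 = 3/d2 and 3.267 = D4):

readings = [10.2, 10.4, 9.9, 10.1, 10.6, 10.0, 10.3, 9.8, 10.2, 10.1]

moving_ranges = [abs(b - a) for a, b in zip(readings, readings[1:])]
x_bar = sum(readings) / len(readings)             # center line, individuals chart
mr_bar = sum(moving_ranges) / len(moving_ranges)  # center line, moving range chart

ucl_x, lcl_x = x_bar + 2.66 * mr_bar, x_bar - 2.66 * mr_bar  # individuals limits
ucl_mr, lcl_mr = 3.267 * mr_bar, 0.0                         # moving range limits

print(f"X:  CL={x_bar:.3f} UCL={ucl_x:.3f} LCL={lcl_x:.3f}")
print(f"MR: CL={mr_bar:.3f} UCL={ucl_mr:.3f} LCL={lcl_mr:.3f}")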
Attribute Charts - Attributes are discrete, counted data, such as defects or no defects. Only one
chart is plotted for attributes.
Chart   Records                      Subgroup size
p       Fraction defective           Varies
np      Number of defectives         Constant
c       Number of defects            Constant
u       Number of defects per unit   Varies
Normally the subgroup size is greater than 50 (for p charts). The average number of
defects/defectives is equal to or greater than 4 or 5. The most sensitive attribute chart is the p
chart. The most sensitive and expensive chart is the X̄-R.
p-Charts - The p-chart is one of the most-used types of attribute charts. It shows the proportion
of defective items in successive samples of equal or varying size. Consider the proportion as the
number of defectives divided by the number in the sample. To develop the control limits for a p-chart, consider the case where we are inspecting a variable sample size and recording the number
of nonconforming items in each sample.
The p-chart is used when dealing with ratios, proportions, or percentages of conforming or
nonconforming parts in a given sample. A good example for a p-chart is the inspection of
products on a production line. They are either conforming or non-conforming. The probability
distribution used in this context is the binomial distribution with p for the nonconforming
proportion and q (which is equal to 1 − p) for the proportion of conforming items. Because the
products are only inspected once, the experiments are independent from one another. The first
step when creating a p-chart is to calculate the proportion of nonconformity for each sample as
p = m/b, where m represents the number of nonconforming items, b is the number of items in
the sample, and p is the proportion of nonconformity. The mean proportion is computed as

p̄ = Σpk/k

where k is the number of samples audited and pk is the kth proportion obtained. The control
limits of a p-chart are

UCL = p̄ + 3√(p̄(1 − p̄)/n) and LCL = p̄ − 3√(p̄(1 − p̄)/n)

where n is the sample size (with a varying sample size, n is replaced by the size of the individual
sample, so the limits vary with each sample).
The benefit of the p-chart is that the variations of the process change with the sizes of the
samples or the defects found on each sample.
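A minimal sketch in Python of the p-chart calculation with a varying sample size, using hypothetical inspection counts; p̄ is taken as the average of the sample proportions, as in the formula above, and the limits are recomputed for each sample size:

import math

defectives = [12, 15, 8, 11, 14, 9]             # m: nonconforming items per sample
sample_sizes = [200, 250, 180, 220, 240, 190]   # b: items inspected per sample

proportions = [m / b for m, b in zip(defectives, sample_sizes)]
p_bar = sum(proportions) / len(proportions)     # mean proportion = sum(p_k) / k

for p, b in zip(proportions, sample_sizes):
    half_width = 3 * math.sqrt(p_bar * (1 - p_bar) / b)
    ucl = p_bar + half_width
    lcl = max(0.0, p_bar - half_width)          # a proportion cannot fall below zero
    flag = "investigate" if not (lcl <= p <= ucl) else "in control"
    print(f"p={p:.4f}  LCL={lcl:.4f}  UCL={ucl:.4f}  {flag}")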
np-Charts - The np-chart, number of defective units, is related to the p-chart. The np-chart is a
control chart of the counts of nonconforming items (defectives) in successive samples of
constant size. The np-chart can be used in place of the p-chart to plot the counts of nonconforming items (defectives) when there is a constant sample size. In effect, using np-charts
involves converting from proportions to a plot of the actual counts.
The np-chart is one of the easiest to build. While the p-chart tracks the proportion of
nonconformities per sample, the np-chart plots the number of nonconforming items per sample.
The audit process of the samples follows a binomial distribution—in other words, the expected
outcome is “good” or “bad,” and therefore the mean number of successes is np. The control
limits for an np-chart are
C-Chart - The c-chart is based on the Poisson distribution and works with the count of individual
defects rather than numbers of defective units. Its formulas assume counting the number of
defects in the same area of opportunity. The c in the formulas is the number of defects found in
the defined inspection unit, and that is what is plotted on the chart.
The c-chart monitors the process variations due to the fluctuations of defects per item or group of
items. The c-chart is useful for the process engineer to know not just how many items are not
conforming but how many defects there are per item. Knowing how many defects there are on a
given part produced on a line might in some cases be as important as knowing how many parts
are defective. Here, non-conformance must be distinguished from defective items because there
can be several nonconformities on a single defective item.
The probability for a nonconformity to be found on an item in this case follows a Poisson
distribution. If the sample size does not change and the defects on the items are fairly easy to
count, the c-chart becomes an effective tool to monitor the quality of the production process. If c̄
is the average nonconformity count per sample, the UCL and the LCL will be given as

UCL = c̄ + 3√c̄ and LCL = c̄ − 3√c̄
U-Chart - It is also based on the Poisson distribution and works with the count of individual defects
rather than numbers of defective units. With a u-chart, the number of inspection units may vary.
The u-chart requires an additional calculation with each sample to determine the average number
of defects per inspection unit. The n in the formulas is the number of inspection units in the
sample.
The sample sizes can vary when a u-chart is being used to monitor the quality of the production
process, and the u-chart does not require any limit to the number of potential defects. Further, for
a p-chart or an np-chart the number of nonconformities cannot exceed the number of items on a
sample, but for a u-chart it is conceivable because what is being addressed is not the number of
defective items but the number of defects on the sample. The first step in creating a u-chart is to
calculate the number of defects per unit for each sample as u = c/n, where u represents the
average defects per unit for the sample, c is the total number of defects, and n is the sample size.
Once all the averages are determined, a distribution of the means is created and then the mean of
the distribution is computed as

ū = Σuk/k

where k is the number of samples. The control limits are determined based on ū and the sample
size n as

UCL = ū + 3√(ū/n) and LCL = ū − 3√(ū/n)

(with a varying sample size, n is the size of the individual sample, so the limits vary from sample
to sample).
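A minimal sketch in Python of the c-chart and u-chart limit calculations, using hypothetical defect counts; ū is computed as the mean of the per-sample defect rates, as in the formula above:

import math

# c-chart: defect counts from a constant inspection unit
defects = [4, 7, 3, 6, 5, 8, 4]
c_bar = sum(defects) / len(defects)
ucl_c = c_bar + 3 * math.sqrt(c_bar)
lcl_c = max(0.0, c_bar - 3 * math.sqrt(c_bar))   # a count cannot be negative
print(f"c-chart: CL={c_bar:.2f} UCL={ucl_c:.2f} LCL={lcl_c:.2f}")

# u-chart: defects per unit when the number of inspection units varies
counts = [12, 18, 9, 14]      # c: total defects in each sample
sizes = [5, 8, 4, 6]          # n: inspection units in each sample
rates = [c / n for c, n in zip(counts, sizes)]
u_bar = sum(rates) / len(rates)                  # u-bar = sum(u_k) / k

for u, n in zip(rates, sizes):
    half_width = 3 * math.sqrt(u_bar / n)        # limits vary with each sample size
    print(f"u={u:.3f}  LCL={max(0.0, u_bar - half_width):.3f}  UCL={u_bar + half_width:.3f}")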
Control Chart Analysis
Interpreting control charts is a learned behavior based upon increased process knowledge. No
shortcuts exist to becoming competent at the skill of interpreting control charts and it is most
certainly not a skill learned without practice. The distinction between common and special
causes is critical in statistical process control. For Shewhart and Deming, this distinction is the
distinction between a process surrounded by "noise" and one sending a "signal."
Improving the process is the central goal of using control charts. Control charts provide a "voice
of the process" that enables a Black Belt to identify special causes of variation and remove them,
thus allowing for a stable and more consistent process. A control chart becomes a useful tool
after initial development. After establishing and basing the control limits on a stable, in-control
process, charts put in the work area allow operational personnel to monitor the process by
collecting data and plotting points on a regular basis. Personnel can act upon the signals from the
chart when conditions indicate the process is moving or has gone out of control.
Basic rules for control chart interpretation are
Specials are any points above the UCL or below the LCL.
A run violation is seven or more consecutive points on one side of the centerline.
A 1-in-20 violation is more than one point in twenty consecutive points close to control
limits.
A trend violation is any upward or downward movement of 5 or more consecutive points or
drifts of 7 or more points.
Process Stability - Before taking appropriate action, an SSBB must identify the state of the
process. A process can occupy one of four states
Ideal state - A predictable process fully meeting the requirements.
Threshold state - A predictable process that is not always meeting the requirements.
Brink of chaos - An unpredictable process currently meeting the requirements.
State of chaos - An unpredictable process that is currently not meeting the requirements.
Out-of-control - If a process is “out-of-control,” then special causes of variation are present in
either the average chart or range chart, or both. These special causes must be found and
eliminated in order to achieve an in-control process. A process out-of-control is detected on a
control chart either by having any points outside the control limits or by unnatural patterns of
variability.
Usually the following conditions are based on Western Electric Rules for out of control, though
the lists of conditions may vary depending on the resource used.
1 point more than 3σ from the center line (either side)
9 points in a row on the same side of the center line
6 points in a row, all increasing or decreasing
14 points in a row, alternating up and down
2 out of 3 points more than 2σ from the center line (same side)
4 out of 5 points more than 1σ from the center line (same side)
15 points in a row within 1σ from the center line (either side)
8 points in a row more than 1σ from the center line (either side)
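A minimal sketch in Python of how two of these conditions might be checked automatically; the data are hypothetical plot points expressed as sigma distances from the center line:

def beyond_3_sigma(z):
    """1 point more than 3 sigma from the center line (either side)."""
    return [i for i, v in enumerate(z) if abs(v) > 3]

def nine_same_side(z):
    """9 points in a row on the same side of the center line."""
    hits, run, prev_side = [], 0, 0
    for i, v in enumerate(z):
        side = 1 if v > 0 else (-1 if v < 0 else 0)
        run = run + 1 if side != 0 and side == prev_side else (1 if side != 0 else 0)
        prev_side = side
        if run >= 9:
            hits.append(i)
    return hits

z = [0.2, -0.5, 1.1, 3.4, 0.3, 0.8, 1.0, 1.2, 1.5, 1.9, 2.0, 2.2, 2.4]
print(beyond_3_sigma(z))   # flags the point at index 3 (3.4 sigma)
print(nine_same_side(z))   # flags the long run above the center line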
The Pre-control Technique - Pre-control was developed by a group of consultants (including
Dorian Shainin) in an attempt to replace the control chart. Pre-control is most successful with
processes which are inherently stable and not subject to rapid process drifts once they are set up.
It can be shown that 86% of the parts will be inside the P-C lines with 7% in each of the outer
sections, if the process is normally distributed and Cpk= 1.
The chance that two parts in a row will fall outside either P-C line is 1/7 times 1/7, or 1/49. This
means that only once in every 49 pieces can we expect to get two pieces in a row outside the P-C
lines just due to chance.
Pre-control is a simple algorithm based on tolerances which is used for controlling a process.
Pre-control is a method of detecting and preventing failures and assumes the process is
producing a measurable product, with varying characteristics according to some distribution.
Pre-control lines are placed halfway between the target and each specification limit. Each zone
between the lines has colors resembling a traffic signal with green (acceptable), yellow (alert),
and red (unacceptable).
A variant of pre-control utilizes process capability limits instead of specification limits to set the
green, yellow, and red zones and is therefore considered more robust than the traditional use of
pre-control charts. In this variant, the limits of each zone are calculated based on the distribution
of the characteristic measured, not on the tolerances. Units that fall in the yellow or red zones trigger an
alarm before defects are produced. Pre-control rules are as follows
Rule 1 - If two parts are in the green zone, take no action – continue to run.
Rule 2 - If the first part is in the green or yellow zones, then check the second part. If second
part is in the green zone, then continue to run. If first part is in the yellow zone and the
second part is also in the yellow zone on the same side, adjust the process. If first part is in
the yellow zone and the second part is also in the yellow zone on the opposite side, stop and
investigate the process.
Rule 3 - If any part is in the red zone, then stop. Investigate, adjust, or reset the process. Requalify the process and begin again with Rule 1.
It has the advantages of being easy to implement and interpret, of being usable in initial setup
operations to determine if the product is centered between the tolerances, of making shifts in
process centering or increases in process spread easy to detect, and of serving as a setup plan for
short production runs. But it has the disadvantages of lacking information about how to reduce
variability or how to return the process to control, of being too limited for use with processes
having a capability ratio greater than 1.0, and of the small sample size limiting the ability of the
chart to detect moderate to large shifts.
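A minimal sketch in Python of the classical tolerance-based zone classification described above, with the P-C lines halfway between the target and each specification limit (the specification values are hypothetical):

def precontrol_zone(x, lsl, usl):
    target = (lsl + usl) / 2
    lower_pc = (lsl + target) / 2   # lower pre-control line
    upper_pc = (target + usl) / 2   # upper pre-control line
    if x < lsl or x > usl:
        return "red"                # outside specification: stop and investigate
    if lower_pc <= x <= upper_pc:
        return "green"              # middle half of the tolerance: continue to run
    return "yellow"                 # between a P-C line and a spec limit: alert

# Example: specification 10.0 +/- 0.4
for reading in (10.05, 10.25, 9.55):
    print(reading, precontrol_zone(reading, lsl=9.6, usl=10.4))

The two-part decision rules above would then be applied to the resulting sequence of zone colors.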
Runs Test for Randomness - A run is a sequence of data that exhibit the same characteristic.
Time sequence analysis can apply to both variable and attribute data. As an example, results of
surveys of individuals who prefer Diet Pepsi or Diet Coca Cola
Test I PPPPPPPPPCCCCCCCCC
Test II PCPCPCPCPCPCPCPCPC
In both examples, eighteen samples were taken. In Test I, there were only 2 runs. In Test II, there
were 18 runs. Both examples suggest non-random behavior. To perform a runs test
Determine the values of n1 and n2 (either the totals of the two attributes or the readings above
and below the center line on a run or control chart).
Determine the number of runs (R).
Consult a critical value table or calculate a test statistic.
Consult the Critical Value Table for the expected numbers of runs. The expected number of runs
can be approximated by adding the smallest and largest values together and dividing by two.
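A minimal sketch in Python of a runs test using the standard normal approximation, where the expected number of runs is E(R) = 1 + 2·n1·n2/(n1 + n2); the data are the two test sequences from the example above:

import math

def runs_test(seq):
    a, b = sorted(set(seq))                     # the two categories
    n1, n2 = seq.count(a), seq.count(b)
    runs = 1 + sum(1 for x, y in zip(seq, seq[1:]) if x != y)
    expected = 1 + 2 * n1 * n2 / (n1 + n2)
    variance = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / (
        (n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - expected) / math.sqrt(variance)
    return runs, expected, z

print(runs_test("PPPPPPPPPCCCCCCCCC"))  # Test I: 2 runs, z strongly negative
print(runs_test("PCPCPCPCPCPCPCPCPC"))  # Test II: 18 runs, z strongly positive

A |z| value beyond about 2 would lead one to reject randomness in either direction.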
Short-Run SPC - Short-run or low-volume production is common in manufacturing systems and
includes manufacturing processes that produce built-to-order products or quick turnaround
production. The short-run control chart can also be used in other industries such as general
services and healthcare when data are collected infrequently. These processes often are so short
that not enough data can be collected to construct standard control charts.
Statistical process control techniques have been developed to accommodate short-run production
for both variables data and attributes data. Examples of control charts for both situations are
presented. If possible, collect approximately 20 samples before the control charts for short
production runs are constructed. In the examples presented in this subtopic, ten samples
will be used for illustration purposes.
Short run charting may be desirable when the production lot size is extremely small (10-20
pieces) or when the sample size, under typical operating conditions, is small. Two limited-data
charts may be used: X-MR charts and MX̄-MR charts.
The emphasis has been on short runs and multiple variables per chart. Consider a part which has
four key dimensions. Each dimension has a different target but similar expected variances. The
plot points for each variable are coded by subtracting the target value (coded value =
measurement − target). The centerline for each variable is then calculated as the average of its
coded values, so that all variables can be plotted around a common zero line.
Exponentially Weighted Moving Average (EWMA) - The exponentially weighted moving
average (EWMA) is a statistic for monitoring a process by averaging the data in a way that gives
less and less weight to data as they are further removed in time. By the choice of a weighting
factor, λ, the EWMA control procedure can be made sensitive to a small or gradual drift in the
process. The statistic that is calculated is

EWMAt = λYt + (1 − λ)·EWMAt-1 for t = 1, 2, ..., n

where EWMA0 is the mean of historical data (target), Yt is the observation at time t, n is the
number of observations to be monitored including EWMA0, and 0 < λ ≤ 1 is a constant that
determines the depth of memory of the EWMA. It is a variable control chart where each new
result is averaged with the previous average value using an experimentally determined weighting
factor, λ (lambda).
The parameter λ determines the rate at which "older" data enters into the calculation of the
EWMA statistic. A large value of λ gives more weight to recent data and a small value of λ gives
more weight to older data. The value of λ is usually set between 0.2 and 0.3, although this choice
is somewhat arbitrary. The estimated variance of the EWMA statistic is approximately

s²ewma = s²·(λ/(2 − λ))

when t is not small, and where s is the standard deviation calculated from the historical data. The
center line for the control chart is the target value or EWMA0. The control limits are

UCL = EWMA0 + k·s·√(λ/(2 − λ)) and LCL = EWMA0 − k·s·√(λ/(2 − λ))

where the factor k is either set equal to 3 or chosen using the Lucas and Saccucci tables. The
data are assumed to be independent, and these tables also assume a normal population.
Usually only the averages are plotted and the range is omitted; the action signal is a single point
out of limits. The chart is also known as the Geometric Moving Average (GMA) chart and is used
extensively in time-series modeling and in forecasting. It allows the user to detect smaller shifts
in the process than with traditional control charts and is ideal for use with individual observations.
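A minimal sketch in Python of the EWMA recursion and control limits above; the target, historical standard deviation, λ and k values are hypothetical:

import math

target, s, lam, k = 50.0, 2.0, 0.3, 3.0     # EWMA0, historical sigma, lambda, k

half_width = k * s * math.sqrt(lam / (2 - lam))   # steady-state limit half-width
ucl, lcl = target + half_width, target - half_width

data = [52.0, 47.0, 53.0, 49.3, 50.1, 47.0, 51.0, 50.1, 51.2, 50.5]

ewma = target                               # EWMA0 = mean of historical data
for t, y in enumerate(data, start=1):
    ewma = lam * y + (1 - lam) * ewma       # EWMA_t = lambda*Y_t + (1-lambda)*EWMA_{t-1}
    state = "OUT" if not (lcl <= ewma <= ucl) else "in"
    print(f"t={t:2d} y={y:5.1f} EWMA={ewma:6.3f} limits=[{lcl:.2f}, {ucl:.2f}] {state}")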
CUSUM Charts - The cumulative sum control chart (CUSUM) is used with variable data and
calculates the cumulative sum of the deviations from target to detect shifts in the level of the
measurement. It may be suitable when it is necessary to detect small process shifts faster than
with a comparable Shewhart control chart.
The chart is effective with samples of size n= 1 where rational subgroups are frequently of size
one. Examples of utilization are in the chemical and process industries and in discrete parts
manufacturing. The CUSUM chart can be graphical (V-mask) or tabular (algorithmic) and unlike
standard charts, all previous measurements for CUSUM charts are included in the calculation for
the latest plot. But, establishing and maintaining the CUSUM chart is complicated.
V-mask - A V-mask resembles a sideways V. The chart is used to determine whether each
plotted point falls within the boundaries of the V-mask. Points falling outside are considered to
signal a shift in the process mean. Each time a point is plotted, the V-mask is shifted to the right.
The geometry associated with the construction of the V-mask is based on a combination of
specified and computed values.
The behavior of the V-Mask is determined by the distance k (which is the slope of the lower
arm) and the rise distance h. The team could also specify d and the vertex angle (or, as is more
common in the literature, q = 1/2 the vertex angle). For an alpha and beta design approach, we
must specify
α, the probability of concluding that a shift in the process has occurred, when in fact it did
not.
β, the probability of not detecting that a shift in the process mean has, in fact, occurred.
δ(delta), the detection level for a shift in the process mean, expressed as a multiple of the
standard deviation of the data points.
These charts have been shown to be more efficient in detecting small shifts in the mean of a
process than Shewhart charts. They are better at detecting shifts of 2 sigma or less in the mean.
To create a CuSum chart, collect m sample groups, each of size n, and compute the mean x̄ of
each sample. Determine Sm or S'm from the following equations

Sm = Σ(x̄i − µ0), summed over i = 1 to m
S'm = Sm/σx̄ = (1/σx̄)·Σ(x̄i − µ0)

where µ0 is the estimate of the in-control mean and σx̄ is the known (or estimated) standard
deviation of the sample means. The CuSum control chart is formed by plotting Sm or S'm as a
function of m. If the process remains in control, centered at µ0, the CuSum plot will show
variation in a random pattern centered about zero.
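A minimal sketch in Python of the cumulative sum Sm; the in-control mean and subgroup means are hypothetical, with a small upward drift built in so the plot rises steadily:

mu0 = 5.000                                        # estimate of the in-control mean
xbars = [5.01, 4.98, 5.02, 5.05, 5.08, 5.11, 5.09] # subgroup means, in time order

cusum, s = [], 0.0
for xb in xbars:
    s += xb - mu0              # S_m = cumulative sum of deviations from mu0
    cusum.append(round(s, 3))

print(cusum)  # a persistent shift shows as a steadily rising (or falling) plot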
A visual procedure proposed by Barnard, known as the V-Mask, may be used to determine
whether a process is out of control. A V-Mask is an overlay V shape that is superimposed on the
graph of the cumulative sums. As long as all the previous points lie between the sides of the V,
the process is in control.
For example, a process has an estimated mean of 5.000 with h set at 2 and k at 0.5. As h and k
are set to smaller values, the V-Mask becomes sensitive to smaller changes in the process average.
Consider 16 data points, each of which is the average of 4 samples (m=16, n=4). The CuSum
control chart with these 16 data groups shows the process to be in control. If data collection is
continued until there are 20 data points (m=20, n=4), the CuSum control chart shows the process
shifted upward, as indicated by data points 16, 17 and 18 below the lower arm of the V-Mask.
8.2. Other Control Tools
Various other control tools can also be applied to not only supplement SPC but also enhance the
effectiveness of six sigma.
TPM
Total Productive Maintenance (TPM) is productive maintenance undertaken by every employee
through small group activities. TPM is equipment maintenance performed across the
organization.
TPM maximizes the productivity of equipment by predicting and preventing unplanned
downtime. TPM is one of the critical building blocks in the lean continuous improvement
process, which can increase a machine’s capacity, reduce maintenance costs, eliminate overtime
shifts drastically and increase productivity and profits for the company. These benefits of TPM
enable lower inventory levels, as there is no need to carry stock to cover unplanned downtime.
It blurs the distinction between the roles of production and maintenance by focusing on empowering operators to help
maintain their equipment. The implementation of a TPM program creates a shared responsibility
for equipment encouraging greater involvement by plant floor workers and can be very effective
in improving productivity by increasing up time, reducing cycle times and eliminating defects.
Nine Essentials of TPM
Self maintained work place
Elimination of the 6 big losses
Zero Breakdowns
Zero Defects
Optimal life and availability of tools
Self-improvement
Short production-development time and low machine life cost
Productivity in indirect departments
Zero Accidents
TPM focuses on employees to achieve maintenance-free operations, which is achieved by
implementing the following three practices
Development of TPM pillars
Prevention of big losses
Measuring and monitoring OEE
The traditional approach to TPM was developed in the 1960s and consists of 5S as a foundation
and eight supporting activities, also called pillars.
Eight Pillars of TPM - The eight pillars of TPM consist of
Autonomous Maintenance
Continuous Improvement
Planned Maintenance
Quality Maintenance
Materials planning, design and equipment control
Education & Training
Office TPM
Safety, Hygiene and Environment Control
5S, The Foundation of TPM - TPM starts with 5S. Problems cannot be clearly seen when the
work place is unorganized. Cleaning and organizing the workplace helps the team to uncover
problems. Making problems visible is the first step of improvement.
SEIRI - Sort - Seiri means sorting and organizing the items as critical, important, frequently
used items, or items that are not currently needed. Unwanted items can be salvaged. Critical
items should be kept for use nearby and items that are not needed in near future should be
stored some place else.
SEITON - Organize - The concept here is that "A place for everything, and everything in its
place". After usage items should be stored in their designated storage location. To identify
items easily, name plates and colored tags can be used. Vertical racks can be used for
organization.
SEISO - Shine - Seiso involves cleaning the workplace and ensuring equipment is free of
burrs, loose wires, grease, oil, waste, scrap, etc.
SEIKETSU - Standardization - Associates decide together on standards for keeping the
workplace, machines, and pathways neat and clean. These standards are implemented for
whole organization and are regularly checked.
SHITSUKE - Self Discipline - Accepting 5S as a way of life forms self-discipline among the
associates. This includes wearing badges, following work procedures, punctuality, dedication
to the organization, etc.
1. Jishu Hozen Pillar (Autonomous Maintenance) - Jishu Hozen, which means
autonomous or self-maintenance, promotes development of production operators to be able to
take care of small maintenance tasks, such as cleaning, inspecting, and lubricating their
equipment, thus freeing the maintenance associates to spend time on more value-added activities
and technical repairs. The operators are responsible for upkeep of their equipment to prevent it
from deteriorating. Jishu Hozen (JH) has been shown to reduce oil consumption by 50% and
process time by 50%.
Goals of Jishu Hozen
Uninterrupted operation of equipment
Flexible operators who can operate and maintain other equipment
Elimination of defects at the source through active employee participation
Stepwise implementation of JH activities
The effects of autonomous maintenance include
Equipment condition is known at all times.
Unexpected breakdowns are minimized.
Corrosion is prevented, wear is delayed, and machine life is extended.
Judgment of machine capability is improved.
Parts costs are reduced.
Production operators are expected to perform the TPM activities of cleaning, lubrication, and
inspection on a daily basis, following the instructions given by the supervisor.
2. Kobetsu Kaizen or Continuous Improvement - "Kai" means change, and "Zen" means good
(for the better). Kaizen is the opposite of big spectacular innovations. Kaizen is small
improvements carried out on a continual basis and involves all people in the organization.
Kaizen requires no or little investment. The principle behind Kaizen is that a large number of
small improvements are more effective in an organizational environment than a few large-scale
improvements. Systematically using various Kaizen tools in a detailed and thorough method
eliminates losses. The goal is to achieve and sustain zero losses with respect to minor stops,
measurement and adjustments, defects, and unavoidable downtimes.
Kobetsu Kaizen uses a special event approach that focuses on improvements associated with
machines and is linked to the application of TPM. Kobetsu Kaizen begins with an up-front
planning activity that focuses its application where it will have the greatest effect within a
business and defines a project that analyses machine operations information, uncovers waste,
uses a form of root cause analysis (e.g., the 5 Why approach) to discover the causes of waste,
applies tools to remove waste, and measures results.
The objective of TPM is maximization of equipment effectiveness. TPM maximizes machine
utilization, not merely machine availability. As one of the pillars of TPM activities, Kaizen
activities promote efficient equipment and proper utilization of manpower, materials, and energy
by eliminating major losses. Examples of Kobetsu Kaizen include
Relocating gauges and grease fittings for easier access.
Making shields that minimize contamination.
Centralizing lubrication points.
Making debris collection accessible.
3. Planned Maintenance - The goal of planned maintenance is to have trouble-free machines and
equipment that produce defect-free products for total customer satisfaction. Planned
Maintenance achieves and sustains availability of machines at an optimum maintenance cost,
reduces spares inventory, and improves reliability and maintainability of machines.
With Planned Maintenance the associates’ efforts evolve from a reactive approach to a proactive
method and trained maintenance staff helps train the operators to better maintain their
equipment.
Steps in Planned Maintenance (PM) include
Evaluate and record present equipment status.
Restore deterioration and improve weaknesses.
Build information management system.
Prepare time-based data system, select equipment, parts, and team, and make plan.
Prepare predictive maintenance system by introducing equipment diagnostic techniques.
Evaluate planned maintenance.
4. Hinshitsu Hozen or Quality Maintenance (QM) - Quality Maintenance (QM) targets
customer satisfaction through defect free manufacturing of the highest quality products. The
focus is on eliminating non-conformances in a systematic manner. Through QM we gain an
understanding of what parts of the equipment affect product quality, eliminate quality concerns,
and then move to potential quality concerns. The transition is from reactive to proactive (From
Quality Control to Quality Assurance).
QM activities control equipment conditions to prevent quality defects, based on the concept of
maintaining perfect equipment to maintain perfect quality of products. These conditions are
checked and measured in time series to verify that measured values are within standard values to
prevent defects. The transition of measured values is trended to predict possibilities of defects
occurring and to take countermeasures before defects occur.
QM activities support Quality Assurance through defect-free conditions and control of
equipment. The focus is on effective implementation of operator quality assurance and detection
and segregation of defects at the source. Opportunities for designing Poka-Yoke (foolproof
system) are investigated and implemented as practicable.
5. Materials planning, design and equipment control - It directs practical knowledge and
understanding of manufacturing equipment gained through TPM towards improving the design
of new equipment. The new equipment reaches planned performance levels much faster due to
fewer startup issues. Maintenance is simpler and more robust due to practical review and
employee involvement prior to installation.
6. Education & Training - The goal of training is to have multi-skilled revitalized employees
whose morale is high and who are eager to come to work and perform all required functions
effectively and independently. The focus is on achieving and sustaining zero losses due to lack of
knowledge / skills / techniques. Ideally, we would create a factory full of experts.
Operators must upgrade their skills through education and training. It is not sufficient for
operators to learn how to do something; they should also learn why they are doing it and when it
should be done. Through experience operators gain “know-how” to address a specific problem,
but they do so without knowing the root cause of the problem and when and why they should be
doing it. Hence it becomes necessary to train operators on knowing why. This will enable the
operators to maintain their own machines, understand why failures occur, and suggest ways of
avoiding the failures occurring again.
7. Office TPM Pillar - Office TPM should be started after activating four other pillars of TPM
(Jishu Hozen, Kobetsu Kaizen, Quality Maintenance, and Planned Maintenance). Office TPM
must be followed to improve productivity, efficiency in the administrative functions, and identify
and eliminate losses. This includes analyzing processes and procedures towards increased office
automation. Office TPM addresses twelve major losses
Processing loss
Cost loss in areas like procurement, accounts, marketing and sales
Communication loss
Idle loss
Set-up loss
Accuracy loss
Office equipment breakdown
Communication channel breakdown, telephone and fax lines
Time spent on retrieval of information
Unavailability of correct on-line stock status
Customer complaints due to logistics
Expenses on emergency dispatches/purchases
Improving the office efficiency by eliminating the above-listed losses helps in achieving Total
Productive Maintenance.
8. Safety, Health and Environment - The target of the Safety, Health & Environment is
Zero accidents,
Zero health damage, and
Zero fires.
The focus is on creating a safe workplace and surrounding areas that are not damaged by our
process or procedures. Autonomous Maintenance is daily preventive maintenance (cleaning,
inspection, lubrication and re-tightening) performed by the equipment operator. This pillar plays
an active role in each of the other pillars on a regular basis. The major categories of maintenance
include
Breakdown Maintenance (BM) is when we wait for equipment to fail and then repair it. For
example, some electronic equipment is simply replaced when it fails.
Preventive Maintenance is periodic maintenance that retains the condition of equipment and
prevents failure through the prevention of deterioration, periodic inspection, and equipment
condition diagnosis. PM includes daily cleaning, inspection, lubrication and tightening.
Preventive Maintenance is further divided into Periodic Maintenance and Predictive
Maintenance. Periodic Maintenance is time-based, which involves periodically inspecting,
servicing, and cleaning equipment and replacing parts to prevent problems.
Predictive Maintenance is condition-based, which involves predicting service life of
important parts based upon inspection or diagnosis, to use the parts to the limit of their
service life.
Corrective Maintenance improves equipment and its components so that preventive
maintenance can be performed reliably. Equipment with a design weakness is redesigned
with corrective maintenance to improve reliability or maintainability.
Maintenance Prevention deals with improving the design of new equipment. Current machine
data (information leading to failure prevention, easier maintenance, prevention of defects,
safety, and ease of manufacturing) are studied and designs are incorporated in new
equipment.
OEE (Overall Equipment Effectiveness) - It is a metric that identifies the percentage of
planned production time that is truly productive. It was developed to support TPM initiatives by
accurately tracking progress towards achieving “perfect production”.
An OEE score of 100% is perfect production.
An OEE score of 85% is world class for discrete manufacturers.
An OEE score of 60% is fairly typical for discrete manufacturers.
An OEE score of 40% is not uncommon for manufacturers without TPM and/or lean
programs.
OEE consists of three underlying components, each of which maps to one of the TPM goals set
out at the beginning of this topic, and each of which takes into account a different type of
productivity loss.
Component: Availability
TPM Goal: No Breakdowns
Type of Productivity Loss: Availability takes into account Down Time Loss, which includes all
events that stop planned production for an appreciable length of time (typically several minutes
or longer).

Component: Performance
TPM Goal: No Small Stops or Slow Running
Type of Productivity Loss: Performance takes into account Speed Loss, which includes all
factors that cause production to operate at less than the maximum possible speed when running.

Component: Quality
TPM Goal: No Defects
Type of Productivity Loss: Quality takes into account Quality Loss, which factors out
manufactured pieces that do not meet quality standards, including pieces that require rework.

Component: OEE
TPM Goal: Perfect Production
Type of Productivity Loss: OEE takes into account all losses (Down Time Loss, Speed Loss, and
Quality Loss), resulting in a measure of truly productive manufacturing time.
OEE is tightly coupled to the TPM goals of No Breakdowns (measured by Availability), No
Small Stops or Slow Running (measured by Performance), and No Defects (measured by
Quality). It is extremely important to measure OEE in order to expose and quantify productivity
losses, and in order to measure and track improvements resulting from TPM initiatives.
OEE calculation is based on the three OEE Factors: Availability, Performance, and Quality.
Here’s how each of these factors is calculated.
Availability - Availability takes into account Down Time Loss, and is calculated as:
Availability = Operating Time / Planned Production Time
Performance - Performance takes into account Speed Loss, and is calculated as:
Performance = Ideal Cycle Time / (Operating Time / Total Pieces)
Ideal Cycle Time is the minimum cycle time that the process can be expected to achieve in optimal
circumstances. It is sometimes called Design Cycle Time, Theoretical Cycle Time or Nameplate
Capacity. Since Run Rate is the reciprocal of Cycle Time, Performance can also be calculated as
Performance = (Total Pieces / Operating Time) / Ideal Run Rate
Performance is capped at 100%, to ensure that if an error is made in specifying the Ideal Cycle
Time or Ideal Run Rate the effect on OEE will be limited.
Quality - Quality takes into account Quality Loss, and is calculated as:
Quality = Good Pieces / Total Pieces
OEE - OEE takes into account all three OEE Factors, and is calculated as
OEE = Availability x Performance x Quality
It is very important to recognize that improving OEE is not the only objective. As an example,
the following data for two production shifts yields a higher OEE for the second shift than for the
first; very few companies, however, would want to trade a 5.0% increase in Availability for a
3.5% decline in Quality.

OEE Factor     Shift 1    Shift 2
Availability   90.0%      95.0%
Performance    95.0%      95.0%
Quality        99.5%      96.0%
OEE            85.1%      86.6%
OEE provides the user with three numbers, which are all useful individually as the situation
changes from day to day, and it helps the user visualize performance in simple terms - a very
practical simplification. An example OEE calculation is listed below for better illustration.
OEE Example - The table below contains hypothetical shift data, to be used for a complete
OEE calculation, starting with the calculation of the OEE Factors of Availability, Performance,
and Quality. Note that the same units of measurement (in this case minutes and pieces) are
consistently used throughout the calculations.
Item             Data
Shift Length     8 hours = 480 min.
Short Breaks     2 @ 15 min. = 30 min.
Meal Break       1 @ 30 min. = 30 min.
Down Time        47 minutes
Ideal Run Rate   60 pieces per minute
Total Pieces     19,271 pieces
Reject Pieces    423 pieces
Planned Production Time = Shift Length – Breaks = 480 – 60 = 420 minutes
Operating Time = Planned Production Time - Down Time = 420 - 47 = 373 minutes
Good Pieces = Total Pieces - Reject Pieces = 19,271 – 423 = 18,848 pieces
Availability = Operating Time / Planned Production Time = 373 minutes / 420 minutes
= 0.8881 or 88.81%
Performance = (Total Pieces / Operating Time) / Ideal Run Rate
= (19,271 pieces / 373 minutes) / 60 pieces per minute
= 0.8611 or 86.11%
Quality = Good Pieces / Total Pieces = 18,848 / 19,271 pieces = 0.9780 or 97.80%
OEE = Availability x Performance x Quality = 0.8881 x 0.8611 x 0.9780 = 0.7479 or 74.79%
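For readers who want to script the calculation, here is a minimal Python sketch (the function and variable names are invented for illustration) that reproduces the worked example above, including the 100% cap on Performance described earlier:

    def oee(shift_length_min, breaks_min, down_time_min,
            ideal_run_rate_ppm, total_pieces, reject_pieces):
        """Return (availability, performance, quality, oee) as fractions."""
        planned_production_time = shift_length_min - breaks_min   # 480 - 60 = 420
        operating_time = planned_production_time - down_time_min  # 420 - 47 = 373
        good_pieces = total_pieces - reject_pieces                # 19,271 - 423 = 18,848

        availability = operating_time / planned_production_time
        # Performance is capped at 100% so a mis-specified Ideal Run Rate
        # has only a limited effect on OEE.
        performance = min(1.0, (total_pieces / operating_time) / ideal_run_rate_ppm)
        quality = good_pieces / total_pieces
        return availability, performance, quality, availability * performance * quality

    a, p, q, overall = oee(480, 60, 47, 60, 19271, 423)
    print(f"Availability={a:.2%} Performance={p:.2%} Quality={q:.2%} OEE={overall:.2%}")
    # Availability=88.81% Performance=86.11% Quality=97.80% OEE=74.79%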
The Six Big Losses - OEE loss categories (Down Time Loss, Speed Loss, and Quality Loss) can
be further broken down into what is commonly referred to as the Six Big Losses – the most
common causes of lost productivity in manufacturing. The Six Big Losses are extremely
important because they are nearly universal in application for discrete manufacturing, and they
provide a great starting framework for thinking about, identifying, and attacking waste (i.e.
productivity loss).
The Six Big Losses, their OEE loss categories, typical examples, and comments are:

Breakdowns (Down Time Loss) - Examples: tooling failure, unplanned maintenance, overheated bearing, motor failure. There is flexibility on where to set the threshold between a Breakdown (Down Time Loss) and a Small Stop (Speed Loss).

Setup and Adjustments (Down Time Loss) - Examples: setup/changeover, material shortage, operator shortage, major adjustment, warm-up time. This loss is often addressed through setup time reduction programs such as SMED (Single-Minute Exchange of Die).

Small Stops (Speed Loss) - Examples: component jam, minor adjustment, sensor blocked, delivery blocked, cleaning/checking. Typically only includes stops that are less than five minutes and that do not require maintenance personnel.

Slow Running (Speed Loss) - Examples: incorrect setting, equipment wear, alignment problem. Covers anything that keeps the equipment from running at its theoretical maximum speed.

Startup Defects (Quality Loss) - Examples: scrap, rework. Rejects during warm-up, startup or other early production.

Production Defects (Quality Loss) - Examples: scrap, rework. Rejects during steady-state production.
Visual Factory
It is a combination of signs, charts and other visual representations of information that enable the
quick dissemination of data within a lean manufacturing process. The visual factory attempts to
reduce the time and resources required to communicate the same information verbally or in
written form, as both are viewed as a "waste" within the framework of a lean manufacturing
process.
It is the practice of communicating messages visually, to manage work, understand systems or
follow directions. Most of us have some visual clues in our workplaces. They may include
notices with instructions, signs pointing to other places, or pictures of products or services.
Visual factory is a clear and simple way to organize and present information of this kind.
Visual factory is the concept of making a workplace more effective by making its current condition obvious at a glance. Since the dawn of history, people have used visual signs and signals to simplify and speed up communication. Simple signals let us know something needs attention, support the identification and elimination of waste, and ensure no problems are hidden. When users can quickly see what is going on, they don't waste time and energy trying to find out what's happening.
Benefits – The various benefits include
It makes work standards quicker and easier for all employees to understand, so they can follow them.
It allows time savings to be built into the work.
It helps to eliminate the wasteful motion involved in searching for information and objects.
It helps to directly observe the work flow.
It allows waste and problems to be identified and fixed as they occur, before they become an issue.
It helps to motivate team members by clearly communicating key performance targets.
It builds participation through shared information.
Improving communication of key information
Providing everyone in the team with the same picture
Fostering collaboration, promoting teamwork and improving morale
Providing a forum where all staff are able to raise any issues
Helping the team identify and solve problems
Measuring progress, identifying trends and analyzing performance
Focusing on and establishing goals for continuous improvement
Visual Factory Tools - Visual factory evolved in factories, but its principles apply equally in
any setting, from offices to call centers. Ask yourself this: could someone with a basic idea of what the organization does walk into the building or office and understand what the process is? Would they be able to see how work passes through the process, or how people pass through the service? Do we make it easier for staff to perform by creating a visual workplace?
From signs, to painted aisles, to dial indicators on equipment, these basic applications of visual
factory exist in most operating or administrative environment. The key is to find creative ways to
apply visual factory to reduce waste in activities, connections, and flows. Some common visual
factory tools include color coding, pictures/graphics, kanban cards, colored lines, signage,
labeling, control boards, area information boards, gages, dials, etc.
The most popular techniques for visual factory are
Using Primary Visual Displays
Having Stand-up Meetings
Seeking continuous performance improvement by measuring, monitoring and reviewing team
performance
Together, these three actions provide a foundation upon which teams can begin to continuously
improve.
8.3. Maintain Control
Maintaining control involves re-assessing the effectiveness of the measurement system and creating
a control plan for maintaining the precision of measurement systems and processes.
Measurement system re-analysis
It is an experimental and mathematical method of determining how much the variation within the
measurement process contributes to overall process variability. There are five parameters to
investigate in it which are bias, linearity, stability, repeatability and reproducibility. According to
Automotive Industry Action Group (AIAG-2002), a general rule of thumb for measurement
system acceptability is
Under 10 percent error is acceptable.
10 percent to 30 percent error suggests that the system is acceptable depending on the
importance of application, cost of measurement device, cost of repair, and other factors.
Over 30 percent error is considered unacceptable, and you should improve the measurement
system.
AIAG also states that the number of distinct categories the measurement system divides a process into should be greater than or equal to 5. In addition to percent error and the number of
distinct categories, you should also review graphical analyses over time to decide on the
acceptability of a measurement system.
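As a rough sketch of this rule of thumb in code (the function name and inputs are invented for illustration; percent_error stands for the study's percent error, ndc for its number of distinct categories):

    def msa_acceptability(percent_error, ndc):
        """Classify a measurement system per the AIAG rule of thumb."""
        if ndc < 5:
            return "unacceptable: fewer than 5 distinct categories"
        if percent_error < 10:
            return "acceptable"
        if percent_error <= 30:
            return "conditionally acceptable (depends on application, cost, etc.)"
        return "unacceptable: improve the measurement system"

    print(msa_acceptability(percent_error=12.5, ndc=6))
    # conditionally acceptable (depends on application, cost, etc.)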
Any measurement process for a system typically involves both the measurement precision and the measurement accuracy of the system variables, subject to the constraints of the system. Statistically analyzing a system requires determining the variation from the mean (central) location, which is imperative for analyzing measurement accuracy while taking into consideration the factors of bias, stability and linearity. These parameters of MSA (Measurement Systems Analysis) can be described as
Bias refers to the probability that certain factors are present in a system which can influence deviation from the standards in the system. Bias can lead to a sample of data which on analysis appears to be different from the actual or anticipated data set. In order to measure process measurement bias, a determinate measurement requires a process called calibration, which is of a higher level than measuring the data average. For an indeterminate measurement process, owing to constraints, the data average values are normally compared with the standard values.
Stability refers to processes which are normally free from special cause variations. Analyzing a system for stability typically involves standard statistical processes such as SPC (Statistical Process Control), scatter plots, ANOVA techniques and other standard deviation measurement tools. Determining stability standards in a system requires data sampled to cover a wide range of possible variation factors, plus intensive piecemeal statistical tests covering variations in human resources, tools, parts, time, space and location factors.
Linearity refers to different statistical results from measurements when subjected to different metric spaces. Linearity in a system is determined using higher levels of calibration in measurement standards, which are often guided by inferences drawn from the various interaction factors influencing a system. For instance, nonlinearity in a system may result from equipment (or tools) not being calibrated across the various levels of the operating range, from poor system design, or from any other system constraint.
Control plan
A Control Plan is the key to sustaining the gains from a Six Sigma project. A Control Plan exists to ensure that we consistently operate our processes such that the product always meets customer requirements, and it ties together the elements of Six Sigma improvement activity. It allows Champions and other stakeholders to have confidence that the process improvements made will be robust. The Control Plan is a guide for the Process Owner to assist in tracking and correcting the performance of the KPIVs and KPOVs.
It is used to develop a detailed but simple document to clarify the details around controlling the key inputs and key outputs for which all the improvement efforts have been implemented. Once the project is closed, the work is not necessarily over: the Control Plan is one part of ensuring the gains are maintained. If process performance strays out of control, the plan provides details and tools to adjust and re-monitor, to ensure there has not been an over-adjustment.
It is possible that the new performance capability warrants the calculation of new control limits.
If so, the test(s) used, evidence, and new control limits should also be a part of this document. It
is ideal to have a simple one-page document but if appendices and attachments are needed to
ensure understanding and control then include this information.
All relevant material and information needed to ensure the gains are sustained, and even further improved, should be included. Oftentimes there are long-term action items, and the project list (possibly maintained as a Gantt chart) shall be updated and followed by the Process Owner until those actions are complete. The GB/BB will often move on to another project after the team disbands, but follow-up is often required weeks or months later. The follow-up is done in conjunction with the Process Owner and possibly the Controller. Adhering to the details of the Control Plan will standardize these efforts and allow quick analysis of current performance.
A Control Plan provides a single point of reference for understanding process characteristics,
specifications, and standard operating procedures (SOPs) for the process. A control
plan enables assignment of responsibility for each activity within the process. This ensures that
the process is executed smoothly and is sustainable in the long run.
A good Control Plan needs to be based on a well thought out strategy. A good control plan
strategy should minimize the need to tamper with the process. It should also clearly state the
actions to be taken for out-of-control conditions. It should also raise appropriate indicators to
indicate the need for Kaizen activities. A Control Strategy should describe the training
requirements to ensure that everyone on the team is familiar with the standard operating
procedures. In the case of an equipment control plan, it should also include details about
maintenance schedule requirements.
The intent of an effective Control Plan Strategy is to
Operate our processes consistently on target with minimum variation
Minimize process tampering (over-adjustment)
Assure that the process improvements that have been identified and implemented become
institutionalized
Provide for adequate training in all procedures
Include required maintenance schedules
Control Plan inputs include all processes, measurement systems and resources that need to be monitored and controlled.
The elements of a control plan include process map steps, key process output variables, targets and specs, key and critical process input variables with appropriate working tolerances and control limits, important noise variables (uncontrollable inputs), and short- and long-term capability analysis results. Other elements of a control plan are the designated control methods, tools and systems (SPC, automated process control, checklists, mistake-proofing systems and standard operating procedures), training materials, maintenance schedules, and the reaction plan and responsibilities.
The plan should be developed by the project team in conjunction with those who will be
responsible for the day to day running of the processes. The plan should be validated and then be
subject to regular review, as part of the overall management system.
8.4. Sustain Improvements
As a Six Sigma project winds down, there are a number of activities that, if utilized, can
determine whether the implemented process improvement will continue to meet intended results,
thus contributing to the overall and ongoing success of the organization. Sustaining improvements emphasizes the role of training and documentation in sustaining support for Six Sigma improvements.
Organizational Memory - Traditional memory is associated with the individual's ability to
acquire, retain, and retrieve knowledge. Within business this concept is extended beyond the
individual, and organizational memory therefore refers to the collective ability to store and
retrieve knowledge and information.
Organizational memory (OM) (sometimes called institutional or corporate memory) is the
accumulated body of data, information, and knowledge created in the course of an individual
organization’s existence. Falling under the wider disciplinary umbrella of knowledge
management, it has two repositories: an organization's archives, including its electronic databases, and individuals’ memories.
Organizational memory can only be applied if it can be accessed. To make use of it,
organizations must have effective retrieval systems for their archives and good memory recall
among the individuals that make up the organization. Its importance to an organization depends
upon how well individuals can apply it, a discipline known as experiential learning or evidence-based practice. In the case of individuals’ memories, organizational memory’s veracity is
invariably compromised by the inherent limitations of human memory. Individuals’ reluctance to
admit to mistakes and difficulties compounds the problem.
Organizational memory can be subdivided into the following types
Professional - Reference material, documentation, tools, methodologies
Company - Organizational structure, activities, products, participants
Individual - Status, competencies, know-how, activities
Project - Definition, activities, histories, results
Training Plan Deployment
Training is the acquisition of knowledge, skills, and competencies as a result of the teaching of
vocational or practical skills and knowledge that relate to specific useful competencies. Training
has specific goals of improving one's capability, capacity, productivity and performance.
A training plan is a trainer's outline of the training process he or she will use in a training program.
It encompasses all the training schemes or skills reviews to achieve the goals pursued by the
company.
Training in organizations is implemented as
Initial training – It is conducted when implementing new processes for existing employees.
Recurring training – It focuses on re-training employees so as to better orient them towards the desired goals, since deterioration occurs after employees get acquainted with the existing system.
Important considerations of a successful employee training are
The goals of the employee training or development program are clear
The employees are involved in determining the knowledge, skills and abilities to be learned
The employees are participating in activities during the learning process
The work experiences and knowledge that employees bring to each learning situation are
used as a resource
A practical and problem-centered approach based on real examples is used
New material is connected to the employee's past learning and work experience
The employees are given an opportunity to reinforce what they learn by practicing
The learning environment is informal, safe and supportive
The individual employee is shown respect
The learning opportunity promotes positive self-esteem
Documentation
Documentation is a set of documents provided on paper, or online, or on digital or analog media,
such as audio tape or CDs. Examples are user guides, white papers, on-line help, and quick-reference
guides. It is becoming less common to see paper (hard-copy) documentation. Documentation is
distributed via websites, software products, and other on-line applications.
The six sigma project team should update their documentation which includes process maps,
document checklists, cheat sheets, etc. The better the final documentation, the easier it will be for the process to be maintained at the envisaged performance levels.
The main consideration for documentation is to keep the documents updated and current so they can act as a reference for any documentary requirements. The documentation should also comply with the following essential features
Coverage - Code that is and is not documented is easily identifiable.
Accuracy - The code comments accurately describe the code reflecting the last set of source
code changes.
Clarity - The system documentation describes what the code does and why it is written that
way.
Maintainability - A single source is maintained to handle multiple output formats, product
variants, localisation or translation.
Ongoing Evaluation
Evaluation provides reflection on the performance of the programme and enables the
programmes to receive independent feedback on the relevance, effectiveness, efficiency and/or
consistency of the programme. Ongoing evaluation is a process with the purpose of arriving at
better understanding of the current outcomes of intervention and the formulation of
recommendations that would be useful from the point of view of programme implementation.
Various tools and techniques used for ongoing evaluation are control charts, process capability
studies and process metric assessment.
9. DESIGN FOR SIX SIGMA (DFSS)
Design for Six Sigma can be seen as a subset of Six Sigma focusing on preventing problems by
going upstream to recognize that decisions made during the design phase profoundly affect the
quality and cost of all subsequent activities to build and deliver the product. Early investments of
time and effort pay off in getting the product right the first time. DFSS adds a new, more
predictive front end to Six Sigma. It describes the application of Six Sigma tools to product
development and process design efforts with the goal of “designing in” Six Sigma performance
capability. The intention of DFSS is to bring such new products and/or services to market with a process performance of around 4.5 sigma or better, for every customer requirement.
Design for Six Sigma is the suggested method to bring order to product design; 70-80% of all quality problems are design related. Emphasis on the manufacturing side alone concentrates on the tail end of the problem-solving process. One of the ways to increase revenues is to introduce more new products for sale to customers.
DFSS is a proactive, rigorous, systematic method using tools, training, and measurements for
integrating customer requirements into the product development process. DFSS strives to prevent
defects by transforming customer wants and expectations to what can be produced, whereas the
Six Sigma DMAIC model focuses on eliminating defects by reducing operational variability.
Stage Gate Process
A stage gate process is used by many companies to screen and pass projects as they progress
through development stages. Each stage of a project has requirements that must be fulfilled. The
gate is a management review of the particular stage in question. It is at the various gates that
management should make the “kill” decision.
9.1. Common Design Methodologies
DMADV is one aspect of Design for Six Sigma (DFSS), which has evolved from the earlier
approaches of continuous quality improvement and Six Sigma approach to reduce variation.
DMADV refers to Define, Measure, Analyze, Design and Verify. A key component of the DMADV approach is an active ‘toll gate’ check-sheet review of the outcomes of each of the five steps. The application of DMADV is aimed at creating a high-quality product keeping in mind customer requirements at every stage of the game. In general, the phases of DMADV are
Define phase – In this phase, wants and needs believed to be most important to customers are
identified by historical information, customer feedback and other information sources. Teams
are assembled to drive the process. Metrics and other tests are developed in alignment with
customer information. The key deliverables are team charter, project plan, project team,
critical customer requirements and design goals.
Measure phase - The defined metrics are used to collect data and record specifications for
remaining process. All the processes needed to successfully manufacture the product or
service are assigned metrics for later evaluation. Technology teams test metrics and then
apply them. The key deliverables are qualified measurement systems, data collection plan,
capability analysis, refined metrics and functional requirements.
Analyze phase - The result of the manufacturing process (i.e. finished product or service) is
tested by internal teams to create a baseline for improvement. Leaders use data to identify
areas of adjustment within the processes that will deliver improvement to either the quality or
manufacturing process of a finished product or service. Teams set final processes in place
and make adjustments as needed. The deliverables are data analysis, initial models
developed, prioritized X's, variability quantified, CTQ flow-down and documented design
alternatives.
Design phase - The results of internal tests are compared with customer wants and needs.
Any additional adjustments needed are made. The improved manufacturing process is tested
and test groups of customers provide feedback before the final product or service is widely
released. The deliverables include validated and refined models, feasible solutions, trade-offs quantified, tolerances set and predicted impact.
Verify phase - The last stage in the methodology is ongoing. While the product or service is
being released and customer reviews are coming in, the processes may be adjusted. Metrics
are further developed to keep track of on-going customer feedback on the product or service.
New data may lead to other changes that need to be addressed so the initial process may lead
to new applications of DMADV in subsequent areas. The key deliverables are detailed
design, validated predictions, pilot / prototype, FMEA's, capability flow-up and standards
and procedures.
9.2. DFX
DFX techniques are part of detail design and are ideal approaches to improve life-cycle cost,
quality, increased design flexibility, and increased efficiency and productivity using the
concurrent design concepts. Benefits are usually pinned as competitiveness measures, improved
decision-making, and enhanced operational efficiency. The letter “X” in DFX refers to the performance measure or ability being designed for.
DFX provides systematic approaches for analyzing design from a spectrum of perspectives. It
strengthens teamwork within the concurrent DFSS environment. DFX focuses on vital business
elements of concurrent engineering, maximizing the use of the limited resources available to the
DFSS team. The X is used as a variable term that can be substituted with, for example,
Assembly, Cost, Environment, Fabrication, Manufacture, Obsolescence, Procurement,
Reliability, Serviceability or Test.
Design for Manufacture and Assembly - Designs that are constructed to be easy to
manufacture during the conceptual stage of a product development are much more likely to
avoid redesign later when the system is being certified for production readiness. The best
way to ensure a concept can be manufactured is to have active involvement from the
production and supply chain organizations during concept generation and selection. These
are systematic approaches that the DFSS team can use to carefully analyze each design
parameter that can be defined as part or subassembly for manual or automated manufacture
and assembly to gradually reduce waste.
Design for life-cycle cost - Life-cycle cost is the real cost of the design. It includes not only
the original cost of manufacture but also the associated costs of defects, litigations, buybacks,
distributions support, warranty and the implementation cost of all employed DFX methods.
Probability distributions are given to represent inherent cost uncertainty. Monte Carlo
simulation and other discrete-event simulation techniques are then used to model uncertainty
and to estimate the effect of uncertainty on cost.
Design for serviceability - It is the ability to diagnose, remove, replace, replenish, or repair
any component or subassembly to original specifications with relative ease. Poor
serviceability produces warranty costs, customer dissatisfaction, and lost sales and market
share due to lost loyalty. The DFSS team may check their VOC (voice-of-the-customer)
studies such as QFD for any voiced serviceability attributes. Ease of serviceability is a
performance quality in the Kano analysis. DFSS strives to have serviceability personnel
involved in the early stages, as they are considered a customer segment.
Design for Reliability - Reliability is the probability that a physical entity delivers its
functional requirements (FRs) for an intended period under defined operating conditions. The
time can be measured in several ways. For example, time in service and mileage are both
acceptable for automobiles, while the number of open-close cycles in switches is suitable for
circuit breakers. The DFSS team should use DFR while limiting the life-cycle cost of the
design. The assessment of reliability usually involves testing and analysis of stress strength
and environmental factors and should always include improper usage by the end user. A
reliable design should anticipate all that can go wrong. Various hazard analysis approaches
are used like fault-tree analysis.
Design for Maintainability - The objective of Design for Maintainability is to assure that
the design will perform satisfactorily throughout its intended life with a minimum
expenditure of budget and effort. Design for maintainability (DFM), Design for
Serviceability (DFS), and Design for Reliability (DFR) are related because minimizing
maintenance and facilitating service can be achieved by improving reliability. An effective
DFM minimizes the downtime for maintenance, user and technician maintenance time,
personnel injury resulting from maintenance tasks, cost resulting from maintainability
features and logistics requirements for replacement parts, backup units, and personnel.
Maintenance actions can be preventive, corrective, or recycle and overhaul. Design for
Maintainability encompasses access and control, displays, fasteners, handles, labels,
positioning and mounting, and testing.
9.3. Robust Design and Process
Robust design processes can produce extremely reliable designs both during manufacture and in
use. Robust design uses the concept of parameter control to place the design in a position where
random “noise” does not cause failure. Dr. G. Taguchi wrote that the United States has coined
the term “Taguchi Methods” to describe his system of robustness for the evaluation and
improvement of the product development processes.
Robust design aims to produce a reliable design by controlling parameters so random noise does
not cause failure. Since DOE techniques help determine the best design concepts used for
tolerance design, a robust DOE strategy helps create a design that improves the product
parameters, process parameters, and desired performance characteristics. A product or process is
controlled by three primary factors of noise, signal, and control.
The robust concept illustrates that a product or process is controlled by a number of factors to
produce the desired response. The signal factor is the signal used for the intended response. The
success of obtaining the response is dependent on control factors and noise factors.
Control factors are those parameters that are controllable by the designer that operate to produce
a response when triggered by a signal. Control factors are separated into those which add no cost
and those that do add cost. Factors that add cost are frequently associated with selection of the
tolerance of the components and are called Tolerance Factors. Factors that don’t add cost are
simply control factors. Noise factors are parameters or events that are not controllable by the
designer and are generally random.
Noise factors have the ability to produce an error in the desired response. The function of the
designer is to select control factors so that the impact of noise factors on the response is
minimized while maximizing the response to signal factors.
Some of the key principles of robust design are
Concept Design - The selection of the process or product architecture is based on technology,
cost, customer, or other considerations.
Parameter Design - The design is established using the lowest cost components and
techniques. The response is then optimized for control and minimized for noise.
Tolerance Design - The tolerances are reduced until the design requirements are met.
Functional Requirements
In the development of a new product, the product planning department must determine the
functions required. The designer will have a set of requirements that a new product must possess.
The designer will develop various concepts, embodiments, or systems that will satisfy the
customer’s requirements.
Functional requirements are the requirements the product or process must possess to satisfy the
customer’s requirements. They need to be understood early in the design process in order to
establish criteria for selecting a design based on the quality level and development costs that
enable the product to survive in a competitive marketplace. Along with establishing the
functional requirement early in the process, they must yield accurate information.
Misinformation about them can delay the development cycle. Examples include: a car must average 25 km/l in city driving; an alarm beep must activate when the temperature exceeds 30°C; etc.
The product design must be “functionally robust,” which implies that it must withstand variation
in input conditions and still achieve desired performance capabilities. The designer has two
objectives
Develop a product that can perform the desired functions and be robust under various
operating or exposure conditions.
Have the product manufactured at the lowest possible cost.
Parameter Design
Parameter designs improve the functional robustness of the process so that the desired
dimensions or quality characteristics are obtained. The process is considered functionally robust
if it produces the desired part for a wide variety of part dimensions. The steps to obtain this
robustness are
Determine the signal factors (input signals) and the uncontrollable noise factors (error
factors) and ranges.
Choose as many controllable factors as possible, select levels for these factors, and assign
these levels to appropriate orthogonal arrays.
Calculate S/N ratios from the experimental data.
Determine the optimal conditions for the process derived from the experimental data.
Conduct actual production runs.
Noise Factors
They are all the uncontrolled sources producing variation throughout the product’s life and
across production units, except variables in design parameters. There are two types of noise
factors
External noise sources are variables that are external to the product affecting its performance.
Internal noise sources are the product’s deviations from its nominal settings, including
worker/machine and environmental conditions.
In baking, the use of sugar, butter, eggs, milk, and flour are controllable factors, whereas the
conditions inside the oven such as humidity and temperature are not controllable. Motor vehicle
tires encounter external noise through exposure to a variety of conditions such as surface
conditions due to weather (damp, wet, snow, ice), different temperature, and different road types
(concrete, asphalt, gravel, dirt, and off road). The ability of tires to provide a smooth ride and
responsive stopping regardless of the conditions is an example of robustness.
Noise factors are difficult, expensive, or impossible to control. In the past, many engineers
approached noise problems by attempting to control the noise factors themselves. Because of the
expense, Dr. Taguchi suggests designers should only use this type of control action as a last
resort, and he recommends an experimental approach to seek the design parameters to minimize
the impact of the noise factors on variation.
This approach drives the designer to select the appropriate control settings that will make the
product unaffected by noise factors, thus robust. Remember, the goal of robustness strategies is
to achieve a given target with minimal variation. Lack of robustness is synonymous with
excessive variation, resulting in quality loss. Ignoring noise factors during the early design stages
can result in product failures and unanticipated costs; therefore addressing noise factors early in
the process through robust design minimizes these problems.
Signal-to-Noise Ratio - A signal-to-noise ratio or S/N ratio is used to evaluate system
performance. The combinations of the design variables that maximize the S/N ratio are selected
for consideration as product or process parameter settings.
Case 1 - S/N ratio for “smaller-is-better”: S/N = -10 log(mean-squared response)
Case 2 - S/N ratio for “larger-is-better”: S/N = -10 log(mean-squared of the reciprocal response)
Case 3 - S/N ratio for “nominal-is-best”: S/N = 10 log(squared mean / variance)
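As a minimal numeric sketch of the three ratios in Python (y is a list of measured responses; the nominal-is-best case uses the sample mean and variance):

    import math
    import statistics

    def sn_smaller_is_better(y):
        # S/N = -10 log10(mean of y^2)
        return -10 * math.log10(sum(v * v for v in y) / len(y))

    def sn_larger_is_better(y):
        # S/N = -10 log10(mean of 1/y^2)
        return -10 * math.log10(sum(1 / (v * v) for v in y) / len(y))

    def sn_nominal_is_best(y):
        # S/N = 10 log10(mean^2 / variance)
        return 10 * math.log10(statistics.mean(y) ** 2 / statistics.variance(y))

    y = [9.8, 10.1, 10.0, 9.9, 10.2]
    print(sn_nominal_is_best(y))  # higher S/N means a more robust setting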
The Loss Function - The loss function is used to determine the financial loss that will occur when a quality characteristic, y, deviates from the target value, m. The quality loss is zero when the quality characteristic, y, is at the target value, m. The quality loss function is defined as the mean square deviation of the objective characteristics from their target values. The function is depicted as

L(y) = k(y - m)^2, where k = A / Δ^2

The function L(y) shows that the further away from the target the quality characteristic is, the greater the quality loss. The “A” value is the cost due to a defective product. The amount of deviation from the target, or “tolerance” as Taguchi calls it, is the delta (Δ) value. The mean square deviation from the target, as used by Taguchi, does not indicate a variance.
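As a small illustration of this quadratic loss (the cost A, tolerance Δ and target m below are invented, not values from the text):

    def taguchi_loss(y, m, A, delta):
        """Quadratic quality loss L(y) = k(y - m)^2 with k = A / delta^2."""
        k = A / delta ** 2
        return k * (y - m) ** 2

    # Say a defective unit costs A = $50 at a tolerance of +/- 0.5 mm around m = 10 mm
    print(taguchi_loss(10.0, 10.0, 50.0, 0.5))   # 0.0  (no loss at target)
    print(taguchi_loss(10.25, 10.0, 50.0, 0.5))  # 12.5 (loss grows quadratically)
    print(taguchi_loss(10.5, 10.0, 50.0, 0.5))   # 50.0 (full cost at the limit)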
Noise Strategies
The design engineers will specify the design parameters of the chosen system for improved
quality and reduced cost. A variety of tools are used to make the new system robust to various
factors. The primary sources of variation that will affect the product are
Environmental effects
Deteriorative effects
Manufacturing imperfections
The purpose of robust design is to make the product less sensitive to the effects. It is not
economical to reduce these sources of variation directly. The design & development department
will shoulder the major responsibility for reducing sources of variation (noise).
Tolerance Design
Tolerance is a permissible limit of variation in a parameter's dimension or value. Dimensions and
parameters may vary within certain limits without significantly affecting the equipment’s
function. Designers specify tolerances with a target and specification limits (upper and lower) to
meet customer requirements. The tolerance range, the difference between those limits, is the
permissible limit of variation. Systems are made of components, and components are made of
materials. Realistically, not all components are made of the same materials. Designers must
determine the tolerances for all system components.
Tolerance design establishes metrics allowing designers to identify the tolerances that can be
loosened or tightened to meet customer needs while producing a cost-effective product.
Tolerance design goes a step beyond parameter design by considering tolerance decisions as
economic decisions just as spending additional money buys better materials or equipment.
Besides economics, tolerance design also considers other factors such as constraints due to
material’s properties, engineering design choice and safety factors. By enhancing the
understanding of the relationship between product parameters, process parameters, and desired
performance characteristics, designers use DOE to identify what is significant and move the
process or product to the ideal function.
The tolerances for all system components must be determined. This includes the types of
materials used. In tolerance design, there is a balance between a given quality level and cost of
the design. The measurement criteria are quality losses. Quality losses are estimated by the
functional deviation of the products from their target values plus the cost due to the malfunction
of these products. Taguchi describes the approach as using economical safety factors.
The functional limit (Δ0) must be determined by methods like experimentation and testing.
Taguchi uses a LD50 point as a guide to establish the upper and lower functional limits. The
LD50 point is where the product will fail 50% of the time. The 50% point is called the median.
The formulas for the tolerance specification, the functional limit, and the safety factor are as follows. The economical safety factor N is determined as

N = sqrt(A0 / A)

where A0 is the loss incurred when the functional limit Δ0 is exceeded and A is the loss at the factory tolerance. The tolerance specification is then

Δ = Δ0 / N

Given the value of the quality characteristic at y, and the target value at m, the quality loss function will appear as

L(y) = (A0 / Δ0^2)(y - m)^2
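As a brief numeric sketch of these relationships (the costs are illustrative: A0 is the loss when the functional limit is exceeded, A the internal cost of reworking or scrapping a part at the factory tolerance):

    import math

    def economical_safety_factor(A0, A):
        """Taguchi's economical safety factor N = sqrt(A0 / A)."""
        return math.sqrt(A0 / A)

    def factory_tolerance(delta0, N):
        """Tolerance specification Delta = Delta0 / N."""
        return delta0 / N

    N = economical_safety_factor(A0=200.0, A=8.0)
    print(N)                          # 5.0
    print(factory_tolerance(0.5, N))  # 0.1: ship only parts within +/- 0.1 of target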
Statistical Tolerance
Parts work together, fit into one another, interact together and bond together. Since each part has
its own tolerance, statistical tolerance is a way to determine the tolerance of an assembly of parts.
By using sample data from the process, statistical tolerance defines the amount of variance in the
process. Statistical tolerance is based on the relationship between the variances of independent
causes and the variance of the overall results.
According to Thomas Pyzdek, engineering tolerances are usually set without knowing which
manufacturing process will be used to manufacture the part, so the actual variances are not
known. However, a worst-case scenario would be where the process was just barely able to meet the engineering requirement. This situation occurs when the engineering tolerance is 6 standard deviations wide (± 3 standard deviations). Thus, because the variances of independent causes add, we can write the equation as

T = sqrt(T1^2 + T2^2 + ... + Tn^2)

where T is the tolerance for the overall result and T1 ... Tn are the component tolerances.
Pyzdek asserts, "Instead of simple addition of tolerances, the squares of the tolerances are added to determine the square of the tolerance for the overall result." Pyzdek goes on to say, "The result of the statistical approach is a dramatic increase in the allowable tolerances for the individual piece parts." This is an important concept because each individual part can now be given a greater tolerance. Pyzdek uses the following assumptions
The component dimensions are independent and the components are assembled randomly.
This assumption is usually met in practice.
Each component dimension should be approximately normally distributed.
The actual average for each component is equal to the nominal value stated in the
specification.
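To make the contrast concrete, here is a short Python sketch of the root-sum-of-squares stack-up with four hypothetical components, compared against simple (worst-case) addition:

    import math

    def statistical_tolerance(tolerances):
        """The squares of the component tolerances add (RSS stack-up)."""
        return math.sqrt(sum(t * t for t in tolerances))

    parts = [0.10, 0.10, 0.10, 0.10]               # four parts, each +/- 0.10
    print(round(sum(parts), 2))                    # 0.4 worst case (simple addition)
    print(round(statistical_tolerance(parts), 2))  # 0.2 statistical result

Holding the assembly tolerance fixed instead, each part could be allowed roughly twice the tolerance of the worst-case approach, which is Pyzdek's point.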
Tolerance Design and Process Capability
Customers always have requirements. To meet those requirements, we assign products and
processes certain specifications and a target. As the products deviate from the target, quality
losses grow. Functional requirements (FRs) are the requirements the product or process must have
to satisfy the customer. Tolerance is a permissible limit of variation in a dimension or value of
the parameter.
Linking tolerance and process capability is about linking functional requirements and tolerances
to the economic safety factor. The following situations arise
Smaller-is-Better Tolerances - In smaller-is-better situations, the quality characteristics are nonnegative and should be as small as possible. The formulas for calculating tolerances are

N = sqrt(A0 / A) and Δ = Δ0 / N

Larger-is-Better Tolerances - In larger-is-better situations, the quality characteristics are also nonnegative and should be as large as possible. The economical safety factor is calculated as

N = sqrt(A0 / A)

The tolerance specification for a larger-is-better tolerance is

Δ = N x Δ0

where Δ is the tolerance specification required and Δ0 is the functional limit.

The quality characteristic, for the larger-is-better situation, is designated as y and the loss function is L(y). When y is infinite, L(y) is zero. A new equation for the average loss function L(y) is

L(y) = A0 Δ0^2 x (1/n) Σ (1 / yi^2)
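Under those definitions, here is a compact sketch of the larger-is-better average loss (the strength readings, A0 and Δ0 below are invented for illustration):

    def average_loss_larger_is_better(y_values, A0, delta0):
        """Average loss L = A0 * delta0^2 * mean(1 / y^2); approaches zero as y grows."""
        k = A0 * delta0 ** 2
        return k * sum(1 / (y * y) for y in y_values) / len(y_values)

    # Weld strengths, with failure below delta0 = 20 costing A0 = $300
    print(average_loss_larger_is_better([45.0, 50.0, 55.0], A0=300.0, delta0=20.0))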
Taguchi’s Quality Imperatives
They are summarized as
Robustness is a function of product design. Quality losses are a loss to society.
Increasing the signal-to-noise ratio will improve the robustness of the product.
For new products, use planned experiments to seek out the parameter targets.
To build robust products, use customer-use conditions.
Tolerances are set before going to manufacturing. The quality loss function can be measured.
Products that barely meet the standard are only slightly better than products that fail the
specifications. The aim is for the target value.
The factory must manufacture products that are consistent by reducing variation.
Reducing product field failures will reduce the number of defectives in the factory. Part
variation reduction decreases system variation.
Proposals for capital equipment for on-line quality efforts should include the average quality
loss.
9.4. Special Design Tools
TRIZ
TRIZ is a Russian abbreviation for “the theory of inventive problem solving.” Genrich
Altshuller states that inventiveness can be taught. Creativity can be learned; it is not innate, and one does not have to be born with it. Altshuller solidified a theory that one solves problems through a collection of assembled techniques. Technical evolution and invention have certain patterns. One should be knowledgeable about them to solve technical problems. There is some common sense,
logic and use of physics in problem solving. There are three groups of methods to solve technical
problems
Various tricks (a reference to a technique)
Methods based on utilizing physical effects and phenomena (changing the state of the
physical properties of substances)
Complex methods (combination of tricks and physics)
Traditionally, inventive problem-solving is linked to psychology; however, TRIZ is based on a
systematic view of the technological world. Altshuller realized that people, including specialists,
have difficulty thinking outside of their field of reference. Given a problem (P) within their
specialty, many people will only limit their search for a solution (S) to their area of specialty.
Initially there were 27 TRIZ tools which were later expanded to 40 innovative, technical tools.
The sequence of 9 action steps in the use of TRIZ is
Analysis of the problem
Analysis of the problem’s model: Use of a block diagram defining the “operating zone”
Formulation of the ideal final result (IFR) - which will provide more details
Utilization of outside substances and field resources
Utilization of an informational data bank - Determining the constraints on the problem
Change or reformulate the problem
Analysis of the method that removed the physical contradiction - Is a quality solution
provided?
Utilization of the found solution - Seeking side effects of the solution
Analysis of the steps that lead to the solution
Axiomatic Design
Axiomatic design is a design methodology that seeks to reduce the complexity of the design
process. It accomplishes this by providing a framework of principles that guide the
designer/engineer. The axioms appear simple, but applications are complicated. This method
has attracted many converts in the last 20 years. Nam P. Suh is the developer of this technique.
The goal of axiomatic design is to make human designers more creative, reduce the random
search process, minimize the iterative trial-and-error process, and determine the best design
among those proposed. The axiomatic design process consists of these basic steps
Establish design objectives to meet customer needs
Generate ideas to create solutions
Analyze the possible solutions for the best fit to the design objectives
Implement the selected design
Axiomatic design is a systematic, scientific approach which breaks the design requirements into
4 different parts or domains:
Customer domain - The needs of customers are identified.
Functional domain - These are the functional requirements (FRs) the customer wants.
Physical domain - These are the design parameters (DPs) that will meet the functional
requirements
Process domain - These are manufacturing variables to produce the product
There is a “mapping” of requirements from one domain to the next.
The mapping between the customer and functional domains is defined as concept design; the
mapping between functional and physical domains is product design; the mapping between the
physical and process domains corresponds to process design. Identifying the customer’s needs
and requirements also known as customer domain serves as the axiomatic design’s foundation.
The functional domain consists of the requirements of what the product must do to meet the
customer requirements, while the physical domain consists of the design parameters necessary to
meet the functional requirements. Finally, the process domain consists of the requirements to
produce the product to meet the physical domain.
In this methodology, each requirement is filled by one variable. That is, 5 functional
requirements (FRs) will be matched up by 5 design parameters (DPs). If not, then the axiomatic
design methodology is violated. The solutions for each domain are described as
Mapping between customer and functional domains: concept design
Mapping between functional and physical domains: product design (drawings, specs,
tolerances)
Mapping between physical and process domains: process design
Suh proposed that there must exist a fundamental set of principles that determine good design
practices. A search was made for these principles, which were translated into axioms. An axiom
is a formal statement of what is known or used routinely. An axiom is a self-evident truth upon
which other knowledge must rest, thus serving as a starting point for deducing other truths. In
this sense, an axiom can be known before knowing any of the other propositions.
Set-based Design
Set-based design is an engineering design methodology that pertains to Toyota’s set-based
concurrent engineering design. Set-based concurrent engineering (SBCE) design begins with
broad sets of possible solutions, converging to a narrow set of alternatives and then to a final
solution.
Design teams from various functions can work on sets of solutions in parallel, gradually narrowing them. Information from development, testing, customers and others helps narrow the decision sets. Sets of ideas are viewed and reworked, leading to more robust, optimized, and more
efficient projects. This approach is deemed to be more efficient than working with one idea at a
time.
Systematic Design
Systematic design is a step-by-step approach to design. It provides a structure to the design process using a German methodology that closely follows the guidelines of the German design standard, Guideline VDI 2221, Systematic Approach to the Design of Technical Systems and Products. The four main phases in the design process are
Clarification of the task - collect information, formulate concepts, identify needs
Conceptual design - identify essential problems and sub-functions
Embodiment design - develop concepts, layouts, refinements
Detail design - finalize drawings, concepts and generate documentation
An abstract concept is developed into a concrete item, represented by a drawing. Synthesis
involves search and discovery, and the act of combining parts or elements to produce a new
form. Modern German design thinking uses the following structure
The requirements of the design are determined
The appropriate process elements are selected
A step-by-step method transforms qualitative items to quantitative items
A deliberate combination of elements of differing complexities is used
The main steps in the conceptual phase are
Clarify the task
Identify essential problems
Establish function structures
Search for solutions using intuition and brainstorming
Combine solution principles and select qualitatively
Firm up concept variants: preliminary calculations, and layouts
Evaluate concept variants
Pugh Concept Selection
Stuart Pugh was a leader in product development (total design) methodology. QFD can be used
to determine customer technical requirements. Pugh suggests a cross-functional team activity to
assist in the development of improved concepts. The process starts with a set of alternative
designs. These early designs come from various individuals in response to the initial project
charter. A matrix-based process is used to refine the concepts. During the selection process,
additional new concepts are generated. The final concept will generally not be the original
concept. The Pugh concept selection process has 10 steps which are
Choose criteria
Form the matrix
Clarify the concepts
Choose the datum concept
Run the matrix
Evaluate the ratings
Attack the negatives and enhance the positives
Select a new datum and rerun the matrix
Plan further work
Iterate to arrive at a new winning concept
Pugh Evaluation Matrix
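Running the matrix itself is mechanical. In this toy Python sketch (the concepts, criteria and ratings are hypothetical), each alternative is rated against the datum per criterion as better (+1), the same (0) or worse (-1), and the pluses, minuses and net score are tallied:

    # Each concept is rated against the datum per criterion: +1 better, 0 same, -1 worse
    ratings = {
        "Concept B": {"cost": +1, "weight": 0, "ease of assembly": -1, "reliability": +1},
        "Concept C": {"cost": -1, "weight": +1, "ease of assembly": +1, "reliability": 0},
    }

    for concept, scores in ratings.items():
        plus = sum(1 for s in scores.values() if s > 0)
        minus = sum(1 for s in scores.values() if s < 0)
        net = sum(scores.values())
        print(f"{concept}: {plus} pluses, {minus} minuses, net {net:+d} vs. the datum")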
Porter Five Forces Analysis
Porter five forces analysis is a framework for industry analysis and business strategy
development. It draws upon industrial organization (IO) economics to derive five forces that
determine the competitive intensity and therefore attractiveness of a market. Attractiveness in
this context refers to the overall industry profitability. An "unattractive" industry is one in which
the combination of these five forces acts to drive down overall profitability. A very unattractive
industry would be one approaching "pure competition", in which available profits for all firms
are driven to normal profit. This analysis is associated with its principal innovator Michael E.
Porter of Harvard University.
Porter referred to these forces as the micro environment, to contrast it with the more general term
macro environment. They consist of those forces close to a company that affect its ability to
serve its customers and make a profit. A change in any of the forces normally requires a business
unit to re-assess the marketplace given the overall change in industry information. The overall
industry attractiveness does not imply that every firm in the industry will return the same
profitability. Firms are able to apply their core competencies, business model or network to
achieve a profit above the industry average. A clear example of this is the airline industry. As an
industry, profitability is low and yet individual companies, by applying unique business models,
have been able to make a return in excess of the industry average.
Porter's five forces include - three forces from 'horizontal' competition - the threat of substitute
products or services, the threat of established rivals, and the threat of new entrants; and two
forces from 'vertical' competition - the bargaining power of suppliers and the bargaining power
of customers.
Porter developed his Five Forces analysis in reaction to the then-popular SWOT analysis, which
he found non-rigorous and ad hoc. Porter's five forces is based on the Structure-Conduct-Performance paradigm in industrial organizational economics. It has been applied to a diverse
range of problems, from helping businesses become more profitable to helping governments
stabilize industries. Other Porter strategic frameworks include the value chain and the generic
strategies. The five important forces that determine competitive power in a business situation are
Supplier Power - Here you assess how easy it is for suppliers to drive up prices. This is
driven by the number of suppliers of each key input, the uniqueness of their product or
service, their strength and control over you, the cost of switching from one to another, and so
on. The fewer the supplier choices you have, and the more you need suppliers' help, the more
powerful your suppliers are.
Buyer Power - Here you ask yourself how easy it is for buyers to drive prices down. Again,
this is driven by the number of buyers, the importance of each individual buyer to your
business, the cost to them of switching from your products and services to those of someone
else, and so on. If you deal with few, powerful buyers, then they are often able to dictate
terms to you.
Competitive Rivalry - What is important here is the number and capability of your
competitors. If you have many competitors, and they offer equally attractive products and
services, then you'll most likely have little power in the situation, because suppliers and
buyers will go elsewhere if they don't get a good deal from you. On the other hand, if no-one
else can do what you do, then you can often have tremendous strength.
Threat of Substitution - This is affected by the ability of your customers to find a different
way of doing what you do – for example, if you supply a unique software product that
automates an important process, people may substitute by doing the process manually or by
outsourcing it. If substitution is easy and substitution is viable, then this weakens your power.
Threat of New Entry - Power is also affected by the ability of people to enter your market. If
it costs little in time or money to enter your market and compete effectively, if there are few
economies of scale in place, or if you have little protection for your key technologies, then
new competitors can quickly enter your market and weaken your position. If you have strong
and durable barriers to entry, then you can preserve a favorable position and take fair
advantage of it.
The Hoshin Planning System
Hoshin planning systematizes strategic planning. To be truly effective, it must also be cross-functional, promoting cooperation along the value stream, within and between business
functions. Hoshin planning is a seven-step process, in which you perform the following
management tasks
Identify the key business issues facing the organization.
Establish measurable business objectives that address these issues.
Define the overall vision and goals.
Develop supporting strategies for pursuing the goals. In the Lean organization, this strategy
includes the use of Lean methods and techniques.
Determine the tactics and objectives that facilitate each strategy.
Implement performance measures for every business process.
Measure business fundamentals.
The hoshin process employs a standardized set of reports, known as tables, in the review process.
These reports are used by managers and work teams to assess performance. Each table includes
A header, showing the author and scope of the plan
The situation, to give meaning to the planned items
The objective (what is to be achieved)
Milestones that will show when the objective is achieved
Strategies for how the objectives are achieved
Measures to check that the strategies are being achieved
Hoshin table types are
Hoshin review table - During reviews, plans are presented in the form of standardized hoshin
review tables, each of which shows a single objective and its supporting strategies.
Strategy implementation table - Implementation plans are used to identify the tactics or action plans needed to accomplish each strategy.
Business fundamentals table (BFT) - Business fundamentals, or the basic elements that
define the success of a key business process, are monitored through its corresponding
metrics. Examples of business fundamentals are safety, people, quality, responsiveness, or
cost.
Annual planning table (APT) - Record the organization’s objectives and strategies in the
annual planning table. The APT is then passed down to the next organizational structure.
The implementation plan usually requires coordination both within and between departments and
process owners. Implementation plans are not just the responsibility of an individual completing
the lowest-level annual plan. Each level in the organization carries detailed responsibilities to
ensure support for and successful completion of the organization’s plans. This is how the “Do” step of PDCA happens.