
White Paper
How to Improve Quality and Reduce Costs
using Data-Driven Manufacturing
Executive Summary
Manufacturers today are generating massive amounts of data;
however, in many cases the potential benefits of this resource
remain largely untapped. In process manufacturing (e.g. food
and beverage, chemical processing and bulk pharmaceuticals)
the challenges are well understood and there are a wide
variety of off-the-shelf solutions which provide measurement,
storage and analysis capability. Discrete manufacturing (e.g.
aerospace and automotive components, consumer electronics
and medical devices) is more challenging due to the volume
and complexity of data as well as the requirement in many
industries to maintain long-term data archives. The situation
has led to organisations stockpiling data in a way that they are not then able to use effectively, or developing bespoke in-house solutions. While these in-house solutions may be effective, they require significant time and financial investment, and may not be within the manufacturer’s core area of expertise. Some commercial solutions exist, but they may be cost-prohibitive, may not scale well or may lack the ability to be customised. Additionally, while more data management tools are coming on to the market, many vendors do not speak the language of advanced manufacturers, leading to sub-optimal solutions.
This whitepaper discusses key issues faced by discrete
manufacturers and suggests ways in which a coherent strategy
for managing data can lead to numerous business benefits, all
of which have a direct impact on the profitability of the
business:
• Reduced scrap and rework
• Improved efficiency
• Higher quality
After introducing these concepts and problem-solving techniques, this paper presents Simplicity AI’s Tequra
Analytics data management software, which helps
manufacturers optimise processes, drive quality
improvements and reduce costs.
Background
A typical discrete manufacturing process will comprise a
number of production stages, or operations, whereby
elements are integrated into the product and various tests
are performed. A typical electronics manufacturing setup
is shown in Figure 1.
[Figure 1: Discrete Manufacturing Process]
In this case, bare printed circuit boards (PCBs) are received
from a supplier and enter the manufacturing process on
the left of the diagram. The product then flows through
the stages as follows:
• Component placement using a surface-mount pick-and-place machine
• Automated optical inspection to check for component placement and correctness
• In-Circuit and Boundary Scan testing to check populated board functionality
• Module Integration, whereby multiple boards are integrated into a higher-level assembly
• Functional Test to verify that the integrated unit operates within required specifications.
At each stage, checks are performed to ensure that
appropriate tolerances are adhered to; for example,
requiring that bolts are tightened with the appropriate
torque at an assembly station, or that module power
consumption is correct at a functional test station. In most
cases, many checks are performed at each stage and any
single out-of-specification failure will prevent the product
from progressing to the next stage. As an extreme
example, a jet engine controller may be subjected to
thousands of unique specification checks during a
functional test. In addition to measurements, other data
are generally captured at each stage including start/end
times, stations/equipment used and operators involved.
In the case of low-complexity, low-volume production it is
feasible to collate relevant data using manual methods in
order to drive improvement initiatives. For example,
keeping track of overall pass/fail status from functional
test stations may help highlight an issue where operators
are having trouble correctly connecting test harnesses. In
this case, operator training and improved connectors could
have a dramatic effect on production efficiency. Similarly, if
a test stage measures 10 parameters, it is possible to plot
these by hand on a chart and add new points for each
product tested. If a shift in any measurement is noticed, even if within tolerance, this could signal a potential quality issue, as it would lead to a higher likelihood of in-the-field failures. The shift could be attributed to a number of factors, such as sourcing components from a new supplier or a calibration issue with measurement equipment. In either case, having visibility of the data allows corrective actions to be applied in a timely fashion rather than responding to customer returns.

Scaling up the complexity of the process and/or product makes this task much more challenging. As mentioned previously, some test stages may produce tens of thousands of results, while some may have cycle times of less than one minute in a factory with many parallel stations. By way of example, a manufacturer of devices for the telecoms/wireless industry generated 55 gigabytes of production data (predominantly test results) during the second half of 2013, while producing fewer than 10,000 units per month. This equates to approximately 9.4 megabytes of data per unit produced, a figure which could increase dramatically for more complex products. Extracting useful information from this mountain of data is impossible without the ability to quickly aggregate large data sets.

Why Is This Important?
Inefficiencies within a manufacturing process and defects within
the manufactured product both have a direct effect on an
organisation’s bottom-line. Optimisation of the manufacturing
process helps to eliminate waste and make better use of resources,
while reducing defects helps to mitigate the costs of dealing with
faulty products. Specific example cost savings within manufacturing
are difficult to come by, as manufacturers are reluctant to share
details of efficiency gains with competitors. However, the effectiveness of process improvement strategies is not in dispute, as can be seen below for organisations using Six Sigma process improvement techniques:
• Motorola saved $16 billion between 1985 and 2001 [1]
• General Electric (GE) saved $4.4 billion between 1996 and 1999 [2]
• Ford Motor Company saved $1 billion between 2000 and 2002 [3].
While these reported savings cover more than just manufacturing
operations, similar strategies to those used by these companies can
be utilised by large and small manufacturers to streamline
processes, improve quality and ultimately cut costs.
Manufacturing process inefficiencies typically manifest as various forms of waste, meaning that more work is being expended than is required to produce a product. These “wastes” were first categorised by the Toyota Production System (TPS) and later incorporated into the philosophy of Lean Manufacturing:
• Transportation (moving a product during manufacturing)
• Inventory (holding stock components, work-in-progress and manufactured goods)
• Motion (movement of people or machinery)
• Waiting (interruptions to manufacturing flow)
• Over-processing (non-value-adding work, such as repeating operations or exceeding requirements)
• Over-production (production in excess of demand)
• Defects (effort involved in capturing and dealing with defects).
Looking at defects in greater detail, these are covered by a
metric known as the “Cost of Quality” (CoQ), which can be
broken down as follows:
Costs of conformance, or cost of good quality (CoGQ),
which includes:
• Prevention costs (e.g. training, quality planning, statistical process control)
• Appraisal costs (e.g. inspection, testing and auditing)
Costs of non-conformance, or cost of poor quality (CoPQ),
which includes:
• Internal failure costs (e.g. scrap and rework)
• External failure costs (e.g. returns/repairs, liability, recall and loss of reputation).
Richard W. Anderson, former general manager of Hewlett-Packard’s Computer Systems Division, stated: “The earlier
you detect and prevent a defect, the more you can save. If
you catch a two cent resistor before you use it and throw it
away, you lose two cents. If you don’t find it until it has
been soldered into a computer component, it may cost
$10 to repair the part. If you don’t catch the component
until it is in the computer user’s hands the repair will cost
hundreds of dollars. Indeed, if a $5000 computer has to be
repaired in the field, the expense may exceed the
manufacturing cost.”[4]
In 2002, Ford Group Vice President Jim Padilla said “The
cost of poor quality is the single biggest waste we have. It
costs us in warranty. It costs us in public image, which in
turn affects our residual values.”[5]
Manufacturers have employed various techniques to
streamline their processes and improve quality such as
Total Quality Management (TQM) and more recently, Six
Sigma and Lean. Some organisations have in-house process
improvement strategies, such as United Technologies
Corporation (UTC) with Achieving Competitive Excellence
(ACE). However, in all cases the common thread is a commitment to quality and the tools to improve the current state of things. It should be noted that while some improvements can offer large gains, they are generally iterative: optimising one particular area will uncover another area for improvement. These organisations therefore maintain a culture of continuous improvement to extract maximum gains from the approach over time.

Techniques

The following section describes techniques to help drive efficiency and quality improvements. In most cases, using these tools can have a direct, positive effect on the profitability of the business. Many of these techniques could be applied using manual methods of data collection and calculation; however, this may require a great deal of human effort, especially considering the volume of data produced in modern manufacturing environments. In many cases this is simply not feasible without the support of data systems.

Improve efficiency and eliminate waste
The challenge for advanced manufacturers is that the complexity
of the manufacturing process and the product being
manufactured both have a great influence on production
efficiency and the prevalence of defects. The avalanche of data
produced by the manufacturing process can be a great asset in
identifying areas for improvement and tracking the effectiveness
of changes. However, without the correct tools to be able to
extract meaningful information, opportunities for improvement
may be missed.
By tracking all stages of the production process, it is possible to
determine the cost and time taken to manufacture a product. The
product will have a cost attributed to raw materials and bought-in
components, while the production process will have costs
attributed to assembling and testing the product such as operator
labour and equipment costs. At each production operation/stage
the following summary data is typically gathered (as a bare
minimum):
• Product and/or lot identifier (e.g. serial number for serialised products or lot number for non-serialised products)
• Start date/time and duration
• Operator (for operations requiring human intervention)
• Station (the physical system or work area used to carry out the operation/stage)
• Status (e.g. pass/fail, to determine whether the product can progress to the next stage).
For an individual product, it is possible to plot a timeline depicting
its progression through the production process. This can be used
to track production costs and highlight areas of waste, such as:
• Operations being repeated due to unexpected failures
• Products having to be reworked or scrapped
• Interruptions to the manufacturing flow, characterised by variable or long delays between operations.
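Given a time-ordered list of such records for a single product, the waiting time between consecutive operations falls out directly. A minimal sketch, building on the hypothetical OperationRecord above:

```python
from datetime import timedelta

def waiting_times(records: list) -> list:
    """Return the idle gap between each pair of consecutive operations.

    Long or highly variable gaps are a signature of the 'waiting' waste,
    while repeated operation names in the input hint at rework loops.
    """
    ordered = sorted(records, key=lambda r: r.start)
    gaps = []
    for prev, nxt in zip(ordered, ordered[1:]):
        gap = nxt.start - (prev.start + prev.duration)
        gaps.append((prev.operation, nxt.operation, gap))
    return gaps
```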
Using a data system to store operation summary data allows quick
access to this information for a single product; however, the real
power comes from the system’s ability to aggregate data across
multiple products.
Overall Yield (OY) is an example of an aggregate metric, defined as the proportion of items successfully completing the process relative to the number entering it. A high-level depiction of yield is shown in Figure 2.
This diagram shows that the raw materials or
components for 295 units entered the manufacturing
process, 290 units were shipped to customers and 5 were
scrapped. The OY calculation here is simple:
• OY = Items Out / Items In
• OY = 290 / 295 = 98.3%
[Figure 2: Overall Yield]
[Figure 3: The Hidden Factory]
Therefore, it could be argued that the process is running at
98.3% efficiency. Unfortunately, Overall Yield is a crude
measure and hides a multitude of manufacturing
problems. This is typically termed ‘the hidden factory’ in
that there may be a large amount of non-visible work
performed in making, finding and repairing defective
products. A more realistic view of the process might be as
shown in Figure 3.
The above example depicts a process with two operations: an
assembly stage (A) which involves fitting an enclosure and
tightening a series of bolts, followed by a functional test stage (B)
whereby the product is subjected to a number of tests. Compared
with the simplified view in Figure 2, in this case it is clear that there
is additional work being performed. At the Enclosure Fitting
operation there are 25 instances of having to repeat the bolt
tightening action due to insufficient or excess torque being applied.
Additionally, one unit needed to be scrapped as it was not possible
to fit the enclosure. During the Functional Test stage it is clear that
a significant number of units fail, requiring rework and retesting.
The information in the diagram can be reduced to two distinct
measures: First Time Yield (FTY) and Rolled Throughput Yield (RTY).
In this case, the FTY for the functional test operation can be
calculated as:
• FTY_B = First Time Passes / Items In
• FTY_B = (294 − 91) / 294 = 203 / 294 = 69%
The RTY takes into account both operations in the
process:
• RTY = FTY_A × FTY_B
• RTY = 0.91 × 0.69 = 62.8%
Therefore, the chance of a product being manufactured
correctly without any rework is 62.8%. In this case, the
Functional Test stage appears to be the dominant factor in
low yields; however it should be stressed that the
functional test is performing its function of catching
defects – at this stage it is unclear as to the reason for the
low yield. Armed with these figures an engineer can utilise
a number of statistical and non-statistical tools to
determine whether the low yield is due to design
marginalities, inappropriate specifications, calibration issues, operator training or other factors. It is beyond the scope of this whitepaper to discuss the various techniques for identifying the reasons for low yield; suffice it to say that data management systems should provide quick access to high-level metrics (such as yield) in addition to providing the data and tools required for detailed investigations.

Note: In the aforementioned example, Functional Testing could arguably be classified as part of the hidden factory, as its primary function is to catch defects. It is the subject of ongoing debate as to whether any kind of test or inspection is a value-adding step within a manufacturing process. If every product could be manufactured perfectly in the first place, then inspection and testing could theoretically be eliminated; however, this is not realistic, as there are often strict customer-defined requirements for test coverage and long-term data archiving. Additionally, a good source of test data can be used to help optimise product design and to investigate the root causes of in-the-field failures. The counterpoint to this argument is that inspection and test can slow down the production process, so manufacturers strive to optimise these operations to reduce their impact. For these reasons, the Functional Test stage is not included within ‘the hidden factory.’
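To make the yield arithmetic above concrete, the following sketch reproduces the OY, FTY and RTY calculations from the worked example; the function names are illustrative only:

```python
def overall_yield(items_in: int, items_out: int) -> float:
    """OY: proportion of items entering the process that are eventually shipped."""
    return items_out / items_in

def first_time_yield(items_in: int, first_time_passes: int) -> float:
    """FTY for a single operation: units passing with no rework or repeats."""
    return first_time_passes / items_in

def rolled_throughput_yield(ftys: list) -> float:
    """RTY: probability a unit passes every operation first time."""
    rty = 1.0
    for fty in ftys:
        rty *= fty
    return rty

# Figures from the worked example above:
print(f"OY    = {overall_yield(295, 290):.1%}")            # 98.3%
print(f"FTY_B = {first_time_yield(294, 294 - 91):.1%}")    # 69.0%
print(f"RTY   = {rolled_throughput_yield([0.91, 0.69]):.1%}")  # 62.8%
```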
In addition to yield, careful attention should be paid to the
manufacturing flow when aiming to optimise the
production process. For example, variations in the time
taken to perform particular operations can lead to a
situation of under or over-production which can affect the
entire process, leading to idle equipment and personnel or
having to store excess stock. In modern manufacturing
organisations, the production rate is set to meet customer
demand – this is known as Takt time (T), which is defined
as:
• T = Available production time / Number of units (products) required.
Therefore, assuming a customer requirement of 50 units
per day in a factory capable of 7 hours production per day
(accounting for employee breaks), this would equate to a
Takt time of:
• T = 7 / 50
• T = 0.14 hours (8 minutes and 24 seconds).
In this example, the manufacturing process needs to
produce a unit every 8 minutes and 24 seconds to keep
pace with customer demand. Missing this target will mean
disappointing customers, while exceeding this rate will
lead to units queuing up between manufacturing
operations or having to be stored prior to shipping. While it may appear that this concept is better suited to high-volume manufacturers than to low-volume “job shops”, this is inaccurate: in virtually all cases there is a requirement to produce one or more units within a particular fixed timeframe, and this defines the Takt time. Figure 4 depicts a process with two operations, showing additional information about each stage. Crucially, this highlights the expected cycle times compared with real values. Note that the diagram includes waiting times between operations.

[Figure 4: Process Timing]
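The Takt time calculation above is trivial to automate. A minimal sketch reproducing the worked example (the function name is our own):

```python
from datetime import timedelta

def takt_time(available_time: timedelta, units_required: int) -> timedelta:
    """Takt time: the production pace needed to exactly meet customer demand."""
    return available_time / units_required

t = takt_time(timedelta(hours=7), 50)
print(t)  # 0:08:24 - one unit every 8 minutes and 24 seconds
```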
Manufacturing data tools should provide visibility over all
stages/operations in the production process, highlighting
any deviations from expected cycle times that could help
uncover potential problems with equipment, the need for
operator training or the fact that it may not be possible to
reliably produce units to within the required
specifications.
Provide manufacturing traceability
Due to the complexity of today’s products, it is highly
unlikely that a single manufacturer will process raw
materials and perform all the required operations to
produce a finished product.
Typically, manufacturers will source components and low-level assemblies from external suppliers or other parts of
the organisation. When manufacturing a product, it is
necessary to record which lower level assemblies or
components are used to build the product. This
information is often captured in high-level systems (such
as an ERP system). However, this may only represent a
“final state” showing only what was shipped to the final
customer. It is not uncommon for manufacturers to
replace components during the manufacturing process
during rework operations. In some cases replacing a
component may allow a product to pass a test stage, while
the “faulty” component may get reused in another
product if it subsequently allows the second product to
pass. While this is not necessarily recommended practice, differences in manufacturing tolerances can cancel out, making this a valid option. High-level tracking of
component/sub-assembly data when the product is
shipped ignores this valuable information. An example of a
computer manufacturing process is shown in Figure 5, whereby components are assembled and replaced to create a working product which is then shipped to the end customer.

[Figure 5: Computer Manufacturing Example]
In this example, the process requires that a random access
memory (RAM) module and power supply are fitted. The computer
is then tested and fails, with the functional test operation
producing a detailed breakdown of results. Based on the
information in the test report, an operator decides to replace the
power supply and RAM then runs the functional test again. The
test passes, allowing the computer to be shipped to the customer.
The operator subsequently determines that the power supply was
the most likely cause of failure and decides to scrap it; however,
the memory module is returned to stock, meaning that it may be reused in another computer at a later date. It may subsequently pass when used in a different computer, or may go on to cause many more failures unless it is scrapped.

By tracking the manufacturing history of all products and components/sub-assemblies, it is possible to:
• Uncover wasted effort due to the repeated reuse of failed components
• Track whether a component had previously contributed to a failure when used in another product (this could be an indication of a higher likelihood of in-the-field failures)
• Determine which components have been integrated into a product when shipped to a customer.
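One possible shape for such component genealogy tracking is sketched below; the event structure and queries are hypothetical, intended only to show that the three questions above reduce to simple lookups once fit/removal events are recorded:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FitEvent:
    """One fit or removal of a component into/from a unit."""
    unit_serial: str
    component_serial: str
    action: str                      # "fitted" or "removed"
    timestamp: datetime
    unit_failed_after: bool = False  # did the unit fail a test while this part was fitted?

class Genealogy:
    def __init__(self) -> None:
        self.events: list = []

    def record(self, event: FitEvent) -> None:
        self.events.append(event)

    def fitted_at_shipment(self, unit_serial: str) -> set:
        """Components still fitted to a unit (fits minus subsequent removals)."""
        fitted = set()
        for e in sorted(self.events, key=lambda e: e.timestamp):
            if e.unit_serial != unit_serial:
                continue
            if e.action == "fitted":
                fitted.add(e.component_serial)
            elif e.action == "removed":
                fitted.discard(e.component_serial)
        return fitted

    def failure_history(self, component_serial: str) -> int:
        """How many times this component was fitted to a unit that then failed."""
        return sum(1 for e in self.events
                   if e.component_serial == component_serial and e.unit_failed_after)
```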
To augment the data gathered within the factory, many
organisations are now choosing to integrate component
manufacturing data from the supply chain. Depending on the level of detail, this provides the ability to examine the manufacturing history
of a particular component/sub-assembly and its constituent parts.
For maximum effectiveness, this requires a commitment to
openness between suppliers and manufacturers such that
manufacturing history and test results can be provided along with
components. In fact, many larger organisations aim to support their suppliers in improving production efficiency and quality, as this may help reduce lead times, costs and the
prevalence of defects.
Reduce defects and characterise performance
Specifications are critical to ensuring that manufactured products
have acceptable performance. These specifications may include
mechanical constraints (e.g. size and weight), electrical
requirements (e.g. voltage level for a fixed power supply) and
various other measures. These may be derived from customer or
internal requirements and engineers use these to set specification
limits for various parameters, beyond which performance is
deemed to be unacceptable. Using the computer manufacturing
example mentioned previously: a functional test stage may
perform a variety of tests, including measuring the voltage output
from the power supply. If the measured value is within
specification limits and the remaining tests also pass, then the
functional test has been successful and the product may progress
to the next manufacturing operation/stage. It is obvious that
capturing out-of-specification failures helps to reduce the number
of defective units shipped to customers. In order to deal with out-of-specification failures, tools should provide high-level failure summaries to allow engineering resources to be targeted at solving
the biggest problems, while also providing engineers with the data
required to investigate and identify the root cause.
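A failure summary of this kind is essentially a Pareto ranking of out-of-specification failures by frequency. A minimal sketch (the input format is an assumption):

```python
from collections import Counter

def failure_pareto(failures: list) -> list:
    """Rank failed test names by frequency, with cumulative percentage.

    'failures' holds one entry per out-of-specification failure, e.g.
    ["PSU voltage", "PSU voltage", "RAM timing", ...].
    """
    counts = Counter(failures)
    total = sum(counts.values())
    running, table = 0, []
    for name, count in counts.most_common():
        running += count
        table.append((name, count, 100.0 * running / total))
    return table

for name, count, cum in failure_pareto(
        ["PSU voltage", "PSU voltage", "RAM timing", "PSU voltage", "LAN port"]):
    print(f"{name:12s} {count:3d} {cum:6.1f}%")
```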
Increasingly complex products make the process of defining
specifications more difficult, as the performance of an integrated
product (such as a power supply) may be very different from that of a well-understood isolated system (such as a voltage regulator). For more
complex products, such as smartphones or Radar systems, the
problem is even more pronounced.
While hard specification limits are certainly useful, they should be used in conjunction with a strategy to reduce variation. This technique is one of the key tools of
the process improvement methodology, Six Sigma. By reducing variation, the probability of defects occurring is also reduced, even for tests which may have always previously passed within specification limits.

[Figure 6: Measurement Variation]

A variation
example is shown in Figure 6; the chart on the left depicts
a particular measurement made on 32 separate units,
plotted in consecutive order with the lower and upper
specification limits. The measurement always stays within
the limits so the test passes; however, it is clear that there
is a distinct shift in results after testing the 15th unit. Shifts
such as this are normally due to external factors, rather
than natural variations in measurements. In this case it
could be due to changing the supplier of a component, or
related to a measurement instrument calibration issue. By
plotting the results using a histogram (as shown in the chart on the right), the distribution of similar
measurements can be seen. This has been extended to
show the short and long-term variation curves that
indicate the expected distribution of measurements, as the
number of measurements increases. Short term variation
removes the effect of the shift and gives an indication of
how narrow the distribution could be if external factors
are removed. Long term variation includes the effect of the
shift to give an indication of actual performance.
The data shown in these charts can be reduced to a set of
single-value metrics:
• Cp: Short-Term Capability Index – variation with respect to the width of the specification limit window, utilising short-term statistics
• Cpk: Adjusted Short-Term Capability Index – variation with respect to the limits, including any offset, utilising short-term statistics
• Pp: Long-Term Capability Index – variation with respect to the width of the specification limit window, utilising long-term statistics
• Ppk: Adjusted Long-Term Capability Index – variation with respect to the limits, including any offset, utilising long-term statistics.
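Assuming the conventional formulas (Cp and Pp compare the specification window to six standard deviations; Cpk and Ppk use the distance from the mean to the nearer limit over three standard deviations, with short-term sigma commonly estimated from the average moving range), these indices might be computed along the following lines:

```python
import statistics

def sigma_long(values: list) -> float:
    """Long-term (overall) standard deviation: includes any shifts and drifts."""
    return statistics.stdev(values)

def sigma_short(values: list) -> float:
    """Short-term (within) standard deviation, estimated from the average
    moving range of consecutive values divided by the d2 constant (1.128
    for subgroups of two). Largely insensitive to step shifts in the data."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    return statistics.mean(moving_ranges) / 1.128

def capability(values: list, lsl: float, usl: float) -> dict:
    """Cp/Cpk from short-term sigma; Pp/Ppk from long-term sigma."""
    mean = statistics.mean(values)
    s_st, s_lt = sigma_short(values), sigma_long(values)
    return {
        "Cp":  (usl - lsl) / (6 * s_st),
        "Cpk": min(usl - mean, mean - lsl) / (3 * s_st),
        "Pp":  (usl - lsl) / (6 * s_lt),
        "Ppk": min(usl - mean, mean - lsl) / (3 * s_lt),
    }
```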
These measures are aggregated across many test runs and
many units, so it is possible to visually depict how
“capable” a particular measurement is. Applying
specification limits (lower and upper) to measurements to
provide pass/fail criteria is standard practice; however, this does not highlight situations where a measured value moves from being close to one limit to being close to the other – perhaps after a component change. In both cases the test would pass, but the likelihood of associated failures within a complex system would increase.
By utilising capability statistics, these changes become
apparent and aid continuous improvement efforts to
reduce the source of variation. Individual measurements
with a Cpk value greater than 2 typically indicate 3.4 defects (failures) per million opportunities, or a 99.99966% pass rate. Since a test procedure encompasses many measurements, ensuring that each measurement hits the required level is a good way to ensure overall product quality and reduce the incidence of field failures and customer returns. By displaying capability indices, engineers can quickly review large numbers of tests
without having to view trends and histograms directly. In
the case of a poorly performing test (one with low Cpk or
Ppk), it is then possible to investigate in more detail to
determine whether the variation is due to external
factors, inappropriate test specification or unit
performance.
Note: A typical product lifecycle will involve phases for design proving/verification, whereby the product’s performance is characterised and deemed acceptable to move into production. It is a hotly debated topic as to whether testing during the manufacturing process should be limited to proving that the unit has been put together correctly. In this case, the set of tests would be the bare minimum to determine whether the unit works. The arguments for this approach are clear:
• Test solutions take less time to develop, since the required scope is smaller
• Test cycle times are shorter, as fewer measurements need to be made
• Less test equipment is required, leading to lower-cost stations
• Responsibility for product performance is shifted from manufacturing to R&D.
However, there are a number of potential issues with this strategy:
• The pressure to get products to market sooner means that design proving stages are compressed, so products may move into production sooner than would be ideal – it is not uncommon for manufacturers to use production data to refine designs and adjust specifications during early production runs
• A lack of good-quality measurement data for a particular unit makes it difficult to diagnose failures that occur in the field (this is especially pertinent in the case of a catastrophic failure, such as a plane crash, where the unit in question is implicated as a possible cause but has been completely destroyed by the event).
Typically, the best solution is to strike a balance between the available resources (such as engineering personnel and capital expenditure budget), the needs of production (such as required production rate and operator skills) and the requirements for diagnosing failures that occur in the field.
The responsibilities of a manufacturer do not stop once a
product has been shipped from the factory. Many industries
such as aerospace and medical devices have strict
requirements for long term data archiving. This is typically
due to the fact that defects found after manufacture can
lead to injury or loss of life. In the case that a failure occurs
in the field, it is imperative that data be readily accessible to
help determine the root cause. Even in industries where
human life is not at stake, such as consumer electronics,
failures in the field can lead to huge costs relating to
warranty repairs, product recalls and the impact of lost sales
owing to damaged reputation. Therefore, data should be
stored in a form that allows secure storage as well as being
able to quickly access information if prompted by a product
recall or field failure.
Manufacturing Data Systems
Various systems exist which can help manufacturers
optimise processes and reduce defects. Small
manufacturers may implement certain manufacturing
systems using paper-based or spreadsheet-based
approaches, while larger manufacturers will typically have
invested in one or more systems. Some common acronyms
are introduced below in Figure 7. It should be noted that some of the categories depicted in the diagram do not necessarily map onto a single tool; they may comprise multiple components and may also encompass management activities.

[Figure 7: Manufacturing Data Systems]
Enterprise Resource Planning (ERP) and Product Lifecycle
Management (PLM) form the top layer of the diagram and are
classified as ‘Enterprise Level’ in that they help to manage the
business activities of the organisation as a whole. Within the
context of manufacturing, ERP systems define customer facing
tasks (e.g. orders and shipping) and high level planning tasks (e.g.
inventory and production capacity). PLM systems aim to collate all
data related to a particular product from design through to
verification, production, maintenance and retirement.
The next layer in the diagram is concerned with managing the
production process and collating manufacturing data.
Manufacturing Execution System (MES) and Manufacturing
Operations Management (MOM) essentially provide the same
function – to direct production operations, including production
scheduling and product routing through the factory. Plant
Information Management (PIM) and Test Data Management (TDM)
provide the capability to store and analyse production data.
Generally, PIM is more common in process/continuous
manufacturing where measurement data (such as temperatures,
pressures and flow rates) are acquired continuously. TDM is more
widely used for complex, discrete products whereby manufacturing
stages generate large amounts of parameterised measurement
data when a product passes through the stage.
The systems which reside on the factory floor occupy the lowest
layer of the diagram. These systems directly control the assembly
and testing operations, potentially with support from a human
operator. The systems generally have integrated instrumentation
or sensors which are used to feed data into the PIM/TDM systems.
The scope and usefulness of these systems vary a great deal and
there are often non-distinct boundaries between them, in that one system may include some, but not all, features of another. For example, an ERP system may provide enhanced production planning capabilities which fulfil the tasks of an MES system. Additionally, an MES system may provide the ability to capture production data, negating the need for a separate PIM system.
The main challenges facing manufacturers in deciding which systems to implement are as follows:
• Certain tools tend to be monolithic (typically older systems) – they may perform a particular job well, but they do not integrate with other systems to allow data exchange and automation.
• Depending on organisation size, there may be requirements to use a particular vendor’s tools or existing incumbent systems.
• Tools which provide the functionality of multiple systems within one product may seem attractive, but may lack the functionality of separate tools.
Many organisations, large and small, rely on manual collation of
certain forms of manufacturing data to fulfil the needs of some of
the aforementioned systems. This is problematic since it requires
continuous human effort, is error-prone and does not scale well as
production rates increase. It is therefore imperative that
appropriate systems are used to support the production process.
As is the trend with IT systems in general, integration of
manufacturing systems is becoming more widespread.
Therefore, it is possible to use the most appropriate systems
and exchange data between them. For example, a recent
Simplicity AI bespoke test solution required integration with
SAP (an ERP system). In this case, the test system requested
serial number information and configuration data which was
used for identifying and programming the product being
manufactured. In essence, manufacturing companies can
focus on the tools which offer the best return on investment
(ROI), with the knowledge that these tools will integrate with
existing and future corporate IT infrastructure.
Tequra Analytics
Tequra Analytics is a data collection, storage, reporting and
analysis solution, designed for manufacturing and R&D. The
system allows users to:
• Track production metrics
• Reduce defects
• Characterise product performance
• Provide manufacturing traceability.
In relation to the architecture diagram first introduced in Figure 7, and reproduced for clarity in Figure 8 below, Tequra Analytics most closely matches the role of a Test Data Management system.

[Figure 8: Tequra Analytics (TDM)]
While Tequra Analytics is predominantly a TDM system, it also
manages data which may be in the realm of some ERP, MES/
MOM and PLM systems. Unlike many other commercial Test
Data Management systems, Tequra Analytics has a strong focus on integration with other tools, including bespoke in-house systems, with numerous interfaces allowing for data
interchange. Simplicity AI will work with customers to
determine integration requirements and provide
customisations to ensure that all systems remain
synchronised. Having said this, the system may also be used
stand-alone with data normally available in other systems
entered by hand. This is useful for smaller organisations which
may utilise paper or spreadsheet-based mechanisms for
managing production and also large organisations who may
want to trial the system prior to integration with other
systems.
The remaining sections of this whitepaper summarise some key features of Tequra Analytics and highlight how they may be used to drive efficiency and quality improvements.
Manufacturing yield
Tequra Analytics provides an overview of the manufacturing
process, allowing managers to see the current state of
production and compare performance against earlier time
periods. It is possible to display production yield, broken down
by a number of criteria; an example is shown in Figure 9, displaying overall and first-pass yields grouped by software version.

[Figure 9: Yield by Software]
[Figure 10: Unit Manufacturing History]
[Figure 11: Failure Pareto]
Unit manufacturing history
The ability to display a product’s manufacturing history is
provided in a timeline view, as shown in Figure 10. Each
element in the timeline shows the result of a
manufacturing operation, any applicable operator notes
and the components/sub-assemblies which made up the
product at that point in time.
Out-of-specification failures
Measurements which fall outside specification limits will
typically mean having to scrap or rework a unit. Therefore,
being able to see at a glance the most prevalent failures
ensures that the most pressing problems can be addressed
first. An example of a failure summary is shown in Figure
11.
More Information
For further details on the features of Tequra Analytics,
please visit www.simplicityai.com/tequra
References

1. Motorola University: ‘Motorola Six Sigma Services’ – 22 July 2002
2. General Electric Company: ‘GE Investor Relations Annual Reports’ – 22 July 2002
3. Quality Digest: ‘Six Sigma at Ford Revisited’ – June 2003, p. 30
4. Ross, Joel: ‘Principles of Total Quality’ – Third Edition, 2004
5. Connelly, Mary: ‘Automaker Forced to Trim Vehicle Costs After Launches’ – Automotive News, October 2002