SPC Pillar: Continual Improvement Process (CIP)
Statistical process control (SPC) enables a continual improvement process (CIP), a method of optimizing a process using data and statistical methods.
This article is a continuation of our series exploring statistical process control (SPC).
Be sure to read the previous articles:
- What is Statistical Process Control (SPC)?
- SPC Pillar: Data Collection
- SPC Pillar: Using Control Charts
One of the major advantages of statistical process control (SPC) is its ability to drive a continual improvement process (CIP), a method of optimizing a process using data and statistical methods.
In order to implement CIP, data must be collected and analyzed. As sensor costs have fallen and availability has risen, facilities collect more data than ever, yet much of it goes unanalyzed unless there is a problem. Sometimes the sheer quantity of data is overwhelming and not worth the hours needed to parse it all. However, other problems may remain masked, not exposed until the data is analyzed.
Using machine learning (ML), much of this data can be processed without the need for a human to view all of it. This is particularly true for machine diagnostics and maintenance measurements. Many existing hardware platforms can silently monitor bearing temperatures, processing times, vibrations, pump duty cycle, or other machine parameters, raising an alarm as these values approach preset hazard limits. With a finely tuned ML algorithm, the system can parse this data and look for trends. Do the bearing temperatures rise when certain processes or materials are run? Is there a spike in duty cycle during certain work shifts?
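Even without a full ML pipeline, trend detection of this kind can start very simply. The sketch below fits a least-squares slope to a series of bearing temperature readings and raises a warning when the temperature is creeping upward, long before a hard alarm limit would trip. The data, the 0.1 °C-per-run threshold, and the function name are all invented for illustration.

```python
# Hypothetical sketch: flag a rising trend in bearing temperatures
# before they reach a hard alarm limit.

def trend_slope(values):
    """Least-squares slope of values over their sample index (deg C per run)."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((i - mean_x) * (v - mean_y) for i, v in enumerate(values))
    den = sum((i - mean_x) ** 2 for i in range(n))
    return num / den

# One reading per run; a slow creep from ~60 C toward a 75 C hazard limit.
bearing_temps = [60.1, 60.4, 60.2, 60.9, 61.3, 61.2, 61.8, 62.1, 62.5, 62.9]

slope = trend_slope(bearing_temps)
if slope > 0.1:  # deg C per run; an arbitrary early-warning threshold
    print(f"Warning: bearing temperature rising {slope:.2f} C/run")
```

A production system would also correlate the readings against process, material, or shift data to answer questions like the ones above, but the principle is the same: look for a statistically meaningful drift, not just a limit violation.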
What is Continual Improvement Process (CIP)?
The goal of the plant engineer is to make a process stable and predictable. Predictable processes make for simple economics, expected return on investment (ROI), easily-scheduled maintenance procedures and costs, and a clockwork product output. However, if the process engineer stopped at this level of stability, the company would be out of business very quickly, as a stable process tends to stagnate in comparison to other manufacturers. What was good a year ago is not enough today.
Figure 1. Data logged and recorded on machines can be used to make statistical decisions to improve process quality, safety, and efficiency. Image used courtesy of HAAS
On many individual tasks, predictable standards should definitely be maintained. The plasma etcher should etch to a specified depth on each run until there is a business advantage to changing the depth. CIP isn’t directly involved in creating changes to these particular steps. However, statistical methods should certainly be used to verify whether a process change has adversely affected parameters that should be kept constant - this is still a key concept of the SPC system.
Standard Operating Procedures (SOPs)
CIP cannot occur unless the engineer already knows and can accurately predict the facility operations. This requires standard operating procedures (SOPs) to be implemented and referenced regularly. Without these, it is hard to tell whether a process change has had a positive impact, because there is no way to confirm the process was followed repeatably in the first place. If there is no repeatability between process runs, there is no hope of determining whether an improvement has occurred that will benefit the entire process.
Process improvement typically falls into a few categories: increased safety, increased throughput, increased efficiency, or decreased overhead cost. As an engineer, one should consider these four categories constantly in any existing process. Is there a potential safety hazard? Is there a way to increase the number of products leaving the assembly line? Is there a way to reduce the amount of consumed material or increase the time between maintenance tasks? These are the types of questions to ask.
Automation changes should be considered with these steps in mind. Before turning automation engineers loose on a project, each component task should be carefully vetted to ensure that the machines can actually meet enhanced goals without significant detriment to others.
Figure 2. Injection molding processes can be improved with various dies and materials, but the changes may come at a cost - is it truly a good business move? Image used courtesy of All3DP
It is all a trade-off, however, and sometimes economics will be the determining factor in how much negative impact is allowed. For example, in an injection molding facility, the addition of a different plasticizer may improve the flowability of the plastic, but it takes longer to cure the plastic after molding. Economics will determine whether the longer cure time is worth the improvement in flowability. These factors can change over time as well, depending on market conditions.
Suppose there is a change that will increase throughput on one part of a complicated process. All of the proper SOPs are in place, and the process is quite repeatable and in control. The temptation is to make the leap to change the SOPs and adjust the controller programming. Not so fast!
Before implementing a process change, the engineer must first run some carefully planned screening experiments to examine how the process change will affect the rest of the line. Making changes too fast often results in unforeseen problems that are far worse than any actual improvements.
One of the more powerful statistical tools is the Z-test. While fully calculating a Z-test is beyond the scope of this article, the premise is that a Z-test can determine whether the process has statistically changed. This can be used to answer questions about positive change in a process, such as “do the new nozzles produce a more uniform coating than the old ones?” It can also be used to check whether a change has negatively impacted a downstream process, such as, “have these new nozzles (and coating) made the following etching step less reliable?”
A Z-test is used to determine whether an experimental lot or sample is considered part of the general population. In other words, could the experimental sample be mixed in with the rest of the production runs and not stand out statistically? In order for Z-tests to be effective, the “population” of data must be large compared to the “sample” size. Also, because this is a statistical method, the engineer must determine the degree of confidence in the result, and this value must be reported with any findings. For example, “the sample can be considered part of the population with 97% confidence.”
Figure 3. Photoresist spin coating of a silicon wafer, a process that must be repeated countless times, with incredible precision and consistency. Image used courtesy of inseto
Consider a photoresist spin coating at a semiconductor facility. A new photoresist has different physical properties, but is much less expensive. In order for this to be a good fit, previous runs of the old (standard) photoresist must be compared to the new photoresist. Because the standard photoresist was in use for several years, with many runs, it can be considered the “population.” A few test wafers are run with the new photoresist and compared using a Z-test to determine whether the wafers with the new photoresist can statistically be considered part of the population of wafers run with the standard photoresist.
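The photoresist comparison above can be sketched in a few lines of code. Every number here is invented for illustration: an assumed population mean and standard deviation for coating thickness with the standard photoresist, and a handful of hypothetical test-wafer measurements with the new photoresist. The z-statistic compares the sample mean against the population, and the two-sided p-value tells us whether the sample stands out statistically.

```python
# Hypothetical Z-test sketch for the photoresist example; all values invented.
from statistics import NormalDist, mean
from math import sqrt

# Years of runs with the standard photoresist (the "population"):
pop_mean = 1.200   # mean coating thickness, microns
pop_std = 0.010    # population standard deviation, microns

# Test wafers coated with the new, cheaper photoresist (the "sample"):
sample = [1.204, 1.198, 1.207, 1.195, 1.201, 1.203]

n = len(sample)
z = (mean(sample) - pop_mean) / (pop_std / sqrt(n))

# Two-sided p-value: probability of a z-statistic at least this extreme
p = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"z = {z:.2f}, p = {p:.3f}")
if p > 0.05:  # 95% confidence threshold, chosen for illustration
    print("Sample is statistically consistent with the population.")
else:
    print("Sample differs from the population; investigate before switching.")
```

With these made-up numbers the sample blends into the population, so on thickness alone the cheaper photoresist would look acceptable; a real study would of course test many more wafers and more parameters than thickness.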
For manufacturers, CIP is not just possible; it may be a necessity to remain competitive. While human minds cannot process all of the data collected, machine learning can be leveraged against this data to look for trends that might otherwise go unchecked, unlocking the tiny, incremental changes that keep a company driving into the future.