Coincidentally, I had planned to write this month about the importance of accurate data, and its conversion into actionable information, within an effective performance improvement system. Then I spotted a very interesting post on LinkedIn by John Foster, CQP MCQI, on a similar subject.
The original ‘headline’ quote was attributed to Dr W. Edwards Deming, the founding father of statistical process control, and has been used and misquoted several times since.
John then gives several examples where we are let down by our trust in the data, all of which make valid observations which cannot be disputed.
From my own experience, I have seen many companies where enormous quantities of data are collected throughout the production day, yet the processes remain very poorly controlled. Though the collection of data can be standardised, there are many ways to manipulate it. If judgements are based upon statistical samples, it becomes even more complicated: from where should the sample be taken? How large should the sample set be? How frequently should samples be taken? Should the sample sets be averaged? And if the sample sets contain different quantities, or are taken at different frequencies, should the averages be weighted?
Each variation will produce a different result, each result will lead to a different conclusion, and each conclusion to different actions.
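To make that concrete, here is a small sketch, with entirely hypothetical fill-weight figures, showing how just one of those choices (weighting the averages or not) changes the answer when sample sets are of different sizes:

```python
# Hypothetical fill-weight sample sets (grams), taken at different times
# during a shift. The set sizes differ, so a simple mean-of-means and a
# weighted (pooled) mean give different answers.
sample_sets = [
    [501.2, 499.8, 500.5],                   # 3 bottles sampled
    [498.9, 499.5],                          # 2 bottles sampled
    [500.1, 500.7, 499.9, 500.3, 500.0],     # 5 bottles sampled
]

set_means = [sum(s) / len(s) for s in sample_sets]

# Unweighted: average the set means, ignoring how big each set was.
unweighted = sum(set_means) / len(set_means)

# Weighted: pool every individual reading, so larger sets count for more.
all_readings = [x for s in sample_sets for x in s]
weighted = sum(all_readings) / len(all_readings)

print(f"unweighted mean of set means: {unweighted:.3f} g")
print(f"weighted (pooled) mean:       {weighted:.3f} g")
```

With these figures the two methods disagree by more than a tenth of a gram per container, and a different method therefore implies a different adjustment decision, which is exactly the point above.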
There is an old saying, first attributed to Carroll D. Wright, a prominent US Government statistician, which went something like ‘Figures will not lie, but liars will figure’. I suspect that this saying, just like statistics themselves, has been manipulated over the decades, and that the original statement, especially if expressed by a government spokesman, was more delicately put. However, even though the concept is true, I much prefer the word ‘manipulate’ to ‘lie’, especially if the manipulation is done in ignorance.
Most of our business is with global food and drink manufacturers and, more widely, FMCG and pharmaceutical companies; in effect, any company whose products end up in a packet, can or bottle and are sold in specific quantities.
Since Average Quantity Law was introduced in Europe in 1979, statistical analysis of container weights and volumes has become increasingly important. This is even more so where the ingredients are expensive and there is therefore a desire to satisfy legality without giving product away in overfill.
In the decades since the introduction of Average Quantity legislation, production lines have become much faster and have fewer people running them. It is not unusual now to find a whisky bottling line, for example, running at 300 or even 600 bottles per minute. Beer bottling lines are usually faster, and canning lines faster still. It is obvious, therefore, that statistical process control must be used to control such processes to best effect.
In the old days, before the introduction of Average Quantity Law, it was quite common to find companies operating quality control procedures which were largely not statistical but absolute, in that the production department would manufacture as quickly as possible, and the quality control department would find and remove any defects. Customer complaints proved that this method of eliminating defects was unsuccessful. Statistical process control became much more widely used on the assumption that it would control the process, rather than the products, and that the reason for taking samples was simply to ensure that the process was still running within acceptable control limits.
Unfortunately, though frequently used, it has not been well understood and, even more than four decades later, a huge level of ignorance still surrounds Average Quantity Law. The consequence is that some packers are undoubtedly packing illegally, whilst others are unnecessarily giving product away, thereby unknowingly adding significant losses to their bottom line.
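For readers less familiar with the legislation, the checks it imposes are commonly summarised as the three ‘packers’ rules’. The sketch below, with hypothetical fill data, illustrates those rules as they are usually stated in UK guidance; the tolerable negative error (TNE) depends on the nominal quantity and comes from the legislation’s tables, so here it is simply passed in as a parameter rather than looked up:

```python
# Illustrative check of the three 'packers' rules' commonly associated with
# average quantity legislation (as summarised in UK guidance; actual
# reference-test sampling plans are more involved than this).

def check_packers_rules(fills_ml, nominal_ml, tne_ml):
    """Return a dict of pass/fail results for a batch of fill measurements."""
    t1 = nominal_ml - tne_ml          # 'non-standard' threshold
    t2 = nominal_ml - 2 * tne_ml      # 'inadequate' threshold

    # Rule 1: the batch average must be at least the nominal quantity.
    mean_ok = sum(fills_ml) / len(fills_ml) >= nominal_ml
    # Rule 2: no more than 2.5% of packages may fall below nominal minus one TNE.
    below_t1 = sum(1 for f in fills_ml if f < t1)
    t1_ok = below_t1 / len(fills_ml) <= 0.025
    # Rule 3: no package at all may fall below nominal minus two TNEs.
    t2_ok = all(f >= t2 for f in fills_ml)

    return {"average >= nominal": mean_ok,
            "<= 2.5% below T1": t1_ok,
            "none below T2": t2_ok}

# Hypothetical 700 ml spirit line; the TNE figure is assumed for illustration.
fills = [701.0, 699.5, 700.8, 702.1, 698.9, 700.3]
print(check_packers_rules(fills, nominal_ml=700, tne_ml=15))
```

A line whose average sits just above nominal with its spread comfortably inside the TNE bands passes all three rules; the cost of ignorance is either failing one of them, or passing them with an average far above nominal, which is precisely the overfill ‘giveaway’ described above.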
Most of the companies we visit have either too much data, not enough data, or meaningless data, in that it does not lead to appropriate actions.
To provide optimum improvements, the sampling process must be adequately implemented, and the data thus collected must be turned into meaningful information and acted upon immediately.
The reason so much data is meaningless is that it isn’t turned into actionable information or, if it is, the information is meaningless as the data from which it was generated was not collected through any consistent or recognisable statistical methodology.
One of the fundamental reasons for this is lack of time. Even a ‘normal’ process needs appropriate control limits created for each product/production line combination. To create such limits would require ‘the team’ to have knowledge of ‘process capability’ and the time to carry out the analysis product by product and line by line.
Most companies nowadays just do not have the time to carry out such time-consuming analyses, even if they have the knowledge to do so.
Consequently, arbitrary control limits are often chosen instead (‘control by guesswork’), or control limits are set based upon what the user would like rather than upon line/process capability. As a result, a product filled into, say, a glass bottle (wines or spirits, for example) might be given preferential control limits of +1 ml (70ml product) when its filling capability may be only +4 ml.
In such an example, the line/product combination would be ‘over-constrained’, and the control system would likely give adjustment instructions when the line was already working within its capability. Alternatively, if arbitrary control limits are set too widely, costly and unnecessary ‘giveaway’ is inevitable. In addition, arbitrary control limits encourage over-adjustment, leading to wasted time and operator frustration.
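The alternative to guesswork is to derive the limits from measured capability. As a minimal sketch, assuming hypothetical fill readings and the conventional mean ± 3 sigma construction for individual values (a real system would typically use subgroup-based charts):

```python
import statistics

# Sketch: derive control limits from measured process capability rather than
# choosing them arbitrarily. Here the limits are the conventional
# mean +/- 3 standard deviations of individual fill readings.

def capability_limits(readings, sigmas=3.0):
    """Return (lower, upper) control limits from observed readings."""
    mean = statistics.fmean(readings)
    sd = statistics.stdev(readings)   # sample standard deviation
    return mean - sigmas * sd, mean + sigmas * sd

# Hypothetical fill volumes (ml) from a line whose natural spread is a few ml.
readings = [700.5, 698.2, 701.9, 699.4, 702.3, 697.8, 700.1, 701.2]
lcl, ucl = capability_limits(readings)
print(f"LCL = {lcl:.2f} ml, UCL = {ucl:.2f} ml")
```

With this data the capability-based limits span several millilitres either side of the mean; imposing an arbitrary ±1 ml band on such a line would flag perfectly normal variation as needing adjustment, the over-constraint described above.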
That’s the reason we made our system automatically analyse process capability and set optimum control limits.
Roy Green, Lean Six Sigma Black Belt