The past 10 years have seen technological development move at a phenomenal pace. Everyone claims to have the latest and greatest gizmos to help prospective buyers achieve their pressing objectives.
Caveat Emptor (buyer beware)! Naturally everyone wants the best and the latest proven methodologies to help them reduce wastage and improve efficiencies. We offer words of caution. It is too easy to fall in love with the latest technology and to believe that somehow it will be the long-awaited fix to all problems. When it comes to lasting performance improvement and sustainability, there is usually no quick fix or silver bullet.
The main reason for this is ENGAGEMENT, or rather, the lack of it. Even the best and latest systems will fail to achieve desired outcomes when engagement is lacking. Total commitment and engagement ‘from top floor to shop floor’ is essential; no system can operate at full potential without it.
Readers will have heard much about AI, Industry 4.0, ChatGPT, MIS/MES and much more. Most salespeople make exaggerated claims for their offerings, as they really want users to see their offer as a panacea that will solve all problems, forever, with minimal input, effort, commitment or engagement.
This is why, even now, in these so-called enlightened times, some 70% of computer systems fail completely or fail to deliver fully against expectations.
For this reason, we will always encourage potential users to ‘see’ the big picture but to start small: pick one particular area where investment in technology, commitment and engagement can show a rapid payback. Wherever you start, though, total engagement with the system and the desired outcomes is essential.
One such area is Machine Vision. For too long, too many potential buyers have believed the myth that a camera on the production line will solve all problems: “bung a camera on the line, that’ll fix it!” Well, as many have found to their cost, it isn’t that easy. The concept is simple enough, but the execution is not. Machine vision needs total commitment and engagement from the whole team.
Many will already know of Harford Control for its reliable integrated information management systems, which have helped a wide range of users protect themselves and consumers from risk, whilst saving £millions in the cost of transforming raw materials to finished goods. Not so well known is our involvement in machine vision. For several years we have worked with Visicon Ltd and have, with their expert support, successfully integrated vision systems as a valued part of Harford’s factory solutions.
With the increased speed and complexity of production lines we felt we needed to go further and, long story short, we became partners and shareholders with Visicon.
For this reason, we felt that our article this month should focus on some of the latest developments in Vision technology. Some vision applications are easy and straightforward, but some are far more complex. Without Deep Learning techniques, some applications would be impossible. Today, therefore, we will focus upon a couple of these technologies, together with a case study.
Deep learning uses an artificial ‘neural’ network with multiple layers to recognise patterns in data; in our case, images. A well-trained deep learning system has human-like perception, such that it can pick up defects and variations, or ‘read’ text, in much the same way a person would. Great, but very time-consuming. Hundreds or even thousands of images must be ‘labelled’ and defects ‘outlined’, and the training needs to be done on a high-powered PC. In return, however, the user gets a very, very good inspection system with human-like perception.
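For readers who like to see the nuts and bolts, the core idea of a multi-layer network learning from labelled images can be sketched in a few lines of Python. This is purely illustrative, using synthetic data and a tiny two-layer network, not the software deployed in any real inspection system:

```python
import numpy as np

# Purely illustrative: each "image" is flattened to 16 pixel values, and
# class 1 ("defect") images are made brighter on average. This is synthetic
# stand-in data, not real inspection imagery.
rng = np.random.default_rng(0)
n, d = 400, 16
X = rng.normal(0.0, 1.0, (n, d))
y = (X.mean(axis=1) > 0).astype(float)    # synthetic pass/defect labels
X[y == 1] += 0.5                          # defects brighter on average

# A tiny two-layer network: the hidden layer learns intermediate features,
# mimicking in miniature the stacked layers of a deep network.
W1 = rng.normal(0, 0.3, (d, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.3, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(1000):                     # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)              # hidden-layer activations
    p = sigmoid(h @ W2 + b2).ravel()      # predicted P(defect)
    g = (p - y)[:, None] / n              # cross-entropy output gradient
    gh = (g @ W2.T) * (1 - h ** 2)        # backpropagate through tanh
    W2 -= 1.0 * (h.T @ g); b2 -= 1.0 * g.sum(axis=0)
    W1 -= 1.0 * (X.T @ gh); b1 -= 1.0 * gh.sum(axis=0)

h = np.tanh(X @ W1 + b1)                  # final forward pass
p = sigmoid(h @ W2 + b2).ravel()
accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

A production deep learning system works on the same principle, but with many more layers, far more labelled images, and GPU hardware to carry the training load.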
We were asked by a ready meals manufacturer to install an online vision system to determine the make-up of various dishes. The first line was a Three Fish Pie. The manufacturer was rightly concerned about the production and distribution of products with one or more ingredients missing or present in insufficient quantity.
As expected, the salmon, the first ingredient to be added, was easy to detect and quantify against a white background. The salmon was then covered with the white sauce containing haddock, followed by cod. That was when our problems began.
Initially we thought we could achieve this with a standard vision camera, but we failed as it couldn’t reliably differentiate between the creamy white sauce and the creamy white cod, especially with colour variations in each. We then tried a more sophisticated camera with ‘Deep Learning’ software. This looked far more promising and, as one might expect from the description, it got better the more we used it. We soon reached a point where almost none of the fish dishes were rejected through lack of cod content.
Another, more recent, technology is Edge Learning, which is rather like a ‘Lite’ version of Deep Learning. The training and computation are done on the device itself, which can be a smart camera or even a vision sensor. Edge Learning is optimised to rely on just a few images; often as few as 3 to 10 images of each sample type are sufficient.
Training an Edge Learning classification tool is very much like teaching a young child the difference between vehicles. As a parent, we might tell a child, “that is a car” or “that is a bus”. We don’t explain that a car has four wheels and is designed for the conveyance of 2 to 7 passengers. We simply show them some examples and they soon understand the similarities that group the different vehicles. Consequently, Edge Learning solutions are quicker to deploy, but are best suited to projects where less complex information has to be analysed. Edge Learning is also proving useful in complementing some rule-based solutions, such as fill height for liquids and cap presence, etc.
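The show-a-few-examples idea above can be sketched very simply. The following Python is illustrative only, not Visicon’s actual algorithm: it ‘trains’ from just five labelled examples per class and labels new samples by comparing them with the nearest stored example, using small synthetic feature vectors as stand-ins for camera data:

```python
import numpy as np

# Purely illustrative, not Visicon's actual algorithm: an edge-learning
# style classifier built from a handful of labelled examples, using a
# simple nearest-neighbour comparison on synthetic feature vectors.
rng = np.random.default_rng(1)

def make_sample(kind):
    """Synthetic camera features: 'capped' bottles score high on feature 0."""
    base = {"capped": [1.0, 0.2], "uncapped": [0.1, 0.8]}[kind]
    return np.array(base) + rng.normal(0, 0.05, 2)

# "Training": just 5 labelled examples of each class, in the spirit of
# showing a child "that is a car", "that is a bus".
examples = [(make_sample(kind), kind)
            for kind in ("capped", "uncapped") for _ in range(5)]

def classify(x):
    """Label a new sample by its nearest labelled example (1-NN)."""
    return min(examples, key=lambda ex: np.linalg.norm(ex[0] - x))[1]

print(classify(make_sample("capped")))    # prints "capped"
print(classify(make_sample("uncapped")))  # prints "uncapped"
```

Because so little data and computation are involved, this style of classifier can run on the camera or sensor itself, which is exactly what makes Edge Learning quick to deploy.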
Both solutions have their merits and are powerful tools when used appropriately to help manufacturers optimise production, whilst minimising rejects and customer complaints. Overuse of the term ‘AI’ makes them sound like a page from a sci-fi novel, but they are really just the next logical step in manufacturing technology, which can be quickly and easily implemented to show a rapid return on investment.
These tools have allowed us to revisit applications that we couldn’t previously solve, with payback times that make sense to manufacturers.
The closer association between Harford Control and Visicon that our deeper partnership brings also helps manufacturers move much nearer to full factory floor integration and a ‘one stop shop’.
Of course, our closer partnership still enables clients to purchase parts of an integrated solution and gradually build towards their ultimate big picture. Both companies will continue to trade independently, as before, but where integration is desirable or even where a vision only system is required, after sales support will be even better than before.
Harford Control’s other modules include: Average Quantity Optimisation, Paperless Quality, Automated Label Verification, OEE, Short Interval Control, Energy Management and LIMS (Laboratory Information Management Systems).
Whether a machine vision application or one of our other specialities, at Harford Control and Visicon, we work with you to ensure maximum engagement with the new technology and the fastest possible payback.
You might be wondering why, with all this development going on, Harford Control are not exhibiting at the Smart Factory Expo, part of Manufacturing and Engineering Week. After careful consideration we decided, with stand space prices considerably higher than PPMA’s for just a 2-day event, that we would rather invest the money into even more smart technology together with our new partners. We look forward to seeing you at PPMA in September, or before, if you give us a call on +44 (0) 1225 764461.