When visual inspection and process telemetry come together, they deliver Industry 4.0 smart factory capabilities

By Peter Darragh, EVP Product Engineering

When your job is to prevent your customers and supply chain partners from dealing with defects created by your out-of-control production line, you have to decide whether refining what you do now is good enough, or whether you should solve the problem with entirely new capabilities.

Computer vision system providers have had decades of experience on production lines. Patents for applying machine vision to manufacturing date from the 1960s. The technology of image formation has an even longer history. PLCs also have a long history of use in manufacturing and made MES and SCADA possible.

Perhaps it isn’t realistic to expect that your next wave of refinement will change something that already has decades of development poured into it, and it is time to try something different. Progress may lie not in improving what these technologies do well, but in focusing on what they were never designed to do at all.

Computer vision systems performing visual inspection workloads are primarily responsible for annunciating defects on an HMI or sending a signal to downstream equipment to deal with the problem. But they are rarely capable of explaining how the defect was created.

Process telemetry collected by a Manufacturing Execution System (MES), data loggers, or process historians is designed to quickly explain what was happening when a lot, batch, or serialized part was made. But it is rarely capable of explaining what the computer vision system actually saw.

So as a process engineer, you must make up for each technology’s respective limitations. If you don’t have a smart factory, then you, or someone you can boss around, must:

  1. Collect samples or images of the defects.
  2. Define a classification or grading score to apply to the collection.
  3. Analyze the frequency of the problem.
  4. Match each defect to a specific lot/batch/serial number.
  5. Pull the telemetry for all those defects.
  6. Define a classification or grading score to ‘bin’ the telemetry into events.
  7. Combine those results into a workable dataset for analysis.
  8. See if there is a relationship between how the item was made and the type of defect produced.
  9. Critically review the strength of those relationships to eliminate post-hoc bias and establish true causality.
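The matching and combining in steps 4 through 8 can be sketched in a few lines. This is a minimal illustration, not a real system: the lot IDs, defect classes, and the single ‘over_temp’ process event are all made up.

```python
from collections import Counter

# Hypothetical defect records: (lot_id, defect_class) pairs produced by
# steps 1-4 (collect, classify, count, match to lot).
defects = [
    ("lot-01", "scratch"), ("lot-02", "void"),
    ("lot-03", "scratch"), ("lot-04", "void"),
]

# Hypothetical per-lot telemetry, already 'binned' into process events
# (steps 5-6): here, whether an over-temperature event occurred.
telemetry = {
    "lot-01": {"over_temp": True},
    "lot-02": {"over_temp": False},
    "lot-03": {"over_temp": True},
    "lot-04": {"over_temp": False},
}

# Step 7: combine into one dataset; step 8: cross-tabulate defect class
# against process event to look for a relationship.
crosstab = Counter(
    (defect_class, telemetry[lot]["over_temp"])
    for lot, defect_class in defects
)

for (defect_class, over_temp), n in sorted(crosstab.items()):
    print(f"{defect_class:8s} over_temp={over_temp}: {n}")
```

Step 9 is the hard part that a sketch like this cannot do for you: a lopsided cross-tabulation is a lead, not proof, and it still needs a proper statistical review.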

The above steps are necessary to deliver the insight and vision upon which to decide and act, as described in John Boyd’s work on Organic Design for Command and Control:

“Why insight and vision? Without insight and vision there can be no orientation to deal with both present and future.”

Either after sufficient personal experience or sage mentoring, you know how certain defects are created and, if the universe has smiled upon you, you know how to prevent them. But knowing isn’t the same as doing. You are only halfway along your OODA loop.

Now, John Boyd was a colonel in the Air Force, and OODA has been successfully applied in military strategy. But Chet Richards’s book “Certain to Win” points out the similarities between OODA principles and the Toyota Production System, which is solidly in the realm of manufacturing.

And others, such as Nigel Duffy when he was Innovation AI Leader at EY, saw OODA’s applicability in business workflows.

Whether you like it or not, as a process engineer your job is a series of never-ending OODA loops, and dealing with the present and future is the second half of the loop: the DA, the Deciding and Acting part.

In manufacturing, an automated OODA loop lives in an Industry 4.0 smart factory ‘where human beings, machines and resources communicate with each other as naturally as in a social network.’

When you blend process telemetry and visual inspection together, you can have your own 4.0 smart factory OODA loop. For one of our customers, that loop is one where Spyglass Visual Inspection (SVI) and Spyglass Connected Factory (SCF) work together to:

(O)bserve: a deep-learning model reviews the images and recognizes a defect.

(O)rient: raw telemetry from the machines is converted into out-of-control (OOC) events that are matched to the type of defect the deep-learning model identified.

(D)ecide: based on the frequency of the defect and the scale of the out-of-control process events, rules or other AI trigger a response.

(A)ct: a situation report with suggested remedial actions is sent via email or text to the process leads in the cell identified as the cause of the defect so they can take action.
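As a rough sketch (not the actual Spyglass implementation), the four stages above can be wired together as plain functions. The model, the OOC event fields, and the threshold here are all hypothetical stand-ins.

```python
def observe(image):
    """Observe: stand-in for a deep-learning model classifying the image."""
    # A real system would run model inference here.
    return "scratch" if "scratch" in image else None

def orient(defect, telemetry):
    """Orient: match out-of-control (OOC) events to the identified defect."""
    return [e for e in telemetry if e["ooc"] and e["linked_defect"] == defect]

def decide(defect, events, threshold=1):
    """Decide: trigger a response when enough OOC events back the defect."""
    return defect is not None and len(events) >= threshold

def act(defect, events):
    """Act: build a situation report (stand-in for the email/text alert)."""
    cells = sorted({e["cell"] for e in events})
    return f"Defect '{defect}' linked to OOC events in cells: {cells}"

# Hypothetical telemetry already converted into events.
telemetry = [
    {"ooc": True, "linked_defect": "scratch", "cell": "press-3"},
    {"ooc": False, "linked_defect": "scratch", "cell": "press-1"},
]

defect = observe("frame_0042_scratch.png")
events = orient(defect, telemetry)
if decide(defect, events):
    print(act(defect, events))
```

The point of the structure is that each stage has one job, so any stage can be upgraded, say, swapping the decision rules for another model, without disturbing the rest of the loop.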

With IoT, AI, and cloud now commonplace, and the first Industry 4.0 documents already consigned to the archives, why do these loops still look so unusual? With a younger workforce rightly expecting social networks to be intrinsic to their workplace experience, why are 4.0 smart factory OODA loops not commonplace? Perhaps the answer lies in the words of Matthew Stockwin, Manufacturing Director at Coats.

"AI will come and digitisation is an unstoppable trend, but my view is that its penetration into the deep bowels of manufacturing will take more time than we think."

Stockwin was quoted in 2019 and attributes the problem to managers failing to learn and adapt and to put themselves outside their comfort zone. They fail to acknowledge that connectivity is where you start a journey that ‘ends in the prediction power of systems to see problems before they occur’, and therefore they cannot advocate for the capital expense of connectivity for connectivity’s sake.

Without a desire to learn and adapt you cannot create the new capabilities needed to remove defects.

But if you are ready to learn and adapt to the new capabilities of combining visual inspection and process telemetry, and to have your very own 4.0 smart factory OODA loops, then we are ready to help you.

…without the whole bowel penetration thing.

The Origins of Predictive Maintenance

By Peter Darragh, EVP Product Engineering

PdM Begins with Condition-Based Maintenance

The origins of predictive maintenance (PdM) begin with condition-based maintenance, which many attribute to CH Waddington who, along with two Nobel laureates, four Fellows of the Royal Society, and a Fellow of the National Academy of Sciences Australia, had no particular expertise in maintenance of any kind or in how to improve equipment availability. In addition to not being mechanical engineers,

“None of us felt committed to any special expertise in Queuing Theory, or Games Theory or Decision Theory or what have you, so we were ready to stick our noses into what everyone told us did not concern us and to follow wherever that led.”  — CH Waddington, author of OR in World War 2 (1973).

And thankfully they stuck their noses into the “intolerably bread-and-butter affair” of organizing maintenance for Royal Air Force Coastal Command 502 Squadron. After a five-month trial applying their recommendations, aircraft availability rose sharply:

“…the squadron average exceeded the previous maximum by 61% and exceeded the best average of any squadron over a similar period by 79%.”

Before the recommendations were implemented, aircraft were inspected based on planned maintenance schedules as determined by the manufacturer. Some steps required disassembling parts of the aircraft so they could be inspected. One of the most unexpected findings was:

“The rate of failure or repair is highest just after an inspection and thereafter falls, becoming constant after about 40-50 flying hours.”

And they concluded,

“But the fact is that the inspection tends to increase breakdowns, and this can only be because it is doing positive harm by disturbing a reasonably satisfactory state of affairs.”

Predictive Maintenance Thwarts Waddington Effect

The planned preventative maintenance, whose purpose was to prevent unplanned failures, was actually creating unplanned failures. The behavior was termed “the Waddington effect.” It was as if the old parts were jealous of the new parts and chose to ruin it for everyone.

Their advice was to change the maintenance process to be in tune with the actual condition of the equipment and its actual usage patterns. It was the beginning of condition-based maintenance. It was also the beginning of using economic and probabilistic information to determine inspection cycle strategies, which became the foundation for predictive maintenance.
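One way to see how probabilistic information feeds an inspection-cycle strategy: the finding quoted earlier, that failures are most frequent just after an inspection and then settle to a constant rate, is the signature of a decreasing hazard, which a Weibull hazard with shape below one reproduces. The parameters below are made up purely for illustration.

```python
def weibull_hazard(t, shape, scale):
    """Weibull hazard function h(t) = (k / s) * (t / s) ** (k - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

# shape < 1 gives a decreasing hazard: failures cluster just after
# maintenance (the Waddington effect), then settle toward a low rate.
shape, scale = 0.5, 50.0  # illustrative values, not fitted to any data

for t in (1, 10, 50):
    print(f"h({t:>2}) = {weibull_hazard(t, shape, scale):.4f}")
```

Fitting a curve like this to actual failure records, rather than trusting a fixed schedule, is essentially what moving from planned to condition-based and predictive maintenance means.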

It took a scientist, not an engineer, using observational data and a scientific approach to really improve equipment availability by changing how maintenance was planned. Marshal of the Royal Air Force Sir John C Slessor GCB, DSO commented,

“It never would have occurred to me that what the RAF soon came to call a ‘Boffin,’ a gentleman in grey flannel bags whose occupation in life had previously been something markedly unmilitary such as Biology or Physiology, would be able to teach us a great deal about our business. Yet so it was.”

So, if you find yourself fixing the same things on the same equipment over and over again, you may be stuck in The Waddington Effect and need to move to a maintenance approach that is more condition-based and predictive. Furthermore, you may need to enlist the help of a modern-day “Boffin” who can bring modeling and probabilistic maintenance methods to your organization.

Mariner Standardizes on Azure Stack Edge for Spyglass Products

By Phil Morris, CEO Mariner

I’m pleased to announce that Mariner has selected Microsoft’s Azure Stack Edge as our preferred edge intelligence platform. Edge intelligence is an essential component for manufacturers deploying digital transformation or Industry 4.0 technologies. Edge intelligence plays a vital role in a performant, secure, and available architecture for both our Spyglass Connected Factory and Spyglass Visual Inspection products.

The primary benefits of an edge intelligence server are to:

  • Reduce latency. For example, Spyglass Visual Inspection depends upon a vision model trained to detect defects in products. It does this in real time by capturing camera images of products at various stages of production, then asking the vision model to make a call: pass or fail the quality inspection test. This pass/fail result is then passed back to the production line for proper disposition of the product. Usually, these decisions must be made in a very short period of time. If the industrial vision system depended upon access to a model in the cloud, the round trip could be an unacceptable time loss. An edge intelligence server provides these services at the factory, eliminating the latency of the Internet.
  • Permit continued operations when the Internet is down. Many of our customers have factories in far-flung areas with a ready workforce. In some of these factories, the Internet connection has not been considered a high priority. Should the Internet go down, the Edge Server is capable of continuing operations for a very long while. When connectivity is restored, the Edge Server synchronizes with the cloud as needed without loss of service.
  • Reduce cloud compute and storage costs. The Edge Server can capture messages and telemetry as frequently as they are generated. Since cloud traffic is often priced per message and storage is priced per transaction, the Edge Server can summarize messages into a unit of time that still supports effective time-series analysis. For example, if a device generates messages every tenth of a second, the Edge Server can respond to emergent conditions in real time. It can also summarize those transactions, distilling them down to the min, max, and mean of the values in a minute’s worth of data for submission to history, significantly reducing the cloud services and storage required without loss of fidelity.
  • Permit enterprise scalability. Edge servers are an integral component of enterprise scalability. The edge server is where the AI models are stored and executed. But AI models are not static; they need to be refreshed to deal with model drift, new products, or changing production configurations. An organization’s AI models can be stored in the cloud and “pushed” down as updated models become available.
Spyglass Visual Inspection applies the advantages of a hybrid cloud/edge deployment to centralize model management in the cloud while simultaneously providing local control of real-time operations at the edge.

In the cloud:

      • You maintain your labeled image libraries. The centralized store allows you to train on images gathered from any site and line, pooling examples of rare defects and improving the model’s ability to recognize them.
      • Label new images that are flagged for investigation adding them to the collection.
      • Re-train and evaluate models without burdening the edge workloads.
      • Maintain histories of model versions, their over-the-air deployments and on-site performance.

At the edge:

      • Defects are only a small percentage of what you make, so the edge only sends copies of interesting images to add to the cloud collection, conserving bandwidth and balancing out the libraries.
      • Receive new models over-the-air, but only introduce them using policies specific to each edge device. Process owners decide when and how to introduce new models, using partial cut-overs to prove efficacy, so they always know which versions are in use and choose when they are introduced.
      • Receive over-the-air software updates that are only installed during an approved outage. Process owners are in full control of upgrades to Spyglass Visual Inspection.
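The cost-reduction benefit described above, rolling high-frequency telemetry up to per-minute min/max/mean before it leaves the edge, can be sketched as follows. The field names and the tenth-of-a-second message rate are illustrative only, not the actual Spyglass message format.

```python
from collections import defaultdict

def summarize(messages):
    """Roll per-reading telemetry up to per-minute min/max/mean records."""
    buckets = defaultdict(list)
    for msg in messages:
        # Group readings by the minute they arrived in.
        buckets[int(msg["t"] // 60)].append(msg["value"])
    return {
        minute: {
            "min": min(vals),
            "max": max(vals),
            "mean": sum(vals) / len(vals),
        }
        for minute, vals in buckets.items()
    }

# A hypothetical device emitting a reading every tenth of a second:
# two minutes of traffic is 1,200 messages, which collapse to two
# summary records for submission to history.
messages = [{"t": i / 10, "value": 20.0 + (i % 7)} for i in range(1200)]
summary = summarize(messages)
print(len(messages), "->", len(summary), "records")
```

The edge can still react to each raw reading in real time; only the distilled summaries make the per-message, per-transaction trip to the cloud.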

I hope this primer on edge intelligence has been helpful. If you’d like a deeper dive, here is a video of Microsoft’s David Armour, Principal PM Manager for Azure Stack, and Mariner’s Peter Darragh, VP of Product Engineering, doing a deep dive on Microsoft’s Azure Stack Edge.