Article on Machine Learning

By Erica Stevens

A Guaranteed Model for Machine Learning

On the factory floor, wasted resources stack up fast for every real or imagined defect. When a good part is mistakenly labeled flawed, there’s lost time, efficiency, and machine effort. And when a defective part goes unnoticed and becomes the end customer’s problem? The potential consequences are even more severe.


See Erica's interview with Mariner's EVP of Product Development, Peter Darragh, on our product Spyglass Visual Inspection: now Microsoft and Intel marketplace ready!

Vision Systems Design Webinar: Leveraging Deep Learning and AI Applications


Please join Mariner as we present the following Vision Systems Design webinar:

Leveraging Deep Learning and AI Applications in Manufacturing
Tuesday, December 15, 2020
12:00PM – 1:00PM EST


Manufacturers like you are successfully using artificial intelligence and deep learning in their operations today.

While these technologies help production processes reach new heights, one must carefully evaluate options before making any decisions.

Join us on December 15th to hear real-world case studies about how a chemical factory, a glass factory, and a fabric factory reduced their costs and increased their quality, and the role AI and Deep Learning played in those successes.

The webcast will cover Cloud limitations, the latest on the edge, hybrid edge/cloud setups for Industry 4.0, and how Intel and Microsoft technologies can help make it all come together. The webcast will conclude with a Q&A.

Sign up now to keep yourself on the cutting edge of machine vision technology.

Mariner – Best Place to Work Winner 2020

Every year, the Charlotte Business Journal sponsors a “Best Place to Work” contest.  Mariner reached the #2 slot in the Small Business Category for 2020!  All employees are surveyed by an outside company and that ranking is provided to CBJ.  Faced with unprecedented challenges in 2020, we still expanded our support for work-life balance and continue to have happy employees.

It’s great to be a Mariner!

Mariner selected as a finalist for 2020 Blue Diamond Award

10/6/2020 - Mariner was named a finalist in the "Business Impact - Analytics, AI & Big Data" category for 2020.  Although we didn't win, we are honored to be selected.  Congratulations to Quaero on a job well done!


CATC announces Charlotte’s
2020 Blue Diamond Technology Awards Finalists

Charlotte, N.C. – Charlotte Area Technology Collaborative (CATC) announces the nine award category finalists representing exceptional technology innovation and talent in the Greater Charlotte region.

Each category of nominations was reviewed and voted upon by a unique panel of judges made up of industry technology and business executives.

The winners will be announced at the virtual Blue Diamond Awards Celebration the morning of October 6, 2020, 7:30 – 8:30 a.m.

Business Impact – SMB
Pet Screening

Business Impact – Education, Government
Charlotte Mecklenburg Schools
Mecklenburg County
UNC Charlotte

Business Impact – Analytics, AI & Big Data
Sealed Air

Business Impact – Corporate
Curvature – CRM Sales Automation
Curvature – Forward Stocking Locations

Cool Innovation
ChromaSol International
Lucid Drone Technologies

Community Outreach

Human Capital
Genesis 10
Goodwill University
Innovate Charlotte

IT Entrepreneur
Bryan Delaney, Skookum
Brian Kelly, CloudGenera

Student Innovator
Adonis Abdullah, UNC Charlotte CCI
Fidel Henriquez, UNC Charlotte CCI



The CATC, a 501(c)(3) organization, unites businesses, education, economic development and community organizations to inspire, grow and advance an inclusive technology talent pipeline.  Proceeds from the annual Blue Diamond Awards Celebration support programs for middle school, high school, women in tech, and community collaboration.

IndustryWeek Webinar: Leveraging Deep Learning and AI Applications in Manufacturing

Please join Mariner, Intel and Microsoft as we present the following IndustryWeek webinar:

Leveraging Deep Learning and AI Applications in Manufacturing
Tuesday, October 13, 2020
11:00AM – 12:00PM EDT 


Do AI and Deep Learning belong on the factory floor or are they just for those with their heads in the clouds?  Is being "on the edge" actually a good thing when you want to improve quality and reduce costs? 

As a manufacturer seeking to improve your production processes, you must consider these questions, and more. You face an array of technologies that promise to help you reach your goals, and you must carefully evaluate your options before making any decisions.

The fact is that manufacturers like you are successfully using AI and Deep Learning in their operations. 

Come join us to hear real-world case studies about how a chemical factory, a glass factory and a fabric factory reduced their costs and increased their quality, and the role AI and Deep Learning played in those successes. 

In addition, we will cover:

  • Why the Cloud has limitations for AI and Deep Learning on the factory floor.
  • Why on-premises is fashionable again; now they call it "edge computing."
  • Why factory-floor AI and Deep Learning need a hybrid edge/cloud architecture to truly deliver Industry 4.0 Smart Factory capabilities.


NOTICE TO ALL PUBLIC SECTOR OR STATE-OWNED ENTITY EMPLOYEES – Federal [including Military], State, Local and Public Education

This is a Microsoft partner event. Should items of value (e.g. food, promotional items) be disbursed to event participants, these items will be available at no charge to attendees. Please check with your ethics policies before accepting items of value.

Peak Performance Symposium - 9/25/2020

The Peak Performance Symposium is a yearly event held for manufacturers, by manufacturers, where leading industry professionals speak on a variety of topics relevant in today’s advanced manufacturing environment. There’s a jam-packed agenda, and Mariner is proud and excited to be one of the sponsors and presenters.  Please be sure to join us as we present with our customer Milliken on “How Milliken leverages IoT and AI to Improve Asset Reliability” from 10:00a – 10:30a EDT in virtual Breakout Room 2.

Hope to see you there!



When visual inspection and process telemetry come together, they deliver Industry 4.0 smart factory capabilities

By Peter Darragh, EVP Product Engineering

When your job is to prevent your customers and supply chain partners from dealing with defects created by your out-of-control production line, you have to decide whether refining what you are doing now is good enough, or whether you should solve the problem with entirely new capabilities.

Computer vision system providers have decades of experience on production lines. Patents for applying machine vision to manufacturing date from the 1960s, and the technology of image formation has an even longer history. PLCs also have a long history of use in manufacturing and made MES and SCADA possible.

Perhaps it isn’t realistic to expect your next wave of refinement to change something that already has decades of development poured into it, and it is time to try something different. Progress may lie not in improving what these technologies already do well, but in focusing on what they were never designed to do at all.

Computer vision systems performing visual inspection workloads are primarily responsible for annunciating defects on an HMI or sending a signal to downstream equipment to deal with the problem. But they are rarely capable of explaining how the defect was created.

Process telemetry collected by a Manufacturing Execution System (MES), data loggers, or process historians is designed to quickly explain what was happening when a lot, batch, or serialized part was made. But it is rarely capable of explaining what the computer vision system actually saw.

So as a process engineer, you must make up for each technology’s limitations. If you don’t have a smart factory, then you, or someone you can boss around, must:

  1. Collect samples, or images of examples of the defects
  2. Define a classification or grading score to apply to the collection
  3. Analyze the frequency of the problem
  4. Match the defect to a specific lot/batch/serial number
  5. Pull the telemetry for all those defects
  6. Define a classification or grading score to ‘bin’ the data into an event.
  7. Combine those results into a workable dataset for analysis
  8. See if there is a relationship between how the item was made and the type of defect produced.
  9. Critically review the strength of those relationships to eliminate post-hoc bias and prove true causality.
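The nine steps above amount to a small data-wrangling pipeline. Here is a minimal sketch in Python with pandas; all lot IDs, defect types, and telemetry columns are invented for illustration, not taken from any real line:

```python
import pandas as pd

# Steps 1-2 (stand-in): defect records, already collected and classified
defects = pd.DataFrame({
    "lot_id":      [101, 102, 103, 104, 105, 106],
    "defect_type": ["scratch", "scratch", "bubble", "bubble", "scratch", "bubble"],
})

# Step 3: analyze the frequency of each problem
freq = defects["defect_type"].value_counts()

# Steps 4-5: pull telemetry keyed by the same lot numbers
telemetry = pd.DataFrame({
    "lot_id":   [101, 102, 103, 104, 105, 106],
    "temp_c":   [180, 182, 205, 208, 179, 210],
    "pressure": [2.1, 2.0, 2.2, 2.1, 2.0, 2.3],
})

# Step 6: 'bin' the raw readings into a discrete process event
telemetry["hot_run"] = telemetry["temp_c"] > 200

# Step 7: combine into one workable dataset for analysis
combined = defects.merge(telemetry, on="lot_id", how="left")

# Step 8: is the defect type related to how the lot was made?
by_type = combined.groupby("defect_type")["hot_run"].mean()

# Step 9: in this toy data every bubble came from a hot run; in practice
# you would follow up with a proper statistical test before claiming causality
print(by_type)
```

The real work, of course, is in steps 1, 2, and 9: labeling the defects well and resisting post-hoc bias when the correlations look tempting.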

The above steps are necessary to deliver the insight and vision upon which to decide and act, as described in John Boyd’s work on Organic Design for Command and Control:

“Why insight and Vision? Without insight and vision there can be no orientation to deal with both present and future.”

After sufficient personal experience, or sage mentoring, you know how certain defects are created and, if the universe has smiled upon you, know how to prevent them.  But knowing isn’t the same as doing. You are only halfway along your OODA loop.

Now, John Boyd was a colonel in the Air Force and OODA has been successfully applied in military strategy, but Chet Richards’ book “Certain to Win” points out the similarities of OODA principles to the Toyota Production System (TPS), and that is solidly in the realm of manufacturing.

And others, such as Nigel Duffy when he was AI Innovation Leader at EY, saw OODA’s applicability in business workflows.

Whether you like it or not, as a process engineer your job is a series of never-ending OODA loops, and dealing with the present and future is the second half of the loop: the DA, the Deciding and Acting part.

In manufacturing, an automated OODA loop lives in an Industry 4.0 smart factory ‘where human beings, machines and resources communicate with each other as naturally as in a social network.’

When you blend process telemetry and visual inspection together, you can have your own 4.0 smart factory OODA loop. For one of our customers, the smart factory OODA loop is one where Spyglass Visual Inspection (SVI) and Spyglass Connected Factory (SCF) work together to:

(O)bserve: using a deep-learning model that reviews the images and recognizes a defect.

(O)rient: by converting the raw telemetry from the machines into out-of-control (OOC) events that are matched to the type of defect the deep-learning model identified.

(D)ecide: based on the frequency of the defect and the scale of the out-of-control process events, using rules or other AI to trigger a response.

(A)ct: with a situation report with suggested remedial actions sent via email or text to the process leads in the cell identified as the cause of the defect so they can take action.
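As a sketch, the four stages above can be wired together in a few lines of Python. Everything here, the Defect type, the event fields, the decision threshold, is a hypothetical placeholder, not the Spyglass API:

```python
from dataclasses import dataclass

@dataclass
class Defect:
    kind: str
    confidence: float

def orient(defect, telemetry):
    """Match out-of-control (OOC) events to the identified defect type."""
    return [e for e in telemetry if e["ooc"] and e["defect_kind"] == defect.kind]

def decide(events, threshold=3):
    """Trigger a response when matching OOC events are frequent enough."""
    return len(events) >= threshold

def act(defect, events):
    """Compose the situation report for the cell that caused the defect."""
    cells = sorted({e["cell"] for e in events})
    return f"Defect '{defect.kind}' traced to {cells}; suggested action: review setpoints."

# Observe: in production this would be the deep-learning model's output
defect = Defect(kind="streak", confidence=0.97)

telemetry = [
    {"ooc": True,  "defect_kind": "streak", "cell": "coater-2"},
    {"ooc": True,  "defect_kind": "streak", "cell": "coater-2"},
    {"ooc": True,  "defect_kind": "streak", "cell": "coater-2"},
    {"ooc": False, "defect_kind": "streak", "cell": "coater-1"},
]

events = orient(defect, telemetry)
report = act(defect, events) if decide(events) else None
```

The value of automating the loop is that Orient and Decide run on every single part, not just the ones a process engineer has time to investigate.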

With IoT, AI and cloud now commonplace, the first Industry 4.0 documents already consigned to the archives, and a younger workforce rightly expecting social networks to be intrinsic to their workplace experience, why are these 4.0 smart factory OODA loops still so unusual? Perhaps the answer lies in the words of Matthew Stockwin, Manufacturing Director at Coats.

"AI will come and digitisation is an unstoppable trend, but my view is that its penetration into the deep bowels of manufacturing will take more time than we think."

Stockwin, quoted in 2019, attributes the problem to managers failing to learn, adapt, and put themselves outside their comfort zone. They fail to acknowledge that connectivity is where you start a journey that ‘ends in the prediction power of systems to see problems before they occur’, and therefore they cannot advocate for the capital expense of connectivity for connectivity’s sake.

Without a desire to learn and adapt you cannot create the new capabilities needed to remove defects.

But if you are ready to learn and adapt to the new capabilities of combining visual inspection and process telemetry, and have your very own 4.0 smart factory OODA loops, then we are ready to help you.

…without the whole bowel penetration thing.

The Origins of Predictive Maintenance

By Peter Darragh, EVP Product Engineering

PdM Begins with Condition-Based Maintenance

The origins of predictive maintenance (PdM) begin with condition-based maintenance, which many attribute to C.H. Waddington. Waddington’s group, which included two Nobel laureates, four Fellows of the Royal Society and a Fellow of the National Academy of Sciences Australia, had no particular expertise in maintenance of any kind, or in how to improve equipment availability. In addition to not being mechanical engineers,

“None of us felt committed to any special expertise in Queuing Theory, or Games Theory or Decision Theory or what have you, so we were ready to stick our noses into what everyone told us did not concern us and to follow wherever that led.”  — C.H. Waddington, author of O.R. in World War 2 (1973).

And thankfully they stuck their noses into the “intolerably bread-and-butter affair” of organizing maintenance for Royal Air Force Coastal Command 502 Squadron. After a five-month trial applying their recommendations, aircraft availability in

“…the squadron average exceeded the previous maximum by 61% and exceeded the best average of any squadron over a similar period by 79%.”

Before the recommendations were implemented, aircraft were inspected based on planned maintenance schedules as determined by the manufacturer. Some steps required disassembling parts of the aircraft so they could be inspected. One of the most unexpected findings was

“The rate of failure or repair is highest just after an inspection and thereafter falls, becoming constant after about 40-50 flying hours.”

And they concluded,

“But the fact is that the inspection tends to increase breakdowns, and this can only be because it is doing positive harm by disturbing a reasonably satisfactory state of affairs.”

Predictive Maintenance Thwarts the Waddington Effect

The planned preventative maintenance, whose purpose was to prevent unplanned failures, was actually creating unplanned failures. The behavior was termed “the Waddington Effect.” It was as if the old parts were jealous of the new parts and chose to ruin it for everyone.

Their advice was to change the maintenance process to be in tune with the actual condition of the equipment and its actual usage patterns. It was the beginning of condition-based maintenance. It was also the beginning of using economic and probabilistic information to determine inspection cycle strategies, which became the foundation for predictive maintenance.
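To illustrate what “economic and probabilistic information” can mean in practice, here is a toy calculation: every inspection costs money, but so does every hour a failure runs undetected, and the interval that minimizes expected cost per hour balances the two. The dollar figures and the exponential failure model are illustrative assumptions, not anything from Waddington’s study:

```python
import math

INSPECTION_COST = 200.0   # $ per inspection (illustrative)
DOWNTIME_COST = 50.0      # $ per hour a failure runs undetected (illustrative)
MTBF_HOURS = 400.0        # assumed mean time between failures

def cost_rate(interval_h):
    """Expected cost per hour when inspecting every interval_h hours,
    assuming exponentially distributed failure times."""
    # Expected time a failure sits undetected within one inspection cycle:
    # E[(T - X)+] = T - MTBF * (1 - exp(-T / MTBF)) for exponential X
    undetected = interval_h - MTBF_HOURS * (1 - math.exp(-interval_h / MTBF_HOURS))
    return (INSPECTION_COST + DOWNTIME_COST * undetected) / interval_h

# Search a grid of candidate intervals for the cheapest one
best = min(range(10, 500, 10), key=cost_rate)
print(f"best inspection interval: every {best} h "
      f"(expected cost {cost_rate(best):.2f} $/h)")
```

Inspecting too often wastes inspection cost (and, per the Waddington Effect, may itself cause failures); inspecting too rarely lets failures run undetected. The sweet spot comes from the probability model, not from the manufacturer’s fixed schedule.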

It took a scientist, not an engineer, using observational data and a scientific approach, to really improve equipment availability by changing how maintenance was planned. Marshal of the Royal Air Force Sir John C. Slessor, GCB, DSO, commented:

“It never would have occurred to me that what the RAF soon came to call a ‘Boffin,’ a gentleman in grey flannel bags whose occupation in life had previously been something markedly unmilitary such as Biology or Physiology, would be able to teach us a great deal about our business. Yet so it was.”

So, if you find yourself fixing the same things on the same equipment over and over again, you may be stuck in The Waddington Effect and need to move to a maintenance approach that is more condition-based and predictive. Furthermore, you may need to enlist the help of a modern-day “Boffin” who can bring modeling and probabilistic maintenance methods to your organization.

Mariner Standardizes on Azure Stack Edge for Spyglass Products

By Phil Morris, CEO Mariner

I’m pleased to announce that Mariner has selected Microsoft’s Azure Stack Edge as our preferred edge intelligence platform. Edge intelligence is an essential component for manufacturers deploying digital transformation or Industry 4.0 technologies. Edge intelligence plays a vital role in a performant, secure, and available architecture for both our Spyglass Connected Factory and Spyglass Visual Inspection products.

The primary benefits of the edge intelligence server are to:

  • Reduce latency. For example, Spyglass Visual Inspection depends upon a vision model trained to detect defects in products. It does this in real time by capturing images of products from cameras at various stages of production, then asking the vision model to make a call: pass or fail the quality inspection test. This pass/fail result is then passed back to the production line for proper disposition of the product. Usually, these decisions must be made in a very short period of time. If the industrial vision system depended upon access to the model in the cloud, the round trip could be an unacceptable time loss. An edge intelligence server provides these services at the factory, eliminating the latency of the Internet.
  • Permit continued operations when the Internet is down. Many of our customers have factories in far-flung areas with a ready workforce. In some of these factories, the Internet connection has not been considered a high priority. Should the Internet go down, the Edge Server is capable of continuing operations for a very long while. When connectivity is restored, the Edge Server synchronizes with the cloud as needed, without loss of service.
  • Reduce cloud compute and storage costs. The Edge Server can capture messages and telemetry as frequently as they are generated. Since cloud traffic is often priced per message and storage is priced per transaction, the Edge Server can summarize messages to a unit of time that still supports effective time series analysis. For example, if you have a device generating messages every tenth of a second, the Edge Server can respond to emergent conditions in real time. It can also summarize those transactions, distilling them down to the min, max, and mean of the values in a minute’s worth of data for submission to history, significantly reducing the cloud services and storage required without loss of fidelity.
  • Enterprise scalability. Edge servers are an integral component of enterprise scalability. The edge server is where the AI models are stored and executed. But AI models are not static and need to be refreshed for reasons of model drift, new products, or changing production configurations. An organization’s AI models can be stored in the cloud and “pushed” down as updated models are made available.
Spyglass Visual Inspection applies the advantages of a hybrid edge/cloud deployment to centralize model management in the cloud and simultaneously provide local control of real-time operations at the edge.
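The per-minute distillation described in the cost bullet above can be sketched with pandas. The readings and column names are simulated for illustration, not actual Spyglass behavior:

```python
import pandas as pd

# Simulate two minutes of telemetry arriving ten times per second
idx = pd.date_range("2020-10-13 11:00", periods=1200, freq="100ms")
raw = pd.DataFrame(
    {"temp_c": [180.0 + (i % 50) * 0.1 for i in range(1200)]}, index=idx
)

# Distill each minute down to min, max, and mean before sending to the cloud:
# 600 messages per minute become a single summary row
summary = raw["temp_c"].resample("1min").agg(["min", "max", "mean"])

print(f"{len(raw)} raw readings -> {len(summary)} summary rows")
```

The edge still sees every tenth-of-a-second reading and can react to it; only the cloud’s copy is condensed, which is why the summary loses billing cost but not analytical fidelity.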

In the cloud:

      • You maintain your labeled image libraries. The centralized store allows you to train on images gathered from any site and line, pooling the examples of rare defects and improving the model’s ability to recognize them.
      • Label new images that are flagged for investigation adding them to the collection.
      • Re-train and evaluate models without burdening the edge workloads.
      • Maintain histories of model versions, their over-the-air deployments and on-site performance.

At the edge:

      • Only a small percentage of parts are defective, so the edge sends copies of just the interesting images to add to the cloud collection, thus conserving bandwidth and balancing out the libraries.
      • Receive new models over-the-air, but only introduce them using policies specific to each edge device. The process owners decide when and how to introduce new models, with partial cut-overs to prove efficacy, so they always know which versions are in use and choose when they are introduced.
      • Receive over-the-air software updates that are only installed during an approved outage. Process owners are in full control of upgrades to Spyglass Visual Inspection.
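A partial cut-over policy like the one described above can be sketched in a few lines: route a configurable fraction of inspections to the candidate model while the proven model keeps handling the rest. All names here (ModelRollout, the version strings) are illustrative placeholders, not the Spyglass API:

```python
import random

class ModelRollout:
    """Route a fraction of inspections to a candidate model (canary style)."""

    def __init__(self, current, candidate, canary_fraction=0.1, seed=None):
        self.current = current            # proven model version
        self.candidate = candidate        # new version under evaluation
        self.canary_fraction = canary_fraction
        self.rng = random.Random(seed)

    def choose(self):
        """Pick which model version inspects the next part."""
        if self.candidate and self.rng.random() < self.canary_fraction:
            return self.candidate
        return self.current

    def promote(self):
        """Process owner approves: the candidate becomes the current model."""
        self.current, self.candidate = self.candidate, None

# Prove efficacy on ~20% of parts before the process owner promotes v8
rollout = ModelRollout(current="v7", candidate="v8", canary_fraction=0.2, seed=42)
picks = [rollout.choose() for _ in range(1000)]
share_v8 = picks.count("v8") / len(picks)
```

The key design point matches the text: the cut-over fraction and the promotion step stay under the process owner’s control, so a pushed model never silently replaces the one running the line.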

I hope this primer on edge intelligence has been helpful. If you’d like a deeper dive, here is a video of Microsoft’s David Armour, Principal PM Manager, Azure Stack, and Mariner’s Peter Darragh, EVP of Product Engineering, doing a deep dive on Microsoft’s Azure Stack Edge.