Mariner Awarded 2020 Microsoft Partner of the Year for IoT

We are happy to announce that Mariner was recognized at the all-new, all-digital Microsoft Inspire event as the 2020 Microsoft Partner of the Year for Internet of Things.


“We are honored to have the Mariner team’s hard work recognized by Microsoft for the innovative use of IoT and AI technology to bring significant business value to our mutual customers.” – Philip Morris, CEO

The Microsoft Partner of the Year Awards program recognizes partners who have delivered exemplary solutions built on Microsoft technologies. See the full list of this year's winners and finalists. This award is a testament to our competencies and innovations in leveraging IoT and AI for our manufacturing customers. Mariner's Spyglass Visual Inspection and Microsoft Azure IoT services bring reduced false rejects, improved productivity, and quality analytics to manufacturers.

In addition to highlighting our achievements in solving false rejects in traditional machine vision applications, the award also celebrates our deep commitment to:

  • Maintaining a growth mindset and a culture of diversity and inclusion.
  • Listening to customers and understanding their objectives and needs.
  • Creating transformative solutions and services that solve real business problems.
  • Aligning strategically with Microsoft and the vast partner ecosystem.
  • Helping our customers achieve successful business outcomes.

Our work together has contributed to the achievements recognized by this award. Thank you and we look forward to our continued success.

LEARN MORE ABOUT Spyglass Visual Inspection


Deep Learning vs Traditional Machine Vision

Automated visual inspection systems give manufacturers the ability to monitor and respond to production issues in real time, reducing costs and improving quality. Today, most visual inspection systems consist of some type of image capture hardware, and an integrated or discrete computer equipped with specialized software to process images. At the heart of this software is a computer vision algorithm that takes in the array of numbers that represents the image of the product, performs some mathematical operations on these numbers, and computes a final result. For example, the computer vision algorithm may determine that an entire product is defective, detect the type and location of a defect on a product, check for the presence of a certain subcomponent, or measure the overall quality of finish.

In traditional machine vision systems, this computer vision algorithm is broken into two steps. In the first step, typically called feature extraction, a set of mathematical operations is performed on the raw pixel values of the image. For example, if searching for defects in an image of a product, the feature extraction step may consist of sliding a small window across the entire image and, for each window location, computing the contrast (the difference between the brightest and darkest pixel) within the window. This feature could be useful in making a final determination, because windows with higher contrast may be more likely to contain defects.

In the second and final processing step, the features computed in the first step are combined to make a final decision about the image. This decisioning step is often accomplished using a combination of manually tuned parameters or thresholds. For example, our computer vision algorithm may flag an image as defective if any window contains contrast greater than ten.
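Concretely, the two steps just described can be sketched in a few lines of Python. The image, window size, and threshold below are illustrative toy values, not parameters from any real inspection system:

```python
def window_contrast(image, row, col, size):
    """Contrast (brightest minus darkest pixel) within a size x size window."""
    pixels = [image[r][c]
              for r in range(row, row + size)
              for c in range(col, col + size)]
    return max(pixels) - min(pixels)

def extract_features(image, size=2):
    """Step 1, feature extraction: slide a window across the image and
    compute the contrast at every window location."""
    rows, cols = len(image), len(image[0])
    return [window_contrast(image, r, c, size)
            for r in range(rows - size + 1)
            for c in range(cols - size + 1)]

def is_defective(image, threshold=10, size=2):
    """Step 2, decisioning: flag the image if any window's contrast
    exceeds a manually tuned threshold."""
    return any(f > threshold for f in extract_features(image, size))

# A mostly uniform image with one unusually bright pixel
image = [[50, 51, 50, 52],
         [50, 90, 51, 50],
         [51, 50, 50, 51]]
print(is_defective(image))  # True: windows containing the 90 have contrast ~40
```

Note that the final decision hinges entirely on the hand-picked threshold, which is exactly where this approach gets fragile.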

Now, as you can imagine, this approach may work well in some cases but fail in others. Not every high contrast region of an image represents a defect. These types of errors often result in high false positive rates, where machine vision systems flag good products as defective. To mitigate these issues, some systems use many different types of features in an effort to make finer-grained distinctions. This approach can result in better performance, but it comes with a real cost. Many features mean many parameters or thresholds to tune, making these systems difficult to adapt to changing conditions on the factory floor, even for the most experienced operators and engineers.

So this two-step approach of feature extraction followed by decisioning, at the heart of many machine vision systems, can in practice be very difficult to successfully deploy and maintain.

Of course, this is not just a problem in manufacturing – the same two-step approach shows up in many other computer vision applications, and for decades researchers have been searching for a more robust and scalable way forward.

One interesting alternative approach is to replace our two-step pipeline with a single unified model that is capable of both extracting features from our images and decisioning. Of course, if we set out to engineer or design a unified model like this, we may end up right back where we started, with two distinct steps.

The real trick is, instead of explicitly programming the unified model, to design it in such a way that it can learn from labeled data.
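To make that idea concrete, here is a minimal, hypothetical sketch of a model whose decision rule is learned from labeled examples rather than hand-tuned. It is only a single-layer model (logistic regression on raw pixel values), not a deep network, and the tiny dataset, learning rate, and epoch count are all illustrative; but it shows the key shift: the parameters come from data, not from manual tuning.

```python
import math

def predict(weights, bias, pixels):
    """Probability that an image (a flattened list of pixel values) is defective."""
    z = bias + sum(w * p for w, p in zip(weights, pixels))
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, labels, lr=0.5, epochs=500):
    """Fit the weights with plain stochastic gradient descent on the logistic loss."""
    weights, bias = [0.0] * len(examples[0]), 0.0
    for _ in range(epochs):
        for pixels, label in zip(examples, labels):
            error = predict(weights, bias, pixels) - label
            bias -= lr * error
            weights = [w - lr * error * p for w, p in zip(weights, pixels)]
    return weights, bias

# Tiny synthetic dataset: "defective" images contain one bright pixel
# (pixel values scaled to the range 0-1).
examples = [[0.5, 0.5, 0.9, 0.5],   # defective
            [0.5, 0.5, 0.5, 0.5],   # good
            [0.9, 0.5, 0.5, 0.5],   # defective
            [0.5, 0.4, 0.5, 0.5]]   # good
labels = [1, 0, 1, 0]
weights, bias = train(examples, labels)
print(predict(weights, bias, examples[0]) > 0.5)  # defective example scores high
print(predict(weights, bias, examples[1]) < 0.5)  # good example scores low
```

Deep networks replace this single layer with many stacked layers, so the features themselves are learned too, but the training loop follows the same learn-from-labels pattern.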

Unfortunately, for most of the history of computer vision, no one really knew how to accomplish this. Researchers came close in the 1980s and 1990s, developing computational models called neural networks; but even as recently as 2010, it really wasn't clear if these models could solve the types of general computer vision problems we care about solving. [1] Then, in 2012, researchers at the University of Toronto published breakthrough work showing, for the first time, how a neural network many layers deep could be successfully trained on a large-scale dataset. [2]

This unified learning approach is called deep learning, and over the last few years it has completely revolutionized the field of computer vision. For example, on the challenging ImageNet image classification benchmark, deep learning based approaches dropped the error rate from around 30% to less than 4% between 2012 and 2015 – achieving super-human image classification performance across the challenge's 1,000 distinct image classes.

For visual inspection applications, deep learning offers dramatic performance improvements over traditional feature extraction and decisioning methods. By learning from human-expert-labeled examples specific to the manufacturing problem at hand, deep learning models can emulate expert decisioning at large scale and high speed. Further, since both feature extraction and decisioning are learned, deep learning systems do not require endless tuning to adapt to changing conditions.

Deep learning offers a powerful alternative to traditional machine vision approaches, and when deployed in the right applications, and on top of the right infrastructure, can deliver tremendous business value.

Watch a full presentation hosted by Vision Systems Design or Contact Us to see how we can help improve your machine vision process.

__________________________________________________

[1] Remarkably, most of the key mathematics and ideas for deep learning were in place in the 1990s. The biggest missing ingredients were computing power and large labeled datasets. See: LeCun, Yann, et al. "Gradient-based learning applied to document recognition." Proceedings of the IEEE 86.11 (1998): 2278-2324.

[2] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "Imagenet classification with deep convolutional neural networks." Advances in neural information processing systems. 2012.

 

About the Author
Stephen Welch is VP of Data Science at Mariner, where he leads a team developing deep-learning based solutions for manufacturing applications. Prior to working with Mariner, Stephen was VP of Machine Learning at Autonomous Fusion, an Atlanta-based autonomous driving startup, where he led the design, development, and deployment of machine learning algorithms for autonomous driving.

Stephen has extensive experience training and deploying machine learning models across a wide variety of domains, including an on-board crash detection algorithm that is now deployed in over 1M vehicles as part of the Verizon Hum product. Stephen strives not just to develop strong technology, but to explain and communicate results in clear and accessible ways – as an adjunct professor at UNCC, Stephen teaches a 60+ person graduate-level class in machine learning and computer vision.

Stephen is also the author of the educational YouTube channel Welch Labs, which has earned 200k+ subscribers and 10M+ views. Stephen holds 10+ US patents, and engineering degrees from Georgia Tech and UC Berkeley.


Spyglass Foundation: The Lean Path to Digital Transformation

Digital Transformation is THE buzzword in many industries these days, and no less so in manufacturing. Significant investments are being made to use technology to reduce cycle times, improve processes, reduce costs, and take the friction out of doing business. McKinsey's "A Guidebook for Heavy Industry's Digital Journey" states that the large-scale implementation of digital technologies and advanced analytics could boost profit margins by 3 to 5 percentage points. These digital transformation projects follow a traditional transformation journey of the five steps below:

Vision – What does success look like? It can be grandiose or it can be very specific and narrowly defined. For one of Mariner's customers, the vision is to address throughput challenges and quality issues that most affect customers.

Diagnostic – In our experience, the challenge isn't finding a problem to solve. It is selecting the most impactful problem from the large collection of possibilities and then prioritizing the rest. We call this step "selecting the problem worth solving." At Mariner, that means finding a problem for which we have a clear definition of success, access to the necessary information, and a business case worthy of the effort. The phrase "worth solving" implies the development of a business case. You should focus on the business case for the first phase you intend to pilot, but also keep your eye on the ROI for the larger vision.

Build the Roadmap – Now that you have a list of projects to tackle, prioritize them. Consider the changes you will need to address in people, process and technology.

Run the Pilot - A pilot is a low cost way of confirming that your hypotheses are on target and that you will be capable of achieving the returns identified in your business case. At Mariner, this embodies a process we call “1 2 3”.

  1. Since our focus is on leveraging AI as the technological centerpiece of digital transformation, our pilot projects begin with building an AI model and demonstrating that it can predict an outcome with an acceptable degree of accuracy. For example, for a large manufacturer of pumps interested in asset management, we are building a model from past telemetry to ensure we can accurately predict failures in time to permit field service to take corrective action.
  2. Continuing with our predictive maintenance example, once the pilot model demonstrates it can predict maintenance needs using historical data, we proceed with a pilot deployment to operationalize the model, first on a limited scale. We’ve developed an IoT platform designed to operationalize analytic models quickly and efficiently. Dubbed “Spyglass Foundation”, we use this framework with great success. We can deploy proven models in 4 weeks, sometimes less.
  3. Model Maintenance – analytic models drift, meaning their accuracy may degrade over time. For this reason, Mariner provides continuous diagnostic services to identify the optimal frequency at which to maintain and retrain models.
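To illustrate the model-maintenance step, the sketch below flags drift by comparing a model's accuracy over a recent window of predictions against its baseline accuracy. The class name, window size, and tolerance here are hypothetical, and a production diagnostic service would apply more sophisticated checks; this is only a minimal sketch of the idea:

```python
from collections import deque

class DriftMonitor:
    """Flag drift when recent accuracy falls too far below the baseline."""

    def __init__(self, baseline_accuracy, window=100, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, actual):
        """Log whether the model's prediction matched the observed outcome."""
        self.recent.append(1 if prediction == actual else 0)

    def has_drifted(self):
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough recent outcomes to judge yet
        recent_accuracy = sum(self.recent) / len(self.recent)
        return self.baseline - recent_accuracy > self.tolerance

monitor = DriftMonitor(baseline_accuracy=0.95, window=10, tolerance=0.05)
for _ in range(10):
    monitor.record("pass", "fail")  # the model is consistently wrong
print(monitor.has_drifted())  # True: recent accuracy 0.0 is far below 0.95
```

When a check like this fires, that is the cue to collect fresh labeled data and retrain the model.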

Scale Up – Once a pilot has been proven out, the next step is to scale it. Mariner has a customer that manufactures automotive glass. Our pilot project on a single production line proved the gains could be attained. The next step is to construct a plan to scale out to the entire factory, followed by the remainder of the enterprise.

Manufacturers are hungry for the savings digital transformation can provide. It is essential that they begin the journey in order to stay competitive with their peers who have made progress. However, many manufacturers don't know where to start. That is where Mariner can help. We have a proven process and tools like Spyglass Foundation to provide a rapid time-to-value approach to pilot and scale your digital transformation projects. It's as easy as 1, 2, 3.

If you’re interested in learning more, please drop me an email.


CEO to CEO Series: Vision Systems, Deep Learning & Defect Detection

By Phil Morris, CEO

I have a very good friend; let’s call him “Jake.” Jake has spent his entire adult life in manufacturing and supply chain. When his employer decided to develop a customer-centric marketing strategy, they called several large global firms known for their marketing strategy development. Each firm presented their impressive, winning sales presentations highlighting their proprietary methodologies, focused organic growth models, customer profile development, and new business model strategies. Of course, each of these polished presentations arrived with a 7-figure price tag to deliver the desired services. Jake, no stranger to sarcasm, said “Hey, I have an idea. How about we ask our customers what they want, and then do that?” Sage advice. Jake’s no-nonsense approach “won” and that is exactly what his employer did . . . with very successful results.

I always keep this story in mind when visiting customers and prospects. It is enlightening the things we learn from people who spend their working lives thriving in the constant whirlwind of the manufacturing world. Over the past year, I have had the opportunity to meet with the following companies:

  • Global glass manufacturer with an interesting problem to solve in their automotive glass division. Their existing defect detection system created several challenges. It could not reliably differentiate naturally occurring dirt, grime, and water spots from real defects. In other words, the system generated an excessive number of false positives. For this reason, they could not equip their production lines with pick-and-place robots. They also needed a large number of trained operators to override the vision system's erroneous decisions.
  • Automotive interiors manufacturer purchased a sophisticated vision system that could not differentiate surface anomalies from real defects. For example, the existing vision system confused lint sitting on top of the fabric for yarn pulls. Without additional human inspection to catch false positives, their vision system would reduce yield and, consequently, revenue. For this reason, they had human inspectors oversee and override decisions made by the system. Lots of them.
  • Global tire manufacturer has a visual inspection station at the end of the tire building process. The vision system captures images of the exterior and interior of the tire. Because of the nature of tires (black on black), their false positive and false negative rates are unacceptably high. Consequently, human inspectors must check each tire by running their hands over every square inch, inside and out, feeling for defects.
  • Pharmaceutical company with a chemical process discovered that when filling vials, occasionally foam would form in the vial. The vision system misinterpreted foam as particulate matter in the liquid or as glass defects, both causing an unacceptable level of false positives. Additional human effort must be expended separating the false positives from the true defects.

Listening to these customers, I discovered a common theme. For the past two or three decades, manufacturers have made significant investments in machine vision systems to automate defect detection and classification. In each example, we found that the cameras, optics, lighting, and image capture were of sufficient quality. Unfortunately, the software scoring the images was not. Modern deep learning and neural network technology can dramatically improve the quality of the scoring results. For example, with one customer we were able to demonstrate a reduction of false positives from 29% to less than 1%. The customer's existing vision system was highly specialized and represented a significant investment, so replacing it was not an acceptable solution. Instead, we created a deep learning model that scored much more accurately, and operationalized it by building a system that intercepts images from the vision system and scores them with the improved model. Once scored, the results are shared on the operator's console. This is a preliminary step, however, as the longer-term goal is to install pick-and-place robots. Once that is accomplished, our solution will communicate pass/fail commands directly to a robot.

Our customers are happy to learn we can extend the life of the investments made in their existing vision systems and, at the same time, reduce the amount of manual intervention required to compensate for the image scoring software.

Do you have an existing vision system with a high degree of false positives? Do you know the frequency of defects by defect classification? Do you need a cadre of people to monitor the results of your inspection system?

If so, we should talk.


Is it Hard to Successfully Deploy and Scale IoT Solutions?

Earlier this year, McKinsey & Co. published the findings of their latest Industry 4.0 survey in an article titled “How digital manufacturing can escape ‘pilot purgatory’.” The survey contained some interesting findings. 

At the opening of the piece, the authors share that nearly 70% of the survey respondents named “Digital Manufacturing” their top manufacturing priority. Not really a surprise since so much energy and effort has been expended on supply chain integration, advanced planning, kaizen, six sigma/lean, TQM, and other operational improvement strategies and tactics through the years. 

The article goes on to identify three broad use cases for digital manufacturing initiatives that are equally popular focus areas, as follows: 

  • Connectivity – solutions that improve and facilitate operational performance, management, and everyday collaboration of employees 
  • Intelligence – use of analytics, predictive modeling, and cognitive science to gain insight and improve decision making
  • Flexible automation – use of new digital equipment to increase efficiency and flexibility in the production system

Again, not surprising. Most of our customers are exploring multi-faceted Industry 4.0 strategies that encompass some or all of these areas. And here at Spyglass, we’ve pursued a product development roadmap that supports each of these broad areas. 

For us, the most interesting finding of the latest survey was an insight into where most of these global manufacturers are in their Industry 4.0 journey. Almost 30 years after Mark Weiser at Xerox's PARC envisioned "ubiquitous computing" in our future, and more than a decade since digital transformation really got underway, many manufacturers struggle to get traction in these initiatives.

The authors go on to share six “success factors” that manufacturers demonstrating “at-scale” impact appear to share. These are: 

  1. Approach the opportunity “bottom-line-value backward” rather than technology forward
  2. Communicate a clear vision and change story for competitive advantage
  3. Select a comprehensive technology stack that scales and supports analytics early on
  4. Select the right technology partners who can build a focused ecosystem 
  5. Secure enterprise-wide sponsorship – don’t treat this as a “one off”
  6. Invest in skills building to get ahead of the capability gap. 

We completely concur with these recommendations. It's one of the main reasons we offer a completely different approach to Industrial IoT implementation than most of our competition. Rather than ripping and replacing, starting with an expensive platform implementation investment, or treating IIoT as a lab-based pilot undertaking, we start with a true production trial.

What is a production trial? It's an implementation of the platform and services that will remain the foundation of a full-scale, enterprise-deployed solution. It solves a real-world shop floor issue in production, not with extracted "representative" data (or hypothetical mocked-up datasets). And most importantly, it engages the actual manufacturing stakeholders – production operators, manufacturing engineers, maintenance leads, QA professionals, and plant leadership – to address the opportunity and prove the results. In our experience, technical obstacles can be overcome. Success in these efforts, as in the enterprise technology programs that pre-date IoT, is more often determined by so-called soft factors like sponsorship, focus, and change leadership.

So, what have we seen work successfully in these initiatives? 

Start with the end in mind. Simple and obvious, right? It is amazing to us how many manufacturers are letting their technology organizations invest millions in infrastructure preparations and pilot learnings without a solid business case or vision for how the effort pays for itself. In our experience, “build it and they will come” plays well in the movies but rarely pays off on the factory floor. It’s a popular myth that successful IoT requires major investment in upgrades in IT and OT infrastructure first before real impact can be realized. The reality is that identifying and focusing on specific issues or opportunities, combined with leveraging a mature, robust third-party (i.e., cloud-based) technology stack, shrinks the capital investment requirements enormously – allowing a modest single line case to yield positive impact and set the stage for a broader rollout across other lines and facilities. 

While all improvements offer a mix of qualitative and quantitative benefits, working up front to estimate and build consensus on the quantitative benefits and return on investment (ROI) can really clarify the solution requirements that matter. In our experience, if an ROI case cannot be made the focus may not be clear enough to sustain the implementation. The good news is that when that focus occurs, the result is usually significant and compelling. Here are a few recent examples from our customers: 

  • A food manufacturer implemented remote monitoring and real-time alerts for cooking processes where a blockage in the product flow could result in batch loss within minutes if undetected. A single incident could result in a full shift of lost production capacity. The customer conservatively estimated a four-month return on investment on better control of a single condition. That investment also funded other monitoring improvements throughout the entire manufacturing operation.
  • An automotive supplier applied AI/Machine Learning algorithms to an existing industrial visual inspection camera system and slashed false positive rates from 30% to less than 2% within a few months. This customer expects 3-4x ROI in the first year alone.

  • An industrial equipment producer uses our Connected Product solution to monitor their field service effectiveness. By monitoring the performance of industrial batteries in use at their customers' locations, they were able to extend service life, increase uptime, delight customers, and realize a 30% ROI annually.

Starting with the end in mind also implies some nuanced consideration of which opportunities to pursue first. The temptation might be to select a "big payoff" opportunity or a "small, easy win." Part of the consideration should include the breadth and reusability of the solution once confirmed. For example, you might have a single issue with a large benefit to solving, but one tied to a unique manufacturing process or machine that has only a single deployment in your ecosystem. Another opportunity might have a more modest impact in the first instance but, once solved, can be replicated dozens or even hundreds of times throughout your manufacturing footprint. The latter may ultimately represent a larger enterprise opportunity, and it also offers the advantage of becoming a proven success for new lines and facilities as the rollout commences.

As the McKinsey team points out, applying value-based discipline to the approach and plan at the outset will keep the focus on the right priorities and build momentum from the outset. 

Get your best talent involved early. It is a mistake not to engage a multi-disciplinary team at the outset. Your best operators, managers, maintenance personnel, and automation and process control experts are essential to the successful design and validation of IoT solutions. Even small parts of manufacturing systems can involve a wide array of variability and operating conditions that need to be considered. Leveraging this internal expertise early helps also with ownership and adoption as you plan rollout. Encouraging skeptical dialog can ultimately shorten the timeline to robust success, as long as sponsorship is visible and the program goals are clear and well articulated. 

Engaged sponsorship. Again, this probably seems obvious but too often the view is that this is a (fill in the blank) initiative, not a manufacturing initiative. Ultimately, the executives responsible for running safe, efficient, high quality production need to take personal ownership for the success of these initiatives. Program leadership can be delegated to a capable individual, but the executive sponsor needs to be engaged, visible, and easily accessible to that delegated program leader or progress quickly slows or halts. Executive sponsors need to recognize that this will be a routine part of their workweek, not a once a month “check in”. 

Evolution not revolution. At the outset, be clear that learning is a natural part of these undertakings. Conditions change over time, and the root causes of performance deficiencies in a production process will change as stakeholders have better tools and data available to manage operations. Like other operations and process improvement practices we are all familiar with, IoT solutions need to be regularly revisited, enhanced, and re-validated. It may seem trite to use the "journey, not a destination" metaphor, but recognizing that these analytics-based solutions will need ongoing attention and tuning is important to planning your team requirements and rollout strategy. Since addressing and mitigating variability is often at the heart of manufacturing process improvement, adopt the mindset that new conditions are not "failures" of the IoT solution but rather an expected part of the continuous improvement and learning process.

Drive to active outcomes. Another frequent shortcoming of innovations that feature better "analytics", "data accessibility", "communications", "monitoring", or other generalized benefits often touted with IoT solutions is that they fail to go the critical last mile to timely action by the right decision maker. Don't settle for IoT capabilities that merely ingest and manage massive volumes of data and potential insights, or that leave discovery of trends and improvements to historical perspective. Real impact derives from delivering the right actionable information, in a timely fashion, to the right person to do something with. Encourage your team to constantly ask the questions: "Who will be able to do something differently if we solve this correctly?" and "What does that individual really need to make that happen?" If better decisions and actions aren't directly forthcoming, the risk is considerable that results will not be sustainable.

So, is it hard to successfully deploy and scale IoT solutions? Not if you choose the right approach, a proven platform, a reliable technology partner (or partners), and focus on scalable improvements with measurable results. We’ve had success working with both mid-sized and very large manufacturers across many industry verticals. If you are trying to avoid or get out of ‘pilot purgatory’, let’s talk! 


OEE and Spyglass - A Primer for Manufacturing CFOs

“I know you believe this can really help my company, but why exactly?”

Last Thanksgiving, our National Sales Manager at Spyglass, Mark Adelhelm, was having a conversation over the holiday dinner table with his brother-in-law. Inevitably, their conversation turned to business. Mark’s brother-in-law is a CFO at a midsized manufacturer, and he was intrigued by Spyglass and how IoT and OEE analytics could improve financial performance. The conversation led to an extended email exchange and, eventually, this article which we are pleased to share with you: “OEE and Spyglass - A Primer for Manufacturing CFOs”.

In this document, you will find answers to these questions:

  • What is OEE?
  • How does Spyglass help me improve OEE?
  • What about IoT, AI, and Machine Learning? These technologies are all over the news. How do they come into play?
  • I’m worried that my internal I.T. team may not have the expertise yet to manage an application like Spyglass. What needs to be in place “expert-wise” in order to get this going?
  • Is there more I should know?

A great document every manufacturing CFO should read.

About the Author
Mark Adelhelm has spent his 25+ year career with leading global manufacturers driving innovation and operational excellence throughout their entire supply chain. He is responsible for building healthy, sustainable relationships with Spyglass customers and helping them implement successful IoT solutions to achieve their priorities and business results.  In his words, “Leading lasting, meaningful change takes vision, passion, and attention to detail. I get that. I love what I do, helping great leaders take their teams to levels of performance and results they didn’t think possible. If you want to harness the power of Industry 4.0 for your enterprise, let’s connect.”



CEO to CEO Series: The Magic of AI

By Phil Morris, CEO, Mariner

Clarke's Third Law
Any sufficiently advanced technology is indistinguishable from magic.

The first genre of literature to capture my attention and imagination was science fiction. At an early age, I was introduced to the great writers from the golden age of science fiction: Robert Heinlein, Isaac Asimov, Frederik Pohl, and Philip K. Dick. One of the stars of that august group was Arthur C. Clarke. Clarke was much more than just a science fiction writer; he was an inventor, a futurist, and a philosopher.

I like Clarke's Third Law because of the truths it contains. We see these truths repeated throughout the history of humankind. When two cultures with significant technical differences encounter each other, "magic" is the only logical explanation the less developed culture has for the difference.

“Magic” represents “not sufficiently understood.”

Why, you may be asking, am I schooling you in retro science fiction? In a previous CEO to CEO message, I promised to share some ideas for applying AI to manufacturing problems. If you mention AI in most manufacturing circles, the standard response is … predictive maintenance. It is well understood that one can accurately predict part failures using historical lifespans and forecasted processing demands.

Where is the magic in that?

Last month, I gave an example of AI that was within reach (reducing defects) followed by a second example that was just within reach (optimizing unplanned downtime), further to the right on the “magic” continuum. I would like to do the same this month.

Automated Visual Inspection

Automated visual inspection is not new; it has been around since the 1980s. Many manufacturers have employed it as a nondestructive testing method. But like most things, much has happened in the past 30+ years with the advent of cloud, advanced analytics capabilities, deep learning systems, and more. Unfortunately, manufacturers and industrial vision systems haven't been quick to embrace these new capabilities. However, the good news is you don't necessarily need to rip and replace your machine vision system to take advantage of new technology. I know something about this, as we have recently announced our Spyglass Visual Inspection system.

We recognize the dilemma of the capital investment in camera systems, infrastructure, and training. But what if those images, which are generally very good, and the vision system that provides them could be augmented with this new technology? We are currently involved with two customers in the automotive sector doing exactly that: training deep learning / neural net models capable of eliminating false positives and classifying defects. With this additional information, we alert human operators when defects exceed upper or lower control limits, providing even more value.
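The control-limit alerting mentioned above can be sketched with standard statistical process control math. A minimal illustration, assuming a p-chart (proportion defective) with three-sigma limits; the function names are illustrative, not part of Spyglass:

```python
# Sketch of defect-rate alerting against SPC control limits.
# Assumes a p-chart: limits = p_bar +/- 3 * sqrt(p_bar*(1-p_bar)/n).
import math

def p_chart_limits(p_bar, sample_size):
    """Lower/upper three-sigma control limits for a proportion-defective chart."""
    sigma = math.sqrt(p_bar * (1 - p_bar) / sample_size)
    lcl = max(0.0, p_bar - 3 * sigma)
    ucl = min(1.0, p_bar + 3 * sigma)
    return lcl, ucl

def check_sample(defects, sample_size, p_bar):
    """Return an alert message if this sample's defect rate is out of control."""
    rate = defects / sample_size
    lcl, ucl = p_chart_limits(p_bar, sample_size)
    if rate > ucl:
        return f"ALERT: defect rate {rate:.3f} above UCL {ucl:.3f}"
    if rate < lcl:
        return f"NOTE: defect rate {rate:.3f} below LCL {lcl:.3f} (investigate)"
    return None

# Historical average defect rate of 2% on samples of 500 units
print(check_sample(defects=25, sample_size=500, p_bar=0.02))
```

In practice the baseline rate and sample sizes would come from the line's own historical data, and the alert would route to an operator dashboard rather than a print statement.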

Learn by Observing the Humans

Manufacturers struggle with the ‘graying of the workforce’ problem. As veteran employees with decades of experience retire, all of those years of experience retire with them. Performance and quality suffer as a result. Here comes the magic. What if you took a deep learning model and trained it on a particular production line to optimize for yield? You provide all possible inputs to that model: all the production line telemetry, raw material specifications, ambient temperature, humidity, and the operator’s identification. To these you add two additional pieces of information: the quality of the resulting product (from the QA lab) and a means of identifying which of the sensors the operators are able to control.

Over time, a deep learning model could monitor all the variables on the production line while in operation.

  • It could observe the adjustments the operator makes.
  • It would know the yield from existing telemetry.
  • It would know the asset utilization.
  • It would know the impact of the changes the human makes.
  • If it is designed to optimize for yield, it would “learn” which changes result in improvements and discard changes that do not.

Over time, the model could suggest adjustments the operator should make to the production line. Once the team is comfortable with the model, it could be connected directly to the production line and operate without the human.
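The observe-and-learn loop above can be sketched very simply: record each operator adjustment alongside the yield change that followed, then recommend the adjustments that historically helped. A real system would train a deep learning model over full production telemetry; this toy sketch, with illustrative names throughout, just shows the idea:

```python
# Toy sketch of "learn by observing the humans": track the yield effect
# of each operator adjustment and surface the ones that improved yield.
# A production system would use a proper learned model, not averages.
from collections import defaultdict

class AdjustmentLearner:
    def __init__(self):
        self._history = defaultdict(list)  # adjustment -> observed yield deltas

    def observe(self, adjustment, yield_before, yield_after):
        """Record an operator adjustment and the yield change that followed."""
        self._history[adjustment].append(yield_after - yield_before)

    def recommend(self):
        """Adjustments whose average observed effect on yield is positive,
        best first; adjustments that did not help are discarded."""
        scored = {a: sum(d) / len(d) for a, d in self._history.items()}
        keep = [a for a, s in scored.items() if s > 0]
        return sorted(keep, key=lambda a: -scored[a])

learner = AdjustmentLearner()
learner.observe("raise zone-2 temp 5C", 0.91, 0.94)
learner.observe("raise zone-2 temp 5C", 0.92, 0.95)
learner.observe("increase line speed 10%", 0.93, 0.90)
print(learner.recommend())  # only the yield-improving adjustment survives
```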

Just like magic.

I will leave you with one more Clarke quote:

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

Want to talk more about the magic of AI?

Connect with me on LinkedIn and let’s talk.


Microsoft Azure IoT, AI in Manufacturing, American Manufacturing Summit

American Manufacturing Summit - #MFGUS19

Spyglass joins Microsoft at this year’s event to bring AI and IIoT to life for manufacturing executives

American Manufacturing Summit (March 26-27 in Chicago) brings together operations, production, and IT leaders from across the country to explore manufacturing innovation and plant floor optimization. This year, our Spyglass team is excited to join Microsoft at the event (in the Microsoft booth, #41-44) and in round tables and meetings throughout the two-day show.

Key Themes for AMS 2019


Microsoft and Spyglass will help attendees explore real-world solutions available today that enable manufacturers to connect their factory floor to Microsoft Azure and create actionable insights aimed at reducing unplanned downtime, improving quality, and balancing production.

Spyglass Showcases Cognitive AI and IIoT Solutions

Spyglass will showcase the following at the event:

  • Spyglass Connected Factory. Reduce downtime and improve OEE with production line data.
    • Live demo of Spyglass Connected Factory with PLCs sending data through a Moxa gateway to Spyglass to be visualized in Microsoft Power BI.
    • Customer Success stories of Spyglass Connected Factory
    • How Spyglass Connected Factory provides a lean implementation and rapid ROI for Microsoft Azure IoT
  • Spyglass Visual Inspection. Improve product quality with AI-driven real-time insights.
    • Launch of Spyglass Visual Inspection – a cognitive AI solution that leverages existing vision systems to intelligently minimize defects and improve quality.
    • Customer Success stories of Spyglass Visual Inspection
    • How Spyglass Visual Inspection can be implemented in less than 60 days to accelerate digital transformation at the plant
  • Operational Excellence Expert, Joey Stokes, from Peak Performance, will join us in the Microsoft booth for the full event. Peak Performance offers workforce development training and consulting services to ensure that you have the right strategy in place for Industry 4.0 across people, processes, and technology.

Attending AMS 2019? Let’s Book Time to Meet

We’d love to meet with you personally and share some ideas to make 2019 a breakout year for you and your manufacturing team. Please get in touch here and we’ll respond quickly to schedule time during the event to connect.


AI in Manufacturing, Deep Learning Image Recognition, Quality Analytics, Defect Detection, Industry 4.0

Announcing Spyglass Visual Inspection

Intelligently minimize defects and reduce costs

We are pleased to announce the immediate availability of Spyglass Visual Inspection. Spyglass Visual Inspection harnesses the power of AI, IIoT, and image recognition to help manufacturers improve product quality while significantly reducing the costs associated with manufacturing flaws.

Effectively addressing quality concerns is critical in manufacturing

AI helps drive improved defect detection and better business outcomes:

  • 10-15% of total operating costs are often attributable to poor-quality product (Forbes, 2018)
  • 1/3 of manufacturing executives now identify AI-driven technologies as crucial to driving customer satisfaction (Forbes, 2018)
  • $3.7 trillion - the value that McKinsey forecasts AI-powered “smart factories” will generate by 2025

What is Spyglass Visual Inspection?

Spyglass Visual Inspection is a rapid time-to-value QA optimization solution for manufacturers of any scale. It is designed to:

  • Quickly and accurately detect defects so that action can be taken to reduce waste and improve customer satisfaction
  • Drive continuous quality improvement by enabling greater visibility with a bird’s eye view of product quality across multiple lines or facilities so you can proactively improve processes
  • Use predictive analytics to proactively improve quality processes and perform root cause analysis
  • Implement and ramp up quickly, ensuring a rapid return on your investment
  • Augment your existing vision system (if you have one) to gain additional ROI on that investment
  • Use a lean approach to implementing AI and IIoT so that you control costs and gain value at every stage

Global glass manufacturer saves over $1M quarterly with Spyglass Visual Inspection

A global automotive glass manufacturer is using Spyglass Visual Inspection today as their comprehensive platform for defect detection, prediction, and analysis. Their challenge was that they needed more accurate defect detection in their glass cutting process: false positives from their vision system resulted in losses of $30 per falsely rejected unit across 40 production lines. They were looking for a solution that would use custom vision, image recognition, and machine learning to more accurately detect defects at high speed and in large volume. With Spyglass Visual Inspection, they are already achieving over $1 million in quarterly savings.
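A quick back-of-envelope check on these figures, using only the numbers stated above and assuming a 90-day quarter, shows what the savings imply per line:

```python
# Back-of-envelope: at $30 lost per falsely rejected unit, how many
# false rejects per line per day must be eliminated to save $1M/quarter?
# The 90-day quarter is an assumption for illustration.
def false_rejects_per_line_per_day(quarterly_savings, cost_per_unit, lines, days=90):
    units_per_quarter = quarterly_savings / cost_per_unit
    return units_per_quarter / lines / days

rate = false_rejects_per_line_per_day(1_000_000, 30, 40)
print(f"~{rate:.1f} false rejects eliminated per line per day")
```

Roughly nine false rejects per line per day, which illustrates how a modest per-unit loss compounds at scale.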

Where can I learn more?

You can review our presentation, solution overview, and customer case study. We'd love to connect to learn more about your quality initiatives and how we could help. Connect with us here.

How do I get started?

Every manufacturer is different, and every defect detection requirement is unique. It's critical to determine quickly whether Spyglass Visual Inspection is the right fit for your quality goals and operating conditions. We'll start with a low-cost Proof of Value engagement with our Spyglass team to determine your unique accuracy requirements and train the machine learning model accordingly. For most customers, the Proof of Value stage can be completed in 4 weeks or less. Then we'll operationalize the solution in your factories, using Spyglass as the platform to implement your customized visual inspection solution. Finally, we'll maintain and improve the machine learning model: on a quarterly basis, we will meet with your quality teams and help the model learn from any mistakes it has made, so its accuracy continues to improve over time.

Ready to find out more? Connect with us here.


predictive maintenance, condition-based maintenance, OEE, manufacturing, analytics, asset reliability

What happens when critical machines fail?

You’ve likely been there before and it’s never good - a critical machine fails unexpectedly and production is shut down. Your team works around the clock to get it back online to minimize the impact. The costs – from emergency repairs, to lost productivity, and ultimately lost product – are significant.   

Maintenance Strategy - To address the issue, you've invested in various maintenance plans over the years, from run-to-failure to proactive maintenance, preventive maintenance, and more. You may still come across situations where an experienced operator walks onto the site and knows almost immediately that something isn’t right. They hear things or smell things, or have a gut instinct that production isn't happening the way it's supposed to. But when that operator goes home at night - or worse, retires - you lose all of that knowledge. How can a junior operator be equipped with the same knowledge that an asset is about to fail and needs maintenance?

The impact of data - Today, sensors and instrumentation create data that can be fed into systems that continuously monitor assets. Rapid advances in technology mean that maintenance managers no longer need to look in the rear-view mirror, using lagging indicators to drive maintenance schedules. Data can be harnessed to identify conditions that compromise assets (or quality, or production) so that issues can be predicted and proactive steps taken.
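The continuous-monitoring idea can be illustrated with a simple rolling baseline: flag any reading that deviates more than three standard deviations from recent history. Real condition monitoring uses much richer models; this sketch, with illustrative names, shows only the principle:

```python
# Sketch of condition monitoring: flag a sensor reading as anomalous when
# it drifts more than `threshold` standard deviations from a rolling baseline.
from collections import deque
import statistics

class SensorMonitor:
    def __init__(self, window=50, threshold=3.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value):
        """Return True if `value` is anomalous versus the rolling baseline."""
        anomalous = False
        if len(self.readings) >= 10:  # require a minimal baseline first
            mean = statistics.fmean(self.readings)
            stdev = statistics.stdev(self.readings)
            if stdev > 0 and abs(value - mean) / stdev > self.threshold:
                anomalous = True
        self.readings.append(value)
        return anomalous

monitor = SensorMonitor()
for temp in [70.1, 70.3, 69.8, 70.0, 70.2, 69.9, 70.1, 70.0, 69.7, 70.2]:
    monitor.update(temp)        # build the baseline from normal readings
print(monitor.update(85.0))     # a bearing running hot -> True
```

This is the kind of leading indicator that replaces the veteran operator's "gut instinct" with something a junior operator can act on.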

How to get started - Today, the cost and complexity of managing machines reliably and consistently with emerging analytics technologies, like machine learning, has been significantly reduced. You can use cost-effective solutions quickly and even make the case to senior management that they can be funded with a reallocation of the cost savings that will be achieved by avoiding downtime or extending the life of assets. But,

How do you know machine learning systems will work in your plant and for your team?

Do you have the data required?

Will the alerts provided enable the right employee to take the right action at the right time?

Will your team trust the data or will they continue doing their job in ways that worked in the past?

To answer these questions and achieve meaningful results, the right approach is to start small, think big, and go fast.

Begin with a few assets on one line at one site – no risky capital investment required.

  • Get your maintenance and reliability engineers involved early.
  • Learn from the first dashboards and alerts created and improve on them often and quickly.
  • Get buy-in from senior management to move forward based on the cost savings you can prove with your small project.
  • Expand to more assets, more lines, and more sites.
  • Follow a lean approach to maintenance strategy to ensure success at each stage.

At Mariner, we have worked with manufacturers - large and small - to monitor assets to reduce unplanned downtime, improve quality, and optimize maintenance. We created Spyglass Connected Factory to get you started and seeing improvements in less than 4 weeks. Learn More.