Data Science Best Practices and Smart Manufacturing with Domino Data Lab

“Industry 4.0 is not the future, but a reality at Pirelli”

Written by Claudio G. Giancaterino

“Data Science in Pirelli” by Carlo Torniai, Head of Data Science & Analytics, Pirelli

Carlo Torniai, Head of Data Science & Analytics at Pirelli, opened the event, held on 9th October 2017, with a brief introduction to how Pirelli is working on the concept of a “data-driven enterprise”.

The company is structured into three clusters: Smart Manufacturing, using predictive manufacturing and machine learning; Integrated Value Chain, with demand forecasting to optimize raw-material purchasing; and Services built on top of Cyber Technologies (e.g. Cyber Tyre, a smart tyre with a self-powered microchip inside it).

What are the keys to this success? First, a team built from people with different backgrounds; second, an agile process that breaks silos through mutual transparency and trust; and last but not least, a technology stack that uses the right tools for the job.

“Lessons Learned from advanced data science organizations” by Nick Elprin, CEO, Domino Data Lab

In the second talk, Nick Elprin, CEO of Domino Data Lab, shared patterns observed in the most advanced data science organizations and strategies used to put these ideas into practice.

Domino Data Lab is the leading Data Science Platform for the enterprise.

How does data science work in the best organizations? Data scientists work together as a team, contributing to collective knowledge, and they are able to use many tools.

Best practices can be divided into three areas. The first is collaboration, which requires not only teamwork but also open discussion of projects, shared context, and discoverable experience. The second area concerns reproducibility and reusability of code and datasets, but also of environments, packages and results. The last component is agility: trying new tools, packaging resources, and deploying them to keep pace with the business.

The strategy is to find ways to incentivize best practices with a bottom-up approach: give data scientists a central platform where they can manage packages and infrastructure, and keep track of their work in a searchable, discoverable way.


“How Pirelli uses Domino and Plotly for Smart Manufacturing” by Alberto Arrigoni, Senior Data Scientist, Pirelli

In the third talk, Alberto Arrigoni presented some data science practices at Pirelli.

In the Pirelli Smart Manufacturing vision there is a roadmap with three main steps:

-the first is to analyze what happened in manufacturing yesterday and what is happening right now, i.e. real-time analytics;

-the second step is predictive manufacturing: the idea is to build models able to predict product quality as early as possible by connecting the manufacturing process and process parameters to quality KPIs;

-the third step is prescriptive manufacturing: connecting machine to machine and machine to human, so that this interaction can automatically adjust the process to maximize quality and efficiency.

The goal to achieve is a virtual factory.

Data analytics can be split into two data-product categories.

The first is KPI visualization on data that is not so big, used to understand, for example, how many tyres were scrapped yesterday or production cycle times. The second works on big data for trend detection and anomaly detection. In the latter case, visualization is not used to trigger an action, as in the former, but for a deep dive after an action has been taken.

One example of KPI visualization is analyzing process-time discrepancies across different machines; an example of smart alerting is quality assessment for several machines at the end of the process, detecting changes in the trend of a KPI and tracing them back through the procedure.
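As a sketch of the first category, the per-machine comparison of process times could look like the following in Python. The record format and field names here are illustrative, not Pirelli's actual schema:

```python
from collections import defaultdict
from statistics import mean

def process_time_discrepancies(records):
    """Summarize cycle-time discrepancies across machines.

    records: iterable of (machine_id, cycle_time_seconds) pairs.
    Returns, per machine, its mean cycle time and the deviation
    from the fleet-wide mean -- the kind of KPI a dashboard would plot.
    """
    by_machine = defaultdict(list)
    for machine, t in records:
        by_machine[machine].append(t)
    fleet_mean = mean(t for _, t in records)
    return {
        m: {"mean": mean(ts), "deviation": mean(ts) - fleet_mean}
        for m, ts in by_machine.items()
    }

# Example: machine B runs 4 seconds slower than machine A on average.
kpi = process_time_discrepancies(
    [("A", 10.0), ("A", 12.0), ("B", 14.0), ("B", 16.0)]
)
```

A dashboard would then chart `deviation` per machine to make the discrepancies visible at a glance.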

To deploy trend detection there are several options. One of them uses Domino as a gateway to expose R code as an API; Python code then calls the API to get the trend-detection results and plots them with Plotly.
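The talk does not show the R model itself, but the detection step behind such an API could be something like a CUSUM change detector. Here is a minimal Python stand-in for that logic; the baseline window of 20 points, the drift and the alarm threshold are all assumptions for illustration:

```python
import statistics

def cusum_alarm(values, drift=0.5, threshold=5.0):
    """Two-sided CUSUM: return the index where a sustained shift in the
    KPI is first flagged, or None if no trend change is detected.

    The baseline mean and scale are estimated from the first 20 points
    (assumed to be in-control production data).
    """
    mu = statistics.mean(values[:20])
    sigma = statistics.stdev(values[:20]) or 1.0  # guard against zero spread
    s_hi = s_lo = 0.0
    for i, v in enumerate(values):
        z = (v - mu) / sigma
        s_hi = max(0.0, s_hi + z - drift)  # accumulates upward shifts
        s_lo = max(0.0, s_lo - z - drift)  # accumulates downward shifts
        if s_hi > threshold or s_lo > threshold:
            return i
    return None

# A level shift after sample 30 is flagged a few samples later.
alarm_index = cusum_alarm([0.0] * 30 + [2.0] * 20)
```

In the deployment described above, logic like this would live behind the Domino-hosted API, with the Python client only fetching the flagged indices and rendering them with Plotly.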

Another application is the use of machine learning to identify anomalies in process parameters, in order to keep the procedure, and hence the quality, steady. The normal distribution is the reference model for the parameter distributions, and outliers are detected not on a single parameter in isolation, but on the process as a whole.
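Under a multivariate normal assumption, outliers "for the overall process" are typically found with the Mahalanobis distance computed jointly over all parameters, rather than thresholding each parameter separately. A minimal NumPy sketch (the 9.21 cutoff is the chi-squared 99% critical value for two parameters, chosen here for illustration):

```python
import numpy as np

def mahalanobis_outliers(X, threshold=9.21):
    """Flag rows of X (n_samples x n_parameters) whose squared
    Mahalanobis distance from the sample mean exceeds the threshold.

    A point can be unremarkable on each parameter individually yet
    violate the joint correlation structure -- which is exactly what
    this distance captures.
    """
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return d2 > threshold

# Synthetic example: 200 in-control samples plus one gross outlier.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(200, 2)), [[8.0, -8.0]]])
flags = mahalanobis_outliers(X)
```

In production one would estimate the mean and covariance on known-good runs only, so that anomalies do not contaminate the reference model.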

Domino and Plotly have proved a good combination for:

-fast prototyping in exploratory analysis;

-combining output from algorithms and machine learning models into interactive web-based visualizations.

 
