Agenda
Predictive Analytics World for Manufacturing Las Vegas 2018
June 3-7, 2018 – Caesars Palace, Las Vegas
This page shows the agenda for PAW Manufacturing. Click here to view the full 7-track agenda for the five co-located conferences at Mega-PAW (PAW Business, PAW Financial, PAW Healthcare, PAW Manufacturing, and Deep Learning World).
Session Levels:
Blue circle sessions are for All Levels
Red triangle sessions are Expert/Practitioner Level
Workshops - Sunday, June 3rd, 2018
Full-day: 8:30am – 4:30pm
This one-day workshop reviews major big data success stories that have transformed businesses and created new markets. Click workshop title above for the fully detailed description.
Two and a half hour evening workshop:
This 2.5 hour workshop launches your tenure as a user of R, the well-known open-source platform for data analysis. Click workshop title above for the fully detailed description.
Workshops - Monday, June 4th, 2018
Full-day: 8:30am – 4:30pm
This one-day session surveys standard and advanced methods for predictive modeling (aka machine learning). Click workshop title above for the fully detailed description.
Full-day: 8:30am – 4:30pm
Gain experience driving R for predictive modeling across real examples and data sets. Survey the pertinent modeling packages. Click workshop title above for the fully detailed description.
Full-day: 8:30am – 4:30pm
Dive in hands-on with crucial data prep steps, including cleaning, missing value imputation, feature creation/selection, and sampling. Click workshop title above for the fully detailed description.
Full-day: 8:30am – 4:30pm
This one-day introductory workshop dives deep. You will explore deep neural classification, LSTM time series analysis, convolutional image classification, advanced data clustering, bandit algorithms, and reinforcement learning. Click workshop title above for the fully detailed description.
Predictive Analytics World for Manufacturing - Las Vegas - Day 1 - Tuesday, June 5th, 2018
Applied deep learning has fast become a standard tool for many industry machine learning applications. New advances in neural network techniques have opened the doors to solving problems at scale that were out of reach until recently. Because of these advances, applications such as image recognition for self-driving cars, medical image classification, text translation, and fake news detection are both tractable and often the industry standard. In this keynote, Mike Tamir, who heads the data science teams at Uber ATG—the self-driving cars division—reveals how two key application areas of deep learning signal the broad importance of this emerging technology.
Data science, if judged as a separate science, exceeds its sisters in truth, breadth, and utility. DS finds truth better than any other science; the crisis in replicability of results in the sciences today is largely due to bad data analysis, performed by amateurs. As for breadth, a data scientist can contribute mightily to a new field with only minor cooperation from a domain expert, whereas the reverse is not so easy. And for utility, data science can fit empirical behavior to provide a useful model where good theory doesn’t yet exist. That is, it can predict “what” is likely even when “why” is beyond reach.
But only if we do it right! The most vital data scientist skill is recognizing analytic hazards. With that, we become indispensable.
DataRobot will demonstrate how innovations in machine learning and artificial intelligence can easily be leveraged to address a variety of opportunities and challenges across industries. Companies now have the ability to use machine learning and AI to gain a competitive advantage in their market, reduce operational costs, create new revenue streams, and drive higher customer satisfaction and loyalty. DataRobot's presentation will outline how scaling AI across your enterprise delivers top- and bottom-line benefits, creating a truly "AI-Driven Enterprise."
Anomaly detection has utility in many industries, and its potential to reduce costs and risk is huge when applied to maintenance prediction. But widespread sharing of techniques for predicting major episodes without a classic pattern of historical data has been limited by the lack of public data. In this session, we use new, publicly available IoT data from motor sensors along with a variety of techniques to create predictive models. We also explore the effectiveness of Image Mining to further enhance our models. A roadmap for use on any type of sensor or IoT data is also discussed. Examples throughout are based on public data and open-source software so that everyone can benefit from this work.
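The abstract does not name a specific algorithm, but one common open-source approach to this kind of problem is an isolation forest trained only on normal sensor readings, which sidesteps the need for a classic failure history. The sensor names, ranges, and scikit-learn usage below are illustrative assumptions, not the presenters' actual method:

```python
# Hedged sketch: isolation-forest anomaly detection on simulated motor-sensor
# data. Feature names and value ranges are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated healthy readings: vibration (mm/s) and bearing temperature (deg C)
healthy = rng.normal(loc=[2.0, 60.0], scale=[0.3, 2.0], size=(500, 2))
# Simulated readings from a degrading motor, far outside the healthy envelope
degrading = rng.normal(loc=[6.0, 95.0], scale=[0.5, 3.0], size=(5, 2))

model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)
flags = model.predict(degrading)  # -1 = anomaly, 1 = normal
```

Because the model is fit only on healthy data, it needs no labeled failures, which matches the session's premise that a classic pattern of historical failure data is often unavailable.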
The surge in IoT and cloud computing has led to one-way, centralized transmission of raw data from billions of installed sensors to central systems for storage and processing. As a result, questions have arisen over whether this surplus of data can be processed securely in real time while still allowing data recovery for predictive analytics. Edge analytics offers a significant step toward addressing this growing concern. By deploying predictive analytics models on the edge, this presentation will reveal the impact of edge-based analytics on Seagate operations, such as predictive maintenance, product quality checks, security surveillance, and safety monitoring.
Even with the siloed nature of most manufacturing industries, vast cost savings are available across several points of the manufacturing and distribution processes. With real-time predictions, diwo leverages its cognitive framework to coordinate analytics, such as current supply and demand signals, and to quantify decisions based on these contextual insights, taking decision-making beyond knowledge generation and enabling optimized, cost-saving decisions.
Data science, machine learning, and Artificial Intelligence are all relevant to the future of reliability practices in manufacturing and utilities. If your organization can predict failure-free performance, not only do you avoid extremely costly downtime, you reduce the potential harms that occur when critical systems unexpectedly fail. With the explosive growth of sensors and Internet of Things technology, and Big Data infrastructure to analyze data at massive scale, is the promise of AI within our grasp?
In this keynote you'll learn:
1) What are the differences between data science, machine learning, and AI?
2) What are the use cases and success stories for AI in managing reliability for manufacturing and utilities?
3) How can your organization get started on the path to AI development?
Preventive maintenance is a known use case for IoT (Internet of Things) with numerous applications across a variety of industries. Electronic equipment such as internet modems and combined internet/cable boxes is a good example of a preventive maintenance application in telcos. Wired internet provides the main connection point to the outside world for a variety of wired and wireless devices at home: security, TV/radio, music, internet, in-home WiFi, VOIP, and more. The health of these devices is critical to good customer service, which directly impacts customer experience as measured through NPS.

Each of these devices can generate a variety of health measures (variables) and report them periodically to central collection points for analysis. There are a variety of ways to ingest, collect, and analyze such data, and analysis can range from BI/visualization to simple models to predictive models.

In this talk, I present a real-world use case covering millions of DOCSIS devices, each providing 10-20 physical measurements about its health, with a data collection infrastructure capable of ingesting hundreds of millions of data records daily. I show two practical scoring approaches for creating alerts for use by customer service staff, including field personnel. Using such scores, one can quantify quality of service (QoS) in a single number based on physical measurements. The first approach uses engineering expert knowledge to create an hourly additive score. The second leverages an hourly predictive model score, trained on device health measurements, to predict the probability of a technical support call in the hours ahead; I explain how this second score can be used as a measure of customer irritation. In each case, high scores indicate a severe deviation from normal operating conditions (the healthy operating range) or a high probability of customer irritation requiring attention from customer service or field personnel.
The methodology is generalizable to other equipment and devices in different settings and different industries.
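Assuming a modem reports a handful of hourly measurements, the two scoring approaches described above can be sketched roughly as follows. The metric names, thresholds, and weights are hypothetical placeholders, not the production values from the talk:

```python
# Hedged sketch of the two alerting approaches described in the talk.
# All metric names, thresholds, and weights below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Approach 1: expert-weighted additive score. Each out-of-range measurement
# adds a penalty, so higher scores mean further from the healthy range.
def additive_health_score(snr_db, tx_power_dbmv, uncorrectable_ratio):
    score = 0.0
    if snr_db < 30.0:                        # low downstream SNR
        score += 3.0
    if not 45.0 <= tx_power_dbmv <= 55.0:    # transmit power out of range
        score += 2.0
    score += 10.0 * uncorrectable_ratio      # uncorrectable-codeword ratio
    return score

# Approach 2: a model trained on hourly health measurements to predict the
# probability of a technical support call in the hours ahead.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))               # simulated hourly measurements
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=1000) > 1.5).astype(int)
clf = LogisticRegression().fit(X, y)
call_prob = clf.predict_proba(X[:5])[:, 1]   # "customer irritation" score
```

Either number condenses many physical measurements into a single QoS figure that customer service or field staff can rank and act on.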
In a recent project, a predictive system managed cross-border, multi-modal material flows and delivered better supply chain performance than spreadsheet-based manual replenishment. Reality checks using naive Bayesian predictions operated on an evolving supply chain data set, providing alerts and visualizations of potential problems before they became real. This improved customer satisfaction by increasing communication and on-time, in-full (OTIF) rates. Additionally, fewer change orders reduced shipment delays caused by re-filing export paperwork with the government.
This presentation covers the business situation, the improvements, and the user interface that provides operational insights to executives, supply chain experts, and users. The data architecture, mathematical modeling techniques, and lessons learned will also be presented to give insights to data scientists and system architects.
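As a rough illustration of the naive Bayes alerting idea described above (the feature names, labels, and data are all invented; the project's actual model and features are not public):

```python
# Hedged sketch: a naive Bayes classifier flags orders at risk of missing
# on-time, in-full (OTIF) delivery. Features and labels are simulated.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
n = 800
X = np.column_stack([
    rng.normal(5.0, 2.0, n),            # days in transit so far
    rng.poisson(0.3, n).astype(float),  # change orders filed on the shipment
    rng.normal(1.0, 0.5, n),            # supplier lead-time variance
])
# Simulated ground truth: long transit or any change order tends to miss OTIF
late = ((X[:, 0] > 7.0) | (X[:, 1] > 0.0)).astype(int)

nb = GaussianNB().fit(X, late)
risk = nb.predict_proba(X)[:, 1]  # probability each order misses OTIF
alerts = risk > 0.8               # surface likely problems before they occur
```

A naive Bayes model is a plausible fit for this setting because it trains quickly and can be re-fit as the supply chain data set evolves.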
Steven Ramirez, Conference Chair, wraps up what we've learned at PAW Manufacturing.
Predictive Analytics World for Manufacturing - Las Vegas - Day 2 - Wednesday, June 6th, 2018
What does a 90-year-old company have to do with predictive analytics? Quite frankly, everything. Through predictive analytics services and solutions, Caterpillar is increasing the value of its equipment and commitment to its customers by helping them predict potential outcomes and make better business decisions. Morgan Vawter, Caterpillar’s Chief Analytics Director, will share how this legacy company is combining domain expertise with vast amounts of data and advanced mathematical techniques to help its customers build a better world through better fuel productivity, increased safety, planned downtime and other predictable efficiencies.
Rexer Analytics has been surveying analytic professionals for over a decade. In 2017, over a thousand people from around the world participated in the 8th Data Science Survey. In this PAW session, Karl Rexer will present highlights of recent survey results and discuss trends from the past decade. Highlights will include:
- Key algorithms
- Deep learning adoption and key techniques
- Challenges of self-service analytics
- Analytic software adoption
- Job satisfaction & job prospects
diwo (Data In, Wisdom Out) acts as a sixth sense, continuously working in the background to sense and quantify invisible opportunities and situations before they arise, so business users are empowered to act in time and make optimal decisions. Its uniquely developed framework harnesses cognitive computing and machine learning to proactively guide business decision making. diwo's business-first approach prioritizes business and user context, scalability, and value on day one. The platform’s design tackles adoption issues common with other transformative initiatives. As diwo is vertical-agnostic, it is applicable to a wide range of specific business scenarios across industries. Its scalability allows a variety of businesses to roll it out according to their own organizational needs by integrating with existing data and analytics assets.
The Data Mining Cost Model (DMCoMo) attempts to provide a robust parametric approach for estimating the cost of a data mining project. This presentation will provide a practical approach for applying parametric estimation concepts in analytics-based manufacturing organizations with specific focus on using project management techniques for estimating and deploying a data mining project. Additional focus will be placed on how to leverage the results of a data mining project for putting a data mining process into production. The presentation will also recommend further areas of application and research.
Are Industry 4.0 and IIoT just hype, or do these technologies provide measurable ROI and benefits? This session explores video case studies of what forward-thinking companies are doing today, and the ROI and value of each:
• Real-time data analytics and control of high technology “lights out” production lines
• Cloud technology automatically tracking and replenishing raw material levels at workstations
• Global factory and supply chain (remote) visibility: real-time visibility of KPIs, and real-time alerts for WIP, yield, and throughput on hundreds of production lines worldwide
• How equipment connectivity automates quality records and “forces” day-to-day regulatory compliance management
Machine learning (aka predictive analytics) only delivers value when acted upon – that is, when deployed. Only a carefully designed management process ensures that the analytics' output is pragmatically viable for operationalization, and that company operations – your internal consumers of analytics – know best how to employ the product they're consuming.
Led by moderator James Taylor, an industry leader in the operationalization of predictive models, this expert panel explores and expands upon PAW Business' Track 1 topic, operationalization, to provide insights as to how best to execute on the functional deployment of machine learning.
An explosion in data availability from integrated data supply chains cutting across traditional boundaries, the cheap cost of data storage, and a massive increase in computing power are expanding the universe of business problems that can be addressed using predictive analytics in general and machine learning in particular. While the use of machine learning to understand complex patterns and identify non-linear relationships offers a great opportunity to enhance the accuracy of predictive models, risk management as a domain has seen relatively low adoption of machine learning algorithms.
In this presentation, Vivek will discuss the key drivers for using machine learning and present a case study in which machine learning algorithms were successfully used to enhance the accuracy of risk models for a large global organization as it transformed its data supply chain and its downstream and upstream technology infrastructure to digest large amounts of data and operationalize the analytical outcomes in a more agile manner.
The use of Predictive Analytics and Machine Learning is imperative across all channels and departments in Financial Services. Angoss differentiates itself by its easy-to-use visual data science platform with intuitive workflows and best-in-class Decision and Strategy Trees. We will discuss how best to utilize the Angoss Software Suite as the ultimate complete solution.
At Hitachi’s Center for Social Innovation, we have been using machine learning and artificial intelligence to develop cutting-edge solutions and push the envelope in the area of Industrial IoT. These problems range from increasing operational efficiencies and reducing costs to creating new AI-enabled products and services. Drawing on this rich experience, we will present a systematic taxonomy of industrial analytics problems. We will walk through the different problem areas and give examples of how AI/ML is being used as a tool to address these problems. We will conclude by pointing out new research directions and exciting new developments in the area of industrial analytics.
Prescriptive Analytics is the area of machine learning dedicated to finding the best course of action for a given situation. Prescriptive analytics informs and evolves decision logic: whether to act (or not act) and what action to take. In this session, we will examine prescriptive analytics, its components, and its methods. You will also learn how to go beyond merely knowing in predictive maintenance to prescriptive service that delivers immediate ROI.
Manufacturers have been actively exploring the use of predictive maintenance to improve productivity and optimize overall throughput. With advancements in the ability to easily access machine data and apply sophisticated analytics, predictive maintenance projects can be an effective tool for reducing the overall cost of unplanned downtime. However, we’ve learned that the most successful manufacturers are those that build capabilities for data-driven process reengineering before embarking on predictive maintenance efforts. By viewing predictive maintenance within this broader data-driven process reengineering context, manufacturers can build the foundation needed to ensure these projects deliver on their promise.
Workshops - Thursday, June 7th, 2018
Full-day: 8:30am – 4:30pm
This one-day session reveals the subtle mistakes analytics practitioners often make when facing a new challenge (the “deadly dozen”), and clearly explains the advanced methods seasoned experts use to avoid those pitfalls and build accurate and reliable models. Click workshop title above for the fully detailed description.
Full-day: 8:30am – 4:30pm
This workshop dives into the key ensemble approaches, including Bagging, Random Forests, and Stochastic Gradient Boosting. Click workshop title above for the fully detailed description.
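The three ensemble methods named above are all available in scikit-learn; a minimal, untuned comparison on a synthetic dataset (dataset and parameters chosen arbitrarily for illustration, not taken from the workshop) might look like:

```python
# Hedged sketch: fit the three ensemble approaches named above on a toy
# classification problem. All parameters are illustrative defaults.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "bagging": BaggingClassifier(n_estimators=50, random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=50, random_state=0),
    # subsample < 1.0 is what makes the gradient boosting "stochastic"
    "stochastic_gb": GradientBoostingClassifier(subsample=0.8, random_state=0),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
```

Bagging and random forests average many independently trained trees, while gradient boosting fits trees sequentially to the previous trees' errors; the workshop covers when each trade-off pays off.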
Full-day: 8:30am – 4:30pm
This workshop demonstrates how to build uplift models (aka net lift models) that optimize the incremental impact of marketing campaigns, covering the pros and cons of various core analytical approaches. Click workshop title above for the fully detailed description.
Full-day: 8:30am – 4:30pm
Gain the power to extract signals from big data on your own, without relying on data engineers and Hadoop specialists. Click workshop title above for the fully detailed description.