Originally published in SAP Blogs, October 16, 2019.
Machine learning algorithms, especially deep learning neural networks, often produce models that improve prediction accuracy. But that accuracy comes at the expense of higher computation and memory consumption. A deep learning model consists of layers of computation in which thousands of parameters are computed in each layer and passed to the next, iteratively. The higher the dimensionality of the input data (e.g., a high-resolution image), the greater the computational need. GPU farms in the cloud are often used to meet these computational requirements.
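To make the cost concrete, here is a minimal sketch (not from the original article) of a toy fully connected network; the layer widths are hypothetical and chosen only to show how parameter count and input dimensionality drive the per-inference workload.

```python
# Toy example: count parameters and multiply-accumulate (MAC) operations
# for a small fully connected network with a high-dimensional input.
layer_sizes = [224 * 224 * 3, 1024, 512, 10]  # hypothetical input size and layer widths

params = 0
macs = 0  # multiply-accumulate operations for one forward pass
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    params += n_in * n_out + n_out   # weights + biases in this layer
    macs += n_in * n_out             # one MAC per weight per inference

print(f"parameters: {params:,}")          # roughly 155 million for this toy network
print(f"MACs per inference: {macs:,}")
```

Even this simple example lands in the hundreds of millions of operations per inference, which is why running such models on small edge devices is challenging.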
When machine learning is used for use cases such as detecting product quality in manufacturing, predicting the health of a critical piece of equipment, or video surveillance, inference is expected to happen in near real time. Inferencing in the cloud requires moving data from the source to the cloud and introduces several challenges: (a) it is costly to bring data to the cloud for real-time inference, (b) moving data from the edge to the cloud adds network latency, (c) sending data from the edge to the cloud introduces scalability issues as the number of connected devices increases, and (d) sending user data to the cloud raises security and privacy concerns.
Edge computing is a distributed computing paradigm that brings computation and data storage closer to the location where they are needed, improving response times and saving bandwidth. Although edge computing addresses the connectivity, latency, scalability, and security challenges, the computational resources that deep learning models require are hard to provide on smaller edge devices.
Before determining the type of hardware for edge devices, it is important to establish key performance metrics for inference. At a high level, the key performance metrics for machine learning at the edge are latency, throughput, energy consumption of the device, and accuracy. Latency is the time it takes to run inference on one data point, throughput is the number of inference calls per second, and accuracy is the confidence level in the prediction output that the use case requires. Depending on these requirements, one can take one or more of the following approaches to speed up inference on a resource-constrained edge device.
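As an illustration of how latency and throughput can be measured in practice, here is a small benchmarking sketch (an assumption of this write-up, not code from the article); the `predict_fn` and dummy input stand in for whatever model and data the edge device actually runs.

```python
# Minimal sketch: measure average latency and throughput of an inference function.
import time
import numpy as np

def benchmark(predict_fn, sample, warmup=10, runs=100):
    """predict_fn is any callable that runs one inference on `sample`."""
    for _ in range(warmup):            # warm up caches / lazy initialization before timing
        predict_fn(sample)
    start = time.perf_counter()
    for _ in range(runs):
        predict_fn(sample)
    elapsed = time.perf_counter() - start
    latency_ms = elapsed / runs * 1000  # average time per inference
    throughput = runs / elapsed         # inference calls per second
    return latency_ms, throughput

# Dummy model and input standing in for a real edge model and camera frame.
dummy_sample = np.random.rand(1, 224, 224, 3).astype("float32")
latency, tput = benchmark(lambda x: np.tanh(x).mean(), dummy_sample)
print(f"latency: {latency:.2f} ms, throughput: {tput:.0f} inferences/s")
```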
The right machine learning model for the edge device
Researchers have found that reducing the number of parameters in deep neural network models helps decrease the computational resources needed for model inference. Popular models that apply such techniques with minimal (or no) accuracy degradation include YOLO, MobileNets, Single Shot Detector (SSD), and SqueezeNet. Many of these pre-trained models are available to download and use in open-source frameworks such as TensorFlow and PyTorch.
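For instance, a pre-trained MobileNetV2 can be loaded from `tf.keras.applications` and used for a single inference, much as one might on an edge device. The sketch below is illustrative only; the random "frame" stands in for a real camera image, and downloading the ImageNet weights requires network access.

```python
# Sketch: load a compact pre-trained model and run one inference.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")
print(f"parameters: {model.count_params():,}")  # a few million, far fewer than large CNNs

# Dummy 224x224 RGB image standing in for a real camera frame.
frame = np.random.randint(0, 256, size=(1, 224, 224, 3)).astype("float32")
inputs = tf.keras.applications.mobilenet_v2.preprocess_input(frame)

probs = model.predict(inputs)
top = tf.keras.applications.mobilenet_v2.decode_predictions(probs, top=3)[0]
print(top)  # [(class_id, class_name, probability), ...]
```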