Deep Learning Meets the Internet of Things: How New Frameworks Will Drive the Next Generation of Mobile Apps

By Lori Cameron
Published 07/31/2018
As mobile devices take over the world, researchers now study how to build deep learning networks that can keep up.

One collaboration between academia and industry has analyzed a number of related deep learning frameworks, and the results look promising.

When Facebook suggests new friends, Netflix recommends movies, Spotify recognizes a song, or Uber accurately predicts when your driver will arrive—they all use “deep learning”—complex algorithms that gather data about you and your environment to provide you with better recommendations and service.

“Recent advances in deep learning have greatly changed the way that computing devices process human-centric content such as images, video, speech, and audio. Applying deep neural networks to IoT devices could thus bring about a generation of applications capable of performing complex sensing and recognition tasks to support a new realm of interactions between humans and their physical surroundings,” say the authors of “Deep Learning for the Internet of Things,” which appears in the May 2018 issue of Computer.

Asking the right questions

The researchers pose four important questions that must be answered before mobile apps can effectively implement deep neural network technology:

  • What deep neural network structures can effectively process and fuse sensory input data for diverse IoT applications?
  • How can resource consumption of deep learning models be reduced such that they can be efficiently deployed on resource-constrained IoT devices?
  • How can confidence measurements be computed correctly in deep learning predictions for IoT applications?
  • Finally, how can the need for labeled data be minimized in learning?

In short, for deep neural networks to be used successfully in mobile apps, they must be capable of fusing data from a variety of IoT devices, as well as be energy-efficient, accurate, and able to function with minimal labeled data.

DeepSense as a solution for diverse IoT applications

The authors reviewed a general deep learning framework, called DeepSense, for processing sensory input data from diverse IoT applications. The framework contains all the essential elements but can be customized for the learning needs of various IoT apps.
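In broad strokes, the framework runs each sensor stream through its own convolutional layers, fuses the resulting features, and passes them to recurrent layers that produce a fixed-size state for classification. The toy numpy sketch below illustrates only that general pattern; the layer sizes, random kernels, and plain tanh recurrence are invented for illustration and are not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Valid 1-D convolution of signal x with kernel w."""
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

def relu(x):
    return np.maximum(x, 0.0)

# Two sensor streams (say, accelerometer and gyroscope), 32 samples each.
acc = rng.standard_normal(32)
gyro = rng.standard_normal(32)

# Per-sensor convolutional feature extraction (a separate kernel per modality).
w_acc, w_gyro = rng.standard_normal(5), rng.standard_normal(5)
f_acc = relu(conv1d(acc, w_acc))     # shape (28,)
f_gyro = relu(conv1d(gyro, w_gyro))  # shape (28,)

# Fusion: concatenate the per-sensor feature maps.
fused = np.concatenate([f_acc, f_gyro])  # shape (56,)

# A simple recurrent layer over the fused features (a stand-in for the
# GRU layers a real implementation would use).
W_h = rng.standard_normal((8, 8)) * 0.1
W_x = rng.standard_normal((8, 1)) * 0.1
h = np.zeros(8)
for x_t in fused:
    h = np.tanh(W_h @ h + (W_x * x_t).ravel())

print(h.shape)  # (8,) -- a fixed-size state fed to a classifier head
```

The key design point is that each modality gets its own convolutional front end before fusion, so the network can learn sensor-specific features while the recurrent layers model dynamics across the fused sequence.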

Main architecture of the DeepSense framework.

The DeepSense-based algorithms (DeepSense and its three variants) outperform the baseline algorithms by a large margin, as the two comparison graphs below show.

Performance metrics of heterogeneous human activity recognition (HHAR) task with the DeepSense framework.
Performance metrics of UserID task with the DeepSense framework.

DeepIoT as a solution for energy efficiency

A particularly effective deep learning compression algorithm, called DeepIoT, can directly compress the structures of commonly used deep neural networks.

It “thins” the network structure by dropping hidden elements, thereby compressing the network without redesigning it.
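Conceptually, the compressor learns a dropout "keep" probability for each hidden element and discards the elements unlikely to survive dropout, so the weight matrices shrink. The numpy sketch below shows only that pruning step, with made-up keep probabilities standing in for the learned ones:

```python
import numpy as np

rng = np.random.default_rng(1)

# A dense layer with 16 hidden units: weights (16, 8) and biases (16,).
W = rng.standard_normal((16, 8))
b = rng.standard_normal(16)

# Per-unit keep probabilities, as a DeepIoT-style compressor might learn
# them (here simply drawn at random for illustration).
keep_prob = rng.uniform(0.0, 1.0, size=16)

# Thin the layer: keep only the hidden units whose keep probability
# clears a threshold, and slice the weights and biases accordingly.
threshold = 0.5
kept = keep_prob > threshold
W_small, b_small = W[kept], b[kept]

print(W.shape, "->", W_small.shape)
```

In a full network, the following layer's input dimension would shrink by the same amount, which is why dropping hidden elements reduces memory, computation, and energy at once.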

Overall DeepIoT system framework. Orange boxes represent dropout operations. Green boxes represent parameters of the original neural network.

The compressed model can be deployed on commodity devices, preserving prediction accuracy while saving energy.

The tradeoff between testing accuracy and energy consumption.

RDeepSense as a solution for accuracy

RDeepSense provides simple methods for generating well-calibrated uncertainty estimates for the predictions computed by deep neural networks.
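A calibration curve like the ones shown below bins predictions by their stated confidence and checks that, within each bin, the observed accuracy matches that confidence. The numpy sketch below computes such a curve from synthetic predictions that are well calibrated by construction; the data and bin count are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated predictions: each has a stated confidence p, and the binary
# outcome is drawn with exactly that probability, so the "model" is
# perfectly calibrated by construction.
p = rng.uniform(0.0, 1.0, size=10_000)
outcome = rng.uniform(size=10_000) < p

# Calibration curve: bin predictions by confidence and compare the mean
# stated confidence with the observed accuracy in each bin.
bins = np.linspace(0.0, 1.0, 11)
idx = np.digitize(p, bins) - 1
conf = np.array([p[idx == i].mean() for i in range(10)])
acc = np.array([outcome[idx == i].mean() for i in range(10)])

# A well-calibrated model stays close to the diagonal conf == acc.
print(np.abs(conf - acc).max())
```

An overconfident model would sit below that diagonal (stated confidence higher than observed accuracy), which is the failure mode the calibration curves in the figures expose.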

In the two graphs below, RDeepSense comes closest to optimal performance.

The calibration curves of RDeepSense, GP, and MCDrop-k.
The calibration curves of RDeepSense, GP, and SSP-k.

SenseGAN as a solution for minimizing data labeling

A semi-supervised strategy, called SenseGAN, greatly reduces the need for labeled data. The researchers use heterogeneous human activity recognition (HHAR) with the DeepSense framework as an example.

Semisupervised training of HHAR with DeepSense framework.

One network tries to trick the other.

“The GAN (generative adversarial networks) training strategy is to define a game between two competing networks. The generator network maps a source of noise to the input space. The discriminator network receives either a generated sample or a true data sample and must distinguish between the two. The generator is trained to fool the discriminator. The GAN training strategy leverages the unlabeled data to increase the capacity of generator and discriminator networks, which explicitly improve the discriminating ability of the classifier in return,” the researchers explain.
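The sketch below plays one round of that game on a toy 1-D problem: a linear generator, a logistic discriminator, and hand-derived gradient steps on the cross-entropy game loss. Everything here, including the data distribution, the parameterization, and the learning rate, is invented for illustration and is far simpler than the networks the researchers describe.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "True" data: samples from N(3, 1).
real = rng.normal(3.0, 1.0, size=256)

# Generator: maps noise z to the input space via fake = a*z + b.
a, b = 1.0, 0.0
z = rng.standard_normal(256)
fake = a * z + b

# Discriminator: logistic score D(x) = sigmoid(w*x + c); it must tell
# real samples (label 1) from generated ones (label 0).
w, c = 0.1, 0.0
lr = 0.05

def d_loss(w, c):
    """Discriminator cross-entropy on the current real/fake batch."""
    return (-np.log(sigmoid(w * real + c))
            - np.log(1.0 - sigmoid(w * fake + c))).mean()

before = d_loss(w, c)

# One discriminator step: push D(real) toward 1 and D(fake) toward 0.
d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
w -= lr * (((d_real - 1.0) * real).mean() + (d_fake * fake).mean())
c -= lr * ((d_real - 1.0).mean() + d_fake.mean())

# One generator step: adjust (a, b) to *fool* the updated discriminator,
# i.e. minimize -log D(fake).
d_fake = sigmoid(w * fake + c)
g = -(1.0 - d_fake) * w          # gradient of -log D(fake) w.r.t. fake
a -= lr * (g * z).mean()
b -= lr * g.mean()

print(d_loss(w, c) < before)  # the discriminator step lowered its loss
```

Repeating these alternating steps is the "game" the quote describes: each player's improvement sharpens the other, and with unlabeled data in the loop the discriminator's growing ability to tell real from generated samples is what feeds back into the classifier.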

The authors of the new study are Shuochao Yao, Yiran Zhao, Huajie Shao, Chao Zhang, and Professor Tarek Abdelzaher, all of the University of Illinois at Urbana-Champaign; Aston Zhang, an applied scientist at Amazon AI; Shaohan Hu, a research staff member at the IBM Thomas J. Watson Research Center; and Lu Su, an assistant professor in the Department of Computer Science and Engineering at the State University of New York at Buffalo.

About Lori Cameron

Lori Cameron is a Senior Writer for the IEEE Computer Society and currently writes regular features for Computer magazine, Computing Edge, and the Computing Now and Magazine Roundup websites. Contact her at l.cameron@computer.org. Follow her on LinkedIn.