Embedded Deep Learning

posted Aug 5, 2018, 8:30 PM by MUHAMMAD MUN`IM AHMAD ZABIDI   [ updated Aug 7, 2018, 5:21 AM ]
Market trends:

  • By 2019, 75% of enterprise and ISV development will include AI or ML (IDC)
  • By 2020, 5.6 billion IoT devices will be connected to an edge solution

Energy-efficient deep learning sits at the intersection of machine learning and computer architecture. New architectures can potentially revolutionize deep learning and allow it to be deployed at scale.

State-of-the-art algorithms for applications like face recognition, object identification, and tracking rely on deep learning models for inference. Edge-based systems like security cameras and self-driving cars need deep learning to go beyond a minimum viable product. However, the deciding factors for such systems are power, performance, and cost: these devices have limited bandwidth, little tolerance for latency, and strict privacy constraints. The situation is exacerbated by the fact that deep learning inference requires on the order of teraops of computation, which translates to a few seconds per inference for some of the more complex networks on embedded hardware. Such high latencies are impractical for edge devices, which typically need real-time response, and the sheer compute intensity means many edge devices cannot afford deep learning inference at all.

Deep learning is necessary to bring intelligence and autonomy to the edge.

The first wave of embedded AI is marked by Apple's Siri. It is not truly embedded, though, because Siri relies on the cloud to perform the full speech recognition process.

The second wave is marked by Apple's Face ID, where the intelligence runs on the device itself, independent of the cloud.