Market trends:
Energy-efficient deep learning sits at the intersection of machine learning and computer architecture. New architectures can potentially revolutionize deep learning and enable its deployment at scale. State-of-the-art algorithms for applications like face recognition, object identification, and tracking rely on deep-learning-based models for inference. Edge-based systems like security cameras and self-driving cars need deep learning to go beyond the minimum viable product. However, the core deciding factors for such edge-based systems are power, performance, and cost: these devices have limited bandwidth, very little latency tolerance, and strict privacy constraints. The situation is further exacerbated by the fact that deep learning algorithms require on the order of teraops of computation for a single inference at test time, which translates to a few seconds per inference for some of the more complex networks. Such high latencies are impractical for edge devices, which typically need real-time response. Moreover, deep learning inference is extremely compute-intensive, often beyond what edge devices can afford.

Yet deep learning is necessary to bring intelligence and autonomy to the edge. The first wave of embedded AI was marked by Apple's Siri, though it is not truly embedded: Siri relies on the cloud to perform the full speech-recognition process. The second wave is marked by Apple's Face ID, where the intelligence happens on the device, independent of the cloud.
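The teraops-to-seconds claim can be sanity-checked with a back-of-envelope calculation. The sketch below uses hypothetical, illustrative numbers (the workload and throughput figures are assumptions, not measurements of any specific network or device), and ignores memory and I/O overheads, which dominate in practice:

```python
def inference_latency_s(ops_per_inference: float, device_ops_per_s: float) -> float:
    """Compute-bound latency (seconds) for one inference.

    Ignores memory bandwidth and I/O, so this is a lower bound.
    """
    return ops_per_inference / device_ops_per_s


# Hypothetical complex network: ~3 teraops per inference.
ops = 3e12
# Hypothetical embedded processor sustaining ~1 TOPS effective throughput.
throughput = 1e12

print(f"{inference_latency_s(ops, throughput):.1f} s per inference")  # 3.0 s
```

Even this optimistic compute-only estimate lands in the few-seconds range, far from the real-time response edge devices need.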
