Data Orchestration in Deep Learning Accelerators

About Data Orchestration in Deep Learning Accelerators

This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of parameters and involve billions of computations, which necessitates extensive data movement between memory and on-chip processing engines. Because the cost of this data movement now exceeds the cost of the computation itself, DNN accelerators must carefully orchestrate data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with the data orchestration challenges posed by compressed and sparse DNNs, and with future trends. The target audience is students, engineers, and researchers interested in designing high-performance, low-energy accelerators for DNN inference.
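The central claim above, that off-chip data movement rather than arithmetic dominates cost, is easy to see with a back-of-the-envelope model. The sketch below is illustrative and not from the book; the tile size T and the simple access-counting model (which ignores output writes and buffer capacity limits) are assumptions. It compares the off-chip element fetches of a naive matrix multiply against a tiled schedule that stages T×T blocks of the operands in an on-chip buffer, the kind of data reuse the book's dataflows are designed to maximize:

```python
# Minimal sketch (illustrative, not from the book): count off-chip element
# fetches for C = A @ B under two schedules. M, N, K, and T are assumed
# parameters; the model ignores output writes and buffer capacity limits.

def naive_fetches(M: int, N: int, K: int) -> int:
    # No on-chip reuse: every multiply-accumulate pulls one element of A
    # and one element of B from DRAM.
    return 2 * M * N * K

def tiled_fetches(M: int, N: int, K: int, T: int) -> int:
    # Tiled schedule: each T x T block of A and B is fetched from DRAM
    # once per tile-level step and reused T times from the on-chip buffer.
    assert M % T == 0 and N % T == 0 and K % T == 0
    tile_steps = (M // T) * (N // T) * (K // T)
    return tile_steps * 2 * T * T   # = 2*M*N*K / T

if __name__ == "__main__":
    M = N = K = 256
    T = 32
    print("naive:", naive_fetches(M, N, K))      # 33,554,432 fetches
    print("tiled:", tiled_fetches(M, N, K, T))   # 1,048,576 fetches (32x fewer)
```

With M = N = K = 256 and T = 32, the tiled schedule performs 32× fewer off-chip fetches; choosing the loop order and tiling that maximize this reuse factor is precisely the dataflow design problem the book addresses.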

  • Language: English
  • ISBN: 9783031006395
  • Binding: Paperback
  • Pages: 168
  • Published: August 17, 2020
  • Dimensions: 191 × 10 × 235 mm
  • Weight: 327 g

