Description: This book provides a structured treatment of the key principles and techniques for enabling efficient processing of deep neural networks (DNNs). DNNs are currently widely used in many artificial intelligence (AI) applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this accuracy comes at the cost of high computational complexity. Therefore, techniques that enable efficient processing of DNNs to improve key metrics, such as energy efficiency, throughput, and latency, without sacrificing accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. The book includes background on DNN processing; a description and taxonomy of hardware architectural approaches for designing DNN accelerators; key metrics for evaluating and comparing different designs; features of DNN processing that are amenable to hardware/algorithm co-design for improved energy efficiency and throughput; and opportunities for applying new technologies. Readers will find a structured introduction to the field, along with a formalization and organization of key concepts from contemporary work that provide insights that may spark new ideas.
Price: 40 USD
Location: Sunnyvale, California
End Time: 2024-10-31T00:50:08.000Z
Shipping Cost: 9 USD
Item Specifics
Returns: Not accepted
Subject: Networks
Item Length: 9.2 in.
Item Height: 0.7 in.
Item Width: 7.5 in.
Author: Yu-Hsin Chen, Tien-Ju Yang, Vivienne Sze, Joel S. Emer
Publication Name: Efficient Processing of Deep Neural Networks
Format: Trade Paperback
Language: English
Publisher: Morgan & Claypool Publishers
Publication Year: 2020
Series: Synthesis Lectures on Computer Architecture
Type: Textbook
Item Weight: 20.8 oz
Number of Pages: 341