Chalk Talk

Scaling Embedded Deep Learning Inference Performance with Dedicated Neural Network DSP

 

Neural networks are taking over a broad range of exciting applications these days, but the amount of computation required for neural network inferencing can be daunting. In this episode of Chalk Talk, Amelia Dalton chats with Pulin Desai of Cadence Design Systems about new processor IP designed specifically for neural network inferencing.

Click here for more information about Tensilica Vision DSPs for Imaging, Computer Vision, and Neural Networks

