Chalk Talk

Scaling Embedded Deep Learning Inference Performance with Dedicated Neural Network DSP


Neural networks are taking over a broad range of exciting applications these days, but the amount of computation required for neural network inferencing can be daunting. In this episode of Chalk Talk, Amelia Dalton chats with Pulin Desai of Cadence Design Systems about new processor IP designed specifically for neural network inferencing.

Click here for more information about Tensilica Vision DSPs for Imaging, Computer Vision, and Neural Networks


Featured Blogs
Mar 21, 2018
There are few better places to be in March than San Diego. SoCal sunshine always beats wintry weather. Thankfully, OFC 2018 was located at the San Diego Convention Center instead of the snowy US East Coast. The sunny weather outside acted as a harbinger of the positive energy...
Mar 21, 2018
During the war, a great deal of computer technology was developed in both the UK and the US (and in Germany, although that was mostly electromechanical and so was something of a dead end). After the war, the US and UK took totally different approaches. In the UK, everyone who had worked at ...
Mar 20, 2018
In my continuing series about SerDes design, I've discussed the first steps you need to take toward SerDes channel compliance and how protocols and analysis methods have evolved with increased data rates. In this blog, we'll take a look at eye diagrams and how they a...
Mar 5, 2018
Next-generation networking solutions are pushing processing out of the cloud and towards the network's edge. At the same time, processing structures architected around programmable logic provide the ability to make computing much more data-centric. Programmable logic make...