
Deep Learning with MATLAB

Making AI Accessible

We are in the midst of a revolution in computing that will uproot our entire digital ecosystem at its core. If IoT has been the banner buzzword for the tech industry for the past several years, you’ll do well to notice that its flag has been lowered just enough for the AI burgee to take the top position. It is no secret that Artificial Intelligence (AI) has reached a critical inflection point. The implications are being felt across an enormous gamut of existing applications, even as AI enables entirely new capabilities that never existed before.

What exactly is that inflection point, and why is it happening now? It turns out that a confluence of factors has come together to make AI suddenly far more effective and practical. The performance of neural networks on many applications now far exceeds the capabilities of purpose-built traditional algorithmic approaches. Large data sets and pre-trained models dramatically reduce the development effort required to build an AI model. And, finally, new compute hardware dramatically reduces the impact of the enormous computation required to make neural networks tick.

Various flavors of AI have been around for decades, and even the taxonomy of the AI approach itself is deep and complex. Just the Neural Network (NN) subset of AI breaks down into over twenty sub-types – from perceptron, feed-forward, radial basis, deep feed-forward (DFF), recurrent (RNN), and long short-term memory (LSTM) networks – up to massively complex graphs such as the deep convolutional inverse graphics network (or DCIGN for those “in the know”). Across all those sub-types are concepts such as “Deep Learning,” which is an overall class of AI methods using cascading layers of nodes.

The point is: simply understanding the types of AI currently in use is a formidable field of study. AI is an enormously complex discipline with a remarkably small community of experts. This brings us to the current situation where AI capabilities have suddenly exploded and there is a dearth of talent available to help new applications take advantage of it.

MathWorks is coming to our rescue.

There are a great many brilliant engineers out there in the world who have no background in AI, but who are lead implementers of new development projects. In more and more cases, those projects would have significant benefits if AI were applied, but the shortage of data scientists and related experts is a huge barrier. How do we get around that problem without some serious investment in the recruiting of pedigreed AI talent?  

Most engineers and system designers have at least some experience (and most of us a LOT of it) with MATLAB and/or Simulink. MathWorks has been the dominant supplier of tools for algorithm development, as well as just plain old “doing math,” for decades now. In fact, raise your hand if you haven’t been using MATLAB in some capacity since at least college… Ah crap! We have old-timers in the… I mean… ahem… “Senior Engineers” in the room. Stop waving those slide rules around before someone gets hurt! You folks can do AI too, as it turns out.

With their September 2017 release, “R2017b,” MathWorks pushed out a host of capabilities designed to let the average system/application designer take advantage of deep learning, using the familiar MATLAB and Simulink environments. With the recent 2018 release, “R2018a,” they have reinforced and fortified their deep learning capabilities. The company says that MATLAB now has a complete, start-to-finish deep learning flow, from gathering and labeling data, to building and accessing models, training and testing, and finally to deployment and inferencing.

Often, the data for deep learning applications is in the form of images or video. Generally, humans do a kind of brute-force approach to labeling that data – marking objects in images. “This is a stop sign,” “this is a tree,” “this is a car,” “this is an alien spacecraft.” This “ground truth” labeling process is facilitated within MATLAB with a new app that allows you to label pixels and regions for semantic segmentation.
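To give a rough feel for what that looks like in practice, here’s a minimal sketch – the folder and file names are purely hypothetical, and the exact app and function names may shift a bit between releases:

```matlab
% Minimal sketch (hypothetical folder/file names): open the labeling app,
% then turn the exported ground truth into datastores suitable for
% training a semantic segmentation network.
imageLabeler('trafficImages/');    % label regions and pixels interactively

% Assuming the app's groundTruth object was exported and saved as gTruth.mat:
load('gTruth.mat', 'gTruth');
imds = imageDatastore(gTruth.DataSource.Source);   % the labeled images
pxds = pixelLabelDatastore(gTruth);                % per-pixel class labels
```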

Often, you’ll also (or alternatively) want to access pre-trained models where someone else has already done the heavy lifting. There’s no point in becoming the five-thousandth person to train a model to recognize stop signs. Better to just take advantage of what others have already done. Models are generally exchanged in one of several deep-learning frameworks, such as Caffe and TensorFlow. Pre-trained CNN models such as AlexNet, VGG-16, and VGG-19 are capable of some spectacular image categorization, honed through years of development, testing, and even competition. Interestingly, the ImageNet project runs a competition each year called the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), where various CNN models compete to correctly classify and detect objects and scenes. Many of these models can be imported directly into MATLAB.
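For the curious, here’s roughly what that looks like – a sketch, not gospel, assuming the relevant pre-trained model support packages are installed; the Caffe file names below are placeholders:

```matlab
% Minimal sketch: load a pre-trained CNN and classify an image, or import
% a model trained in another framework. Assumes the AlexNet support
% package is installed; the Caffe file names are placeholders.
net = alexnet;                                        % pre-trained AlexNet
img = imresize(imread('peppers.png'), net.Layers(1).InputSize(1:2));
label = classify(net, img)                            % predicted category

% Importing a model trained elsewhere:
caffeNet = importCaffeNetwork('deploy.prototxt', 'weights.caffemodel');
```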

Moving on to the training phase, we bring in the big guns computationally. Training can be very compute intensive, and MathWorks allows you to target GPUs or even multi-GPU clusters to accelerate training. There is an “automatic” mode that helps optimize GPU acceleration without requiring an excessive amount of custom coding. GPUs can significantly accelerate the training process, which is inherently highly parallelizable.
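Steering training onto one or more GPUs is mostly a matter of setting options. A hedged sketch follows – here, ‘imds’ (a labeled image datastore) and ‘layers’ (the network’s layer array) are assumed to already exist, and the option values are illustrative rather than recommendations:

```matlab
% Minimal sketch of GPU-accelerated training. 'imds' and 'layers' are
% assumed to exist already; option values are illustrative only.
opts = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'multi-gpu', ...   % or 'auto', 'gpu', 'parallel'
    'MaxEpochs', 10, ...
    'MiniBatchSize', 64, ...
    'InitialLearnRate', 1e-3);
trainedNet = trainNetwork(imds, layers, opts);
```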

Once you’ve got a trained model, you want to see how it does in real-world operation. This “inferencing” phase is all about performance and latency, and it has different computational requirements from training. MATLAB helps out here as well. MathWorks claims that MATLAB can run trained models at 2.5x the speed of TensorFlow. With the latest release, MathWorks is offering “GPU Coder,” which converts models to NVIDIA CUDA code optimized for GPU execution. The company claims that this can result in model performance up to 7x the speed of TensorFlow and 4.5x the speed of Caffe2.
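As a rough illustration of the GPU Coder flow (file and function names here are placeholders, and the exact entry points and options have shifted somewhat across releases):

```matlab
% Rough sketch of generating CUDA code for inference with GPU Coder.
% The entry-point function myPredict.m (a placeholder name) wraps the
% trained, saved network:
%
%   function out = myPredict(in) %#codegen
%       persistent net;
%       if isempty(net)
%           net = coder.loadDeepLearningNetwork('trainedNet.mat');
%       end
%       out = predict(net, in);
%   end
%
cfg = coder.gpuConfig('mex');   % target a CUDA MEX function for host-side testing
codegen -config cfg myPredict -args {ones(227,227,3,'single')} -report
```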

The converted code can be deployed in embedded systems and other field-ready devices, completing the deep-learning application development cycle from data to deployment. The “whole flow” approach that MathWorks has taken is nice – particularly given their target audience – the 99% of us who are engineers with no formal AI training. In fact, we need more of this kind of attention to bringing deep-learning techniques to the masses as neural networks continue to gain traction in more and more application areas.
