
Deep Learning with MATLAB

Making AI Accessible

We are in the midst of a revolution in computing that will uproot our entire digital ecosystem at its core. If IoT has been the banner buzzword for the tech industry for the past several years, you’ll do well to notice that its flag has been lowered just enough for the AI burgee to take the top position. It is no secret that Artificial Intelligence (AI) has reached a critical inflection point, and the implications are being felt across an enormous gamut of existing applications, even as AI enables entirely new capabilities that have never existed before.

What exactly is that inflection point, and why is it happening now? It turns out that a confluence of factors has made AI suddenly far more effective and practical. The performance of neural networks on many applications now far exceeds the capabilities of purpose-built traditional algorithmic approaches. Large data sets and pre-trained models dramatically reduce the development effort required to build an AI model. And, finally, new compute hardware dramatically reduces the impact of the enormous computation required to make neural networks tick.

Various flavors of AI have been around for decades, and even the taxonomy of the AI approach itself is deep and complex. Just the Neural Network (NN) subset of AI breaks down into over twenty sub-types – from perceptron, feedforward, radial basis, deep feed forward (DFF), recurrent (RNN), and long short-term memory (LSTM) networks – up to massively complex graphs such as the deep convolutional inverse graphics network (or DCIGN for those “in the know”). Across all those sub-types are concepts such as “Deep Learning,” which is an overall class of AI methods using cascading layers of nodes.
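To make “cascading layers of nodes” concrete, here is a minimal sketch of how a small deep network is expressed in MATLAB as a simple stack of layers. This assumes the MATLAB deep learning toolbox is installed; the layer sizes are illustrative, not taken from the article.

```matlab
% A minimal sketch of "cascading layers": a small image-classification
% network expressed as a stack of layers (layer sizes are illustrative).
layers = [
    imageInputLayer([28 28 1])               % input: 28x28 grayscale images
    convolution2dLayer(3, 16, 'Padding', 1)  % 16 3x3 convolution filters
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    convolution2dLayer(3, 32, 'Padding', 1)  % a deeper, wider layer
    reluLayer
    fullyConnectedLayer(10)                  % one output per class
    softmaxLayer
    classificationLayer];
```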

The point is: simply understanding the types of AI currently in use is a formidable field of study. AI is an enormously complex discipline with a remarkably small community of experts. This brings us to the current situation, where AI capabilities have suddenly exploded and there is a dearth of talent available to help new applications take advantage of them.

MathWorks is coming to our rescue.

There are a great many brilliant engineers out there in the world who have no background in AI, but who are lead implementers of new development projects. In more and more cases, those projects would have significant benefits if AI were applied, but the shortage of data scientists and related experts is a huge barrier. How do we get around that problem without some serious investment in the recruiting of pedigreed AI talent?  

Most engineers and system designers have at least some experience (and most of us a LOT of it) with MATLAB and/or Simulink. MathWorks has been the dominant supplier of tools for algorithm development, as well as just plain old “doing math,” for decades now. In fact, raise your hand if you haven’t been using MATLAB in some capacity since college at least for … Ah crap! We have old-timers in the… I mean… ahem… “Senior Engineers” in the room. Stop waving those slide rules around before someone gets hurt! You folks can do AI too, as it turns out.

With their September 2017 release, “R2017b,” MathWorks pushed out a host of capabilities designed to let the average system/application designer take advantage of deep learning, using the familiar MATLAB and Simulink environments. With the recent 2018 release, “R2018a,” they have reinforced and fortified their deep learning capabilities. The company says that MATLAB now has a complete, start-to-finish deep learning flow, from gathering and labeling data, to building and accessing models, training and testing, and finally to deployment and inferencing.
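As a rough illustration of that start-to-finish flow, a minimal image-classification pipeline in MATLAB might look like the sketch below. The folder name and training settings are hypothetical; it assumes a folder of labeled images (one subfolder per class) and the layer stack sketched earlier.

```matlab
% A minimal sketch of the data-to-inference flow, assuming the MATLAB deep
% learning toolbox and a hypothetical image folder 'myImages' with one
% subfolder per class.
imds = imageDatastore('myImages', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
[trainSet, testSet] = splitEachLabel(imds, 0.8, 'randomized');

% Train the simple layer stack sketched earlier
opts = trainingOptions('sgdm', 'MaxEpochs', 10, 'InitialLearnRate', 1e-3);
net  = trainNetwork(trainSet, layers, opts);

% Test/inference on the held-out images
predicted = classify(net, testSet);
accuracy  = mean(predicted == testSet.Labels)
```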

Often, the data for deep learning applications is in the form of images or video. Generally, humans take a kind of brute-force approach to labeling that data – marking objects in images. “This is a stop sign,” “this is a tree,” “this is a car,” “this is an alien spacecraft.” This “ground truth” labeling process is facilitated within MATLAB by a new app that lets you label pixels and regions for semantic segmentation.
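Once labels exist, they are consumed programmatically. Here is a minimal sketch of pairing images with exported pixel labels for semantic segmentation; it assumes the relevant computer vision toolbox is installed, and the directory and class names are hypothetical.

```matlab
% A minimal sketch of using ground-truth pixel labels, assuming labels were
% exported from MATLAB's labeling app (paths and class names are hypothetical).
imageDir   = 'trainingImages';
labelDir   = 'pixelLabels';
classNames = ["road", "stopSign", "background"];
labelIDs   = [1 2 3];          % pixel values used in the label images

imds = imageDatastore(imageDir);
pxds = pixelLabelDatastore(labelDir, classNames, labelIDs);

% Pair each image with its pixel labels for training a segmentation network
trainingData = pixelLabelImageDatastore(imds, pxds);
```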

Often, you’ll also (or alternatively) want to access pre-trained models where someone else has already done the heavy lifting. There’s no point in becoming the five-thousandth person to train a model to recognize stop signs. Better to just take advantage of what others have already done. Models are generally exchanged in one of several deep-learning frameworks, such as Caffe and TensorFlow. Pre-trained CNN models such as AlexNet, VGG-16, and VGG-19 are capable of some spectacular image categorization, honed through years of development, testing, and even competition. Interestingly, the ImageNet project runs a competition each year called the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), where various CNN models compete to correctly classify and detect objects and scenes. Many of these models can be imported directly into MATLAB.
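As a sketch of how little code this takes, loading a pre-trained network and classifying an image in MATLAB can look like the following. It assumes the AlexNet support package is installed; 'peppers.png' is a sample image that ships with MATLAB, and the Caffe file names are hypothetical.

```matlab
% Load a pre-trained AlexNet (requires the freely downloadable support package)
net = alexnet;

I = imread('peppers.png');                        % sample image shipped with MATLAB
I = imresize(I, net.Layers(1).InputSize(1:2));    % match the network's input size
label = classify(net, I)                          % predict the image class

% Models trained elsewhere can be imported as well, e.g. from Caffe
% (file names below are hypothetical):
% net = importCaffeNetwork('deploy.prototxt', 'weights.caffemodel');
```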

Moving on to the training phase, we bring in the big guns computationally. Training can be very compute intensive, and MathWorks allows you to target GPUs or even multi-GPU clusters to accelerate it. There is an “automatic” mode that helps optimize GPU acceleration without requiring an excessive amount of custom coding. GPUs can significantly accelerate your training process, which is inherently highly parallelizable.
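In practice, that targeting is a one-line switch in the training options. A minimal sketch, assuming the Parallel Computing Toolbox and a CUDA-capable GPU (the other settings are illustrative):

```matlab
% Steer training onto GPU hardware via trainingOptions
opts = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'auto', ...   % let MATLAB pick CPU or GPU automatically
    'MaxEpochs', 30, ...
    'MiniBatchSize', 128);

% On a multi-GPU machine, swap in:
% opts = trainingOptions('sgdm', 'ExecutionEnvironment', 'multi-gpu');

net = trainNetwork(trainSet, layers, opts);   % trainSet/layers as sketched earlier
```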

Once you’ve got a trained model, you want to see how it does in real-world operation. This “inferencing” phase is all about performance and latency, and it has different computational requirements from training. MATLAB helps out here as well. MathWorks claims that MATLAB can run trained models at 2.5x the speed of TensorFlow. With the latest release, MathWorks is offering “GPU Coder,” which converts models to NVIDIA CUDA code optimized for GPU execution. The company claims that this can result in model performance up to 7x the speed of TensorFlow and 4.5x the speed of Caffe2.
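A typical GPU Coder flow wraps the trained network in a small entry-point function and then generates CUDA code from it. The sketch below is a minimal example under that assumption; the function and file names are hypothetical, and it requires GPU Coder plus a saved, trained network.

```matlab
% --- myPredict.m : hypothetical entry-point function to be compiled ---
% function out = myPredict(in)
%     persistent net;
%     if isempty(net)
%         net = coder.loadDeepLearningNetwork('trainedNet.mat');
%     end
%     out = net.predict(in);
% end

% Generate CUDA code for the entry point, targeting NVIDIA's cuDNN library
cfg = coder.gpuConfig('lib');
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
codegen -config cfg myPredict -args {ones(227,227,3,'single')}
```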

The converted code can be deployed in embedded systems and other field-ready devices, completing the deep-learning application development cycle from data to deployment. The “whole flow” approach that MathWorks has taken is nice – particularly given their target audience – the 99% of us who are engineers with no formal AI training. In fact, we need more attention to bringing deep-learning techniques to the masses as neural networks continue to gain traction in more and more application areas.
