
Neural Networks are Finding a Place at the Adult’s Table

 

The deep learning revolution is the most interesting thing happening in the electronics industry today, said Chris Rowen during his keynote speech at the Electronic Design Process Symposium (EDPS), held last month at the Milpitas headquarters of SEMI, the industry association for the electronics supply chain. “The hype can hardly be understated,” continued Rowen. Search “deep learning” on Google and you’ll already get more than three billion hits. (Well, I got 20M for “deep learning” and 451M for “artificial intelligence,” but still, that’s a lot.) “There are 12,000 startups worldwide listed in Crunchbase,” he added. (I got 1,497, again for “deep learning,” but still…) According to Rowen, 16,500 papers on deep learning and AI were published on arxiv.org in the past 12 months.

In other words, AI is hot (in case you’ve been living in a cave or an underground bomb shelter for the past few years).

Rowen is CEO of BabbleLabs, formerly BabbLabs; the missing “e” turned out to confuse people, who found they couldn’t pronounce the name. BabbleLabs is a deep-learning startup devoted to applying DNNs (deep neural networks) to speech processing.

Deep learning is a “mathematical layer cake model for learning,” explained Rowen. (I suspect he was referring to the various layers, hidden and otherwise, in the DNN model.) You take a large number of inputs and put them through a hidden system to get a desired output after a period of training. This model is very general and works for almost any kind of data, but you must have a way of gathering all of the required training data.
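To make the layer-cake picture concrete, here is a minimal sketch (not BabbleLabs code, just an illustration in Python with NumPy) of a forward pass through a small stack of layers, each one a linear map followed by a nonlinearity:

```python
import numpy as np

def relu(x):
    """A common nonlinearity: pass positives through, clip negatives to zero."""
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass an input vector through a stack of (weights, bias) layers."""
    for W, b in layers[:-1]:
        x = relu(W @ x + b)   # hidden layers: linear map plus nonlinearity
    W, b = layers[-1]
    return W @ x + b          # final layer: raw output scores

rng = np.random.default_rng(0)

# A toy three-layer "cake": 8 inputs -> 16 hidden -> 16 hidden -> 4 outputs.
# In practice the weights would be set by training, not drawn at random.
sizes = [8, 16, 16, 4]
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

out = forward(rng.standard_normal(8), layers)
print(out.shape)  # (4,)
```

Training amounts to adjusting those weight matrices until the outputs match the desired labels, which is why gathering enough labeled data is the hard part.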

Currently, the biggest application for DNNs, by far, is vision. Training these systems is enormously complex, and running them consumes a lot of compute cycles. DNN-based vision systems gobble up TOPS (tera operations per second) like kids snack on candy corn at Halloween.

The fundamental question, said Rowen, is “Where do the smarts go?” In other words, where’s the best place to execute all of those tera-ops for vision systems? Is the best place close to the camera? That will give you low latency and will not overburden the network with traffic, but will degrade the ability to aggregate data from multiple cameras.

Is the best place to execute all of the tera-ops in some sort of aggregation location? At the cloud edge? In the cloud?

There’s no single answer. (That would be too easy, wouldn’t it?)

There are many critical tradeoffs to consider:

If you want to maximize system responsiveness, you make the processing local. That’s sort of obvious. You don’t want an autonomous car’s collision-avoidance DNN to be located in the cloud where a network dropout could cause a multi-car pileup; you want the processing in the car.

If you need global analysis of data from multiple cameras, such as in a surveillance system, then you want the processing in the cloud.

If you’re concerned about privacy, you don’t want raw video traversing the network. You want the processing to be local.

If you want to minimize cost, you’ll need to constrain the DNN and keep the processing local. Cloud computing is very flexible but it’s a pay-as-you-go system and the operating costs increase monotonically.

At this point, Rowen segued to the work of BabbleLabs. “Voice is vision,” he declared. “It’s the most human interface because there are five billion users (including those people listening to radio).”

But there’s another aspect to AI-enhanced voice processing and recognition that indeed makes it a lot like video. “Voice recognition is essentially image recognition performed on spectrograms,” said Rowen.

Now there’s an intriguing idea.

Look at a spectrogram that plots frequency over time. It’s a 2D image, and just like any image, you can train a DNN to recognize traits buried in the spectrogram. Rowen demonstrated a BabbleLabs speech enhancer, which uses AI enhancements to strip road and wind noise from words spoken alongside a busy street in Montevideo, Uruguay. It works surprisingly well.
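The spectrogram-as-image idea is easy to demonstrate. The sketch below (plain NumPy, not BabbleLabs’ pipeline) computes a magnitude spectrogram by sliding a window along a signal and taking an FFT of each frame; the result is exactly the kind of 2D array a vision-style DNN can consume:

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: rows are frequency bins, columns are time frames."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft of each windowed frame gives frame_len // 2 + 1 frequency bins
    return np.abs(np.fft.rfft(frames, axis=1)).T

# Toy "speech": a 440 Hz tone plus background noise, sampled at 16 kHz
fs = 16_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.default_rng(1).standard_normal(fs)

S = spectrogram(x)
print(S.shape)  # (129, 124): a 2D "image" of frequency vs. time
```

The bright horizontal band in such an image sits at the tone’s frequency bin; noise shows up as low-level texture everywhere else, which is what a trained DNN learns to separate.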

See for yourself (and watch to the end before making a hasty judgement):

 

The training wheels are coming off.

 

