
DSP Group Dives into the Hearables Market

The term hearables (some call them smart headphones) refers to technically advanced electronic in-ear devices. Most people think in terms of wireless earbuds or hearing aids that fit in the ear canal and use a processor to implement digital signal processing (DSP) techniques to enhance the wearer’s listening experience.

In fact, hearables may feature additional capabilities. If “the eyes are the window to the soul,” as the old saying goes, then the ears can provide a window to all sorts of other things, including temperature, skin resistance, heart rate, and brain electrical activity, which opens the door to medical monitoring and fitness tracking. Thus, hearables are just one more manifestation of the concept of ubiquitous computing.

According to Wikipedia, “The neologism ‘hearable’ is a hybrid of the terms ‘wearable’ and ‘headphone,’ as hearables combine major assets of wearable technology with the basic principle of audio-based information services, conventional rendition of music, and wireless telecommunication. The term was introduced in April 2014 simultaneously by Apple in the context of the company’s acquisition of Beats Electronics and product designer and wireless application specialist Nick Hunn in a blogpost for a wearable technologies internet platform.”

As an aside (come on, you knew I was going to wander off the beaten track), the IEEE Signal Processing Society (IEEE SPS) is one of close to 40 technical societies organized under the IEEE’s Technical Activities Board. Interestingly enough, when this society was originally formed in 1948, there was no such discipline as signal processing. This explains why the IEEE SPS was originally called the Professional Group on Audio of the Institute of Radio Engineers.

I discovered this tidbit of trivia in a jolly interesting booklet called Fifty Years of Signal Processing (The IEEE Signal Processing Society and its Technologies 1948-1998).

Of course, back then, any form of signal processing was predominantly implemented using analog techniques. It’s funny how few of today’s younger engineers are even aware that there is such a beast as analog signal processing (ASP), which involves processing continuous analog signals by analog means, but it’s “a thing,” nonetheless.

Early digital logic was slow and expensive, and signal processing using digital techniques didn’t emerge until the 1960s and 1970s. I have no idea when the term digital signal processing (DSP) itself first made an appearance (if you know, please share this nugget of knowledge with the rest of us in the comments below).

What I do know is that when I started my degree in 1976, there was no mention of DSP per se; the closest we got was to talk about things like the theory of Fast Fourier Transforms (FFTs). The university did own a mainframe digital computer, which was housed in its own building and which required a bit of a stroll from the engineering building.
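Just to show how far things have come, here’s a minimal sketch (my own illustration, nothing to do with my old university’s mainframe) of the sort of FFT exercise that once meant a stroll to the computer building and now runs in the blink of an eye using Python and NumPy:

```python
# Minimal FFT sketch: find the dominant frequency in a 1 kHz test tone.
import numpy as np

fs = 8000                                  # sample rate in Hz
t = np.arange(fs) / fs                     # one second of samples
signal = np.sin(2 * np.pi * 1000 * t)      # 1 kHz sine wave

spectrum = np.fft.rfft(signal)             # real-input FFT
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
peak = freqs[np.argmax(np.abs(spectrum))]
print(f"Dominant frequency: {peak:.0f} Hz")  # prints 1000 Hz
```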

The only computer actually residing in the engineering department was an analog monster comprising large numbers of multiplier, divider, integrator, differentiator, and related modules that we connected using telephone operator-style jumper cables. I remember using this bodacious beauty to model the thermal dynamics associated with opening the door of a fridge.

ASP can be used to implement some very interesting effects using relatively few components, but it’s not very flexible or repeatable, and it’s limited in what it can do. The reason DSP is popping up all over the place is that we now have access to incredibly powerful compute capabilities in the form of silicon chips that can contain billions of transistors. This is coupled with the fact that we now have a much deeper understanding of DSP algorithms, although I have to confess that the math goes completely over my head.

The thing is that DSP can be used to do all sorts of amazing things in the audible domain, including active noise cancellation (ANC), audio compression and decompression, and speech processing and recognition.
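To give a feel for what’s going on under the hood, here’s a toy sketch (my own, and most definitely not DSP Group’s or SoundChip’s production algorithm) of the classic least mean squares (LMS) adaptive filter that underlies many noise-cancelling schemes: a reference microphone listens to the ambient noise, the filter learns how that noise shows up in the primary signal, and then subtracts its estimate.

```python
# Toy LMS adaptive noise canceller (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)
n = 5000
noise_ref = rng.standard_normal(n)                          # what the reference mic hears
noise_in_primary = np.convolve(noise_ref, [0.6, 0.3, 0.1])[:n]  # coloured copy reaching the ear
speech = np.sin(2 * np.pi * 0.01 * np.arange(n))            # stand-in for the wanted signal
primary = speech + noise_in_primary                         # what the primary mic hears

taps, mu = 8, 0.01                                          # filter length and step size
w = np.zeros(taps)
out = np.zeros(n)
for i in range(taps, n):
    x = noise_ref[i - taps + 1:i + 1][::-1]                 # most recent reference samples
    e = primary[i] - w @ x                                  # error = primary minus noise estimate
    w += mu * e * x                                         # LMS weight update
    out[i] = e                                              # the error is the cleaned signal

print("Residual noise power:", np.mean((out[1000:] - speech[1000:]) ** 2))
```

Real hybrid ANC designs combine feedforward and feedback paths and run on dedicated low-latency DSP hardware, but the underlying idea (estimate the noise, then subtract it) is the same.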

Speaking of which, have you ever heard of the DSP Group? These little rascals have been around since 1987 and they have an amazing history of acquisitions and spinoffs. In 1996, for example, they spun off their cellular chip design and development division into a new company called DSP Communications (DSPC). Three years later, in 1999, Intel purchased DSPC for US$1.6 billion (which was a lot of money back in those days, LOL).

I’ve done a lot of work with CEVA over the past few years. As you may know, CEVA offers IP for developers to incorporate into their SoCs, where this IP includes DSPs, artificial intelligence processors, and wireless platforms, along with complementary software for sensor fusion, image enhancement, computer vision, voice input, and artificial intelligence. Well, you can only imagine my surprise to discover that, in November 2002, the DSP Group’s IP licensing division and the Irish company Parthus Technologies were merged to form a new company, and the name of that company was CEVA!

The DSP Group has a storied history in mature technologies like cordless phones and in modern growth markets like unified communications, smart voice applications, and smart home / IoT devices. At the time of this writing, they have more than 200 patents filed or granted, and more than a billion products have the DSP Group’s technology inside.

Example products integrating the DSP Group’s SmartVoice solutions (Image source: DSP Group)

Another company of interest in the hearables arena is SoundChip SA, which is a leading supplier of ANC technology, engineering services, design tools, and production-line test systems for headsets. SoundChip’s patented ANC solutions feature advanced situational awareness and augmented audio capabilities that provide enhanced speaking comfort and smart-audio features.

According to a recently published iFixit teardown, DSP Group’s advanced SmartVoice solution is driving Google’s recently launched Pixel Buds 2. Furthermore, earlier this year, Technics and Panasonic launched ANC-enabled true wireless stereo (TWS) headsets based on a family of new advanced hybrid ANC codecs from DSP Group that incorporate SoundChip’s patented Soundflex technology.

A couple of days ago, I was chatting with the guys and gals at the DSP Group. It seems that they just acquired SoundChip SA, and they are really rather excited about the way in which the two companies complement each other’s capabilities.

DSP Group and SoundChip SA — A union that was meant to be (Image source: DSP Group)

About ten years ago, I gave a presentation at the Microsoft campus in Redmond, Washington. The flight out was horrendous noise-wise. On the return journey, I did something everyone says you should never do — I purchased a set of Sony noise cancelling headphones at the airport.

These headphones have served me faithfully over the years, but ten years is a lifetime when it comes to electronics, and ANC technology has progressed in leaps and bounds since then.

Now I can’t stop thinking about the TWS hearables from Panasonic mentioned above. As the product reviewer at Forbes said: “Not only are these earbuds completely wireless, but they also include an ANC (active noise-canceling) function for reducing background noises such as the drone of an aircraft’s engines or the hum of a busy office. It’s a bold move by Panasonic to compete with market leaders like Sony and Sennheiser, but, based on my first impressions of this new pair of earbuds that I’ve just reviewed, I think Panasonic has scored a direct hit.”

High praise indeed. Mayhap it’s time for me to look for a new ANC solution for use on my future travels. In the meantime, I’ve said it before and I’ll say it again — I think the combination of artificial intelligence (AI) and augmented reality (AR) is going to dramatically change the way in which we interface and interact with the world, our systems, and each other. This is going to involve mixed modalities, including vision and sound. Hearables are going to play a big part in this, which means more people may become aware of the DSP Group and their technology offerings in the not-so-distant future.
