We live in exciting times. Sometimes I’m unsure just how much more excitement I can take. I can imagine one day talking to my grandchildren, saying something like, “I remember the days before AI,” and hearing them gasp in astonishment and disbelief.
I know what you’re thinking. When you first ran your orbs over the title of this column, I could practically hear you muttering, “Oh no, please tell me that Max isn’t going to waffle on about yet another AI processor chip!”
Would that I could tell you that, but I can’t. The thing is, we’re still in the early days of figuring out what “AI at the edge” really means. The landscape is shifting under our feet as new application areas emerge faster than we can name them—wearables that understand gestures, cameras that interpret scenes instead of just recording them, and sensors that can think before they speak. In this brave new world, one of the trickiest problems is finding the right balance between brains and battery life. A processor that can deliver meaningful intelligence without draining the power budget in minutes isn’t just useful—it’s indispensable.
As an aside (you knew one was coming), my wife (Gigi the Gorgeous) has a super-duper smartwatch. In addition to telling the time (always a helpful feature in a watch), she can use it to send and receive text messages, accept and make telephone calls, and monitor more of her vital signs than I used to know existed. It’s unfortunate that she needs to recharge it at least once each day. It’s also unfortunate that it often runs out of power shortly after we’ve commenced doing something interesting, like taking our evening walk, which means she loses track of whether or not she’s met her daily goals.
By comparison, my watch is much less smart and capable (like its owner). In addition to telling the time (give or take a few minutes), it can also inform me if I enjoyed a good night’s sleep, just in case I’m not sure. But its biggest claim to fame is that I only need to charge it once a week or so. I like that in a battery-powered device.
New AI niches are emerging like mushrooms after rain—smart sensors in shoes, clever cameras on drones, and intelligent assistants tucked into earbuds and eyeglasses—each requiring a smidgen of smarts without a mouthful of milliwatts.
The challenge is clear: how do you cram meaningful machine intelligence into something that’s smaller than your fingernail and thriftier than your wristwatch? This is the puzzle engineers are grappling with as they design the next generation of edge AI processors.
All of which leads us to EMASS, which is a fabless semiconductor design company based in Singapore. EMASS was founded in 2020 by Professor Moe Sabry, following over a decade of experience in the semiconductor and embedded systems fields at Stanford University and Nanyang Technological University. The guys and gals at EMASS had been beavering away in stealth mode until just a few weeks ago, at which time they announced the first incarnation of their ECS-DoT Edge AI SoC.
On the off chance you were wondering (I know I was), ECS-DoT stands for “Edge Compute Subsystem—Deep Optimized Tensor,” where:
- Edge Compute Subsystem (ECS): Refers to EMASS’s tightly integrated AI+SoC platform that’s designed to run right at the edge (on sensors, wearables, IoT devices), not in the cloud.
- Deep Optimized Tensor (DoT): Highlights that the subsystem is purpose-built for deep learning math (tensors), optimized with ultra-low-bit quantization and efficient dataflow.

Meet the ECS-DoT (Source: EMASS)
This first incarnation of the ECS-DoT is implemented in TSMC’s 22nm process (the folks at EMASS are already working on a next-generation device at the 16nm node). The component shown here is an engineering sample in a 10mm x 10mm QFN package. This is what’s on the evaluation/development board shown below. The production package will be a 5mm x 5mm QFN. Additionally, the silicon die measures 2.2mm x 3.1mm, and a chip-scale package (CSP) option will be made available in the future for those who need it.

ECS-DoT evaluation/development board (left) and one of a suite of plug-in sensor cards (right) (Source: EMASS)
The design flow is as polished as we’ve all come to expect. Customers can utilize full-precision AI models developed in their preferred framework—PyTorch, TFLite, Caffe, or ONNX—and import them into the EMASS Development Interface (EDI) application, which optimizes the model by quantizing it to INT8 precision while maintaining desired accuracy thresholds.
The EDI can apply additional optimizations to the model, such as pruning and layer fusion, to further reduce its memory footprint and boost performance. The optimized model is then compiled to map onto the ECS-DoT’s deep learning accelerators. Customers can validate the model on the ECS-DoT evaluation/development board and measure latency, accuracy, and power consumption. Once validated, the AI model can be integrated with the whole system, including sensors, firmware (MicroPython on Zephyr OS is currently supported), and peripherals, thereby creating a complete, production-ready Edge AI solution.
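To make that flow a little more concrete, here’s a minimal sketch of the framework-side portion, assuming a PyTorch starting point. To be clear, the EDI itself is proprietary and I haven’t seen its API, so the snippet below only illustrates the generic steps described above: exporting a full-precision model to ONNX and applying off-the-shelf INT8 post-training quantization. The model, tensor shapes, and file names are made up for illustration.

```python
# Illustrative sketch only: the EMASS Development Interface (EDI) is proprietary,
# so this shows just the generic framework-side steps described above, namely
# exporting a full-precision (FP32) model to ONNX and producing an INT8 version.
# The model, tensor shapes, and file names here are hypothetical.
import torch
import torch.nn as nn
from onnxruntime.quantization import quantize_dynamic, QuantType

# A tiny FP32 classifier standing in for a real edge model (e.g., a gesture detector).
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 8),
)
model.eval()

# Export the trained FP32 model to ONNX, one of the interchange formats mentioned above.
dummy_input = torch.randn(1, 128)
torch.onnx.export(
    model,
    dummy_input,
    "model_fp32.onnx",
    input_names=["sensor_features"],
    output_names=["class_scores"],
)

# Post-training INT8 quantization (here via onnxruntime's dynamic quantization).
# A vendor tool like EDI would additionally calibrate against accuracy thresholds,
# prune, fuse layers, and compile the result for its own deep learning accelerators.
quantize_dynamic("model_fp32.onnx", "model_int8.onnx", weight_type=QuantType.QInt8)
```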
I’m not at liberty to talk too much about what’s inside the ECS-DoT itself. Suffice it to say that there’s an ultra-low-power RISC-V core along with two ultra-low-power deep learning (DL) cores, 2MB of SRAM, 2MB of MRAM, and a bunch of other stuff. Additionally, any external sensors, including digital audio and video, temperature, pressure, humidity, and inertial sensors, can be connected directly to the device.
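And since MicroPython on Zephyr is the currently supported firmware environment, the application-level glue might end up looking something like the sketch below. I hasten to add that this is purely illustrative: I haven’t seen the ECS-DoT’s actual firmware API, so the run_inference() call and the I2C address are hypothetical placeholders, and only the general read-a-sensor-then-infer structure is the point.

```python
# Hypothetical MicroPython sketch of a sensor-to-inference loop on an edge device.
# The ECS-DoT's real firmware API hasn't been published, so run_inference() and
# the I2C address below are illustrative placeholders, not actual EMASS calls.
import time
from machine import I2C

ACCEL_ADDR = 0x19            # hypothetical I2C address of an inertial sensor

def run_inference(sample):
    # Placeholder: on real silicon this would hand the sample to the compiled
    # model running on the device's deep learning cores.
    return "unknown"

i2c = I2C(0)                 # bus identifier is port/board specific

while True:
    raw = i2c.readfrom(ACCEL_ADDR, 6)    # read 6 bytes of raw X/Y/Z data
    label = run_inference(raw)           # classify the sample
    print("classification:", label)
    time.sleep_ms(100)                   # ~10 Hz polling, purely for illustration
```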
Of course, the proof of the pudding is in the eating, as they say. Compared with leading competitors, the ECS-DoT operates up to 93% faster while consuming 90% less energy, all while supporting true multimodal sensor fusion on-device. The numbers shown below (1-10µJ with < 10ms latency per inference) are sufficiently exciting to make even a grizzled old engineer like your humble narrator (I pride myself on my humility) squeal in excitement and delight.

By the numbers (Source: EMASS)
To put this another way, the ECS-DoT can perform 30 giga-operations per second (GOPS) while consuming only 2 milliwatts, for an energy efficiency of 12 tera-operations per second (TOPS) per watt.
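For those who, like me, occasionally lose count of their zeros, here’s a quick back-of-the-envelope check (my arithmetic, not an EMASS datasheet): energy per inference is simply power multiplied by latency, so 2 mW sustained for 5 ms works out to 10 µJ, which lands within the quoted 1-10 µJ band. And since a GOPS is a billion operations per second while a milliwatt is a thousandth of a watt, one GOPS per milliwatt is numerically the same thing as one TOPS per watt.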
Sadly, one of the potential problems with small companies like EMASS is that they lack the resources and marketing muscle needed to reach a global audience. Happily, EMASS was fully acquired by Nanoveu earlier this year.
Nanoveu is an applied-technology company based in Perth, Western Australia. Although headquartered in Australia, the company’s operations span Asia and the Americas, utilizing manufacturing, distribution, and innovation partnerships across multiple territories. Over time, the chaps and chapesses at Nanoveu have broadened their focus from advanced materials (films, coatings, nanotechnology) to a deeper-tech orientation, including high-performance visualization platforms and now edge-AI semiconductors.
On the visualization side, Nanoveu’s EyeFly3D technology is a film-plus-software platform that enables regular mobile devices (smartphones, tablets, and large displays) to display glasses-free 3D content. Essentially, a plastic film (~0.1 mm thick) with about half a million micro-lenses is applied to a display. The user experiences immersive 3D without needing 3D glasses, and without significantly affecting display brightness, resolution, or touchscreen sensitivity.
So, why did Nanoveu acquire EMASS? Well, the obvious reason is that the market for edge AI and ultra-low-power compute is growing rapidly (IoT devices, wearables, autonomous vehicles, smart sensors). Nanoveu’s acquisition of EMASS gives it a foothold in the semiconductor/SoC domain, which typically carries higher margins (if you scale) and more strategic weight than display technology on its own.
Furthermore, marrying EMASS’s energy-sipping ECS-DoT silicon with Nanoveu’s EyeFly3D display know-how creates a bridge between seeing and sensing at the edge. The result could be a new generation of intelligent 3D user interfaces—displays that not only render vivid, glasses-free depth but also analyze gestures, eye movements, and environmental cues in real time. In commercial and industrial HMIs, this could translate into more informative dashboards, more intuitive machine controls, and context-aware displays that anticipate rather than merely respond.
The reason I’m such a font of information is that I was just chatting with Mark Goranson, who’s the CEO at EMASS. Mark has been in the semiconductor industry for over 45 years. He started at Mostek “way back when.” He moved to Intel, where he rubbed shoulders with the greats (Robert Noyce, Gordon Moore, Andrew “Andy” Grove…), then on to Freescale, On Semiconductor (Onsemi), Honeywell, and… the list goes on.
“Wow! That’s a lot,” I hear you cry. “Shouldn’t Mark have retired by now?” You are very astute. In fact, he was retired when the folks at Nanoveu and EMASS came pounding on his door. As Mark says, “I know processors. When I started looking at the ECS-DoT and all of its capabilities in the context of Edge AI space, I said to myself, ‘This is too great a technology for me not to be involved in,’ so I came out of retirement.”
I know what he means. On the one hand, I dream of retirement (and bacon sandwiches). On the other hand, were I ever to retire, and if someone were to come knocking on my door with a technology like ECS-DoT, I’d find it very hard to restrain myself from diving back in (probably with a half-eaten bacon sandwich still in hand). Some technologies are just too exciting to ignore, and the ECS-DoT feels like one of them. What say you? Do you have any thoughts you’d care to share about any of this?


