Don’t React, PreAct!

Is it just me, or are things becoming even more exciting than they already were? I’m not sure if it’s just because I’m getting older, or if it’s all down to the world spinning faster. How old am I? Well—let’s put it this way—I’m so old that the kids next door believed me when I told them one of my chores when I was their age was to shoo the dinosaurs out of our family cabbage patch (this is obviously a joke because my family never owned a cabbage patch).

It seems like every time I talk to a company, they give me something new to think about. For example, I was just talking to the folks at PreAct. These little scamps recently introduced their third-generation modular software-defined Flash LiDAR called Mojave.

Meet Mojave (Source: PreAct)

I’m not an expert here, but this is the smallest LiDAR I’ve ever seen (90 mm × 25 mm × 25 mm) with this resolution (320 × 240 pixels, which is QVGA) coupled with millimeter accuracy. It’s also the cheapest at only $350 from online vendors like Brevan Electronics. As we read on its associated spec sheet: “With its software-defined capabilities, well documented SDK, and powerful, user-friendly toolset, Mojave empowers your business to capture and process precise depth and location data, and quickly develop and deploy new features and functionality to your product line now and in the future.”

PreAct as a company is only around five years old. Since its founders are self-described “lifelong car guys,” their original plan was to create a LiDAR that was fast enough to detect an imminent collision. They weren’t looking to avoid a collision the way existing camera/LiDAR/radar systems coupled with automatic emergency braking do today. In their own words, their system was more for the “Oh shit, we’re screwed” type of scenario, like a deer jumping in front of the car, another car suddenly emerging from a junction in front of you, or the classic “piano dropping out of the sky” (I hate it when that happens).

With traditional airbag technology, the airbags deploy only when accelerometers have detected the event, which means after the crash has already begun. At this point, it takes the airbags around 1/20th of a second (50 ms) to inflate, which, it must be admitted, is pretty darned fast. On the other hand, I just ran some back-of-the-envelope calculations. These tell me that if one is travelling at 100 km/h (62 mph), which equates to 28 m/s (92 feet per second), then one will have travelled an additional 1.4 m (4.6 feet) before the airbag has fully deployed. Hmmm.
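For anyone who wants to check my envelope-scribbling, here’s the same calculation as a few lines of Python (the speed and inflation time are the figures quoted above; nothing here comes from PreAct):

```python
# A quick sanity check of the back-of-the-envelope numbers above.
speed_kmh = 100.0                    # vehicle speed (km/h)
inflation_time_s = 0.05              # ~1/20th of a second for the airbag to inflate

speed_ms = speed_kmh * 1000 / 3600   # convert to meters per second (~27.8 m/s)
distance_m = speed_ms * inflation_time_s

print(f"{speed_kmh} km/h = {speed_ms:.1f} m/s ({speed_ms * 3.281:.0f} ft/s)")
print(f"Distance travelled while the airbag inflates: {distance_m:.2f} m "
      f"({distance_m * 3.281:.1f} ft)")
```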

As an aside, saying “back of the envelope calculations” reminds me of a joke I recently saw bouncing around the interweb. This was in the form of an image accompanied by the caption “I was up early this morning crunching some numbers.”

Up early crunching numbers.

Well, I thought it was funny, but we digress…

This is why the folks at PreAct were aiming for something lightning fast that could do things like tighten the seatbelts and pre-arm (and possibly deploy) the airbags before the collision takes place. What we are talking about here is something that can detect things, identify them, and make decisions at the extreme edge, local to the sensor, running at hundreds of frames a second. By comparison, a traditional camera-based machine vision system that runs at tens of frames a second and communicates visual data to some central compute system over a CAN bus isn’t going to cut the mustard, as it were.
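To put the frame-rate argument in concrete numbers, consider how far a vehicle moves between frames (the 30 and 300 fps values below are illustrative stand-ins for “tens” and “hundreds” of frames a second, not specifications from PreAct):

```python
# How far does a 100 km/h vehicle travel between frames at different frame rates?
speed_ms = 100 * 1000 / 3600         # 100 km/h in m/s (~27.8 m/s)

for fps in (30, 300):
    frame_time_s = 1 / fps
    distance_m = speed_ms * frame_time_s
    print(f"{fps:>3} fps: {frame_time_s * 1000:5.1f} ms/frame -> "
          f"{distance_m * 100:5.1f} cm travelled per frame")
```

At “tens” of frames a second, the world moves the better part of a meter between looks; at “hundreds,” it moves mere centimeters, which is the difference between reacting to a collision and pre-acting to one.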

It was only after they had succeeded in this mission that the folks at PreAct realized their technology had many more potential uses in industry, security, robotics, agriculture, smart buildings and smart cities… the list goes on.

First, since Mojave illuminates the scene using flashes in the near infrared (NIR), it works in the dark (insofar as human vision is concerned), which can be jolly useful in many circumstances. Second, since it uses time-of-flight (ToF) coupled with a sensor array that detects all the pixels simultaneously, Mojave can build a 3D point cloud frame-by-frame on the fly. And third, as reported by Forbes, in January 2023 PreAct acquired Gestoos, a Barcelona-based company specializing in human gesture recognition technology.
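For anyone who hasn’t played with ToF before, the underlying arithmetic is delightfully simple: the flash travels out to the target and back, so the one-way distance is half the round-trip time multiplied by the speed of light. Here’s a minimal sketch (the timing value is a made-up illustration, not a Mojave measurement):

```python
# The core time-of-flight relationship: light travels to the target and back,
# so the one-way distance is half the round-trip time multiplied by c.
C = 299_792_458.0                    # speed of light (m/s)

def tof_to_distance(round_trip_s: float) -> float:
    """Convert a round-trip time-of-flight into a one-way distance in meters."""
    return C * round_trip_s / 2

# A target 2 m away returns the flash in about 13.3 nanoseconds:
print(f"{tof_to_distance(13.34e-9):.3f} m")   # ~2.000 m
```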

During our chat, one of the examples the folks from PreAct and Gestoos talked about was using their LiDAR and gesture recognition in operating theaters, where a surgeon might wish to control something, like changing the view on an imaging machine, without touching anything other than the patient being operated on.

Another thing I hadn’t thought about was how difficult it is to extract 3D information and relationships from traditional 2D camera images. Doing so typically requires a humongous amount of computational power, which often means going to the cloud. By comparison, since Mojave already builds a 3D point cloud, a lot of artificial intelligence (AI)-based object detection and recognition becomes much easier, to the extent that it can be performed at the extreme edge, close to the sensor.
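To see why having depth changes things, consider that once you have a per-pixel depth map and the camera intrinsics, every pixel back-projects directly into a 3D point via the standard pinhole model; no heavyweight inference required. A quick sketch (the focal length and principal point are assumed illustrative values, not Mojave’s actual calibration):

```python
import numpy as np

W, H = 320, 240                      # QVGA, matching Mojave's resolution
fx = fy = 250.0                      # focal lengths in pixels (assumed)
cx, cy = W / 2, H / 2                # principal point (assumed)

depth = np.full((H, W), 2.0)         # fake depth map: everything 2 m away

u, v = np.meshgrid(np.arange(W), np.arange(H))
x = (u - cx) * depth / fx            # back-project via the pinhole model
y = (v - cy) * depth / fy
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)

print(points.shape)                  # (76800, 3): one 3D point per pixel
```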

Exploded view (Source: PreAct)

Yet one more aspect of all this that was new to me is the concept of a sort of camera-LiDAR sensor fusion. The idea here is to superimpose the 3D point cloud “on top of” a traditional 2D camera image.
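Mechanically, this kind of fusion boils down to projecting each 3D point into the 2D image using the camera’s calibration. Here’s a minimal sketch of the projection step (the intrinsic matrix is an assumed example, and the points are presumed already transformed into the camera’s frame of reference):

```python
import numpy as np

K = np.array([[600.0,   0.0, 320.0],   # camera intrinsics (assumed values)
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

# LiDAR points, assumed already expressed in the camera frame (meters).
points = np.array([[0.5, -0.2, 2.0],
                   [1.0,  0.3, 4.0]])

uvw = (K @ points.T).T               # homogeneous image coordinates
pixels = uvw[:, :2] / uvw[:, 2:3]    # perspective divide -> (u, v)

print(pixels)  # pixel locations at which to overlay each point's depth
```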

One reason for doing this is to combat the oncoming tsunami of deep fakes in the form of AI-generated videos that purport to show people saying and doing things they never actually said or did.

Of course, there’s always the view that “AI taketh away and AI giveth back.” If the bad guys can use AI to generate deep fakes, then the good guys can use AI to detect them.

Additionally, unlike conventional 2D images, depth maps in the form of 3D point clouds provide a 3D spatial understanding of the scene. It’s incredibly challenging for AI algorithms to generate artificial depth data consistent with the real world, so merging 2D camera data with 3D point cloud data adds an extra layer of information to an image or video that could be used to separate the wheat (real-world imagery) from the chaff (AI-generated imagery).
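Just to make the idea concrete, such a consistency check might look conceptually like the following toy sketch, which compares a depth map measured by the sensor against one estimated from the 2D image alone and flags pixels where the two disagree (all values here are fabricated for illustration; this is not PreAct’s approach):

```python
import numpy as np

# Toy example: a measured depth map vs. a noisy estimate derived from 2D imagery.
measured = np.full((240, 320), 2.0)                               # LiDAR depth (m)
estimated = measured + np.random.normal(0, 0.3, measured.shape)   # monocular guess

disagreement = np.abs(measured - estimated)
suspicious = (disagreement > 0.5).mean()   # fraction of wildly-off pixels

print(f"{suspicious:.1%} of pixels disagree by more than 0.5 m")
```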

To be honest, there’s a lot for us to wrap our brains around here, so I’d be very interested to hear your thoughts on this topic.
