Getting Ever Closer to Fully Autonomous Vehicles

When I was but a young whippersnapper, the idea of self-driving vehicles was the stuff of science fiction. Even though many Poohsticks have passed under the bridge since those far-off days of yore, and even though we still don't have fully autonomous vehicles (AVs), I'm delighted to report that we are slowly getting there.

I know many people who have no truck (no pun intended) with AVs. Some worry that they will be more hazardous than human-controlled vehicles. To these naysayers I would respond, "Have you driven in Alabama recently?" (and don't even get me started on some of the other places where I've experienced the thrills of human-controlled conveyances, such as Paris, France; Rome, Italy; and Bangalore, India, to name but three). Others are convinced that no computer can drive better than a human. Once again, my response would be to show them the delights of living in a state where only one person in a hundred is even vaguely aware that cars come with indicator lights, let alone deigns to use them.

For me, AVs are a matter of safety, efficiency, and time. Once all vehicles are fully automated, I'm convinced that the total number of injuries and fatalities will drop dramatically. In the case of efficiency, I've seen studies predicting that, once cars can communicate with each other in real time, twice as many cars could share the same piece of road as they do today. How can this be? Well, as one example, what we consider to be a two-lane road today could act as a three-lane road in the future. In this case, two of the three lanes could be used for traffic entering a metropolis in the morning, and two of the three could be dedicated to traffic exiting the city in the evening. Also, vehicles could travel closer together, and all of the vehicles waiting at a stop sign (if we still have stop signs) could start moving at the same time, as opposed to the current caterpillar-like action where the lead car starts moving, followed by the next car after a short human-reaction-time-induced delay, and so on down the line (see the sketch below). I'm not saying that we desperately need to double the total number of cars in the world, you understand, just that AV capabilities could dramatically increase the flow of traffic through congested city streets.
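To put some toy numbers on that caterpillar effect, here's a minimal sketch (the queue length and reaction delay are my own illustrative assumptions, not figures from any study):

```python
# Toy model of queue start-up at a stop sign: human drivers add a
# reaction delay per car in series, whereas networked AVs could,
# in principle, all start moving together.
CARS = 20          # assumed queue length
REACTION_S = 1.0   # assumed human reaction delay per car (seconds)

human_start_lag = (CARS - 1) * REACTION_S  # last car waits for the "wave"
av_start_lag = 0.0                         # all cars launch simultaneously

print(f"Last human-driven car starts moving after {human_start_lag:.0f} s")
print(f"Last networked AV starts moving after {av_start_lag:.0f} s")
```

With 20 cars and a one-second reaction delay apiece, the last human-driven car doesn't even begin to move for 19 seconds, which may well be longer than the light stays green.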

And then we come to the topic of time, which is becoming ever more precious to me the older I get. On the one hand, I don't spend a tremendous amount of time on the road compared to many people. Apart from the rare road trip to somewhere far away, most of my driving time is spent travelling between my home and my office. When school is out and traffic is light, on a federal holiday, for example, it's only around a 20-minute drive to my office in the morning and the same returning home in the evening. Even on a bad day, each trip rarely takes me more than 30 minutes. On the other hand, this equates to between 40 and 60 minutes of each working day, which is time I could better spend doing something else, like reading historical books about the desperate days before self-driving cars.

To be honest, I fear I’ve been led astray by the promise of shiny sensors. Today’s automobiles are festooned with an array of sensors, including cameras, lidars, and radars, each of which has its own role to play. For example, cameras can be used to detect and recognize things like signs and traffic lights, lidar can be used to detect small objects and create 3D models, while radar can be used to “see” through fog and snow. In the case of lidar, the older time-of-flight (TOF) sensors look set to soon be displaced by next-generation frequency modulated continuous wave (FMCW) devices, which can provide instantaneous motion/velocity information on a per-pixel basis (see also Equipping Machines with Extrasensory Perception). The best results are gained by employing sensor fusion, which refers to the process of combining sensor data derived from disparate sources such that the resulting information has less uncertainty than would be possible if these sources were to be used individually.
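As a simple illustration of that "less uncertainty" point, here's a minimal sensor-fusion sketch using inverse-variance weighting (the range readings and variances below are hypothetical numbers of my own invention):

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of independent estimates.
    The fused variance is never larger than the smallest input
    variance, which is the reduced-uncertainty property at play."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    return fused, 1.0 / np.sum(w)

# Hypothetical range-to-obstacle readings (metres): camera, lidar, radar
est, var = fuse([25.4, 24.9, 25.6], [4.0, 0.25, 1.0])
print(f"Fused range = {est:.2f} m, variance = {var:.3f}")  # variance < 0.25
```

Real automotive stacks use far more sophisticated machinery (Kalman filters and their many variants), but the principle is the same: each additional independent sensor shrinks the uncertainty of the combined estimate.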

The reason I say "I fear I've been led astray by the promise of shiny sensors" is that I've been bedazzled and beguiled by the machine perception side of things, whereby artificial intelligence (AI) and machine learning (ML) employ the sensor data to perform tasks like object detection and recognition. More recently, I've come to realize that machine perception is only part of the puzzle and is of limited use without comprehension (understanding what is being perceived) and prediction (the ability to forecast the probability of potential future events).

What do I mean by all this? Well, suppose you and I happened to be on an outing together, driving down a suburban street with cars parked on the sides of the road. Now suppose a soccer ball suddenly made an appearance, rolling out into the road from between two cars. I think it's safe to say that both of us would predict the possibility that the ball might soon be joined in the street by a small boy or girl. This is where the comprehension and prediction portions of the problem kick in. If you thought the computation required to perform object detection and recognition was extreme, then "you ain't seen nothin' yet," is all I can say.
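To make the comprehension-and-prediction idea a little more concrete, here's a deliberately cartoonish sketch (the object classes, probabilities, and thresholds are all invented for illustration; a real AD stack would learn these relationships from data rather than hard-coding them):

```python
# Toy "comprehension + prediction" layer: given a recognized object,
# look up a prior for the hazardous event that tends to follow it,
# then pick a defensive action. All values are illustrative only.
FOLLOW_ON_PRIOR = {
    "soccer_ball":   ("child_enters_road", 0.30),
    "shopping_cart": ("pedestrian_enters_road", 0.15),
    "plastic_bag":   ("no_hazard", 0.01),
}

def defensive_action(detected_class: str) -> str:
    event, p = FOLLOW_ON_PRIOR.get(detected_class, ("unknown_hazard", 0.05))
    if p > 0.20:
        return f"brake: anticipate {event}"
    if p > 0.10:
        return f"slow down and widen lateral gap: possible {event}"
    return "maintain speed, keep monitoring"

print(defensive_action("soccer_ball"))  # -> brake: anticipate child_enters_road
```

Even this cartoon version hints at why the computation balloons: the vehicle must maintain and update beliefs about things it cannot yet see, for every object in the scene, many times per second.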

The six levels of autonomous driving (AD) can be summarized as follows:

Level 0: No driving automation ("everything on").
Level 1: Driver assistance, such as cruise control ("everything still on").
Level 2: Partial driving automation, such as controlling steering and speed, but with a human prepared to take over at any moment ("feet off").
Level 3: Conditional driving automation, where the vehicle can make decisions like overtaking other cars but still requires a human to be available to retake control ("hands off").
Level 4: High driving automation, in which the vehicle doesn't require human intervention in most cases ("eyes off").
Level 5: Full driving automation, in which the vehicle won't have any human controls like a steering wheel, and the "driver" can sleep through the journey ("mind off").

So, what's holding us back from this AD Nirvana? Well, I was just chatting with the clever chaps and chapesses at VSORA (I'm drooling with desire to own a car like the one depicted on their home page). In addition to even better sensors and more sophisticated AI and ML algorithms… in a nutshell… to cut a long story short… without dilly-dallying… the gap between what the market currently offers and what the market actually wants is largely due to the amount of computational power required.

The gap between what the market offers and what the market wants
(Image source: VSORA)

To address this gap, the folks at VSORA recently announced their Tyr family of petaflop-class computational companion chips to accelerate L3 through L5 autonomous vehicle designs. Delivering trillions of operations per second while consuming as little as 10 watts, Tyr allows users to implement autonomous driving functions that have not previously been commercially viable.

There are currently three members of the family: Tyr1 (64K AI MACs + 1,024 DSP ALUs = 260 teraflops), Tyr2 (128K AI MACs + 2,048 DSP ALUs = 520 teraflops), and Tyr3 (256K AI MACs + 4,096 DSP ALUs = 1,040 teraflops). Designed to work as companion platforms to a host processor, Tyr devices are algorithm- and processor-agnostic, fully reprogrammable, and fully scalable (all three can run identical code). They employ IEEE 754 floating-point calculations for accuracy and offer a GDDR6 memory interface for optional local data storage and a PCIe interface to connect to the sensors and the host computer.
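As a back-of-the-envelope sanity check on those headline numbers (the two-FLOPs-per-MAC-per-cycle convention and the ~2 GHz clock below are my assumptions, not figures published by VSORA):

```python
# If each MAC contributes one multiply plus one add (2 FLOPs) per cycle
# at an assumed ~2 GHz clock, the quoted peak figures fall out of the
# MAC counts almost exactly (DSP ALU contributions ignored here).
ASSUMED_CLOCK_HZ = 2e9

for name, macs in [("Tyr1", 64 * 1024), ("Tyr2", 128 * 1024), ("Tyr3", 256 * 1024)]:
    teraflops = macs * 2 * ASSUMED_CLOCK_HZ / 1e12
    print(f"{name}: {macs:,} MACs -> ~{teraflops:.0f} teraflops")
# Prints ~262, ~524, and ~1049, close to the quoted 260/520/1,040.
```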

The VSORA Tyr programmable architecture tightly couples DSP cores and ML accelerators to support the design of L3 through L5 autonomous vehicles (Image source: VSORA)

To provide some idea as to the family's capabilities, the Tyr3, with its 1,040 teraflops of computational power, can process an eight-million-cell particle filter using 16 million particles in less than 5 milliseconds (msec). Processing a full-high-definition (FHD) image with YOLOv3 takes less than 1.6 msec, which equates to a throughput of 625 images per second. Wowzers!
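For anyone who enjoys checking the arithmetic, the latency and throughput figures are consistent with each other (assuming back-to-back processing with no pipeline gaps, which is my assumption):

```python
# 1.6 ms per image implies 1 / 0.0016 = 625 images per second.
yolo_latency_s = 1.6e-3
print(f"{1.0 / yolo_latency_s:.0f} images per second")  # -> 625

# 16 million particles in under 5 ms implies an update rate of at
# least 16e6 / 5e-3 = 3.2 billion particles per second.
print(f"{16e6 / 5e-3:.2e} particles per second")  # -> 3.20e+09
```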

Are you involved in creating next-generation ADAS and AD systems? If so, I'm informed that VSORA's Tyr1, Tyr2, and Tyr3 devices will sample in Q4 2022 and will be available in-vehicle in 2024 (I'm also informed that pricing is available upon request). I don't know about you, but I for one am becoming ever more optimistic that, in the not-so-distant future, I will be able to spend my time travelling to and from work enjoying a good science fiction book while my AV takes care of the driving. What say you? Do you have any thoughts you'd care to share on any of this?
