
Automatic Car Driver Detection

What’s the Value to Drivers?

Artificial Intelligence (AI) and, more specifically, Machine Learning (ML) seem finally to have hit the scene in a permanent way. For those of us who grew up watching expert systems, fuzzy logic, and other schemes come and go, the stars and planets – not to mention theories, algorithms, and computing power – appear to have aligned to give us something useful.

For a while, ML and “big data” were terms that were tossed liberally about to get attention – in the same way that Internet of Things was (and is). But we’ve seen more and more cases where a press release touting ML means exactly that. It’s not just a buzzword anymore.

No doubt we’re still in early stages, and the technology will continue to evolve. For the moment, the news is that we appear to have traction.

That said, what do we do with this technology? Learning algorithms are being applied pretty much anywhere there's any sort of artifact that can be measured in quantity – possibly in too many places.

I recently saw a paper presented on the results of a study by a company I've decided not to name (nor the conference), since my point isn't to call them out or shame them. They did what any of us could have done (and which I certainly have done in other areas): they got lost in the detail and lost the big picture. That's the point: when you start analyzing data to learn things, it's easy for this to happen.

It’s also fair to say that this was something of a proof of concept – no one was selling a product as a direct result. It’s not another Juicero. However, it wasn’t presented as, “Here’s what we studied, and yes, there are some practical issues, but that’s not the point.” When I raised questions, it certainly seemed to me that this was the first time they were considering the questions (of course, I can’t say what internal conversations might have happened). And no one else in the audience was asking the questions. So it’s easy to get swept up.

The Setup

The presented project was at the intersection of two of the hottest technology areas: ML and automotive. The idea was that, given a car with potentially four drivers, can you learn from their driving styles and then accurately predict who the driver is at any given time?

This particular study posed a number of challenges. They needed to observe a variety of performance metrics to find those that correlated best with each driver. They did this first by creating a model for each driver and then combining them to get a confidence level as to which of the models – and drivers – was in control at a given moment.
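As a sketch of that combination step: the article doesn't give the actual rule, so the softmax normalization and the driver labels below are illustrative assumptions, not the team's method. Each per-driver model emits a score, and the scores are normalized into a confidence distribution.

```python
import math

# Sketch: one score per driver model, combined with a softmax into a
# confidence distribution. (The actual combination rule wasn't disclosed;
# softmax over per-model scores is one common, assumed choice.)

def combine_scores(scores: dict[str, float]) -> dict[str, float]:
    """Turn raw per-driver model scores into normalized confidences."""
    m = max(scores.values())
    exps = {d: math.exp(s - m) for d, s in scores.items()}  # numerically stable softmax
    total = sum(exps.values())
    return {d: e / total for d, e in exps.items()}

def predict_driver(scores: dict[str, float]) -> tuple[str, dict[str, float]]:
    """Return the most likely driver and the full confidence distribution."""
    conf = combine_scores(scores)
    return max(conf, key=conf.get), conf
```

With scores like `{"driver1": 2.0, "driver2": 1.0, ...}`, the driver with the highest model score wins, but the full distribution gives the confidence level the presenters described.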

Many of the metrics could be taken from internal buses. But the gear position (on a manual transmission), for example, wasn't one of them – and yet they wanted to include it as a metric. So they combined engine speed (measured in revolutions per minute, or RPMs) and road speed to deduce the gear position.

This is an example of a virtual sensor: there is no dedicated sensor that reports the gear position, so other data must be fused to provide an equivalent measurement. Note that the two metrics that give gear position are also fair game for being included on their own. And, in fact, gear position, driving speed, and RPMs were independently tapped as useful metrics, even though gear position is derived from speed and RPMs. In other words, metrics don’t have to be orthogonal.
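A minimal sketch of such a virtual sensor, assuming a 5-speed manual with known (here, made-up) gear ratios: in a given gear, engine RPM divided by road speed is roughly constant, so matching the observed ratio against the nominal ones recovers the gear.

```python
from typing import Optional

# Hypothetical RPM-per-km/h ratios for a 5-speed gearbox (illustrative only;
# real values depend on the specific transmission and final drive).
GEAR_RATIOS = {1: 120.0, 2: 75.0, 3: 52.0, 4: 40.0, 5: 32.0}

def estimate_gear(rpm: float, speed_kmh: float) -> Optional[int]:
    """Virtual gear-position sensor: fuse engine speed and road speed.

    Returns the most likely gear, or None if the car is (nearly) stopped,
    where the ratio is undefined (clutch in, idling, etc.).
    """
    if speed_kmh < 1.0:
        return None
    ratio = rpm / speed_kmh
    # Pick the gear whose nominal ratio is closest to the observed one
    return min(GEAR_RATIOS, key=lambda g: abs(GEAR_RATIOS[g] - ratio))
```

For example, 3000 RPM at 60 km/h gives a ratio of 50, closest to the assumed 3rd-gear ratio of 52.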

The other challenge they had was that their learning model didn't include a notion of time or history. That suggests they were using a feed-forward neural network, which lacks the feedback that can incorporate time. As we saw recently, recurrent neural networks (RNNs) do include time – which is why they're used for time-varying phenomena like speech or handwriting analysis – but that feedback means they're not feed-forward.

In this case, they opted to turn the time-varying data into a non-time-series data set. They could still mine the history during training to extract other useful features, but when actually running the model to determine who the current driver is, it uses no notion of history.
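One common way to do that conversion – an assumption on my part, since the presentation didn't detail it – is to summarize a window of raw samples with a few statistics, so a feed-forward model sees a fixed-size vector instead of a sequence:

```python
import statistics

# Sketch of flattening a time series into static features: summary
# statistics over a window replace the raw history. The window length
# and the particular statistics chosen here are illustrative assumptions.

def window_features(samples: list[float]) -> dict[str, float]:
    """Summarize one window of a signal (e.g., RPMs) as static features."""
    return {
        "mean": statistics.fmean(samples),
        "stdev": statistics.pstdev(samples),
        "min": min(samples),
        "max": max(samples),
    }
```

Each window of, say, RPM readings becomes one row of features, and the time ordering of the rows is simply discarded at prediction time.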

They ended up with a large number of features – which can bog down an algorithm – so they ranked them all by predictive value and took the top ones. Because they started with a separate model for each driver (before combining), different drivers might end up with different predictive features.
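The ranking step might look something like this. The team's actual predictive-value metric wasn't specified; as a stand-in, this sketch uses a crude Fisher-style score (separation of class means over pooled spread) for a one-driver-vs-rest split:

```python
import statistics

# Sketch of per-feature ranking and top-k selection. The scoring metric
# here (a Fisher-style separability score) is an assumption, not the
# metric the study actually used.

def fisher_score(pos: list[float], neg: list[float]) -> float:
    """Separation of class means relative to pooled spread; higher is better."""
    spread = statistics.pstdev(pos) + statistics.pstdev(neg)
    if spread == 0:
        return 0.0
    return abs(statistics.fmean(pos) - statistics.fmean(neg)) / spread

def top_k_features(features: dict[str, tuple[list[float], list[float]]],
                   k: int) -> list[str]:
    """Rank features by score for one driver-vs-rest split; keep the best k."""
    ranked = sorted(features, key=lambda f: fisher_score(*features[f]),
                    reverse=True)
    return ranked[:k]
```

Run once per driver, this naturally yields a different top-two feature list for each – exactly the kind of per-driver "signature" the study reported.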

Specifically, the two top features (in order of predictive quality) for each driver were:

  • Driver 1: Gear position and RPMs
  • Driver 2: RPMs and speed
  • Driver 3: RPMs and gear position
  • Driver 4: Turn signal and RPMs

In other words, each driver had a particular “signature” way of driving depending on how they revved the engine, when they changed gears, how fast they drove, and how they signaled turns.

Obviously, the fewer features you use in your model, the faster the algorithm can execute. Here we have two features per driver – will that be enough? The proof comes in the accuracy with which the model predicts drivers.

To be clear, this test was done with four real drivers, both for training and for later prediction. Once the models were complete, they ran prediction tests. Remember that the benchmark to measure against is the probability of randomly guessing the driver, which, with four drivers, is 25%. With the features listed above, they were able to determine the driver with the following accuracies:

  • Driver 1: 87%
  • Driver 2: 90%
  • Driver 3: 96%
  • Driver 4: 95%
  • Overall: 92%

Not bad… The algorithm seems to have proven itself.
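For completeness, the evaluation itself is straightforward: tally per-driver and overall hit rates from (true, predicted) pairs and compare against the 25% random baseline. A minimal sketch (the pair format is an assumption):

```python
# Sketch: per-driver and overall accuracy from (true, predicted) label
# pairs, for comparison against the 25% random-guess baseline.

def accuracies(pairs: list[tuple[str, str]]) -> dict[str, float]:
    """Return accuracy per true driver, plus an 'overall' entry."""
    per: dict[str, tuple[int, int]] = {}
    for true, pred in pairs:
        hit, total = per.get(true, (0, 0))
        per[true] = (hit + (true == pred), total + 1)
    result = {d: hit / total for d, (hit, total) in per.items()}
    result["overall"] = sum(t == p for t, p in pairs) / len(pairs)
    return result
```

Note that the reported 92% overall figure is consistent with the four per-driver numbers: (87 + 90 + 96 + 95) / 4 = 92, suggesting equal-sized test sets per driver.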

The Bigger Picture

So this appears to have been an interesting exercise in the realities of prying data out of a system, measuring it, and learning something from it. The follow-on question is, of course, “So what can I do with this?”

The answer given was that you can automatically personalize various settings in the car according to who’s driving. Things like the seat position, mirrors, and radio station. But there are a couple of hitches with this.

  1. No one is going to start driving with a seat that's way out of whack; they'll adjust it when they get into the car. Some cars have storable seat positions, though typically two, not four. If you need only two, then that problem was solved decades ago and, really, isn't a problem anymore. (Is it? Let me know in the comments if so…)
  2. Similarly with mirrors: a conscientious driver will adjust them before driving if someone else drove last. Most starting positions require either backing up (which uses the rearview mirror – hopefully) or pulling away from a curb (which uses the side mirrors – hopefully), so the new driver will likely notice that the mirrors are off and fix them before driving away.
  3. I don’t know about you, but I have a few radio stations I listen to. Assigning one radio station per person is akin to the security challenge questions typically offered up on websites, which assume everyone has exactly one favorite <whatever> and will keep that favorite for the rest of their lives. If my choice of station is time-dependent (on Mondays I listen to this program and on Sundays some other one on a different station, both at specific times), then it’s possible for a learning algorithm to determine the pattern (although it would take much more training, covering the entire week, for multiple weeks for the pattern to emerge). But if I go to my preferred station and there’s someone annoying blathering on, I may look for something different. And that can’t be learned.
  4. Not only must a driver start driving before the car can figure out who's at the wheel, but the driver must also exercise the features with which they're associated. If it's speed within the city, what if, today, you turn right onto a 35-mph road and tomorrow you go straight, maintaining 25 mph? What if turn signals are part of your signature, and you live in a rural area where you go 20 miles before needing one? The point is that the features may be hard to extract across all possible route variations, and there may be a long delay before the driver is identified – which makes items 1 and 2 above that much worse.

Honestly, given the touted benefits, I couldn’t really find one that was truly useful, for two reasons. First, the impracticalities just listed. Second, most of them seemed to be things you could theoretically do automatically (late or not), but didn’t solve real problems that don’t already have acceptable solutions. I suppose having saved mirror settings could be useful, but that could be solved with a connection to the existing saved-seat-position technology.

As a training and learning exercise, or as a proof of concept, this is an interesting study, no doubt. As a useful development? I have my doubts. And, as I mentioned at the outset, it wasn’t presented as a learning exercise only, with no practical utility.

Which is why I highlight the risk of getting buried in data and not reviewing what your end goal is. Much of our new technology is being applied all over the place, but, in many cases, it’s simply showing that you can do something in a new (and cool and hip) way. But the old way may be just as easy (or marginally harder) and far less expensive. And not subject to hacking or snooping.

As with any product, you’ll get the best traction if you solve not only problems, but problems that are points of significant pain for your customers. There have certainly been times in my career where I have lost track of that; it’s no less important today.

[Coincident meme seen on the day I wrote this: “Nothing is less productive than to make efficient what should not be done at all.”]

