
Cloud-Based Genetic Algorithms and Computer Vision Applications

Do you recall my earlier column When Genetic Algorithms Meet Artificial Intelligence? This reflected my discovery that the chaps and chapesses at Algolux are using an evolutionary algorithm approach in their Atlas Camera Optimization Suite. The idea here is that, when it comes to creating a new camera system, each of the components — lens assembly, sensor, and image signal processor (ISP) — has numerous parameters (variables). This means that a massive and convoluted parameter space controls the image quality for each camera configuration.

Traditional human-based camera system tuning can involve weeks of lab tuning combined with months of field and subjective tuning. The sad part of all of this is that there’s no guarantee of results when it comes to computer vision applications employing artificial intelligence (AI) and machine learning (ML). The problem is that tuning a camera system for a computer vision application is a completely different “kettle of fish,” as it were, as compared to tuning an image or video stream for human consumption. 

The bottom line is that humans are almost certainly not the best judges of the way in which an AI/ML system likes to see its images. The solution here is to let the AI/ML system judge for itself or, at least, let Atlas determine how close the AI/ML system is coming to what is required, using human-supplied metadata as the “ground truth” state for comparison. Furthermore, employing evolutionary algorithms allows Atlas to explore the solution space to fine-tune the camera’s system variables so as to automatically maximize the results of the computer vision application that’s using the system.
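To make the idea concrete, here's a toy sketch of how an evolutionary algorithm can search a camera parameter space: keep the fittest settings, mutate them, and repeat. The parameter names and the fitness function below are invented for illustration (in a real system, fitness would mean "run the vision pipeline with these ISP settings and score its output against the ground-truth labels"); this is not how Atlas itself is implemented.

```python
# Toy (mu + lambda)-style evolutionary search over a hypothetical set of
# ISP tuning knobs. Illustration only -- the knobs, ranges, and fitness
# function are made up for this sketch.
import random

PARAM_RANGES = {            # hypothetical ISP parameters and their bounds
    "denoise_strength": (0.0, 1.0),
    "sharpen_gain":     (0.0, 2.0),
    "gamma":            (1.0, 3.0),
}

def random_individual():
    """One random point in the parameter space."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def mutate(ind, rate=0.2):
    """Copy a parameter set, nudging each knob with some probability."""
    child = dict(ind)
    for k, (lo, hi) in PARAM_RANGES.items():
        if random.random() < rate:
            child[k] = min(hi, max(lo, child[k] + random.gauss(0, 0.1 * (hi - lo))))
    return child

def fitness(ind):
    # Stand-in for "measure computer-vision accuracy against ground truth".
    # Here we simply reward proximity to an arbitrary sweet spot.
    target = {"denoise_strength": 0.6, "sharpen_gain": 1.1, "gamma": 2.2}
    return -sum((ind[k] - target[k]) ** 2 for k in target)

def evolve(generations=50, pop_size=20, survivors=5):
    """Keep the best settings each generation; refill with mutated copies."""
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:survivors]                 # elitism: best always survive
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - survivors)]
    return max(pop, key=fitness)

best = evolve()
```

Because the best individuals are carried over unchanged each generation, the top fitness can only improve or hold steady, which is exactly the "explore and fine-tune" behavior described above.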

A few months after the aforementioned column, I returned with a follow-up article: Eos Embedded Perception Software Sees All. I have to admit that this one was pretty amazing. We started by watching a video showing AAA Pedestrian-Detection ADAS Testing. Be warned, this is not for the faint of heart. I know that — after watching this video — if anyone were to ask me to step in front of an autonomous vehicle, I would be pretty confident they weren’t my friend.

The really scary thing about this video is that it was taken under optimum lighting conditions. Can you imagine how much worse things could get in adverse conditions like rain, hail, sleet, snow, or fog? And so we come to Eos Embedded Perception software. As described by the folks at Algolux, “Through joint design and training of the optics, image processing, and vision tasks, Eos delivers up to 3x improved accuracy across all conditions, especially in low light and harsh weather.” If you look at my earlier column, you’ll see various videos of this in action, but it was the following still image that really blew me away.

Eos-designed/trained camera system detecting like an Olympic champion (Image source: Algolux)

As you can see, this image shows a camera system designed/trained using Eos detecting people (purple boxes), vehicles (green boxes), and — what I assume to be — signs or traffic signals (blue boxes). As I noted in my earlier article, “I’ve been out walking on nights like this myself and I know how hard it can be to determine “what’s what,” so the above image impresses the socks off me (which isn’t something you want to have happen in cold weather).”
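As an aside, scoring boxes like these against human-labeled ground truth is commonly done with intersection-over-union (IoU): the overlap area of a predicted box and a labeled box, divided by the area of their union. The coordinates and the 0.5 threshold below are a common convention, not anything specific to Eos.

```python
# Minimal IoU sketch for axis-aligned boxes given as (x1, y1, x2, y2).

def iou(a, b):
    """Area of overlap divided by area of union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A predicted pedestrian box vs. its labeled ground truth:
pred = (100, 50, 180, 250)
truth = (110, 60, 190, 240)
print(f"IoU = {iou(pred, truth):.2f}")  # prints "IoU = 0.71"
```

An IoU above 0.5 is typically counted as a correct detection, which is the kind of per-box score an optimizer can aggregate into the overall accuracy it's trying to maximize.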

Moving on, the reason I’m waffling on about all this here is that I recently heard from my mate Max at Algolux (I know, that confuses me too — sometimes it feels like I’m emailing or talking to myself — and Max doesn’t like that — LOL). Anyhoo, Max ended up sharing all sorts of interesting nuggets of knowledge and tidbits of trivia with me.

We opened with the fact that Algolux has been named to the 2021 CB Insights AI 100. This is a prestigious list showcasing the 100 most promising private artificial intelligence companies in the world. According to an associated press release, “The AI 100 was selected from a pool of over 6,000 companies based on several factors including patent activity, investor quality, news sentiment analysis, market potential, partnerships, competitive landscape, team strength, and tech novelty.”

Now, it’s no secret that cameras are one of the sensors of choice for system developers of safety-critical applications, such as automotive ADAS, autonomous vehicles and robots, and video security. However, as we alluded to earlier, camera development currently relies on expert imaging teams or external image quality service companies hand-tuning camera architectures. This painstaking approach can take months, requires hard-to-find deep expertise, and is visually subjective. As such, this process does not ensure that the camera provides the optimal output for image quality or computer vision applications.

As we also noted earlier, the Atlas Camera Optimization Suite automates traditional months-long manual ISP tuning processes to maximize computer vision accuracy and image quality in only days, thereby providing an improvement of up to 100x in scalability and resource leverage. The Atlas workflow permits rapid evaluation of different camera sensors and lenses for cost reduction, best performance, or to adapt to changes in customer requirements.

So, you can only imagine my surprise and delight to hear the next tempting teaser from Max, which involved the fact that the Atlas Camera Optimization Suite is now enabled in the cloud. Even better, it supports an extended set of camera ISPs from Arm and Renesas, thereby allowing for further scalability.

In the case of SoC providers deploying Arm Mali-C71AE and Mali-C52, they can leverage the Atlas workflow to automate and significantly scale support for customers that are developing vision systems, predictably reducing ISP tuning time and program risks. For teams developing computer vision applications, Atlas can quickly determine the optimal Arm Mali ISP parameter set to achieve the highest vision accuracy, which is not possible with today’s hand-tuned ISP approaches.

Furthermore, the new cloud-enabled workflow supports the ISPs embedded in Renesas R-Car SoCs, such as the R-Car V3H and R-Car V3M for intelligent and automated driving (AD) vehicles, and the recently announced R-Car V3U ASIL D SoC for advanced driver assistance systems (ADAS) and AD systems.

In closing, as I mentioned in my previous column New Paradigms for Implementing, Monitoring, and Debugging Embedded Systems — in which we discussed the Tracealyzer and DevAlert tools from Percepio and the Luos distributed (not exactly an) operating system from Luos — I’m going to be giving a presentation at the forthcoming 2021 Embedded Online Conference (EOC). The topic of my talk is Not your Grandmother’s Embedded Systems. The reason I mention this here is that, as part of my presentation, I will be mentioning Percepio, Luos, and — of course — Algolux.

Dare I hope to have the pleasure of your company at my presentation? As always, I welcome your comments and questions (preferably relating to what you’ve read here, but I’m open to anything 🙂).
