
Cloud-Based Genetic Algorithms and Computer Vision Applications

Do you recall my earlier column, “When Genetic Algorithms Meet Artificial Intelligence”? This reflected my discovery that the chaps and chapesses at Algolux are using an evolutionary algorithm approach in their Atlas Camera Optimization Suite. The idea here is that, when it comes to creating a new camera system, each of the components — lens assembly, sensor, and image signal processor (ISP) — has numerous parameters (variables). This means that a massive and convoluted parameter space controls the image quality for each camera configuration.
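To get a feel for just how massive that parameter space is, consider a back-of-the-envelope calculation. The numbers below are purely hypothetical — they are not Algolux's (or any vendor's) actual parameter counts — but they illustrate why exhaustive search is a non-starter:

```python
# Hypothetical figures for illustration only -- not any vendor's
# actual counts of tunable lens/sensor/ISP parameters.
num_params = 50        # assume 50 tunable parameters across the camera system
values_per_param = 10  # assume each parameter has 10 meaningfully different settings

# Every parameter can vary independently, so the configurations multiply.
configs = values_per_param ** num_params
print(f"{configs:e} possible configurations")  # ~1e50 -- far beyond exhaustive search
```

Even with these modest assumptions, you end up with more candidate configurations than you could ever evaluate one by one, which is exactly the sort of search space where evolutionary algorithms earn their keep.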

Traditional human-based camera system tuning can involve weeks of lab tuning combined with months of field and subjective tuning. The sad part of all of this is that there’s no guarantee of results when it comes to computer vision applications employing artificial intelligence (AI) and machine learning (ML). The problem is that tuning a camera system for a computer vision application is a completely different “kettle of fish,” as it were, as compared to tuning an image or video stream for human consumption. 

The bottom line is that humans are almost certainly not the best judges of the way in which an AI/ML system likes to see its images. The solution here is to let the AI/ML system judge for itself or, at least, let Atlas determine how close the AI/ML system is coming to what is required, using human-supplied metadata as the “ground truth” state for comparison. Furthermore, employing evolutionary algorithms allows Atlas to explore the solution space to fine-tune the camera’s system variables so as to automatically maximize the results of the computer vision application that’s using the system.
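The general shape of this kind of evolutionary search is easy to sketch. What follows is a minimal toy of my own devising — not Algolux's actual algorithm — in which the `fitness` function stands in for "run the computer vision application and score its output against the human-supplied ground truth," and the genes stand in for normalized camera/ISP settings:

```python
import random

# Toy stand-in for "CV accuracy vs. ground truth": reward parameter
# vectors close to a hidden optimum. (Hypothetical values.)
OPTIMUM = [0.3, 0.7, 0.5, 0.9]

def fitness(params):
    # Higher is better; 0.0 is a perfect match.
    return -sum((p - o) ** 2 for p, o in zip(params, OPTIMUM))

def mutate(params, rate=0.1, scale=0.05):
    # Occasionally nudge a gene, clamped to the valid [0, 1] range.
    return [min(1.0, max(0.0, p + random.gauss(0, scale)))
            if random.random() < rate else p
            for p in params]

def crossover(a, b):
    # Pick each gene from one parent or the other.
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(pop_size=40, generations=60, n_params=4, elite=4):
    random.seed(42)  # deterministic for repeatability
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # keep the fitter half as parents
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - elite)]
        pop = pop[:elite] + children           # elitism: best survive untouched
    return max(pop, key=fitness)

best = evolve()
print(best)
```

Note that each candidate's fitness can be evaluated independently of the others, so this kind of search parallelizes naturally — which, as we'll see shortly, is one reason running it in the cloud makes so much sense.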

A few months after the aforementioned column, I returned with a follow-up article: “Eos Embedded Perception Software Sees All.” I have to admit that this one was pretty amazing. We started by watching a video showing AAA pedestrian-detection ADAS testing. Be warned, this is not for the faint of heart. I know that — after watching this video — if anyone were to ask me to step in front of an autonomous vehicle, I would be pretty confident they weren’t my friend.

The really scary thing about this video is that it was taken under optimum lighting conditions. Can you imagine how much worse things could get in adverse conditions like rain, hail, sleet, snow, or fog? And so we come to Eos Embedded Perception software. As described by the folks at Algolux, “Through joint design and training of the optics, image processing, and vision tasks, Eos delivers up to 3x improved accuracy across all conditions, especially in low light and harsh weather.” If you look at my earlier column, you’ll see various videos of this in action, but it was the following still image that really blew me away.

Eos-designed/trained camera system detecting like an Olympic champion (Image source: Algolux)

As you can see, this image shows a camera system designed/trained using Eos detecting people (purple boxes), vehicles (green boxes), and — what I assume to be — signs or traffic signals (blue boxes). As I noted in my earlier article, “I’ve been out walking on nights like this myself and I know how hard it can be to determine ‘what’s what,’ so the above image impresses the socks off me (which isn’t something you want to have happen in cold weather).”

Moving on, the reason I’m waffling on about all this here is that I recently heard from my mate Max at Algolux (I know, that confuses me too — sometimes it feels like I’m emailing or talking to myself — and Max doesn’t like that — LOL). Anyhoo, Max ended up sharing all sorts of interesting nuggets of knowledge and tidbits of trivia with me.

We opened with the fact that Algolux has been named to the 2021 CB Insights AI 100. This is a prestigious list showcasing the 100 most promising private artificial intelligence companies in the world. According to the accompanying press release, “The AI 100 was selected from a pool of over 6,000 companies based on several factors including patent activity, investor quality, news sentiment analysis, market potential, partnerships, competitive landscape, team strength, and tech novelty.”

Now, it’s no secret that cameras are one of the sensors of choice for system developers of safety-critical applications, such as automotive ADAS, autonomous vehicles and robots, and video security. However, as we alluded to earlier, camera development currently relies on expert imaging teams or external image quality service companies hand-tuning camera architectures. This painstaking approach can take months, requires hard-to-find deep expertise, and is visually subjective. As such, this process does not ensure that the camera provides the optimal output for image quality or computer vision applications.

As we also noted earlier, the Atlas Camera Optimization Suite automates traditional months-long manual ISP tuning processes to maximize computer vision accuracy and image quality in only days, thereby providing an improvement of up to 100x in scalability and resource leverage. The Atlas workflow permits rapid evaluation of different camera sensors and lenses for cost reduction, best performance, or to adapt to changes in customer requirements.

So, you can only imagine my surprise and delight to hear the next tempting teaser from Max, which involved the fact that the Atlas Camera Optimization Suite is now enabled in the cloud. Even better, it supports an extended set of camera ISPs from Arm and Renesas, thereby allowing for further scalability.

In the case of SoC providers deploying Arm Mali-C71AE and Mali-C52, they can leverage the Atlas workflow to automate and significantly scale support for customers that are developing vision systems, predictably reducing ISP tuning time and program risks. For teams developing computer vision applications, Atlas can quickly determine the optimal Arm Mali ISP parameter set to achieve the highest vision accuracy, which is not possible with today’s hand-tuned ISP approaches.

Furthermore, the new cloud-enabled workflow supports the ISPs embedded in Renesas R-Car SoCs, such as the R-Car V3H and R-Car V3M for intelligent and automated driving (AD) vehicles, and the recently announced R-Car V3U ASIL D SoC for advanced driver assistance systems (ADAS) and AD systems.

In closing, as I mentioned in my previous column, “New Paradigms for Implementing, Monitoring, and Debugging Embedded Systems” — in which we discussed the Tracealyzer and DevAlert tools from Percepio and the Luos distributed (not exactly an) operating system from Luos — I’m going to be giving a presentation at the forthcoming 2021 Embedded Online Conference (EOC). The topic of my talk is “Not Your Grandmother’s Embedded Systems.” The reason I mention this here is that, as part of my presentation, I will be mentioning Percepio, Luos, and — of course — Algolux.

Dare I hope to have the pleasure of your company at my presentation? As always, I welcome your comments and questions (preferably relating to what you’ve read here, but I’m open to anything 🙂).
