
Cloud-Based Genetic Algorithms and Computer Vision Applications

Do you recall my earlier column When Genetic Algorithms Meet Artificial Intelligence? This reflected my discovery that the chaps and chapesses at Algolux are using an evolutionary algorithm approach in their Atlas Camera Optimization Suite. The idea here is that, when it comes to creating a new camera system, each of the components — lens assembly, sensor, and image signal processor (ISP) — has numerous parameters (variables). This means that a massive and convoluted parameter space controls the image quality for each camera configuration.

Traditional human-based camera system tuning can involve weeks of lab tuning combined with months of field and subjective tuning. The sad part of all of this is that there’s no guarantee of results when it comes to computer vision applications employing artificial intelligence (AI) and machine learning (ML). The problem is that tuning a camera system for a computer vision application is a completely different “kettle of fish,” as it were, as compared to tuning an image or video stream for human consumption. 

The bottom line is that humans are almost certainly not the best judges of the way in which an AI/ML system likes to see its images. The solution here is to let the AI/ML system judge for itself or, at least, let Atlas determine how close the AI/ML system is coming to what is required, using human-supplied metadata as the “ground truth” state for comparison. Furthermore, employing evolutionary algorithms allows Atlas to explore the solution space to fine-tune the camera’s system variables so as to automatically maximize the results of the computer vision application that’s using the system.
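To make the idea concrete, here is a minimal sketch of that kind of evolutionary search. The parameter names, ranges, and fitness function are all illustrative stand-ins (this is not Atlas's API or algorithm); in the real workflow, the fitness score would come from running the computer vision application on images processed with each candidate ISP configuration and comparing the results against the human-supplied ground-truth metadata.

```python
import random

random.seed(42)  # reproducible run for this illustration

# Hypothetical ISP tuning knobs with bounded ranges (names are made up).
PARAM_RANGES = {
    "denoise_strength": (0.0, 1.0),
    "sharpen_amount":   (0.0, 2.0),
    "gamma":            (1.0, 3.0),
    "color_saturation": (0.5, 1.5),
}

def random_config():
    """One candidate ISP configuration, sampled uniformly from the ranges."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def fitness(config):
    """Stand-in for 'vision accuracy vs. ground truth' with this config.

    In reality this would run the detector on re-processed images and score
    it against labeled ground truth; here it is a toy function with a known
    optimum, purely so the search has something to climb.
    """
    target = {"denoise_strength": 0.3, "sharpen_amount": 1.1,
              "gamma": 2.2, "color_saturation": 1.0}
    return -sum((config[k] - target[k]) ** 2 for k in config)

def crossover(a, b):
    """Child takes each parameter from one parent at random."""
    return {k: random.choice((a[k], b[k])) for k in a}

def mutate(config, rate=0.2):
    """Occasionally nudge a parameter, clamped back into its legal range."""
    out = dict(config)
    for k, (lo, hi) in PARAM_RANGES.items():
        if random.random() < rate:
            out[k] = min(hi, max(lo, out[k] + random.gauss(0, 0.1 * (hi - lo))))
    return out

def evolve(generations=50, pop_size=30, elite=5):
    """Elitist genetic algorithm over the ISP parameter space."""
    pop = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]  # keep the best configs unchanged
        pop = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - elite)
        ]
    return max(pop, key=fitness)

best = evolve()
```

The point of the exercise is that nothing in the loop requires a human to look at a single image: the search is driven entirely by the score, which is exactly what makes it suitable for tuning toward a machine consumer rather than a human one.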

A few months after the aforementioned column, I returned with a follow-up article: Eos Embedded Perception Software Sees All. I have to admit that this one was pretty amazing. We started by watching a video showing AAA Pedestrian-Detection ADAS Testing. Be warned, this is not for the faint of heart. I know that — after watching this video — if anyone were to ask me to step in front of an autonomous vehicle, I would be pretty confident they weren’t my friend.

The really scary thing about this video is that it was taken under optimum lighting conditions. Can you imagine how much worse things could get in adverse conditions like rain, hail, sleet, snow, or fog? And so we come to Eos Embedded Perception software. As described by the folks at Algolux, “Through joint design and training of the optics, image processing, and vision tasks, Eos delivers up to 3x improved accuracy across all conditions, especially in low light and harsh weather.” If you look at my earlier column, you’ll see various videos of this in action, but it was the following still image that really blew me away.

Eos-designed/trained camera system detecting like an Olympic champion (Image source: Algolux)

As you can see, this image shows a camera system designed/trained using Eos detecting people (purple boxes), vehicles (green boxes), and — what I assume to be — signs or traffic signals (blue boxes). As I noted in my earlier article, “I’ve been out walking on nights like this myself and I know how hard it can be to determine ‘what’s what,’ so the above image impresses the socks off me (which isn’t something you want to have happen in cold weather).”
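For readers wondering how detections like those boxes get scored against ground truth in the first place, the standard building block is intersection-over-union (IoU) between a predicted box and a labeled one, with a threshold deciding whether a detection counts as a hit. A minimal sketch (not Algolux's actual metric, just the textbook version):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (may be empty).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_true_positive(pred_box, gt_box, threshold=0.5):
    """Common convention: a detection 'matches' ground truth at IoU >= 0.5."""
    return iou(pred_box, gt_box) >= threshold
```

Aggregating hits and misses like this across a labeled image set is what turns "how well does the camera see?" into a single number an optimizer can push on.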

Moving on, the reason I’m waffling on about all this here is that I recently heard from my mate Max at Algolux (I know, that confuses me too — sometimes it feels like I’m emailing or talking to myself — and Max doesn’t like that — LOL). Anyhoo, Max ended up sharing all sorts of interesting nuggets of knowledge and tidbits of trivia with me.

We opened with the fact that Algolux has been named to the 2021 CB Insights AI 100. This is a prestigious list showcasing the 100 most promising private artificial intelligence companies in the world. According to an associated press release, “The AI 100 was selected from a pool of over 6,000 companies based on several factors including patent activity, investor quality, news sentiment analysis, market potential, partnerships, competitive landscape, team strength, and tech novelty.”

Now, it’s no secret that cameras are one of the sensors of choice for system developers of safety-critical applications, such as automotive ADAS, autonomous vehicles and robots, and video security. However, as we alluded to earlier, camera development currently relies on expert imaging teams or external image quality service companies hand-tuning camera architectures. This painstaking approach can take months, requires hard-to-find deep expertise, and is visually subjective. As such, this process does not ensure that the camera provides the optimal output for image quality or computer vision applications.

As we also noted earlier, the Atlas Camera Optimization Suite automates traditional months-long manual ISP tuning processes to maximize computer vision accuracy and image quality in only days, thereby providing an improvement of up to 100x in scalability and resource leverage. The Atlas workflow permits rapid evaluation of different camera sensors and lenses for cost reduction, best performance, or to adapt to changes in customer requirements.

So, you can only imagine my surprise and delight to hear the next tempting teaser from Max, which involved the fact that the Atlas Camera Optimization Suite is now enabled in the cloud. Even better, it supports an extended set of camera ISPs from Arm and Renesas, thereby allowing for further scalability.

In the case of SoC providers deploying Arm Mali-C71AE and Mali-C52, they can leverage the Atlas workflow to automate and significantly scale support for customers that are developing vision systems, predictably reducing ISP tuning time and program risks. For teams developing computer vision applications, Atlas can quickly determine the optimal Arm Mali ISP parameter set to achieve the highest vision accuracy, which is not possible with today’s hand-tuned ISP approaches.

Furthermore, the new cloud-enabled workflow supports the ISPs embedded in Renesas R-Car SoCs, such as the R-Car V3H and R-Car V3M for intelligent and automated driving (AD) vehicles, and the recently announced R-Car V3U ASIL D SoC for advanced driver assistance systems (ADAS) and AD systems.

In closing, as I mentioned in my previous column New Paradigms for Implementing, Monitoring, and Debugging Embedded Systems — in which we discussed the Tracealyzer and DevAlert tools from Percepio and the Luos distributed (not exactly an) operating system from Luos — I’m going to be giving a presentation at the forthcoming 2021 Embedded Online Conference (EOC). The topic of my talk is Not your Grandmother’s Embedded Systems. The reason I mention this here is that, as part of my presentation, I will be mentioning Percepio, Luos, and — of course — Algolux.

Dare I hope to have the pleasure of your company at my presentation? As always, I welcome your comments and questions (preferably relating to what you’ve read here, but I’m open to anything) 🙂
