
Synthesizing TLM Models

Architectural exploration and design implementation are all too often two separate tasks handled by two completely separate groups. Once a high-level TLM model has been tested and approved, it goes on the shelf while a designer starts from scratch to generate a synthesizable design.

While this may sound simply wasteful, things aren’t so simple. TLM models are abstract, dealing in busses and transactions. An RTL design has to specify signals at the individual level – the TLM model doesn’t have that detail, so it tends not to be particularly useful as a starting seed for implementation.

Calypto is trying to address that, as they described in a discussion at DVCon. They’re making available a set of interfaces that can be used at multiple levels of abstraction – and, specifically, that can be used to go from architecture to synthesis. They’re starting with AXI, but plan to add others.

The tie-in with Mentor here is obvious. First off, you may remember that Calypto bought Mentor’s Catapult C, so they have a stake in the high-level synthesis game now. Second, Mentor has a technology called Multi-View that allows a single model to be viewed at different levels of abstraction for different purposes.

These technologies converge in this IP announcement. You can find more in their release.
