editor's blog

Switching Classes

High-level synthesis (HLS) has been all about C (or C++) to RTL. But when you’re validating your algorithm, it’s easier to work at the TLM level so that thorough simulations can complete in your lifetime. Once that’s done and you’re ready to create gates, you need more than a TLM model; you need a detailed pin-level model, and so far building that has been a manual job.

Mentor is trying to make this easier by separating out the interface portion of the module and then allowing either a TLM or a pin-level interface. The tool can synthesize the pin interface from the TLM interface using a library of interface modules. So, currently, if you’re using an AMBA bus interface like AHB or a standard point-to-point interface, it will take the abstract TLM interface and convert it into the detailed pin interface needed to create RTL.

In this manner, you interface your abstract TLM model with the virtual prototype as usual; when done, you synthesize the pin interface, change the interface class reference in the C algorithm, and then convert the resulting design to RTL.
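Under the hood, “switching classes” is literally that: the algorithm is written against an interface class, and you change which class it refers to. Mentor’s actual library classes aren’t detailed in the release, so the names and handshake below are purely hypothetical, but a minimal C++ sketch of the pattern might look like this:

```cpp
// Hypothetical sketch: the algorithm is written against an abstract
// interface class, so moving from TLM simulation to pin-accurate
// synthesis only means changing which interface class is referenced.

#include <cstdint>
#include <queue>

// TLM-style interface: whole transactions, fast to simulate.
class tlm_stream_in {
public:
    uint32_t read() {
        uint32_t v = fifo.front();
        fifo.pop();
        return v;
    }
    void push_for_test(uint32_t v) { fifo.push(v); }  // testbench hook
private:
    std::queue<uint32_t> fifo;
};

// Pin-level interface: models the handshake signals (valid/ready/data)
// that the synthesized RTL will expose. In practice a vendor library
// supplies this class; the version here is only illustrative, and it
// would be driven by a clocked, pin-accurate wrapper.
class pin_stream_in {
public:
    uint32_t read() {
        ready = true;
        while (!valid) { /* wait for the producer to assert valid */ }
        ready = false;
        return data;
    }
    volatile bool valid = false;
    bool ready = false;
    uint32_t data = 0;
};

// The algorithm itself never changes; only the interface class does.
template <typename StreamIn>
uint32_t accumulate(StreamIn& in, int n) {
    uint32_t sum = 0;
    for (int i = 0; i < n; ++i)
        sum += in.read();
    return sum;
}

int main() {
    // TLM configuration for fast algorithm validation...
    tlm_stream_in tlm_in;
    for (uint32_t i = 1; i <= 4; ++i) tlm_in.push_for_test(i);
    uint32_t result = accumulate(tlm_in, 4);

    // ...and later, "switching classes": the same algorithm would be
    // bound to the pin-level interface before RTL generation.
    // pin_stream_in pin_in;
    // result = accumulate(pin_in, 4);
    return result == 10 ? 0 : 1;
}
```

The point of the pattern is that `accumulate` never changes; only the interface class bound to it does, which is what lets the same C source drive both fast TLM validation and pin-accurate RTL generation.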

More detail is available in their release.
