feature article

Sticking to Plan

Javelin and Magma Move Floorplanning Towards Production

Floorplanning has become an important step in SoC design because it lets designers and managers get an early sense of what can be accomplished on a given piece of silicon. This is, of course, critical during the never-ending negotiation between design and marketing as to who’s on drugs and who’s sandbagging. It’s more or less the equivalent of doing furniture planning, where you draw a picture of a room and cut out rough scale versions of the furniture and move them around to get a rough sense of what will fit. Not particularly accurate, certainly not good enough for production, but much better than those back-of-the-envelope calculations that get more and more ambitious with every beer.

As such, however, floorplanning has been something of a dead-end operation. The design gets partitioned and assigned, and that part of the work most likely follows the design through to completion. But then the approximations and calculations start, with rough synthesis and placement and a high level of reliance on familiar blocks that have already been done before, so you kind of know what they’ll require. This work contributes to the planning and commitments of feature set, die size, and performance, but then it is largely abandoned as the “real” design gets going in earnest. Floorplanning is freely touted as a prototyping tool, not as a development tool.

There are some changes afoot, however, that are moving floorplanning either closer to or directly into the full production design flow. This is happening not so much through any magic that improves overall prescience, but rather through the tool infrastructure and the data structures that the tools can comprehend. Two recent announcements attest to this trend: the Hydra floorplanner from Magma, one of the big four EDA guys, and j360 from a small company called Javelin. Magma, being a full-flow provider, can offer a floorplanner integrated into their entire offering, but both floorplanners are available as point solutions that can be integrated into flows populated by tools from other vendors.

When you start diving into the details of floorplanning tools, it’s easy to lose sight of the big picture; the space is replete with incomprehensible TLAs* and fine-granularity concepts that are important mostly to those who spend every moment doing floorplanning. But backing out a bit, there are three major aspects that appear to contribute to the move from a planning backwater into the development mainstream: accuracy, abstraction, and – with apologies for being at a complete and tragic loss to come up with a third word starting with “a” – refinement.

To help put those three items into perspective, let’s start by looking at the typical flow of tasks as a project progresses from the early planning stage. Designs of this magnitude can in no way be handled by a single person; chunks of the design are parceled out to individual team members. So there’s a key role, what I’ll call the “dispatcher/integrator,” that holds the pieces together. Now the exact role, skillset, and identity of this individual may vary by project and even throughout the project, but the element in common throughout is that this person gets the full view of the design from the highest level. Everyone else looks only at their own portion.

To begin with, this role acts as system designer and dispatcher, doing a high-level partitioning and relative placement of key high-level blocks, and then assigning the blocks to different designers to achieve the next level of refinement. This breaking of the overall design into blocks is critical in that each block is assigned a set of “pins” – signals that will talk to other blocks – and this interconnect has an important impact on the number of global signals that will be needed.
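To make that partitioning step concrete: neither vendor publishes its data model, so purely as an illustration, here is a minimal sketch of blocks carrying pin sets, where any net touching pins on two or more blocks becomes a candidate global signal that the dispatcher/integrator must budget for. The block and net names are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model of the partitioning step described above:
# each block carries a set of "pins" (net names that talk to other blocks).
# A net appearing on more than one block is inter-block, i.e. global.

@dataclass
class Block:
    name: str
    pins: set[str] = field(default_factory=set)

def global_nets(blocks: list[Block]) -> set[str]:
    """Return the nets that span more than one block."""
    counts: dict[str, int] = {}
    for b in blocks:
        for net in b.pins:
            counts[net] = counts.get(net, 0) + 1
    return {net for net, n in counts.items() if n > 1}

cpu = Block("cpu", {"bus_addr", "bus_data", "irq"})
mem = Block("mem_ctrl", {"bus_addr", "bus_data", "dram_io"})
uart = Block("uart", {"irq", "serial_tx"})

print(sorted(global_nets([cpu, mem, uart])))  # ['bus_addr', 'bus_data', 'irq']
```

The point of the toy example is only that the global-signal count falls directly out of how the pins are assigned, which is why the article calls this early pin assignment critical.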

Given a block with a specified set of pins as an interface, a designer can use knowledge of existing blocks or high-level behavioral code to describe behavior and then give that back to the dispatcher, who can use the floorplanning tool to estimate the impact of these blocks on the overall die. The floorplanner uses various engines to run the estimates, but the key here is that these engines are not the same engines that would be used for full-on development. Why not use the actual engines? Well, generally, for improved performance: when you’re trying to do up-front planning, you want to be able to get quick results, make changes and do what-if scenarios, and iterate quickly. Production-level placers-and-routers take far too long. So the floorplanning engines provide speed at the expense of accuracy, and then – as traditionally implemented – the results of these engines are discarded once design starts in earnest.

The development proceeds with RTL designers specifying and confirming the functionality of their blocks, handing that back to the dispatcher/integrator, who stitches it all together and works with verification folks to make sure it all appears to work. Then the design gets assigned to physical designers, who take the logic functions and turn them into silicon structures. Each block is laid out and verified within the bounds allocated by the dispatcher/integrator, and then the whole thing can be tied together and “finished.” Obviously there’s a lot more in the details, but in order to avoid getting lost in those details, let’s leave it there for now.

This path from system designer to RTL designer back to integrator to physical design back to final integration still applies with the newer tools. The first thing that changes is accuracy, and the two companies have, for obvious reasons, approached it in different ways. Because Magma has a full flow, they use their production-quality engines in the floorplanner. This means that if you are using Hydra integrated into a Magma flow, the work done up front during planning is equivalent to what would be done in full development, subject only to the completeness of the design. Based on this, Magma makes the distinction that their floorplanner is the first one designed not just for prototyping, but to be valid for actual production.

Because Javelin doesn’t make production synthesis or place-and-route tools, it can’t offer the complete flow, but they tout the accuracy of their engines as having been demonstrated to lie within 5% of what the final number would be. Used as a point tool within a flow using engines from someone other than Magma, Hydra’s results also would not reflect what final production would be. So as a point tool, even though these floorplanners are more accurate, they are still positioned as prototype tools – explicitly so by Javelin.

The second critical issue is abstraction. At the beginning of the design flow, there’s only so much information known about the various components of the circuit. Where a block is being reused, it might be known rather accurately; other blocks may be macros or IP visible only as black boxes. And new circuitry is known only at a very abstract level. What has changed is that these new floorplanners can accommodate designs at various levels of abstraction. It’s no longer the case where the floorplanner takes abstract rough input and the production engine takes a complete design: Hydra and j360 can accept high-level design specs or low-level implementation specs. And various blocks may have different levels of abstraction based on the starting point or simply on how progress is being made.
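As a purely illustrative sketch of that mixed-abstraction idea (again, nothing here reflects either tool's actual internals), one floorplan database might hold blocks at different abstraction levels, each carrying an area estimate whose uncertainty margin shrinks as the block becomes more concrete. All names and numbers below are invented.

```python
from enum import Enum

# Illustrative only: one floorplan holding blocks at different
# abstraction levels, as the article describes. A black-box IP block,
# a newly estimated RTL block, and a fully implemented reused block
# coexist, each with an uncertainty margin matching its maturity.

class Abstraction(Enum):
    BLACK_BOX = 1      # IP visible only as an outline plus pins
    RTL_ESTIMATE = 2   # numbers from rough, fast synthesis
    IMPLEMENTED = 3    # placed-and-routed, real dimensions

class FloorplanBlock:
    def __init__(self, name: str, level: Abstraction,
                 area_um2: float, margin: float):
        self.name = name
        self.level = level
        self.area_um2 = area_um2
        self.margin = margin  # uncertainty factor on the estimate

    def budgeted_area(self) -> float:
        return self.area_um2 * self.margin

plan = [
    FloorplanBlock("reused_mac", Abstraction.IMPLEMENTED, 120_000, 1.00),
    FloorplanBlock("new_dsp", Abstraction.RTL_ESTIMATE, 300_000, 1.15),
    FloorplanBlock("vendor_ip", Abstraction.BLACK_BOX, 90_000, 1.25),
]
print(sum(b.budgeted_area() for b in plan))  # total die budget: 577500.0
```

The less that is known about a block, the more margin the integrator has to hold in reserve; the payoff of accepting all three levels in one tool is that the total budget can be recomputed as each block firms up.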

The third issue is related to abstraction, specifically the progressive refinement of the level of abstraction. While not eliminating the distinction between prototype and production, that line is very much blurred by the ability to refine the abstraction of the various blocks as more detail becomes available. So the same floorplanner can work with the early partially-populated blocks, and then, as the blocks are designed and implemented, the early representations can be replaced by the newer, more accurate ones for refinement.
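The refinement loop can be sketched the same way – hypothetically, with invented names and numbers, since the article does not describe either tool's interface: the floorplan database keeps the same entry for a block but swaps in measured numbers as implementation completes, so the early budget is re-checked rather than thrown away.

```python
# Hypothetical sketch of progressive refinement: replace a block's
# estimated figures with measured ones in the same floorplan database,
# returning the delta so the integrator can adjust neighboring blocks.

def refine(plan: dict, name: str, measured_area: float) -> float:
    """Swap in an implemented block's measured area; return the change."""
    old_area = plan[name]["area"]
    plan[name] = {"area": measured_area, "source": "implemented"}
    return measured_area - old_area

plan = {
    "new_dsp": {"area": 345_000.0, "source": "rtl_estimate"},
    "vendor_ip": {"area": 112_500.0, "source": "black_box"},
}
delta = refine(plan, "new_dsp", 362_000.0)
print(delta)  # 17000.0 over the early estimate; the floorplan and the
              # remaining blocks' budgets can be adjusted in the same tool
```

The essential point is that the same plan object survives from first estimate to final number, which is exactly the prototype-to-production continuity the newer tools are claiming.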

This allows for better real-time optimization between the integrator and the teams. For example, in the early stage where the system designer partitions the design and defines the interfaces, Hydra creates the global routing that will be used throughout. The physical designers will hook into those global nets when they route their blocks. But by doing the global routing early, the interfaces can be optimized early. With either tool, if later modifications are required due to some change or issue with a block as it’s being designed, that information comes back into the floorplanner, and the floorplan – and any other block’s specifications – can be adjusted. As logic is completed, and as physical designs are completed, performance, power, and die size expectations can be validated or adjusted, shapes can be refined, and any global net changes can be made, all using the same tool and database as were used in the early planning stages.

So whereas the older break between the prototype floorplan and the production floorplan happened early in the flow, now it has to happen only when moving to physical design, and only if the tool is used as a point tool. That transition never happens if Hydra is integrated into a Magma flow.

This capability helps to eliminate a big flow discontinuity during the progression from abstract idea to concrete embodiment, saving time and reducing the opportunities for introducing new errors. As to whether Javelin or Magma has the better mousetrap, well, that will be for users to decide as they review not only the broad flow and integration issues, but ease of use, price, support, and all those other things that truly separate the winners and the losers.

*Seriously?? You didn’t know that?? How uncool are you anyway?… <eyes roll, patient sigh>
It stands for “three-letter acronym.”

