
Advancing FPGA Design Efficiency: A Proven Standard Solution

For decades the SoC design community has consistently lost ground in the battle to match advances in design-technology productivity with the growth of available silicon technology. The silicon evolution roadmap has long been chronicled via Moore’s law, so how could the design community allow the well-known “Design Gap” to emerge? Understanding how we got to this point will make it easier to answer that question and make reasonable adjustments for the future, especially if there are obvious things to be learned from the evolution of many analogous industries.

Even in the advanced design field of FPGAs today, most FPGA IP is vendor-supplied and locks the purchasing company into a single vendor. This paradigm limits how much the industry can grow. For the market to grow as a whole, FPGA IP vendors must support and adopt industry standards. By embracing more standards in FPGAs, we can realize the time-to-market and cost advantages that standard FPGA IP promises.

Despite any and all management attempts, marketing programs, and training, FPGA development remains “product driven” in its thinking and in the implementation of design methodology, tools, and flows. This is largely unavoidable, as the entire practice of automation has evolved from, and been built around, the immediate gratification of “just automating what the designer is doing.” Simply take the activity, implement it in code or hardware, and string it to the next activity in the design chain.

That worked well in the beginning and did make the design process faster, though it initially left tool integration a largely manual activity, patched into place as so-called integrated tool flows evolved. New tools are individually grafted in as older versions break under the demands of increasing design complexity, and so tool “integration” has progressed as best it can. “Best of breed” tool alternatives, often with incompatible interfaces, are sometimes brute-forced into flows, especially when the alternative is a prolonged wait for a proprietary solution from a vendor to which you are often inevitably tied.

Hence the existence of the design gap is now entirely predictable. Today’s design technologies are operational, segmented, and compartmentalized for design-function specialists, clearly lagging what the FPGA community needs to create products with the right features, on time, and with full use of the available silicon technology. If we had done better with the design process, could we have demanded and achieved faster evolution in silicon (and other) implementation technology?

Nevertheless, there are signs of industry maturation on the horizon. The last five years have seen a clear acknowledgement among major design teams and corporate visionaries that things need to change. We have moved (see Figure 1) from a mentality of seeking solutions through the brute-force building and management of massive SoC design teams and an “integrate or die” approach, through “collaborate or die,” where one or two major players pool resources on critical, high-profile programs, and now to “standardize or die.” In this more mature phase, groups of companies are establishing nimble consortiums that can broadly address design-infrastructure needs that have grown too large for any one company to meet alone.

These consortiums are led by rapidly rising groups such as OCP-IP, SPIRIT, and ECSI out of Europe, and the trend is also evidenced by the earlier appearances of VSIA in the U.S. and STARC in Japan. The new arrivals are up, running, and building structure, while the older protagonists have evolved and continue to refine and tune their roles for the membership they serve.

So what does the model look like that we have followed, and what might we do in the future to better advance SoC design? Consider the “Industry Development Model” (see Figure 2). Here we plainly see the evolution of how products (P) progress. Most companies start with a well-defined, native product and make as much headway as possible selling it in its minimum form until competition dictates they must bring more to the table to survive and thrive.

Thus, products next evolve to a value-added form, and competition drops off as competitors fail to keep up or shift their efforts to more viable ground for their circumstances. Still, competition stiffens further, and another evolution takes place wherein “whole products” emerge (as popularized by Geoffrey Moore). A whole product is an even more compelling offering: the purchaser can often replace a whole set of products and internal operations or development by buying a superior external solution. The purchaser is given a simpler “make or buy” choice, provided they trust the provider enough to outsource a once-internal solution. Again (per our model), from the provider’s perspective the number of competitors drops off and a few providers remain, but in a much stronger and more prominent industry position than when they entered that market with their simple, standalone native product.

But what does Industry get as a result? Generally, we then have a few strong players competing to draw other related providers into their own infrastructures and make their proprietary product the only real business alternative. Herein lies the problem: what is considered an ideal infrastructure is determined by a commercial, proprietary entity. Great solutions can and will emerge, but if they are not well suited to the diverse needs of users, this can be a real setback.

A natural evolution of the whole product occurs when that product itself supports a standard (S). When this transition happens, the total number of infrastructures needing support shrinks dramatically. Indeed, the whole products themselves can become part of the “Standard” infrastructure. Suddenly we can center efforts on a single infrastructure and bring clarity and focus to a community of providers. A shared infrastructure can dramatically focus industry efforts and indeed help proprietary whole-product providers, by letting them concentrate on their product rather than fighting for an industry following to build what is often a token “complete” solution.

Thus, everyone can focus on known needs, and all players can expand their individual core offerings with the diversity a much larger market can embrace. So, how do we evolve to the infrastructure model? In general, when we follow the product-driven corporate path we see good technical progress and corporate growth, but if we wish to truly accelerate the growth of an industry, we must evolve with specific direction and shared effort. The bottom line: industry grows and matures faster when there are shared and common resources and practices, and this can ONLY come through the common ground of standards.

The inevitable question is: “How do we accelerate this process?” Unfortunately, in many ways this is an evolutionary event. As the discussion above shows, there is already progress afoot, with real cross-company coordination. Lessons for this maturation process are all around us in almost all adjacent technological spaces. As industries mature, the major players serve segments of those industries, so for a new generation of technology to emerge, it is normal to see major players get together and define standards to exchange work. We see this in aerospace, telecommunications, shipbuilding, and indeed every major industry where problems grow so large that shared needs ultimately outweigh individual goals. Major players look for competitive advantage by sharing in order to grow, and this in turn creates larger total markets. If the FPGA community bands together to support and adopt open standards, pressure is put on the larger companies to integrate FPGAs and bring them together in a uniform way.

What does it mean for the design industry? Expect to see more collaboration and standardization between major users and providers, and hope to see real collaboration and coordination between the groups. As we inevitably embrace the need to advance design productivity more rapidly and get on a different design-integration path, we will abandon the “product first” approach and build on standards, capitalizing on the massive shared infrastructure they inherently produce.

