feature article

Advancing FPGA Design Efficiency: A Proven Standard Solution

For decades the SoC design community has steadily lost ground in the battle to match advances in design-technology productivity with the growth of available silicon technology. The silicon evolution roadmap has long been chronicled via Moore’s Law, so how could the design community have allowed the well-known “Design Gap” to emerge? Understanding how we got to this point makes it easier to answer that question and to make reasonable adjustments for the future, especially if there are obvious lessons to be learned from the evolution of analogous industries.

Even in the advanced design field of FPGAs today, most FPGA IP is vendor-supplied and locks the purchasing company into that vendor’s ecosystem. This paradigm limits how much the industry can grow. For the market to grow as a whole, FPGA IP vendors must support and adopt industry standards. By embracing more standards in FPGA design, we can realize the time-to-market and cost advantages that standard FPGA IP promises.

Despite management’s best attempts, marketing programs, and training, FPGA development remains “product driven” in its thinking and in its implementation of design methodology, tools, and flows. This is largely unavoidable, since the entire practice of automation has evolved from, and been built around, the immediate gratification of “just automating what the designer is doing”: simply take the activity, implement it in code or hardware, and string it to the next activity in the design chain.

That worked well in the beginning and did make the design process faster, though it initially left the integration of tools as a largely manual activity, patched into place as so-called integrated tool flows evolved. New tools are individually grafted-in as older versions break from the demands of increasing design complexity, and so tool “integration” has progressed as best it can. “Best of breed” tool alternatives, often with incompatible interfaces, are sometimes brute-forced into flows, especially when the alternative is a prolonged wait for a proprietary solution from a vendor to which you are often inevitably tied.

Hence the existence of the design gap is now entirely predictable. Today’s design technologies are operational but segmented and compartmentalized around design-function specialists, and they clearly lag what the FPGA community needs to create products with the right features, on time, while fully utilizing the available silicon technology. If we had done better with the design process, could we have demanded and achieved faster evolution in silicon (and other) implementation technology?

Nevertheless, there are signs of industry maturation on the horizon. The last five years have seen a clear acknowledgement among major design teams and corporate visionaries that things need to change. We have moved (see Figure 1) from a mentality of brute-force building and management of massive SoC design teams and an “integrate or die” approach, through “collaborate or die,” in which one or two major players pool resources on critical, high-profile programs, and now to “standardize or die.” In this more mature phase, groups of companies are establishing nimble consortiums that can broadly address design-infrastructure needs that have grown too large for any single company to meet alone.

These consortiums include newly prominent groups such as OCP-IP, SPIRIT, and ECSI out of Europe, alongside the much older, original efforts of VSIA in the U.S. and STARC in Japan. The new arrivals are up, running, and building structure, while the older protagonists have evolved and continue to refine and tune their roles for the memberships they serve.

So what does the model look like that we have followed, and what might we do in the future to better advance SoC design? Consider the “Industry Development Model” (see Figure 2). Here we plainly see the evolution of how products (P) progress. Most companies start with a well-defined, native product and make as much headway as possible selling it in its minimum form, until competition dictates that they must bring more to the table to survive and thrive.

Thus, products next evolve to a value-added form, and competition drops off as competitors fail to keep up or refocus their efforts on more viable ground. Still, competition stiffens further, and another evolution takes place in which “whole products” emerge (as popularized by Geoffrey Moore). A whole product is an even more compelling offering: the purchaser can often replace a whole set of products and internal operations or development by buying a superior external solution. The purchaser is given a simpler “make or buy” alternative, provided they trust the provider enough to outsource a once-internal solution. Again (per our model), from the provider’s perspective, the number of competitors drops off and a few providers remain, but in a much stronger and more prominent industry position than when they entered that market with their simple, standalone native products.

But what does the industry get as a result? Generally, we are left with a few strong players competing to recruit other related providers into their own infrastructures and make their proprietary products the only real business alternative. Herein lies the problem: what counts as an ideal infrastructure is determined by a commercial, proprietary entity. Great solutions can and will emerge, but if they are not well suited to the diverse needs of users, this can be a real setback.

A natural evolution of the whole product occurs when that product itself supports a standard (S). When this transition happens, the total number of infrastructures needing support shrinks dramatically. Indeed, the whole products themselves can become part of the “standard” infrastructure. Suddenly we can center efforts on a single infrastructure and bring clarity and focus to a community of providers. A shared infrastructure can dramatically focus industry efforts and even help proprietary whole-product providers, by letting them concentrate on their product without fighting for an industry following to build what is often a token “complete” solution.

Thus, everyone can focus on known needs, and all players can expand their individual core offerings with the diversity that a much larger market can embrace. So, how do we evolve to the infrastructure model? In general, when we follow the product-driven corporate path we see good technical progress and corporate growth, but if we wish to truly accelerate the growth of an industry, we must evolve with specific direction and shared effort. The bottom line: an industry grows and matures faster when there are shared, common resources and practices, and this can ONLY come through the common ground of standards.

The inevitable question is: “How do we accelerate this process?” Unfortunately, in many ways this is an evolutionary event. As the discussion above shows, there is already progress afoot, with real cross-company coordination. Lessons for this maturation process are all around us in nearly every adjacent technological space. As industries mature, the major players serve segments of those industries. So, for a new generation of technology to emerge, it is normal to see major players get together and define standards for exchanging work. We see this in aerospace, telecommunications, shipbuilding, and indeed all major industries where problems grow so large that shared needs ultimately outweigh individual goals. Major players seek competitive advantage by sharing in order to grow, and thus in the larger total markets created as a result. If the FPGA community bands together to support and adopt open standards, pressure is put on the larger companies to integrate FPGAs in a uniform way.

What does this mean for the design industry? Expect to see more collaboration and standardization between major users and providers; hope to see real collaboration and coordination between the groups. As we inevitably embrace the need to advance design productivity more rapidly and move onto a different design-integration path, we will abandon the “product first” approach and build on standards, capitalizing on the massive shared infrastructure they inherently produce.
