
FPGA IP: Keeping Your Device Options Open

The use of third-party intellectual property (IP) is all but a necessity for most FPGA designs today. The complexity of platform designs, as symbolized in Figure 1, the sophistication of modern FPGA architectures, and time-to-market pressures combine to force engineers to call on proven IP elements for at least some of the standard functionality within their emerging designs. They rely on IP ranging from storage elements and arithmetic cores to system-level IP offering broader functionality, including processors, interfaces, peripherals, and more.

The design challenge is twofold: first, to use these types of IP effectively in the current project; and second, to re-use them with similar effectiveness in future projects even if the target device changes. A design team has to think about more than just one upcoming product roll-out. What about the second and third versions, with their inevitable need for new and improved capabilities? To retain its competitiveness, a system house needs to capitalize on the competitiveness of the whole FPGA vendor market, where vendors continually leapfrog each other with the biggest, fastest, cheapest, or most power-efficient devices. It's a buyer's market. The reason for switching devices may be technical, economic, or something as simple as a valued business relationship with an FPGA vendor or distribution partner. By being able to re-target the same design to another device with minimal effort, a design team can select the best silicon for each successive project. This degree of portability requires a device-neutral design methodology, and if a single FPGA vendor's proprietary IP is used extensively, that methodology gets complicated.


Figure 1: FPGA designs use a variety of third-party IP

FPGA Vendor IP is Everywhere

Virtually every FPGA vendor offers proprietary design tools that engineers can use to develop products around its platforms. In addition to tools, FPGA vendors offer IP ranging from memories and simple arithmetic functions to sophisticated system-level cores such as processors, interfaces, and peripherals. Some of these cores can be generated through the vendor's IP generation software, while others must be obtained from the vendor separately; for some devices, the processor and/or interface IP is built into the silicon itself. For cores that can be generated, the user describes the desired functionality and the tool produces a post-synthesis, gate-level model. Cores obtained separately from the vendor are typically delivered in gate-level format as well. This is helpful in that the user knows exactly how many FPGA resources any given core consumes.
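
To make that concrete, below is a hypothetical sketch of how such a vendor-generated core typically appears in a design. The core name acme_fifo_32x512 is invented for illustration; it stands in for a FIFO produced by one vendor's IP generator and delivered as a gate-level netlist or black box that the user instantiates directly.

    module rx_buffer (
        input  wire        clk,
        input  wire        rst,
        input  wire        wr_en,
        input  wire [31:0] wr_data,
        input  wire        rd_en,
        output wire [31:0] rd_data,
        output wire        full,
        output wire        empty
    );

        // "acme_fifo_32x512" is a hypothetical vendor-generated core, delivered
        // as a gate-level netlist or black box. Its port names and behavior are
        // fixed by the vendor's generator, so the surrounding design is now tied
        // to that vendor's device family.
        acme_fifo_32x512 u_fifo (
            .clk   (clk),
            .rst   (rst),
            .wr_en (wr_en),
            .din   (wr_data),
            .rd_en (rd_en),
            .dout  (rd_data),
            .full  (full),
            .empty (empty)
        );

    endmodule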

But the greater benefit of FPGA vendor IP elements is that they are low in incremental cost, if not already included with the development software. For many teams this matters: it provides a quick and inexpensive path to getting a design up and running. Opting for this kind of low-cost IP, however, has long-term consequences.

The primary, and substantial, disadvantage is that the IP is only good for the targeted FPGA device family or vendor. Should there be a need to switch devices, or simply to compare implementation results across multiple devices, the IP generation and integration process must be repeated for each new target: the designer must obtain equivalent IP from the respective device vendor and integrate it into the design all over again. This is a manual process, and one that quickly becomes tedious, as the sketch below suggests. Facing all this repetitive work, designers think twice before switching FPGA families. Like other vendor-specific tools, FPGA cores are low-cost solutions that come at a price: vendor dependence.
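
Continuing the hypothetical example above, here is a rough sketch of what that repetition looks like: the same wrapper now carries one hand-written branch per target. Both acme_fifo_32x512 and zenith_fifo_v2 are invented names standing in for two vendors' generated cores, each with its own port names, reset polarity, and generation flow.

    module rx_buffer (
        input  wire        clk,
        input  wire        rst,
        input  wire        wr_en,
        input  wire [31:0] wr_data,
        input  wire        rd_en,
        output wire [31:0] rd_data,
        output wire        full,
        output wire        empty
    );

    `ifdef TARGET_ACME
        // Core generated by vendor A's tools for its device family (hypothetical name).
        acme_fifo_32x512 u_fifo (
            .clk (clk),     .rst (rst),
            .wr_en (wr_en), .din (wr_data),
            .rd_en (rd_en), .dout (rd_data),
            .full (full),   .empty (empty)
        );
    `else
        // Equivalent core regenerated and re-integrated for vendor B's family
        // (hypothetical name); note the different port names and reset polarity.
        zenith_fifo_v2 u_fifo (
            .wclk (clk),       .rclk (clk),      .reset_n (~rst),
            .write (wr_en),    .data_in (wr_data),
            .read (rd_en),     .data_out (rd_data),
            .fifo_full (full), .fifo_empty (empty)
        );
    `endif

    endmodule

Every additional target family means another generated core, another branch, and another round of integration and verification.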

In fact, certain IP elements entrench a design more than others. Changing a processor, for example, not only presents integration challenges but also requires modifying the software written for that processor, and perhaps even reworking the hardware logic that surrounds it. A design team is likely to think long and hard before switching device vendors in such a case, even when doing so is in the product's best interest.

Take a Target-Neutral Approach

Implementing a vendor-independent IP approach is more easily said than done, so third-party vendors supporting the design community need to offer not only technology solutions but also a healthy measure of industry collaboration. IP generation is an important component here, combining vendor-independent RTL inference with the predictability of core generation. Within a single design flow, such as the synthesis environment shown in Figure 2, designers should be able to select from an extensive library of cores, configure them as needed, and still meet uncompromised quality-of-results (QoR) requirements.

 


Figure 2: Library and Catalog of Vendor-Independent IP
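
By way of contrast, here is a minimal sketch (not any particular vendor's library) of what vendor-independent RTL inference can look like: the same FIFO is described behaviorally, with no vendor primitives in the source, so a synthesis tool can map the memory array onto whichever block-RAM resources the chosen device provides.

    module inferred_fifo #(
        parameter DATA_W = 32,
        parameter DEPTH  = 512,
        parameter ADDR_W = 9          // clog2(DEPTH)
    ) (
        input  wire              clk,
        input  wire              rst,
        input  wire              wr_en,
        input  wire [DATA_W-1:0] wr_data,
        input  wire              rd_en,
        output reg  [DATA_W-1:0] rd_data,
        output wire              full,
        output wire              empty
    );

        reg [DATA_W-1:0] mem [0:DEPTH-1];   // behavioral memory; typically mapped to block RAM
        reg [ADDR_W:0]   wr_ptr, rd_ptr;    // one extra bit distinguishes full from empty

        assign empty = (wr_ptr == rd_ptr);
        assign full  = (wr_ptr[ADDR_W-1:0] == rd_ptr[ADDR_W-1:0]) &&
                       (wr_ptr[ADDR_W]     != rd_ptr[ADDR_W]);

        always @(posedge clk) begin
            if (rst) begin
                wr_ptr <= 0;
                rd_ptr <= 0;
            end else begin
                if (wr_en && !full) begin
                    mem[wr_ptr[ADDR_W-1:0]] <= wr_data;
                    wr_ptr <= wr_ptr + 1'b1;
                end
                if (rd_en && !empty) begin
                    rd_data <= mem[rd_ptr[ADDR_W-1:0]];   // registered read
                    rd_ptr  <= rd_ptr + 1'b1;
                end
            end
        end

    endmodule

Re-targeting a module like this to a different device family is then largely a matter of re-running synthesis, rather than regenerating and re-integrating a vendor core by hand.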

However, no single vendor can effectively supply all of the cores needed across every vertical market, particularly system-level cores such as processors, peripherals, and interfaces, or cores for narrower vertical applications. As with any effective design methodology, industry collaboration is a must in order to achieve both quality and breadth. EDA vendors must partner with IP providers to offer pre-validated flows that ensure both compatibility and high QoR. Project teams already face design, validation, and system integration challenges; the main purpose of leveraging third-party IP is to let them focus on the critical functionality that differentiates a design, and wrestling with compatibility and quality issues compromises that goal. A proven tool and IP flow provides the freedom to focus on real design issues while maintaining vendor neutrality.

Don’t Leave Vulnerabilities In Your Flow

FPGA designers need to think strategically about the impact, and the risk, of their approach to third-party IP. Like their colleagues in the ASIC design realm, FPGA designers must deal with the usual issues of IP quality, support, and integration. But an FPGA design house has to consider yet another factor: device portability. Because device flexibility and off-the-shelf cores are both essential to remaining competitive, the design house must strive to choose vendors that keep all possible options open. By choosing a solution that combines vendor-independent core generation technology with a network of vendor-independent IP providers, a design house moves one step closer to a truly target-neutral approach. On the other hand, a design team that attempts to create a vendor-independent methodology without addressing the EDA tools and the IP together risks leaving vulnerabilities in its flow.
