
IP Inexact

A Look at Life in the IP World

Intellectual property (IP) is, in our modern edition of the SoC design world, the equivalent of the opposite gender (in our modern equal-opportunity-exasperation world): you can’t live with it and you can’t live without it. The thought of bundling up the results of some great achievement to share with your peers (or, better yet, superiors) was once a nice one, something to do if you had some time. You could even spare someone a bunch of work that way. Or, if you were lucky, actually sell it.

There are, of course, a couple of problems with that theory. First of all, you never have time. But then, even if you did, once you put your grand oeuvre out for others to admire, nay, worship, you expect the congratulatory emails to come streaming in. But as the first one arrives and you eagerly open it, you find: there’s a problem, something isn’t working, can you please help figure out what’s wrong.

[Cue sound of massive ego deflation.]

The reality of reusing IP was a massive wet blanket that all but doused the struggling embers for years. But the fire was never put out, partly because, as much of a pain in the backside as it can be, IP isn’t a nicety anymore. You can’t waste time reinventing things that have already been reinvented. And the pressure is such that “I can do it better” ego trips are being deferred like that long sought-after vacation.

One of the initial challenges was that the IP business model was (and, in some cases, continues to be) difficult. But there are IP companies that actually make money – and they’re making money on the IP instead of using the IP as a teaser to get a consulting contract for a custom version of the IP that actually does what the customer wants. Working out the kinks in the system is no nicety for these companies: it’s survival.

So now that we’ve had a few years to try to figure things out, is IP any easier to acquire and use than it was five years ago? The short answer is, yes. But the short answer hides a lot of nuance. It’s daunting to see how many times the concept of “it depends” arises in discussions of IP. And not everyone sees the world the same way. Some companies can truthfully say that, practically speaking, there is no such thing as reuse in the real world. Meanwhile, an email I received challenged, with apparent anger and outrage, the notion that the IP world might be broken.

So it can be helpful to start wading through the diverse world of IP. In the course of trying to sort through it, a couple of topics stand out as candidates for future separate coverage: IP metadata and networks-on-a-chip (NoCs, which are a kind of IP for interconnecting IP on an SoC, as embodied by companies like Arteris and Sonics). We won’t address those topics here, deferring coverage instead to later editions. We also won’t delve into the IP business model, despite interesting nuggets there as well.

In the interests of further narrowing our scope, IP will, for the purposes of this article, be restricted to functionality that will end up as hardware on a chip (or used in the creation of such hardware). Specifically, we’re not going to talk about software IP.

Core competence

One of the main distinguishing features affecting the attitude of an IP vendor appears to be whether or not the IP they sell is central to the chip. When looking for IP that will be the core computing element for your SoC, your experience will be different than when looking for a peripheral core. And this is reflected in the way the IP vendor greets the day.

If you’re ARM or MIPS, you wake up in the morning knowing that, without you, nothing else matters. Everyone wants to hook up with you. You are first to be evaluated and first to be selected. All other decisions hinge on whether you’re selected. And once you’re in there for a few generations, well, you can sit back, a bit more relaxed, knowing that there’s a giant growing pile of legacy code that will work only on your core and that binds your customer ever tighter to you forever more.

In this world, you build ecosystems. You are the center from which radiate all manner of other cores, tools, and processes. And because you own this universe, you get to define the terms, call the shots. You are the master. You have that most coveted of all things: control (or at least the illusion of a close facsimile thereof). And if you work hard and do a good job defining requirements, then you can sit back in the knowledge that, by gum, the IP world is a pretty beautiful thing these days, especially compared to five years ago.

If you happen not to be at the center of this sphere of wonderfulness, well, you haven’t yet earned the right to welcome the day with quite the same measure of cockiness. Every day is a new day. You have to work your way into ecosystems. You have to fight for the first sale and execute well on it, because you’re going to have to fight for the second sale just like the first one.

Not that the peripheral IP companies are adopting a head-down tail-dragging omega-dog posture. There’s plenty of vim and vigor out there, with an emphasis on process and quality. Oh, and also quality. Those become part of the differentiating value proposition. This becomes a big part of the effort put forth by companies like Synopsys and Snowbush. Oh, and did I mention quality?

As a user of the IP, the concept of the core ecosystem can be both beneficial and limiting. In fact, it can be beneficial because it’s limiting. By remaining within the ecosystem, there’s a much better chance that the pieces you’re going to try to put together will play nicely together because, more than likely, someone else has already tried it and pronounced it good.

Moving outside the ecosystems provides you with more choice, but you rely more on the vendor’s claims that their IP will indeed work as promised. Some vendors will even let you withhold a final payment until everything is clicking along smoothly. But it’s important to remember that, in addition to being an indication of technical merit, belonging to an ecosystem is a business arrangement. Just because some IP isn’t part of an ecosystem doesn’t mean that the vendor tried to get in and the IP didn’t work.

Heck, ask any maitre d’ecosystem: ecosystems are complicated to manage, and companies often try to pare back the number of partners they have, resulting in perfectly good partner candidates being left out. That takes the onus off of the potential sponsor and puts it squarely on the IP company and you, since the ecosystem ringleader has withheld the imprimatur of approval.

Digital vs. Analog; Soft vs. Hard

One of the most critical distinctions determining the likelihood of success with IP reuse is whether it’s digital or analog. Which is almost another way of asking whether it’s soft or hard IP. While they’re not exactly the same thing, for practical purposes, when you get analog IP, you’re getting hardened IP. When you get digital IP, you want RTL, both so that it can be parameterized and so that you can have layout flexibility on your chip.
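To make the soft-IP point concrete, here’s a minimal sketch of a width-parameterized register stage. It’s written as SystemC purely for illustration – real deliverables are more often parameterized Verilog or VHDL, and this isn’t any particular vendor’s code:

```cpp
// Illustrative only: one soft description yields many hardware variants.
#include <systemc.h>

template <int WIDTH>                  // the knob a soft-IP user gets to turn
SC_MODULE(RegStage) {
    sc_in<bool>              clk;
    sc_in<sc_uint<WIDTH> >   d;
    sc_out<sc_uint<WIDTH> >  q;

    void tick() { q.write(d.read()); }    // simple registered transfer

    SC_CTOR(RegStage) {
        SC_METHOD(tick);
        sensitive << clk.pos();
    }
};

// The same source hardens to different widths on different projects:
//   RegStage<16> narrow("narrow");
//   RegStage<64> wide("wide");
```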

Once you’ve got hardened IP, your reuse options become limited, to say the least. Where it counts the most, on leading-edge technology nodes, you must proceed extremely carefully. As Virage’s Brani Buric points out, with SoC NREs now in the range of $70M (with as much as $6M or so of that being masks), there’s no room for hoping the IP will work. The concept of silicon-proven IP becomes critical. And silicon-proven doesn’t mean that it has been shown to work on some silicon: it has to mean that the IP has been successfully implemented on that specific process at that specific foundry. If it hasn’t, it may still work fine – but you have to go in eyes-open, knowing that your project is proving out the silicon for that process.

The challenge of implementing delicate IP on leading-edge nodes is such that TSMC has all but created a vertical-IP market with their Open Innovation Platform. They don’t operate this as an IP business; it’s more like a set of reference designs assembled by their own select ecosystem. But the specific reason it exists is that various applications require various pieces of touchy IP to work together, so these reference designs provide usable examples that have already been proven.

In addition, rather than lots of companies replicating the same low-level blocks, the ecosystem partners collaborate on those common blocks and focus competition on adding value at a higher level through the interconnection of the blocks. The results are platforms that account for issues in such diverse areas as design, packaging, and manufacturing.

The digital/analog, soft/hard divide gets dicier with some kinds of IP that bridge the two sides, like PCI Express. Referring to the OSI model (or some semblance thereof), the physical layer (PHY) requires careful use of things like differential pairs, phase-locked loops, and clock-data recovery. This is tricky stuff that shouldn’t be underestimated. The higher control layers, however, are really just logic (albeit complicated logic), and so a soft implementation is better.

In fact, the two disciplines are so different that there are instances of companies that provide only the PHY and others that provide the control layers. Or one company may provide both, but as different products that have to be stitched together. This has resulted in standard interfaces that give some measure of confidence that the two pieces will at least connect easily. In the case of PCI Express, it’s the PIPE interface that does the trick; similar interfaces have been set up for other standards as well.
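As a rough illustration of why such interfaces help, here’s a heavily simplified sketch of the controller-side view of a PIPE-style boundary. The signal names below do appear in the published PIPE specification, but widths, handshakes, and most of the signal list are omitted – this is a sketch, not anyone’s actual deliverable:

```cpp
// Simplified PIPE-style split: the controller deals in parallel data and
// status; the PHY handles the SerDes, PLLs, and clock-data recovery.
#include <cstdint>

struct PipeTx {            // controller -> PHY
    uint16_t TxData;       // parallel transmit data (width varies by config)
    bool     TxDataK;      // marks control (K) symbols vs. ordinary data
    uint8_t  PowerDown;    // requested PHY power state
};

struct PipeRx {            // PHY -> controller
    uint16_t RxData;       // recovered parallel receive data
    bool     RxDataK;
    bool     RxValid;      // symbol lock and valid-data indication
    bool     PhyStatus;    // flags completion of PHY operations
};
```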

This also provides for some interesting market jumps. For example, Snowbush was active in the PCI Express world and not in the USB world. But with USB SuperSpeed using a high-speed serial PHY, suddenly the expertise that they had developed with PCI Express’s high-speed serial PHY became relevant for USB – they didn’t have to go through the learning curve that traditional USB providers would require. They used this to jump into the USB fray. But it’s no free lunch: the incumbents would have the edge in control logic based on earlier USB generations.

Abstracting

One area of focus for some companies is abstracting the functionality of IP. The benefits include specifying the intent of a piece of IP independently of its actual implementation, describing high-level functionality in order to select which IP works best, and packaging up IP for easy reuse. The first and last of these are particularly useful for internal IP and start to blend into the larger ESL question.

Methodologies practiced by companies like CebaTech and Forte are typically discussed in an ESL context. But Forte sells tools to help companies reuse their own internal IP, and CebaTech uses proprietary technology internally to create IP for sale. Both rely on higher-level functionality descriptions – in Forte’s case, C++/SystemC; in CebaTech’s case, C (coupled with a SystemC testbench) – to make specifying the behavior of the IP more independent of the implementation. By moving the behavioral description up above the details of how it was done for one particular project, you can more easily retarget in a different manner when a new application with a different context arises.
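For a flavor of what such a description looks like, here’s a minimal untimed-behavior sketch in SystemC. It’s purely illustrative – not Forte’s or CebaTech’s actual input format – but it shows the idea: the code says what the block does, while pipelining and other micro-architectural choices are left to the tool:

```cpp
// Illustrative behavioral IP: accumulate a stream of input values.
#include <systemc.h>

SC_MODULE(Accumulator) {
    sc_in<bool>          clk;
    sc_in<bool>          reset;
    sc_in<sc_uint<16> >  din;
    sc_out<sc_uint<32> > sum;

    void behave() {
        sc_uint<32> acc = 0;     // state established on reset
        sum.write(0);
        wait();
        while (true) {
            acc += din.read();   // the "what": add each new input
            sum.write(acc);      // the "how" (pipelining, resource
            wait();              //   sharing) is left to the tool
        }
    }

    SC_CTOR(Accumulator) {
        SC_CTHREAD(behave, clk.pos());
        reset_signal_is(reset, true);
    }
};
```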

Design intent can also be a useful thing to encapsulate in the design so that, when it’s repurposed, any questions on why something was done can be answered easily. Jasper touts their behavioral indexing feature as a unique means of accomplishing this.

The other benefit of abstraction is the ability to make better high-level architectural choices early on. A given micro-architecture may or may not make sense for every possible incarnation of the design. This is part of Cadence’s C-to-Silicon effort, where a C description of functionality is turned into a logic implementation along with an appropriate micro-architecture that in essence hosts the logic.

Abstracting can also be useful in the context of deciding which block to use. In this case, you want to be able to indicate a particular piece of IP, deferring consideration of the interconnect nitty-gritty. More important at this stage are high-level issues like: which features are supported? What is the latency? The throughput? How much silicon will it require?
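A hypothetical sketch of the kind of record such a comparison might work from appears below; the field names are invented for illustration and don’t come from any real metadata standard:

```cpp
// Hypothetical high-level IP record: just enough to compare candidates
// before any interconnect nitty-gritty enters the picture.
#include <string>
#include <vector>

struct IpCandidate {
    std::string              name;             // e.g. a vendor's USB controller
    std::vector<std::string> features;         // optional features supported
    unsigned                 latency_cycles;   // typical transaction latency
    double                   throughput_gbps;  // sustained throughput
    double                   area_mm2;         // estimated area at the target node
};

// Pick the smallest candidate that still meets the throughput floor.
const IpCandidate* select(const std::vector<IpCandidate>& cands,
                          double min_gbps) {
    const IpCandidate* best = nullptr;
    for (const auto& ip : cands)
        if (ip.throughput_gbps >= min_gbps &&
            (best == nullptr || ip.area_mm2 < best->area_mm2))
            best = &ip;
    return best;
}
```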

Cadence has tried to incorporate some of these considerations in the Cadence Chip Planning System, which includes their internalized ChipEstimate capability. The catch is, as we’ll see when we discuss metadata, these capabilities require proprietary data and infrastructure and so aren’t available outside the Cadence domain.

Keeping Up the Quality

If there’s one issue that comes up over and over again with pretty much anyone you talk to, it’s the issue of – surprise – quality. When you buy someone else’s design, you consign a measure of your fate to their competence. And when you’re a designer who measures your own value by your own competence, well, entrusting that to someone else isn’t easy.

If you’re dealing with a new IP vendor for the first time, you place a bet that, when you open up the can of IP, it will be free of mold, botulism, E. coli, and that old pop duo Sam ‘n’ Ella. And taking that on faith is a tough thing to do when the stakes are high. So a number of efforts have been made to improve the quality (and the predictability of the quality) of IP.

Much of what contributes to a robust piece of logic comes from the methodology with which it’s designed and tested. That means that much of the quality boils down to how an IP vendor builds a product. This focus actually hasn’t been driven as much by the IP market as it has by the need for design reuse within large companies. Two companies in particular are known for comprehensive, intricate quality control systems that are available externally: Freescale, with their Semiconductor Reuse Standard (SRS), and NXP, with their CoReUse standard, which is actually marketed by IPExtreme.

These two approaches codify, in excruciating detail, the various steps that must be taken in order to attain a level of compliance. CoReUse, for example, has six different levels of blessing that it confers. The documentation for these sorts of standards is enormous because they are so specific and cover so much ground. Each new round of learning within one of these companies can potentially contribute new checklist items, so they’re also evolving.

Not every company can take on such a weighty process. And yet, the industry would benefit from some level of standardization: this is where the QIP standard comes in. This was spearheaded by the VSIA organization and turned over to IEEE. It is now being readied for final ballot this summer as IEEE 1734 under the continued chairmanship of Kathy Werner.

Both NXP and Freescale have participated heavily in this standard, as have others. But rather than trying to come up with some universal superset of already-giant processes, it’s kind of a meta-process: it has a series of requirements that things be done but doesn’t specify how you do them. For example, it says you should use a consistent signal-naming convention, but it doesn’t specify what that convention should be.
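To make that distinction concrete, a compliance check under such a meta-process might look like the sketch below. The “i_”/“o_” house rule here is entirely made up – QIP asks only that some consistent rule exist and be followed:

```cpp
// Illustrative check for one hypothetical house convention: input ports
// start with "i_", outputs with "o_". QIP mandates *a* convention,
// not this particular one.
#include <string>

bool followsHouseConvention(const std::string& signal, bool isInput) {
    const std::string prefix = isInput ? "i_" : "o_";
    return signal.rfind(prefix, 0) == 0;   // true if signal starts with prefix
}
```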

QIP (note that, in an apparent reversal of the English-language tendency to ignore letters when pronouncing words, in this case an invisible letter is inserted to pronounce it “quip” instead of reverting to the Arabic back-of-the-throat Q) covers a broad range of IP, including software IP, five kinds of hardened IP, digital soft IP, and verification IP (VIP). They are also looking at adding platform IP in the future.

Management of QIP was originally done by that most universal of EDA tools, Excel. However, the latest activity defines an XML schema to allow more formal tools to manage the process. In fact, one tool is already available: VIP Lane from Satin IP. It allows you to bring in the quality checks that are a part of your process, customize the reporting dashboards, and track progress on completing the necessary quality requirements as the product approaches completion.

In general, then, while there is pretty universal agreement that things look much better today than they did a few years ago, challenges remain. The area that will probably never become completely plug-and-play is the hardened-IP-on-latest-node space. By definition, such IP is blazing new ground and will show the way for the IP that follows. Outside this arena, interconnectivity has benefited greatly from new standards and interfaces (some of which, like OCP-IP and IP-XACT, we’ll discuss in more detail in the future). And the real elephant in the room, quality, has merited efforts of its own. Much of this is still settling down, so continued improvement should be expected over the next couple of years.

It remains to be seen how these efforts will impact the IP market landscape. Small companies tend to chafe at big standards as they try to get to market, so some of what’s happening here could make it harder on small companies. On the other hand, concerns about fly-by-night operations and poor quality may have hurt the chances of smaller companies as compared to the big guys, so having clearer rules of the road should improve the chances of those small guys that decide to keep churning on.

Gradually, gradually, the IP business – once viewed as unsustainable (that is, until the dot-com craziness defined a whole new level of unsustainable) – is firming up, and the fire that almost went out is providing reliable heat.

Links:

ARM

Arteris

Cadence C to Silicon

Cadence Chip Planning System

CebaTech

CoReUse (IPExtreme)

Forte

Jasper (Functional Qualification)

MIPS

QIP / IEEE P1734

Satin IP (VIP Lane)

Semiconductor Reuse Standards (Freescale)

Snowbush

Sonics

Synopsys

TSMC Open Innovation Platform (in DFM announcement)

Virage Logic

