feature article

IP Interface Standards

Or, I’m Sooooo Confused!

Standards are a tricky business. They require cooperation between competitors to be successful. They require companies with well-entrenched proprietary positions at the feeding trough to yield some of that advantage so that others may feed alongside them. There has to be an overarching good that can be achieved – especially for the companies having to cede pole position. “Kum-bah-yah” and “this is good for the universe and your karma” won’t get very far. Even if there’s a really really good reason, PEGS LAW* will conspire to upset the best standardization effort. Or at least try to undermine uptake of the standard.

So standards are sometimes viewed with suspicion. Many languish, unused, unloved. If you’re a new standard trying to get traction with someone who’s giving you the once-over, you’ve got about five seconds to establish relevance and credibility. This will typically happen somewhere on the home page of the standards body. If it takes too many clicks to figure out what a standard is about, it’s likely to remain unused, unloved.

If nothing else, there’s always one indelible black mark against any standard, no matter how well-meaning or well-conceived: it wasn’t invented here.

Given that setup, take the concept of a standard and toss it into the chaotic world of IP reuse. Now you’ve got yourself a real challenge. IP has resisted outright business viability; trying to standardize any part of it would presumably be the equivalent of putting a leash on a cat. (No, not one of those bizarre occasional caninoid cats that comes when called and follows behind you and takes to a leash. I mean a real cat that, when faced with limitations, hurls itself up and down and in and out and wails and moans and scratches and claws until, putting your arms at risk of amputation by a million instantaneous lacerations, you reach in and release the leash, at which point the cat calmly struts away, tail firmly in the air in an unmistakable display of what he or she thinks about you.)

Such dangers notwithstanding, there are a couple of standards out to ease the reuse of IP. One is the Open Core Protocol (OCP), under the aegis of the OCP-IP organization; the other is IP-XACT, developed by the SPIRIT Consortium (recently merged into Accellera). Let’s pretend that I’m a bright, budding, new engineer (I know, a stretch on all counts) on the hunt for an IP metadata standard. Going to the IP-XACT web page (and clicking to the “about” page) I find the following:

To accelerate the design of large System-on-Chip (SoC) solutions, the semiconductor industry needs a standard mechanism for describing and handling multi-sourced IP that enables automated design integration and configuration within multi-vendor tool flows. The SPIRIT Consortium’s founding companies are combining their long experience in IP development, supply, integration and electronic design automation (EDA) to deliver such a mechanism.

Sounds good enough; vague, but on a useful track. There’s some more boilerplate to read, but I’ll stop there to see what the OCP standard does. On its home page I find:

OCP-IP is dedicated to proliferating a common standard for intellectual property (IP) core interfaces, or sockets, that facilitate “plug and play” System-on-Chip (SoC) design.

Now… with the benefit of hindsight, these two statements mean very different things. But I don’t have hindsight yet (work with me here); I’m still trying to figure out which of these solves my problem. And, quite frankly, I’m scratching my head a bit, wondering how these two things are different. They both seem to be addressing some kind of reuse issue, dealing with interfaces to tools or designs or something. A good metadata standard would help define the interface of an IP block, which would help it interface to my design as well as helping it interface to my tools.

Poking around a bit more, I actually find a nice comparison table – apparently I’m not the only one who’s wondered what the difference was between these and other standards. On it, I find the mission statement for OCP-IP:

OCP-IP’s mission is to drive the OCP to become the most widely used and adopted socket interface for SoC design. OCP-IP provides the tools and services to its members that are necessary for convenient implementation, maintenance, and support of the standard soecket [sic] interface.

And then, for the SPIRIT Consortium, I read as its mission:

To establish a set of IP and tool integration standards that enable proliferation of IP reuse through design automation

Hmmm…. Still very vague. There does seem to be this “socket” thing that’s come up in both of the OCP descriptions, and the IP-XACT one keeps mentioning design automation. But I’m not quite sure what to do next: I’ve run smack dab into the tendency of standards organizations to cast wide nets with mission statements that are correct but vague enough to mean practically anything – until you know what they mean, at which point they’re not useful anymore. And what they specifically don’t seem to do well is to describe exactly what problem they’re trying to solve (in terms more specific than “making IP easier to reuse”).

We took a look at IP reuse issues before and noted that metadata plays into the efforts to improve reuse. So are OCP and IP-XACT two competing metadata standards? If you’re thinking that way, it gets even more confusing when you find that OCP-IP has been working with the SPIRIT Consortium to make the two standards work together. Which must mean they don’t compete.

So here’s the deal, after some sleuthing. These aren’t competing metadata standards; they are intended to solve different problems. Here is problem number 1:

I design SoCs, and I have to plug many pieces of IP into my infrastructure. Each piece of IP works differently, and so the interface into my system for each piece of IP must be independently created. If I change even the vendor for that piece of IP, then I may have to go redesign the interface. I would like one standard for interfacing with any IP block so that if I change vendor or even IP block (say, from PCI to USB), in theory, the blocks should just plug and play with my design and I have nothing to redo. (Except validation, and how hard can that be?)

Problem number 2, on the other hand, is as follows:

I’m trying to evaluate and use IP. But those darned IP vendors are very closed-mouthed about exactly what’s in their IP, and I can’t tell much about it. Even if I buy it, I don’t easily know what features it supports or what its programming model is or what the interconnect signals are. I would like a metadata standard that tells me lots of useful things about the pieces of IP I’m evaluating and using. Oh, and tools so I don’t have to read XML would be nice too.

Problem number 1 is addressed by OCP. It defines a “socket” with an interface and a protocol for which any IP could be designed. It defines how a piece of IP should actually work at the interface. Particular attention appears to be paid to the Network on Chip (NoC) concept (which we’ll examine in a future article) as a natural extension of the socket idea.
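The socket idea is easier to see with a sketch. Purely as an illustration (not taken from either standard’s text), here’s a toy Python model of an OCP-style request/accept handshake. The signal names (MCmd, MAddr, MData, SCmdAccept, SResp, SData) follow the OCP basic signal set, but the behavior is heavily simplified, and the little memory-slave is invented for this sketch:

```python
# Toy model of an OCP-style request/accept handshake between a master
# and a slave "socket". Signal names follow the OCP basic signal set;
# the behavior here is a simplification for illustration only.

IDLE, WR, RD = "IDLE", "WR", "RD"   # MCmd encodings (subset)
NULL, DVA = "NULL", "DVA"           # SResp encodings (subset)

class Slave:
    """A slave modeling a tiny memory behind an OCP-like socket."""
    def __init__(self):
        self.mem = {}

    def request(self, MCmd, MAddr, MData=None):
        # A real slave may withhold SCmdAccept to apply backpressure;
        # this toy slave always accepts in the same cycle.
        SCmdAccept = True
        if MCmd == WR:
            self.mem[MAddr] = MData
            return SCmdAccept, DVA, None
        if MCmd == RD:
            return SCmdAccept, DVA, self.mem.get(MAddr, 0)
        return SCmdAccept, NULL, None

slave = Slave()
slave.request(WR, 0x40, 0xCAFE)     # master issues a write
_, resp, data = slave.request(RD, 0x40)  # then reads it back
print(resp, hex(data))
```

Because the master-side logic sees only the socket, a conforming slave from a different vendor could be dropped in without redesigning the interface – which is exactly the problem-number-1 promise.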

Problem number 2 is addressed by IP-XACT. It provides an XML schema for a broad range of information about a piece of IP. It doesn’t say how the IP should work but rather provides a way of describing how it does work. And this standard is actually threatening to gain some traction. Tools, though, aren’t part of the standard; that ball remains on the field, awaiting some brave soul to pick it up and run with it.
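To get a feel for what that metadata looks like, here’s a schematic, cut-down IP-XACT-style fragment parsed with Python’s standard library. The spirit namespace and the vendor/library/name/version (VLNV) identification are genuine IP-XACT conventions; the uart_lite component and its port list are invented for this sketch, and the fragment is nowhere near schema-valid:

```python
import xml.etree.ElementTree as ET

# A schematic, cut-down IP-XACT-style component description. Real
# IP-XACT documents are governed by the full SPIRIT schema and carry
# far more detail; this fragment just shows the flavor of the VLNV
# identification and a port list.
NS = "http://www.spiritconsortium.org/XMLSchema/SPIRIT/1.4"
doc = f"""
<spirit:component xmlns:spirit="{NS}">
  <spirit:vendor>example.com</spirit:vendor>
  <spirit:library>peripherals</spirit:library>
  <spirit:name>uart_lite</spirit:name>
  <spirit:version>1.0</spirit:version>
  <spirit:model>
    <spirit:ports>
      <spirit:port><spirit:name>clk</spirit:name></spirit:port>
      <spirit:port><spirit:name>rx</spirit:name></spirit:port>
      <spirit:port><spirit:name>tx</spirit:name></spirit:port>
    </spirit:ports>
  </spirit:model>
</spirit:component>
"""

root = ET.fromstring(doc)
s = {"spirit": NS}
vlnv = tuple(root.findtext(f"spirit:{t}", namespaces=s)
             for t in ("vendor", "library", "name", "version"))
ports = [p.text for p in root.findall(".//spirit:port/spirit:name", s)]
print(vlnv, ports)
```

This is also why the “tools so I don’t have to read XML” wish matters: the information is all there, but nobody wants to grovel through angle brackets by hand.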

So this means that a piece of IP that conforms to the OCP standard may also have an IP-XACT description associated with it. The two standards can indeed intersect.

There is one thing that makes these two types of standard very different. With a standard like OCP, interoperability is key – you can’t easily deviate from the standard without creating an issue. Options and proprietary extensions are possible but can be dicey – even requiring heavyweight things like feature negotiation if not done carefully. A metadata standard, on the other hand, can always provide some kind of escape feature for proprietary extensions when somebody thinks of something new to describe about their IP.
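IP-XACT, for instance, defines a vendorExtensions element for exactly this purpose. As a rough sketch (using an invented acme:powerHint extension inside a stripped-down, non-namespaced document), here’s how a tool can read the standard fields while carrying unknown extension content through untouched:

```python
import xml.etree.ElementTree as ET

# Sketch of the metadata "escape hatch": the document carries a
# vendorExtensions element holding proprietary data. A tool that does
# not understand the extension still reads the standard fields and can
# pass the extension along verbatim. The acme:powerHint content is
# invented for this illustration.
doc = """
<component>
  <name>dma_engine</name>
  <vendorExtensions>
    <acme:powerHint xmlns:acme="urn:acme">low-leakage</acme:powerHint>
  </vendorExtensions>
</component>
"""

root = ET.fromstring(doc)
name = root.findtext("name")         # standard field: always readable
ext = root.find("vendorExtensions")  # opaque blob to this tool
print(name, ext is not None)
```

An interface standard has no equivalent dodge: a proprietary signal on a socket either works with the other side or it doesn’t.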

Which is good, because one of the key stumbling blocks to standard acceptance – especially metadata standards – is that they can’t think of everything. This becomes even truer if you’ve got some targeted proprietary scheme that competes with a broader standard.

Cadence’s ChipEstimate arm and TSMC’s Unified IP Spec Program are both examples of proprietary metadata schemes, each of which involves information about IP. Cadence’s goal is estimation of the impact of a piece of IP in an SoC; Steve Glaser describes their data interchange format as being more or less like extended IP-XACT. TSMC is addressing the very specific problem of trying to enable early IP development of basic cores on new technologies; this involves both requirements specs as well as the characteristics of the resulting IP. Not inconsequentially, TSMC is the giant gorilla in its space and has a lot of leverage on its own; Cadence is also no small potatoes, and ChipEstimate has been around for a while, so it carries its own legacy. Both are classic situations where standards find themselves at a disadvantage.

So, putting my newbie engineer hat back on, what do I do? Well, since I’m looking for metadata, IP-XACT is what I want unless I’m specifically looking to do something addressed by a proprietary standard. And even then, it might be useful to poke them and ask when they’re going to support IP-XACT. Yes, I may find that there’s something I want to do that isn’t covered by the standard, but it’s easier to extend a standard to add 5% new functionality than it is to come up with 100% of a new proprietary format to address the 5% missing stuff.

Although… you know… as I think about it, it occurs to me… that standard wasn’t invented here…

* Pride, envy, gluttony, sloth, lust, avarice, and wrath. Yer basic human characteristics. Plus or minus the guilt or shame, according to your preference.

