I’m a failure.
Utter and complete.
You see, we journalists are supposed to stir things up, get someone’s dander up, rouse some rabble. And panels can be the best way to do this. Get a group of spirited experts together, whisper to them what the others said about their mothers, and let nature take its course.
It’s all good fun. For us. And everyone else gets over it. Eventually. And it’s not even hard.
So a recent discussion I was privileged to conduct can, by these standards, only be considered an epic fail. Granted, this wasn’t a public panel, so any dramatics would have been of value only to me; thus the consequences of my failure are less dire. But only by degrees.
The Atrenta team recently convened a collection of minds to discuss IP quality. And, while everyone brought a different perspective, there was general agreement around the table on the points made.
I cringe now as I think about it. I don’t think I’ll ever completely erase the horror of so much consensus from my mind.
They say the first step towards healing is to face your failures and account for them fully. So, in that spirit, I stand before you and make public my shame. I offer up a full accounting of the whole dreadful affair and only hope that my pitiful efforts can serve to salvage a mere sliver of your respect.
It all started in an innocent-looking conference room. The table was manned by representatives from four companies: Ralph Morgan, Synopsys’s VP of Engineering for digital DesignWare; Piyush Sancheti, Atrenta’s Sr. Director of Business Development; Steve Roddy, Tensilica’s VP of Marketing and Business Development; and Jim McCanny, CEO of Altos Design Automation. This gave us viewpoints from an IP division of a larger EDA company, another tools company, a processor-centric IP company, and a company that creates IP models for use in verification.
Each came ready to talk about IP quality: what’s good, bad, and indifferent. I probably didn’t set the right expectations; there was no microphone dangling from the ceiling into which I could intone the customary summons to rumble. And so ensued a horrifyingly pleasant conversation that started with definitions of what characteristics comprise IP quality.
What is quality, anyway?
There are numerous potential contributors to the general category of “quality.” Asked in such a setting, the first guy answering gets to make all the obvious points; the poor followers are then left to find something else to add other than, “Yeah, what he said.” Ralph was the lucky first-off, and he put specific focus on interfaces: how easy is it to integrate the IP into the rest of the design?
He also raised a less-expected issue: tool flow testing. The reality is that simulators have bugs, and when you try to verify the same IP on different simulators, you may run into trouble. IP shipped without such cross-tool testing tends to give the appearance of poor quality when it’s really tripping over bugs in the rest of the toolchain.
Piyush agreed and then raised the issue of specific metrics that should be met, along with the practical matter of how software interacts with the IP – specifically, the ability to use and map software registers in the IP. He also pointed out that the boundary between analog and digital portions of IP tends to be an area for particular concern.
Steve put a different nuance on the overall way to judge quality: how did the IP affect the overall project? Did it end up costing less or more to use the IP? How easy is it to model and evaluate the IP prior to purchase? How was software integration affected, including such considerations as driver availability or development and virtual prototype environment support? Unfortunately, this shift didn’t involve disagreement with any of the prior points made.
Finally, Jim, from a company that has to deal with the daily down-and-dirty nitty gritty of IP at the physical level, spoke of the many levels at which quality has to be considered and of the unfortunate fact that much of it is very hard to quantify. His bottom line: “To be able to just take IP and not know anything about it and plug it in is a Holy Grail; it’s not there and I don’t think it will ever be there.”
Delegating expertise
One of the main benefits of IP is that you can spare your development team the expense and agony of having to learn in detail many complex standards and protocols: you can buy that expertise from someone else instead. But there’s a catch: if you’re going to evaluate whether someone did a good job implementing the dictates of a thousand-page spec, then you have to have read and understood those thousand pages as well. Seems to defeat the purpose of having someone else take the load off of you.
So how do you evaluate IP quality without practically doing the work yourself? Opinions varied slightly here. Steve thought that the vendor should be held accountable for the testing, letting the user focus on the external interface rather than the internals. Jim felt that, even if the functionality is solid, you still have to check out things like performance and power in detail. This may sound simpler than functional testing, but many of these protocols are extremely complex, with lots of corner cases.
Helping illustrate the complexity, Ralph referred to the work that Synopsys is doing on USB SuperSpeed. They got it working in an FPGA over a year ago. They’ve got 70 guys working on it with no weekend or vacation breaks, and there’s still a lot more to do. The older USB standards are basically buried in the new one in the same way that an atom bomb serves as a mere detonator for a hydrogen bomb; the number of traffic scenarios they have to prove out went from 694 for USB 2.0 to 13,248 for SuperSpeed.
And customers will probably want to verify all of that. Scary thought.
All agreed that tools can help, but that they only go so far. Piyush and Steve pointed out that the critical test is to put the IP through its paces in the actual system environment that it will call home once complete. This is where the unexpected cases are most likely to arise.
Ralph put a different spin on it, suggesting that quality perception is really about the organization creating the IP. The first time you work with a company, you will likely end up wading very far into the details. This scrutiny is as much about testing the team as it is about testing the IP. The next time, assuming the first project went well, you might back off slightly. After a few projects, you can relax a bit and stop meddling so much.
And, of course, the usual cautionary note: this is all harder for analog than it is for digital.
Can standards stand in?
When people don’t agree and there is inconsistency in the world, journalists aren’t the only ones who win: standards organizations win too. Although they really win only when the chaos is tamed and a standard (one that people will actually follow) is issued; journalists would just as soon the debate go on endlessly.
So are there areas where standardization can help? That’s a tough one, since we’re talking about something with a really squishy definition. It’s hard to measure a blob with a tape measure.
There’s one effort that has been on EDA companies’ wish lists for years: an encryption standard (IEEE P1735). There’s been a general concern in the past about too much transparency in IP. It has not gone unnoticed that, in certain circumstances, once you deliver one copy of source code to one customer, well, let’s just say you’ve pretty much covered that market. Encryption means that an IP provider might be able to get paid for each copy.
But it turns out that, for IP that’s going into an SoC, encryption really isn’t necessary. If you’re using IP in an FPGA and you download some black-market copy from some questionable website, you don’t risk much. But with an SoC, it’s very unlikely that you’re going to risk your multi-million-dollar mask set just to save some tens of thousands of dollars on IP. In fact, what if a competitor is deliberately suckering you into trying some flimsy IP that looks cheap but will never work? So, for example, Synopsys and Tensilica generally deliver their source code in the clear, and that’s been working fine for them, which makes encryption that much less relevant.
The other attempts at standards are the QIP standard (IEEE 1734) and the IP Ecosystem Tool Suite from the Global Semiconductor Alliance (GSA). We talked about the former last year, and it’s had more time to get traction since then. But in fact it’s not being picked up very aggressively. Few customers ask for it, and, when they do, it’s almost as if they just want to know that you’ve heard of the standard and can provide the checklist; they’re less interested in the actual contents of the checklist. Ralph said they’ve had roughly 11 requests over the last five years. Steve has rarely been asked about it.
So what’s the problem with it? Piyush noted that it’s simply not enforceable, so it’s less effective. The approach is not to standardize how quality is assured, but rather to establish whether the company has certain processes (that may or may not be effective). Ralph observed that many companies have their own internal systems that go beyond QIP, so they just sort of skip over it. Steve was concerned about its hardware-centricity, an obvious shortcoming for processor IP.
Jim pointed to the fact that it’s all about checking off items on a list, with no nuance. “Did you tick the box well or poorly?” Ralph gave an example of the problem with over-simplifying complex issues. He cited a typical question: “It asks, ‘Do you do constrained random stimulus?’ as a yes-or-no answer, and it’s not a yes-or-no answer; that’s a day-long discussion…” There’s no check box for “it depends.”
As to the GSA initiative, Piyush questioned whether the organization sees it as part of their core mission. And, again, it lacks the force of a mandate since it’s a questionnaire “with the purpose of aiding the hard IP integrator/evaluator in assessing risks when evaluating… [IP] vendors and their IP…” They distinguish themselves from QIP by focusing on hard IP where they see QIP focusing on soft IP.
So what do you want?
In a last-ditch attempt to resurrect some dignity before leaving the room, I asked them all what they would like in order to make things better, hoping for a good tussle over priorities. But again, everyone politely agreed with everyone else, even though they each had their own specific points to make.
Ralph’s wish was for more transparency; the IP vendor has to be an extension of the design team. Jim’s wish was for more modeling transparency from IP vendors, in particular with respect to how decisions were made when creating the IP.
Steve echoed the need for model transparency, with an emphasis on system modeling and physical effects. Even though you may have SystemC models from all the vendors, they’re all different and can’t be compared. “There’s not an effective way that the IP vendors can slug it out on the playing field because the rules… are not cleanly established.”
Piyush’s wish was for standards. “Somebody’s got to drive this because we all agree that quality is important and that transparency between the consumer and supplier is important; now we just need a common language.”
And there it was. Not only had no blood been drawn, they barely raised a sweat. Just the rosy glow of empathy and goodwill. Ugh. How do I explain that to Kevin? As they shook hands, I silently shook my head. And realized that some serious atonement was going to be necessary.
And so here it is, my weaknesses and vulnerabilities laid bare for all to see. May they be understood so that I can live to attempt once more to stir the pot.
And what’s the upshot of all this? It’s hard to boil down. There are clearly numerous elements that constitute quality, some measurable, most not. Attempts to define the rules of the road have fallen woefully short. Companies are more or less doing what they have to do to get their own individual needs met, and we’re all gradually getting better at this as we go along.
The good news is: IP quality will clearly yield more material in the future.