feature article

Other People’s Code

You down with OPC?

Ownership is a big thing for an engineer. When you’re done with a project, you can stand back and state proudly, “I made that!”

Well, it used to be that simple. Perhaps now it’s more like, “See your TV? Well, it needs backlights to work and those backlights are divided into zones and something has to decide how to light the zones to keep power usage down and there’s a big chip that controls a lot of this stuff and a portion of the chip handles the zones and that portion has to talk to the rest of the chip over a complex interface, and that interface is really really important for the picture to look good. So, that interface? I made that! OK, I didn’t actually make it, but I designed it!”

Ownership is good. If you’re going to put your name on it, you want to make sure it’s going to be good. Stated conversely, if you want it done right, do it yourself.

Of course, ownership has a darker side, typically characterized by the misleading phrase “NIH” – “Not Invented Here.” Which is misleading because, at best, it typically means, “Not Re-invented Here.” Or, being completely truthful, “Not Done By Me.” Such thinking can go beyond pride of ownership to preservation of employment.

But NDBM is an absurd luxury these days; not even Not Done By Us is possible. Every SoC design will have code from outside the team. We typically lump this into the topic of IP, which we’ve covered a few times in this space. But it’s more complicated than that.

There are actually three different categories of Other People’s Code: IP (which I’ll divide into three categories in a minute), legacy code, and open-source code. And, theoretically, these can apply to any code, whether it expresses something that will end up in hardware or software.

IP is typically divided into two camps, design IP and verification IP. But it can actually be divided three ways: design, verification, and modeling. Modeling IP is also for verification, but it’s for verifying a higher-level architecture. The “verification” IP in this ontology refers to IP that’s used to verify an implementation of the same function. For example, if you buy some IP from one vendor and want to do your own quality-control checking on it, you would obtain independent verification IP to do the testing. On the other hand, if you want a more abstract model of the IP you’ve purchased for architectural work, then you can generate modeling IP from the design IP itself.

Such architectural models are often available from IP vendors themselves, but, assuming they’re derived from the implementation, they can’t really be used to test the implementation. Any bugs in the implementation will end up in the model.

Carbon Design Systems announced the Carbon IP Exchange a couple months ago, which, at first glance, sounded like another attempt at an IP marketplace or clearinghouse. But, in fact, it’s a place to go get modeling IP. In some cases, they’re just redistributing someone else’s models; in other cases, they’re generating C-level models from the RTL design IP (in the case of ARM, they do both of these); and in some cases, they wrap highly configurable IP in a GUI that allows selection of parameters.

That this isn’t verification IP for the purpose of checking out implementations is clear for two reasons: some of the models derive from the implementations, and, in the cases where the model is being configured, there is no link to connect the model configuration to the actual implementation configuration (because, according to Bill Neifert, their CTO, customers haven’t requested that). These both point to the models being used architecturally, not as implementation verification IP.

Legacy code is often simply considered to be internally generated IP, and companies with well-structured IP acquisition and integration mechanisms allow groups from one division to “publish” their IP for consumption elsewhere in the company. But there’s a critical difference with legacy code, especially when it’s not well managed: while commercial IP has an owner (it’s just not you), legacy code usually has none. It typically wasn’t designed as an independent product, which means it won’t have undergone the quality-control checks that formal IP would (or should) undergo.

So, while commercial IP is hopefully designed to be a drop-in, with no knowledge of the internal workings required (or even allowed), legacy code typically has to be made your own. Someone else may have written it, but buck-stoppage transfers to you when you use the code. So you end up having to study it to make sure that, in the design review (or, heaven forefend, failure analysis meeting), you can stand up for the code. It’s like taking responsibility for the guy you brought to the party – you want to know that he’s not some out-of-control meth-head that’s going to make you look bad in the end.

Of course, you don’t want to look bad with a poor commercial IP choice either, but at least there are due-diligence measures you can use there, and, most importantly, there’s someone still around to blame.

Legacy code would almost seem the most problematic source of code. It’s “ours” (the next best thing to “mine”), so it automatically gains tribal acceptance. It’s also free. And, importantly, it carries forward decisions and conventions previously agreed to, increasing the likelihood that new equipment will work in a manner consistent with old equipment. So it will get used; restarting every design from scratch is not an option. So, absent good internal-IP quality controls, designers will have to be more careful with legacy code than they will with commercial IP.

Finally, we have open-source code. And here we need to make a more careful distinction between hardware and software. The concept of open-source hardware is something of a non-starter for most designers. IPextreme’s Warren Savage says simply that, for hardware, there is “… zero chance of open source.” The reason is that open-source hardware IP has none of the benefits of legacy IP and all of the downsides. While you can patch software bugs, you’re not going to risk a mask change simply for the sake of saving a few bucks on design. So open-source hardware code is pretty much dead at present.

Open-source software, on the other hand, is alive and well. It’s really astonishing the number of algorithms and protocols for which open-source implementations exist. Here again, quality should be a concern, but since you’re getting the source code, you can take ownership much the way you would with legacy code.

The one gotcha that remains with open-source software is the licensing. With IP, licensing is explicitly negotiated with payment terms, and companies are used to implementing whatever tracking mechanisms are required to ensure that royalty obligations, or whatever other terms might exist, are met. There is no such negotiation or contract (other than the click-through, whose acceptance can usually be summarized as, “Yeah, yeah, whatever… <click>”) when open-source code is downloaded.

And a variety of license styles may apply to any given piece of code. Some, like BSD licenses, are considered more commercially friendly; others, like the GPL, are considered less so, because they may require that improvements you make to the code be made available to others under the same terms rather than kept proprietary. Many a manager worries about complex code being “contaminated” by code whose license is inconsistent with the planned deployment.

This is an area that Protecode is trying to address. They have a database of code and licensing that they can use to scan a codebase to see if it contains any licensing issues. They also provide tools and methodologies for managing the licensing obligations across large, complex projects. So, while quality and ownership issues remain, there are attempts to manage the potential legal surprises that open-source code can yield.
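The mechanics of that kind of scan can be illustrated with a toy sketch. This is not Protecode’s actual tooling, which matches code against a database of known sources; the sketch below simply assumes files carry the now-common SPDX-License-Identifier tags in their headers and flags any file whose declared license isn’t on an approved list:

```python
# Toy license scan (illustrative only, not a real compliance tool):
# walk a source tree, read SPDX license tags from file headers, and
# flag any file whose declared license isn't on an approved list.
import os
import re

SPDX_RE = re.compile(r"SPDX-License-Identifier:\s*([\w.\-+]+)")

def scan_tree(root, approved=("MIT", "BSD-3-Clause", "Apache-2.0")):
    """Return {path: license} for files declaring a non-approved license."""
    flagged = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    head = f.read(4096)  # license tags live near the top
            except OSError:
                continue  # unreadable file; a real tool would report it
            match = SPDX_RE.search(head)
            if match and match.group(1) not in approved:
                flagged[path] = match.group(1)
    return flagged
```

A real audit is much harder than this, of course: most problem code carries no tag at all, which is exactly why commercial tools fingerprint the code itself rather than trust the headers.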

So, from an ownership standpoint, you really have two kinds of code: your own code and OPC. IP remains fully OPC since an owner remains for that code: the provider. It’s a bit more tenuous if you’re doing your own implementation and just purchasing verification IP to make sure it’s right – since the actual code to be shipped is yours. But, regardless, someone’s an owner.

Legacy and open-source code, on the other hand, are really orphans. While they had parents, the parents have died or run away or are in rehab. So if you’re going to adopt them, you have to make them feel like part of the family.

Either someone else needs to own it or you need to make it your own. That’s the only way to be down with OPC.

 

More info:

Protecode

Carbon IP Exchange

IPextreme

 
