
Other People’s Code

You down with OPC?

Ownership is a big thing for an engineer. When you’re done with a project, you can stand back and state proudly, “I made that!”

Well, it used to be that simple. Perhaps now it’s more like, “See your TV? Well, it needs backlights to work and those backlights are divided into zones and something has to decide how to light the zones to keep power usage down and there’s a big chip that controls a lot of this stuff and a portion of the chip handles the zones and that portion has to talk to the rest of the chip over a complex interface, and that interface is really really important for the picture to look good. So, that interface? I made that! OK, I didn’t actually make it, but I designed it!”

Ownership is good. If you’re going to put your name on it, you want to make sure it’s done well. Put another way: if you want it done right, do it yourself.

Of course, ownership has a darker side, typically characterized by the misleading phrase “NIH” – “Not Invented Here.” Which is misleading because, at best, it typically means, “Not Re-invented Here.” Or, being completely truthful, “Not Done By Me.” Such thinking can go beyond pride of ownership to preservation of employment.

But NDBM is an absurd luxury these days; not even Not Done By Us is possible. Every SoC design will have code from outside the team. We typically lump this into the topic of IP, which we’ve covered a few times in this space. But it’s more complicated than that.

There are actually three different categories of Other People’s Code: IP (which I’ll divide into three categories in a minute), legacy code, and open-source code. And, theoretically, these can apply to any code, whether it expresses something that will end up in hardware or software.

IP is typically divided into two camps, design IP and verification IP. But it can actually be divided three ways: design, verification, and modeling. Modeling IP is also for verification, but it’s for verifying a higher-level architecture. The “verification” IP in this ontology refers to IP that’s used to verify an implementation of the same function. For example, if you buy some IP from one vendor and want to do your own quality-control checking on it, you would obtain independent verification IP to do the testing. On the other hand, if you want a more abstract model of the IP you’ve purchased for architectural work, then you can generate modeling IP from the design IP itself.

Such architectural models are often available from IP vendors themselves, but, assuming they’re derived from the implementation, they can’t really be used to test the implementation. Any bugs in the implementation will end up in the model.
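To make that concrete, here is a small illustrative sketch (in Python, with entirely invented names and an invented 8-bit adder) of why a model derived from the implementation can’t stand in for independent verification IP: any bug baked into the implementation is faithfully reproduced by the derived model, while a model written independently from the spec will catch it.

# Illustrative sketch only; the function names and the adder are hypothetical.

def adder_impl(a, b):
    """The 'design IP': an adder implementation with a deliberate bug."""
    return (a + b) & 0x7F   # BUG: masks to 7 bits instead of 8

def derived_model(a, b):
    """'Modeling IP' generated from the implementation: it inherits the bug."""
    return adder_impl(a, b)

def independent_verification_ip(a, b):
    """Independent 'verification IP': written from the spec, not from the RTL."""
    return (a + b) & 0xFF

a, b = 100, 60
print(derived_model(a, b) == adder_impl(a, b))                 # True: the bug goes unnoticed
print(independent_verification_ip(a, b) == adder_impl(a, b))   # False: the bug is caught

The derived model agrees with the buggy implementation, which is exactly why it’s useful for architectural work and useless as a check on correctness.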

Carbon Design Systems announced the Carbon IP Exchange a couple months ago, which, at first glance, sounded like another attempt at an IP marketplace or clearinghouse. But, in fact, it’s a place to go get modeling IP. In some cases, they’re just redistributing someone else’s models; in other cases, they’re generating C-level models from the RTL design IP (in the case of ARM, they do both of these); and in some cases, they wrap highly configurable IP in a GUI that allows selection of parameters.

That this isn’t verification IP for the purpose of checking out implementations is clear for two reasons: some of the models derive from the implementations, and, in the cases where the model is being configured, there is no link to connect the model configuration to the actual implementation configuration (because, according to Bill Neifert, their CTO, customers haven’t requested that). These both point to the models being used architecturally, not as implementation verification IP.

Legacy code is often simply considered to be internally generated IP, and companies with well-structured IP acquisition and integration mechanisms allow groups from one division to “publish” their IP for consumption elsewhere in the company. But there’s a critical difference with legacy code, especially when it’s not well managed: while commercial IP has an owner (it’s just not you), legacy code usually has none. It typically wasn’t designed as an independent product, so it won’t have undergone the quality-control measures that formal IP would (or should) undergo.

So, while commercial IP is hopefully designed to be a drop-in, with no knowledge of the internal workings required (or even allowed), legacy code typically has to be made your own. Someone else may have written it, but buck-stoppage transfers to you when you use the code. So you end up having to study it to make sure that, in the design review (or, heaven forfend, the failure-analysis meeting), you can stand up for the code. It’s like taking responsibility for the guy you brought to the party – you want to know he’s not some out-of-control meth-head who’s going to make you look bad in the end.

Of course, you don’t want to look bad with a poor commercial IP choice either, but at least there are due-diligence measures you can use there, and, most importantly, there’s someone still around to blame.

Legacy code would almost seem to be the most problematic source of code. It’s “ours” (the next best thing to “mine”), so it automatically gains tribal acceptance. It’s also free. And, importantly, it carries forward decisions and conventions previously agreed to, increasing the likelihood that new equipment will work in a manner consistent with old equipment. So it will get used; restarting every design from scratch is not an option. That means that, absent good internal-IP quality controls, designers will have to be more careful with legacy code than with commercial IP.

Finally, we have open-source code. And here we need to make a more careful distinction between hardware and software. The concept of open-source hardware is something of a non-starter for most designers. IPextreme’s Warren Savage says simply that, for hardware, there is “… zero chance of open source.” The reason is that open-source IP has none of the benefits of legacy IP and all of the downsides. While you can patch software bugs, you’re not going to risk a mask change simply for the sake of saving a few bucks on design. So open-source hardware code is pretty much dead at present.

Open-source software, on the other hand, is alive and well. It’s really astonishing the number of algorithms and protocols for which open-source implementations exist. Here again, quality should be a concern, but since you’re getting the source code, you can take ownership much the way you would with legacy code.

The one gotcha that remains with open-source software is the licensing. With IP, licensing is explicitly negotiated with payment terms, and companies are used to implementing whatever tracking mechanisms are required to ensure that royalty obligations, or whatever other terms might exist, are met. There is no such negotiation or contract (other than the click-through, whose signing can usually be summarized as, “Yeah, yeah, whatever… <click>”) when open-source code is downloaded.

And a variety of license styles may apply to any given piece of code. Some, like BSD licensing, are considered more commercially friendly; others, like GPL, are considered less so because they may require that improvements you make to the code, including ones you’d prefer to keep proprietary, be made available to others for free. Many a manager worries about complex code being “contaminated” by code whose license is inconsistent with the planned code deployment.

This is an area that Protecode is trying to address. They maintain a database of code and associated licenses that they can use to scan a codebase for licensing issues. They also provide tools and methodologies for managing licensing obligations across large, complex projects. So, while quality and ownership issues remain, there are attempts to manage the potential legal surprises that open-source code can yield.
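As a rough idea of what that kind of scanning looks like (a minimal sketch, not Protecode’s actual tooling; the policy lists, the “src” path, and the assumption that files carry SPDX license tags are all inventions for illustration), a script can walk a source tree and bucket files by the license identifier found in each:

# Minimal sketch; policy lists and paths are assumptions, not a real product.
import re
from pathlib import Path

PERMISSIVE = {"MIT", "BSD-2-Clause", "BSD-3-Clause", "Apache-2.0"}
COPYLEFT = {"GPL-2.0-only", "GPL-2.0-or-later", "GPL-3.0-only", "GPL-3.0-or-later"}
SPDX_RE = re.compile(r"SPDX-License-Identifier:\s*([\w.+-]+)")

def scan_codebase(root):
    """Walk a source tree and bucket files by the SPDX license tag found in each."""
    findings = {"permissive": [], "copyleft": [], "unknown": []}
    for path in Path(root).rglob("*.[ch]"):
        match = SPDX_RE.search(path.read_text(errors="ignore"))
        tag = match.group(1) if match else None
        if tag in COPYLEFT:
            findings["copyleft"].append(str(path))
        elif tag in PERMISSIVE:
            findings["permissive"].append(str(path))
        else:
            findings["unknown"].append(str(path))
    return findings

for bucket, files in scan_codebase("src").items():
    print(f"{bucket}: {len(files)} file(s)")

A real tool has to match code fragments against a database rather than just trust headers, but the flag-and-bucket workflow is the same basic idea.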

So, from an ownership standpoint, you really have two kinds of code: your own code and OPC. Commercial IP remains fully OPC, since that code still has an owner: the provider. It’s a bit more tenuous if you’re doing your own implementation and just purchasing verification IP to make sure it’s right, since the actual code to be shipped is yours. But, regardless, someone’s an owner.

Legacy and open-source code, on the other hand, are really orphans. While they had parents, the parents have died or run away or are in rehab. So if you’re going to adopt them, you have to make them feel like part of the family.

Either someone else needs to own it or you need to make it your own. That’s the only way to be down with OPC.

 

More info:

Protecode

Carbon IP Exchange

IPextreme

 
