
Should You Reuse RTL?


Ever since hardware description languages (HDLs) were first put into use to specify electronic designs, designers have recycled code. New insights into the use of these HDLs for design are gained by copying and modifying – with the requisite permissions of course – existing examples. By placing that code into the new designs, everybody saves time since designers do not needlessly re-invent an existing block of code. Someone spent significant time and energy to design that code block and other designers rightfully want to leverage that work. Until the entire design community, standards groups and EDA vendors can deliver a methodology and supporting tools that enable designers to create code for reuse in an automatic and efficient manner, the recycling of existing code will certainly continue. This recycling occurs over a whole spectrum:

  • Cutting & pasting a single routine or function
  • Copying existing simple blocks or modules into the new design
  • Copying large design blocks and building around them with new blocks
  • Building around an existing platform by adding a few new blocks

In all these cases, the recycled code was probably never written with reuse in mind. This implies that some work is involved in recycling it. The key to recycling is to quickly figure out how much work is required in order to decide whether or not recycling is practical.

Determine Design Integrity

Design integrity fundamentally means that all the code is complete and ready for simulation. If the code has successfully been used in a product, designers should have a higher level of comfort that the code is complete. However, when faced with a directory of data to recycle, they need to figure out what files are necessary and which ones can be discarded for their new project. Finally, they need to figure out the integrity of the remaining files.

Typically, the first step is to find the top level of the design in order to ascertain which files comprise the design description from the root of the design on down. Designers can then eliminate any extraneous files from the project, since extra files only add confusion: they can include design variants, experimental blocks and the output of design tools that are not required for the new project.
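As a rough illustration, locating the top level can often be mechanized: a module that is declared somewhere in the file set but never instantiated by another module is a top-level candidate. The Python sketch below uses simplistic regexes that ignore comments and many Verilog corner cases; the file and module names are hypothetical.

```python
import re

# Heuristic: a top-level module is declared in the file set but never
# instantiated by any other module in that set.
MODULE_DECL = re.compile(r"^\s*module\s+(\w+)", re.MULTILINE)
INSTANCE = re.compile(r"^\s*(\w+)\s+\w+\s*\(", re.MULTILINE)

def find_top_modules(sources):
    """sources: mapping of filename -> Verilog source text."""
    declared, instantiated = set(), set()
    for text in sources.values():
        declared.update(MODULE_DECL.findall(text))
    for text in sources.values():
        for name in INSTANCE.findall(text):
            if name in declared:  # skip keywords and library primitives
                instantiated.add(name)
    return declared - instantiated

sources = {
    "top.v":  "module soc_top;\n  cpu u0 ();\n  uart u1 ();\nendmodule\n",
    "cpu.v":  "module cpu;\nendmodule\n",
    "uart.v": "module uart;\nendmodule\n",
}
print(find_top_modules(sources))  # {'soc_top'}
```

A real code base would need a proper language front-end, but even a crude pass like this narrows a directory of unknown files to a handful of root candidates.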

Dealing with Missing Files

On the surface, missing files can be a signal that the design is not worthy of recycling and this can shake a designer’s confidence that this code was ever used in production. However, deeper analysis is required:

  • Include or package files. Sometimes the missing files are simply include or package files that are defined once and reused on many projects. Many times these files are located in a central location for all project teams to use. If that is the case, then the missing files exist; they just need to be found and included in the project.

  • Referenced technology instances. Sometimes, the original designer references a technology-specific block or cell that is automatically inserted via a downstream tool or generator. In this case, the instance needs to be replaced with actual RTL code or an equivalent cell found in the new technology.

  • Referenced blocks. Other times, the referenced block is a placeholder for a block that was to be created later in the project. In this case, the block must be designed from scratch as part of the new project.
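The first of these cases, hunting down include or package files kept in a central location, is easy to automate. The sketch below assumes a hypothetical shared directory layout and simply returns the first match found for each missing file name:

```python
from pathlib import Path

def resolve_missing_includes(missing, search_paths):
    """Map each missing include/package file name to the first match
    found under the team's shared locations (hypothetical layout)."""
    resolved = {}
    for name in missing:
        for root in search_paths:
            hits = sorted(Path(root).rglob(name))
            if hits:
                resolved[name] = hits[0]
                break
    return resolved
```

Names that stay unresolved after the search are the ones that genuinely need attention: technology instances to replace or placeholder blocks to write.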

Dealing with Errors

Again, if designers see syntax or semantic errors, the tendency is to walk away from the code. However, deeper analysis is required here as well:

  • Missing files. If the design has missing files, tools can react badly and issue a slew of errors, some of which cascade throughout the code. For example, if a package file that defines a custom type is missing, then every use of that type can produce an error. In the best case, adding the missing file to the project eliminates all the errors; in the worst case, errors remain to be dealt with.

  • Tool settings. Quite often a tool requires a certain setting to get error-free compilation or runs. The most common example is a setting that tells the tool the dialect of the language.

  • Actual errors. These are errors that must be fixed in order to move forward with the design. Typically, syntax errors are easier to fix than semantic errors. Fixing a syntax error can mean simply adding or changing a character or correcting a typographical error. Semantic errors mean the code violates the Language Reference Manual (LRM) in its use of a construct.
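This triage can itself be sketched in code. The snippet below assumes a hypothetical error-log format and a known set of types defined by a missing package; errors explained by that single root cause are separated from genuine errors that must be debugged one by one:

```python
import re

# Hypothetical log format: "<file>:<line>: unknown type '<name>'"
UNKNOWN_TYPE = re.compile(r"unknown type '(\w+)'")

def triage(errors, missing_package_types):
    """Split raw tool errors into those explained by a missing package
    (one fix clears them all) and genuine errors to fix individually."""
    cascaded, genuine = [], []
    for line in errors:
        m = UNKNOWN_TYPE.search(line)
        if m and m.group(1) in missing_package_types:
            cascaded.append(line)
        else:
            genuine.append(line)
    return cascaded, genuine

errors = [
    "alu.vhd:12: unknown type 'word_t'",
    "alu.vhd:40: syntax error near 'begn'",
    "fifo.vhd:8: unknown type 'word_t'",
]
cascaded, genuine = triage(errors, {"word_t"})
print(len(cascaded), len(genuine))  # 2 1
```

Collapsing cascaded errors to their root cause turns an intimidating error list into a short, honest work estimate.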

Understanding the Design

Designers can understand what the code does by using a tool that allows visualization techniques. These tools can generate a spreadsheet or block diagram view outlining the top-level structure of the design. Designers can then visualize the lower level blocks as tables, block diagrams, finite state machines or flowcharts.

These tools can also convert code to graphics allowing designers to edit and generate VHDL or Verilog. So, if a missing design block is found, it can be quickly created.

Figure 1: Visual Representations of RTL Code Aid in Understanding the Design.

Determine Code Quality

Assuming that the code passed initial checks, quality is the next consideration. Designers can use linting tools to automate code quality assessment. Often, a company has a set of reuse rules, commonly based on the Reuse Methodology Manual. Separately, companies capture rules that represent collective knowledge about code that can cause design errors. Most tools will also score the code by applying a weight to each rule. Figure 2 shows an example of design quality results.

Figure 2: Code Quality Assessment Results Based on Design Rules.

In this example, the code scored an 89% rating. Most of the problems were due to Code Reuse violations. Code quality is typically categorized by:

  • Coding practices. Violations that indicate potential functional errors. Examples include incomplete sensitivity lists, unintended combinational feedback, unused signals and multiply-driven signals. These violations must be corrected in order to reuse the code.

  • Downstream tool issues. Violations that indicate the code will have problems in a specific tool in your tool chain. For example, the code contains elements that cannot be synthesized. These checks assume that your team has created rules based on experience with a particular set of downstream tools to be used in the project. Changing code to meet the needs of a downstream tool can be complex. It can mean rewriting big pieces of code and re-coding due to unsupported constructs.

  • Style. Violations due to how code physically appears in the file. These checks are typically focused around reuse, making the code easier for someone to read and therefore reuse. Examples include: using one declaration per line, comment rules, indentation and consistent FSM encoding technique. Checks for parameterization can fall into this category as well. Hard-coded values make the code less flexible. Thus, these violations don’t strictly need to be addressed, but they do give designers an indication of how easy it will be to understand and reuse the code for their project.
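A weighted scoring scheme of this kind can be sketched in a few lines. The rule names, weights and violation counts below are hypothetical, chosen so the score happens to match the 89% example above:

```python
# Hypothetical rule weights and violation counts; linting tools apply
# a similar weighted scheme to produce a single quality score.
RULES = {
    "incomplete_sensitivity_list": {"weight": 5, "violations": 0},
    "multiply_driven_signal":      {"weight": 5, "violations": 0},
    "non_synthesizable_construct": {"weight": 3, "violations": 1},
    "one_declaration_per_line":    {"weight": 1, "violations": 8},
}

def quality_score(rules, penalty_cap=100):
    """100% minus the weighted sum of violations, floored at 0."""
    penalty = sum(r["weight"] * r["violations"] for r in rules.values())
    return max(0, 100 - min(penalty, penalty_cap))

print(quality_score(RULES))  # 89
```

The value of such a score is not the number itself but the weighting: heavily weighted rules flag code that must be fixed, while lightly weighted style rules merely predict how hard the code will be to read.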

Establish Design Validity

Validating code, whether new or recycled, typically consumes the most time in a project. At this stage, designers are trying to ensure that the recycled code functions as expected. Verification is a complex process that can require many techniques and tools. For a quick evaluation of recycled code, designers need to establish some simple “markers” that help them judge the validity of the code without actually performing the full process:

  • Does a testbench exist? If not, one has to be created or generated, and that takes time. A testbench that can minimally confirm the code responds to input can take many hours; one that gives designers confidence the code is fully functional can take weeks. Designers can try to get by without a block-level testbench and rely on the high-level testbench for the entire project, but this is not advised.

  • Does the testbench contain many tests? If not, testing routines must be added to ensure acceptable coverage. Look for the use of more advanced techniques like monitors, scoreboards and randomized tests. These indicate a comprehensive testing strategy.

  • If code coverage can be analyzed, does the testbench exercise the entire design? Is the code specifically instrumented for functional coverage? If the coverage level is unacceptable, designers have to upgrade the testbench. The impact is minimal if the coverage is acceptable. The impact can be major if the coverage is not acceptable or if the appropriate code needs to be inserted to instrument for functional coverage.

  • Do the testbench and the design contain assertions? If not, debugging time increases. Assertions allow designers to pin-point the logic and the exact time a violation occurred. Without them, they typically have to work their way back from a bad output value to find out the logical cause for the error. The impact is minimal if the code has quality assertions. The impact can be major if the debugging time increases due to lack of assertions.

  • Are there simulation errors? If so, each one takes time to debug. The impact is related to the quantity of errors.

  • How much time do you actually spend simulating? If a designer is lucky, the code being recycled comes with a quality testbench. If the code is large and complex, running the simulation could take days, but the code needs to be simulated at least once, whether it is recycled or not. And as code is modified, it has to be re-simulated.
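These markers can be combined into a rough effort estimate. The sketch below assigns each failed marker a hypothetical cost in hours; the numbers are illustrative, not measured data:

```python
# Hedged sketch: each validity "marker" above either passes or fails,
# and each failure carries an illustrative remediation cost in hours.
MARKERS = [
    ("testbench_exists",    True,  40),  # build one from scratch
    ("many_tests",          False, 16),  # add tests for coverage
    ("coverage_acceptable", False, 24),  # upgrade the testbench
    ("assertions_present",  True,  12),  # instrument for debug
    ("simulation_clean",    False,  8),  # debug simulation errors
]

def estimated_effort_hours(markers):
    """Sum the remediation cost of every marker the code fails."""
    return sum(cost for _, passed, cost in markers if not passed)

print(estimated_effort_hours(MARKERS))  # 48
```

Comparing that total against the estimated cost of writing the block fresh is exactly the recycle-or-rewrite decision this article is about.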


The method of recycling code in order to accelerate design projects promises to remain a common practice. What is uncommon is a technique to determine, within a day’s time, whether or not a piece of code is worthy of reuse. This article outlines some basic guidelines to help validate whether the initial estimate of the time savings due to recycling code actually will be realized.

By establishing their own recycling methodology now, designers encourage a new way of thinking about the problem, one that allows them to identify the tools needed to automate the process. As they work with automation techniques and tools, designers will gravitate towards a design flow that naturally enables developing code for reuse. Eventually, they will find that they are designing new code for reuse as a side effect of employing practical recycling techniques.
