
Should You Reuse RTL?

Introduction

Ever since hardware description languages (HDLs) were first put into use to specify electronic designs, designers have recycled code. New insights into the use of these HDLs are gained by copying and modifying existing examples (with the requisite permissions, of course). Reusing that code in new designs saves everyone time, since designers do not needlessly re-invent an existing block. Someone spent significant time and energy creating that block, and other designers rightfully want to leverage that work. Until the entire design community, standards groups and EDA vendors can deliver a methodology and supporting tools that enable designers to create code for reuse in an automatic and efficient manner, the recycling of existing code will certainly continue. This recycling occurs over a whole spectrum:

  • Cutting & pasting a single routine or function
  • Copying existing simple blocks or modules into the new design
  • Copying large design blocks and building around them with new blocks
  • Building around an existing platform by adding a few new blocks

In all these cases, the recycled code probably was never written to be used “as-is” as reusable code, which implies that some work is involved in recycling it. The key to recycling is to quickly figure out how much work is required in order to decide whether it is practical.

Determine Design Integrity

Design integrity fundamentally means that all the code is complete and ready for simulation. If the code has successfully been used in a product, designers should have a higher level of comfort that it is complete. However, when faced with a directory of data to recycle, they need to determine which files are necessary and which ones can be discarded for the new project. Finally, they need to assess the integrity of the remaining files.

Typically, the first step is to find the top level of the design in order to ascertain which files comprise the design description from the root of the design on down. Designers can then eliminate any extraneous files from the project; extraneous files are the hardest to deal with, since they can include design variants, experimental blocks and the output of design tools that are not required for the new project.
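
As a simple illustration, consider a hypothetical top-level module (all names below are invented): the blocks it instantiates, and the blocks those instantiate in turn, define the set of files that must be kept, while anything not reachable from this root is a candidate for removal.

  // Hypothetical top level of a recycled design (names are illustrative only).
  // Everything reachable from here -- uart_rx, uart_tx and their sub-blocks --
  // must be present in the file set; files not in this hierarchy can likely go.
  module chip_top (
    input  wire clk,
    input  wire rst_n,
    input  wire rxd,
    output wire txd
  );
    wire [7:0] rx_data;
    wire       rx_valid;

    uart_rx u_rx (.clk(clk), .rst_n(rst_n), .rxd(rxd),
                  .data(rx_data), .valid(rx_valid));           // needs uart_rx.v
    uart_tx u_tx (.clk(clk), .rst_n(rst_n), .data(rx_data),
                  .send(rx_valid), .txd(txd));                  // needs uart_tx.v
  endmodule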

Dealing with Missing Files

On the surface, missing files can signal that the design is not worth recycling, and they can shake a designer’s confidence that the code was ever used in production. However, deeper analysis is required:

  • Include or package files. Sometimes the missing files are simply include or package files that are defined once and reused on many projects. Many times these files are located in a central location for all project teams to use. If that is the case, then the missing files exist; they just need to be found and included in the project. The sketch after this list shows how such references typically appear in the source.

  • Referenced technology instances. Sometimes, the original designer references a technology-specific block or cell that is automatically inserted via a downstream tool or generator. In this case, the instance needs to be replaced with actual RTL code or an equivalent cell found in the new technology.

  • Referenced blocks. Other times, the referenced block is a placeholder for a block that was to be created later in the project.
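
The first two situations are easy to spot in the source. The hedged sketch below (all file, package and macro names are invented) shows a block that includes a project-wide defines file, imports a shared package and instantiates a technology-specific RAM macro; the first two may simply live in a central directory, while the third must be replaced with inferable RTL or with the equivalent macro from the target technology.

  // Illustrative only -- the file, package and macro names are hypothetical.
  `include "project_defines.vh"            // often kept in a central include area

  module rx_buffer
    import pkt_types_pkg::*;               // shared package that may appear "missing"
  (
    input  wire        clk,
    input  wire        we,
    input  wire [9:0]  addr,
    input  wire [31:0] wdata,
    output wire [31:0] rdata
  );
    // Technology-specific single-port RAM macro from the original process.
    // For the new project this instance must be replaced with inferable RTL
    // or with the equivalent macro from the target technology library.
    sp_ram_1024x32 u_ram (
      .CLK (clk), .WE (we), .A (addr), .D (wdata), .Q (rdata)
    );
  endmodule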

Dealing with Errors

Again, if designers see syntax or semantic errors, the tendency is to walk away from the code. However, deeper analysis is required here as well:

  • Missing files. If the design has missing files, tools can react badly and issue a slew of errors, some of which cascade throughout the code. For example, if a package file that defines a custom type is missing, an error can result everywhere that type is used. In the best case, adding the missing file to the project eliminates all of the errors; in the worst case, some errors remain to be dealt with.

  • Tool settings. Quite often a tool requires a certain setting to compile or run without errors. The most common example is a setting that tells the tool which dialect of the language is being used.

  • Actual errors. These are errors that must be fixed in order to move forward with the design. Typically, syntax errors are easier to fix than semantic errors. Fixing a syntax error can be as simple as adding or changing a character or correcting a typographical error. A semantic error means that the code has violated the Language Reference Manual (LRM) in its use of a construct. The sketch after this list contrasts the two.
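
As a rough, invented illustration (fragments only, not complete modules), the first snippet below contains a syntax error fixed by a single character, while the second gets past the parser but violates the LRM's semantic rules.

  // Syntax error: the assignment is missing its semicolon -- a one-character fix.
  assign ready = valid & ~stall      // <-- add the missing ';'

  // Semantic error: 'count' is declared as a net (wire) but is assigned inside
  // an always block; the LRM requires a variable (reg or logic) for that.
  wire [3:0] count;
  always @(posedge clk)
    count <= count + 1;              // fix by declaring 'count' as reg [3:0]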

Understanding the Design

Designers can understand what the code does by using tools that provide visualization techniques. These tools can generate a spreadsheet or block-diagram view outlining the top-level structure of the design. Designers can then visualize the lower-level blocks as tables, block diagrams, finite state machines or flowcharts.

These tools can also convert code to graphics, allowing designers to edit the graphical view and generate VHDL or Verilog from it. So, if a missing design block is discovered, it can be quickly created.
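
As a concrete (and invented) example, the small handshake state machine below is exactly the kind of block such tools render as a state diagram, which is usually far quicker to grasp than the case statement itself.

  // Hypothetical handshake FSM -- the sort of block a visualization tool
  // displays as a state diagram rather than as a case statement.
  module handshake_fsm (
    input  wire clk,
    input  wire rst_n,
    input  wire start,
    input  wire ack,
    output wire req
  );
    localparam IDLE = 2'd0, REQ = 2'd1, WAIT_ACK = 2'd2;
    reg [1:0] state, next_state;

    always @(*) begin
      next_state = state;
      case (state)
        IDLE:     if (start) next_state = REQ;
        REQ:                 next_state = WAIT_ACK;
        WAIT_ACK: if (ack)   next_state = IDLE;
        default:             next_state = IDLE;
      endcase
    end

    always @(posedge clk or negedge rst_n)
      if (!rst_n) state <= IDLE;
      else        state <= next_state;

    assign req = (state == REQ) || (state == WAIT_ACK);
  endmodule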

Figure 1: Visual Representations of RTL Code Aid in Understanding the Design.

Determine Code Quality

Assuming that the code passes its initial checks, quality is the next consideration. Designers can use rule-checking tools to automate code quality assessment. Often, a company has a set of reuse rules, commonly based on the Reuse Methodology Manual. Separately, companies capture rules that represent collective knowledge about code that can cause design errors. Most tools will also score the code by applying a weight to each rule. Figure 2 shows an example of design quality results.

Figure 2: Code Quality Assessment Results Based on Design Rules.

In this example, the code scored an 89% rating, and most of the problems were due to Code Reuse violations. Code quality violations typically fall into the following categories:

  • Coding practices. Violations that indicate potential functional errors. Examples include incomplete sensitivity lists, unintended combinational feedback, unused signals and multiply-driven signals. These must be corrected in order to reuse the code (the sketch after this list shows an incomplete sensitivity list and its fix).

  • Downstream tool issues. Violations that indicate the code will have problems in a specific tool in your tool chain; for example, the code contains elements that cannot be synthesized. These checks assume that your team has created rules based on experience with the particular set of downstream tools to be used in the project. Changing code to meet the needs of a downstream tool can be complex: it can mean rewriting large pieces of code to remove unsupported constructs.

  • Style. Violations due to how the code physically appears in the file. These checks are typically focused on reuse, making the code easier for someone to read and therefore reuse. Examples include using one declaration per line, comment rules, indentation and a consistent FSM encoding technique. Checks for parameterization can fall into this category as well, since hard-coded values make the code less flexible. These violations don’t strictly need to be addressed, but they do give designers an indication of how easy it will be to understand and reuse the code for their project.
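
To make the first and last categories concrete, the invented fragments below show a classic coding-practice violation (an incomplete sensitivity list, which makes simulation and synthesis disagree) alongside a reuse-friendly rewrite that also replaces a hard-coded width with a parameter.

  // Coding-practice violation: 'b' is missing from the sensitivity list, so in
  // simulation 'y' is not re-evaluated when only 'b' changes, while synthesis
  // still builds a mux -- a classic simulation/synthesis mismatch.
  module mux_bad (input wire sel, a, b, output reg y);
    always @(sel or a)               // should be @(sel or a or b), or simply @(*)
      y = sel ? a : b;
  endmodule

  // Reuse-friendly version: corrected sensitivity list plus a parameterized
  // width instead of a hard-coded one, so the next project can resize the bus.
  module mux_good #(parameter WIDTH = 8) (
    input  wire             sel,
    input  wire [WIDTH-1:0] a, b,
    output reg  [WIDTH-1:0] y
  );
    always @(*)
      y = sel ? a : b;
  endmodule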

Establish Design Validity

The largest share of time in a project is typically spent validating code, whether it is new or recycled. At this stage, designers are trying to ensure that the recycled code functions as expected. Verification is a complex process that can require many techniques and tools. For a quick evaluation of the recycled code, designers need to establish some simple “markers” that help them judge the validity of the code, instead of actually performing the full process:

  • Does a testbench exist? If not, one has to be created or generated, and that takes time. A testbench that minimally checks whether the code responds to its inputs can take many hours; one that gives designers confidence that the code is fully functional can take weeks. Designers can try to get by without a testbench for the code block and just use the high-level testbench for the entire project, but this is not advised. A minimal sketch of a directed testbench appears after this list.

  • Does the testbench contain many tests? If not, testing routines must be added to ensure acceptable coverage. Look for the use of more advanced techniques like monitors, scoreboards and randomized tests. These indicate a comprehensive testing strategy.

  • If code coverage can be analyzed, does the testbench exercise the entire design? Is the code specifically instrumented for functional coverage? If the coverage is acceptable, the impact is minimal; if it is not, or if code needs to be inserted to instrument for functional coverage, designers have to upgrade the testbench and the impact can be major.

  • Do the testbench and the design contain assertions? If not, debugging time increases. Assertions allow designers to pinpoint the logic and the exact time at which a violation occurred; without them, designers typically have to work their way back from a bad output value to find the logical cause of the error. The impact is minimal if the code has quality assertions and can be major if it does not.

  • Are there simulation errors? If so, each one takes time to debug. The impact is related to the quantity of errors.

  • How much time do you actually spend simulating? If a designer is lucky, the code being recycled comes with a quality testbench. If the code is large and complex, running the simulation could take days, but the code needs to be simulated at least once regardless of whether it is recycled or new. Moreover, as the code is modified, it has to be re-simulated.
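
As a hedged sketch of what these markers look like in practice (all names are invented, and the environment reuses the small handshake_fsm block shown earlier), the fragment below pairs a minimal directed testbench with one SystemVerilog assertion. A recycled block that arrives with artifacts like these is far cheaper to validate than one that arrives with nothing; a real environment would add monitors, scoreboards, randomized stimulus and functional coverage on top.

  // Minimal, invented directed testbench plus one concurrent assertion.
  module handshake_fsm_tb;
    logic clk = 0, rst_n = 0, start = 0, ack = 0;
    wire  req;

    handshake_fsm dut (.clk(clk), .rst_n(rst_n), .start(start),
                       .ack(ack), .req(req));

    always #5 clk = ~clk;                    // free-running clock

    // Once asserted, req must stay high until ack arrives.
    assert property (@(posedge clk) disable iff (!rst_n)
                     req && !ack |=> req)
      else $error("req dropped before ack at time %0t", $time);

    initial begin
      repeat (2) @(posedge clk);
      rst_n <= 1;
      @(posedge clk) start <= 1;             // one directed request
      @(posedge clk) start <= 0;
      repeat (3) @(posedge clk);
      ack <= 1;                              // respond a few cycles later
      @(posedge clk) ack <= 0;
      repeat (2) @(posedge clk);
      if (req !== 1'b0) $error("req not released after ack");
      $display("simple directed test done");
      $finish;
    end
  endmodule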

Conclusion

Recycling code in order to accelerate design projects promises to remain a common practice. What is uncommon is a technique for determining, within a day, whether or not a piece of code is worth reusing. This article outlines some basic guidelines to help designers validate whether the time savings they initially expect from recycling code will actually be realized.

Designers who establish their own recycling methodology now develop a new way of thinking about the problem, one that allows them to identify the tools needed to automate the process. As they work with automation techniques and tools, they will gravitate toward a design flow that naturally supports developing code for reuse. Eventually, they will find that they are designing new code for reuse as a side effect of employing practical recycling techniques.
