
On the Cutting-Edge of FPGA Design and Verification

As advanced FPGAs have grown to ASIC proportions in terms of size and complexity, their design and verification have become correspondingly more difficult. This has driven the need for greater expertise in the design and verification of FPGAs. However, many companies, large and small, lack either the resources or the expertise for these demanding designs, so they turn to engineering service firms like A2e Technologies.

Projects typically reach us at a point where schedules are tight and time is of the essence. We need to get up to speed quickly on tough, challenging designs, and we must use cutting-edge technologies, including simulators, that help us quickly understand a design and give us the flexibility to work with different languages in the same design environment.

Living on the Edge: FPGA Design Services
As a third-party design and verification service, A2e offers a wide range of product development services, including hardware, firmware, software, analog, and digital FPGAs. Our customers hire us to do everything from a single electronic component to an entire consumer product. We can manage all or part of the design process to ensure product quality and success. The volumes of our customers’ products range from millions per year for high-volume consumer products down to thousands for test and measurement equipment. Projects can last anywhere from six weeks to two years, with the typical duration coming in at around six months.

For a number of reasons, FPGAs have become a growing part of our business. For one thing, the cost of designing an ASIC has continued to grow while the price of FPGAs has dropped to the point that a large portion of ASIC work is going into FPGAs. At the same time, the growth in the size and complexity of FPGAs has expanded their uses into areas that were previously the sole domain of ASICs. Although we deal with everything from small EPLDs to multi-million-gate FPGAs, most of our work is with the larger, cutting-edge devices. Technology has progressed so much over the past few years that designers can squeeze just about everything onto a single FPGA. And, with the exception of emulating an ASIC, there is typically one FPGA per board. Five to ten years ago, that wasn’t the case; you had to use multiple FPGAs on a board.

These massive FPGAs typically integrate many intellectual property (IP) blocks and features such as multiple processor cores and multi-layered bus architectures. Unfortunately, a large portion of the designs coming in the door are poorly documented. This prevents us from moving forward quickly on new projects, as we must fully understand the code, schematics, design intent, and so on before starting work. Because designers usually don’t include all the necessary comments, we often must recreate the design intent by back-tracking through the partially finished design to understand the details.

In the engineering services world, getting the job done quickly and cost effectively is of paramount importance. The goal is to keep costs down, improve design quality, and meet tight project schedules, which means employing the best possible tools and methodologies. The quality of our documentation is also critical. We have to hand off a complete, easy-to-read documentation package to our customers.

With all of these requirements, it is no surprise that our tool suite is as broad as our services, with a design and verification flow that is not technology dependent. We use DxDesigner for schematic capture, Synplicity for synthesis, ModelSim® Designer for digital simulation, Hyperlynx for signal integrity, and PSPICE for analog simulation.

Visualization Maximizes Productivity
A graphical design and verification tool is key to maximum productivity. Our customers deliver their partially completed design in the form of hundreds of files written in Verilog or VHDL. We need to understand what’s there, and we need to do it quickly.

It is much easier for the human brain to grasp large amounts of information visually, for instance in a state diagram, than by reading thousands of lines of code. The proof is in the unmistakable productivity increases we’ve seen at every step in the process when using graphical state machine editors. When we bring up the initial design graphically, we can quickly assess the overall project and then drill down to a specific module and look at its details. The ability to “reverse engineer” an existing design—by viewing its graphical state machines, feeding it into the simulator, and then visually observing the results—helps us gauge the level of effort much more quickly than if we had to pore over the code line by line and figure it out ourselves.
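As an illustration, the RTL that a graphical state-machine editor emits from a diagram typically follows a two-process template like the hand-written sketch below. The module, state, and signal names here are hypothetical, invented for this example rather than taken from any actual tool output or customer design:

```verilog
// Hypothetical two-process Moore state machine, sketched by hand to show
// the style of Verilog a graphical state-machine editor typically generates:
// one clocked state register, one combinational block with a case branch
// per diagram state and an if per transition arc.
module handshake_fsm (
    input  wire clk,
    input  wire rst_n,
    input  wire req,     // request from upstream logic
    input  wire done,    // completion flag from downstream logic
    output reg  grant
);
    // State encoding (one localparam per diagram bubble)
    localparam IDLE  = 2'b00,
               GRANT = 2'b01,
               WAIT  = 2'b10;

    reg [1:0] state, next_state;

    // State register
    always @(posedge clk or negedge rst_n)
        if (!rst_n) state <= IDLE;
        else        state <= next_state;

    // Next-state and output logic
    always @(*) begin
        next_state = state;
        grant      = 1'b0;
        case (state)
            IDLE:  if (req)  next_state = GRANT;
            GRANT: begin
                       grant      = 1'b1;
                       next_state = WAIT;
                   end
            WAIT:  if (done) next_state = IDLE;
            default:         next_state = IDLE;
        endcase
    end
endmodule
```

Because each case branch maps one-to-one onto a bubble in the diagram, reverse-engineering code in this shape back into a picture is mechanical, which is what makes the graphical view of an inherited design so quick to build.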

Working graphically is essentially self-documenting. This is important for a number of reasons, including the comprehensiveness of the final reports we deliver to our customers. A graphical environment also means that we are not dependent upon comments to capture the designer’s intent. Another advantage is that fewer errors are introduced when we go from the abstract state diagram to code, because the state-machine editor automates the translation into Verilog or VHDL, eliminating an error-prone manual process.

Graphical design and verification is a capability that we’ve needed for a long time but have only seen become practical in the past few years. For our needs, it is important that the simulator scale to whatever size project we face, handling EPLDs and multi-million-gate FPGAs equally well. It must also support all of the major standards and synthesis technologies in order to fit our vendor-independent design flow.

Automating Testbench Generation
Testbench automation is also important to us. ModelSim Designer includes Tcl scripting. With a single command, the simulator does all the compiling, automatically runs the script and testbench, and displays a new waveform. We rely extensively on this capability to automate our testbenches, and then we go back and make our adjustments in the graphical state machine. This way we can quickly compile the whole testbench and look at the results, significantly speeding debug.

With Tcl scripts, we have an automated way of checking our code. When we change a single line to fix a bug, we have to make sure nothing else broke. We run a regression test that exercises all the features and automatically runs all the testbenches. Because the whole process is automated, the speed at which we can make these changes is much greater than it would be otherwise.
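For a script to catch failures without a human watching waveforms, the testbenches it runs need to be self-checking: they compare the device under test against expected values and print a result the regression can scan for. A minimal sketch of that style follows; the `adder` DUT and all names are hypothetical, chosen only to keep the example small:

```verilog
// Hypothetical self-checking testbench: applies stimulus vectors, compares
// the DUT output against the expected value, and prints PASS/FAIL lines
// that a Tcl regression script can scan in the simulator transcript.
module tb_adder;
    reg  [7:0] a, b;
    wire [8:0] sum;
    integer    errors = 0;

    adder dut (.a(a), .b(b), .sum(sum));    // device under test (hypothetical)

    // Apply one vector and check the result
    task check(input [7:0] x, input [7:0] y);
        begin
            a = x; b = y;
            #10;                            // let combinational logic settle
            if (sum !== x + y) begin
                errors = errors + 1;
                $display("FAIL: %0d + %0d = %0d (expected %0d)",
                         x, y, sum, x + y);
            end
        end
    endtask

    initial begin
        check(8'd1,   8'd2);
        check(8'd255, 8'd255);              // carry-out case
        check(8'd0,   8'd0);
        if (errors == 0) $display("PASS: all vectors matched");
        $finish;
    end
endmodule
```

With every testbench reporting its own verdict this way, a one-line change can be requalified by simply rerunning the whole suite and checking the transcripts for FAIL.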

It is equally important that we can tweak a design, run a simulation, and then go back and debug, all in a single environment. Accessing all of these tasks within the same GUI boosts both productivity and performance. It is much more efficient than firing up multiple tool sets that invariably have problems communicating with each other. Tool integration also improves performance, because it frees us from having to use a foreign language interface (FLI).

Flexibility Supports Many Languages and Vendors
The ability to do mixed-mode simulation of both VHDL and Verilog is absolutely critical. Historically, designs came in either Verilog or VHDL, but the two never met. For example, a customer might purchase a piece of IP, perhaps a memory, that is in Verilog, while the other 90 percent of the design is in another language.

Mixed-mode simulation enables us to simulate Verilog and VHDL, different models, and different languages all at the same time. This flexibility is very important to us. In fact, we are using mixed-mode simulation on a current project, which includes an internal memory-based IP core written in VHDL, while we are writing the majority of the project in Verilog.
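In a mixed-language simulator, a VHDL entity is instantiated from Verilog with ordinary module-instantiation syntax, and the simulator resolves the cross-language binding at elaboration. A sketch of what such a top level can look like, with hypothetical names standing in for the real project’s VHDL memory core:

```verilog
// Hypothetical mixed-language top level. mem_core is assumed to be a VHDL
// entity compiled separately; from the Verilog side it is instantiated
// like any Verilog module, and the mixed-mode simulator binds the two
// languages together at elaboration time.
module top (
    input  wire        clk,
    input  wire        we,
    input  wire [7:0]  addr,
    input  wire [15:0] wdata,
    output wire [15:0] rdata
);
    // Corresponds to a VHDL declaration along the lines of:
    //   entity mem_core is port (clk, we : in std_logic; ...);
    mem_core u_mem (
        .clk   (clk),
        .we    (we),
        .addr  (addr),
        .wdata (wdata),
        .rdata (rdata)
    );
endmodule
```

Because the boundary is just a port map, purchased VHDL IP can drop into a Verilog design (or vice versa) without translating either side.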

Interoperability with multiple FPGA vendors is also essential since we work with all leading FPGA vendors. The simulator must seamlessly integrate with the synthesis technologies of all of these vendors without forcing us to change tools.

Providing design services for leading-edge FPGAs is not for the faint-hearted, especially since these projects are relatively complicated compared to those done by most design houses. Success in our highly competitive market requires us to deliver superior designs within aggressive schedules at an attractive price. The only way to achieve all that is to use the most cutting-edge tools and methodologies, enabling us to quickly comprehend, design, and verify technology that pushes the envelope.

Using a simulator that supports the creation of graphical state and block diagrams of existing code allows us to quickly come up to speed on new designs and realistically determine the scope of work. That, in turn, enables us to put the right talent on the right projects to get them done successfully and in short order. By using the optimal set of tools to serve the needs of our customers, A2e has successfully become a critical design partner for many leading technology companies.
