feature article

On the Cutting-Edge of FPGA Design and Verification

As advanced FPGAs have grown to ASIC proportions in terms of size and complexity, their design and verification have become correspondingly more difficult. This has driven the need for greater expertise in the design and verification of FPGAs. However, many companies, large and small, lack either the resources or the expertise for these demanding designs, so they turn to engineering service firms like A2e Technologies.

Typically, projects reach us at a point where schedules are tight and time is of the essence. We need to get up to speed quickly on tough, challenging designs, and we must use cutting-edge technologies, including simulators, that help us quickly understand the design and give us the flexibility to work with different languages in the same design environment.

Living on the Edge: FPGA Design Services
As a third-party design and verification service, A2e offers a wide range of product development services, including hardware, firmware, software, analog, and digital FPGAs. Our customers hire us to do everything from a single electronic component to an entire consumer product. We can manage all or part of the design process to ensure product quality and success. The volumes of our customers’ products range from millions per year for high-volume consumer products, down to thousands for test and measurement equipment. Projects can last anywhere from six weeks to two years, with the typical duration coming in around six months.

For a number of reasons, FPGAs have become a larger part of our business. For one thing, the cost of designing an ASIC has continued to grow while the price of FPGAs has dropped, to the point that a large portion of ASIC work is moving into FPGAs. At the same time, the growth in size and complexity of FPGAs has expanded their uses into areas that were previously the sole domain of ASICs. Although we deal with everything from small EPLDs to multi-million gate FPGAs, most of our work is with the larger, cutting-edge FPGAs. Technology has progressed so much over the past few years that designers can squeeze just about everything onto a single FPGA. And, with the exception of emulating an ASIC, there is typically one FPGA per board. Five to ten years ago, that wasn’t the case; you had to use multiple FPGAs on a board.

These massive FPGAs typically integrate many intellectual property (IP) components and features, such as multiple processor cores and multi-layered bus architectures. Unfortunately, a large portion of the designs coming in the door are poorly documented. This prevents us from moving forward quickly on new projects, as we must fully understand the code, schematics, design intent, and so on before starting work. Because designers usually don’t include all the necessary comments, we often must recreate the design intent by back-tracking through the partially finished design to understand the details.

In the engineering services world, getting the job done quickly and cost effectively is of paramount importance. The goal is to keep costs down, improve design quality, and meet tight project schedules, which means employing the best possible tools and methodologies. The quality of our documentation is also critical. We have to hand off a complete, easy-to-read documentation package to our customers.

With all of these requirements, it is no surprise that our tool suite is as broad as our services, with a design and verification flow that is not technology dependent. We use DxDesigner for schematic capture, Synplicity for synthesis, ModelSim® Designer for digital simulation, Hyperlynx for signal integrity, and PSPICE for analog simulation.

Visualization Maximizes Productivity
A graphical design and verification tool is key to maximum productivity. Our customers deliver their partially completed design in the form of hundreds of files written in Verilog or VHDL. We need to understand what’s there, and we need to do it quickly.

It is much easier for the human brain to grasp large amounts of information visually, for instance in a state diagram, than by reading thousands of lines of code. The proof is in the unmistakable productivity increases we’ve seen at every step in the process when using graphical state machine editors. When we bring up the initial design graphically, we can quickly assess the overall project and then drill down to a specific module and look at its details. The ability to “reverse engineer” an existing design, by looking at graphical state machines, feeding it into the simulator, and then visually observing the results, helps us gauge the level of effort much more quickly than if we had to pore over the code line by line and figure it out ourselves.

Working graphically is essentially self-documenting. This is important for a number of reasons, including the comprehensiveness of the final reports we deliver to our customers. A graphical environment also means that we are not dependent on comments to capture the designer’s intent. Another advantage is that fewer errors are introduced when we go from the abstract state diagram to code, because the state-machine editor automates the translation into Verilog or VHDL, eliminating an error-prone manual process.
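As a rough illustration, the Verilog below is the kind of code a state-machine editor might generate from a simple two-state request/acknowledge diagram. The module, state, and signal names are hypothetical, not taken from any actual A2e design:

```verilog
// Hypothetical two-state handshake FSM, of the sort a
// graphical state-machine editor might generate automatically.
module handshake_fsm (
  input  wire clk,
  input  wire rst_n,
  input  wire req,
  output reg  ack
);
  localparam IDLE = 1'b0, BUSY = 1'b1;
  reg state;

  always @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      state <= IDLE;
      ack   <= 1'b0;
    end else begin
      case (state)
        IDLE: if (req)  begin state <= BUSY; ack <= 1'b1; end
        BUSY: if (!req) begin state <= IDLE; ack <= 1'b0; end
      endcase
    end
  end
endmodule
```

Because the editor emits this HDL mechanically from the diagram, the translation step itself introduces no hand-coding errors, and the diagram remains the readable, self-documenting source of truth.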

Graphical design and verification is a capability that we’ve needed for a long time, but have only seen in the past few years. For our needs, it is important that the simulator scale to whatever size project we face, handling EPLDs and multi-million gate FPGAs equally well. It must also support all of the major standards and synthesis technologies in order to fit our vendor-independent design flow.

Automating Testbench Generation
Testbench automation is also important to us. ModelSim Designer includes Tcl scripting. With a single command, the simulator does all the compiling, automatically runs the script and testbench, and displays a new waveform. We rely extensively on this capability to automate our testbenches, and then we go back and make our adjustments in the graphical state machine. This way we can quickly compile the whole testbench and look at the results, significantly speeding debug.
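As a minimal sketch of this single-command flow (the file and module names are hypothetical, and exact options vary by simulator version), a ModelSim Tcl do-file that compiles, loads, and runs a testbench with one `do` command might look like:

```tcl
# run_tb.do -- hypothetical ModelSim do-file; invoked as: do run_tb.do
vlib work                        ;# create the working library
vlog rtl/*.v tb/tb_top.v         ;# compile RTL and testbench sources
vsim -voptargs=+acc work.tb_top  ;# load the testbench with signals visible
add wave -recursive /tb_top/*    ;# display all testbench signals in the wave window
run -all                         ;# run until the testbench finishes
```

One command then replaces the whole manual compile-load-wave-run cycle, which is what makes the edit-and-recheck loop fast.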

With Tcl scripts, we have an automated way of checking our code. When we change a single line to fix a bug, we have to make sure nothing else broke. We run a regression test that sets all the features and automatically runs all the testbenches. Because the whole process is automated, we can make these changes much faster than we otherwise could.
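A regression along these lines can itself be a short Tcl loop. This is a hedged sketch with invented testbench names, assuming each testbench is self-checking and reports its own pass/fail, not our actual script:

```tcl
# regress.do -- hypothetical regression sketch: rerun every testbench in batch
set testbenches {tb_uart tb_fifo tb_dma}
foreach tb $testbenches {
    vsim -c work.$tb    ;# load the testbench in command-line (batch) mode
    run -all            ;# self-checking testbench reports pass/fail itself
    quit -sim           ;# unload this simulation before starting the next
}
```

Running the loop after every one-line fix is what catches the "something else broke" cases before they reach the customer.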

It is equally important that we can do design tweaks, run a simulation, then go back and debug in a single environment. Accessing all of these tasks within the same GUI boosts both productivity and performance. It is much more efficient compared to firing up multiple tool sets that invariably have problems communicating with each other. Tool integration also impacts performance, because it frees us from having to use a foreign language interface (FLI).

Flexibility Supports Many Languages and Vendors
The ability to do mixed-mode simulation of both VHDL and Verilog is absolutely critical. Historically, we got designs in either Verilog or VHDL, but the two never mixed. For example, a customer might purchase a piece of IP, perhaps a memory, that is written in Verilog, while the other 90 percent of the design is in the other language.

Mixed-mode simulation enables us to simulate Verilog and VHDL, different models, and different languages, all at the same time. This flexibility is very important to us. In fact, we are using mixed-mode simulation on a current project: the design includes an internal core of memory-based IP written in VHDL, while we are writing the majority of the project in Verilog.
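As a sketch of how this looks in practice (the entity, module, and port names are hypothetical), a Verilog top level can instantiate a VHDL entity directly, and the mixed-language simulator binds the two across the language boundary:

```verilog
// Hypothetical Verilog top instantiating a VHDL memory entity,
// e.g. one declared elsewhere as:
//   entity mem_core is port (clk : in std_logic; ...); end mem_core;
module top (
  input  wire        clk,
  input  wire        we,
  input  wire [7:0]  addr,
  input  wire [15:0] din,
  output wire [15:0] dout
);
  // The mixed-language simulator resolves mem_core from the
  // compiled VHDL library by name; no wrapper code is needed.
  mem_core u_mem (
    .clk  (clk),
    .we   (we),
    .addr (addr),
    .din  (din),
    .dout (dout)
  );
endmodule
```

This is why mixed-mode support matters: purchased VHDL IP drops into a Verilog design (or vice versa) without hand-written translation layers.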

Interoperability with multiple FPGA vendors is also essential since we work with all leading FPGA vendors. The simulator must seamlessly integrate with the synthesis technologies of all of these vendors without forcing us to change tools.

Design services for leading-edge FPGAs are not for the faint-hearted, especially since these projects are relatively complicated compared to those done by most design houses. Success in our highly competitive market requires us to deliver superior designs within aggressive schedules at an attractive price. The only way to achieve all that is to use the most cutting-edge tools and methodologies, enabling us to quickly comprehend, design, and verify technology that pushes the envelope.

Using a simulator that supports the creation of graphical state and block diagrams of existing code allows us to quickly come up to speed on new designs and realistically determine the scope of work. That, in turn, enables us to put the right talent on the right projects to get them done successfully and in short order. By using the optimal set of tools to serve the needs of our customers, A2e has successfully become a critical design partner for many leading technology companies.

