
Tabula Tames Verification

DesignInsight Brings Unique Debugging Superpowers

I have to prepare myself any time I go to meet with Steve Teig from Tabula. Steve is a bona fide genius, and any time I talk with him I feel like I have to have my mental running shoes tightly laced. Steve brings a level of creativity and insight to the table that one seldom encounters, and when he’s telling you about a new thing, you can bet it will be something you didn’t expect.

So, when I went to Tabula for a briefing with Steve on what has now been announced as the new DesignInsight technology, I knew it wouldn’t just be another one of your typical hum-drum, “we added a completely predictable new feature to our chips” kinda deal. I wasn’t disappointed.

Tabula, for those of you who haven’t been following along, makes programmable logic chips that are probably most closely related to FPGAs. They are similar in that they feature an array of logic cells based on look-up tables (LUTs) that can be programmed and interconnected to perform a variety of logic functions. Tabula’s architecture is distinct because each of those logic cells is time-domain multiplexed at a frequency of about 2GHz to achieve what appears logically to be a 3D FPGA. This architecture, called “spacetime,” allows a Tabula ABAX device to perform the function of a much larger FPGA on a much smaller piece of silicon. The magic of this time-domain multiplexing is hidden from us as designers by the metaphor of treating the device as a 3D array of LUTs. As far as synthesis goes, this is a normal FPGA. As far as place-and-route is concerned, it’s business as usual, except that placement takes place in three spatial dimensions.
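To make that folding idea a bit more concrete, here is a toy model of the concept – a sketch in Python, with made-up parameters (eight folds, for instance) rather than Tabula’s actual figures – showing how one physical cell can play a different logical role on every tick of a fast configuration clock:

    # Toy model of "spacetime" time-domain multiplexing. The parameters are
    # illustrative assumptions, not Tabula's actual figures.

    FOLDS = 8  # logical "layers" the physical fabric cycles through

    class PhysicalCell:
        """One physical LUT that plays a different logical role on each fold."""

        def __init__(self):
            # One truth table per fold; the configuration layer swaps these in
            # on every stroke of the high-speed configuration clock.
            self.truth_tables = [dict() for _ in range(FOLDS)]
            # Boundary signal values, retained per fold until that fold's next
            # active cycle comes around.
            self.saved_outputs = [0] * FOLDS

        def evaluate(self, fold, inputs):
            out = self.truth_tables[fold].get(tuple(inputs), 0)
            self.saved_outputs[fold] = out  # state is stored, not discarded
            return out

    # One user-clock cycle sweeps through every fold of every physical cell,
    # so a small 2D array behaves like a FOLDS-deep 3D array of LUTs.
    def user_clock_tick(cells, fold_inputs):
        for fold in range(FOLDS):
            for cell in cells:
                cell.evaluate(fold, fold_inputs[fold])

The per-fold state retention in this sketch is the detail to hang onto – it becomes important later in the story.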

Does your brain hurt yet? We’ve explained it all before in these articles:

[links]

As you might expect, this 3D-ness makes Tabula’s devices theoretically much cheaper for a given amount of logic than a conventional FPGA, or it allows the company to build a much bigger FPGA than you could effectively build with a conventional architecture and a monolithic device. This is cool, of course, but it turns out that the architecture has other, less-obvious benefits that are really useful. We have talked about some of them in the past – like the fact that timing closure is made much simpler because most routes traverse the multiplexer circuitry and therefore have a completely predictable routing delay. What’s more, because the multiplexers are operated by a clock much faster than the system logic clock, that delay gives designers margin to hedge clock-edge boundaries.
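A quick back-of-envelope sketch (again in Python, using the 2GHz figure from above) shows why those mux-traversing routes are so friendly to timing closure – their delay is an exact multiple of the configuration-clock period:

    CONFIG_CLOCK_HZ = 2e9               # ~2GHz multiplexer clock, per above
    PERIOD_PS = 1e12 / CONFIG_CLOCK_HZ  # 500 ps per configuration cycle

    def route_delay_ps(hops):
        # A route crossing `hops` multiplexed cell boundaries takes a whole
        # number of configuration-clock periods -- no RC guesswork required.
        return hops * PERIOD_PS

    print(route_delay_ps(3))  # 1500.0 ps, known exactly at compile time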

Today, we’re here to talk about a much bigger benefit. And, it appears this one has been part of Tabula’s plan all along, but it is just now being made public. DesignInsight is a revolutionary design verification technology, made possible by Tabula’s unique spacetime architecture and by careful design of the accompanying “Stylus” design tool suite.

As we have discussed at length in these pages, FPGAs offer an extremely attractive alternative to custom chips because they eliminate the substantial non-recurring engineering (NRE) costs and the enormous risks associated with doing a custom chip design. Doing a custom chip at today’s leading-edge node can run into the hundreds of millions of dollars, and that kind of cost can’t be amortized over anything but the largest-volume designs. However, while FPGAs remove a lot of the challenges of custom chip design, we still face a substantial obstacle in design verification. Doing a complex, multi-million-gate SoC design in an FPGA does not reduce the challenge of making sure your design is functionally correct. And, design verification is estimated to occupy as much as 70% of the engineering effort for today’s designs.

Tabula plans to do something about that.

Normally, when we are still in the stage of simulating our design, we have wonderful insight into what our design is doing. We can view signals at will, anywhere in our design, with basically zero turnaround time. The simulation model is completely transparent, accessible, and controllable. Using assertion-based verification, we can instrument our design to warn us if anything gets out of whack. Simulation is an amazingly useful tool in our efforts to create and initially debug our design.
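If you haven’t worked with assertion-based verification, the idea in miniature looks something like the following Python stand-in (a conceptual sketch only – real designs use SystemVerilog assertions, not Python): watch a trace of signal values and complain the moment a property is violated.

    # Conceptual stand-in for an assertion: every 'req' must be answered by
    # an 'ack' within max_latency cycles, or we flag a violation.
    def check_req_ack(trace, max_latency=3):
        pending = []  # cycles of requests still awaiting an ack
        for cycle, signals in enumerate(trace):
            if signals.get("req"):
                pending.append(cycle)
            if signals.get("ack") and pending:
                pending.pop(0)
            for start in [s for s in pending if cycle - s > max_latency]:
                print(f"ASSERTION FAILED: req at cycle {start} "
                      f"unanswered at cycle {cycle}")
                pending.remove(start)

    # A req at cycle 0 with no ack by cycle 4 trips the assertion:
    check_req_ack([{"req": 1}, {}, {}, {}, {}])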

It is also incredibly slow. Simulation often runs thousands of times slower than the actual clock speed of our design, and that makes simulating a meaningful range of situations with a modern design such as a 100G packet processor a practical impossibility.

What we can do, of course, is build actual hardware and run it at actual speeds to find our bugs. The problem with that approach is that we no longer have the kind of visibility and control that we get with a software simulation. And, when we do find a problem, we face the additional challenge that the optimized, fully implemented version of our design sometimes bears little or no resemblance to what we originally created in HDL. So, tracking a functional problem back to the code that caused it can be a nightmare.

There are some solutions to this problem, of course. We can use multi-million-dollar emulators that run much faster than simulation – although still much slower than our actual design – and get some substantial benefits in terms of visibility and traceability. If we’re doing an FPGA design (or using an FPGA prototype of our design), we can instrument it with virtual logic analyzers by inserting special additional hardware that monitors selected signals for us.

The problem with those approaches, though, is that we can still observe only a limited set of signals, we face long re-compile cycles whenever we want to change what we’re looking at, and the thing we’re debugging isn’t actually the same as our real design. We’re debugging a model that is, in most ways (we hope), a good approximation of our design. This can lead to all kinds of situations where the approximation doesn’t behave the same as the real thing. Bugs in our actual design may not show up in the prototype and vice versa.

What we would love is a way to observe and debug our actual, working design – using tools like SystemVerilog assertions – while our design is operating normally.

That is what Tabula’s new DesignInsight technology can do.

Teig explains that DesignInsight was one of the designed-in benefits of Tabula’s architecture from the get-go. DesignInsight takes advantage of the fact that all signals in Tabula’s ABAX chips have to pass through the multiplexing logic at the boundaries of each logic cell, and the state of those signals already has to be saved as the chip cycles through the “layers” of multiplexed cells at 2GHz. The magic muxing is accomplished by a special configuration layer that sequences the new logic functions into the cells at every stroke of the high-speed configuration clock, and retains the logic values of all signals on every layer until the next active cycle of that “fold” comes along.

That means, in simple terms, that Tabula’s chips already monitor and store the value of all signals at all times, and those signal values and logic states are all completely visible from the outside. The device acts as its own embedded logic analyzer. It can perform this function at full operating speed, and for every signal in our design.
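Returning to the toy model from earlier, the readback side of this is almost anticlimactic – since every fold’s boundary values are already latched, “probing” is just reading state that the fabric keeps anyway (a sketch of the idea, not Tabula’s actual mechanism):

    def snapshot(cells):
        # Read every signal on every logical "layer" without adding a single
        # gate or pausing the design -- the values are already stored.
        return [list(cell.saved_outputs) for cell in cells]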

The next part of the magic is in Tabula’s Stylus design tools. As your design goes from source HDL through synthesis and place-and-route, all of the connections to the original logic design are maintained and mapped onto the optimized, generated netlist. That means that this mapping can be traversed in reverse when the device is running and being monitored for debug. You can even monitor signals that were actually optimized away during synthesis and place-and-route.
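The shape of that reverse mapping might look something like this sketch (the data structures and names here are invented for illustration – Tabula hasn’t published Stylus internals): each source-level signal is recorded against its implemented location, including signals that synthesis folded into others or reduced to simple functions of surviving nets.

    # Hypothetical mapping emitted during synthesis and place-and-route.
    source_to_netlist = {
        "ctrl.state[0]": ("cell_17", "fold_3"),              # placed directly
        "ctrl.busy":     ("cell_17", "fold_3"),              # merged with state[0]
        "ctrl.idle_n":   ("invert", ("cell_09", "fold_1")),  # optimized away
    }

    def read_source_signal(name, fabric_state):
        # Recover a source-level value from implemented-netlist state, even
        # for a net that no longer physically exists.
        entry = source_to_netlist[name]
        if entry[0] == "invert":
            return 1 - fabric_state[entry[1]]
        return fabric_state[entry]

    state = {("cell_17", "fold_3"): 1, ("cell_09", "fold_1"): 0}
    print(read_source_signal("ctrl.idle_n", state))  # 1, though optimized out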

The SystemVerilog assertion-based verification methodology attempts to separate the verification code from the design. The idea is that you should be able to add, change, and remove assertions at any time – without affecting the design itself. That separation works very well in a simulation environment, and DesignInsight has brought that same separation into the actual-hardware verification realm as well.

Now, we can observe any signal in our design, while the system is running, without interrupting execution. We can add and edit assertions, written in SystemVerilog or TCL, and apply them to a working, running system – with only a small compile step. This observation can be done in the lab, in prototypes, and even in fully deployed systems in the field. We can even do this remotely. We can apply triggers, and we can observe any signal in the design. The signal mapping in the tools allows us to do this in terms of our original design source, and we can map values back to our original code – even for signals and logic that were optimized away or dramatically altered during synthesis and place-and-route.
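Conceptually – and these names are invented for illustration, since the briefing didn’t include API details – attaching an assertion to a live system looks like the simulation-side sketch from earlier, except the signal values now come from snapshots of the running fabric rather than from a simulator:

    # Watch a live stream of fabric snapshots; the design is never paused,
    # modified, or slowed down. (Invented names -- not Tabula's actual API.)
    def watch(snapshots, assertion, on_fail):
        for cycle, signals in enumerate(snapshots):
            if not assertion(signals):
                on_fail(cycle, signals)

    # Property: whenever 'valid' is high, the FIFO must not be full.
    no_overflow = lambda s: not (s.get("valid") and s.get("fifo_full"))

    live = [{"valid": 1, "fifo_full": 0}, {"valid": 1, "fifo_full": 1}]
    watch(live, no_overflow,
          lambda c, s: print(f"assertion failed at cycle {c}: {s}"))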

The implications of this technology are huge. We have all encountered situations where our design behaved perfectly well during all of our verification and all of our in-house testing. Then, when the system was operating in the field, new problems showed up as a result of different stimuli or different operating conditions. We had to roll a truck or bring a system back into the lab for diagnosis. Often, we would end up with a situation where the problem no longer showed up once we were set up to diagnose it properly.

With DesignInsight, we can remotely interact with a working system and catch and diagnose problems as they occur. Further, with a little cleverness, we could use this capability to diagnose issues in our system far beyond what is inside the FPGA. Imagine getting a report from a customer that a system is acting up, and then being able to remotely connect your debugger to your customer’s system and debug it in terms of your original design source code.

If you’ve tried to verify a large, fast system design before – particularly one whose behavior varies over a very wide gamut of stimuli – you know this problem well. High-speed packet switches can mysteriously slow down as queues are unexpectedly filled by corner-case situations. Video processing systems can create odd artifacts that show up only in certain types of signals. The number and variety of situations that can arise in the field, compared with what our structured testing efforts can accomplish in the lab, are huge. The ability to interact with a working system and debug it without interrupting it or slowing it down is truly revolutionary.

DesignInsight is available now, and it is automatically included at no additional charge for all users of Tabula’s devices and tools. The company has also included verification code specifically designed to take advantage of this capability in its 100G reference design and IP offering.

Sadly, if you’re not using Tabula’s devices, DesignInsight is not an option for you. And this is certainly an interesting differentiator for the company’s products, well outside the scope of the usual suspects of bigger, faster, cheaper in the programmable logic industry. But then, that’s exactly the sort of thing we’d expect from Steve Teig.

5 thoughts on “Tabula Tames Verification”

  1. Will the instrumentation logic occupy logic resources that might otherwise be used by the design being monitored? If so, it is still not monitoring a real implementation of the design.

  2. Pkfan – there is no additional instrumentation logic. The normally configured device can be monitored without the addition of any logic. The instrumentation is accomplished by the same circuitry that is already doing the spacetime constant reconfiguration.

  3. Kevin, thanks for your explanation! Do you mean every output of each logic cycle is saved and can be monitored through the same circuit that is used for spacetime constant reconfiguration? If so, that is great! The next question is: where do we put the trigger logic? How do we find the interesting signals? If we put the trigger logic in software, I guess it would become the new bottleneck of the whole flow.

  4. Regarding your question: “Do you mean every output of each logic cycle is saved and can be monitored through the same circuit that is used for Spacetime constant reconfiguration?”
    Yes, in the Spacetime architecture, states are stored in the interconnect. They can be retrieved and processed on the fly using the DesignInsight compiler and its associated trigger unit.

    “The next question is: where do we put the trigger logic? How do we find the interesting signals? If we put the trigger logic in software, I guess it would become the new bottleneck of the whole flow.”
    The trigger unit is a hard IP block built into the ABAX2P1 chip. It is configurable through the DesignInsight compiler and can support a large variety of complex triggers. In addition, it operates at the full fabric speed of 2GHz.

