
Making a Connection

The Mathworks Simplifies Transfers

A few years ago I took the train from San Diego back home to Sunnyvale. This actually involved three steps: a commuter train to LA, Amtrak to San Jose, and then a commuter train to Sunnyvale.

The train is a fabulous way to travel. However, it is also true that Amtrak is fabulous as long as you don’t have to be anywhere at any particular time. And, true to form, we stayed stationary in San Luis Obispo for a couple hours while they sorted out some crew problem.

I had planned the trip with lots of margin for error. But not quite enough. We arrived in San Jose six minutes after the last commuter train of the day left.

Six minutes.

I even asked a conductor ahead of time to see if any arrangements (or acceleration) might be possible to make the connection.

Apparently not.

After arriving, I mentioned the situation to the Amtrak station agent. She got this shocked look on her face – it would never have occurred to her. Each system is in its own little world, and the thought of talking to the person just five feet to your left about perhaps holding that last train for just a couple minutes doesn’t come up.

I was OK – I could take a bus (with a 40-minute walk after), but the international travelers going up the peninsula were dumped with nary a howdy-do on the platform. Welcome to California. Hope you can find a ride. See ya.

That’s kind of how we do things out here. We have our transit systems, but, historically, they don’t connect. Many don’t even intersect – you have to take a cab or bus or walk to get between them, although that’s slowly improving. It’s been only recently that you could get from the train to BART or the airport directly. Even so, it’s only physically possible. There’s no real schedule coordination going on.

As a result, here’s the process I just went through trying to get from San Jose Airport to Santa Cruz:

– Get to the shuttle stop right after the shuttle left
– Wait 15 minutes for the next shuttle bus to the train
– 10-minute ride to the train
– Wait 20 minutes for the train
– 5-minute ride to San Jose for the Santa Cruz bus – arrive right after the bus left
– 1-hour wait for the next bus
– 1-hour ride to Santa Cruz

In other words, there was more wait time than ride time, all of which could have been fixed by coordination.

Of course, fixing things requires money, and money, in this case, requires someone to pay, and, while we like transit (or we like having emptier roads for us), we don’t like to pay for it, so fixing it is hard.

And by this time, you’re wondering, “OK, this guy is pissed off because it took so long to get home from the airport yesterday. Why do I care about his crappy day?”

Because this is completely analogous to so many design flows out there. Each tool starts as a world unto itself. It has its own gozinta and generates its own gozouta. The next tool in the chain has a gozinta that’s different from the prior tool’s gozouta, so you script the transition to keep from punching someone the umpteenth time you have to do it. And some tools compound the problem by trying to make themselves the center of the flow.
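
To make “script the transition” concrete, here’s a minimal, entirely hypothetical sketch (in MATLAB, since that’s where this story is headed): it takes a pin report from one tool and rewrites it in the format the next tool wants. The file names, the columns, and the set_pin_location command are all made up for illustration.

% Entirely hypothetical glue script: turn "tool A's" pin report (CSV) into
% the constraint file "tool B" expects. The column names and the
% set_pin_location command are invented for illustration.
pins = readtable('toolA_pin_report.csv');   % assumed columns: Signal, Location

fid = fopen('toolB_constraints.tcl', 'w');
for k = 1:height(pins)
    fprintf(fid, 'set_pin_location %s %s\n', ...
            string(pins.Signal(k)), string(pins.Location(k)));
end
fclose(fid);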

This has also gotten better – it’s had to, given the complexity of silicon design flows. And it’s easier to fix things because the money used for fixing comes from investors. (Not that getting investor money is actually easy, but, depending on the market cycle, it can be easier than getting a new transit bond passed.) New startups now know better than to suggest to any CAD manager that they should change their painstakingly crafted flow to accommodate a new tool (that may or may not be around in a few months).

In fact, the further down the flow you get, the more things get tied together – too much, in fact. Special ECO flows are needed to break things when changes are required at the last minute.

Which introduces yet another issue with broken flows: design synchronization. The problem with making an ECO change at the last minute is that the finished design no longer corresponds to the original design files that went through the flow. It’s a price that has to be paid, given the cumulative number of sleepless person-nights that would result from trying to push a last-minute change through the entire flow.

But, more and more, there’s a focus on keeping the design in synch at the various stages of the flow.

Which ideally would mean having one golden reference that specifies everything necessary for that level of abstraction and then only adding more detail as the flow progresses. In that manner, no change at one stage would affect anything at a prior stage.

We’re not there yet, but the Mathworks recently announced another couple steps in that direction. One focuses on their ability to generate embedded C code and target specific systems. This bridges the gap between “experimental” code and code that’s actually runnable on a system.
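
From the command line, that step looks roughly like the sketch below; the model name is hypothetical, and it assumes the model has already been set up for an Embedded Coder target with a suitable toolchain.

% Minimal sketch: generate embedded C code from a Simulink model.
% 'myControl' is a hypothetical model, assumed to simulate correctly and
% to have a toolchain configured for the target hardware.
open_system('myControl');
set_param('myControl', 'SystemTargetFile', 'ert.tlc');  % Embedded Coder target
slbuild('myControl');  % generate (and, per model settings, build) the C code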

But the more ambitious step was to integrate the FPGA design flow under Simulink. With their Simulink HDL Coder product, you can now generate an entire FPGA design without touching an FPGA tool.
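
In rough terms, the push-button path looks something like this sketch; the model and subsystem names are hypothetical, and the options shown are only a tiny slice of what the tool accepts.

% Minimal sketch: generate HDL for a Simulink subsystem with HDL Coder.
% 'myDSP' and its 'Filter' subsystem are hypothetical placeholders.
load_system('myDSP');
makehdl('myDSP/Filter', 'TargetLanguage', 'VHDL');    % generate VHDL for the subsystem
makehdltb('myDSP/Filter', 'TargetLanguage', 'VHDL');  % plus a matching HDL testbench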

As long as it fits.

It’s long been a goal of various companies to abstract out the FPGA design process. But that’s kind of like abstracting out the layout process. No design maps perfectly onto an FPGA’s structure, so there is never any guarantee that any given design will fit.

And, if it does fit, there are thousands of threatened FPGA hardware designers who will remind you that they could make the implementation much more efficient – use a smaller device, leave you more breathing room in the device you’ve got, or get you better performance. And they’re probably right.

Of course, if time were available, hand-crafting transistors would be better than using standard cells too. At some point, good enough becomes good enough.

But there’s another reason why good enough is particularly relevant for IC designers: typically, FPGAs are a prototyping tool. They won’t go to production, so cost/size isn’t an issue. And they’ll never have the actual performance of the finished silicon, so, while faster is usually better, it matters less.

This makes it much more realistic to focus on SoC design without having to detour off into specialized FPGA design.

As long as it fits.

This tool (and others with a similar goal) lets you control some aspects of how the design will be implemented in the FPGA through the HDL Workflow Advisor. The general idea is to abstract the most critical decisions required in the FPGA flow and make them visible through the Mathworks interface. That increases the chances that a successful fit will result.
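
For the script-minded, many of the same knobs can also be set from the command line with hdlset_param; the sketch below is illustrative only, with hypothetical model and subsystem names and a hand-picked sampling of options.

% Illustrative only: steering the implementation from the Mathworks side.
% 'myDSP' and 'myDSP/Filter' are hypothetical names.
hdlset_param('myDSP', 'TargetLanguage', 'Verilog');  % emit Verilog instead of VHDL
hdlset_param('myDSP', 'ResetType', 'Synchronous');   % global reset style
hdlset_param('myDSP/Filter', 'SharingFactor', 4);    % time-multiplex resources in the subsystem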

The quality of the HDL code generated will also matter. And “quality” in this case often means style; what’s good quality for one synthesis tool may not be good for another. Such quality has to be proven over time and can have a big impact on fit and performance.

Of course, in the end, if a design won’t fit for some reason (or if it will fit but miss performance), then there may be no recourse but to bring in an FPGA designer to get the thing going. And that’s where you break the link. Now you’ve got to get off the train at one end of town and take a cab several blocks to the transit station.

But the fact remains that, to date, a smooth transfer has been impossible for most designs; integration like this reduces the number of times you’ll need to take the cab.

So while there will probably never be a completely integrated system that allows you to move seamlessly from your thoughts to silicon (or from the airport to your front door), gradually, gradually, the systems are linking up.

At least with EDA tools, you don’t have to pass a bond measure to make it happen.

