
Making a Connection

The MathWorks Simplifies Transfers

A few years ago I took the train from San Diego back home to Sunnyvale. This actually involved three steps: a commuter train to LA, Amtrak to San Jose, and then a commuter train to Sunnyvale.

The train is a fabulous way to travel. However, it is also true that Amtrak is fabulous as long as you don’t have to be anywhere at any particular time. And, true to form, we stayed stationary in San Luis Obispo for a couple hours while they sorted out some crew problem.

I had planned the trip with lots of margin for error. But not quite enough. We arrived in San Jose six minutes after the last commuter train of the day left.

Six minutes.

I even asked a conductor ahead of time to see if any arrangements (or acceleration) might be possible to make the connection.

Apparently not.

After arriving, I mentioned the situation to the Amtrak station agent. She got this shocked look on her face – it would never have occurred to her. Each system is in its own little world, and the thought of talking to the person just five feet to your left about perhaps holding that last train for just a couple minutes doesn’t come up.

I was OK – I could take a bus (with a 40-minute walk after), but the international travelers going up the peninsula were dumped with nary a howdy-do on the platform. Welcome to California. Hope you can find a ride. See ya.

That’s kind of how we do things out here. We have our transit systems, but, historically, they don’t connect. Many don’t even intersect – you have to take a cab or bus or walk to get between them, although that’s slowly improving. It’s been only recently that you could get from the train to BART or the airport directly. Even so, it’s only physically possible. There’s no real schedule coordination going on.

As a result, here’s the process I just went through trying to get from San Jose Airport to Santa Cruz:

– Get to the shuttle stop right after the shuttle left
– Wait 15 minutes for the next shuttle bus to the train
– 10-minute ride to the train
– Wait 20 minutes for the train
– 5-minute ride to San Jose for the Santa Cruz bus – arrive right after the bus left
– 1-hour wait for the next bus
– 1-hour ride to Santa Cruz

In other words, there was more wait time (15 + 20 + 60 = 95 minutes, not counting the two just-missed connections) than ride time (10 + 5 + 60 = 75 minutes) – waiting that a little schedule coordination could have eliminated.

Of course, fixing things requires money, and money, in this case, requires someone to pay, and, while we like transit (or we like having emptier roads for us), we don’t like to pay for it, so fixing it is hard.

And by this time, you’re wondering, “OK, this guy is pissed off because it took so long to get home from the airport yesterday. Why do I care about his crappy day?”

Because this is completely analogous to so many design flows out there. Each tool starts as a world unto itself. It has its own gozinta and generates its own gozouta. The next tool in the chain has a gozinta that’s different from the prior tool’s gozouta, so you script the transition to keep from punching someone the umpteenth time you have to do it. And some tools compound the problem by trying to make themselves the center of the flow.
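
The glue in question is usually a small script that reshapes one tool’s output into whatever the next tool expects to ingest. Here’s a minimal sketch in Python – the file names, formats, and fields are purely hypothetical – just to illustrate the kind of busywork being scripted:

```python
#!/usr/bin/env python3
"""Hypothetical gozouta-to-gozinta glue: reshape Tool A's CSV timing report
into the JSON format Tool B wants to read. All names and fields are invented
for illustration only."""

import csv
import json
from pathlib import Path


def convert(report_csv: Path, out_json: Path) -> None:
    # Tool A (hypothetical) writes one CSV row per timing path;
    # Tool B (hypothetical) wants a JSON object keyed by path name.
    paths = {}
    with report_csv.open(newline="") as f:
        for row in csv.DictReader(f):
            paths[row["path"]] = {
                "slack_ns": float(row["slack"]),
                "endpoint": row["endpoint"],
            }
    out_json.write_text(json.dumps(paths, indent=2))


if __name__ == "__main__":
    convert(Path("toolA_timing.csv"), Path("toolB_inputs.json"))
```

Every adjacent pair of tools in the flow potentially needs its own little adapter like this – and someone has to maintain them all.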

This has also gotten better – it’s had to, given the complexity of silicon design flows. And it’s easier to fix things because the money used for fixing comes from investors. (Not that getting investor money is actually easy, but, depending on the market cycle, it can be easier than getting a new transit bond passed.) New startups now know better than to suggest to any CAD manager that they should change their painstakingly crafted flow to accommodate a new tool (that may or may not be around in a few months).

In fact, the further down the flow you get, the more tightly things get tied together – too tightly, as it turns out. Special ECO flows are needed to break things apart when changes are required at the last minute.

Which introduces yet another issue with broken flows: design synchronization. The problem with making an ECO change at the last minute is that the finished design no longer corresponds to the original design files that went through the flow. It’s a price that has to be paid, given the cumulative number of sleepless person-nights that would result from trying to push a last-minute change through the entire flow.

But, more and more, there’s a focus on keeping the design in synch at the various stages of the flow.

Which ideally would mean having one golden reference that specifies everything necessary for that level of abstraction and then only adding more detail as the flow progresses. In that manner, no change at one stage would affect anything at a prior stage.

We’re not there yet, but The MathWorks recently announced another couple of steps in that direction. One focuses on their ability to generate embedded C code and target specific systems. This bridges the gap between “experimental” code and code that’s actually runnable on a system.

But the more ambitious step was to integrate the FPGA design flow under Simulink. With their Simulink HDL Coder product, you can now generate an entire FPGA design without touching an FPGA tool.

As long as it fits.

It’s long been a goal of various companies to abstract out the FPGA design process. But that’s kind of like abstracting out the layout process. There is no design that is completely isomorphic to an FPGA structure, so there is never any guarantee that any given design will fit.

And, if it does fit, there are thousands of threatened FPGA hardware designers that will remind you that they could make the implementation much more efficient – use a smaller device, leave you more breathing room in the device you’ve got, or get you better performance. And they’re probably right.

Of course, if time were available, hand-crafting transistors would be better than using standard cells too. At some point, good enough becomes good enough.

But there’s another reason why good enough is particularly relevant for IC designers: typically, FPGAs are a prototyping tool. They won’t go to production, so cost/size isn’t an issue. And they’ll never have the actual performance of the finished silicon, so, while faster is usually better, it matters less.

This makes it more realistic to focus on SoC design without having to detour into specialized FPGA design.

As long as it fits.

This tool (and others with similar goals) lets you control some aspects of how the design will be implemented in the FPGA – in this case, through the HDL Workflow Advisor. The general idea is to abstract the most critical decisions in the FPGA flow and make them visible through the MathWorks interface. That increases the chances that a successful fit will result.

The quality of the HDL code generated will also matter. And “quality” in this case often means style; what’s good quality for one synthesis tool may not be good for another. Such quality has to be proven over time and can have a big impact on fit and performance.

Of course, in the end, if a design won’t fit for some reason (or if it will fit but miss performance), then there may be no recourse but to bring in an FPGA designer to get the thing going. And that’s where you break the link. Now you’ve got to get off the train at one end of town and take a cab several blocks to the transit station.

But the fact remains that, where a smooth transfer used to be impossible, it’s now possible for most designs; that reduces the number of times you’ll need to take the cab.

So while there will probably never be a completely integrated system that allows you to move seamlessly from your thoughts to silicon (or from the airport to your front door), gradually, gradually, the systems are linking up.

At least with EDA tools, you don’t have to pass a bond measure to make it happen.
