
Making a Connection

The MathWorks Simplifies Transfers

A few years ago I took the train from San Diego back home to Sunnyvale. This actually involved three steps: a commuter train to LA, Amtrak to San Jose, and then a commuter train to Sunnyvale.

The train is a fabulous way to travel. Then again, Amtrak is fabulous only as long as you don’t have to be anywhere at any particular time. And, true to form, we sat stationary in San Luis Obispo for a couple of hours while they sorted out some crew problem.

I had planned the trip with lots of margin for error. But not quite enough. We arrived in San Jose six minutes after the last commuter train of the day left.

Six minutes.

I even asked a conductor ahead of time to see if any arrangements (or acceleration) might be possible to make the connection.

Apparently not.

After arriving, I mentioned the situation to the Amtrak station agent. She got this shocked look on her face – it would never have occurred to her. Each system is in its own little world, and the thought of talking to the person just five feet to your left about perhaps holding that last train for just a couple minutes doesn’t come up.

I was OK – I could take a bus (with a 40-minute walk after), but the international travelers going up the peninsula were dumped with nary a howdy-do on the platform. Welcome to California. Hope you can find a ride. See ya.

That’s kind of how we do things out here. We have our transit systems, but, historically, they don’t connect. Many don’t even intersect – you have to take a cab or bus or walk to get between them, although that’s slowly improving. It’s been only recently that you could get from the train to BART or the airport directly. Even so, it’s only physically possible. There’s no real schedule coordination going on.

As a result, here’s the process I recently went through trying to get from San Jose Airport to Santa Cruz:

– Got to the shuttle stop right after the shuttle left
– Waited 15 minutes for the next shuttle bus to the train
– 10-minute ride to the train
– Waited 20 minutes for the train
– 5-minute ride to San Jose for the Santa Cruz bus – arrived right after the bus left
– 1-hour wait for the next bus
– 1-hour ride to Santa Cruz

In other words, there was more wait time than ride time, all of which could have been fixed by coordination.

Of course, fixing things requires money, and money, in this case, requires someone to pay. And while we like transit (or at least we like having emptier roads for ourselves), we don’t like to pay for it, so fixing it is hard.

And by this time, you’re wondering, “OK, this guy is pissed off because it took so long to get home from the airport yesterday. Why do I care about his crappy day?”

Because this is completely analogous to so many design flows out there. Each tool starts as a world unto itself. It has its own gozinta (input format) and generates its own gozouta (output format). The next tool in the chain has a gozinta that’s different from the prior tool’s gozouta, so you script the transition to keep from punching someone the umpteenth time you have to do it. And some tools compound the problem by trying to make themselves the center of the flow.

This has also gotten better – it’s had to, given the complexity of silicon design flows. And it’s easier to fix things because the money used for fixing comes from investors. (Not that getting investor money is actually easy, but, depending on the market cycle, it can be easier than getting a new transit bond passed.) New startups now know better than to suggest to any CAD manager that they should change their painstakingly crafted flow to accommodate a new tool (that may or may not be around in a few months).

In fact, the further down the flow you get, the more things get tied together – too much so. Special ECO flows are needed to break things apart when changes are required at the last minute.

Which introduces yet another issue with broken flows: design synchronization. The problem with making an ECO change at the last minute is that the finished design no longer corresponds to the original design files that went through the flow. It’s a price that has to be paid, given the cumulative number of sleepless person-nights that would result from trying to push a last-minute change through the entire flow.

But, more and more, there’s a focus on keeping the design in sync at the various stages of the flow.

Which ideally would mean having one golden reference that specifies everything necessary for that level of abstraction and then only adding more detail as the flow progresses. In that manner, no change at one stage would affect anything at a prior stage.

We’re not there yet, but The MathWorks recently announced another couple of steps in that direction. One focuses on their ability to generate embedded C code and target specific systems. This bridges the gap between “experimental” code and code that’s actually runnable on a system.
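To make that “experimental versus runnable” gap concrete, here’s a minimal sketch of the general shape that auto-generated embedded C tends to take – an initialize function plus a step function the target calls once per sample period. The names and the toy PI controller below are illustrative assumptions, not actual MathWorks output.

    /* Illustrative sketch only: the function and signal names here are
     * assumptions, not output from any MathWorks code generator.
     * Generated embedded code typically boils down to this shape. */
    #include <stdio.h>

    /* State carried between executions of the controller */
    typedef struct {
        double integrator;               /* accumulated error term */
    } ctrl_state_t;

    static ctrl_state_t ctrl_state;

    /* Called once at startup */
    void ctrl_initialize(void)
    {
        ctrl_state.integrator = 0.0;
    }

    /* Called once per sample period by the target's scheduler or timer
     * interrupt; consumes one input sample, produces one output sample. */
    double ctrl_step(double setpoint, double measurement)
    {
        const double kp = 0.8, ki = 0.1;     /* illustrative gains */
        double error = setpoint - measurement;

        ctrl_state.integrator += ki * error; /* simple PI controller */
        return kp * error + ctrl_state.integrator;
    }

    int main(void)                           /* trivial host-side harness */
    {
        ctrl_initialize();
        printf("%f\n", ctrl_step(1.0, 0.2)); /* one step: setpoint 1.0 */
        return 0;
    }

Once code arrives in this shape, getting it onto a real target is mostly a scheduling exercise rather than a rewrite.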

But the more ambitious step was to integrate the FPGA design flow under Simulink. With their Simulink HDL Coder product, you can now generate an entire FPGA design without touching an FPGA tool.

As long as it fits.

It’s long been a goal of various companies to abstract away the FPGA design process. But that’s kind of like abstracting away the layout process. No arbitrary design maps perfectly onto an FPGA’s fixed structure, so there’s never any guarantee that a given design will fit.

And, if it does fit, there are thousands of threatened FPGA hardware designers who will remind you that they could make the implementation much more efficient – use a smaller device, leave you more breathing room in the device you’ve got, or get you better performance. And they’re probably right.

Of course, if time were available, hand-crafting transistors would be better than using standard cells too. At some point, good enough becomes good enough.

But there’s another reason why good enough is particularly relevant for IC designers: typically, FPGAs are a prototyping tool. They won’t go to production, so cost/size isn’t an issue. And they’ll never have the actual performance of the finished silicon, so, while faster is usually better, it matters less.

This makes it more realistic to focus on SoC design without having to detour into specialized FPGA design.

As long as it fits.

This tool (and others with a similar goal) lets you control some aspects of how the design will be implemented in the FPGA, in this case through the HDL Workflow Advisor. The general idea is to abstract the most critical decisions required in the FPGA flow and make them visible through the MathWorks interface. That increases the chances of a successful fit.
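To give a feel for what “the most critical decisions” might include, here’s a purely illustrative C sketch of the sort of option set such a front end could surface – target part, clock goal, memory mapping, pipelining. The struct and field names are assumptions for illustration only, not the actual HDL Workflow Advisor interface.

    /* Illustrative only -- not the actual HDL Workflow Advisor API.
     * A handful of decisions like these dominate whether a design
     * fits and meets timing, so they're the ones worth surfacing. */
    #include <stdio.h>

    typedef enum { VENDOR_A, VENDOR_B } fpga_vendor_t;   /* hypothetical */

    typedef struct {
        fpga_vendor_t vendor;          /* which vendor's tool chain to drive */
        const char   *device;          /* target part number */
        double        target_mhz;      /* requested clock frequency */
        int           use_block_ram;   /* map large memories to block RAM? */
        int           pipeline_stages; /* extra registers to help timing */
    } flow_options_t;

    int main(void)
    {
        flow_options_t opts = { VENDOR_A, "example-part", 100.0, 1, 2 };
        printf("Targeting %s at %.0f MHz (pipeline depth %d)\n",
               opts.device, opts.target_mhz, opts.pipeline_stages);
        return 0;
    }

The design choice being illustrated is simply that a small, visible set of knobs beats exposing the whole back-end flow.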

The quality of the HDL code generated will also matter. And “quality” in this case often means style; what’s good quality for one synthesis tool may not be good for another. Such quality has to be proven over time and can have a big impact on fit and performance.

Of course, in the end, if a design won’t fit for some reason (or if it will fit but miss performance), then there may be no recourse but to bring in an FPGA designer to get the thing going. And that’s where you break the link. Now you’ve got to get off the train at one end of town and take a cab several blocks to the transit station.

But the fact remains that, until now, a smooth transfer has been all but impossible; for most designs, this kind of integration reduces the number of times you’ll need to take the cab.

So while there will probably never be a completely integrated system that allows you to move seamlessly from your thoughts to silicon (or from the airport to your front door), gradually, gradually, the systems are linking up.

At least with EDA tools, you don’t have to pass a bond measure to make it happen.
