Intel and Altera – Eleven-Figure Chicken

Stakes are Raised and Tensions Build

We’ve written a few times now about the rabidly rumored Intel bid to buy Altera. In fact, we actually predicted the whole thing almost a year ago: 

When Intel Buys Altera — Will FPGAs Take Over the Data Center? 

Were we right? It’s still too early to tell. And all we really have to go on are: reports of leaked information, the rules of the game, technology trends, and our own speculation about the motivations and positions of the players involved. 

But one thing is for sure – this is one high-stakes game of chicken.

New information has emerged this week – in the form of a new Reuters report – which says that, according to “sources,” Intel signed a “standstill” agreement with Altera back in February. This means that Intel would be free to launch a hostile takeover bid for Altera as early as June 1. 

Why is that interesting? It revs up the engines to near maximum RPM and puts a timeline on when the clutches could be released. 

But this story really began around five years ago. And it’s hard to understand what’s going on today without knowing how we got here – poised perilously on the precipice. So, let’s review a timeline of the events that led us to this dusty road where two angry drivers are about to go head-to-head with pedals to the metal in an eleven-figure game of chicken. We’ll throw in our own interpretation as we go along, of course.

Waaay back in 2010, we talked about an obscure Intel product that mixed an Intel Atom processor with an Altera FPGA in the same package:

Intel’s Stellarton Mixes CPU and FPGA 

While that product – aimed at the embedded systems market – was a clear failure, it did indicate that Intel saw significant value in pairing conventional processors with FPGAs. 

Then, in 2011, Altera’s archrival Xilinx announced a new type of chip that combined conventional processors (ARM processors in this case) with FPGAs – on the same chip. Xilinx called this new family “Zynq,” and (spoiler alert) Zynq has been a pretty smashing success. We explained that here:

Xilinx Zynq Zigs, Zags, and Zooms

It didn’t take Altera long to fire back with an announcement of their own (mostly marketing at the time) that they would offer a combination of processors and FPGAs on the same chip.

Shaking Up Embedded Processing
Altera Introduces SoC FPGAs 

Then, fast-forward to November 2012, when Altera announced that they had created technology to compile OpenCL code targeting Altera FPGAs. That meant that people who had written high-performance software to take advantage of accelerators like graphics processing units (GPUs) could run that software on FPGAs – with better performance and lower power consumption. We wrote about that here:

The Path to Acceleration
Altera Bets on OpenCL 

In that story, we said: “We could truly end up with data centers full of racks of servers – where each blade contains FPGA SoC devices running OpenCL code. The performance capabilities of such an architecture would be staggering, and, more importantly, the power-per-performance would be dramatically lower than for today’s high-performance computing systems. Since power is often the ultimate limitation on the amount of compute power we can deploy today, these could form the basis of the supercomputers of the future.”
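The OpenCL model that made this possible is worth a moment: the programmer writes a small, data-parallel kernel, and the runtime launches one invocation per data element. A GPU maps those invocations onto threads; a flow like Altera’s compiles the same kernel into a pipelined hardware datapath instead. A minimal sketch (generic OpenCL C for illustration, with a pure-Python stand-in for the runtime – not taken from Altera’s SDK):

```python
# A generic OpenCL C kernel: each "work-item" computes one output element.
# On a GPU this maps to a thread; an FPGA flow like Altera's instead
# compiles it into a deeply pipelined hardware datapath.
KERNEL_SRC = """
__kernel void saxpy(__global const float *x,
                    __global const float *y,
                    __global float *out,
                    const float a)
{
    int i = get_global_id(0);   /* which element am I? */
    out[i] = a * x[i] + y[i];
}
"""

def emulate_saxpy(a, x, y):
    """Pure-Python stand-in for launching the kernel over len(x) work-items."""
    return [a * xi + yi for xi, yi in zip(x, y)]

if __name__ == "__main__":
    result = emulate_saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
    print(result)  # [12.0, 24.0, 36.0]
```

The power advantage comes from specialization: the synthesized pipeline performs only this computation, with no instruction fetch, decode, or cache machinery burning energy on every operation.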

Altera doubled down on that announcement shortly after, revealing that they had formed a very close partnership with ARM to develop a tool flow that would make it particularly easy for ARM customers to take advantage of Altera SoC FPGAs.

The Secret Ingredient
Altera and ARM Roll FPGA/SoC Tools 

Then in February, 2013, Altera announced that they had signed a deal with Intel, where Intel would manufacture the next generation of Altera FPGAs and SoC FPGAs using Intel’s upcoming 14nm Tri-Gate technology.

We gave our take on that announcement here:

Altera Partners with Intel for 14nm Tri-Gate FPGAs

Because Intel is widely regarded as having the world’s most advanced semiconductor manufacturing processes, Altera had the potential to gain a technology advantage over archrival Xilinx, which is developing its own next-generation FPGAs based on TSMC’s 16nm FinFET technology. The terms of the Altera/Intel agreement were not disclosed at that time (and those terms will become EXTREMELY important later in our story), but Altera did say that their deal with Intel was exclusive and that no other “major FPGA company” would be allowed to work with Intel. (Hint: the only other “major” FPGA company is Xilinx.)

We explained more about that in March, 2013, and introduced potential interlopers Achronix and Tabula:

FPGA Wars
It’s Getting Hot at the Top 

A few months later, in June, 2013, Altera announced some details of the chips Intel would be building for them:

The Next-Node Battle Begins
Altera Announces “Generation 10” 

What Altera DID NOT announce at that time was – whose processors would be included in their upcoming Generation 10 SoC FPGAs. Logically, it should be ARM processors, because Altera had already ventured far enough down that path that switching horses would create severe disruption for Altera customers and would require a major investment in new tools and re-engineering of the SoC FPGA architecture. 

But that meant that Intel would be manufacturing processors designed by archrival ARM. And that possibility would have been hard for Intel to swallow.

Then, in October, 2013, Altera came clean. The Intel-fabbed, next-generation Altera Generation 10 FPGA SoCs would indeed contain ARM processors. And not just any ARM processors – they were to be the newly announced, 64-bit, data-center-ready ARM Cortex-A53.

ARMing a New Generation
Altera Announces Processor Architecture for Gen X 

Two important points from that announcement – Intel will be making data-center-ready chips for Altera, and those chips will contain ARM processors rather than Intel processors.

We need to think at this point about TWO Intels, or rather two different groups within Intel. One group has scored a major success by signing Altera as the first major client of Intel’s new and emerging merchant semiconductor fab business. Go Team!

The second group – which has most likely been on the sidelines through this process – is Intel’s data center group, which accounts for a very significant portion of Intel’s total revenue. This group may have noticed around this time that their own company was about to begin manufacturing devices for Altera that could pose a major risk to their core data center business. Intel was, in effect, possibly holding a gun to its own head. Totally not cool.

We were then left to speculate about the ensuing race between Altera and Xilinx to develop FinFET-based FPGAs – Altera partnering with Intel, and Xilinx partnering with TSMC. We dropped the details of that battle here:

Xilinx vs. Altera
Calling the Action in the Greatest Semiconductor Rivalry 

Compounding the problem for Intel, Xilinx would pretty obviously be out there making their own, similar, data-center-capable devices. Also with the potential to impact Intel’s data-center dominance. Then, Xilinx came right out and made those intentions known:

FPGAs Cool Off the Datacenter
Xilinx Heats Up the Race 

By this time, one thing was crystal clear – by 2016, there would be two FPGA companies making amazingly capable chips that combined conventional processors with FPGA fabric. Those devices would be serious game changers and would fly into a wide range of markets – all the way from embedded systems to the Internet of Things (IoT) to machine vision to – yep, data centers.

Oh, and most important for this discussion, they’d BOTH be doing it with ARM processors.

In early 2014, Altera showed a bit more of their strategy, announcing a series of floating-point arithmetic features for their FPGAs.

Toward 10 TeraFLOPS
Altera Kicks Up Floating Point 

This is significant because it signaled even more clearly that Altera was aiming at high-end processing applications – data center and high-performance computing.

If you’re the Intel data center group, are you sweating yet? If not, you should be. You have a forty-billion-dollar business to defend. Intel’s dominance has always been primarily based on the fact that Intel has the world’s best semiconductor manufacturing. And having the world’s best semiconductor manufacturing is valuable only as long as you’re winning the race in Moore’s Law. 

But if Moore’s Law ends, there is no more race to win. Intel has to find another advantage. We talked about Moore’s Law ending in August, 2014:

The Sun Sets on Moore’s Law
Are FPGAs Harbingers of a New Era? 

But Intel knew that long before we wrote about it. They’ve probably had people trying to blast microscopic droplets of tin with boxcar-sized lasers to make EUV light for a while now. They know that making chips beyond 7nm (which should start around four years from now, according to Moore’s Law) is going to be very, very difficult – maybe impossible – and most definitely so expensive that almost nobody on Earth will be able to take any meaningful advantage of it. 

Intel dropped another bomb – albeit very, very quietly. In June, 2014, at the Gigaom Structure ‘14 event, Diane Bryant announced that Intel would be producing a version of their Xeon processor with a coherent FPGA in the same package.

Let’s think about that for a minute, shall we? Intel is worried enough about the future of their data center business that they’re going to make processors with FPGAs in the same package. Folks, FPGAs are very expensive. Intel must have made a deal with someone to buy these FPGAs, but they don’t identify the supplier. They also give no clue about where they’re going to get the tools required to make these FPGAs useful in any way. All in all – it’s a very mysterious announcement.

It was mysterious enough, in fact, that it inspired us to publish “When Intel Buys Altera” in June of last year:

When Intel Buys Altera
Will FPGAs Take Over the Data Center? 

That article was based on the idea that we’ve pretty much established on this timeline – that Intel really needed FPGA technology to protect their dominant position in data center processors. But there are other ways they can get access to that technology without buying Altera, right? Let’s list them:

Intel could partner with an FPGA company. But that won’t guarantee that the FPGA company will apply their resources to the priorities needed to achieve Intel’s objective. Xilinx and Altera both have MANY market segments they’re attacking, and the data center is just one of them. If you listened to Altera’s latest quarterly earnings conference call, it’s pretty clear that Altera has only a small portion of their overall focus on data center applications. And, in order to really take advantage of FPGA technology, the FPGA needs to be on the same chip with the conventional processor – not just sitting next to it or in the same package. And Altera and Xilinx both seem quite content to proceed with ARM processors on their chips, which leaves Intel out in the cold.

Or, Intel could buy a different FPGA company, or make their own FPGA fabric, couldn’t they?

Ah, here is where the plot thickens. Things turn all nasty and gnarly. Apparently, in the February, 2013 agreement Intel signed with Altera, they agreed that they would not make FPGAs for any other major FPGA company (meaning Xilinx). And they agreed that they would not make FPGAs themselves, and that they would not BUY another FPGA company.

Yep, you read that right. In that agreement, Intel apparently shot themselves in the foot – repeatedly. When you put yourself in a position where you have no other alternatives, you are no longer negotiating. If Intel needs FPGAs to protect their data center business, they have signed an agreement that says they have no alternative but to get that FPGA technology from Altera. They can’t develop it themselves, and they can’t buy Xilinx. 

Whoa!

But, hang on, how long is the term of that agreement? Can’t they just wait it out?

Well, if we look at what Altera made public, that information is conveniently redacted. But we do have a hint – Altera had to ask for at least twelve years of supply guarantee for certain customers (like defense contractors). That means Intel probably had to agree to keep on making Altera FPGAs for at least twelve years. If that’s the term of the overall agreement as well, Intel is basically – what’s the technical term? Oh, yeah, “screwed.”

That brings us to the new information from Reuters. According to that report, sometime around February, 2015, Intel came to Altera with a tentative offer of somewhere around $58 per share. That was (as is common practice) contingent upon the two companies signing a nondisclosure agreement, and Altera giving Intel access to current financial data. After reviewing that data, Intel reportedly came back with a reduced offer of $54 per share. Apparently, at that point, Altera responded with something akin to “pound sand” – or they may have countered with a seemingly reasonable price bump to $60. Either way, the two parties appear to have stopped talking at that point.

We can imagine that Intel, having reviewed Altera’s financials, knew that Altera would have to report a far-less-than-stellar quarterly result. If Altera’s shareholders knew that Altera had just turned down a $54/share offer, and that the stock had been in the $30 range before the rumors began, and then a bad quarter had been reported, they might begin to apply significant pressure to Altera to reconsider the offer. And, in fact, we see evidence that pressure has begun to mount.

At the same time, it was disclosed that Intel had signed a “standstill” agreement – that they would not try a takeover by buying Altera shares on the open market before June 1 (a part of the normal process for the pre-acquisition negotiations we described above). That is, perhaps, the biggest coiled spring in the plot right now. 

Come June 1, if these reports are reasonably accurate, Intel could choose to initiate a hostile takeover of Altera. Many of Altera’s shareholders are clearly nervous already, and they could probably be easily convinced to sell their interests to Intel at somewhere around what the company is rumored to have offered already. Should Altera management hold their ground and go for the extra 10% or so? Is there more at stake that we’re not seeing? 

There certainly is no clear path for Altera to be so successful that they can generate the kind of value Intel seems to be offering the Altera shareholders in any reasonable amount of time. You can feel the torch mob gathering outside the gates.

And as we go to press, the New York Post is running a story saying that the two sides will be sitting down to negotiate again.

If our reasoning is correct, there is no other comparable suitor waiting in the wings for Altera. The company has this unique strategic value to Intel, and to Intel alone. And there is no alternative for Intel to acquire FPGA technology – if they believe that FPGA technology is essential to protect their data center business. The two companies would indeed be on a collision course of epic proportions. 

It will be interesting to watch.

13 thoughts on “Intel and Altera – Eleven-Figure Chicken”

  1. Thanks for the great analysis and historical perspective, Kevin. This marriage sounds inevitable to me. Disclosure: I work at Intel but don’t have any internal info on this.

  2. I think Intel should stop calling the FPGA that. Instead, put that coherent processor near the traditional CPU and demonstrate performance and low power.

    No point in bringing in the baggage of the FPGA (hard to program) when it seems they are trying to create an integrated solution. That would be taking a page from Dr. Christensen’s thesis, and à la Apple, where hardware and software (firmware in this case) are presented as one integrated solution.

  3. There are three completely separate vertical markets here: data center, embedded systems, and the relatively small market of everything else a hardware engineer sticks an FPGA + SoC into.

    Intel doesn’t need Xilinx or Altera to protect the data center market. An excellent, highly optimized co-processor FPGA for software algorithms is a very different beast from your typical Xilinx or Altera FPGA. It doesn’t need I/Os, other than a clean coherent L2 cache interface and some mailbox interrupts in both directions. A clean x86 co-processor instruction set interface would be really cool too – actually far more than cool; it would be the interface of choice for many applications.

    The huge IP portfolio Xilinx and Altera have developed is really centered on hardware designs focused on SoCs and communications controllers. It’s all about I/Os, with some modest logic fabric, even in the “large” versions. Purchasing Altera adds some value to Intel’s embedded market … but not a lot, actually.

    Done well, a co-processor FPGA needs very fast re-configuration loading and initialization. It needs lots of independent small memories ranging from a few dozen bytes to a few dozen kilobytes – something larger than distributed LUT and block RAM memories. They could probably be dynamic instead of static to improve density (with shared, hidden refresh logic). It also needs lots of 3-, 4-, 6-, and 9-input LUTs, or large CLBs that can easily be configured into LUTs with 3-10 inputs in various pre-defined ratios/sizes, plus a two’s-complement math block with a look-ahead carry chain and extra muxes and FFs/latches.

    A logic-packer backend for GCC, with embedded place-and-route, would allow for quick, verifiable maturity of a tool chain. Work has already been done in the FPGA area, and GCC already supports OpenMP. It wouldn’t take much to build custom optimized OpenCL libraries for the FPGA fabric. As long as Intel leaves the place/route/topology specifications open, the open-source tool chain will mature quickly … something Xilinx and Altera never figured out with their proprietary, locked internals. Actually, Xilinx and Altera will find themselves scrambling to avoid losing market share with Intel’s place/route/topology technology fully open – especially in the embedded market.

    If Intel rolls this architecture into Xeon’s, mobile processors, and Atom SOC processors … the game is completely changed.

    So what does Intel really need from Xilinx or Altera?

    Not much … maybe some FPGA customers, but will they not follow if the Intel offering is right?

    This can be a market game changer for Intel … and for the software side of data center and embedded engineering. Either way, there are going to be some hardware engineers who stick tight with Xilinx … but does that matter in an Intel-centric market with embedded FPGA fabrics?

    The bigger elephant in the room is: what is AMD going to do to catch up? Or will AMD jump into this first? AMD certainly wants the data center market too – it could also enter the Altera bidding war and buy an equal piece of Altera, leaving the move in a stalemate. And if Intel does roll their own FPGA fabric … what happens to the market share and stock valuations of AMD, Xilinx, and Altera?

  4. @TotallyLost, I agree with most of your technical points.

    However, in the recent earnings call, John Daane said: “we do not want to compete with our foundry and we agreed that Intel would not invest in, develop their own product line or buy a PLD company or buy another PLD company.”

    While we haven’t yet sifted through the parts of the 86-page agreement that were apparently made public, on the surface that sounds like Intel can’t develop their own PLD technology. That would mean, if they want to take advantage of PLD/FPGA technology for data center acceleration, they may have backed themselves into a corner where their only option is Altera. (I am doing more research to try to confirm or eliminate this theory.)

    And, even if they did develop their own technology, I think you are dramatically underestimating the difficulty of producing the minimum required tool flow:
    – high-level language to RTL conversion/compilation/mapping/synthesis
    – logic synthesis
    – place-and-route
    – timing analysis
    The problem is not just developing or obtaining the tools (various EDA companies have a lot of that lying around); it is completely nontrivial to map those tools to a particular FPGA fabric and library set, test them with a robust set of designs, and prove them out in the real world. That is a task that cannot be easily bought – it typically takes years of pushing designs through tool flows, seeing what breaks, and re-tuning the tools and flows to fix it.
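    To give a flavor of just one piece – mapping logic onto a specific fabric – here is a toy sketch (hypothetical Python, nowhere near production scale): deriving the truth-table configuration of a single 4-input LUT from a boolean function. A real mapper must do this across millions of cells while partitioning logic among available LUT sizes and keeping timing closed:

```python
def lut_init(fn, n_inputs=4):
    """Exhaustively evaluate a boolean function over all input
    combinations to produce the LUT's truth table -- the 'init'
    bit vector that configures one cell of the FPGA fabric."""
    table = []
    for value in range(2 ** n_inputs):
        # Decode the integer into one input combination (bit i -> input i).
        bits = [(value >> i) & 1 for i in range(n_inputs)]
        table.append(1 if fn(*bits) else 0)
    return table

# Map one small expression into a single 4-LUT: out = (a AND b) XOR (c OR d)
init = lut_init(lambda a, b, c, d: (a & b) ^ (c | d))
print(init)  # 16 entries, one per input combination
```

    Everything hard in a real flow happens after this step: choosing which gates share a LUT, placing the LUTs, and routing between them under timing constraints.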

    The first item in that list – the high-level language part – is a still-unsolved problem. HLS technology exists that kinda addresses aspects of it, Altera has a good start with their OpenCL flow, and there are some other options that might provide good starting points. But there is nothing out there that will do what is really needed – take huge legacy x86 applications and automatically (or semi-automatically) map them into heterogeneous environments with conventional processors and FPGA fabric in a way that improves power efficiency and/or performance meaningfully (realizing a significant fraction of the potential of FPGA-based acceleration).

    Thoughts?

  5. FPGAs are overrated. I think Intel is making a mistake to spend so much for only ~$2B in revenue.

    The reason FPGAs have never really broken out strongly is that they are a pain in the ass to use. You need highly skilled engineers who can do digital design and meet tight timing budgets. It is not easy to get them to dance right. The customers constantly bombard the FPGA companies with a huge support burden.

    I think you can see it in the FPGA revenue trend. It seems to have stalled out at around $4B per year in combined revenue – in other words, a saturated market. The pool of people willing to go through the effort is stable and not increasing significantly.

    Both Xilinx and Altera keep claiming, node after node, that this is the one where ASIC designs will decrease and FPGA designs will increase and take over the world. It’s been the same story for a decade, and it has never lived up to the hype.

    At the end of the day, the most efficient programmable logic is a processor. It has a robust and mature design flow with banks upon banks of software engineers ready to write efficient code. They don’t need huge support.

    The Altera board must be high, though, not to take the offer. I am amazed they did not, with that terrible quarterly report ready to be released.

  6. @gobeavs,
    Most of what you’re saying about FPGAs is true – they are difficult to “program.” But, they don’t have to be. Emerging high-level design techniques make it much easier to take advantage of the inherent benefits of FPGAs. Model-based design, high-level synthesis, and alternative flows like Altera’s OpenCL flow promise to (eventually) make it so that people without extensive hardware design experience can use FPGAs to accelerate algorithms.

    I agree that $2B in revenue would not be worth what Intel is reported to have offered. But I believe it is a strategic move (as the article says) designed to protect and reinforce much larger markets.

    When you say the “most efficient” programmable logic is a processor, it depends what you mean by “efficient.” If you mean “least amount of engineering effort to implement a particular function”, I’ll agree with you. If you mean “least amount of energy consumed to execute a particular function” I will strongly disagree. The primary advantage of FPGAs as accelerators is that they can dramatically reduce the amount of power required – usually while improving throughput.

    I believe this is the key point, and the reason that Intel appears to be offering an amount you don’t understand, and that Altera is apparently willing to hold out for even more.

    None of this is about one quarterly earnings report. In the FPGA business, this quarter’s revenue is always based on how well the company did 2-3 years ago winning sockets with technology two generations older than what they’re actively working on today. Those designs are now in volume production and account for most of the company’s revenue. The fruits of the technology being developed today will be seen 2-3 years from now.

  7. @Kevin,

    The hard part is always the part we don’t have personal experience with … the part full of unknowns.

    I spent several years working on TMCC and FpgaC, and experimenting on meshing that work with research FPGA synthesis tools to avoid exponential computational loads from the simple 4-LUT bit map boolean functions that TMCC was written with. I had about an 80% complete implementation with Berkeley’s ABC a while back. If I were to finish it today, I’d probably merge it with Qflow as a replacement for Odin-II. I lost interest due to the open hostility from Xilinx.

    I can say that usable C-to-logic (EDIF) compilation isn’t nearly as hard as one might believe, and it has been done by about a dozen teams in the last 20 years. It’s actually about the same as implementing Verilog, but a tad easier/different.

    Qflow/ABC is a fairly mature tool chain, completely open source, and free of proprietary interests.

    The fun part would be mapping this as a back end for a current release of GCC, borrowing some of the lessons learned in AutoPilot.

    Intel would have to provide an EDIF place/route tool to match their FPGA fabric. They have better than good software teams … I don’t see that as an issue for Intel.

    As for FSM and combinatorial timing issues, those too are not nearly as difficult as you might expect. There are several interesting strategies where the FSM can be optimized for the fastest clock and use multiple states to hide slower combinatorial paths as necessary.

  8. @TotallyLost,
    I’m actually claiming this is the hard part because I do have a lot of experience with it.

    Before I started this gig, I spent a little over 20 years managing development of commercial place-and-route, high-level-synthesis, and RTL and FPGA synthesis products. I had the privilege to work with some extremely talented engineers, and we built some very good tools that are still in active production use today.

    One (unfortunate) thing I learned in that process is that when the engineers proclaim the code is “100% done” – you are about 1% of the way to a robust, production-quality tool. Typically, we would gather something like 100 customer designs, run them through our (new) tool, and find scores of errors, optimization problems, and designs the tool just couldn’t handle. We’d spend months to years fixing all those problems until we finally reached the point where our 100 designs sailed through smoothly. Yay!

    Then, we’d try the 101st design and everything would break again. Then, we’d go to beta and the whole cycle would repeat.

    It took literally years and hundreds of designs run through each tool (different tools, different teams, different companies) to build something that was robust and reliable enough to throw into a production environment where we were confident it could take on most of the designs that the real world would want to throw at it.

    I don’t know of any shortcuts for this process.

  9. @Kevin,

    I agree that there are a number of designs that will push the boundaries, and break any tool — software or hardware tools.

    We had the same problem with software tool maturity when porting applications to new machines/architectures running Unix, FreeBSD, and Linux during the ’80s and ’90s. Constantly debugging new customer code for tool-chain failures was painful. It took a while for proprietary compiler and OS teams to give up, adopt the superior open-source GCC and OS solutions, and join the developer pools for those products.

    The difference in reaching maturity with broad open-source tool chains is that the community is significantly bigger – both the development community and the end-user community. Working together, that community has produced a far more powerful tool chain and run-time environment than any single company could have done alone, creating a huge global community asset that has enabled far more products along the way.

    Even as mature as GCC is, somebody regularly finds a boundary condition where it fails … and it’s fixed not long after.

    There are clear labor-resource issues with a 30-man or 100-man development team that limit the maturity rate of a large software project.

    Our proprietary hardware development tool chain industry remains anchored in mediocrity, limited by what small tool chain teams can conceive and deliver inside the limited funding resources available.

    That changes big time when moving to an open-source hardware development tool chain. A logic compiler used by thousands of developers worldwide, with dozens of corporate-sponsored development teams sharing the workload, along with thousands of interested individual developers who are all stakeholders, greatly improves features, quality, and depth of testing. Intel is a big enough market to drive this, and everyone will follow.

    An Intel market with FPGA fabric common across Xeon, desktop, mobile, and embedded processor cores WILL create the largest common-interest developer pool – far more than Xilinx or Altera can create.

    Or maybe it’s an alliance of AMD, MIPS, ARM, and Atmel joining the game with a common fpga fabric included in their CPU cores and open source tool chain that ends up driving the market.

    This does make a huge difference in time to maturity, and to advancing the state of the art.

    It’s why GCC and Linux replaced dedicated proprietary compiler and OS teams worldwide. It’s also why I’m certain that if Intel open-sources the software tools for a CPU FPGA fabric, they will mature quickly … very quickly.

    And I’m also certain that Xilinx and Altera will have to join the game within 5 years or lose significant market share. When you sum Intel, plus Xilinx, plus Altera, plus hundreds of other smaller development teams … the result will be far greater than Xilinx or Altera alone.

    It certainly happened with IBM, Sun, SGI, HP, and many others embracing GCC and Linux.

    Intel, Xilinx, Altera, and others freely borrow from university research teams’ results and from open-source projects like ABC. The world will be a better place cooperating on and contributing to a common tool chain, rather than maintaining budget-limited proprietary tool chains that form multiple anchors holding greater progress back.
