
Expensive Silicon

Transistor Costs are Hitting a Wall

Depending on whom you talk to, Moore’s Law says that something doubles every 18-24 months. It might be speed or number of transistors. But these days, as physics rears its ugly head, performance is taking a back seat to transistor density. And yet there’s a third parameter that has always been considered a proxy for density: cost.

The assumption has always been that the more transistors you can put in a small space, the cheaper everything is going to be. And that’s been a good assumption for a long time. Either you get more dice per wafer, improving the economies of scale, or, by packing more transistors into the same space, you can do more for the same price. Think of the microcontroller you can get now for the same price that quad NAND gates once commanded.
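The dice-per-wafer arithmetic is easy to sketch. The formula below is a common industry approximation (not from the article): wafer area divided by die area, minus an edge-loss term for the partial dice at the rim.

```python
import math

def dice_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Gross dice per wafer: wafer area over die area, minus an
    edge-loss correction for partial dice at the wafer's rim."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Growing the wafer or shrinking the die both raise the count.
# For an illustrative 100-mm^2 die:
print(dice_per_wafer(300, 100))  # 300-mm wafer
print(dice_per_wafer(450, 100))  # 450-mm wafer: well over 2x as many
```

The jump from 300 mm to 450 mm better than doubles the die count, since edge losses shrink relative to the total area as the wafer grows.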

But that link between process node progress and improved cost is breaking down. To see the evidence of that, you have to look no further than Imec’s recent Technology Forum and Semicon West. Presentation after presentation cautioned that cost is an issue, and a variety of offline discussions reinforced the points.

Bigger Wafers?

The two knobs we have for lowering die cost have traditionally been wafer size and transistor size. Right now, steps are underway to move to 450-mm wafers, but these steps are progressing haltingly. There’s a lot of work and a lot of infrastructure that accompanies each change of wafer size, and either new floorspace must be allocated in a fab for new equipment or old space must be re-allocated by getting rid of old equipment. (Or new equipment can accommodate multiple wafer sizes…)

In the past, this has been a no-brainer. Not so this time: according to Lam Research’s Martin Anstice, who presented at Imec’s forum, 450-mm wafers won’t be in volume production until 2018. That’s a long time from now, and there are lots of problems to solve between now and then. And the payoff isn’t as guaranteed as it used to be.

Combine tiny dice with a gargantuan wafer, and, well, you end up with more dice than a lot of companies will need. Which could suggest that some folks might sit this transition out. And that could kill the deal: Mr. Anstice urges that the only way this can work is if everyone does it. The hesitancy suggests that 450-mm might not be the obvious next-step-in-cost-reduction that we would otherwise take for granted.


So if we can’t necessarily count on wafer size as the guarantee of lower costs, then we have to look to transistor size for help. But there’s some thrash going on here as well.

The received wisdom is that FinFETs are the next step in transistor technology, and yet cost is causing consternation here as well. Granted, some of what follows comes from Soitec, which has a stake in this race, but their contention is that the more people look at FinFETs, the more they’re stepping back, concerned about the cost of the technology. If FinFETs aren’t providing the desired economics, then where do you turn?

Well, there have been a few folks in the back of the room madly waving their hands while the teacher has refused to pick them. Because they’ve had an answer that runs counter to the received wisdom, and such answers rarely find welcoming audiences. But, given FinFET costs, Soitec says that fully-depleted SOI (FD-SOI) is getting another look by folks that might not have paid much attention a couple of years ago.

When FinFET first came onto the scene in a serious way, FD-SOI was still an option under discussion, and we looked back then at how the two compared. Here we are some years later, and the FinFET train has departed the station – and yet people are rethinking their destination. We’ve known all along that FinFETs are complex, so why is this a surprise? Why are we just now reconsidering whether FD-SOI might work after all?

According to Soitec’s Steve Longoria, when companies were charting their strategic directions, FD-SOI still had some fundamental issues to prove. Key to the technology’s success is the ability to control the thickness of the thin active layer. The tolerances are ±5 Å across a 300-mm wafer (that gets even harder on a 450-mm wafer). He compares that to trying to create a flat surface – say a giant parking lot – from San Francisco to Chicago with no more than a half-inch variation in height. (Never mind that the two cities are at different elevations… you’re being too literal.)
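Mr. Longoria’s analogy holds up to a rough check. The snippet below compares the two relative tolerances; the San Francisco–Chicago distance (roughly 3,000 km as the crow flies) is an assumption on my part, not a figure from the presentation.

```python
# Relative flatness: allowed height variation divided by span.
angstrom = 1e-10                          # meters
wafer = (5 * angstrom) / 0.300            # +/-5 Angstrom across 300 mm
parking_lot = 0.0127 / 3.0e6              # half an inch across ~3,000 km

print(f"wafer:       {wafer:.1e}")
print(f"parking lot: {parking_lot:.1e}")
```

Both come out in the low parts-per-billion, so the analogy is the right order of magnitude; the wafer spec is, if anything, slightly tighter.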

At the time when Intel decided to pursue FinFET, this thickness control hadn’t been solved yet. It has by now, however, using SmartCut technology. But, as reinforced by ST, the main commercial promoter of FD-SOI alongside GlobalFoundries, once Intel announced its FinFET, there was immediate pressure on everyone else to belly up and say, “Yeah, we’re doing that too.” And so the rush to FinFET started – and is now purportedly faltering.

The point here is that the heir-apparent transistor technology may bring higher, not lower, costs, whereas FD-SOI is cheaper. Yes, the raw wafer costs 3x what a standard silicon wafer costs, but the processing is so much simpler that you get that back by the time you’ve populated the wafer.
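A toy cost model shows how a 3x wafer premium can still come out ahead. All of the dollar figures below are illustrative assumptions of mine, not numbers from the article; only the 3x wafer-cost ratio comes from Soitec’s claim.

```python
def cost_per_die(raw_wafer_cost: float,
                 processing_cost: float,
                 dice_out: float) -> float:
    """Cost per die = (raw wafer + wafer processing) / dice per wafer."""
    return (raw_wafer_cost + processing_cost) / dice_out

# Hypothetical numbers: bulk wafer $150 with costlier FinFET processing,
# SOI wafer at 3x ($450) with simpler planar FD-SOI processing.
finfet = cost_per_die(150, 6000, 500)
fdsoi = cost_per_die(450, 5000, 500)
print(f"FinFET: ${finfet:.2f}/die, FD-SOI: ${fdsoi:.2f}/die")
```

The raw-wafer premium is a small slice of the finished-wafer cost, so if the processing saving exceeds it, FD-SOI wins per die, which is exactly the trade Soitec is describing.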

How far can FD-SOI take us? Soitec says that fins will be needed at the 10- and 7-nm nodes, although Leti and ST are working on 10-nm planar FD-SOI. Of course, that will work only if someone uses the technology now to keep the fabs working. ST says that their first ST-Ericsson die is complete and that other dice will be in volume in 2015 at the 28/20 node. (20 is generally looked at as 28-nm interconnect with FinFET transistors – a notion that gets messier if you’re not using FinFETs.)

According to Leti, the 14-nm FD-SOI design kit should be available in the third quarter of this year (which we are currently in); 10-nm models are scheduled for the first quarter of next year, with 10-nm design kits out a year from now.

By the way, just to add some 3D-ness to FD-SOI, Leti is actually looking into stacking multiple active layers on a wafer. This would involve a device/local-interconnect layer followed by an interconnect layer followed by another device/local-interconnect layer and then the standard back-end interconnect. The trick is to find a way to activate the dopants on the second layer without screwing up the first layer. Fabless companies have apparently expressed interest in this, but no pipecleaner application has yet been identified.

So one possibility is that, at least for a couple more nodes, FD-SOI might relieve some of the cost pressure. But would that take FinFET out of the picture? Clearly not for folks like Intel (or Altera or Achronix or Tabula, who will use Intel’s FinFET process). Cost aside, FinFETs have more drive for a given area, so for performance-oriented designs where that performance can command a decent price, FinFET will be the answer. It’s more in the SoC space, with its mix of circuitry, where advocates say that FD-SOI can provide more benefit (although Intel has also announced an SoC version of their FinFET technology).

There is one other transistor structure being tossed into the mix from SuVolta. But that discussion is largely about power, and we’re going to look into its progress in more detail in a future piece.


So we’ve looked at two of the cost-lowering knobs: bigger wafers and the basic transistor structure. What about beyond that? Even if FD-SOI provides some relief to some people, it will eventually run out. And this brings us to the perennial whipping boy of lithography: EUV.

We’ve talked about EUV a number of times, charting the claims of progress – some of which have been made prematurely, to the detriment of the credibility of the industry. But at past confabs it’s seemed to me that folks perceived the risk to be that EUV might miss its window of opportunity as other technologies caught up with and passed it before all the EUV problems were solved.

But this time, things sounded different. People have talked about directed self-assembly as the up-and-comer, and it’s been one of the things that might overtake EUV. But guess what: even though, thanks to multiplication, DSA might give you nice parallel lines without aggressive lithography, you still need to be able to cut those lines in order to form actual circuits. Those cuts are critical. Right now, many folks are still looking to EUV to make those cuts.

Yes, it’s possible that ebeam lithography could help here, using the CEBL concept, but Leti sees ebeam throughput at 10 wafers/hr in 2015 (with clustering getting you up to 100 w/hr). The break-even throughput for EUV is 50-60 w/hr, according to Imec. While you can’t compare break-evens without comparing the costs, it still takes a lot longer to process wafers at 10 w/hr than at 50 or 100. Leti is working with Mapper to explore this avenue, but I didn’t hear much beyond that regarding ebeam as a fallback to EUV.
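To put those throughput numbers side by side, here’s the tool time needed to expose a batch of 1,000 wafers at each rate. The throughput figures are the ones quoted above from Leti and Imec; this counts exposure time only, not tool cost.

```python
# Hours of exposure-tool time for a 1,000-wafer batch.
batch = 1000
rates = [
    ("ebeam, single column (2015)", 10),
    ("ebeam, clustered", 100),
    ("EUV break-even, low", 50),
    ("EUV break-even, high", 60),
]
for name, wafers_per_hour in rates:
    print(f"{name:28s} {batch / wafers_per_hour:6.1f} h")
```

At 10 wafers/hr, a single ebeam column needs 100 hours for what a break-even EUV tool would clear in under 20, which is why clustering is essential for ebeam to even enter the conversation.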

In fact, I really got the sense that, after EUV, there is no plan B. In the past, it felt like, if EUV didn’t work, it would be a damn shame given all the work and investment, but we’d get over it. This time it felt like “not working” isn’t really an option.

So what’s left to do on EUV? We’ve focused on the source power as the sexy aspect of EUV development in the past. Imec’s latest report on EUV is that 80 W is expected by the end of the year, with 250 W in 2015. 9-nm half-pitch lines have been demonstrated, so it’s a matter of cranking them out – as far as exposure is concerned.

But less sexy are things like resist and masks. They’re still tuning the resist to keep patterns from collapsing in the 15-nm range. Local CD uniformity challenges have also limited contacts to around 22 nm. For masks, the focus can be summarized by one word: defects. Efforts to minimize defects – measured as added defects per exposure – are ongoing.

So whether it’s wafer size, transistor structure, or fundamental lithography, cost is on everyone’s mind. It was even the central theme of the Semicon West keynote given by GlobalFoundries CEO Ajit Manocha. The important thing is, this was not a declaration of a war won against cost: it was a declaration of a war that has yet to be fought on a number of fronts. It was a summoning of troops in response to the first signs of faltering in the march from node to node. Much of the fighting is yet to come.

