Depending on whom you talk to, Moore’s Law says that something doubles every 18-24 months. It might be speed or number of transistors. But these days, as physics raises its ugly head, performance is taking a back seat to transistor density. And yet there’s a third parameter that has always been considered to be a proxy for density: cost.
The assumption has always been that the more transistors you can put in a small space, the cheaper everything is going to be. And that’s been a good assumption for a long time. Either you get more dice per wafer, improving the economies of scale, or, by packing more transistors into the same space, you can do more for the same price. Think of the microcontroller you can get now for the same price that quad NAND gates once commanded.
But that link between process node progress and improved cost is breaking down. To see the evidence of that, you need look no further than Imec’s recent Technology Forum and Semicon West. Presentation after presentation cautioned that cost is an issue, and a variety of offline discussions reinforced the points.
The two knobs we have for lowering die cost have traditionally been wafer size and transistor size. Right now, steps are underway to move to 450-mm wafers, but these steps are progressing haltingly. There’s a lot of work and a lot of infrastructure that accompanies each change of wafer size, and either new floorspace must be allocated in a fab for new equipment or old space must be re-allocated by getting rid of old equipment. (Or new equipment can accommodate multiple wafer sizes…)
In the past, this has been a no-brainer. Not so this time: according to Lam Research’s Martin Anstice, who presented at Imec’s forum, 450-mm wafers won’t be in volume production until 2018. That’s a long time from now, and there are lots of problems to solve between now and then. And the payoff isn’t as guaranteed as it used to be.
Combine tiny dice with a gargantuan wafer, and, well, you end up with more dice than a lot of companies will need. Which could suggest that some folks might sit this transition out. And that could kill the deal: Mr. Anstice urges that the only way this can work is if everyone does it. The hesitancy suggests that 450-mm might not be the obvious next-step-in-cost-reduction that we would otherwise take for granted.
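To put rough numbers on the “more dice than you need” point, here’s a back-of-the-envelope sketch using the common gross-die-per-wafer approximation. The 50-mm² die size is an illustrative assumption (not from any presentation), and yield is ignored:

```python
import math

def gross_dice_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Common gross-die approximation: wafer area over die area,
    minus an edge-loss term proportional to the wafer circumference."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

die_area = 50.0  # mm^2 -- illustrative assumption
d300 = gross_dice_per_wafer(300, die_area)
d450 = gross_dice_per_wafer(450, die_area)
print(d300, d450, d450 / d300)
```

With these assumptions, the 450-mm wafer delivers well over twice the dice of a 300-mm wafer (slightly better than the 2.25x area ratio, since edge loss matters proportionally less on the bigger wafer) – a lot of extra output for a company whose volumes haven’t grown to match.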
So if we can’t necessarily count on wafer size as the guarantee of lower costs, then we have to look to transistor size for help. But there’s some thrash going on here as well.
The received wisdom is that FinFETs are the next step in transistor technology, and yet cost is causing consternation here as well. Granted, some of what follows comes from Soitec, which has a stake in this race, but their contention is that the more people look at FinFETs, the more they’re stepping back, concerned about the cost of the technology. If FinFETs aren’t providing the desired economics, then where do you turn?
Well, there have been a few folks in the back of the room madly waving their hands while the teacher has refused to pick them. Because they’ve had an answer that runs counter to the received wisdom, and such answers rarely find welcoming audiences. But, given FinFET costs, Soitec says that fully-depleted SOI (FD-SOI) is getting another look by folks that might not have paid much attention a couple of years ago.
When FinFET first came onto the scene in a serious way, FD-SOI was still an option under discussion, and we looked back then at how the two compared. Here we are some years later, and the train for FinFET has departed the station – and yet people are rethinking their destination. We’ve known all along that FinFETs are complex, so why is this a surprise? Why are we just now reconsidering whether FD-SOI might work after all?
According to Soitec’s Steve Longoria, when companies were charting their strategic directions, FD-SOI still had some fundamental issues to prove. Key to the technology’s success is the ability to control the thickness of the thin active layer. The tolerances are ±5 Å across a 300-mm wafer (that gets even harder on a 450-mm wafer). He compares that to trying to create a flat surface – say a giant parking lot – from San Francisco to Chicago with no more than a half-inch variation in height. (Never mind that the two cities are at different elevations… you’re being too literal.)
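As a sanity check on that analogy (taking the San Francisco-Chicago driving distance as roughly 1,860 miles – an assumed figure for illustration), the two relative-flatness ratios really are in the same ballpark:

```python
# Relative flatness: allowed height variation divided by span, in meters.
angstrom = 1e-10
wafer_ratio = (5 * angstrom) / 0.3           # +/-5 A across a 300-mm wafer
parking_ratio = 0.0127 / (1860 * 1609.34)    # half an inch over ~1,860 miles (assumed distance)
print(wafer_ratio, parking_ratio)
# Both work out to a few parts per billion; the wafer spec is
# actually the tighter of the two by a factor of two or three.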
At the time when Intel decided to pursue FinFET, this thickness control hadn’t been solved yet. It has by now, however, using SmartCut technology. But, as reinforced by ST, the main commercial promoter of FD-SOI alongside GlobalFoundries, once Intel announced its FinFET, there was immediate pressure on everyone else to belly up and say, “Yeah, we’re doing that too.” And so the rush to FinFET started – and is now purportedly faltering.
The point here is that the heir-apparent transistor technology may bring higher, not lower, costs, whereas FD-SOI is cheaper. Yes, the raw wafer costs 3x what a standard silicon wafer costs, but the processing is so much simpler that you get that back by the time you’ve populated the wafer.
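That “you get it back” argument is simple arithmetic, sketched below with entirely hypothetical dollar figures (the article supplies only the 3x substrate ratio; every other number here is invented for illustration):

```python
# Hypothetical processed-wafer cost comparison. Only the 3x substrate
# premium comes from the article; all dollar values are assumptions.
bulk_substrate = 120.0               # assumed bulk silicon wafer cost, $
soi_substrate = 3 * bulk_substrate   # the 3x premium Soitec acknowledges

finfet_processing = 4000.0           # assumed processing cost, FinFET flow
fdsoi_processing = 3650.0            # assumed: fewer steps in the FD-SOI flow

finfet_total = bulk_substrate + finfet_processing
fdsoi_total = soi_substrate + fdsoi_processing
print(finfet_total, fdsoi_total)
# The $240 substrate premium is recovered as long as the processing
# savings exceed it -- which is exactly Soitec's claim.
```

The break-even condition is just that the processing savings exceed the substrate premium; the debate is over whether real fab numbers satisfy it.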
How far can FD-SOI take us? Soitec says that fins will be needed at the 10- and 7-nm nodes, although Leti and ST are working on 10-nm planar FD-SOI. Of course, that will work only if someone uses the technology now to keep the fabs working. ST says that their first ST-Ericsson die is complete and that other dice will be in volume in 2015 at the 28/20 node. (20 is generally looked at as 28-nm interconnect with FinFET transistors – a notion that gets messier if you’re not using FinFETs.)
According to Leti, the 14-nm FD-SOI design kit should be available in the third quarter of this year (which we are currently in); 10-nm models are scheduled for the first quarter of next year, with 10-nm design kits out a year from now.
By the way, just to add some 3D-ness to FD-SOI, Leti is actually looking into stacking multiple active layers on a wafer. This would involve a device/local-interconnect layer followed by an interconnect layer followed by another device/local-interconnect layer and then the standard back-end interconnect. The trick is to find a way to activate the dopants on the second layer without screwing up the first layer. Fabless companies have apparently expressed interest in this, but no pipecleaner application has yet been identified.
So one possibility is that, at least for a couple more nodes, FD-SOI might relieve some of the cost pressure. But would that take FinFET out of the picture? Clearly not for folks like Intel (or Altera or Achronix or Tabula, who will use Intel’s FinFET process). Cost aside, FinFETs have more drive for a given area, so for performance-oriented designs where that performance can command a decent price, FinFET will be the answer. It’s more in the SoC space, with its mix of circuitry, where advocates say that FD-SOI can provide more benefit (although Intel has also announced an SoC version of their FinFET technology).
So we’ve looked at two of the cost-lowering knobs: bigger wafers and the basic transistor structure. What about beyond that? Even if FD-SOI provides some relief to some people, it will eventually run out. And this brings us to the perennial whipping boy of lithography: EUV.
We’ve talked about EUV a number of times, charting the claims of progress – some of which have been made prematurely, to the detriment of the credibility of the industry. But at past confabs it’s seemed to me that folks perceived the risk to be that EUV might miss its window of opportunity as other technologies caught up with and passed it before all the EUV problems were solved.
But this time, things sounded different. People have talked about directed self-assembly (DSA) as the up-and-comer, one of the technologies that might overtake EUV. But guess what: even though, thanks to density multiplication, DSA might give you nice parallel lines without aggressive lithography, you still need to be able to cut those lines in order to form actual circuits. Those cuts are critical. Right now, many folks are still looking to EUV to make those cuts.
Yes, it’s possible that ebeam lithography could help here, using the CEBL concept, but Leti sees ebeam throughput at 10 wafers/hr in 2015 (with clustering getting you up to 100 w/hr). The break-even throughput for EUV is 50-60 w/hr, according to Imec. While you can’t compare break-evens without comparing the costs, it still takes a lot longer to process wafers at 10 w/hr than at 50 or 100. Leti is working with Mapper to explore this avenue, but I didn’t hear much beyond that regarding ebeam as a fallback to EUV.
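The throughput gap is easy to see with the numbers quoted above (using an illustrative 1,000-wafer lot and ignoring tool cost, overhead, and availability):

```python
def hours_for_lot(wafers, wafers_per_hour):
    """Raw exposure time for a lot; overhead and tool cost ignored."""
    return wafers / wafers_per_hour

lot = 1000  # illustrative wafer count
ebeam_single = hours_for_lot(lot, 10)    # Leti's 2015 single-column projection
ebeam_cluster = hours_for_lot(lot, 100)  # with clustering
euv_breakeven = hours_for_lot(lot, 50)   # low end of Imec's break-even range
print(ebeam_single, ebeam_cluster, euv_breakeven)
# 100, 10, and 20 hours respectively: a single ebeam column runs 5x
# slower than break-even EUV, though a cluster could in principle pull ahead.
```

Of course, as noted, raw hours aren’t the whole story – a cluster of ebeam columns and an EUV scanner carry very different price tags – but the 10-w/hr figure makes the scale of the challenge clear.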
In fact, I really got the sense that, after EUV, there is no plan B. In the past, it’s felt like, if EUV didn’t work, then it will be a damn shame given all the work and investment, but we’d get over it. This time it felt like “not working” isn’t really an option.
So what’s left to do on EUV? We’ve focused on the source power as the sexy aspect of EUV development in the past. Imec’s latest report on EUV is that 80 W is expected by the end of the year, with 250 W in 2015. 9-nm half-pitch lines have been demonstrated, so it’s a matter of cranking them out – as far as exposure is concerned.
But less sexy are things like resist and masks. They’re still tuning the resist to keep patterns from collapsing in the 15-nm range. Local CD uniformity challenges have also limited contacts to around 22 nm. For masks, the focus can be summarized by one word: defects. Efforts to minimize defects – measured as added defects per exposure – are ongoing.
So whether it’s wafer size, transistor structure, or fundamental lithography, cost is on everyone’s mind. It was even the central theme of the Semicon West keynote given by GlobalFoundries CEO Ajit Manocha. The important thing is, this was not a declaration of a war won against cost: it was a declaration of a war that has yet to be fought on a number of fronts. It was a summoning of troops at the first signs of faltering in the march from node to node. Much of the fighting is yet to come.