
Science Fiction or Black Art?

Once upon a time, and a long time ago it was, a company in what was struggling to be known as Silicon Valley (Germanium Gulch never really caught on) had an order for a discontinued product. The customer was pressing, so the company ran a batch of wafers (probably one-inch wafers), but none of the end product worked. A second batch was also dead on arrival. After a great deal of head-scratching, someone remembered that, since the last successful production run, clean-room staff had started wearing gloves. A third batch went through the line without gloves, and enough of the product worked to meet the customer’s needs. One possible explanation was that sodium chloride from the operators’ hands (even though the wafers were handled with tweezers) created just enough doping at some stage to tip the process into yielding.

In those days, process development was one part technology and several parts black art. Things are different today, of course. But, if Future Horizons’ CTO, Mike Bryant, is correct, we may soon be heading into even weirder realms of the unknown.

For at least thirty years, people have been predicting that the next process node will be the one where silicon-based processes run out of steam. And for thirty years, something has always turned up to make it possible to continue. Today, a number of trends have combined to make the threat seem more credible.

The first barrier is cost, for both process development and product design. Bryant claims that the cost of developing a process node is inversely proportional to the node length: “This means that the cost of developing a 22-nm process will be twice that of 45-nm, which in turn costs twice as much as the 90-nm node.” This is a huge load, even for the big boys, and it is the reason for the growth of industry partnerships, such as that between IBM, Chartered, and Samsung, and of collaborative R&D, like that at IMEC.

Alongside this, product development costs can be close to proportional to the transistor density – doubling the density increases design costs by about 1.9 times, according to Bryant.

But, while costs are rising, the revenue earned per square centimetre of silicon is only around $8 to $9, and that figure is gradually declining: not an entirely healthy situation.
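Bryant’s two scaling claims can be turned into a toy cost model. The sketch below is only an illustration of the proportionalities quoted above (the function names and the 90-nm baseline are mine, not Bryant’s):

```python
import math

BASE_NODE_NM = 90  # take the 90-nm node as the cost baseline

def process_dev_cost(node_nm, base=BASE_NODE_NM):
    """Relative process-development cost, assumed inversely
    proportional to the node length, as Bryant claims."""
    return base / node_nm

def design_cost(density_factor):
    """Relative product-design cost. Doubling transistor density
    multiplies design cost by ~1.9, i.e. cost ~ density ** log2(1.9)."""
    return density_factor ** math.log2(1.9)

for node in (90, 45, 22):
    print(f"{node}-nm process development: {process_dev_cost(node):.2f}x")
# 90 -> 1.00x, 45 -> 2.00x, 22 -> ~4.09x: "twice, then twice again"
```

The model reproduces the quoted numbers: 45 nm costs twice 90 nm, 22 nm roughly twice 45 nm, and a density doubling raises design cost by the stated factor of 1.9.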

Things are getting worse. 22 nm is close to production, so we should be seeing significant progress on the next node – 16 nm. But Bryant claims that, “There is still no reliable transistor yet for 16 nm.” He also feels that, at this level, classical physics is beginning to break down and we are in the world of quantum mechanics. There are some wonderful quotations from physicists about quantum mechanics, but Richard Feynman is, as so often, the most succinct. In 1965 he said, “I think I can safely say that nobody understands quantum mechanics.” While today there may be academics who understand quantum mechanics, it is a safe bet that there are not many working in process development and transistor design who share that understanding.

Just to add to the fun is the debate over moving to 450-mm wafers. These 18-inch wafers are big beasts, and few manufacturers beyond Intel, Samsung and TSMC have said they want to move to them; most say they definitely don’t. This is not surprising given the costs involved. Bryant estimates that developing the processing equipment will cost in excess of $25 billion, while semiconductor equipment suppliers earn only around $20 billion a year in total revenue: developing for 450 mm alone would gobble up the industry’s R&D investment for years. So who will pay for it? The answer is not clear. There ain’t no Santa Claus, and fairy godmothers are thin on the ground.

Assuming the equipment does eventually get developed, the bill for a single fab is likely to come in at around $8 billion. Who can afford this? Well clearly Intel, Samsung and TSMC think they can. So could this result in just one processor company, one memory company and one foundry?
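The attraction of 450 mm, and the pain of paying for it, both come down to simple arithmetic. A back-of-envelope sketch, using standard wafer diameters plus the dollar figures quoted above (Bryant’s estimates, not independent data):

```python
import math

def wafer_area_mm2(diameter_mm):
    """Usable-area proxy: plain circle area for a given wafer diameter."""
    return math.pi * (diameter_mm / 2) ** 2

# The draw: a 450-mm wafer has (450/300)^2 = 2.25x the area of a 300-mm one,
# so roughly 2.25x as many dice per wafer pass through the fab.
area_gain = wafer_area_mm2(450) / wafer_area_mm2(300)

# The pain: Bryant's ~$25bn tooling-development bill, set against the
# equipment industry's ~$20bn in annual revenue (not just its R&D budget).
tooling_cost_bn = 25
equip_revenue_bn_per_yr = 20
years_of_total_revenue = tooling_cost_bn / equip_revenue_bn_per_yr

print(f"area gain: {area_gain:.2f}x, bill = {years_of_total_revenue} years of revenue")
# area gain: 2.25x, bill = 1.25 years of revenue
```

A bill equal to well over a year of the suppliers’ entire revenue, spread across several years of actual development, is why “who will pay?” has no easy answer.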

If the estimates for the timing of 450 mm are accurate, then the first fabs should be coming on stream in 2015, just as 16-nm processing moves into volume. Let the introduction slip a couple of years, and you will be bringing up the new equipment alongside the introduction of 11-nm processes.

Bryant thinks this timing may be better than it looks, since at 11 nm, silicon-based CMOS as we know it must change. What that change will be is unclear, but it will have to be one of the biggest in semiconductor manufacturing history. And some of the options are likely to bring black-art skills back to the clean room.

Among the options is the hardy perennial of III-V materials. These have been on the edge of a breakthrough for almost as long as silicon has been about to die, but they are not easily compatible with existing process technology and will probably remain in the “specialist” niche.

One intriguing possibility is a strontium-germanium interlayer, sitting between a high-k insulator and a germanium channel. This could extend CMOS through more process nodes, but it requires up to ten more processing steps.

So how about carbon-based life-forms? Sorry, carbon-based transistors. Carbon is another material that has been considered for semiconductors for a long time, with diamond-based approaches as a possible route. According to Bryant, graphene, a one-atom-thick layer of carbon atoms arranged in a regular honeycomb lattice, has been successfully grown across a six-inch (150-mm) wafer. Graphene has some very interesting electrical properties. Strictly, pristine graphene is a zero-bandgap semimetal (a bandgap has to be engineered in before it behaves as a useful semiconductor), but it has very high electron mobility at room temperature. Transistors of graphene could go down to the molecular level, and a ten-atom-wide transistor has already been demonstrated. Another breakthrough has been the creation of conducting tracks in the graphene.

Carbon nano-tubes are another route to transistors. They are cylindrical carbon molecules that have already shown promise as transistors and memory circuits. Structurally, they can be thought of as cylinders of graphene, with a length many millions of times longer than the diameter. They are difficult to build with consistent behaviour, but one possible approach is to build large networks and then average the electrical behaviour.

Carbon-based benzene rings seem to Bryant to offer the smallest reliable and usable switching block. But they are not the end of Moore’s law. Instead, he sees three-dimensional structures on the die, with today’s single layer of transistors replaced by multiple layers. This is not a new concept, but previous attempts have struggled to deposit later layers over the fairly large vertical steps created by the earlier ones. This will be less of a problem as layers become thinner, approaching single-molecule thicknesses. Multiple layers will, however, require many more processing stages (and much longer processing times: possibly many weeks in the fab).

There are knock-on effects from these developments. (Aren’t there always?) The devices will be able to support “true” systems on chip. But since the number of faults in the finished device is going to continue to be related to the number of processing steps, the designs will have to be able to work around faults. This could be accomplished by creating designs that are fault tolerant or by using re-configurability, neither of which is easy. A second consequence will be larger die, simply to provide enough space for the pin-out these enormously connected systems are likely to need.
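The link between processing steps and faults can be made concrete with a standard multiplicative yield model. This is a hypothetical illustration, not a figure from Bryant: assume each step independently kills a given die with some small probability, so die yield falls geometrically with the step count.

```python
def die_yield(steps, defect_prob_per_step):
    """Die yield when each processing step independently introduces a
    killer defect with the given probability: (1 - p) ** n."""
    return (1 - defect_prob_per_step) ** steps

# With an illustrative 0.1% kill rate per step, doubling the step count
# (say, stacking a second transistor layer) squares the yield:
single_layer = die_yield(500, 0.001)    # ~0.61
double_layer = die_yield(1000, 0.001)   # ~0.37
print(f"500 steps: {single_layer:.2f}, 1000 steps: {double_layer:.2f}")
```

At multi-layer step counts the raw yield collapses, which is exactly why the article argues the designs themselves will have to tolerate or reconfigure around faults rather than relying on every transistor working.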

But there are other routes. Back to our quantum mechanics: instead of presenting problems, the quantum effects might be harnessed to create logic elements. Some of this is already being attempted with quantum computing, but that is still in its very early stages and is limited to specific classes of applications, particularly massively parallel problems like code breaking.

Finally, there is a solution that has been discussed for some time: biological techniques. Nanobiotechnology is looking promising and should, in theory, be able to attack whole new categories of problems. Biological systems are likely to be bigger and slower than silicon, but they should be capable of building very large and very low-power systems.

Will the future be biological, carbon-based or III-V? Or will we find ways of continuing to push silicon to reach just that next node again and again? The situation is complicated by an industry that Future Horizons’ Malcolm Penn describes as “not behaving rationally.” For him the industry model is broken. The relationship between equipment suppliers and the chip manufacturers is broken, as is that between the chip manufacturers and their customers. In part because of the cyclical nature of the industry, all along the line, relationships are based on conflict, not co-operation. When capacity is tight, suppliers trade off customers against each other – at every point in the supply chain. When there is excess capacity, customers retaliate. Suppliers also compete irrationally with each other – pricing aggressively to buy market share, whatever the consequences.

There have been, and are, attempts to cooperate. Crolles, in France, is a fab shared by a number of Integrated Device Manufacturers, in an attempt to get the benefits of using foundries while retaining control of the entire process. This has had its ups and downs, mostly downs.

IMEC in Belgium is a success in getting fab operators and equipment suppliers to work together. And there are other examples around the world. But, in total, these are still not working at the scale needed to provide an efficient resolution of the technology challenges the industry faces. Yet somehow the industry will muddle through, will make the investments needed, and will continue to ship the chips that change the world we live in.

