
Where Do Silicon Chips Come From, Anyway?

In Which We Describe the Basics of the Entire Semiconductor Industry

“If you properly clean a room, it gets dirtier before it gets cleaner.” – Chris Rock

The recent news that GlobalFoundries is suspending its 7nm development, possibly forever, has a lot of industry observers asking, “Huh?”

And that’s one of the more eloquent and considered queries. Here at EEJ Global Nerve Center, our mailbag is filled with letters ranging from, “What are the Van Eck effects of predictive branch-cache layouts?” to “Which Android phone should I buy?” We’re nothing if not helpful, so here’s a primer on the semiconductor industry to fold up and keep in your pocket.

First off, computer chips are made with silicon, not silicone. The latter is the stuff you buy at the hardware store for fixing squeaky hinges. It’s also used for cosmetic surgery (yuck). Silicon, on the other hand, is one of Earth’s natural elements. It’s shiny and gray, sort of like wadded-up aluminum foil, though technically it’s a metalloid rather than a true metal.

We call silicon a semiconductor because it’s semi-good at conducting electricity. That’s like saying low-tar cigarettes are semi-healthy or that Bud Light is semi-beer. You know how power cords have metal wires inside? Silicon isn’t as good as the wires, but it’s better than the plastic or rubber coating on the outside. Silicon is kind of in-between: it’s halfway decent at carrying electricity, but not great. And that turns out to be exactly what we want, because by adding trace impurities (a trick called doping) we can make little patches of silicon conduct, or not, on command. Those controllable patches become transistors, the microscopic on/off switches that every chip is built from.

Turning that silicon into computer chips like the ones in your phone or your TV requires hundreds of steps. We’re gonna skip over most of those here. But for starters, chip-making companies first melt ultra-pure silicon until it’s a liquid and then slowly pull and cool it into a big, long, single-crystal cylinder, like a shiny candle. That’s called an ingot.

The candle/ingot then gets sliced into super-thin circles, like metallic bologna. These are called wafers, and they can be as small as a coin (about 1 inch across) or as big as a dinner plate (12 inches, or 300 millimeters, across). The size of the wafer depends on the diameter of the ingot you’re cutting up. Each ingot produces lots and lots of identical thin, shiny wafers.

Eventually, we’re going to cut up each round wafer into hundreds of square chips, each about the size of your fingernail. But that comes later.

Now for the hard part. We have to somehow draw, or etch, an electrical circuit diagram onto this wafer using the most complex, expensive, and tiniest writing that humankind has ever developed. It’s called lithography (Greek for “writing in stone”) and it’s really hard to do. So difficult, in fact, that only a handful of companies in the entire world are equipped to perform semiconductor lithography, and they charge good money for it. This is where it gets weird.

(If you’ve ever done old-school photography with a film camera and a darkroom, some of this process will seem familiar. If not, well, stick with us.)

Our goal is to scratch a 3D electrical circuit onto the surface of the silicon wafer by alternately building up, and then cutting away, the material we don’t want. It’s like building with LEGO bricks, or making an elaborate wedding cake. Except with more layers. And much smaller. And inedible. But you get the idea.

To do that, we first smear the wafer with a coating of mysterious chemicals that will make it sensitive to light, like photographic film. Then, we shine a special light through a “mask” that casts shadows on the chemical-coated wafer. The mask has a delicate pattern of black lines on it, like a nerdy lace doily. The light chemically changes the coating wherever it hits, and when we wash away the altered parts and etch what’s underneath, we get a pattern of tiny lines and grooves on our wafer that matches the lines on the mask. Neat!

That step gets repeated over and over, with a new layer of chemicals and a new mask with a different pattern at each step. Each layer sits on top of the one before it. Over time, we can build up a taller and taller structure of crisscrossing lines, crafting really intricate lacy patterns onto our wafer. Sometimes, instead of smearing chemicals over the wafer, we put down copper, aluminum, or even gold. These act as tiny wires running across different parts of the chip. Up close, it looks like a very elaborate high-rise building, almost like a Gothic cathedral made from silicon and copper wire. To the naked eye, though, there’s nothing much to see.  
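For the programmers in the audience, here’s a purely illustrative Python sketch of that repeating cycle. The layer names and step descriptions are ours, invented for illustration; real fabs are not, alas, driven by a Python script.

```python
# A playful, purely illustrative sketch of the repeating patterning cycle
# described above. The masks and step names are made up for illustration.

MASKS = ["transistor layer", "contact layer", "metal 1", "metal 2"]

def pattern_wafer(masks):
    """One coat-expose-develop-etch/deposit cycle per mask, stacked bottom-up."""
    for layer, mask in enumerate(masks, start=1):
        print(f"Layer {layer}: using mask '{mask}'")
        print("  1. spin on a light-sensitive photoresist coating")
        print("  2. shine light through the mask onto the resist")
        print("  3. wash away the altered resist")
        print("  4. etch grooves, or deposit metal for the tiny wires")
        print("  5. strip the leftover resist, ready for the next layer")

pattern_wafer(MASKS)
```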

But wait a minute… the silicon wafer is, like, a foot across, but these chip designs are tiny. What gives? Ah, well, each chip design is repeated hundreds of times across the entire wafer, like the squares on a chessboard. Your average wafer might hold about 200–400 identical copies of the same chip design, arranged in straight rows and columns. Once the coating and etching process is all done, each tiny square chip is cut apart from its neighbors using a delicate saw in a process called dicing. Each separate chip is called a die, and the plural of that is dice.
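If you want to estimate that die count yourself, there’s a common back-of-the-envelope approximation: divide the wafer’s area by the die’s area, then subtract a correction for the partial dice wasted around the round edge. Here’s a quick Python version; the die sizes are just example numbers, not any particular product.

```python
import math

def gross_dice_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Common approximation: wafer area / die area, minus an edge-loss term."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Example die sizes on a 300 mm wafer (ignores scribe lines and defects):
print(gross_dice_per_wafer(300, 200))  # ~306 biggish (~14 mm x 14 mm) dice
print(gross_dice_per_wafer(300, 100))  # ~640 smaller (10 mm x 10 mm) dice
```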

All those tiny lines, grooves, and wires on each die are far too small to see with the naked eye. In fact, they’re even too small to see with an ordinary optical microscope. How is that possible? The designs we’ve etched into our dice are so fantastically small, in fact, that normal light can’t even illuminate them; they’re literally invisible, because they’re smaller than the wavelength of visible light. (We told you it would get weird.)

We measure the thickness of these tiny lines in units of a nanometer (nm), which is a billionth of a meter. In US measurements, there are 25,400,000 nm to the inch, so 1nm is about 0.000000040 of an inch. Yikes! A human hair is about 75,000 nm thick, tens of thousands of times thicker than that. Particles of dust are like boulders. Even germs and RNA molecules are bigger than a nanometer.
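If you don’t trust our arithmetic, here it is in a few lines of Python. (The 75,000nm hair width is a typical round number we’re assuming for the comparison, not a precise measurement.)

```python
NM_PER_INCH = 25_400_000        # 25.4 mm per inch, times 1,000,000 nm per mm

print(1 / NM_PER_INCH)          # ~3.9e-08, i.e. about 0.000000040 of an inch
hair_nm = 75_000                # assumed thickness of a typical human hair
print(hair_nm / 7)              # a hair is roughly 10,000x wider than a 7nm feature
```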

Ever since the 1950s, we’ve learned how to make these features smaller and smaller, a path of constant shrinking known as Moore’s Law. Right now, the smallest silicon features we can make are in the neighborhood of 7nm across (the “7nm” label is as much marketing name as measurement, but the sizes really are in that ballpark). That’s the current state of the art, but it’s extremely difficult to do reliably, which also makes it expensive. See where we’re going with this?
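To picture that shrinking treadmill, note that each new generation has traditionally scaled linear dimensions by roughly 0.7×, which halves the area a given feature occupies. The little Python loop below uses that idealized 0.7× factor as an assumption; real node names (28, 20, 14, 10, 7) only loosely track physical sizes, so treat this as a cartoon, not a datasheet.

```python
# Idealized node-to-node shrink: linear dimensions scale by ~0.7x per
# generation, so feature area roughly halves each time. Illustration only.
feature_nm = 28.0
for generation in range(5):
    print(f"{feature_nm:5.1f} nm  ->  relative feature area {0.5 ** generation:.3f}")
    feature_nm *= 0.7
```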

Little Chips, Big Money

The lines and wires are so small and so delicate that we have to be pathologically careful about cleanliness and contamination. There’s a reason the inside of a chip-making factory (normally called a “fab”) is known as a clean room. You’ve seen photos of clean rooms before; they’re the ones where the workers inside are wearing spacesuits or “bunny suits.” A clean room is far more sanitary than any hospital operating room. And a lot bigger, too.

It’s tough to construct a building that’s large enough to hold hundreds of workers and lots of big, expensive equipment, but also completely sanitary inside. You need special air conditioners, the floor has holes in it (so that dust doesn’t settle anywhere), and there are airlocks on all the doors like a spaceship. The workers pass through an “air shower” on their way in, and they walk across sticky floor mats to remove any dust from the bottoms of their already clean booties. Even your average industrial factory assembly line is big and expensive to maintain. A semiconductor fab is vastly more so. Figure about $10 billion to $15 billion to build a modern, state-of-the-art fab. Who’s got that kind of money?

Worse yet, that big expensive fab depreciates very rapidly, so you don’t have much time to pay it off. We all know that chip-making technology moves forward all the time. That means today’s awesome new fab is tomorrow’s white elephant. You’ve got to somehow pay off that $10 billion investment in just a few years before your fab is obsolete. Ouch. That’s worse than investing in VHS recorders.

Like any good tech startup, fabs make it up in volume. The more chips you make and sell, the more you can spread the burden of cost across each chip. If you make a million chips a week, each one bears a smaller cost burden than if you made only a thousand chips a week. Thus, fab companies are single-mindedly focused on keeping their production lines running at full capacity. Every chip out the door chips away at the cost overhead of the fab.
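A quick back-of-the-envelope calculation shows why volume matters so much. Assume, purely for illustration, a $10 billion fab that has to pay for itself over four years of production:

```python
FAB_COST = 10e9          # dollars -- the article's rough figure for a modern fab
YEARS_TO_PAY_OFF = 4     # assumed useful life before the fab is obsolete

def fab_overhead_per_chip(chips_per_week):
    """Spread the fab's capital cost evenly across every chip it produces."""
    total_chips = chips_per_week * 52 * YEARS_TO_PAY_OFF
    return FAB_COST / total_chips

print(f"${fab_overhead_per_chip(1_000_000):,.2f} per chip at a million chips/week")  # ~$48
print(f"${fab_overhead_per_chip(1_000):,.2f} per chip at a thousand chips/week")     # ~$48,000
```

Same fab, same cost; the only difference is how many chips share the bill.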

Not everyone can keep up that pace. One by one, chipmaking companies have given up on trying to run their own fabs and have instead farmed out the work to independent third parties, called foundries. The biggest foundry, TSMC (Taiwan Semiconductor Manufacturing Company), has almost 50% of the foundry market, but that doesn’t mean it makes half of the world’s chips. A few big holdouts, like Intel and Samsung, still run their own fabs and produce their own chips in-house, so they don’t need outside foundries (Samsung even rents out spare capacity as a foundry of its own). But most companies, called “fabless” chipmakers for obvious reasons, do rely on TSMC and other foundries for all their manufacturing.

Little by little, the foundry business is consolidating as fewer and fewer companies decide it’s worthwhile to shoulder that multibillion-dollar burden. Just last week, the #2 independent, GlobalFoundries (“GloFo” to its friends) decided to throw in the towel. It will continue to make chips – that part hasn’t changed – but it won’t keep upgrading its equipment on the technology treadmill. Instead, GloFo will continue to make chips using its current 14nm technology, which will, presumably, get older and less attractive as time goes on.

But older also means cheaper. As GloFo and other similar companies choose to let their fabs age gracefully, they will also amortize the cost over more time and more chips. That makes older fabs more cost-effective to run. For a customer who doesn’t necessarily need their chips produced with the absolute latest cutting-edge semiconductor manufacturing technology, that’s a great deal. We can’t all be commuting to work in Ferraris. Sometimes a good second-hand Honda Civic will do the job.

Next week, we’ll explain how to play the flute and how to rid the world of all known diseases. See you then.
