It’s hard to tell if it’s a businessperson’s dream or nightmare market. Imagine a large market, a huge market, a market that spans the world. A market that starts with a device and spawns an entire ecosystem of tools, services, and accessories. A market where the technology starts out complex and just gets harder from there, making it difficult to enter but helping to keep interlopers out. A market where competition is fierce and windows of opportunity are tiny. And where the life of a given product is short and must be immediately followed up with the next version. Where prices need to be low, performance high, and battery life long. And where more types of technology need to play nicely together in a smaller space than in any other application. And where the device itself has to play nicely with its neighbors, like the TV, radio, or cockpit controls.
In case it’s not obvious, we’re talking about cellphones here. The requirements for participating in this business can be grueling, but the potential payoffs just can’t be ignored. Occasionally something sexy like the iPhone will come along and everyone wants to be a part of it – and everyone wants to be seen as being a part of it – except that everyone that is a part of it can’t say they’re a part of it – so someone else has to tear the damn thing apart to see who’s a part of it, with those that are a part of it anxiously awaiting such analysis, hoping to be named without having violated their oaths of silence. This goes even further up the supply chain since the maker of said phone, in addition to controlling what their suppliers can say, controls – at least in some countries <ahem> – who your service provider will be… and that provider will then take credit for all the iPhone features in their advertising, as if they would be available on any phone through that provider… which must be a terrible disappointment for those people buying a non-iPhone and finding out that the ads really were talking only about the iPhone, even though they never mentioned it and featured some other phone at the end of the ad… But I digress…
There are lots of applications for IC technology out there, and we’ve all seen how analog functionality is getting much more attention than ever from tools providers as the real world moves to cohabitate with the digital world on the same silicon. But there’s one thing that distinguishes cellphones from most other applications, uniting them with only a few others like Bluetooth and WiFi and their ilk: the radio. It’s the radio that lets so many things talk to so many other things wirelessly. It’s the “less” in wireless. It’s what takes what should otherwise be more or less random electromagnetic buzzings throughout our ether and unifies them into myriad simultaneous messages much the way a conductor takes various random instruments, tunes them up, and coaxes them into creating coherence out of cacophony.
Yes, radio is analog, but it’s a different analog. It’s one thing to have analog provide continuous signaling ranges or to amplify such signals. But in those cases, you really work hard to have as few components as possible do as restricted an amount of work as possible, making sure that all the transistors operate in a narrow range where they can be good citizens and not make a lot of noise that will upset everything else.
Radio, by definition, will not be quiet. Radio is intended to broadcast loudly to all that will listen, like Ethel Merman bursting into the library the night before finals. So the question is, will Ethel sing the right songs? Will they be in tune? Will the volume be enough but not too loud? Will everyone be able to finish studying while Ethel parades around? Do you really want Ethel, or are you actually looking for Olivia Newton-John or Michael Franks?
While we’re used to seeing the usual suspects in chip design, there’s a less obvious suspect that takes on the challenges of training Ethel how to perform on-chip: Agilent, through their EEsof family of tools. These tools provide a wide range of support for both system and silicon design and analysis. They can take designers from the initial architecture planning steps through prototype verification, dealing with the abstract, with silicon devices, with silicon circuits, and with PC boards. The kinds of analysis and testing required to validate one of these systems would be enough to send a standard digital designer running off to barista school.
Simulation support for non-microwave radio silicon design is handled by their Golden Gate tool, which can tie into Cadence’s Virtuoso flow. The focus is not so much on duplicating the things that Cadence can already do, but is rather on proving out specific radio characteristics given the specific silicon parameters of a specific circuit on a specific process. Their literature makes reference to some of the kinds of analysis available: DC, AC (OK, we can handle that), Transient, Multi-tone, Envelope, Convolution, Harmonic Balance, Large Signal Stability (recently released)… (Um… at this point I’m feeling more comfortable with, “will that be a Tall, Grande, or Venti?” Much less confusing to operate in an environment where a small is called a “tall”… Orwell would be proud… but I digress…).
Exotic analysis aside, these circuits must eventually be manufactured and sold, and, as with any other circuit, you can make money only if you make lots of good chips and not so many bad chips that have to be thrown away. The increasing complexity of these radio chips (think quad-band, for example) drives up the number of degrees of freedom that nature can exercise to test whether or not we’re serious about being in the RFIC business. So we need tools to help assess whether, given a range of process parameters, we’re going to be able to make any money.
Today this is done in two ways: by doing full simulation of corner cases and by doing Monte Carlo analysis. Corner simulation gives us an accurate picture of how the chip will operate at the extremes but shows behavior only at the specific points chosen. There’s no guarantee that those are actually the worst points. Statistical process design kits (PDKs) are available today but aren’t used that much because it takes too long to do all the brute-force simulations. Which is where the Monte Carlo analysis comes in, giving you a sampling of data across a range of process points, hopefully illustrating whether or not you have any yield issues.
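The Monte Carlo idea itself is simple enough to sketch. The following is a minimal illustration, not anything resembling Agilent’s implementation: the `simulate` function below is a hypothetical stand-in for a real circuit simulator (a toy linear gain model with made-up process parameters and sensitivities), and the parameter distributions are invented for the example. The point is just the shape of the computation: sample process parameters from their distributions, “simulate” each sample, and count how many meet spec.

```python
import random

def monte_carlo_yield(simulate, param_dists, spec_ok, n_trials=1000, seed=0):
    """Estimate yield by sampling process parameters and checking specs.

    simulate:    maps a dict of parameter values to a dict of circuit metrics
    param_dists: name -> (mean, sigma) for normally distributed parameters
    spec_ok:     predicate returning True if the metrics meet all specs
    """
    rng = random.Random(seed)
    passes = 0
    for _ in range(n_trials):
        # Draw one "process point" -- one value per parameter.
        sample = {name: rng.gauss(mu, sigma)
                  for name, (mu, sigma) in param_dists.items()}
        if spec_ok(simulate(sample)):
            passes += 1
    return passes / n_trials

# Hypothetical stand-in for a circuit simulator: gain in dB depends
# linearly on two process parameters (coefficients are invented).
def simulate(p):
    return {"gain_db": 15.0 + 400.0 * (p["gm"] - 0.01) - 20.0 * (p["vth"] - 0.4)}

dists = {"gm": (0.01, 0.001), "vth": (0.4, 0.02)}
yield_est = monte_carlo_yield(simulate, dists, lambda m: m["gain_db"] >= 14.0)
```

Note what the result does and doesn’t tell you: `yield_est` says roughly how many chips pass, but a failing sample varies every parameter at once, so nothing here points a finger at which parameter is dragging yield down.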
There’s one catch: while you can see if you have a problem with Monte Carlo, you can’t tell what the problem is. So with the latest release of Golden Gate, Agilent has included a Yield Contribution Analysis capability that provides results similar to Monte Carlo while providing information on those parameters that are contributing to yield losses. They do this through a single simulation, performing small-signal statistical analysis, which fits the center (1 σ) of the distribution.
The accuracy tails off as you approach the process corners, but, in exchange, they claim that it’s 50x faster (that’s “x”, not “%”) than even Monte Carlo. So the intent is that this helps to guide optimization of the circuits during design. Towards the end of development, Monte Carlo can be used as a sanity check, with corner simulations done for sign-off.
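The attribution part can also be sketched in miniature. Again, this is a toy illustration of the general small-signal idea, not Agilent’s actual method: take finite-difference sensitivities at the nominal process point, scale each by that parameter’s sigma, and apportion the output variance among the parameters. The `simulate` model and distributions below are the same invented stand-ins as before.

```python
def yield_contributions(simulate, param_dists, metric, rel_step=1e-3):
    """Approximate each parameter's share of output variance using
    first-order (small-signal) sensitivities at the nominal point."""
    nominal = {name: mu for name, (mu, _) in param_dists.items()}
    f0 = simulate(nominal)[metric]
    var_terms = {}
    for name, (mu, sigma) in param_dists.items():
        step = abs(mu) * rel_step or rel_step
        perturbed = dict(nominal)
        perturbed[name] = mu + step
        # Sensitivity of the metric to this parameter, by finite difference.
        sens = (simulate(perturbed)[metric] - f0) / step
        # First-order variance contribution: (sensitivity * sigma)^2.
        var_terms[name] = (sens * sigma) ** 2
    total = sum(var_terms.values()) or 1.0
    return {name: v / total for name, v in var_terms.items()}

# Same hypothetical "simulator" as before: an invented linear gain model.
def simulate(p):
    return {"gain_db": 15.0 + 400.0 * (p["gm"] - 0.01) - 20.0 * (p["vth"] - 0.4)}

dists = {"gm": (0.01, 0.001), "vth": (0.4, 0.02)}
contributions = yield_contributions(simulate, dists, "gain_db")
```

One nominal simulation plus one perturbed run per parameter, versus hundreds of full-circuit runs for Monte Carlo, is where a speedup of that magnitude comes from; the trade-off, as noted, is that a first-order fit around the center loses accuracy out at the corners.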
All of which should help to increase yield, making chips with RF capabilities less expensive, allowing us to tie more and more radio communication into more and more pieces of equipment. Which should get more and more systems talking to each other, coordinating operation, sharing information, messaging, emailing, tweeting, gossiping, filling the ether with useless (but coherent) chatter… but I digress…
Link: Agilent Golden Gate
(Image courtesy Agilent)