
The Only Chip

The Last One You'll Ever Need?

We all have the same vision — right?  Look waaaaay out there in the future.  Fuzz your eyes a bit.  Oh, that hurts?  Well, don’t then.  But think way ahead.  Your whole system is one chip.  Well, except for IO and peripherals, of course.  The “System on Chip” concept has finally, fully come to life and your system is on one – this system, your next system, any system, every system.  

Moore’s Law is an exponential.  We all know what exponential curves look like, right?  They have an “interesting” part where things are happening along both axes, followed by a long “boring” part where the curve hugs an asymptote and cruises along toward infinity with nothing much going on.  Let’s take the cost of a gate.  If it’s easier, let’s do the cost of a million gates, which gives the same answer anyway.  As the number of gates on a chip increases exponentially, the cost per gate (or per million gates) approaches zero asymptotically.  When incremental gates cost essentially nothing, we won’t have to be very careful with them.  The three Ps of semiconductors – Power, Price, and Performance – will drop by one.  We’ll just have Power and Performance.
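For those who like a little algebra with their hand-waving, here’s a back-of-the-envelope sketch.  Assume, purely for illustration, a roughly constant die cost $C$ and a gate count that doubles every two years:

$$N(t) = N_0 \cdot 2^{t/2}, \qquad \text{cost per gate} = \frac{C}{N(t)} = \frac{C}{N_0}\,2^{-t/2} \rightarrow 0$$

The gate count grows exponentially while the die cost stays roughly flat, so the cost per gate halves on the same two-year schedule – exponential decay toward that zero asymptote.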

When we get our performance from frequency, power and performance are joined at the hip, unfortunately.  The more performance we want, the more power we consume.  However, with infinite gates available for free (isn’t zero an awesome denominator?) we can do a lot of cool stuff with parallelism that loosens that power/performance coupling.  Instead of one processor on our chip cranking away at zillions of terahertz, we can have a whole bunch of them marching along at a thrifty frequency, sharing the load.
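For the skeptics, here’s a quick sketch of why that works, using the standard first-order CMOS dynamic power relation (ignoring leakage, and assuming the workload actually parallelizes – the symbols here are illustrative):

$$P \approx \alpha C V^2 f$$

One core at frequency $f$ burns $P_1 \propto V^2 f$.  Spread the same work across $n$ cores running at $f/n$ each: aggregate throughput is unchanged, but the lower frequency lets each core run at a reduced supply voltage $V' < V$, so the total becomes $P_n \propto n \cdot V'^2 \cdot (f/n) = V'^2 f$, a win over $P_1$ by the factor $(V'/V)^2$.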

Analog?  No problem.  We’ll have analog, RF, all that good stuff.  Probably even antennas.  Worried that the best process for digital won’t work for ultimate analog?  Don’t be.  If we need to stack or interpose slices made from different semiconductor processes, the packaging folks will have that long since handled.  All for free.

How about storage?  All you want.  Non-volatile, too, and fast.  Connected to and shared by all the processors you’ve got because… why not?  And, while we’re being fancy – let’s throw in some MEMS.  Accelerometers would be nice.  Maybe an image sensor on the same package just so we won’t have any wiring to do? 

The “board” (if you could still call it that) will just connect our chip to power and peripherals – at least, the few that aren’t working wirelessly.

All of this stuff will be programmable, of course.  The hardware will be reconfigurable, and, obviously, we’ll have loads of software.  In fact, most of the weight of our system will be software – if you put the whole thing on a complexity-measuring scale.  Most of the people working on it will be software engineers, too.  Digital designers will have become as scarce as… well, analog designers.  It makes sense, because we’ll be all done.  Once the ultimate chip goes to tape-out, we can all go home and enjoy a margarita on the patio.  Thar she blows!  Go program her, boys!  We’re outta here.

All this may seem absurd. 

Of course, it is, by today’s standards – even by the standards of what will probably happen a few years from now.  However, it is easy to see that we are headed firmly in that direction right now.  Altera recently announced a chip with 3.9 billion transistors.  That’s getting right up there next to infinity – for our purposes, anyway.  Xilinx recently announced Zynq – a chip with just about all of the elements we just described, but in more modest proportions.  You can buy 32 GB of flash memory on one chip.  At the grocery store.  In the checkout line.  With your leftover change.

Hardware engineering is rapidly approaching an economy of abundance.  We’ll have everything we want on one chip for free, so why make a new one?  We won’t need some semiconductor company’s special “edit” of The Chip.  Any of them will work.  It’s a good thing, too, because – while we’ve been celebrating the happy side of exponentials with Moore’s Law – there’s a dark side too: NRE (non-recurring engineering).  As we approach the one chip that can do anything, we are also approaching the vanishing point where designing and tooling up for that chip becomes so expensive that only one company will be able to afford to do it.  We’d better hope they get it right.

These same forces are not at work on software.  Since software is pure complexity, every generation takes us farther down the curve toward a system so complex that it defies understanding.  With exponentially improving hardware, we build a larger and larger cage in which that beast of complexity can grow.  And as we become more reliant on software IP to create complex systems, our understanding of the lowest-level components diminishes.

Consider EDA tools like synthesis, for example.  Two decades ago, logic synthesis was exploding.  Universities were cranking out PhD students who wrote dissertations and conference papers on synthesis by the truckload.  Those students went into the workforce and did a fantastic job.  They spent their careers designing and refining the low-level algorithms that we all now depend on for every one of our designs.  Today, however, you don’t find those students at universities.  Today’s students are plugging together the work of their predecessors into more complex systems – with little time to dig into the subtleties of the underlying code.  A decade from now, new students will be similarly assimilating the work of today’s programmers.  In the very foreseeable future, nobody will be left who has any idea what’s going on at the lower levels of the hierarchy.  There simply won’t be time to work at the new, highest level of abstraction while also understanding the nuances of atomic-level programming.

Science fiction writers like to spin this into a vision of a world where machines develop artificial intelligence and take over the world.  It seems equally likely that we’ll gradually enter an era where critical systems begin to fail unpredictably under the sheer weight of their own complexity.  When they do, the task of unraveling them and finding the problems will be daunting indeed.  Lucky for us hardware engineers – we’ll be on the beach, sipping our cocktails.
