According to tech folklore, Carver Mead actually coined the term “Moore’s Law” – some ten years or so after the publication of Gordon Moore’s landmark 1965 article in Electronics magazine, “Cramming More Components Onto Integrated Circuits.” For the next five and a half decades, the world was reshaped by the self-fulfilling prophecy outlined in that article. Namely, that every two years or so, semiconductor companies would be … Read More → "Neuromorphic Revolution"
Over the years (actually, decades, now I come to think about it), I’ve seen a lot of great silicon chip architectures and technologies pop up like hopeful contenders in a semiconductor Whack-A-Mole competition, only to fail because their developers focused on the hardware side of things and largely relegated the software — in the form of design, analysis, and verification tools — to be “something we’ll … Read More → "Say Hello to Deep Vision’s Polymorphic Dataflow Architecture"
“I have a memory like an elephant. I remember every elephant I’ve ever met.” – Herb Caen
When Micron first told me about their new 176-layer flash memory, I thought I must’ve misheard something. That’s a typo, right? Surely you don’t mean you’ve made a chip with 176 mask layers. How heavy is that thing? … Read More → "All I Want for Christmas Is My 176-Layer Flash"
The Next Big Thing! Ferroelectric Nonvolatile Memory and Tiny Aquatic Robots Inspired by Sea Creatures
We’ve got a virtual grab bag of EE goodness in this week’s Fish Fry podcast! First up, we take a closer look at some unique robots unveiled in a recent research study at Northwestern University. We examine how these tiny robots (which are powered by light and rotating magnetic fields) are able to walk, roll, and transport cargo. Next, Stefan … Read More → "The Next Big Thing! Ferroelectric Nonvolatile Memory and Tiny Aquatic Robots Inspired by Sea Creatures"
As is usually the case, strange things are afoot in Max’s World (where the butterflies are bigger, the flowers are more colorful, the birds sing sweeter, and the beer runs plentiful and cold). Allow me to expound, elucidate, and explicate — don’t worry, I’m a professional, it won’t hurt at all (well, it won’t hurt me) … Read More → "Ultra-Low-Cost Flexible ICs Make Possible Trillions of Smart Objects"
How do we know that what our neural networks are telling us should be trusted? Can we build confidence into our neural networks so they can answer that for us? According to a new study out of MIT and Harvard, we can and it won’t break the computational bank! In this week’s Fish Fry podcast, we first check out a new way for deep learning neural networks … Read More → "It’s All About Confidence"