feature article

Flash Flood

We Never Thought...

Flash was great for storing small amounts of data. It was even pretty cool when we could pop a small card into our early digital cameras and save our low-res images with reasonable speed and security. It was expensive, but we were willing to pay the big premium because it could uniquely solve the non-volatile storage problem for certain classes of mobile devices. Flash seemed like a novelty, though, and those of us who knew all the evil quirks and kluges under the hood held a healthy respect for the technology’s potential to bite us. We knew that flash could “wear out,” that flash wasn’t truly random access, and that flash was very temperamental to bring up on a new process node – usually lagging far behind standard CMOS processes. Special techniques had to be applied just to get flash to appear to behave well, and we expected the curtain to fall away from that magic at any moment. REAL non-volatile storage was done with hard disk drive (HDD) technology – always had been, always would be.

Uh, until now.

A few years ago, the analyst line on flash was that it was great for the niche it occupied, but that it wasn’t headed for more mainstream storage applications.  When NAND flash started to grab market share, we became even more skeptical.  From an architecture point of view, it seemed like a step backward – even though performance and cost could potentially be much better.  Progress marched ahead, however, and the rhetorical question became: “Will flash someday challenge hard drives for some specialized applications like laptops?”

“Of course not!” we all laughed.  Flash has a 100X disadvantage on cost per megabyte, and it isn’t gaining any on HDD as that technology moves forward.  Nonetheless, we designed flash into mobile devices because we had to – HDD wasn’t up to the rigors of pocket pounding that most mobile devices dished out, and we wanted lots of storage in our mobile companions.  We had photos and music and video and – wait a minute! We’re suddenly dealing with tens of gigabytes of flash in our mobile devices. 

The “billions served” effect drove down prices even further, and investment in flash technology increased. Mobile systems had a huge appetite for cheap, fast, reliable non-volatile storage – and those products were selling in the millions of units.

I’ve been on a panel at the annual Flash Memory Summit for a couple of years now. The topics of discussion have generally centered on mobile-device-related issues. This year, it was pretty much the same – NAND flash getting cheaper and faster, better management technology alleviating concerns about wear and masking the lack of true random access capability, data centers adopting flash-based SSDs… Wait – rewind… Data Centers?!?

You may already be involved in the debate about SSDs replacing (or more accurately, augmenting) HDDs in enterprise data centers, but the argument goes like this: Data centers typically use large numbers of high-speed, moderate-capacity, high-cost disk drives to deliver the required capacity and access rate. However, if you introduce SSDs into the mix, you can create a net improvement in every important metric – cost per gigabyte, cost per IOPS, power per gigabyte, and power per IOPS. Flash-based SSDs are better at speed and power efficiency, and HDDs are better at cost per gigabyte, so mixing the two in the correct proportions can yield better overall results. Furthermore, since the SSDs now carry the burden of the “speed” part of the operation, the HDDs can be downgraded to less expensive, lower-speed models to further optimize cost per gigabyte.
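The arithmetic behind that argument is easy to sketch. The drive models and prices below are made-up illustrative figures, not numbers from the article or any vendor: a pool is sized to hit both a capacity target and an IOPS target, first with fast HDDs alone, then with SSDs absorbing the IOPS load and cheap, slow HDDs filling out the capacity.

```python
import math

# Hypothetical drive models -- illustrative numbers only, not real prices.
# Tuples are (capacity_gb, iops, cost_usd, watts).
FAST_HDD = (300, 200, 200, 15)   # 15k-RPM enterprise drive
SLOW_HDD = (1000, 100, 100, 8)   # 7200-RPM capacity drive
SSD      = (100, 10000, 300, 2)  # flash-based SSD

def pool_cost(need_gb, need_iops, perf_drive, cap_drive=None):
    """Size a pool: perf_drive absorbs the IOPS target, and cap_drive
    (if given) fills the remaining capacity. Returns (cost, watts)."""
    n_perf = math.ceil(need_iops / perf_drive[1])
    gb_left = max(0, need_gb - n_perf * perf_drive[0])
    filler = cap_drive or perf_drive
    n_cap = math.ceil(gb_left / filler[0])
    cost = n_perf * perf_drive[2] + n_cap * filler[2]
    watts = n_perf * perf_drive[3] + n_cap * filler[3]
    return cost, watts

# Target: 30 TB of capacity and 20,000 IOPS.
hdd_only = pool_cost(30_000, 20_000, FAST_HDD)       # 100 fast drives
tiered   = pool_cost(30_000, 20_000, SSD, SLOW_HDD)  # 2 SSDs + 30 slow HDDs
```

With these assumed figures, the tiered pool wins on both cost and power: the two SSDs alone cover the entire IOPS target, so the capacity tier can use the cheapest drives available.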

While we in the embedded and mobile world were converted long since, the current revolution in flash cost and performance may spell the beginning of the end for the venerable hard drive – even in the last bastions of HDD supremacy like data centers. HDDs have become a major bottleneck in system performance, reliability, and power consumption. In every dimension except cost per gigabyte, SSDs have already far surpassed HDDs. On reliability, while we mulled over the tendency of flash to “wear out,” we accepted as fact that the moving, spinning, grinding parts of hard drives would always fail over time (where “time” is a single-digit number of years), and that the industry was treading water on reliability (which doesn’t sell hardware to consumers) while pushing ahead on cost and density (which most certainly do).

NAND flash is now perhaps the most aggressively pushed and scaled technology driving down the Moore’s Law freeway, with no major service stops required until we hit about 20nm technology. Even at that process node, additional work, like multi-level cell technology (storing multiple bits per memory cell), promises to increase effective density even further. As flash races ahead, other technologies like ferroelectric RAM (FeRAM), magnetoresistive RAM (MRAM), programmable metallization cells (PMC), and phase change memory (PCM) are all nipping at its heels, with advantages in areas like speed, power consumption, and reliability.
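The multi-level cell density math is simple enough to sketch. The 16-billion-cell die below is a made-up illustrative figure, not from the article:

```python
# Multi-level cell (MLC) flash stores more than one bit per physical cell,
# multiplying effective density without shrinking the process node.
# The cell count here is an illustrative assumption.
def effective_gbit(cells_billion, bits_per_cell):
    """Effective capacity in gigabits for a die with the given cell count."""
    return cells_billion * bits_per_cell

slc = effective_gbit(16, 1)  # single-level cell: 16 Gbit
mlc = effective_gbit(16, 2)  # two bits per cell: 32 Gbit
tlc = effective_gbit(16, 3)  # three bits per cell: 48 Gbit
```

The trade-off, consistent with the wear concerns above, is that packing more levels into each cell narrows the voltage margins between them, which generally reduces write endurance and read speed.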

For those of us designing embedded systems, this means simply that massive quantities of reliable, power-efficient, fast mass storage are just around the corner.  With hardware foundations including ubiquitous wireless connectivity, incredible processing power, and now practically unlimited storage capacity all available to us in creating our systems, we have to broaden our thinking in architecting our embedded and mobile devices. 

Also, as technologies like flash that cut their teeth in the embedded market make their way into areas like desktop and enterprise computing, we see the continuation of the emerging trend for mobile and embedded systems to lead rather than follow the trends of their less mobile brethren. For us as embedded designers, that’s a big responsibility to bear.
