
Flash Flood

We Never Thought...

Flash was great for storing small amounts of data.  It was even pretty cool when we could pop a small card into our early digital cameras and save our low-res images with reasonable speed and security.  It was expensive, but we were willing to pay the big premium because it could uniquely solve the non-volatile storage problem for certain classes of mobile devices.  Flash seemed like a novelty, though, and those of us who knew all the evil quirks and kluges under the hood held a healthy respect for the technology’s potential to bite us.  We knew that flash could “wear out,” that flash wasn’t truly random access, and that flash was very temperamental to bring up on a new process node – usually lagging far behind standard CMOS processes.  Special techniques had to be applied just to make flash appear to behave well, and we expected the curtain to be pulled back on that magic at any moment.  REAL non-volatile storage was done with hard disk drive (HDD) technology – always had been, always would be.

Uh, until now.

A few years ago, the analyst line on flash was that it was great for the niche it occupied, but that it wasn’t headed for more mainstream storage applications.  When NAND flash started to grab market share, we became even more skeptical.  From an architecture point of view, it seemed like a step backward – even though performance and cost could potentially be much better.  Progress marched ahead, however, and the rhetorical question became: “Will flash someday challenge hard drives for some specialized applications like laptops?”

“Of course not!” we all laughed.  Flash has a 100X disadvantage on cost per megabyte, and it isn’t gaining any on HDD as that technology moves forward.  Nonetheless, we designed flash into mobile devices because we had to – HDD wasn’t up to the rigors of pocket pounding that most mobile devices dished out, and we wanted lots of storage in our mobile companions.  We had photos and music and video and – wait a minute! We’re suddenly dealing with tens of gigabytes of flash in our mobile devices. 

The “billions served” effect drove down prices even further, and investment in flash technology increased.  Mobile systems had a huge appetite for cheap, fast, reliable non-volatile storage – and those products were selling in the millions of units.

I’ve been on a panel at the annual Flash Memory Summit for a couple of years now.  The topics of discussion have generally centered on mobile-device-related issues.  This year, it was pretty much the same – NAND flash getting cheaper and faster, better management technology alleviating concerns about wear and masking the lack of true random access capability, data centers adopting flash-based SSDs… Wait – rewind… Data Centers?!?
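The “management technology” that alleviates wear concerns rests on a simple idea: spread erase cycles evenly across blocks so no single block wears out early.  Here is a minimal sketch of that idea – a toy allocator, not any real flash translation layer, with all names and structure assumed for illustration:

```python
# Toy model of flash wear leveling: always erase/write the least-worn block,
# so erase cycles spread evenly across the device. This is a sketch of the
# concept only, not a real flash translation layer.
import heapq

class WearLeveler:
    def __init__(self, num_blocks):
        # Min-heap of (erase_count, block_id): the least-worn block is on top.
        self.heap = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.heap)

    def allocate(self):
        # Pick the block with the fewest erases, charge it one erase cycle,
        # and return it for the next write.
        erases, block = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (erases + 1, block))
        return block

wl = WearLeveler(4)
writes = [wl.allocate() for _ in range(8)]
# After 8 writes across 4 blocks, each block has been erased exactly twice.
```

Real controllers layer remapping tables, bad-block management, and error correction on top of this, but the even-wear goal is the same.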

You may already be involved in the debate about SSDs replacing (or more accurately, augmenting) HDDs in enterprise data centers, but the argument goes like this:  Data centers typically use large numbers of high-speed, moderate-capacity, high-cost disk drives to deliver the required capacity and access rate.  However, if you introduce SSDs into the mix, you can create a net improvement in every important metric – cost per gigabyte, cost per IOPS, power per gigabyte, and power per IOPS.  Flash-based SSDs are better at speed and power efficiency, and HDDs are better at cost per gigabyte, so mixing the two in the correct proportions can yield better overall results.  Furthermore, since the SSD now carries the burden of the “speed” part of the operation, HDDs can be downgraded to less expensive, lower-speed models to further optimize cost per gigabyte.
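The arithmetic behind that argument can be sketched in a few lines.  All of the device figures below are illustrative assumptions invented for this example – not vendor data – but they show why even a small SSD tier moves the blended cost-per-IOPS so dramatically:

```python
# Sketch of the blended-tier argument: mix SSD and HDD capacity and compare
# the resulting $/GB and $/IOPS. All device figures are assumptions for
# illustration, not real vendor numbers.

def blended_metrics(ssd_gb, hdd_gb, ssd, hdd):
    """Return blended cost per GB and cost per IOPS for a mixed tier.

    ssd/hdd are dicts with 'cost_per_gb' ($/GB) and 'iops_per_gb'.
    """
    total_gb = ssd_gb + hdd_gb
    total_cost = ssd_gb * ssd["cost_per_gb"] + hdd_gb * hdd["cost_per_gb"]
    total_iops = ssd_gb * ssd["iops_per_gb"] + hdd_gb * hdd["iops_per_gb"]
    return {
        "cost_per_gb": total_cost / total_gb,
        "cost_per_iops": total_cost / total_iops,
    }

# Assumed figures: SSDs cost far more per GB but deliver far more IOPS per GB.
ssd = {"cost_per_gb": 2.00, "iops_per_gb": 100.0}
hdd = {"cost_per_gb": 0.05, "iops_per_gb": 0.5}

all_hdd = blended_metrics(0, 100_000, ssd, hdd)       # 100 TB, HDD only
mixed = blended_metrics(10_000, 90_000, ssd, hdd)     # 10 TB SSD + 90 TB HDD

# The small SSD tier dominates total IOPS, so cost per IOPS plunges even
# though cost per GB rises; downgrading the HDDs then claws that back.
print(all_hdd, mixed)
```

Under these made-up numbers, the 10% SSD tier cuts cost per IOPS by roughly 4x; the remaining lever – swapping the high-speed HDDs for cheap, slow ones – is what lets the blended cost per gigabyte improve as well.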

While we in the embedded and mobile world were converted long ago, the current revolution in flash cost and performance may spell the beginning of the end of the venerable hard drive – even in the last bastions of HDD supremacy like data centers.  HDDs have become a major bottleneck in system performance, reliability, and power consumption.  In every dimension except cost per gigabyte, SSDs have already far surpassed HDDs.  In the area of reliability, while we mulled over the tendency of flash to “wear out,” we accepted as fact that the moving, spinning, grinding parts of hard drives would always fail over time (where time is a single-digit number of years), and that the industry was treading water on reliability (which doesn’t sell hardware to consumers) while pushing ahead with cost and density progress (which most certainly does).

NAND flash is now perhaps the most aggressively pushed and downsized technology driving down the Moore’s Law freeway, with no major service stops required until we hit about 20nm technology.  Even at that process node, additional work, like multi-level cell technology (storing multiple bits per memory cell), promises to increase effective density even further.  As flash races ahead, other technologies like ferroelectric RAM (FeRAM), magnetoresistive RAM (MRAM), programmable metallization cells (PMC), and phase-change memory (PCM) are all nipping at its heels, with advantages in areas like speed, power consumption, and reliability.

For those of us designing embedded systems, this means simply that massive quantities of reliable, power-efficient, fast mass storage are just around the corner.  With hardware foundations including ubiquitous wireless connectivity, incredible processing power, and now practically unlimited storage capacity all available to us in creating our systems, we have to broaden our thinking in architecting our embedded and mobile devices. 

Also, as technologies like flash that cut their teeth in the embedded market make their way into areas like desktop and enterprise computing, we see the continuation of the emerging trend for mobile and embedded systems to lead rather than follow the trends of their less mobile brethren. For us as embedded designers, that’s a big responsibility to bear.
