Meta recently published a blog post, titled “A Case for QLC SSDs in the Data Center,” which argues that adding a QLC SSD layer between a system’s TLC SSDs and its HDDs can help improve Meta’s overall system cost/performance ratio.
The SSD Guy finds that this makes perfect sense, since a QLC SSD fits into the memory/storage hierarchy the same way as any other layer:
- A QLC SSD is cheaper and slower than the next-faster layer (TLC SSDs)
- A QLC SSD is more expensive and faster than the next-slower layer (HDDs)
There are, of course, some who believe that there are already enough layers in the memory/storage hierarchy without adding another, but the Meta researchers don’t subscribe to that idea, since they find that adding a layer helps improve performance while reducing cost and energy usage.
According to Meta, the added layer reduces power consumption because the write load at this level of the hierarchy is very low. An SSD read consumes much less energy than an SSD write, while HDD reads and writes consume roughly the same amount of energy, and that amount is higher than the energy of an SSD read. The QLC SSDs absorb a large number of these low-energy reads, so they don’t get passed down to the HDD level.
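A rough sketch of that arithmetic appears below. The per-gigabyte energy figures are purely hypothetical placeholders chosen only to reflect the relationships just described (an SSD read is much cheaper than an SSD write, and HDD reads and writes cost about the same and more than an SSD read); they are not numbers from Meta’s post.

```python
# Hypothetical energy costs per GB moved (joules). Illustrative placeholders
# only, chosen to reflect the relationships described above -- not
# measurements from Meta's post.
ENERGY_J_PER_GB = {
    "qlc_read": 0.05,   # SSD reads are cheap
    "qlc_write": 0.50,  # SSD writes cost much more than reads
    "hdd_read": 1.00,   # HDD reads and writes cost about the same,
    "hdd_write": 1.00,  # and more than an SSD read
}

def read_energy_joules(read_gb: float, qlc_hit_rate: float) -> float:
    """Energy to serve read_gb of reads when a fraction qlc_hit_rate is
    absorbed by the QLC layer and the rest falls through to the HDDs."""
    qlc_gb = read_gb * qlc_hit_rate
    hdd_gb = read_gb - qlc_gb
    return (qlc_gb * ENERGY_J_PER_GB["qlc_read"]
            + hdd_gb * ENERGY_J_PER_GB["hdd_read"])

reads_gb = 10_000  # an arbitrary read volume
print(f"No QLC layer:    {read_energy_joules(reads_gb, 0.0):,.0f} J")
print(f"QLC absorbs 80%: {read_energy_joules(reads_gb, 0.8):,.0f} J")
```

The more of the read traffic the QLC layer absorbs, the less of it reaches the energy-hungry HDD tier, which is the heart of Meta’s energy argument.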
One of the reasons these researchers considered adding a QLC layer was their concern about HDDs’ shrinking “Sustained Throughput/TB,” or more simply “BW/TB,” which they depict in a chart. This measure indicates how long it would take to read the entire contents of an SSD or HDD: if a drive’s capacity doubles but its interface speed is unchanged, then reading its entire contents takes twice as long. The Meta engineers appear to be unaware that the HDD community has been discussing this phenomenon for decades under the term “Access Density.”
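To make the BW/TB arithmetic concrete, here is a small sketch; the capacity and sustained-throughput figures are illustrative round numbers, not values from Meta’s chart.

```python
def full_read_hours(capacity_tb: float, sustained_mb_per_s: float) -> float:
    """Hours needed to read a drive's entire contents at its sustained throughput."""
    capacity_mb = capacity_tb * 1_000_000  # decimal TB -> MB
    return capacity_mb / sustained_mb_per_s / 3600

# Illustrative round numbers: a 20TB HDD at ~270 MB/s sustained takes about
# 20 hours to read in full; doubling the capacity at the same interface
# speed doubles that time and halves BW/TB.
for capacity_tb in (20, 40):
    hours = full_read_hours(capacity_tb, 270)
    print(f"{capacity_tb} TB at 270 MB/s: {hours:.1f} hours to read, "
          f"BW/TB = {270 / capacity_tb:.1f} MB/s per TB")
```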
Make Them Huge
The Meta team wanted to get the highest-capacity SSD they could, and explained that the industry ought to be able to provide 512TB SSDs in a U.2 form factor by using 2Tb (terabit) QLC NAND flash chips stacked 32-high in each IC package. NAND chip makers have become very proficient at stacking flash dice for things like microSD cards, so this makes sense. Such a package would contain 8TB (terabytes) of flash, so the U.2 SSD would only need 64 of these packages (say, 4 rows of 8 packages on each of the 2 sides of the PC card) plus a controller, a little DRAM, and a few other housekeeping chips.
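The capacity arithmetic is easy to check. The sketch below simply restates the numbers from the paragraph above (2Tb dice, 32-high stacks, 64 packages) and confirms that they multiply out to 512TB:

```python
DIE_TB = 2 / 8            # a 2Tb (terabit) QLC die holds 0.25 terabytes
DIES_PER_PACKAGE = 32     # dice stacked 32-high in each IC package
PACKAGES_PER_SSD = 64     # e.g. 4 rows of 8 packages on 2 sides of the card

package_tb = DIE_TB * DIES_PER_PACKAGE   # 8 TB per package
ssd_tb = package_tb * PACKAGES_PER_SSD   # 512 TB per U.2 SSD
print(f"{package_tb:.0f} TB per package, {ssd_tb:.0f} TB per SSD")
```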
NAND flash makers have been focusing a lot of effort on providing very high capacity SSDs lately. Most NAND flash makers have recently announced hundred-terabyte SSDs, as I discussed in a post called “New Interest in Monster SSDs.” At its February 2025 Investor Day, the newly spun-out Sandisk revealed a plan to introduce SSDs as large as 512TB by 2027, followed by a full 1PB SSD at some later, undisclosed date.
Such enormous SSDs will not only save energy, but they will also consume far less of the datacenter’s floor space than the highest-capacity HDDs, which are currently in the 20TB-to-30TB range. Today’s huge AI build-out might make that a priority.
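As a back-of-the-envelope illustration of the floor-space point, the sketch below counts how many drives it would take to hold one exabyte at each capacity. It ignores redundancy, rack geometry, and slot counts, and uses round drive capacities in line with the figures mentioned in this post.

```python
import math

EXABYTE_TB = 1_000_000  # one exabyte expressed in decimal terabytes

def drives_needed(dataset_tb: float, drive_tb: float) -> int:
    """Drives required to hold the dataset, ignoring redundancy overheads."""
    return math.ceil(dataset_tb / drive_tb)

print("30 TB HDDs needed for 1 EB: ", drives_needed(EXABYTE_TB, 30))
print("512 TB SSDs needed for 1 EB:", drives_needed(EXABYTE_TB, 512))
```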
I’ll be interested to see where this leads. The average density of an HDD has been increasing more steeply ever since the HDD market became more focused on the datacenter as it lost PC and server sales to SSDs. This is clear in the chart below, which illustrates that trend for Western Digital. Perhaps something similar will happen with SSDs. Time will tell.
Objective Analysis makes a point of understanding the drivers of change in the market. If your company would like to take advantage of this understanding, please contact us to explore ways that we can work together for our mutual benefit.