Testing Endurance

We've mentioned in the past that NAND endurance is not an issue for client workloads. While Intel's SSD 335 moves to 20nm MLC NAND, the NAND itself is still rated at the same 3,000 P/E cycles as Intel's 25nm MLC NAND. Usually we can't do any long-term endurance testing for an initial review because it simply takes way too long to wear out an SSD; even if you're constantly writing to a drive, it will take weeks, possibly even months, for the drive to wear out. Fortunately, Intel reports total NAND writes and the percentage of lifespan remaining as SMART values that can be read using the Intel SSD Toolbox. The attributes we want to pay attention to are E9 and F9, which represent the Media Wearout Indicator (MWI) and total NAND writes, respectively. Using those values, we can estimate the long-term endurance of an SSD without weeks of testing. Here is what the SMART data looked like before I started our endurance test:

This screenshot was taken after all our regular tests had been run, so there are already some writes on the drive, although nothing substantial. What surprised me was that the MWI was already at 92, even though only 1.2TB had been written to the NAND. Remember that the MWI starts at 100 and decreases to 1 as the drive uses up its program/erase cycles. Even after it has hit 1, the drive can likely withstand additional write/erase cycles because MLC NAND typically behaves better than its worst-case rating.
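As an aside for anyone who wants to watch the same counters on their own drive: the Intel SSD Toolbox is a Windows application, but E9 and F9 are ordinary SMART attributes, so a tool like smartmontools can read them as well. Below is a minimal sketch of that idea in Python; the device path and the output parsing are assumptions for illustration, not part of the Toolbox (0xE9 and 0xF9 are simply 233 and 249 in decimal, which is how smartctl lists them).

```python
# A minimal sketch (not the Intel SSD Toolbox) for reading the same two
# attributes with smartmontools on Linux. The device path is a placeholder,
# and the column positions assume smartctl's standard attribute table.
import subprocess

DEVICE = "/dev/sda"   # hypothetical device path
ATTRS = {233: "Media Wearout Indicator (E9)",   # 0xE9 = 233
         249: "Total NAND writes (F9)"}         # 0xF9 = 249

def read_smart(device: str) -> dict:
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    values = {}
    for line in out.splitlines():
        parts = line.split()
        if parts and parts[0].isdigit() and int(parts[0]) in ATTRS:
            # Column 4 is the normalized value (what the MWI uses);
            # the last column is the raw value (what the write counter uses).
            values[ATTRS[int(parts[0])]] = {"normalized": int(parts[3]),
                                            "raw": parts[-1]}
    return values

if __name__ == "__main__":
    for name, val in read_smart(DEVICE).items():
        print(f"{name}: normalized={val['normalized']}, raw={val['raw']}")
```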

We've never received an Intel SSD sample that started with such a low MWI, indicating either a firmware bug or extensive in-house testing before the drive was sent to us.

To write as much as possible to the drive before the NDA lift, I first filled the drive with incompressible data and then proceeded with incompressible 4KB random writes at a queue depth of 32. SandForce does real-time data compression and deduplication, so incompressible random data is the fastest way to write a lot of data to the NAND in a short period of time. I ran the tests in roughly 10-hour blocks; here is the SMART data after the first 11 hours of writing:

I had written another ~3.8TB to the NAND in just 11 hours, but what's shocking is that the MWI had already dropped from 92 to 91. With the SSD 330, Anand wrote 7.6TB to the NAND and the MWI stayed at 100, and that was a 60GB model; our SSD 335 is 240GB and should thus be more durable (there is more NAND to spread the writes across). It's certainly possible that the MWI was right at the edge between 92 and 91 after Intel's in-house testing, but I decided to run more tests to see if that was the case. Let's fast-forward through the 105 hours in total that I spent writing to the drive:

In a few days, I managed to write a total of 37.8TB to the NAND, and during that time the MWI dropped from 92 to 79. In other words, I used up 13% of the drive's available P/E cycles. This is far from good news. Based on the data I gathered, the MWI would hit 0 after around 250TB of NAND writes, which for a 240GB drive translates to less than 1,000 P/E cycles.
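To make that extrapolation explicit, here is the back-of-the-envelope math as a short sketch. The 256GiB raw NAND figure (240GB of user capacity plus spare area) and the use of decimal terabytes are assumptions for illustration; they aren't values the drive reports.

```python
# Back-of-the-envelope extrapolation from the observed MWI decay. Treating the
# reported figures as decimal terabytes and assuming 256 GiB of raw NAND
# (240GB user capacity plus spare area) are simplifications on my part.

writes_at_start_tb = 1.2       # NAND writes already on the drive at MWI 92
writes_now_tb = 37.8           # total NAND writes after 105 hours of testing
mwi_start, mwi_now = 92, 79

tb_per_point = (writes_now_tb - writes_at_start_tb) / (mwi_start - mwi_now)
projected_total_tb = writes_now_tb + mwi_now * tb_per_point

raw_nand_tb = 256 * 2**30 / 1e12                # ≈ 0.275 TB of raw NAND
implied_pe_cycles = projected_total_tb / raw_nand_tb

print(f"~{tb_per_point:.1f} TB of NAND writes per MWI point")    # ~2.8 TB
print(f"MWI reaches 0 after ~{projected_total_tb:.0f} TB")       # ~260 TB
print(f"Implied endurance: ~{implied_pe_cycles:.0f} P/E cycles") # just under 1,000
```

However you round the inputs, the projection lands in the neighborhood of 250-260TB of NAND writes, or roughly a third of the rated 3,000 cycles.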

I showed Intel my findings and they were as shocked as I was. The drive had gone through their validation before shipping and nothing out of the ordinary had been found. Intel confirmed that the NAND in the SSD 335 should indeed be rated at 3,000 P/E cycles, so my findings contradicted that data by a fairly significant margin. Intel hadn't seen anything like this before and asked me to send the drive back for additional testing. We'll be getting a new SSD 335 sample to see if we can replicate the issue.

It's understandable that the endurance of 20nm NAND may be slightly lower than that of 25nm NAND even though both are rated at 3,000 P/E cycles (Intel also has 25nm NAND rated at 5,000 cycles), because 25nm is now a mature process whereas 20nm is still very new. Remember that the P/E cycle rating is the minimum the NAND must withstand; in reality it can be much more durable, as we saw with the SSD 330 (based on our tests, its NAND was good for at least 6,000 P/E cycles). Hence both 20nm and 25nm MLC NAND can be rated at 3,000 cycles even though their real-world endurance may vary, but both should still last for at least 3,000 cycles.
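To put those cycle ratings into terabytes for a drive of this size, here is a quick sketch of what 3,000 cycles (and the SSD 330's observed 6,000+) should translate to in total NAND writes; the 256GiB raw capacity is again an assumption rather than a reported value.

```python
# What the cycle ratings imply in total NAND writes for this class of drive.
# The 256 GiB raw NAND figure (240GB user capacity plus spare area) is an
# assumption, not a reported value.

raw_nand_tb = 256 * 2**30 / 1e12      # ≈ 0.275 TB of raw NAND

for label, cycles in [("Rated minimum (3,000 cycles)", 3_000),
                      ("SSD 330 as tested (6,000+ cycles)", 6_000),
                      ("This sample's projection (~950 cycles)", 950)]:
    print(f"{label:36s} ≈ {cycles * raw_nand_tb:5.0f} TB of NAND writes")
```

In other words, a 3,000-cycle rating should be good for well over 800TB of NAND writes on a 240GB drive, which is why a projection of roughly 250TB stands out so badly.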

It's too early to conclude much based on our sample size of one. There's always the chance that our drive was defective or subject to a firmware bug. We'll be updating this section once we get a new drive in house for additional testing.

Comments
  • Per Hansson - Tuesday, October 30, 2012 - link

    No, it does not work like that.
    A slow DMM might take a reading every second.
    An example, in seconds:
    1: 2w
    2: 2w
    3: 2w
    Average=2w

    A fast DMM might take readings every 100ms:
    1: 2w
    2: 0.5w
    3: 2w
    4: 0.5w
    Average=1w

As you can see, a DMM does not take a continuous reading; it takes readings at points in time and averages them...

An SSD might actually change power levels much more frequently, perhaps every millisecond (consider its performance: how long does it take to write 4KB of data, for example?)
  • hrga - Thursday, November 1, 2012 - link

I don't think an SSD would even try to write an amount of data as small as 4KB every millisecond, considering how large the buffers usually are (128GB of LPDDR2). So these kinds of small writes occur in bursts when they accumulate, every 15-30s (at least I hope so, as that was the case with hard drives). That of course depends on the firmware and the values in it.
  • Per Hansson - Thursday, November 1, 2012 - link

That makes no difference; I sincerely hope that no drive waits 15-30 seconds to write data to disk, because that is just a recipe for data loss in case of power failure or BSOD.
    I also hope no drive uses a 128GB write cache. (Intel's in house controller keeps no user data in cache as an example, but I digress)

Even if the drive waits a minute before it writes the 4KB of data, you must still have a DMM capable of catching that write, which is completed in less than a millisecond.
    Otherwise the increased power consumption during the disk write will be completely missed by the DMM.
  • Mr Alpha - Monday, October 29, 2012 - link

Wouldn't it make more sense to test the idle power consumption on a platform that supports DIPM? Idle power usage mostly matters on mobile devices, and it is on those that you get DIPM support.
  • sheh - Monday, October 29, 2012 - link

    The text says total writes were 1.2TB, (+3.8TB=) 5TB, and 37.8TB. The screenshots show "host writes" at 1.51TB, 2.11TB, and 3.90TB?
  • sheh - Monday, October 29, 2012 - link

And why the odd power-on hours counts?
  • Kristian Vättö - Monday, October 29, 2012 - link

You are mixing up host writes and actual NAND writes. Host writes are the data that the host (e.g. an operating system) sends to the SSD controller to write. NAND writes show how much is actually written to the NAND.

When the SSD is pushed into a corner like this, you will end up with more NAND writes than host writes because of read-modify-write (i.e. all user-accessible LBAs are already full, so the controller must read a block into cache, modify the data, and rewrite the block). Basically, your host may be telling the controller to write 4KB, but the controller ends up writing 2048MB (that's the block size).
  • extide - Monday, October 29, 2012 - link

    Block size is 2048KB*
  • sheh - Monday, October 29, 2012 - link

    So the write amplification in the end was x9.7?

    Are NAND writes also reported by SMART?

    And with the messed up power on count, how can you know the rest of the SMART data is reliable?
  • Kristian Vättö - Tuesday, October 30, 2012 - link

    Yes, write amplification was around 9.7x in the end. That makes sense because the drive becomes more and more fragmented the more you write to it.

As you can see in the screenshots, the SMART value F9 corresponds to NAND writes. Most manufacturers don't report this data, though.

We just have to assume that the values are correct. Otherwise we could doubt every single test result we get, which would make reviewing impossible. The data makes sense, so at least it's not screaming that something is off, and from what I have read, we aren't the only site that has noticed weird endurance behavior.
