Performance Consistency

In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show that. The reason we don't see consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying that work can deliver higher peak performance at the expense of much lower worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure-erased SSD and filled it with sequential data. This ensures that all user-accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as we run our steady state tests, but enough to give me a good look at drive behavior once all of the spare area fills up.
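
If you want to approximate this workload yourself, a couple of fio jobs will get you most of the way there. The sketch below is an approximation using fio rather than the tool used for these tests; the device path is a placeholder, the parameters mirror the description above (sequential fill, then 4KB random writes at QD32 for ~2000 seconds with randomized buffers standing in for incompressible data), and everything on the target drive is destroyed.

```python
# Rough fio-based approximation of the consistency test described above.
# Assumptions (not from the article): fio is installed, you run as root, and
# DEVICE points at the SSD under test -- all data on it will be destroyed.
# The article uses an in-house tool; these parameters are best-guess equivalents.
import subprocess

DEVICE = "/dev/sdX"  # placeholder -- the drive under test (secure erase it first)

def fio(*args):
    subprocess.run(["fio", *args], check=True)

# Step 1: sequential fill so every user-accessible LBA holds data.
fio("--name=seqfill", f"--filename={DEVICE}", "--direct=1",
    "--ioengine=libaio", "--rw=write", "--bs=128k", "--iodepth=32")

# Step 2: 4KB random writes across all LBAs at QD32 with incompressible
# (randomized) data, logging average IOPS once per second for ~2000 seconds.
fio("--name=randwrite", f"--filename={DEVICE}", "--direct=1",
    "--ioengine=libaio", "--rw=randwrite", "--bs=4k", "--iodepth=32",
    "--norandommap", "--randrepeat=0", "--refill_buffers",
    "--time_based", "--runtime=2000",
    "--log_avg_msec=1000", "--write_iops_log=randwrite")
```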

I recorded instantaneous IOPS every second for the duration of the test, then plotted IOPS vs. time to generate the scatter plots below. Within each set of graphs every drive is plotted on the same scale for easy comparison. The first two sets use a log scale, while the last set uses a linear scale that tops out at 50K IOPS to better show the differences between drives.
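
Turning a per-second IOPS log into one of these scatter plots takes only a few lines. The sketch below assumes a fio-style IOPS log (CSV with a millisecond timestamp in the first column and IOPS in the second); the filename is a placeholder for whatever your run produced.

```python
# Minimal sketch: scatter-plot per-second IOPS the way the charts below do.
# Assumes a fio-style IOPS log (CSV: time in ms, IOPS, ...); filename is a placeholder.
import csv
import matplotlib.pyplot as plt

times, iops = [], []
with open("randwrite_iops.1.log") as f:
    for row in csv.reader(f):
        if not row:
            continue
        times.append(int(row[0]) / 1000.0)   # ms -> seconds
        iops.append(int(row[1]))

plt.scatter(times, iops, s=4)
plt.yscale("log")                            # switch to linear to mimic the third set of graphs
plt.xlabel("Time (s)")
plt.ylabel("4KB random write IOPS")
plt.title("IO consistency, QD32 4KB random writes")
plt.savefig("consistency.png", dpi=150)
```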

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, I varied the percentage of the drive that I filled/tested depending on the amount of spare area I was trying to simulate. The buttons are labeled with the user capacity the drive would have advertised had its vendor shipped it with that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, and then delete the partition. Finally, this method of creating spare area works on the drives we've tested here, but not all controllers may behave the same way.
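
As a rough illustration of the arithmetic (the NAND and capacity figures below are hypothetical, not the specs of any drive tested here), sizing the partition for a given amount of effective spare area looks like this:

```python
# Hypothetical example of sizing a partition to simulate extra spare area.
# Figures are illustrative only, not the specs of any drive tested here.
GB = 1000**3
GiB = 1024**3

raw_nand_bytes = 256 * GiB        # e.g. a drive built from 256GiB of raw NAND
target_spare_fraction = 0.25      # simulate ~25% spare area

partition_bytes = raw_nand_bytes * (1 - target_spare_fraction)
print(f"Raw NAND:  {raw_nand_bytes / GB:.0f}GB")
print(f"Partition: {partition_bytes / GB:.0f}GB ({partition_bytes / GiB:.0f}GiB)")
# Space outside the partition is never written (or has been TRIMmed away),
# so the controller is free to use it as additional spare area.
```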

The first set of graphs shows the performance data over the entire 2000-second test period. In these charts you'll notice an early period of very high performance followed by a sharp drop-off. What you're seeing is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
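
To make the relationship between spare area and write amplification concrete, here is a toy page-mapped FTL with greedy garbage collection. It is purely illustrative and not how any of these controllers actually works, but it captures the trend: with less spare area, each cleanup pass has to relocate more still-valid data, so write amplification climbs and sustained random write performance falls.

```python
# Toy page-mapped FTL with greedy garbage collection, for illustration only.
# Real controllers add wear leveling, TRIM handling, caching and smarter victim
# selection; this just shows why less spare area means more background work.
import random

def write_amplification(spare_fraction, blocks=64, pages_per_block=128,
                        random_writes=100_000, seed=1):
    rng = random.Random(seed)
    user_pages = int(blocks * pages_per_block * (1 - spare_fraction))
    assert user_pages < (blocks - 2) * pages_per_block, "needs a few blocks of spare"
    block_lbas = [set() for _ in range(blocks)]   # valid LBAs held by each block
    lba_block = {}                                # LBA -> block currently holding it
    free = list(range(1, blocks))                 # erased blocks
    closed = set()                                # fully programmed blocks
    open_blk, fill = 0, 0                         # write frontier (block, pages used)
    physical = 0                                  # every page program, host or GC

    def writable_pages():
        return len(free) * pages_per_block + (pages_per_block - fill)

    def place(lba):
        nonlocal open_blk, fill, physical
        if lba in lba_block:                      # invalidate the previous copy
            block_lbas[lba_block[lba]].discard(lba)
        block_lbas[open_blk].add(lba)
        lba_block[lba] = open_blk
        fill += 1
        physical += 1
        if fill == pages_per_block:               # frontier full: open a fresh block
            closed.add(open_blk)
            open_blk, fill = free.pop(), 0

    def collect_garbage():
        while writable_pages() < 2 * pages_per_block:
            victim = min(closed, key=lambda b: len(block_lbas[b]))  # fewest valid pages
            closed.discard(victim)
            for lba in list(block_lbas[victim]):  # relocating live data costs writes too
                place(lba)
            free.append(victim)                   # victim is now erased and reusable

    for lba in range(user_pages):                 # precondition: fill every user LBA once
        collect_garbage()
        place(lba)

    physical = 0                                  # measure the random overwrite phase only
    for _ in range(random_writes):
        collect_garbage()
        place(rng.randrange(user_pages))
    return physical / random_writes

for spare in (0.07, 0.12, 0.25):                  # takes a few seconds to run
    print(f"{spare:.0%} effective spare area -> write amplification ~{write_amplification(spare):.1f}")
```

Running it prints a simulated write amplification figure for a few spare-area fractions; the absolute numbers depend entirely on the toy model's assumptions, so only the trend is meaningful.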

The second set of graphs zooms in on the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation, but on a linear performance scale. Click the buttons below each graph to switch source data.

[Interactive graphs: 4KB random write IOPS vs. time, full 2000-second run, log scale — Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, SanDisk Extreme II 480GB, Seagate 600 480GB; Default / 25% Spare Area]

Um, hello, awesome? The SanDisk Extreme II is the first Marvell-based consumer SSD to actually prioritize performance consistency. The Extreme II does significantly better than pretty much every other drive here, with the exception of Corsair's Neutron. Note that increasing the amount of spare area on the drive actually reduces IO consistency, at least over the short duration of this test, as SanDisk's firmware aggressively attempts to improve the overall performance of the drive. Either way, this is the first SSD from a big OEM supplier that actually delivers consistent performance in the worst-case scenario.

[Interactive graphs: 4KB random write IOPS vs. time, steady state detail from t=1400s, log scale — Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, SanDisk Extreme II 480GB, Seagate 600 480GB; Default / 25% Spare Area]

[Interactive graphs: 4KB random write IOPS vs. time, steady state detail from t=1400s, linear scale (0-50K IOPS) — Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, SanDisk Extreme II 480GB, Seagate 600 480GB; Default / 25% Spare Area]

Comments

  • dsumanik - Tuesday, June 4, 2013

    The benches on this drive are good... not great, and I don't think the opening bias is necessary. Who runs any disk at capacity 24/7? Perhaps some people temporarily... But 24/7 drive full???

    Only a fool.

    Kudos to SanDisk for making a competitive offering, but please, AnandTech, keep the bias out of the reviews... especially when it's not warranted.

    Storage bench is great, but it's not the only metric.

    Haswell is good, not great. But if you're rocking a 2600K from 2 years ago? Meh.

    Where are the legendary power savings? Why don't we have 4GHz+ SKUs? 8 cores? 64GB RAM support? Quick Sync degraded, lol!! Good job on Iris Pro. Why can't I buy it and slap it into an enthusiast board?

    Yet you read this review and the Haswell review and come away feeling positive.

    Real life:

    Intel,
    A mild upgrade in IPC, higher in-use TDP, 2-year-old CPUs are still competitive

    SanDisk,
    Mixed bag of results, on unproven firmware.
  • Death666Angel - Tuesday, June 4, 2013

    Why do you keep ignoring the Samsung 840 Pro with increased spare area when it comes to consistency? It seems to me to be the best drive around. And if you value and know about consistency, it seems pretty straightforward to increase the spare area, and you should have the ability to do so as well.
  • seapeople - Wednesday, June 5, 2013

    Agreed, it looks like a Samsung 840 Pro that's not completely full would be the performance king in every aspect - most consistent (check the 25% spare area graphs!), fastest in every test, good reliability history, and the best all around power consumption numbers, especially in the idle state which is presumably the most important.

    Yet this drive is virtually ignored in the review, other than the ancillary mention in all the performance benchmarks it still wins, "The SanDisk did great here! Only a little behind all the Samsung drives... and as long as the Samsung drives are completely full, then the SanDisk gets better consistency, too! The SanDisk is my FAVORITE!"

    The prevailing theme of this review should probably be "The SanDisk gives you performance nearly as good as a Samsung at a lower price." Not, "OMG I HAVE A NEW FAV0RIT3 DRIVE! Look at the contrived benchmark I came up with to punish all the other drives being used in ways that nobody would actually use them in..."

    Seriously, anybody doing all that junk with their SSD would know to partition 25% of spare area into it, which then makes the Samsung Pro the clear winner, albeit at a higher cost per usable GB.
  • FunBunny2 - Tuesday, June 4, 2013

    To the extent that "cloud" (re-)creates server-dense/client-thin computing, how well an SSD behaves in today's "client" doesn't matter much. Server workloads, with lots of random operations, will be where storage happens. Anand is correct to test SSDs under more server-like loads. As many have figured out, HDDs in the enterprise are little different from consumer parts. "Cloud" vendors, in order to make money, will segue to "consumer" SSDs. Thus, we do need to know how well they behave under "server" loads; they will see such loads in any case. Clients will come with some amount of flash (not necessarily even on current file system protocols).
  • joel4565 - Tuesday, June 4, 2013

    Any word on whether this drive will be offered in a 960 GB capacity for a reasonable price in the near future?

    This looks like the best-performing drive yet reviewed, but I doubt I will see that big of a difference from my 120GB Crucial M4 in day-to-day usage. I really don't think most of us will see a large difference until we go to a faster interface.

    So unless this drastically changes in the next few months, I think my next drive will be the Crucial M500 960GB. Yes, it will not be as consistent or quite as fast as the SanDisk Extreme II, but I won't have to worry about splitting my files, or moving Steam games from my 7200RPM drive to the SSD if they have long load times.
  • clepsydrae - Wednesday, June 5, 2013

    Question for those more knowledgeable: I'm building a new DAW (4770K, Win 8) which will also be used for development (Eclipse in Linux). Based on earlier AnandTech reviews I ordered a 128GB 840 Pro for use as the OS drive, Eclipse workspace directory and the like. Reading this article, I'm not sure if I should return the 840 Pro for the SanDisk... the 840 Pro leads it in almost all the metrics except the one that is the most "real-world" and which seems to mimic what I'll be using it for (i.e. Eclipse).

    Opinions?
  • bmgoodman - Wednesday, June 5, 2013

    I gave up on SanDisk after they totally botched TRIM on their previous generation drive. They did such a poor job admitting it and finally fixing it that it left a bad taste in my mouth. They'd have to *give* me a drive for me to try their products again.
  • samster712 - Friday, June 7, 2013

    So would anyone recommend this drive over the 840 Pro 256GB? I'm very indecisive about buying a new drive.
  • Rumboogy - Thursday, July 11, 2013

    Quick question. You mentioned a method to create an unused block of storage that could be used by the controller: creating a new partition (I assume fully formatting it) and then deleting it. This assumes TRIM marks the whole set of LBAs that covered the partition as available. What is the comparable procedure on a Mac, particularly if you don't get TRIM by default? And if you do turn it on, would it work in this case? Is there a way to guarantee you are allocating a block of LBAs to non-use on a Mac?
  • pcmax - Monday, August 12, 2013

    Would have been really nice to compare it to their previous gen, the Extreme I.
