Performance Consistency

In our Intel SSD DC S3700 review Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result it needed some additional testing to show that. The reason we don't get consistent IO latency from SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying it can result in higher peak performance at the expense of much lower worst case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as our steady state tests run, but long enough to give me a good look at drive behavior once all of the spare area had been used up.
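
The review doesn't spell out the tooling used for this workload, so here is a minimal, hypothetical way to reproduce something similar on Linux, assuming fio is installed and that /dev/sdX points at a disposable test drive (everything on it will be destroyed):

```python
# Hypothetical reproduction of the preconditioning + torture workload with fio.
# Assumes Linux, fio with the libaio engine, and a scratch SSD at DEVICE whose
# contents you do not care about -- this writes directly to the raw device.
import subprocess

DEVICE = "/dev/sdX"  # placeholder: point at a disposable drive, never your OS disk

# Step 1: fill every user-accessible LBA once with sequential data.
subprocess.run([
    "fio", "--name=seqfill", f"--filename={DEVICE}",
    "--rw=write", "--bs=128k", "--direct=1",
    "--ioengine=libaio", "--iodepth=32",
], check=True)

# Step 2: 4KB random writes across all LBAs at QD32 for ~2000 seconds,
# using refilled (incompressible) buffers and logging average IOPS once
# per second to hawk_iops.1.log.
subprocess.run([
    "fio", "--name=hawk", f"--filename={DEVICE}",
    "--rw=randwrite", "--bs=4k", "--direct=1",
    "--ioengine=libaio", "--iodepth=32",
    "--time_based", "--runtime=2000",
    "--norandommap", "--randrepeat=0", "--refill_buffers",
    "--log_avg_msec=1000", "--write_iops_log=hawk",
], check=True)
```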

I recorded instantaneous IOPS every second for the duration of the test. I then plotted IOPS vs. time and generated the scatter plots below. Within each set of graphs, every drive is plotted on the same scale. The first two sets use a log scale for easy comparison, while the last set uses a linear scale that tops out at 40K IOPS for better visualization of the differences between drives.
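
As a sketch of how the raw per-second data turns into these scatter plots, the snippet below parses an IOPS log of the kind produced by the fio example above (first column is time in milliseconds, second column is IOPS) and renders it with matplotlib. The file name and column layout are assumptions tied to that example:

```python
# Minimal plotting sketch: turn a per-second IOPS log (e.g. hawk_iops.1.log
# from the fio sketch above) into an IOPS-vs-time scatter plot on a log scale.
import matplotlib.pyplot as plt

times_s, iops = [], []
with open("hawk_iops.1.log") as log:
    for line in log:
        fields = [f.strip() for f in line.split(",")]
        times_s.append(int(fields[0]) / 1000.0)  # milliseconds -> seconds
        iops.append(int(fields[1]))

plt.scatter(times_s, iops, s=2)
plt.yscale("log")  # the first two graph sets in this section use a log scale
plt.xlabel("Time (s)")
plt.ylabel("4KB random write IOPS (QD32)")
plt.title("Performance consistency")
plt.savefig("consistency.png", dpi=150)
```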

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, I did vary the percentage of the drive that I filled/tested depending on the amount of spare area I was trying to simulate. The buttons are labeled with the user capacity that would have been advertised had the SSD vendor decided to use that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here, but not all controllers may behave the same way.
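
For a rough sense of the math involved (the numbers below are illustrative, not measurements of any drive in this review), the effective spare area follows directly from the raw NAND capacity and the size of the partition you create:

```python
# Illustrative spare-area arithmetic; the example figures are hypothetical.
def spare_area_pct(raw_nand_gib: float, partition_gb: float) -> float:
    """Percent of raw NAND left as spare area when only `partition_gb`
    (decimal gigabytes) of the drive is partitioned and written to."""
    raw_nand_gb = raw_nand_gib * (1024 ** 3) / 1e9  # convert GiB to decimal GB
    return 100.0 * (1.0 - partition_gb / raw_nand_gb)

# A drive with 256 GiB of raw NAND partitioned to 240 GB leaves ~12.7% spare;
# shrinking the partition to about 206 GB pushes that to roughly 25%.
print(spare_area_pct(256, 240))  # ~12.7
print(spare_area_pct(256, 206))  # ~25.0
```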

The first set of graphs shows the performance data over the entire 2,000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp drop-off. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
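
As a back-of-the-envelope illustration of why performance falls when write amplification rises (the numbers here are made up purely for the example, not measured from any drive in this review), host-visible IOPS scale roughly with the NAND's raw program rate divided by the write amplification factor:

```python
# Hypothetical illustration: once every host write triggers internal
# read-modify-write work, effective IOPS scale inversely with write amplification.
nand_program_iops = 90_000  # assumed raw 4KB program rate of the NAND array

for write_amplification in (1.0, 3.0, 8.0):
    host_iops = nand_program_iops / write_amplification
    print(f"WA {write_amplification:>3}: ~{host_iops:,.0f} host IOPS")
```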

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

[Graph: 4KB random write IOPS vs. time over the full test run, log scale. Drives: Strontium Hawk 256GB, Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, SanDisk Extreme II 480GB. Views: Default, 25% Spare Area.]

Performance consistency isn't very good. It's not horrible, but compared to the best consumer drives it leaves a lot to be desired. Fortunately there are no major dips in performance, as IOPS stays above 1,000 in all of our data points, so users shouldn't experience any significant slowdowns. Increasing the over-provisioning definitely helps, but IOPS is still not very consistent: it bounces between roughly 2K and 10K, whereas for the most desirable drives the plot is nearly a flat line with very little amplitude.
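
One way to put a number on "a flat line with very little amplitude" is to compute the average, minimum, and coefficient of variation of the per-second IOPS over the steady-state window. The sketch below reuses the assumed log file from the earlier examples and treats everything from t=1400s onward as steady state, matching the zoomed graphs:

```python
# Rough consistency metrics over the steady-state window (t >= 1400 s),
# reading the same hypothetical per-second IOPS log used in the plotting sketch.
import statistics

steady = []
with open("hawk_iops.1.log") as log:
    for line in log:
        fields = [f.strip() for f in line.split(",")]
        t_s, value = int(fields[0]) / 1000.0, int(fields[1])
        if t_s >= 1400:
            steady.append(value)

avg = statistics.mean(steady)
cv = statistics.pstdev(steady) / avg  # lower means a tighter, more consistent band

print(f"min IOPS: {min(steady)}")
print(f"avg IOPS: {avg:.0f}")
print(f"coefficient of variation: {cv:.2%}")
```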

[Graph: 4KB random write IOPS vs. time from t=1400s onward, log scale. Same drives and views as above.]

[Graph: 4KB random write IOPS vs. time from t=1400s onward, linear scale up to 40K IOPS. Same drives and views as above.]

Comments

  • karasaj - Tuesday, June 25, 2013 - link

    Every time I kept reading that, I kept thinking about how no name suppliers would be bad... hard to remember it's from Toshiba, haha. Good luck to them. Nice review :) Power consumption seems obnoxiously low at load.
  • mcveigh - Tuesday, June 25, 2013 - link

    Strontium Hawk is the name of my spirit animal!
  • jmke - Tuesday, June 25, 2013 - link

    not the best product name I must admit... might not do well in Benelux ;)
    http://translate.google.com/?hl=en#nl/en/stront
  • Pessimism - Tuesday, June 25, 2013 - link

    Why poo-poo the manufacturer for taking a stand to end the 1000^3 garbage? Everyone should follow suit and label with formatted capacity.
  • hedleyroos - Tuesday, June 25, 2013 - link

    Your use of the word "poo-poo" is ironically funny to people who speak Afrikaans (and probably Dutch and Flemish) since "stront" means "poo". They have zero chance of succeeding in those markets.
  • Kristian Vättö - Tuesday, June 25, 2013 - link

    Like I said, it's pretty much useless for a small OEM like Strontium to try and change the industry.
  • piroroadkill - Tuesday, June 25, 2013 - link

    It should then say 240GiB...
  • dealcorn - Tuesday, June 25, 2013 - link

    OK. Laptops outsell desktops and you were unable to test with HIPM and DIPM enabled and no mention was made of DEVSLP support which is a big deal for any Haswell mobile device. By impairing the relevance of the review to typical use cases, someone properly earns a demerit.

    You need a Haswell mobile device to test with if you want to maintain relevance to the mainstream market. I would complain to my bosses that you need better hardware support to perform at the level they and readers expect of you.
  • Kristian Vättö - Tuesday, June 25, 2013 - link

    I'll be brutally honest here. Just because we are AnandTech doesn't mean that we have piles of laptops lying around, especially ones that are based on a chip that was released less than a month ago. All reviewers want Haswell-based devices at the moment, and the supply is extremely tight. Usually laptops only have a review time of two weeks or so because the same system may be sent to others once it's been returned. Obviously reviewers who are actually going to review the system are the first priority, so it'd be really hard for me to get one because 1) I would only be using it for one test and 2) I couldn't send it back anytime soon. In other words, the manufacturer wouldn't get much bang for their marketing $ because they wouldn't get much visibility, which is the reason review samples exist in the first place.

    I know there's the option of buying one but again, I'll be honest here: it would be around $1000 for just one test. I would definitely take one if Anand paid for it, but as far as I've understood, Anand isn't into spending thousands on test equipment (keep in mind that the financial situation isn't all that good for us, since it's usually the marketing budgets that get cut when bad times hit, so it's harder for us to get advertisers). There's a ton of stuff I'd love to have as they would really take our SSD tests (especially power related) to the next level, but I'm not the one making decisions.

    We have talked with Intel and ASUS and asked if there's any way HIPM/DIPM could be enabled on a desktop system (even via custom firmware) but as far as I know, they are not up for that.

    Trust me, I would take a Haswell laptop in a heartbeat if someone gave me one, but I hope you also understand that we don't get whatever we want from manufacturers.
  • dealcorn - Tuesday, June 25, 2013 - link

    Well handled and responsive to my concern. So, BK is telling his team we have a winning hand but we need to release products faster. Anandtech is arguably the premier web site consumers turn to for help understanding how new technology benefits them. Intel, however, makes no effort to ensure you have access to the hardware necessary to explain why consumers should value the new stuff. If Intel is trying to increase the cadence, they should step up their game. If Intel does not understand that, it lessens BK's credibility.
