Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason SSDs don't deliver consistent IO latency is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to keep operating at high speed. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance shows up as application slowdowns.
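The link between garbage collection and spare area can be illustrated with a toy flash-translation-layer model. This is a hypothetical sketch, not how any of these controllers actually work: logical pages are overwritten at random, and when the drive runs out of erased blocks, a greedy collector erases the block with the fewest valid pages, relocating the survivors. The relocation traffic (write amplification) is what steals IOPS from the host in steady state, and it grows as spare area shrinks.

```python
import random

def write_amplification(spare_frac, n_user=4096, ppb=64, n_host=200_000, seed=0):
    """Toy greedy-GC FTL: ratio of flash page writes to host page writes.

    spare_frac -- fraction of raw capacity reserved as spare area
    n_user     -- user-visible capacity in pages
    ppb        -- pages per erase block
    """
    rng = random.Random(seed)
    n_blocks = int(n_user / (1 - spare_frac)) // ppb
    blocks = [set() for _ in range(n_blocks)]  # valid logical pages per block
    where = {}                                 # logical page -> block index
    free = list(range(n_blocks))
    open_blk, fill = free.pop(), 0
    flash = 0

    for _ in range(n_host):
        lp = rng.randrange(n_user)             # random write across all LBAs
        if lp in where:                        # overwrite invalidates old copy
            blocks[where[lp]].discard(lp)
        if fill == ppb:                        # open block is full
            if free:
                open_blk = free.pop()
            else:
                # Greedy GC: erase the block with the fewest valid pages and
                # rewrite its survivors (counted as extra flash writes).
                open_blk = min(range(n_blocks), key=lambda b: len(blocks[b]))
                flash += len(blocks[open_blk])
            fill = len(blocks[open_blk])
        blocks[open_blk].add(lp)
        where[lp] = open_blk
        fill += 1
        flash += 1
    return flash / n_host
```

In this model, write amplification at ~7% spare area comes out several times higher than at 25%, which is one intuition for why the added over-provisioning runs in our graphs look so much more consistent.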

To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
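For reference, the test procedure above can be expressed as a job file in fio syntax. This is a sketch under the assumption that fio is an acceptable stand-in for our actual tooling; `/dev/sdX` is a placeholder for the device under test, and the run is destructive:

```ini
[precondition]
filename=/dev/sdX        ; secure-erased drive, filled sequentially first
rw=write
bs=128k
ioengine=libaio
direct=1

[consistency]
stonewall                ; wait for the fill to finish
filename=/dev/sdX
rw=randwrite             ; 4KB random writes (fio's buffers are incompressible)
bs=4k
iodepth=32               ; QD32
ioengine=libaio
direct=1
time_based=1
runtime=2000             ; just over half an hour
log_avg_msec=1000        ; record IOPS once per second
write_iops_log=consistency
; size=75%               ; uncomment to leave 25% of LBAs untouched (added OP)
```

The commented-out `size=75%` line is how the limited-LBA-range (25% spare area) runs would be approximated.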

Each of the three graphs has its own purpose. The first covers the entire duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but at different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the buttons below each graph to switch the source data.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph: IOPS over the full test duration, log scale — Samsung SSD XP941, Plextor M6e, Samsung SSD 840 Pro, SanDisk Extreme II, Samsung SSD 840 EVO mSATA; Default / 25% Spare Area]

The interface has never been the bottleneck when it comes to random write performance, especially in steady-state. Ultimately the NAND performance is the bottleneck, so without faster NAND we aren't going to see any major increases in steady-state performance.

The graphs above and below illustrate this: the XP941 isn't really any faster than the SATA 6Gbps based 840 Pro. Samsung has made some tweaks to its garbage collection algorithms, and overall IO consistency gets a nice bump over the 840 Pro, but it's still in line with what we've already seen from SATA 6Gbps SSDs. I wouldn't call the IO consistency outstanding, because the Plextor M6e does slightly better at the default over-provisioning (both drives have ~7%), but increase the over-provisioning and the XP941 shows its magic.

[Graph: IOPS at the start of steady-state operation (t=1400s), log scale — Samsung SSD XP941, Plextor M6e, Samsung SSD 840 Pro, SanDisk Extreme II, Samsung SSD 840 EVO mSATA; Default / 25% Spare Area]

[Graph: IOPS at the start of steady-state operation (t=1400s), linear scale — Samsung SSD XP941, Plextor M6e, Samsung SSD 840 Pro, SanDisk Extreme II, Samsung SSD 840 EVO mSATA; Default / 25% Spare Area]

TRIM Validation

Update 5/20: I got an email from one of our readers suggesting that the TRIM issue might be specific to Windows 7 and that Windows 8 should have functioning TRIM for PCIe SSDs. To test this, I installed Windows 8.1 on a secondary drive and ran our regular pre-conditioning (fill with sequential data, then torture with 4KB random writes for 60 minutes). To measure performance I had to rely on Iometer, as HD Tach didn't work properly under Windows 8. I ran the same 128KB sequential write test we usually run (QD=1, 100% LBA) but extended it to 10 minutes to ensure the results are steady and not inflated by burst performance.
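In fio syntax, a roughly equivalent sequential write run would look like the following (a sketch only; the actual numbers below come from Iometer, and `/dev/sdX` is a placeholder):

```ini
[seq-write-qd1]
filename=/dev/sdX        ; raw device, destructive
rw=write                 ; 128KB sequential writes
bs=128k
iodepth=1                ; QD1
ioengine=libaio
direct=1
time_based=1
runtime=600              ; 10 minutes, to get past any burst behavior
```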

Samsung SSD XP941 512GB - Iometer 128KB Sequential Write (QD1)

                           Clean        After TRIM
Samsung SSD XP941 512GB    607.7 MB/s   598.9 MB/s

And TRIM seems to function as it should, so it indeed looks like this is just a Windows 7 limitation, which is excellent news.

------------------------

To test TRIM, I took a secure erased XP941 and filled it with sequential data, followed by a 60-minute torture with 4KB random writes (QD32). After the torture, I TRIM'ed all user-accessible LBAs and ran HD Tach to produce the graph below:
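On Linux, the same sequence could be scripted directly; this is a hedged sketch (the review itself used Iometer and HD Tach under Windows), with `/dev/sdX` a placeholder for the device and every command destructive to its contents:

```shell
# Fill with sequential data, then torture with 4KB random writes (QD32) for 60 min
fio --name=fill --filename=/dev/sdX --rw=write --bs=128k --direct=1
fio --name=torture --filename=/dev/sdX --rw=randwrite --bs=4k --iodepth=32 \
    --ioengine=libaio --direct=1 --time_based=1 --runtime=3600

# TRIM every user-accessible LBA, then re-measure sequential write speed
blkdiscard /dev/sdX
fio --name=after-trim --filename=/dev/sdX --rw=write --bs=128k --direct=1 \
    --time_based=1 --runtime=600
```

If TRIM works, the post-`blkdiscard` run should return to roughly clean-drive throughput.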

It looks like TRIM isn't functional, although I'm not that surprised. I'm waiting to hear back from Samsung on whether this is an operating system limitation, because I've heard that Windows doesn't treat PCIe drives the same way even when they use the same AHCI software stack as the XP941 does. If that's true, we'll need either an update to Windows or some other solution.

On a Mac, TRIM support is listed as "yes" once TRIM is enabled for third-party drives using TRIM Enabler, though I didn't have time to verify that it actually works.

Comments

  • hulu - Thursday, May 15, 2014 - link

    Second page of review, fourth paragraph, states they were only able to acquire the 512 GB version, since as an OEM product Samsung isn't sampling the drive to media.

    Always helps if you read the entire story before commenting!
  • JoyTech - Friday, May 16, 2014 - link

    In that case, the reviewer better leave the 128 & 256 GB out or mention the exclusions on the first page, not 2nd page, 4th para; a good reviewer should make it easy for readers to access info, not make them act as lawyers and read the fine print!

    Also, I forgot to mention that their SSD bench marks have same problem (http://anandtech.com/bench/SSD/730), where they leave out Samsung SSD 840 EVO 250 GB, which is perhaps the best selling SSD in the market now. Very few people give a crap about 1 TB products, which is so proudly displayed in the bench!
  • Kristian Vättö - Saturday, May 17, 2014 - link

    The first page is just an introduction with no mention of the XP941 anyway. It wouldn't have fit the context there and in the end I at least like to think that the reader reads the whole review and not just a paragraph or two. It's rather hard to write something for a reader who reads a part here and part there.

    As for the 250GB 840 EVO, it is in the bench but we haven't run Storage Bench 2013 on it. That's because the test itself takes around 24 hours to complete and with the strict review times we don't usually have the time to test all available capacities.
  • critical_ - Thursday, May 15, 2014 - link

    Paradoxically, my problem with the M.2 form-factor is the number of sizes available to manufacturers. My Dell Venue 11 Pro tablet has a 2260 size 256GB SSD by Lite-On. There have been lots of firmware issues. The best thing would be to swap it out with a Samsung or Intel variant. However, there isn't much selection out there and 2260 is an oddball size. I'd like a 1TB mSATA SSD but it doesn't exist.

    Lenovo was smarter in this regard. Their Yoga 2 Pro uses the newer connector for the wireless card but the SSD is plain old mSATA. This allows me to pick from a variety of options without size concerns.

    I know I'm ranting and it is still early in the M.2 game but I hope manufacturers settle on providing high capacities in the 2242 and 2260 sizes with plates (like half mPCI-E to full mPCI-E) to allow them to fit in bigger slots.
  • Babar Javied - Thursday, May 15, 2014 - link

    Getting a smaller drive to fit into the bigger slot is easy. As you said, this can/should be easy with the use of "plates" or "expansion cards". So give it some time and you should have lots of options for your device. Should the 2260 size still remain an oddball, you can always get a 2242 size with extensions to help it fit into the bigger slots
  • dstarr3 - Thursday, May 15, 2014 - link

    All due respect to the awesome performance the new interface promises, I still feel like it's going to be a while before the 6Gbps bottleneck makes my computer feel frustratingly slow.
  • darwinosx - Thursday, May 15, 2014 - link

    Read the benchmarks or ask someone who has been using a PCIe SSD on a Mac for some time now. It's much faster and noticeable.
  • Calista - Friday, May 16, 2014 - link

    But it also depends highly on what you're doing. Maybe most people will accept a slight drop in performance in exchange for fewer compatibility issues and the option of moving the drive to a second machine down the line or mounting it in a USB enclosure.
  • Sabresiberian - Thursday, May 15, 2014 - link

    I have 2 issues with PCIe as a storage interface, at this point in time.

    First is that, for me, as a high-end gaming PC user, the number of PCIe lanes to the CPU is already limited. SATA lanes are not since I simply don't use that many storage devices. The second is cost. A few weeks ago I bought 2 480GB Sandisk Extreme II's for $300 each, and just saw them for $260 each listed on Newegg - so, for less than the cost of a 512GB XP941 I can get around twice the storage at similar speeds if I install using RAID 0 using current high-end SSD devices.

    Until Intel and/or AMD decides to provide more direct PCIe lanes and the cost comes down, PCIe SSDs are just an interesting upcoming technology, for me. :)
  • SirKnobsworth - Thursday, May 15, 2014 - link

    At least on an Intel platform, you wouldn't normally be using lanes from the CPU for a storage device (which are usually dedicated to graphics) - you'd be using lanes from the chipset (of which there are usually 8).
