Random Read/Write Speed

This test reads/writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time.

I've had to run this test two different ways thanks to the way the newer controllers handle write alignment. Without a manually aligned partition, Windows XP executes writes on sector aligned boundaries while most modern OSes write with 4K alignment. Some controllers take this into account when mapping LBAs to page addresses, which generates additional overhead but makes for relatively similar performance regardless of OS/partition alignment. Other controllers skip the management overhead and just perform worse under Windows XP without partition alignment as file system writes are not automatically aligned with the SSD's internal pages.
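
The difference between the two runs comes down to which offsets the writes are allowed to land on. The following Python sketch is purely illustrative and is not the Iometer configuration used for the charts: it is single-threaded rather than three concurrent IOs, runs briefly, uses buffered rather than direct I/O, and the file name and duration are made-up values.

```python
# Illustrative sketch of a 4KB random-write test over an 8GB span.
# ALIGNMENT = 512 mimics an unaligned Windows XP partition; set it to
# 4096 for the 4K-aligned variant used by Windows 7 / OS X 10.5+.
import os
import random
import time

SPAN = 8 * 1024**3        # 8GB region of the drive being exercised
IO_SIZE = 4096            # 4KB transfers
ALIGNMENT = 512           # 512-byte alignment; use 4096 for the aligned run
DURATION = 10             # seconds (the real test runs for 3 minutes)
PATH = "testfile.bin"     # hypothetical scratch file on the drive under test

def random_offset(alignment):
    # Random offset inside the span, rounded down to the requested alignment.
    slots = (SPAN - IO_SIZE) // alignment
    return random.randrange(slots) * alignment

def run():
    # Incompressible payload; an all-zero buffer would flatter a compressing
    # controller such as SandForce's.
    buf = os.urandom(IO_SIZE)
    fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
    os.ftruncate(fd, SPAN)
    written = 0
    start = time.time()
    while time.time() - start < DURATION:
        os.pwrite(fd, buf, random_offset(ALIGNMENT))
        written += IO_SIZE
    os.close(fd)
    print(f"{written / (time.time() - start) / 1e6:.1f} MB/s average")

if __name__ == "__main__":
    run()
```

With ALIGNMENT = 512, roughly seven out of every eight writes start inside a 4KB page rather than on a page boundary, which is exactly the case the two controller designs handle differently.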

First up is my traditional 4KB random write test; each write here is aligned to 512-byte sectors, similar to how Windows XP might write data to a drive:

Update: Random write performance of the drive we reviewed may change with future firmware updates. Read here to find out more.

[Chart: 4KB Random Write - MB/s]

Again, we have the Force and Vertex LE running shoulder to shoulder. The situation doesn't change at all when we look at 4K-aligned writes (similar to how Windows 7 and OS X 10.5/10.6 would behave):

[Chart: 4K Aligned - 4KB Random Write - MB/s]

[Chart: 4KB Random Read - MB/s]

Random read speed rounds out our tests and shows us no difference between the SF-1200 and earlier SF-1500 derived SSDs.

Comments

  • Anand Lal Shimpi - Wednesday, April 14, 2010 - link

    That I'm not sure of; the 2008 Iometer build is supposed to use a fairly real-world-inspired data set (Intel apparently helped develop the random algorithm), and the performance appears to be reflected in our real-world tests (both PCMark Vantage and our Storage Bench).

    That being said, SandForce is apparently working on their own build of Iometer that lets you select from all different types of source data to really stress the engine.

    Also keep in mind that the technology at work here is most likely more than just compression/data deduplication.

    Take care,
    Anand
  • keemik - Wednesday, April 14, 2010 - link

    Call me anal, but I am still not happy with the response ;)
    Maybe the first 4k block is filled with random data, but then that block is used over and over again.

    That random read/write performance is too good to be true.
  • Per Hansson - Wednesday, April 14, 2010 - link

    Just curious about the missing capacitor: will there not be a big risk of data loss in case of a power outage?

    Do you know what design changes were done to get rid of the capacitor? Were any additional components other than the capacitor removed?

    Because it can be bought in low quantities for a quite OK retail price of £16.50 here:
    http://www.tecategroup.com/ultracapacitors/product...
  • bluorb - Wednesday, April 14, 2010 - link

    A question: if the controller is using lossless compression in order to write less data, is it not possible to say that the drive's work volume is determined by the type of information written to it?

    Example: if user X's data can be routinely compressed at a 2-to-1 ratio, then it can be said that for this user the drive's work volume is 186GB and the cost per GB is $2.20.

    Am I on to something or completely off track?
  • semo - Wednesday, April 14, 2010 - link

    This compression is not detectable by the OS. As the name suggests (DuraWrite), it is there to reduce wear on the drive, which can also give better performance, but not extra capacity.
  • ptmixer - Wednesday, April 14, 2010 - link

    I'm also wondering about the capacity on these SandForce drives. It seems the actual capacity is variable depending on the type of data stored. If the drive has 128 GB of flash, 93.1 GB usable after spare area, then that must be the amount of compressed/thinned data you can store, so the amount of 'real' data should be much more, thereby helping the price/GB of the drive.

    For example, if the drive is partly used and your OS says it has 80 GB available, then you store 10 GB of compressible data on it, won't it then report that it perhaps still has 75 GB available (rather than 70 GB as on a normal drive)? Anand -- help us with our confusion!

    P.S. - thanks for all the great SSD articles! Could you also continue to speculate on how well a drive will work on a non-TRIM-enabled system, like OS X, or as an ESXi datastore?
  • JarredWalton - Wednesday, April 14, 2010 - link

    I commented on this in the "This Just In" article, but to recap:

    In terms of pure area used, Corsair sets aside 27.3% of the available capacity. However, with DuraWrite (i.e. compression) they could actually have even more spare area than 35GiB. You're guaranteed 93GiB of storage capacity, and if the data happens to compress better than average you'll have more spare area left (and more performance), while with data that doesn't compress well (e.g. movies and JPG images) you'll get less spare area remaining.

    So even at 0% compression you'd still have at least 35GiB of spare and 93GiB of storage, but with an easily achievable 25% compression average you would have as much as ~58GiB of spare area (45% of the total capacity would be "spare"). If you get an even better 33% compression you'd have 66GiB of spare area (51% of total capacity), etc.
  • KaarlisK - Wednesday, April 14, 2010 - link

    Just resize the browser window.
    Margins won't help if you have a 1920x1080 screen anyway.
  • RaistlinZ - Wednesday, April 14, 2010 - link

    I don't see a reason to opt for this over the Crucial C300 drive, which performs better overall and is quite a bit cheaper per GB. Yes, these use less power, but I hardly see that as a determining factor for people running high-end CPUs and video cards anyway.

    If they can get the price down to $299 then I may give it a look. But $410 is just way too expensive considering the competition that's out there.
  • Chloiber - Wednesday, April 14, 2010 - link

    I did test it. If you create the test file, it can be compressed to practically 0 percent of its original size.
    But if you write sequential or random data to the file, you can't compress it at all. So I think that Iometer uses random data for the tests. Of course this is a critical point when testing such drives, and I am sure that Anand checked it too before running the tests. I hope so at least ;)
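
To put numbers on the capacity discussion above (bluorb's 2:1 example and JarredWalton's spare-area figures), here is a small sketch of the arithmetic. The 128GiB raw / 93GiB exposed figures come from the comments; the compression ratios are assumed purely for illustration and are not measured values.

```python
# Spare area left on a SandForce drive once the exposed capacity is full,
# for a few assumed DuraWrite compression ratios. The OS-visible capacity
# stays fixed at 93GiB regardless; compression only grows the spare area.
TOTAL_GIB = 128      # raw flash
EXPOSED_GIB = 93     # capacity the OS sees

def spare_area(savings):
    # 'savings' is the fraction shaved off by compression, e.g. 0.25 means
    # the data occupies 75% of its logical size in flash.
    stored = EXPOSED_GIB * (1 - savings)
    return TOTAL_GIB - stored

for savings in (0.0, 0.25, 0.33, 0.50):
    spare = spare_area(savings)
    print(f"{savings:4.0%} savings -> ~{spare:.0f}GiB spare "
          f"({spare / TOTAL_GIB:.0%} of raw flash)")
```

At 0% savings this reproduces the guaranteed 35GiB of spare area, and at 25% and 33% it lands on the ~58GiB and ~66GiB figures quoted above. As semo notes, none of this shows up as extra user capacity.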
