AnandTech Storage Bench

The first test in our benchmark suite is a light usage case. The Windows 7 system is loaded with Firefox, Office 2007 and Adobe Reader, among other applications. With Firefox we browse sites like Facebook, AnandTech and Digg. Outlook is also running, and we use it to check email and to create and send a message with a PDF attachment. Adobe Reader is used to view some PDFs. Excel 2007 is used to create a spreadsheet and graphs and save the document; the same goes for Word 2007. We open and step through a presentation in PowerPoint 2007 received as an email attachment before saving it to the desktop. Finally we watch a bit of a Firefly episode in Windows Media Player 11.

There's some level of multitasking going on here, but it's not unreasonable by any means. Generally the application tasks proceed linearly, with the exception of things like web browsing, which may happen in between the other tasks.

The recording is played back on all of our drives here today. Remember that we're isolating disk performance: all we're doing is playing back every single disk access that happened in that ~5 minute period of usage. The light workload is composed of 37,501 reads and 20,268 writes. Over 30% of the IOs are 4KB, 11% are 16KB, 22% are 32KB and approximately 13% are 64KB in size. Less than 30% of the operations are purely sequential in nature. Average queue depth is 6.09 IOs.
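
For the curious, here is a minimal sketch of how a recorded trace could be tallied into the read/write, size and sequentiality breakdown quoted above. The tuple-based trace format and the `summarize` helper are illustrative assumptions, not the actual capture tool used for the benchmark.

```python
from collections import Counter

# Hypothetical trace format: (op, offset_bytes, size_bytes) tuples. This is
# not the real capture tool, just an illustration of how a breakdown like
# the one above can be tallied from a raw disk trace.
def summarize(trace):
    total = len(trace)
    reads = sum(1 for op, _, _ in trace if op == "R")

    # Percentage of IOs at each transfer size.
    size_mix = {f"{size // 1024}KB": round(100 * count / total, 1)
                for size, count in Counter(s for _, _, s in trace).items()}

    # Count an access as sequential if it starts exactly where the
    # previous access ended.
    sequential = sum(1 for (_, prev_off, prev_sz), (_, off, _)
                     in zip(trace, trace[1:]) if off == prev_off + prev_sz)

    return {"reads": reads, "writes": total - reads,
            "size_mix_%": size_mix,
            "sequential_%": round(100 * sequential / total, 1)}

example = [("R", 0, 4096), ("R", 4096, 4096), ("W", 1_048_576, 65536)]
print(summarize(example))
```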

The performance results are reported in average I/O Operations per Second (IOPS):

That's right. A pair of X25-Vs in RAID-0 offers better performance in our light workload than Crucial's RealSSD C300, a $799 drive. The performance scaling is better than linear, but that's a side effect of the increase in capacity. Remember that Intel's controller uses any available space on the SSD as spare area to keep write amplification at a minimum. Our storage bench is based on a ~34GB image, which doesn't leave much room for the 40GB X25-V to keep write amplification under control. With two drives our total capacity is 74.5GB, which is more than enough for this short workload. With the capacity cap removed, the X25-Vs scale very well. The array isn't nearly twice as fast as an X25-M G2, but it is much faster than a single drive from Intel.
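
To put numbers on that spare-area argument, here is a quick back-of-the-envelope calculation using the capacities cited above. It deliberately ignores the exact decimal/binary gigabyte conversion and any factory-reserved NAND, so treat the percentages as rough.

```python
# Rough spare-area arithmetic behind the scaling argument above. Capacities
# are the usable figures cited in the article; this ignores decimal/binary
# GB rounding and any factory-reserved NAND.
image_gb = 34.0  # approximate size of the benchmark image

configs = {
    "1x X25-V (40GB)": 40.0,
    "2x X25-V RAID-0 (74.5GB)": 74.5,
    "1x X25-M G2 (80GB)": 80.0,
}

for name, capacity_gb in configs.items():
    free_gb = capacity_gb - image_gb
    print(f"{name}: ~{free_gb:.1f}GB free "
          f"({100 * free_gb / capacity_gb:.0f}% of capacity) "
          f"left for the controller to use as dynamic spare area")
```

The single 40GB drive is left with only around 6GB of dynamic spare area, while the two-drive array has roughly 40GB to play with, which is the capacity effect described above.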

If there's a light usage case, there's bound to be a heavy one. In this test we have Microsoft Security Essentials running in the background with real-time virus scanning enabled, and we also perform a quick scan in the middle of the test. Firefox, Outlook, Excel, Word and PowerPoint are all used the same way they were in the light test. We add Photoshop CS4 to the mix, opening a bunch of 12MP images, editing them, then saving them as highly compressed JPGs for web publishing. Windows 7's picture viewer is used to view a bunch of pictures on the hard drive. We use 7-Zip to create and extract .7z archives. Downloading is also prominently featured in our heavy test; we download large files from the Internet during portions of the benchmark, and use uTorrent to grab a couple of torrents. Some of the applications in use are installed during the benchmark, and Windows updates are installed as well. Towards the end of the test we launch World of Warcraft, play for a few minutes, then delete the folder. This test also takes into account all of the disk accesses that happen while the OS is booting.

The benchmark is 22 minutes long and it consists of 128,895 read operations and 72,411 write operations. Roughly 44% of all IOs were sequential. Approximately 30% of all accesses were 4KB in size, 12% were 16KB in size, 14% were 32KB and 20% were 64KB. Average queue depth was 3.59.
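
As a rough sanity check on those figures, the sketch below just divides the total operation count by the length of the recorded session, which works out to roughly 150 IOPS on average. The real access pattern is far burstier than that average suggests, which is why the 3.59 average queue depth is the more telling number.

```python
# Average I/O rate implied by the heavy trace: total operations divided by
# the length of the 22-minute recorded session.
reads, writes = 128_895, 72_411
duration_s = 22 * 60

total_ios = reads + writes
print(f"{total_ios:,} IOs over {duration_s}s "
      f"-> {total_ios / duration_s:.0f} IOPS on average, "
      f"{100 * reads / total_ios:.0f}% of them reads")
```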

We see the same better-than-linear scaling here, thanks to the increase in capacity offered by RAIDing two of these drives together. Overall performance is great: roughly 91% better than a single X25-M G2.

The gaming workload is made up of 75,206 read operations and only 4,592 write operations. Only 20% of the accesses are 4KB in size, nearly 40% are 64KB and 20% are 32KB. A whopping 69% of the IOs are sequential, meaning this is predominantly a sequential read benchmark. The average queue depth is 7.76 IOs.

As we saw in our sequential read tests, the X25-Vs in RAID-0 can do very well in sequential read workloads. Our game loading test has the X25-V RAID array beating even Crucial's 6Gbps RealSSD C300.

Comments

  • Makaveli - Tuesday, March 30, 2010 - link

    Why are so many of you having difficulty understanding this? YOU DO NOT GET TRIM SUPPORT WITH THE NEW INTEL DRIVERS IF YOU HAVE A RAID ARRAY BUILT OF JUST SSDs!

    Wherever you guys are reading otherwise, stop: it's wrong!

  • vol7ron - Tuesday, March 30, 2010 - link

    Finally a RAID! Thank you, thank you, thank you. Just a few days too late, since I already bought the 80GB, but this still makes your review a little more meaningful - it is essentially the equivalent of showing overclocks for CPUs, and even more meaningful since hard drives are a bottleneck.

    Advice:
    RAID-0s see a greater impact with 3 or more drives. I think the impact grows exponentially with the number of drives in the array, not just seemingly doubling. I know TRIM is not supported, but if you could get one more 40GB drive and also include its impact, that would be nice - I would consider anything more than 3 drives in the array purely academic; 3 or fewer drives is a practical (and realistic) system setup.


    Notes to others:
    I saw the $75 discount on Newegg for the 80GB X25-M G2 (@ $225) and decided to grab it, since one of my 74GB Raptors finally failed in RAID. This discount (or price drop) is most likely due to the $125 40GB version. I also picked up Win7 Ultimate x64 to give it a try.
  • cliffa3 - Tuesday, March 30, 2010 - link

    Anand,

    On an install of Win7, I'm guessing a good bit of random writes occur.

    How much longer would you stave off the performance penalty due to having no TRIM with RAID if you took an image of the drive after installation, secure erased, and restored the image?

    Please correct me if I'm wrong in assuming restoring an image would be entirely sequential.

    I would probably image it anyway, but just trying to get a guess on what you think the impact would be in the above scenario to see if I should immediately secure erase and restore.

    I also would be interested in how much improvement you get by adding another drive to the array in RAID 0...is it linear?
  • GullLars - Tuesday, March 30, 2010 - link

    Some of the power users I know have used the method of secure erase + image to restore performance if/when it degrades. Mostly they do it after heavy benchmarking, or once every few months on their workstations (VMware, databases and the like).

    RAID scales linearly as long as the controller can keep up. Those are the raw performance numbers; the real-life impact is not linear and shows diminishing returns, because storage performance divides into two major categories: throughput and access time. Throughput scales linearly, while access time stays unchanged. That said, average access time for larger blocks and under heavy load takes less of a hit in RAID.

    Intel's southbridges scale to roughly 600-650MB/s, and I've seen 400+ MB/s done at 4KB random.

    As for random read scaling in RAID, you have the formula IOPS = {Queue Depth} / {average access time}.
    Average access time climbs more gently with queue depth the more units you put in the RAID, but at low QD (1-4) there is little to gain for blocks smaller than the stripe size. No matter how many SSDs you add to the RAID, you will never get scaling beyond QD * the IOPS you see at QD 1.
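
A toy sketch of the scaling model described in the comment above: aggregate throughput grows with the number of drives in RAID-0, per-IO access time does not, and small random IOPS is capped at roughly queue depth divided by average access time. The per-drive throughput and access time figures are made up purely for illustration.

```python
# Toy model of the RAID-0 scaling argument: throughput scales with drive
# count, per-IO access time does not, and small random IOPS is bounded by
# roughly (queue depth / average access time).
# The per-drive figures below are made-up illustrative values.
PER_DRIVE_MB_S = 200.0   # assumed sequential throughput per drive
ACCESS_MS = 0.1          # assumed average access time per IO

def transfer_ms(size_kb, n_drives):
    # Fixed access latency plus transfer time at the aggregate bandwidth.
    return ACCESS_MS + size_kb / (PER_DRIVE_MB_S * n_drives * 1024) * 1000

def random_iops_ceiling(queue_depth):
    # QD outstanding requests, each completing in ACCESS_MS: this bound is
    # independent of how many drives are in the array.
    return queue_depth / (ACCESS_MS / 1000)

for n in (1, 2, 3):
    print(f"{n} drive(s): 4KB read ~{transfer_ms(4, n):.3f} ms, "
          f"1MB read ~{transfer_ms(1024, n):.2f} ms")

for qd in (1, 4, 32):
    print(f"QD {qd}: random IOPS ceiling ~{random_iops_ceiling(qd):,.0f}")
```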
  • ThePooBurner - Tuesday, March 30, 2010 - link

    Check out this video of 24 SSDs in a RAID-0 array. Mind blowing.

    http://www.youtube.com/watch?v=96dWOEa4Djs
  • GullLars - Tuesday, March 30, 2010 - link

    Actually, that RAID has BAD performance relative to the number of SSDs.
    You are blinded by the sequential read numbers. Those Samsung SSDs have horrible IOPS performance, and the cost of the setup in the video relative to its performance is just outright awful.

    You can get the same read bandwidth with 12 X25-Vs, and at the same time 10x the IOPS performance.
    Or if you choose to go for the C300, eight C300s will beat that setup in every test and performance metric you can think of.

    Here is a YouTube video of a Kingston V 40GB launching 50 apps, for you to compare to the Samsung setup:
    http://www.youtube.com/watch?v=sax5wk300u4&fea...

    I will also point out that my 2 Mtron 7025 SSDs, produced in Dec 2007, can open the entire MS Office 2007 suite in 1 second, running from an SB650 with prefetch/superfetch deactivated.
  • Slash3 - Tuesday, March 30, 2010 - link

    Speaking of which, is there a "report abuse" button for comments on this new site design? I didn't notice one while fumbling around a bit.
  • waynethepain - Tuesday, March 30, 2010 - link

    Would defragging the SSDs mitigate some of the build up of garbage?
  • 7Enigma - Tuesday, March 30, 2010 - link

    You do not defrag an SSD.
  • GullLars - Tuesday, March 30, 2010 - link

    You don't need to defrag an SSD, but you can. Doing so does not affect the physical placement of the files, but it will clean up their LBA fragmentation in the file tables. Since most SSDs can reach full bandwidth (or close to it) at 32-64KB random reads, you need a seriously fragmented system before you will notice anything. Almost no files get fragmented into pieces that small, and even if you had 50 files each in 1000 fragments of 4KB, the SSD would read each one in a fraction of a second when it's needed.

    It doesn't hurt to defrag if you notice a few files in hundreds or thousands of fragments, and the lifespan of the SSD will be unaffected by one defrag a week, but defragging causes spikes of random writes, which may cause a _temporary_ performance degradation if you don't have TRIM.
