AnandTech Storage Bench

The first test in our benchmark suite is a light usage case. The Windows 7 system is loaded with Firefox, Office 2007 and Adobe Reader, among other applications. In Firefox we browse pages on Facebook, AnandTech, Digg and other sites. Outlook is also running, and we use it to check email as well as create and send a message with a PDF attachment. Adobe Reader is used to view some PDFs. Excel 2007 is used to create a spreadsheet and graphs and save the document; the same goes for Word 2007. We open and step through a PowerPoint 2007 presentation received as an email attachment before saving it to the desktop. Finally we watch a bit of a Firefly episode in Windows Media Player 11.

There’s some level of multitasking going on here, but it’s not unreasonable by any means. Generally the application tasks proceed linearly, with the exception of things like web browsing, which may happen in between the other tasks.

The recording is played back on all of our drives here today. Remember that we’re isolating disk performance; all we’re doing is playing back every single disk access that happened in that ~5 minute period of usage. The light workload is composed of 37,501 reads and 20,268 writes. Over 30% of the IOs are 4KB, 11% are 16KB, 22% are 32KB and approximately 13% are 64KB in size. Less than 30% of the operations are purely sequential in nature. Average queue depth is 6.09 IOs.
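
To make the composition easier to see at a glance, here is a quick sketch that recomputes the headline ratios from those counts. The numbers are the ones quoted above; the function and field names are our own illustration, and the same summary can be run for the heavy and gaming traces below.

    # Illustrative only: recompute the headline ratios of a workload from its
    # IO counts and size mix. The names here are ours, not part of any tool.
    def summarize(name, reads, writes, size_mix):
        total = reads + writes
        print(f"{name}: {total} IOs, {reads / total:.1%} reads / {writes / total:.1%} writes")
        for size_kb, share in sorted(size_mix.items()):
            print(f"  {size_kb:>3}KB transfers: {share:.0%} of all IOs")

    # Light workload as described above (remaining IOs are other sizes)
    summarize("light", reads=37501, writes=20268,
              size_mix={4: 0.30, 16: 0.11, 32: 0.22, 64: 0.13})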

The performance results are reported in average I/O Operations per Second (IOPS):

That's right. A pair of X25-Vs in RAID-0 offers better performance in our light workload than Crucial's RealSSD C300, a $799 drive. The performance scaling is more than perfect, but that's a side effect of the increase in capacity. Remember that Intel's controller uses any available space on the SSD as spare area to keep write amplification to a minimum. Our storage bench is based on a ~34GB image, which doesn't leave the 40GB X25-V much room to keep write amplification under control. With two drives our total capacity is 74.5GB, which is more than enough for this short workload. With the capacity cap removed, the X25-Vs scale very well: it's not nearly twice the performance of an X25-M G2, but it's much faster than a single drive from Intel.
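
To put rough numbers on the spare-area argument, here is a short worked example. The formatted capacities below are approximations, not measured values.

    # Rough spare-area math for the write amplification argument above.
    # A 40GB X25-V formats to roughly 37.3GB; two striped give the 74.5GB quoted.
    def spare_area(formatted_gb, image_gb=34.0):       # ~34GB storage bench image
        spare = formatted_gb - image_gb
        return spare, spare / formatted_gb

    for label, capacity_gb in [("single X25-V", 37.3), ("X25-V RAID-0", 74.5)]:
        spare, frac = spare_area(capacity_gb)
        print(f"{label}: {spare:.1f}GB spare area ({frac:.0%} of the available space)")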

If there’s a light usage case there’s bound to be a heavy one. In this test we have Microsoft Security Essentials running in the background with real-time virus scanning enabled, and we also perform a quick scan in the middle of the test. Firefox, Outlook, Excel, Word and PowerPoint are all used the same way they were in the light test. We add Photoshop CS4 to the mix, opening a bunch of 12MP images, editing them, then saving them as highly compressed JPGs for web publishing. Windows 7’s picture viewer is used to view a bunch of pictures on the hard drive. We use 7-zip to create and extract .7z archives. Downloading is also prominently featured in our heavy test: we download large files from the Internet during portions of the benchmark, and use uTorrent to grab a couple of torrents. Some of the applications in use are installed during the benchmark, and Windows updates are installed as well. Towards the end of the test we launch World of Warcraft, play for a few minutes, then delete the folder. This test also takes into account all of the disk accesses that happen while the OS is booting.

The benchmark is 22 minutes long and it consists of 128,895 read operations and 72,411 write operations. Roughly 44% of all IOs were sequential. Approximately 30% of all accesses were 4KB in size, 12% were 16KB in size, 14% were 32KB and 20% were 64KB. Average queue depth was 3.59.

We see the same better-than-perfect scaling here, thanks to the increase in capacity offered by RAIDing two of these drives together. The overall performance is great: we're at around 91% better performance than a single X25-M G2.

The gaming workload is made up of 75,206 read operations and only 4,592 write operations. Only 20% of the accesses are 4KB in size, nearly 40% are 64KB and 20% are 32KB. A whopping 69% of the IOs are sequential, meaning this is predominantly a sequential read benchmark. The average queue depth is 7.76 IOs.
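
As a point of reference, an access is typically counted as sequential when it begins where the previous one ended. Below is a minimal sketch of that kind of classification; it is illustrative only and not necessarily the exact heuristic behind the figures above.

    # Classify trace entries as sequential if each IO starts at the byte offset
    # where the previous IO ended. Purely illustrative; the names are made up.
    def sequential_fraction(trace):
        """trace: list of (offset_bytes, length_bytes) tuples in issue order."""
        sequential, prev_end = 0, None
        for offset, length in trace:
            if offset == prev_end:
                sequential += 1
            prev_end = offset + length
        return sequential / len(trace)

    # Three back-to-back 64KB reads followed by one random 4KB read
    demo = [(0, 65536), (65536, 65536), (131072, 65536), (10000000, 4096)]
    print(f"{sequential_fraction(demo):.0%} sequential")   # -> 50%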

As we saw in our sequential read tests, the X25-Vs in RAID-0 can do very well in sequential read workloads. Our game loading test has the X25-V RAID array beating even Crucial's 6Gbps RealSSD C300.

Comments

  • rhvarona - Tuesday, March 30, 2010 - link

    Some Adaptec Series 2, Series 5 and Series 5Z RAID controller cards allow you to add one or more SSDs as a cache for your array.

    So, for example, you can have 4x1TB SATA disks in RAID 10, and 1 32GB Intel SLC SSD as a transparent cache for frequently accessed data.

    The feature is called MaxIQ. One card that has it is the Adaptec 2405 which retails for about $250 shipped.

    The kit is the Adaptec MaxIQ SSD Cache Performance Kit, but it ain't cheap! Retails for about $1,200. Works great for database and web servers though.
  • GDM - Tuesday, March 30, 2010 - link

    Hi, I was under the impression that Intel has new RAID drivers that can pass through the TRIM command. Can you please rerun the test if that is true? Also, can you test the 160GB drives in RAID?

    And although benchmarks are nice, do you really notice it during normal use?

    Regards,
  • Makaveli - Tuesday, March 30, 2010 - link

    You cannot pass TRIM to an SSD RAID array even with the new Intel drivers.

    The drivers will allow you to pass TRIM to a single SSD used alongside an HDD RAID setup.

  • Roomraider - Wednesday, March 31, 2010 - link

    Wrong, wrong, wrong!
    The new drivers do in fact pass TRIM to RAID-0 in Windows 7. My two 160GB G2s striped in RAID-0 now have TRIM running on the array (verified via the Windows 7 TRIM command). According to Intel, this works with any TRIM-enabled SSD. No RAID 5 support yet.
  • jed22281 - Friday, April 2, 2010 - link

    What, so Anand is wrong when he speaks to Intel engineers directly?
    I've seen several other threads where this claim has since been quashed.
  • WC Annihilus - Tuesday, March 30, 2010 - link

    Well this is definitely a test I was looking for. I just bought 3 of the Kingston drives off Amazon cheap and was trying to decide whether to RAID them or use them separately for OS/apps and games. Would a partition of 97.5GB (so about 14GB unpartitioned) be good enough for a wear-leveling buffer?
  • GullLars - Tuesday, March 30, 2010 - link

    Yes, it should be. You can consider making it 90GiB (gibibytes, 90*2^30 bytes) if you anticipate a lot of random writes and not a lot of larger files going in and out regularly.

    You will likely get about 550MB/s sequential read and enough IOPS for anything you may do (unless you start doing databases, VMware and the like). 120MB/s of sustained, consistent write should also keep you content.

    Tip: use a small stripe size; even a 16KB stripe will work without fuss on these controllers.
  • WC Annihilus - Tuesday, March 30, 2010 - link

    Main reason I want to go with a 97.5GB partition is because that's the size of my current OS/apps/games partition. It's got about 21GB free, which I wanted to keep in case I wanted to install more games.

    In regards to stripe size, most of the posts I've seen suggest 64KB or 128KB are the best choices. What difference does this make? Why do you suggest smaller stripe sizes?

    Plans are for the SSDs to be OS/apps/games, with general data going on a pair of 1.5TB hard drives. Usage is mainly gaming, browsing, and watching videos, with some programming and the occasional fiddling with DVDs and video editing.
  • GullLars - Tuesday, March 30, 2010 - link

    Then you should be fine with a 97.5GB partition.
    The reason smaller is better when it comes to stripe size on SSD RAIDs has to do with the nature of the storage medium combined with the mechanisms of RAID. I will explain briefly here, and you can read up more for yourself if you are curious.

    Intel SSDs can do 90-100% of their sequential bandwidth with 16-32KB blocks @ QD 1, and at higher queue depths they can reach it with 8KB blocks. Hard disks, on the other hand, reach their maximum bandwidth around 64-128KB sequential blocks and do not benefit noticeably from increasing the queue depth.

    When you RAID-0, files larger than the stripe size get split up into chunks equal in size to the stripe size and distributed among the units in the RAID. Say you have a 128KB file (or want to read a 128KB chunk of a larger file): this will get divided into 8 pieces when the stripe size is 16KB, and with 3 SSDs in the RAID this means 3 chunks for 2 of the SSDs and 2 chunks for the third. When you read this file, you will read 16KB blocks from all 3 SSDs at queue depths 2 and 3. If you check out ATTO, you will see that 2x 16KB @ QD 3 + 1x 16KB @ QD 2 add up to higher bandwidth than 1x 128KB @ QD 1.
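
    To put rough numbers on it, here is a small Python sketch of that splitting (purely illustrative, the names are mine):

        # Illustrative RAID-0 striping: how one request is split into stripe-sized
        # chunks and spread across the member drives.
        def chunks_per_drive(request_kb, stripe_kb, drives):
            full, rest = divmod(request_kb, stripe_kb)
            chunks = full + (1 if rest else 0)
            base, extra = divmod(chunks, drives)
            # 'extra' drives get one chunk more than the others
            return [base + 1] * extra + [base] * (drives - extra)

        print(chunks_per_drive(128, 16, 3))   # -> [3, 3, 2], i.e. QD 3/3/2 as above
        print(chunks_per_drive(16, 16, 3))    # -> [1, 0, 0], fits in a single stripe
        print(chunks_per_drive(1024, 16, 3))  # -> [22, 21, 21], large reads load every drive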

    The bandwidth when reading or writing files equal to or smaller than the stripe size will not be affected by the RAID. The sequential bandwidth of 1MB-or-larger blocks will also be the same regardless of stripe size, since data that large is striped across all drives in chunks that are either big enough, or numerous enough, for each SSD to reach its maximum bandwidth.

    So to summarize, the benefits and drawbacks of using a small stripe size:
    + Higher performance for files/blocks above the stripe size while still relatively small (<1MB)
    - Additional computational overhead from managing more blocks in flight, although this is negligible for RAID-0.
    The added performance on small-to-medium files/blocks from a small stripe size can make a difference for OS/apps, and can be measured in PCMark Vantage.
  • WC Annihilus - Tuesday, March 30, 2010 - link

    Many thanks for the explanation. I may just go ahead and fiddle with various configurations and choose which feels best to me.
