One Tough Act to Follow

What have I gotten myself into? The SSD Anthology I wrote back in March was read over 2 million times. Microsoft linked it, Wikipedia linked it, my esteemed colleagues in the press linked it, Linus freakin Torvalds linked it.

The Anthology took me six months to piece together; I wrote and re-wrote parts of that article more times than I'd care to admit. And today I'm charged with the task of producing its successor. I can't do it.

The article that started all of this was the Intel X25-M review. Intel gave me gold with that drive; the article wrote itself: the X25-M was awesome, and everything else on the market was crap.


Intel's X25-M SSDs: The drives that started a revolution

The Anthology all began with a spark: the SSD performance degradation issue. It took a while to put together, but the concept and the article were handed to me on a silver platter: just use an SSD for a while and you’ll spot the issue. I just had to do the testing and writing.


OCZ's Vertex: The first Indilinx drive I reviewed, and the drive that gave us hope that there might be another contender.

But today, as I write this, the words just aren't coming to me. The material is all there, but it just seems so mature and, at the same time, so clouded and so done. We've found the undiscovered country, we've left no stone unturned, everyone knows how these things work; now SSD reviews join the rest as a bunch of graphs and analysis, hopefully with witty commentary in between.

It's a daunting, no, deflating task to write what I view as the third part in this trilogy of articles. JMicron is all but gone from the market for now, Indilinx came and improved (a lot) and TRIM is nearly upon us. Plus, we all know how trilogies turn out. Here's hoping that this one doesn't have Ewoks in it.

What Goes Around, Comes Around

No, we're not going back to the stuttering crap that shipped for months before Intel released the X25-M last year, but we are going back to an older way of looking at SSD performance.

In my X25-M review the focus was on why the mainstream drives at the time stuttered and why the X25-M didn't. Performance degradation over time didn't matter because all of the SSDs on the market were slow out of the box. And as I later showed, the pre-Intel MLC SSDs didn't perform worse over time; they sucked all of the time.

Samsung and Indilinx emerged with high-performance, non-stuttering alternatives, and then we once again had to thin the herd. Simply not stuttering wasn't enough; a good SSD had to maintain a reasonable amount of performance over the life of the drive.

The falling performance was actually a side effect of the way NAND flash works. You write in pages (4KB) but you can only erase in blocks (128 pages, or 512KB); thus SSDs don't erase data when you delete it, only when they run out of free blocks to write to internally. When that time comes, you run into a nasty situation called the read-modify-write. Here, even to write just 4KB, the controller must read an entire block (512KB), update the single page, and write the entire block back out. Instead of writing 4KB, the controller actually has to write 512KB - a much slower operation.
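To put numbers on that, here's a minimal sketch in Python (my own illustration; the helper function is hypothetical, not anything pulled from a drive's firmware) of what a single 4KB write costs once the controller has no clean blocks left:

```python
# Numbers from the article: 4KB pages, 128 pages (512KB) per erase block.
PAGE_SIZE_KB = 4
PAGES_PER_BLOCK = 128
BLOCK_SIZE_KB = PAGE_SIZE_KB * PAGES_PER_BLOCK  # 512KB

def nand_write_kb(pages_to_update, clean_block_available):
    """KB actually written to NAND for a host request touching `pages_to_update` pages."""
    if clean_block_available:
        # Best case: the new pages simply land in an already-erased block.
        return pages_to_update * PAGE_SIZE_KB
    # Worst case (read-modify-write): read the full 512KB block, merge in the
    # updated pages, erase the block, then write all 512KB back out.
    return BLOCK_SIZE_KB

host_kb = 1 * PAGE_SIZE_KB  # the OS asked to write a single 4KB page
nand_kb = nand_write_kb(1, clean_block_available=False)
print(f"host: {host_kb}KB, NAND: {nand_kb}KB, amplification: {nand_kb // host_kb}x")
# host: 4KB, NAND: 512KB, amplification: 128x
```

That 4KB-in, 512KB-out gap is the slowdown the "used" numbers below are measuring.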

I simulated this worst-case scenario by writing to every single page on the SSDs I tested before running any benchmarks. The performance degradation ranged from negligible to significant:

Drive (PCMark Vantage HDD Score)    New      "Used"
Corsair P256 (Samsung MLC)          26607    18786
OCZ Vertex Turbo (Indilinx MLC)     26157    25035

So that's how I approached today's article: filling the latest generation of Indilinx, Intel and Samsung drives before testing them. But, my friends, things have changed.

The table below shows the performance of the same drives showcased above, but after running the TRIM instruction (or a close equivalent) against their contents:

Drive (PCMark Vantage HDD Score)    New      "Used"   After TRIM/Idle GC   % of New Perf
Corsair P256 (Samsung MLC)          26607    18786    24317                91%
OCZ Vertex Turbo (Indilinx MLC)     26157    25035    26038                99.5%
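For the record, that last column is simply the post-TRIM/idle-GC score divided by the brand-new score; a quick Python sketch of the arithmetic, using the numbers from the table above:

```python
# "% of New Perf" = score after TRIM/idle GC divided by the score when brand new.
scores = {
    "Corsair P256 (Samsung MLC)":      {"new": 26607, "after": 24317},
    "OCZ Vertex Turbo (Indilinx MLC)": {"new": 26157, "after": 26038},
}

for drive, s in scores.items():
    print(f"{drive}: {s['after'] / s['new'] * 100:.1f}% of new performance")
# Corsair P256 (Samsung MLC): 91.4% of new performance
# OCZ Vertex Turbo (Indilinx MLC): 99.5% of new performance
```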

 

Oh boy. I need a new way to test.

A Quick Flash Refresher
295 Comments

  • Wwhat - Sunday, September 6, 2009 - link

    If you read even the first part of the article you would see how important a good controller is in an SSD, and you probably would not ask this question. Plus, SSDs use the flash in parallel where a bunch of USB drives would not; the parallel aspect is also mentioned in the article.
    And USB actually has a lot of overhead on the system, both in CPU cycles and in I/O interrupts.

    There are plug-in PCI(e) cards you can stick SD cards into to get a similar setup, but it's a bit of a hack, and with the overhead, the management and controllers used, and the price of buying many SD cards, it's not competitive in the end; you are better off with a real SSD, I'm told.
  • Transisto - Sunday, September 6, 2009 - link

    You are right, the controller is very important.

    I think caching about 4-8GB of the most often accessed program files has the best price/performance ratio for improving application load times. It is also very easily scalable.

    One of the problems I see is integrating this SSD cache into the OS, or before booting, so it acts where it matters the most.

    I think there could be a near X25-M speedup from optimized caching and a good controller, no matter what flash form factor it relies on: SD, CF, USB, PCI or onboard.

    Why does it seem nobody talks about eBoostr-type caching? And, in other news, Intel's Braidwood flash memory module could kill the SSD market.

    I am quite a performance seeker.

    But I don't think I need 80GB of SSD in my desktop, just some 8GB of good caching. Maybe a 60GB SSD in a laptop.

    Well... I'm gonna pay for that controller once, not twice (160GB?)
  • Wwhat - Saturday, September 5, 2009 - link

    Not that it's not a good article, although it does seem like two articles in one, but what I miss is getting down to brass tacks regarding the filesystem used, why there isn't an SSD-specific filesystem, and what choices can be made during formatting with regard to block size; obviously if you select large blocks at the filesystem level that would impact the performance of the garbage collection, right? Reading this, it actually seems the author never delved very deeply into filesystems.
    The thing is that even with large blocks at the filesystem level the system might still use small segments for the actual bookkeeping, and if it needs to write small bits to keep track of large blocks you'd still have issues; that's why I say a specific SSD filesystem might be good, but only if there isn't a new form of SSD in the near future that makes the effort pointless, and if a filesystem for SSDs were made then the firmware should not try to compensate for existing filesystem issues with SSDs.
    I read that the SD people selected exFAT as the filesystem for their next generation, and that also makes me wonder: is that just to do with licensing costs, or is NTFS bad for flash-based devices?
    Point being that the filesystem needs to be highlighted more, I think.
  • Bolas - Friday, September 4, 2009 - link

    Would someone please hit Dell with the clue-board and convince them to offer Intel SSDs in their Alienware systems? The Samsung SSDs are all that is stopping me from buying an Alienware laptop at the moment.
  • EatTheMeat - Friday, September 4, 2009 - link

    Congratulations on another fab masterclass. This is easily the best educational material on the internet regarding SSDs, and contrary to some comments, I think you've pitched your recommendations just right. I can also appreciate why you approached this article with some trepidation. Bravo.

    I have a RAID question for Anand (or anyone else who feels qualified :-))

    I'm thinking of setting up two 160GB X25-M G2 drives in RAID-0 for Win 7. I'd simply use the ICH10R controller for it. It's not so much to increase performance but rather to increase capacity and make sure each drive wears equally. After considering it further I'm wondering if SSD RAID is wise. First there's the eternal question of stripe size and write amplification. It makes sense to me to set the stripe size to be the same as, or a fraction of, the block size of the SSD. If you choose the wrong stripe size, does it influence write amplification?

    I'm aware that performance should increase with larger stripes, but I'm more concerned about what's healthy for the SSD.

    Do you think I should just let SSD RAID wait until RAID drivers are optimised for SSDs?

    I know you're planning a RAID article for SSDs - I for one look forward to it greatly. I've read all your other SSD articles like four times!
  • Bolas - Friday, September 4, 2009 - link

    If SSDs in RAID lose the benefit of the TRIM command, then you're shooting yourself in the foot by putting them in RAID. If you need more capacity, wait for the Intel 320GB SSDs next year. Or better yet, use a 160GB drive as your boot drive, then set up some traditional hard disk drives in RAID for your storage requirements.
  • EatTheMeat - Friday, September 4, 2009 - link

    Thanks for the reply. I definitely hear you about the TRIM functionality, as I doubt RAID drivers will pass this through before 2010. Still though, it doesn't look like the G2s drop much in performance with use anyway, judging from Anand's graphs. With regard to waiting for 320GB drives - I can't. These things are just too enticing, and you could always say that technology will be better / faster / cheaper next year. I've decided to take the plunge now as I'm fed up with an i7 965 booting and loading apps / games like a snail even from a RAID array.

    I just don't want to bugger the SSDs up with loads of write amplification / fragmentation due to RAID-0. i.e., is RAID-0 bad for the health of SSDs the way defragmentation / prefetch is? I wonder if anyone knows the answer to this question yet.
  • jagreenm - Saturday, September 5, 2009 - link

    What about just using Windows drive spanning for 2 160's?
  • EatTheMeat - Saturday, September 5, 2009 - link

    As far as I know drive spanning doesn't even out the wear between the disks. It just fills up one drive first and then the other. That matters with SSDs because RAID can really help reduce drive wear by spreading all reads and writes across two drives. In fact, it should more than halve drive wear, as both drives will have large scratch portions. Not so with spanning, as far as I know.

    Does anyone know if I'm talking sh1t here? :-)
  • pepito - Monday, November 16, 2009 - link

    If you are not sure, then why do you assert such things?

    I don't know about Windows, but at least in Linux, when using LVM2 or RAID-0, writes are spread evenly across all block devices.
    That means you get twice the speed and better drive wear.

    I would like to think that Microsoft's implementation works more or less the same way, as this is completely logical (but then again, it's Microsoft, so who can really know?).
