
296 Comments


  • drsethl - Monday, March 15, 2010 - link

    Hi,

    just to add to the chorus of praise: this is a superbly informative article, thank you for all the effort, and I hope that it has paid off for you, as I'm sure it must have.

    My first question is this: is it possible to analyse a program while you're using it, to see whether it is primarily doing sequential or random writes? There seems to be quite a clear difference between the Intel X25-M 80GB and the OCZ Vertex 120GB, which are the natural entry-level drives here -- the Intel works better for random access, the Vertex for sequential -- so it would be very useful to know which I would make best use of.

    Second question: does anyone know whether Lightroom in particular is based around random or sequential writes? I know that an LR catalog is always radically fragmented, which presumably suggests that it is based around random writes, but that's just an uninformed guess. It does have a cache function, which produces files in the region of 3-5 MB in size -- are those likely to be sequential?

    Third question: with Photoshop, is it specifically as a scratch disk that the Intel X25-M underperforms? Or does Photoshop do other sequential writes besides those to the scratch disk? I ask because if it only falls short as a scratch disk, that's not a big problem -- anyone using this in a PC is likely to have a decent regular HDD for data anyway, so the scratch disk can just be sent there. In fact, I've been using a Vertex 120GB with a Samsung Spinpoint F3 500GB on my PC, and I found that with the scratch disk on the Samsung I got better Retouch Artists results (only by about half a second, but that's out of 14 seconds, so still fairly significant).

    Thanks in advance to anyone who might be able to answer, and thanks again Anand for such an informative read.

    Cheers
    Seth
    Reply
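The first question above (telling random from sequential writes while a program runs) can be approximated with a process-monitor trace: log each write's file offset and length, then check how often one write starts exactly where the previous one ended. A minimal sketch, assuming you have already gathered (offset, length) pairs with a tool such as Process Monitor or strace; the trace format and the 50% threshold here are my own illustrative choices, not from any particular tool:

```python
# Rough sketch: classify a program's writes as mostly sequential or
# mostly random from a list of (offset, length) pairs in issue order.

def classify_writes(writes):
    """writes: list of (offset_bytes, length_bytes) tuples."""
    if len(writes) < 2:
        return "not enough data"
    sequential = 0
    for (off, length), (next_off, _) in zip(writes, writes[1:]):
        if next_off == off + length:  # next write begins where this one ended
            sequential += 1
    ratio = sequential / (len(writes) - 1)
    return "mostly sequential" if ratio > 0.5 else "mostly random"

# Three back-to-back 4 KB writes, then one far seek:
print(classify_writes([(0, 4096), (4096, 4096), (8192, 4096), (999424, 4096)]))
# -> mostly sequential
```

Real workloads interleave many files and threads, so a per-file (or per-handle) grouping of the trace gives a more honest answer than one global stream.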
  • drsethl - Friday, July 09, 2010 - link

    Hi again,

    just to report back: since writing the previous comment I have bought both drives (the original Vertex 128GB and the Intel G2 X25-M). While the Intel performs better in benchmarks, the difference in general usage is barely noticeable -- except when using Lightroom 3, where the Intel is considerably slower than the Vertex. I'm using a Canon 550D, which produces 18 MP pictures. When viewing a catalogue for the first time (without any pre-created previews), the Intel takes on average about 20s to produce a full-scale 1:1 preview. This is infuriating. The Vertex takes about 8s.

    Bear in mind that I've got 4GB of 1333MHz RAM, an Intel i7 Q720 processor, and ATI Mobility Radeon 5470 graphics. So it's not the most powerful laptop in the world, but it's no slouch either. I can only conclude that when LR3 makes previews it does large sequential writes, and that the considerable performance advantage of the Vertex on this metric alone suddenly becomes very important.

    With that in mind, I'm now going to sell the Intel and buy a Vertex 2e, which should give the best of both worlds. I'm sure there are lots of photographers out there wondering about this like I was, so hopefully this will help.
    cheers,
    Seth
    Reply
  • jgstew - Friday, October 08, 2010 - link

    I believe you are correct about the LR Catalog being mostly random writes, but I don't think this is a performance concern since the Catalog is likely stored in RAM for reads, and written back to the drive when changes are made that affect the Catalog, which is not happening all the time.

    As for generating previews and the Photoshop scratch disk, this is going to be primarily sequential, since LR generates the data one file at a time and writes each one to disk in full. If LR were generating previews for multiple photos simultaneously and writing them simultaneously, then you would have heavy fragmentation of the cache and more random writes.

    Any SSD is going to give a significant performance benefit over a spindle HD when it comes to random read/write/access. Sequential performance is the main concern with photos/video/audio and similar data in most cases.

    One thing you might consider trying is having more than one SSD, or doing this if you upgrade down the road: have a smaller SSD with fast sequential read/write act as the cache disk for LR/Photoshop/others, and have the other SSD be the boot drive with the OS/apps/etc. This way other things going on in the system will not affect the cache disk's performance, and it also speeds up writes from the boot SSD to the cache disk and back.
    Reply
  • ogreinside - Monday, December 14, 2009 - link

    After spending all weekend reading this article, 2 previous in the trilogy, and all the comments, I wanted to post my thanks for all of your hard work. I've been ignoring SSDs for a while as I wanted to see them mature first. I am in the market for a new Alienware desktop, but as the wife is letting me purchase only on our Dell charge account, I have a limited selection and budget.

    I was settled on everything except the disks. They are offering the Samsung 256SSD, which I believe is the Samsung PM800 drive. The cost is exactly double that of the WD VelociRaptor 300 GB. So naturally I have done a ton of research for this final choice. After exploring your results here, and reading comments, I am definitely not getting their Samsung SSD. I would love to grab an Intel G2 or OCZ Indilinx, but that means real cash now, and we simply can't do that yet. The charge account gives us room to pay it off at 12-month no-interest.

    So at this point I can get 2x WD VRs in RAID-0 to hold me over for a year or so, until I can replace them with (or add) a good SSD. My problem is that I have seen my share of issues with RAID-0 on an ICH controller on two different Dell machines (boot issues, unsure of performance gain). In fact, using the same drives/machine, I saw better random read performance (512K) on a single drive than on the ICH RAID, and 4K wasn't far behind. I'm thinking I may stick to a single WD VR for now, but I really want to believe RAID-0 would be better.

    So, back on topic, it would be nice to see the ICH raid controller explored a bit, and maybe add a raid0 WD VR configuration to your next round of tests.

    (CrystalDiskMark 2.2)
    Single-drive 7200 RPM (G:):
    Sequential Read : 123.326 MB/s
    Sequential Write : 114.957 MB/s
    Random Read 512KB : 55.793 MB/s
    Random Write 512KB : 94.408 MB/s
    Random Read 4KB : 0.861 MB/s
    Random Write 4KB : 1.724 MB/s

    Test Size : 100 MB
    Date : 2009/12/09 2:03:4

    ICH raid0:
    Sequential Read : 218.909 MB/s

    Sequential Write : 175.347 MB/s
    Random Read 512KB : 51.884 MB/s
    Random Write 512KB : 135.466 MB/s
    Random Read 4KB : 1.001 MB/s
    Random Write 4KB : 2.868 MB/s

    Test Size : 100 MB
    Date : 2009/12/08 21:45:20
    Reply
  • marraco - Friday, August 13, 2010 - link

    Thumbs up for the ICH10 petition. It's the most common RAID controller on i7.

    Also, I would like to see different models of SSD in RAID (for example, one Intel RAIDed with one Indilinx).

    I suspect that performance with SSDs scales much better than with older technologies, so I want to know if it makes sense to buy a single SSD now and wait for prices to get cheaper before upgrading. The problem is that as prices get cheaper, old SSD models are no longer available.
    Reply
  • aaphid - Friday, November 27, 2009 - link

    OK, I'm still slightly confused. It seems that running the wipe/TRIM utility will keep the SSD in top condition, but it won't run on a Mac. So are these going to be a poor choice for use in a Mac? Reply
  • ekerazha - Monday, October 26, 2009 - link

    Anand,

    it's strange to see your

    "Is Intel still my overall recommendation? Of course. The random write performance is simply too good to give up and it's only in very specific cases that the 80MB/s sequential write speed hurts you."

    from the last review has now become

    "The write speed improvement that the Intel firmware brings to 160GB drives is nice but ultimately highlights a bigger issue: Intel's write speed is unacceptable in today's market."
    Reply
  • ekerazha - Monday, October 26, 2009 - link

    Oops, wrong article. Reply
  • mohsh86 - Tuesday, October 13, 2009 - link

    I'm a 23-year-old computer engineer..

    This is the most awesome, informative article I've ever read!
    Reply
  • Pehu - Tuesday, October 13, 2009 - link

    First of all, thanks for the article. It was superb and led to my first SSD purchase last week. I installed the Intel G2 yesterday with Windows 7 (64-bit) and 8GB of RAM. A smooth ride, I have to say :)

    Now, there is one question I have been trying to find an answer to:

    Should I put the Windows page file (swap) on the SSD or on another normal HD?

    Generally the swap should sit behind a different controller than your OS disk, to speed things up. However, SSDs are so fast that there is a temptation to put the swap on the OS disk. Another consideration is drive lifespan: does the SSD last longer if the swap is moved off it?

    Also, what I am lacking is some general info about how to maximise drive lifespan without too much loss of speed. In one Guru3D article the instructions given were:

    * Drive indexing disabled. (useless for SSD anyway, because access times are so low).
    * Prefetch disabled.
    * Superfetch disabled
    * Defrag disabled.

    Any comments and/or suggestions for Windows 7 on that?

    Thanks.
    Reply
  • albor - Friday, June 18, 2010 - link

    Hi,
    try RamDisk Plus 11 from SuperSpeed.
    (http://www.superspeed.com/desktop/ramdisk.php)
    I use it on XP Pro 32-bit with a 30GB OCZ Vertex and 8GB of RAM. Everything above 3.2GB is configured for swap and temp. It works perfectly, with no visible SSD performance degradation after about 10 months.
    Greetings.
    Reply
  • jmr3000 - Thursday, August 23, 2012 - link

    Would you explain how you installed it?

    Did you use the SSD as a second drive, or did you install all the programs on the SSD and use the HDD as the second drive?

    thanks in advance!

    jm
    Reply
  • marraco - Friday, August 13, 2010 - link

    The swap file is one of the most important speed bottlenecks on Windows.

    It writes frequently to disk, so it consumes the drive's write cycles, reducing its useful life.

    But you are not buying storage space when you buy an SSD. You are buying speed, so it makes no sense to buy an expensive SSD and then move off it all the activities that need that speed and are bottlenecks.

    You buy an SSD to do the fastest swap. Keep it on the SSD.

    Also, drive indexing constantly does a lot of reads, but that does not matter if the disk is fast. Drive indexing is like a little local Google: if you disable it and then search for all the files containing a given text, searching the entire disk takes longer than just reading an index.

    Those activities consume the useful life of the disk, but by the time the disk needs replacement (5 years, maybe), it would need replacing anyway, and new SSDs will be dirt cheap, so it makes no sense to disable swap, temp files, and indexing.

    On the other hand, prefetch, superfetch and defrag are most probably better disabled on an SSD.
    Reply
  • jimlocke - Wednesday, June 01, 2011 - link

    Pehu, I know this is long after your posting, but I was curious what you ended up doing for swap.
    With 8GB of RAM it almost seems like swap may not be needed, unless you have several memory-hogging apps open.
    Hope you still like your SSD. I'm looking at getting one soon, and agree this was an excellent article!
    -Jim
    Reply
  • krumme - Friday, October 09, 2009 - link

    First: I accept the importance of random 4K performance for SSDs.
    Second: over the years I have highly valued Anand's articles. It is remarkable to see such detailed and enthusiastic information.

    Now I have a few questions, following the general impact of this work.
    Some observations first:
    Following an SSD article at Tom's on September 6th: the author was called a "moron," primarily because the random 4K synthetic benchmark was missing. The author gave a different opinion on Indilinx vs. Intel in the desktop sector compared to Anand, putting more weight on transfer rates than IOPS.
    In a discussion about a Kingston V-series review, one commenter said he would take the Indilinx SSD any day because it was "750 times faster" -- an argument based on IOPS.
    Another remark I have read several times is: "The Intel X25-M G2 is the only drive to get."
    Another is: "I would like to buy the Dell xx, but it has a Samsung controller so it's of no use."

    I think it is time to stop and make sure there is reason in what is happening for normal desktop use.

    Do we have blind tests to tell the difference between the Intel, Samsung and Indilinx drives?
    What are the actual real-world benchmarks, e.g. Win7 boot times, for the three controllers?

    There is something called good enough. When is 4K random read/write performance good enough that you notice no subjective improvement beyond it in Win7? Could it be, say, 10 MB/s?

    The SSD is the best thing to happen since 3D graphics, but I think we should enjoy what is happening right now, because this could be the turning point after which we are soon focusing on small differences.

    Anyone know what's the next big thing?
    Reply
  • bebby - Friday, October 30, 2009 - link

    Random 4k and its relevance for desktop use is really the main topic for me, too.
    If I assume that I only use the SSD for the OS and software and save my data on other, much less expensive HDDs, I doubt very much that this discussion is worth it. The Samsung SSD then suddenly looks not so bad at all and much cheaper...
    The next big thing for me would be an OS starting up in 5 seconds, like the OS we had in the 90s...making SSD obsolete.
    Reply
  • marraco - Friday, August 13, 2010 - link

    I agree completely. I think human beings can notice the difference between a hard disk and a decent SSD, because the difference is so large, but past "good enough" it does not matter much whether the SSD is 2x or 4x faster in 4KB random R/W.

    But mine is just an opinion, and I don't have good data to test it. I would like to read an article with repeatable testing on human perception.
    Reply
  • SimesP - Wednesday, September 23, 2009 - link

    I haven't read all 254 comments (yet), but I'd like to add my thanks to everyone else's for the comprehensive and illuminating article. This, along with the previous AnandTech SSD articles, has increased my understanding of SSDs immensely.

    Thanks again!
    Reply
  • ClemSnide - Friday, October 02, 2009 - link

    Anand,

    A couple of guys from HotHardware.com pointed me at your SSD article, and it allowed me to make an informed decision. Thanks!

    I wanted to speed up one game in particular (World of Warcraft) as well as routine OS tasks and web browsing. I think an SSD will do a bang-up job on at least the first two. The one I decided upon was the OCZ Agility 60 GB, which offers some growth room; I currently have 40 GB on my system drive. I know the Intel has better numbers, but I was able to get the OCZ for $156 after a rebate, which translates to decent performance at a price I can justify. (For the curious, it's available from TigerDirect for $184, and OCZ is giving a $30 rebate.)

    Even though my system build is still months away, this should be usable on my old clunker as well. Very nifty!
    Reply
  • tachi1247 - Friday, September 18, 2009 - link

    Does anyone know what the difference is between the 7mm thick and 9.5mm thick drives?

    http://download.intel.com/design/flash/nand/mainst...

    They seem to be identical except for the drive thickness.
    Reply
  • dszc - Saturday, September 12, 2009 - link

    FANTASTIC series of articles. Kudos! They go a long way toward satisfying my intellectual curiosity.

    But now it is time to reap the rewards of this technology and earn a living.
    So I need some real-world HELP.

    How do I clone my 320GB (80GB used) Hitachi OS drive (Vista 32 SP2) over to a 128GB Indilinx Torqx?

    All I really care about is Photoshop and Bridge CS4 performance. I am a pro and work 4-16 hours per day in Bridge and Photoshop, with tens of thousands of images, including 500MB - 2GB layered TIFFs. The Photoshop scratch disk and the Bridge and Camera Raw cache performance are killing me. Solid-state storage seems to be the perfect solution to my problem.

    I really want to simply clone my 320 over to the Torqx, because it would take me a week to re-install and configure all of my software and settings that are now on the 320GB Hitachi.

    Do I just bring the Torqx up in Vista's Disk Management, initialize it with one big partition, and then format it?
    What size allocation unit should I use:
    default? 4096? 64K?
    Will these settings be wiped out when I clone over the stuff from the old hard drive?
    What about "alignment"?
    What is the best software for a SIMPLE & painless clone procedure?

    I'm not a techie or geek, but have a fair working knowledge of computers.

    Any help would be hugely appreciated. Thanks.
    Reply
  • userwhat - Thursday, September 17, 2009 - link

    I use Drive Snapshot for all these purposes. It works 100%; it's a very small and fast program. After having issues with Norton Ghost and some other similar programs, which were absolutely unable to restore an imaged partition stored on a DVD, this is THE one to use.

    Get it here: http://www.drivesnapshot.de/en/
    Reply
  • dszc - Saturday, September 26, 2009 - link

    Thank you very much for your help and recommendations.
    To get my Patriot (SolidStateStorage) up and running, I used Seagate DiskWizard (an Acronis subset), as I have lots of Seagate drives already on my system and this free software seems to work.
    When I get a window of time in my schedule, I'll try DriveSnapShot and/or DriveImage to see if they do a better job in helping my Torqx SSS run at its full potential.
    Thanks again for your help.
    Dave
    Reply
  • JakFrost - Tuesday, September 15, 2009 - link

    If you want to image your current drive and migrate over to an SSD, you can use the free software below, which works with Windows Volume Shadow Copies to do an online, live migration to another drive without losing or corrupting your data. This means you can do it from the same OS you are running.

    This software will let you image out to an already-created partition that is aligned at the 1MB boundary that is standard for Microsoft Vista/7 operating systems.

    DriveImage XML V2.11
    English (1.78MB)

    Image and Backup logical Drives and Partitions

    Price: Private Edition Free - Commercial Edition - Buy Now Go!
    System Requirements: Pentium Processor - 256 MB RAM
    Windows XP, 2003, Vista, or Windows 7

    An alternative is to use an offline migration system such as Acronis TrueImage, Norton Ghost, etc. to do the migration offline from a bootable CD or USB drive. Search around for Hiren's BootCD to check out these and other tools to do the migration.
    Reply
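The 1MB-boundary point above comes down to simple arithmetic: a partition is aligned when its starting byte offset is an exact multiple of the boundary. A minimal sketch, assuming you have obtained the starting offset some way (e.g. `wmic partition get Name, StartingOffset` on Windows); the example offsets are the well-known Vista/7 and XP defaults:

```python
# Check whether a partition's starting offset sits on a 1 MiB boundary,
# the default alignment for Vista/Win7 and a safe choice for SSDs.
def is_aligned(starting_offset_bytes, boundary=1024 * 1024):
    return starting_offset_bytes % boundary == 0

print(is_aligned(1048576))  # Vista/7 default first-partition offset -> True
print(is_aligned(32256))    # old XP default (63 sectors * 512 bytes) -> False
```

The XP-era 63-sector offset is why partitions created by older tools and then cloned to an SSD often end up misaligned, while images restored into a pre-created Vista/7 partition keep the 1 MiB alignment.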
  • dszc - Saturday, September 26, 2009 - link

    Thank you very much for your help and recommendations.
    To get my Patriot (SolidStateStorage) up and running, I used Seagate DiskWizard (an Acronis subset), as I have lots of Seagate drives already on my system and this free software seems to work.
    When I get a window of time in my schedule, I'll try DriveImage and/or DriveSnapShot to see if they do a better job in helping my Torqx SSS run at its full potential.
    Thanks again for your help.
    Dave
    Reply
  • jgehrcke - Friday, September 11, 2009 - link

    Be careful when buying a Super Talent UltraDrive GX 128 GB with "XXXX" in serial number (unfortunately you cannot check this before ordering the drive). These drives are much slower than measured in the benchmark here and in other benchmarks.

    For more information and related links see

    http://gehrcke.de/2009/09/performance-issue-with-n...
    Reply
  • Kitohru - Thursday, September 10, 2009 - link

    Does OS X Snow Leopard have TRIM support, and if not, is there any word from Apple about that or the like? Reply
  • Zool - Thursday, September 10, 2009 - link

    I still don't think that at these prices SSDs will become mainstream in the next few years. And honestly, the performance is not even that special if everything worked as it should. Mechanical drives can now reach 100MB/s reads when things are optimal. Small-file performance is really just a software problem: you should never reach the point where you need to randomly find 4KB files in a long row. With today's RAM capacity and CPUs, programs should never read such small files, or should group things into larger files and read them whole into memory. A well-programmed application today (be it a game or anything else) should never let your disk be spammed with 30k files (like CATIA and plenty of other aged, so-called "professional" programs). With today's RAM and disk capacity, it should read things into memory, leave only grouped, larger files on disk, and never touch the HDD again unless the user is interacting with the software (you can tell Windows this). Saves can be made to memory and then written to disk without even seeing an FPS drop in games (and not just games) from disk communication latency.
    I don't even think IO performance would be a problem with the RIGHT software and OS. With 100MB/s reads it could run perfectly fine with loading times of a few seconds. Even SSD latencies are no match for RAM latencies, so anything that actively communicates with the disk (which is just stupid with current RAM prices and 64-bit) levels your latencies down to the disk's.
    Why worry about latencies and read speeds when you could copy it all to RAM and keep the files on disk arranged so that the mechanical drive never finds itself reading files smaller than a few MB? (Even your small txt documents can be hidden in an archive.)
    Just my thoughts. (Sorry for my English.)
    Reply
  • AlExAkE - Wednesday, September 09, 2009 - link

    Hey, I'm a web & multimedia designer. I spend lots of my time using most of the Adobe CS4 products including Photoshop, Flash, Dreamweaver, Illustrator, After Effects & Premiere Pro.

    The Intel 80GB G2 looks amazing, but its Photoshop result is awful because of its write speed. The Intel X25-E (Extreme) series seems to be the best but is too pricey. The OCZ Vertex has good write speed but is slower than the Intel G2 in most of the tests. What would be the recommended SSD for my purposes? Thanks
    Reply
  • jtleon - Tuesday, September 08, 2009 - link

    Yes, I fell asleep at least 3 times reading this article (it IS Monday, after all).

    Yes, Indilinx clearly rocks the SSD world -- now I know, thanks to Anand!

    Stories like this set the standard for all review sites -- I don't come away with the feeling I was just sold a bill of goods by some shyster in Intel's pocket, or otherwise.

    Great Job Anand! Keep them coming!
    Reply
  • SSDdaydreamer - Tuesday, September 08, 2009 - link

    I too am wondering whether TRIM will be available on the Intel Drives for Windows XP or Vista. I seriously doubt it, as the OCZ Wiper Tool appears to only be available for Indilinx controllers. Perhaps Intel will introduce their own wiper utility. I am leaning towards the OCZ Vertex or Patriot Torqx drives, as I am quite content with Windows XP and Windows Vista.
    I have an itchy trigger finger on these SSDs, but I want to hold back for the following unknowns.

    1. I would like to use the NTFS file system for my drive, but I am unsure of the proper/ideal block size.

    2. I would merely like to image my existing Windows Installation, but I am worried that performance or stability problems will arise from the NTFS file system. A fresh install could be in order, but it is preferred to image.

    3. Is there a way to change the size of the spare area? Maybe I have the wrong idea (perhaps you only format part of the drive, and the unformatted space gets appended to the spare area?). I am willing to sacrifice some usable partition space for an increased spare area and improved performance.

    4. Are there complications with multiple partitions? If there are multiple partitions on the drive (for multi-boot) do they all share the same spare area? Is it possible to allow their own respective spare areas?


    Is there anybody out there that could enlighten me? I'm sure others would do well to have the answers as well. If I make any discoveries, I will be sure to post them.
    Thanks in advance.
    Reply
  • bradhs - Monday, September 07, 2009 - link

    Is there a "wiper" app for Intel X25-M G2 drives? For people who don't have Windows 7 (TRIM) and want to keep the Intel X25-M G2 running smoothly. Reply
  • smjohns - Tuesday, September 08, 2009 - link

    No, there is no wiper tool for Intel drives at the moment. In addition, the current firmware on the Intel drives does not have TRIM enabled. I guess this will be released soon after Windows 7 ships. I think I have read somewhere that Intel is working on a TRIM-enabled version of its Matrix Storage Manager software that will provide this functionality to other operating systems. Reply
  • Burny - Monday, September 07, 2009 - link

    As many before me: great article! I learned a lot about SSDs -- even up to the point that I'm ready to buy one.
    I still have two questions, though:

    2. Will TRIM be available on the G2 Intel drives for sure? Some sources doubt this: http://www.microsoft.com/communities/newsgroups/en...


    3. As I understand it, TRIM works at the firmware level. Does that imply TRIM will also function under Windows XP, or any OS for that matter? If so, why the need to build TRIM support into Windows 7? Or does a TRIM-enabled SSD simply allow the OS to issue TRIM?

    Thanks!
    Reply
  • smegforbrain - Monday, September 07, 2009 - link

    While I consider myself handy with computers, I'm not the best technical mind when it comes to the details. You do an excellent job of presenting everything in a manner that it can be understood with little difficulty. I look forward to future articles about SSDs.

    I do have a question I'm hoping somebody can answer. I'm as interested in the long-term storage outlook of SSDs as I am in everyday use. I've seen it said that an SSD should hold its charge for 10 years if not used, and it was discussed a bit earlier in this thread.

    Yet, none of my current mechanical hard drives are more than 3 years old; none of my burned DVDs/CDs are older than 5 years. It seems far more likely that I would replace an SSD for one with a greater storage capacity after 5 years tops than to expect one to be in use, even as archival storage, for as long as 10 years.

    So, is the 10 year 'lifespan' even going to be an issue with archival storage for most people?

    Will this worry over the life span of an SSD become even less of an issue as the technology matures over the next couple of years?
    Reply
  • Starcub - Tuesday, September 08, 2009 - link

    "So, is the 10 year 'lifespan' even going to be an issue with archival storage for most people?"

    No, but who takes wads of money out of their wallet to store it on their shelf?
    Reply
  • smegforbrain - Tuesday, September 08, 2009 - link

    "No, but who takes wads of money out of their wallet to store it on their shelf?"

    That is simply assuming that they will remain as expensive as they are now. They won't.
    Reply
  • BlackSphinx - Sunday, September 06, 2009 - link

    Hello! I'm taking the time to comment on this article because I am very thankful for all of these awesome write-ups on SSDs.

    I'm in the process of building a heavily overclocked i7 rig for gaming and video editing, and I was going to jam two VelociRaptors in RAID-0 in there. Why? I had only heard bad things about SSDs in the past.

    Reading your articles, which are, while in-depth, very clear and easy to understand, I understand much better what happened with early SSDs, what's so good about the recent Indilinx and Intel SSDs, and, truly, why I should forgo mechanical drives and instead go the SSD route (which, frankly, isn't more costly than a RAID-0 Raptor setup). In short, these articles are a great service to end users just like myself, and if they were intended as such, you have passed with flying colors. Congratulations and thanks.
    Reply
  • Transisto - Sunday, September 06, 2009 - link

    Could someone reset my brain as to why there is no way to get a (very noticeable) improvement from USB thumb drives? I mean, these things also get 0.1 ms latency.

    It's a bit extreme, but for the same price I could get nine cheap 8GB SLC USB drives at around $20 each and put them in three separate PCI-USB add-on cards ($5 each).

    Three drives would saturate each USB controller, so I could get around 140MB/s read and 60MB/s write.

    Say you manage to merge that into a RAID or...? Are eBoostr or ReadyBoost any good at scaling up?

    Reply
  • Wwhat - Sunday, September 06, 2009 - link

    If you read just the first part of the article you would see how important a good controller is in an SSD, and you probably would not ask this question. Plus, SSDs use their flash in parallel, where a bunch of USB drives would not -- the parallelism is also covered in the article. And USB actually has a lot of overhead on the system, both in CPU cycles and in IO interrupts.

    There are plug-in PCI(e) cards to stick SD cards in, though, to get a similar setup, but it's a bit of a hack; with the overhead, the management and controllers involved, and the price of buying many SD cards, it's not competitive in the end, and I'm told you are better off with a real SSD.
    Reply
  • Transisto - Sunday, September 06, 2009 - link

    You are right, the controller is very important.

    I think caching about 4-8GB of the most often accessed program files has the best price/performance ratio for improving application load times. It is also very easily scalable.

    One of the problems I see is integrating this SSD cache into the OS, or before booting, so it acts where it matters most.

    I think there could be a near-X25-M speedup from optimized caching and a good controller, no matter what form factor the SSD relies on: SD, CF, USB, PCI or onboard.

    Why does it seem nobody talks about eBoostr-style caching? And, in other news, Intel's Braidwood flash memory module could kill the SSD market.

    I am quite a performance seeker, but I don't think I need 80GB of SSD in my desktop, just some 8GB of good caching. Maybe a 60GB SSD in a laptop.

    Well... I'm going to pay for that controller once, not twice (160GB?)
    Reply
  • Wwhat - Saturday, September 05, 2009 - link

    Not that it's not a good article (although it does seem like two articles in one), but what I miss is getting down to brass tacks regarding the filesystem used: why isn't there an SSD-specific filesystem, and what choices can be made during formatting with regard to block size? Obviously, selecting large blocks at the filesystem level would impact the performance of garbage collection, right? From reading this, it actually seems the author never delved very deeply into filesystems.
    The thing is that even with large blocks at the filesystem level, the system might still use small segments for the actual bookkeeping, and if it needs to write small bits to keep track of large blocks, you'd still have issues. That's why I say a specific SSD filesystem might be good, but only if there isn't a new form of SSD in the near future that makes the effort pointless; and if a filesystem for SSDs were made, then the firmware should not try to compensate for existing filesystem issues with SSDs.
    I read that the SD people selected exFAT as the filesystem for their next generation, and that also makes me wonder: is that just to do with licensing costs, or is NTFS bad for flash-based devices?
    Point being, the filesystem needs to be highlighted more, I think.
    Reply
  • Bolas - Friday, September 04, 2009 - link

    Would someone please hit Dell with the clue-board and convince them to offer the Intel SSD's in their Alienware systems? The Samsung SSD's are all that is stopping me from buying an Alienware laptop at the moment. Reply
  • EatTheMeat - Friday, September 04, 2009 - link

    Congratulations on another fab masterclass. This is easily the best educational material on the internet regarding SSDs, and contrary to some comments, I think you've pitched your recommendations just right. I can also appreciate why you approached this article with some trepidation. Bravo.

    I have a RAID question for Anand (or anyone else who feels qualified :-))

    I'm thinking of setting up 2 160GB x25-m G2 drives in RAID-0 for Win 7. I'd simply use the ICH10R controller for it. It's not so much to increase performance but rather to increase capacity and make sure each drive wears equally. After considering it further I'm wondering if SSD RAID is wise. First there's the eternal question of stripe size and write amplification. It makes sense to me to set the stripe size to be the same as, or a fraction of, the block size of the SSD. If you choose the wrong stripe size does it influence write amplification?

    I'm aware that performance should increase with larger stripes, but I'm more concerned about what's healthy for the SSD.

    Do you think I should just let SSD RAID wait until RAID drivers are optimised for SSDs?

    I know you're planning a RAID article for SSDs - I for one look forward to it greatly. I've read all your other SSD articles like four times!
    Reply
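    The stripe-size worry above can be made concrete with a little arithmetic: a write avoids touching extra flash blocks only if it doesn't straddle a block boundary. A simplified Python model (real drives remap pages through the FTL, so this illustrates the alignment effect only, not actual write amplification; the 128 KB stripe and 512 KB block figures are hypothetical):

```python
def blocks_touched(offset, length, block_size):
    """Number of fixed-size flash blocks spanned by a write of
    `length` bytes starting at `offset` (simplified model)."""
    first = offset // block_size
    last = (offset + length - 1) // block_size
    return last - first + 1

# A 128 KB stripe write aligned to a 512 KB flash block touches 1 block...
print(blocks_touched(0, 128 * 1024, 512 * 1024))                  # 1
# ...but the same write straddling a block boundary touches 2.
print(blocks_touched(512 * 1024 - 4096, 128 * 1024, 512 * 1024))  # 2
```

    This is the intuition behind setting the stripe size to the flash block size, or a clean fraction of it: misaligned stripes make each array write span more erase blocks than necessary.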
  • Bolas - Friday, September 04, 2009 - link

    If SSD's in RAID lose the benefit of the TRIM command, then you're shooting yourself in the foot if you set them up in RAID. If you need more capacity, wait for the Intel 320GB SSD drives next year. Or better yet, use a 160 GB for your boot drive, then set up some traditional hard disk drives in RAID for your storage requirements. Reply
  • EatTheMeat - Friday, September 04, 2009 - link

    Thanks for reply. I definitely hear you about the TRIM functionality as I doubt RAID drivers will pass this through before 2010. Still though, it doesn't look like the G2s drop much in performance with use anyway from Anand's graphs. With regard to waiting for 320 GB drives - I can't. These things are just too enticing, and you could always say that technology will be better / faster / cheaper next year. I've decided to take the plunge now as I'm fed up with an i7 965 booting and loading apps / games like a snail even from a RAID drive.

    I just don't want to bugger the SSDs up with loads of write amplification / fragmentation due to RAID-0. ie, is RAID-0 bad for the health of SSDs like defragmentation / prefetch is? I wonder if anyone knows the answer to this question yet.
    Reply
  • jagreenm - Saturday, September 05, 2009 - link

    What about just using Windows drive spanning for 2 160's? Reply
  • EatTheMeat - Saturday, September 05, 2009 - link

    As far as I know, drive spanning doesn't even out the wear between the discs. It just fills up the first one and then the other. That's important with SSDs, because RAID can really help reduce drive wear by spreading all reads and writes across two drives. In fact, it should more than halve drive wear, as both drives will have large scratch portions. Not so with spanning, as far as I know.

    Does anyone know if I'm talking sh1t here? :-)
    Reply
  • pepito - Monday, November 16, 2009 - link

    If you are not sure, then why do you assert such things?

    I don't know about Windows, but at least in Linux, when using LVM2 or RAID0, writes are spread evenly across all block devices.
    That means you get twice the speed and better drive wear.

    I would like to think that Microsoft's implementation works more or less the same way, as this is completely logical (but then again, it's Microsoft, so who can really know?).
    Reply
  • sotoa - Friday, September 04, 2009 - link

    Another great article. You're making me drool over these SSDs!
    I can't wait till Win7 comes to my door so I can finally get an SSD for my laptop.
    Hopefully prices will drop some more by then and Trim firmware will be available.
    Reply
  • lordmetroid - Thursday, September 03, 2009 - link

    I use them both because they are damn good and explanatory suffixes. It is 2009, soon 2010; I think we can at least get the suffixes correct. If someone doesn't know what they mean, Wikipedia has answers. Reply
  • AnnonymousCoward - Saturday, September 05, 2009 - link

    As someone who's particular about using SI and being correct, I think it's better to stick to GB for the sake of simplicity and consistency. The tiny inaccuracy is almost always irrelevant, and as long as all storage products advertise in GB, it wouldn't make sense to speak in terms of GiB. Reply
  • Touche - Thursday, September 03, 2009 - link

    Both articles emphasize Intel's performance lead but, looking at real-world tests, the difference between it and the Vertex is really small. Hardly enough to justify the price difference. I feel like the articles give the impression that Intel is in a league of its own when in fact it's only marginally faster. Reply
  • smjohns - Tuesday, September 08, 2009 - link

    This is where I struggle. It is all very well quoting lots of stats about all these drives but what I really want to know is if I went for Intel over the OCZ Vertex (non-turbo) where would I really notice the difference in performance on a laptop?

    Would it be slower start up / shut down?
    Slower application response times?
    Speed at opening large zipped files?
    Copying / processing large video files?

    If the difference is that slim then I guess it is down to just a personal preference....
    Reply
  • morrie - Thursday, September 03, 2009 - link

    I've made it a habit of securely deleting files by using "shred" like this: shred -fuvz, accepting the default number of passes, 25. Looks like this security practice is now out, as the "wear" on the drive would be at least 25x faster, bringing the stated life cycles closer to having an impact on drive longevity. So what's the alternative solution for securely deleting a file? Go with a plain "delete" and forget about security? Or "shred" with a lower number of passes, say 7 or 10, and be sure to purchase a non-Intel drive with the ten-year warranty, and hope that the company is still in business, and in the drive business, should you need warranty service in the later years... Reply
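    For what it's worth, a single overwrite pass is generally considered enough on magnetic disks, and on an SSD multi-pass shredding is doubly questionable: wear leveling can redirect each overwrite to fresh flash cells, so the original data may never be touched at all. A sketch of a one-pass shred on a regular file (coreutils `shred`; this demonstrates the command, not a guarantee on SSDs):

```shell
# Create a throwaway file with "sensitive" contents.
echo "sensitive data" > secret.txt

# One random pass (-n 1), then a final zero pass (-z), then unlink (-u).
# On an SSD, remapping means the original flash cells may survive anyway.
shred -n 1 -z -u secret.txt

ls secret.txt 2>/dev/null || echo "gone"
```

    For flash media, whole-device sanitization (e.g. the drive's ATA Secure Erase feature) is the more reliable route than file-level shredding.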
  • Rasterman - Wednesday, September 16, 2009 - link

    Watching too much CSI. There is an article somewhere I read by a data-recovery tech who works in one of the multi-million-dollar data recovery labs; basically he said writing over it once is all you should do, and even that is overkill 99% of the time. Theoretically it is possible to recover even that _sometimes_, but the expense required is so high that unless you are committing a billion-dollar fraud or are the secretary to Osama bin Laden, no one will ever try to recover such data. Chances are, if you are in such circles, you can afford a new drive 25x more often. And if you have such information or knowledge, wouldn't it be far easier and cheaper to simply beat it out of you than to try to recover a deleted drive? Reply
  • iamezza - Friday, September 04, 2009 - link

    1 pass should be sufficient for most purposes. Unless you happen to be working on some _extremely_ sensitive/important data. Reply
  • derkurt - Thursday, September 03, 2009 - link

    quote:

    So what's the alternative solution for securely deleting a file?


    I may be wrong on this, but I'd assume that once TRIM is enabled, a file is securely deleted once it has been deleted at the filesystem level. However, it might depend on the firmware when exactly the drive actually erases the flash blocks that TRIM marked as deletable. For performance reasons the drive should do that as soon as possible after a TRIM command, but preferably at a time when there is not much "action" going on; after all, the whole point of TRIM is to defer erasing flash blocks to a point where the drive is idle.
    Reply
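    The timing question above can be pictured with a toy model: the TRIM command itself is cheap bookkeeping that marks blocks invalid, while the slow erase is deferred until the drive is idle. A purely illustrative Python sketch, not how any particular firmware actually works:

```python
class ToyFTL:
    """Minimal model of how TRIM interacts with garbage collection:
    TRIM marks blocks invalid immediately; the costly erase happens
    later, during idle-time garbage collection."""

    def __init__(self):
        self.invalid = set()   # blocks awaiting erase
        self.erased = set()

    def trim(self, blocks):
        self.invalid.update(blocks)   # cheap bookkeeping at TRIM time

    def idle_gc(self):
        self.erased |= self.invalid   # deferred, expensive erase
        self.invalid.clear()

ftl = ToyFTL()
ftl.trim({10, 11})
print(sorted(ftl.invalid))   # [10, 11] marked, not yet erased
ftl.idle_gc()
print(sorted(ftl.erased))    # [10, 11]
```

    The gap between `trim()` and `idle_gc()` is exactly the window the comment above worries about: until the erase actually runs, the old data may still be physically present in the flash.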
  • morrie - Thursday, September 03, 2009 - link

    That's on a Linux system btw

    As to aligning drives...how about an update to the article on what needs to be done/ensured, if anything, for using the drives with a Linux OS?
    Reply
  • jasperjones - Monday, September 07, 2009 - link

    A bit of a late reply to your question, but a single overwrite with random data is fully secure. At least for HDDs, there have been tests by academics who tried to recover data from an HDD after wiping it via "dd if=/dev/urandom bs=1M". They weren't able to recover anything. Reply
  • smjohns - Thursday, September 03, 2009 - link

    Great 3rd installment, and I have learnt more about SSDs from this site than any other!!

    Whilst there is no doubt that the Intel G2 remains the SSD of choice (assuming you have the cash), why did Intel choose not to address the poor sequential write speeds? In the above tests it seems no better than a standard 5400 RPM hard disc... which is a little poor. I accept it is blisteringly fast for everything else, but I'm not sure why this was ignored / shelved.

    Is it that it is currently impossible to build a drive that can be fast at both large sequential and small random writes? Or is it that the G2 was always intended to be an incremental improvement over the G1 (fixing some of its shortcomings) rather than a complete top-to-bottom redesign, which might have led to this being addressed? As such, could a future firmware release improve these speeds... or is it definitely a hardware restriction?

    I have to say I am personally torn between the OCZ Vertex and Intel G2 at the moment. Whilst I accept the G2 seems to be the quicker drive in the real world, I was disappointed that they did not improve the sequential write speeds and in addition to this, they do seem a little slow with support. The OCZ on the other hand seems a bit of an all rounder and not that much slower than the G2. In addition to this I REALLY like OCZ's approach to supporting these drives and they really seem to listen to their customers feedback.

    One final question....when installing an SSD into a laptop with a fresh Windows 7 install, is there now any need for special formatting / OS settings to ensure best drive performance / life? There is a lot of stuff on the web but it all seems particularly relevant for XP and partially Vista but I was under the impression that Win7 was designed to work with SSDs out of the box?
    Reply
  • derkurt - Thursday, September 03, 2009 - link

    Then, there is another reason why SSDs are not covered extensively by the mainstream press: They are too complicated.

    Let's say you want to buy a hard disk. You could just buy any hard disk, since the difference between good and bad ones is fairly small. If you buy an ExcelStor, for example, you will still get something which works and delivers sufficient performance compared to faster models. Unless you are looking at the server market, there is not that much difference at all. Some models have larger caches, faster seek times and higher transfer rates due to higher rotational speeds, but the main difference is capacity, so the market is transparent.

    Now look at the SSD market: The difference between good and bad ones is huge, incredibly huge. The Intel G2 is lightning fast while some old JMicron-based drives are much worse than a 5-years-old hard disk. You can't just go and buy "an SSD". You need to be informed:

    What controller is the SSD using? Do I have to align my partitions, or is my operating system detecting the SSD and doing that for me? Does my OS support TRIM? Does my AHCI driver support TRIM? Does my SSD support TRIM? Does my current firmware revision support TRIM, and if so, do I need to flash a beta firmware which still has some serious flaws in it? How is the performance degradation after heavy use? What about random write access times (very big differences here which strongly affect real world performance)?

    If you don't care about the above, chances are you will get a crappy drive. And even if you do, you'll have a hard time finding out some essential facts (thanks Anand!), since the manufacturers aren't exactly putting them on their webpages. They will tell you the capacity and the maximum linear transfer rates. That's all, basically. You will have to do some exhaustive googling to investigate what controller the drive is using, whether the firmware supports TRIM, and so on. Even Intel is holding back with detailed information, though they wouldn't have to, since they have nothing to hide as their drives are the fastest in nearly all aspects.

    I don't know for sure why the manufacturers are making a secret out of essential information, even if they can shine there. But there's one thing I do know: Only when people don't need to care about controllers, OS support, firmwares etc. anymore, SSDs are ready to hit the mainstream.
    Reply
  • smjohns - Thursday, September 03, 2009 - link

    I fully agree with you here and it is one of the reasons why I have not taken the plunge yet. I am definitely holding out for Win7 and then upgrade my laptop with both that and an SSD.

    Even after reading these great articles, whilst I now know which drives support TRIM, and that none of them have this functionality fully enabled and will require a future firmware update ("shudders"), the SSD market is indeed a confusing place to be. And that's before you consider having to align partitions (what the heck is this?) and the various settings in the BIOS / OS you need to enable / disable to ensure your lovely new drive does not die within a few weeks / months / years.

    If the industry really does want widespread adoption of these new drives, it needs to resolve these issues and come up with some easy and readily available standards we can all follow. I just hope Win7 is as SSD friendly as we are led to believe.
    Reply
  • derkurt - Thursday, September 03, 2009 - link

    quote:

    and thats before you consider having to align partitions (what the heck is this)


    AFAIK, "aligning" partitions means that the logical layout of blocks has to match the physical block assignment on the SSD in a certain way; otherwise writing one logical block at the filesystem level may result in an unnecessary I/O operation covering two blocks on the SSD (because the logical block spans the boundary between two physical blocks). But don't ask me for details, I haven't dug into that yet.

    According to MS, Windows 7 detects SSDs and applies a proper alignment scheme automatically during installation of the OS. If you'd like to install a Linux distribution or an older version of Windows, you'll probably have to take care of that by yourself, unfortunately.

    quote:

    and the various settings in the BIOS / OS you need to enable / disable to ensure your lovely new drive does not die within a few weeks / months / years.


    I guess there aren't that many; you should just turn on AHCI support. The drive will work without it, but you need it to enable NCQ, which can give you a 5-10% performance boost. However, you need to do this before the OS installation, otherwise your OS might cease to boot. Oh, and you may also have to temporarily turn off AHCI support when flashing a new firmware, because some flashing tools struggle if AHCI is turned on.

    I hope that with the advent of Windows 7 going into public sale, SSD manufacturers will start to ship reliable, TRIM-enabled firmware revisions. If so, you shouldn't have to think about all these issues anymore as long as you are using Windows 7.
    Reply
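    The alignment described above reduces to one modulo check: the partition's starting LBA times the sector size must be a multiple of the flash page size (and ideally of the erase-block size). A sketch, assuming 512-byte sectors and a hypothetical 4 KB page:

```python
SECTOR = 512  # bytes per LBA sector (assumed)

def is_aligned(start_lba, page_size=4096):
    """True if a partition starting at this LBA begins on a flash
    page boundary, so one filesystem block maps onto one page
    instead of straddling two."""
    return (start_lba * SECTOR) % page_size == 0

print(is_aligned(63))    # False: the classic XP default (CHS legacy offset)
print(is_aligned(2048))  # True: the Windows 7 default (1 MiB offset)
```

    A 1 MiB starting offset is a convenient choice because it is a multiple of every page and erase-block size in common use, which is presumably why Windows 7's installer picks it.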
  • derkurt - Thursday, September 03, 2009 - link

    I was one of the lucky guys to get an Intel G2 drive before they stopped shipping it for a while, and I can absolutely confirm everything Anand states about performance.

    However, I still wonder why there is relatively little competition out there. At least in theory, it takes far less know-how to produce a good SSD than is required to manufacture reliable hard disk drives; think about the expensive and complicated precision mechanics involved. Actually, there are some Chinese manufacturers most of us have never heard of, such as RunCore, which manage to deliver SSDs of at least usable quality.

    Where is the Samsung drive that blows the competition away? What about Seagate, Western Digital, Hitachi? Are they just watching from the sidelines while SSDs from some young and small companies are cannibalizing their markets?

    At the time the shift from CRTs to LCDs was taking place, German premium TV manufacturer Loewe estimated that it would take many years until CRTs became obsolete. But the change happened so fast it nearly blew off their business before they finally started to ship high-quality LCDs in response to market demand. It seems to me that the very same thing is happening again now.

    The G2 is gorgeous, no doubt about it. But the price point is still way above being ready to hit the mainstream. Computers are simply not important enough to Joe Sixpack to spend 200+ USD for storage solutions only, even if it _really_ accelerates the machine (something most people won't believe until they experienced it themselves), and especially considering the low capacities offered by SSDs so far.

    If something as great as the G2 can be offered for 240 USD while being sold to a relatively small audience, what prices can we expect to see once the mainstream is hit? If USB sticks can be sold for less than 5 USD, what is the fundamental obstacle to reaching a price point of 60 USD for high-quality SSDs? Of course, SSDs contain much more intelligence than USB pen drives: multi-channel controllers with sophisticated strategies, caches, and so on. But the main difference should be the effort required to engineer these devices rather than the cost of building them.

    I am a bit frustrated that while there are SSDs available now which deliver superior performance, they still cover a small niche of enthusiasts (and there are probably a lot more people who would want to buy one if they only knew these things exist), and the traditional hard drive manufacturers decline to join the game. The most important reason why Intel priced the G2 at a more affordable level is probably not the competition from Indilinx drives, but rather the idea that they can gain more profit by selling many more drives, even at a lower price, as long as production costs are fairly small.

    Is Samsung sleeping, or are they just fearing that the shift to SSDs might destroy their mechanical hard drive business? I doubt that they don't have engineers capable of creating SSDs which deliver a performance comparable to Intel's drives. Maybe the mediocre performance of their SSDs is part of a strategy, which says that SSD development shouldn't be pushed too fast until the rest of the market is really forcing them to do so.

    Companies such as Apple need to sell good SSDs with their computers, by default. Why can't premium PC manufacturers like Apple sell their hardware with a G2 drive, while they are offering similarly expensive CPUs? If you spent the 240 USD on a CPU upgrade instead, I'd take any bet that you would be unable to feel a comparable performance gain. It's a shame that PC sellers neglect drive performance while at the same time stressing the CPU power of their systems in their advertisements. Only if Seagate & Co. realize that they are losing a large and growing market share by not joining the SSD race will prices drop. So far, they just don't care about some hardware geeks like us.
    Reply
  • pepito - Monday, November 16, 2009 - link

    There are a bunch of companies selling SSDs already; it's just that you don't know where to find them, and most reviewers only care about big players such as Intel or Samsung.

    If you check, for example, http://kakaku.com/pc/ssd/ you can see there are currently 24 manufacturers listed there (use Google Translate, as it's in Japanese).

    Some you probably never heard of: MTRON, Greenhouse, Buffalo, CFD, Wintec, PhotoFast, etc.
    Reply
  • iwodo - Thursday, September 03, 2009 - link

    I have trouble understanding WHY Apple uses Samsung's CRAPPY SSDs like everyone else when they could easily make their own.

    An SSD, like all the Indilinx drives, is nothing more than flash chips soldered onto a PCB with an Indilinx controller. Apple is already the largest flash buyer in the world; they probably buy the cheapest flash memory on the market. (Intel and Samsung of course don't count, since they make the flash themselves.) Building an SSD themselves would add $20 on top of 8 chips of 64 Gb flash.

    Why they don't build one and use it across their Macs is beyond me, since even the firmware is the same as everyone else's.
    Reply
  • pepito - Monday, November 16, 2009 - link

    For the same reason that Dell doesn't make their own batteries: it's not their business. Reply
  • Borski - Thursday, September 03, 2009 - link

    How does the G.Skill Falcon compare with the reviewed units? I've seen very good reviews (close to the Vertex) elsewhere, but they don't mention things like used vs. new performance or power consumption.

    I'm considering buying the G.Skill Falcon 64 GB, which is cheaper than the Agility in some places.
    Reply
  • zodiacfml - Wednesday, September 02, 2009 - link

    Very informative, answered more than anything in my mind. Hope to see this again in the future with these drive capacities around $100. Reply
  • mgrmgr - Wednesday, September 02, 2009 - link

    Any idea whether the (mid-Sept release?) OCZ Colossus's internal RAID setup will handle the problem of RAID controllers not being able to pass Windows 7's TRIM command to the SSD array? I'm intent on getting a new Photoshop machine with two SSDs in RAID-0 as soon as Win7 releases, but the word here and elsewhere so far is that RAID will block the TRIM function. Reply
  • kunedog - Wednesday, September 02, 2009 - link

    All the Gen2 X-25M 80GB drives are apparently gone from Newegg . . . so they've marked up the Gen1 drives to $360 (from $230):
    http://www.newegg.com/Product/Product.aspx?Item=N8...

    Unbelievable.
    Reply
  • gfody - Wednesday, September 02, 2009 - link

    What happened to the gen2 160gb on Newegg? For a month the ETA was 9/2 (today) and now it's as if they never had it in the first place. The product page has been removed.

    It's like Newegg are holding the gen2 drives hostage until we buy out their remaining stock of gen1 drives.
    Reply
  • iwodo - Tuesday, September 01, 2009 - link

    I think it acts as a good summary. However, someone wrote last time about the Intel drive handling random reads / writes extremely poorly during sequential reads / writes.

    Has Anand investigated yet?

    I am hoping the next-gen Intel SSD coming in Q2 '10 will bring some substantial improvement.
    Reply
  • statik213 - Tuesday, September 01, 2009 - link

    Does the RAID controller propagate TRIM commands to the SSD? Or will having RAID negate TRIM? Reply
  • justaviking - Tuesday, September 01, 2009 - link

    Another great article, Anand! Thanks, and keep them coming.

    If this has already been discussed, I apologize. I'm still exhausted from reading the wonderful article, and have not read all 17 pages of comments.

    On PAGE 3, it talks about the trade-off of larger vs. smaller pages.

    I wonder if it would be feasible to make a hybrid drive, with a portion of the drive using small pages for faster performance when writing small files, and the majority of it being larger pages to keep the management of the drive reasonable.

    Any file could be written anywhere, but the controller would bias small writes to the small pages and large writes to the large pages.

    Externally it would appear as a single drive, of course, but deep down in the internals it would essentially be two drives. Each of the two portions would be tuned for maximum performance in different areas, but able to serve as backup or overflow if the other portion became full or ever got written to too many times.

    Interesting concept? Or a hare-brained idea by an ignorant amateur?
    Reply
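    For what it's worth, the routing part of the idea above is easy to sketch: pick a size threshold and steer each write to the region whose page size suits it. A toy Python sketch (the threshold and page sizes are arbitrary; the hard parts a real controller faces, such as tracking two mapping tables and migrating data between regions, are omitted):

```python
SMALL_PAGE, LARGE_PAGE = 2048, 16384   # hypothetical page sizes in bytes

def route_write(length, threshold=4096):
    """Toy version of the hybrid idea above: bias small writes to the
    small-page region and large writes to the large-page region."""
    return "small-page region" if length <= threshold else "large-page region"

print(route_write(512))       # small metadata update -> small-page region
print(route_write(1 << 20))   # 1 MB sequential write -> large-page region
```

    The interesting engineering questions are the ones the sketch leaves out: how big each region should be, and what to do when one region fills or wears faster than the other.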
  • CList - Tuesday, September 01, 2009 - link

    Great article, wonderful to see insightful, in depth analysis.

    I'd be curious to hear anyone's thoughts on what the implications are of running virtual hard disk files on SSDs. I do a lot of work these days in virtual machines, and I'd love to get them feeling more snappy, especially on my laptop, which is limited to 4 GB of RAM.

    For example;
    What would the constant updates of those vmdk (or "vhd") files do to the disk's lifespan?

    If the OS hosting the VM is windows 7, but the virtual machine is WinServer2003 will the TRIM command be used properly?

    Cheers,
    CList
    Reply
  • pcfxer - Tuesday, September 01, 2009 - link

    Great article!

    "It seems that building Pidgin is more CPU than IO bound.."

    Obviously, Mr. Anand doesn't understand how compilers work ;). Compilers will always be CPU- and memory-bound; reduce the memory in your computer to, say, 256 MB (or lower) and you'll see what I mean. The levels of recursion necessary to follow the productions (the grammar rules that define the language) use up memory but would rarely touch the drive unless the OS had terrible resource management. :0
    Reply
  • CMGuy - Wednesday, September 02, 2009 - link

    While I can't comment on the specifics of software compilers, I know that faster disk IO makes a big difference when you're performing a full build (compilation and packaging) of software.
    IDEs these days spend a lot of their time reading/writing small files (that's a lot of small, random disk IO), and a good SSD can make a huge difference to this.
    Reply
  • Abjuk - Wednesday, September 02, 2009 - link

    Agreed CM, my current project at work takes about six minutes to build from scratch and CPU usage never gets above about 35%. The process is totally IO bound.

    It really depends on whether you have several large source files or several hundred small ones.
    Reply
  • Weyzer - Tuesday, September 01, 2009 - link

    Good article and testing, but why was the Crucial M225 not mentioned at all? Its performance is similar to the Vertex drives, I know, but I think it could have been mentioned somewhere, whether it falls in the good or the bad range. Reply
  • jasperjones - Tuesday, September 01, 2009 - link

    $997 @ Newegg omgomgomg

    Needless to say, that price will come down quickly. So more seriously, after reading the article I really feel I understand better what to look for in an SSD. Thanks!
    Reply
  • paesan - Tuesday, September 01, 2009 - link

    Wow, does NE really think that anyone will buy the Intel drive at that price. OMG!!! Funny thing, it is in stock and it says limit 1 per customer. Lol Reply
  • CList - Tuesday, September 01, 2009 - link

    Obviously someone is buying them at that price or they'd lower it. The people who can't wait two or three weeks and are willing to be gouged for these drives are the ones that allow NewEgg to give us low margins on other products while not going out of business :D

    Reply
  • ravaneli - Tuesday, September 01, 2009 - link

    I just decided to buy one, and when I opened Newegg I couldn't believe my eyes. I hope that is only because they have a few drives left, and once Intel pumps some stock into the retailers the prices will go back to Intel's retail.

    Does anyone know what the production capacity of Intel's SSD factories is? I don't want to wait a whole year until the market saturates.
    Reply
  • LazierSaid - Tuesday, September 01, 2009 - link

    This article was so good that Newegg doubled their X25M G2 prices overnight.

    Reply
  • medi01 - Tuesday, September 01, 2009 - link

    Yep, very impressive advertisement indeed. Reply
  • HVAC - Tuesday, September 01, 2009 - link

    I'd rather have ewoks in the sequels than Jar-jar ... Reply
  • Naccah - Tuesday, September 01, 2009 - link

    Newegg's prices on all the Intel SSDs skyrocketed. The X-25 G2s are $499 now. Is this price a reflection of the high demand or did Intel change the price again? Reply
  • Mr Perfect - Tuesday, September 01, 2009 - link

    Probably demand. When I saw that price, I shopped around to see what was going on. Answer? Everyone else seems to be out of stock. Reply
  • Naccah - Tuesday, September 01, 2009 - link

    I've been waiting to get an SSD till Win 7 released hoping that the prices would have stabilized somewhat by that time. The recent price fluctuation is disturbing as well as the availability of the X25 G2. When the G2 first hit Newegg I was surfing the site and could have grabbed one for $230, but like I said I was content to wait. Now I'm having second thoughts! and wondering if I should grab one if the price goes down again. Reply
  • gfody - Tuesday, September 01, 2009 - link

    That doesn't explain the 160gb - it's not even in stock yet. I have been waiting a month for this drive to be in stock and here they more than double the price one day before the ETA date! It's an outrage.. if I'd known the drive was $1000 I would have bought something else.

    Way to screw your customers Newegg
    Reply
  • araczynski - Tuesday, September 01, 2009 - link

    A) Your intro has the familiar smell of Tom's Hardware; you'd do well to be without that, it's unbecoming.

    B) Your final words smell of the typical big-corp establishment mentality: bigger, faster, more expensive, consumers want! While, if the market is any indication, the complete opposite is true: people want 'good enough' for cheap, as a recent Wired magazine article more or less said. Granted, Wired isn't the source for in-depth technical reading, but it is sometimes a good way to get the pulse of things... sometimes; still, more often than anything coming out of the mouths of the big corps.

    C) Everything in between A and B is great though :) Please leave the opinions/spins to the PR machines.

    Personally, the cost of these things is still more than I'm willing to pay for any speed increase. The idiotic shenanigans of firmwares and features only present after special downloads / phases of the moon make me just blow off the whole technology for a few more years. I'll revisit this in, say, 2 or 3 years; perhaps the MLCs will finally die off and the SLCs (unless I have the two backwards) or something better will roll out with a longer lifespan.
    Reply
  • Anand Lal Shimpi - Tuesday, September 01, 2009 - link

    A) My intention with the intro was to convey how difficult it was for me to even get to the point where I felt remotely comfortable publishing this article. I don't like posting something that I don't feel is worthy of the readership's reception. My sincere apologies if it came off as arrogant or anything other than an honest expression of how difficult it was to complete. I was simply trying to bring you all behind the scenes and take you into the crazy place that's my mind for a bit :)

    B) I agree that good enough for cheap is important, hence my Indilinx recommendation at the end. But we can't stifle innovation. We need bigger, better, faster (but not necessarily more expensive, thank you Moore's Law) to keep improving. I remember when the P3 hit 1GHz and everyone said we don't need faster CPUs. If we stopped back then we wouldn't have the apps/web we have today since developers can count on a large install base of very fast processors.

    Imagine what happens in another decade when everyone has many-core CPUs in their notebooks...

    Take care,
    Anand
    Reply
  • DynacomDave - Tuesday, September 29, 2009 - link

    First - Anand thanks for the good work and the great article.

    I too have an older laptop that has a PATA interface that I'd like to upgrade with an SSD. I contacted Super Talent about their MasterDrive EX2 - IDE/PATA. Their response was: "We only use the Indilinx controller for SATA drives, like the UltraDrive series. We use a Phison controller for EX2/IDE drives."

    I want to improve performance not degrade it. I don't know if this will perform like the Indilinx or like the old SSDs. Can anyone help me with this?
    Reply
  • bji - Tuesday, September 01, 2009 - link

    There are a few more smaller players in the SSD controller game that don't ever show up in these reviews. They are Silicon Motion and Mtron. The reason I am interested in them is because I have a laptop that is PATA only (it's old I know but I love it and I want to extend its life with an SSD), and I am trying to get an SSD that works in it.

    Turns out the Mtron MOBI SSDs are not compatible with this laptop. I have no idea why. So I have placed an order on eBay for an SSDFactory SSD and am crossing my fingers that it will work.

    Mtron makes SATA SSD drives so they could be included in these reviews, and I don't know why they are excluded. It would be interesting to see how their controllers stack up. I personally own two Mtron SSD drives (both 32 GB SLC drives) that I tried to get to work in my laptop and failed to - so one is now the system disk in my desktop and it is very fast (at least compared to platter drives, maybe not compared to newer SSDs). The other one I am still trying to find a use for.

    The only Silicon Motion controller drives I have seen are PATA drives so they clearly are a different beast than the SATA drives typically reviewed in these articles. But I would still be interested in seeing the numbers for the Silicon Motion controller just to get an idea of how well they stack up against the other controllers, especially for the 4K random writes tests. The PATA interface ought not to be the limiting factor for that test at least.
    Reply
  • paesan - Tuesday, September 01, 2009 - link

    I see Newegg has a Patriot Torqx and a Patriot Torqx M28. What is the difference between the two drives? Reply
  • paesan - Tuesday, September 01, 2009 - link

    After reading through the Patriot forum I found the differences. The M28 has 128MB of cache compared to 64MB on the non-M28. The biggest difference is that the M28 uses a Samsung controller instead of the Indilinx controller on the non-M28. I wonder why they switched controllers. Reply
  • valnar - Tuesday, September 01, 2009 - link

    It seems that using TRIM would make a "used" SSD faster, no doubt, but is it required? Would it be okay to buy an SSD for a Windows XP box and just set it and forget it? Even used and fragmented, it appears to be faster than any hard drive. My second question is longevity: how long would one last compared to a hard drive? Reply
  • valnar - Wednesday, September 02, 2009 - link

    Anyone? Reply
  • antinah - Tuesday, September 01, 2009 - link

    For another great article on SSD technology.

    I'm considering an Intel G2 for my brand-new MacBook Pro, and if I understand what I've read correctly, performance should not degrade too much even though OS X doesn't support TRIM yet.

    I also doubt Apple will wait too long before they release an update with TRIM support for OS X.

    I just recently switched to Mac after a lifetime with PC/Windows. Anything I should be aware of when I install the SSD in a Mac compared to a PC running Windows (other than voiding the warranty and such)? I'm thinking of precautions regarding swap usage and the like.

    Best regards from Norway
    Stein
    Reply
  • medi01 - Tuesday, September 01, 2009 - link

    So I absolutely need to pay 15 times as much per gigabyte as for normal HDDs, so that when I start Photoshop, Firefox and WoW straight after Windows boots, it loads a whopping 24 seconds faster?

    That's what one calls "absolutely need" indeed, and what an amazingly common combination of apps you chose.
    Reply
  • Anand Lal Shimpi - Tuesday, September 01, 2009 - link

    You can look back at the other two major SSD pieces (X25-M Review and The SSD Anthology) for other examples of application launch performance improvements. The point is that all applications launch as fast as possible, regardless of the state of your machine. Whether you're just firing it up from start (which is a valid use scenario as many users do shut off their PCs entirely) or launching an application after your PC has been on for a while, the apps take the same amount of time to start. The same can't be said for a conventional hard drive.

    Take care,
    Anand
    Reply
  • Seramics - Tuesday, September 01, 2009 - link

    It's not about the 24 seconds, but rather the wholly different, near-instantaneous experience you get with an SSD that cannot be replicated by HDDs. Reply
  • medi01 - Tuesday, September 01, 2009 - link

    Nobody starts the mentioned apps together directly after boot.

    I've played WoW for a couple of years, and never had to wait a dozen seconds for it to start.

    Most well-written applications start almost instantly.

    And the whole "after fresh boot" scenario is not quite valid either; I don't recall when I last switched off my PC, "hibernate" works just fine.

    The "you get a completely different experience" MIGHT be a valid point, but it was undermined by the ridiculous choice of apps to start. And I suspect that's because NOT starting everything together right after boot didn't show as big a gap.
    Reply
  • kunedog - Tuesday, September 01, 2009 - link

    Anand, I think your article titled "Intel Forces OCZ's Hand: Indilinx Drives To Drop in Price" (http://www.anandtech.com/storage/showdoc.aspx?i=36...) could also use a follow-up, primarily to explain why the opposite has happened (especially with the Intel drives). Is this *all* attributable to Intel's disaster of a product launch? Maybe not, but in any case it deserves more attention than a brief mention at the end of this article. Reply
  • zero2espect - Tuesday, September 01, 2009 - link

    Great work again. It's for this reason that I've been coming here for ages: great analysis, great writing, and an understanding of what we're all looking for.

    One thing that you may have overlooked is the difference in user experience due to the lack of HDD "buzz". I'm fortunate enough to find myself in possession of a couple of G2 160GB jobbies; one is in my gaming rig and the other in my work notebook. Using the notebook, the single biggest difference is speed (it makes an 18-month-old notebook seem like it performs as fast as a current-generation desktop), but the next biggest and very noticeable difference is the lack of "hum", "buzz", "thrash" and "vibrate" as the drive goes about its business.

    Thanks AnandTech, and thanks Intel ;-P
    Reply
  • Mr Perfect - Tuesday, September 01, 2009 - link

    Anand,

    Would you happen to know if there are different revisions of the G2 drives out? Newegg is listing a 80GB Intel drive with model #SSDSA2MH080G2C1 for $499, and another 80GB Intel with model #SSDSA2MH080G2R5 for $599. They are both marked as 2.5" MLC Retail drives, and as far as I can tell they're both G2. What has a R5 got that a C1 doesn't? The updated firmware maybe?

    Thanks!

    PS, dear Newegg, WTF? 100% plus price premiums? I'm thinking I'll just wait until stock returns and buy from another site just to spite you now....
    Reply
  • gfody - Tuesday, September 01, 2009 - link

    It looks like the R5 is just a different retail package: shiny box, nuts and a bracket instead of just the brown box.
    As for why Newegg is charging an extra $100 for it... just look at what they're doing with the other prices. I am losing so much respect for Newegg right now. Disgusting!
    Reply
  • CList - Tuesday, September 01, 2009 - link

    Don't be disgusted at Newegg, be disgusted at the people who are willing to pay the premium price! Newegg is simply playing a reactionary role in the course of natural free-market economics and cannot be blamed. The consumers, on the other hand, are willing participants and are choosing to pay those prices. When no one is left who is willing to pay those prices, Newegg will quickly lower them.

    Cheers,
    CList
    Reply
  • gfody - Tuesday, September 01, 2009 - link

    I don't understand how consumers have any control over what Newegg is charging for the 160gb that's not even in stock yet.

    If Newegg wants to get the absolute most anyone is willing to pay for every piece of merchandise they may as well just move to an auction format.
    Reply
  • DrLudvig - Tuesday, September 01, 2009 - link

    Yeah, if you look at Intel's website (http://www.intel.com/cd/channel/reseller/asmo-na/e...), you will see that the R5 includes a "3.5" desktop drive bay adapter to 2.5" SSD adapter bracket, screws, installation guide, and warranty documentation".
    Why on earth Newegg is charging that much more for it I really don't know; here in Denmark the R5 retails for about 15 bucks more than the C1, which really isn't that bad.
    Reply
  • Mr Perfect - Tuesday, September 01, 2009 - link

    Whoa. That's it? An adapter kit? With that kind of price difference, I expected it to be the D0 stepping of SSDs or something.

    Thanks for clearing that up.
    Reply
  • NA1NSXR - Monday, August 31, 2009 - link

    The reason is not that performance or longevity aren't good enough, but that improvements are still coming too quickly and prices are still falling fast. Once the frequency of significant improvements and price drops slows down, I will more seriously consider an SSD. I suppose it depends on how much waiting on I/O you do, though. For me, a VelociRaptor is not intolerable. Reply
  • bji - Tuesday, September 01, 2009 - link

    Perhaps this is what you meant, but you should really clarify. It's still not time for YOU to buy an SSD. SSDs represent an incredible performance improvement that is well worth the money for many people.
    Reply
  • DragonReborn - Monday, August 31, 2009 - link

    Say I wanted to go crazy (it happens)... should I get two 80GB Intel G2s or the 160GB Intel G2? Same space... is the RAID 0 performance worth it?

    I have all my important data backed up on a big 2TB drive, so the two SSDs (or one 160GB) will just hold my OS/programs/etc.

    Thoughts?
    Reply
  • kensiko - Monday, August 31, 2009 - link

    I would say that in real-world usage, you won't notice a huge difference between RAID and no RAID; SSDs are already fast enough for the rest of the system. Also, TRIM may not work in a RAID configuration for now.

    Just look at Windows startup: no difference between the Gen2 SSDs!
    Reply
  • Gc - Monday, August 31, 2009 - link

    This is a nice article, but the numbers leave an open question.
    What is Samsung doing right? Multiprocess/multithread performance?

    The article finds the Samsung drives' performance is low on 2MB reads:

    (new 2MB sequential reads not given, assume same as 'used')
    used 2MB sequential reads (low rank, 79% of top)

    good on 2MB writes:

    new 2MB sequential writes (middle rank, 89% of top)
    used 2MB sequential writes (2nd place, 91% of top)

    and horrible on 4KB random files:

    (new 4KB random reads not given, assume same as 'used')
    used 4KB random read (bottom ssd ranked, only 36% of top)
    new 4KB random write (low rank, only 9% of top)
    used 4KB random write (bottom ssd ranked, only 3% of top, < HD)

    Yet somehow in the multitasking Productivity test and Gaming test, it was surprisingly competitive:

    multitasking productivity (mid-high rank, 88% of top)
    gaming (mid-high rank, 95% of top)

    The productivity test is described as "four tasks going on at once, searching through Windows contacts, searching through Windows Mail, browsing multiple webpages in IE7 and loading applications". In other words, nearly all READS (except maybe for occasionally writing to disk new items for the browser history or cache).

    The gaming test is described as "reading textures and loading level data", again nearly all READS.

    Q. Given that the Samsung controller's 2MB read performance and
    4KB read performance are both at the bottom of the pack, how
    did it come out so high in the read-mostly productivity test
    and gaming test?

    Does this indicate the Samsung controllers might be better than Indilinx for multiprocess/multithreaded loads?

    (The Futuremark pdf indicates Productivity 2 is the only test with 4 simultaneous tasks, and doesn't say whether the browser tabs load concurrently. The Gaming 2 test is multithreaded with up to 16 threads. [The Samsung controller also ranks well on the communications test, but that may be explained: Communications 1 includes encryption and decompression tasks where Samsung's good sequential write performance might shine.])

    Since many notebooks/laptops are used primarily for multitasking productivity (students, "office"-work), maybe the Samsung was a reasonable choice for notebook/laptop OEMs. Also, in these uses the cpu and drive are idle much of the time, so the Samsung best rank on idle power looks good. (But inability to upgrade firmware is bad.)

    (The article doesn't explain what the load was in the load drive test, though it says the power drops by half if the test is switched to random writes; maybe it was sequential writes for peak power consumption. It would have been helpful to see the power consumption rankings for read-mostly loads.)

    Thanks!
    Reply
  • rcocchiararo - Monday, August 31, 2009 - link

    Your prices are way off; Newegg is charging ludicrous amounts right now :(

    Also, the 128GB Agility was $269 last week. I was super excited, then it went back to $329, and it's now $309.
    Reply
  • shabby - Monday, August 31, 2009 - link

    The 80GB G2 is $399 now! Reply
  • gfody - Tuesday, September 01, 2009 - link

    The Gen2 80GB is at $499 as of 12:00AM PST Reply
  • maxfisher05 - Monday, August 31, 2009 - link

    As of right now (8/31) newegg has the 160GB Intel G2 listed at $899!!!!!!!!!!!!!!!!!!! To quote Anand "lolqtfbbq!" Reply
  • siliq - Monday, August 31, 2009 - link

    Great article! Love reading this. Thanks Anand.

    We gather from this article that all the pain-in-@$$ issues with SSDs come from the mismatch between the size of the read/write page and the erase block. SSDs read and write pages of 4KB, but the minimum size of an erase operation is 512KB. Just wondering: is there any possibility that manufacturers can come up with NAND chips that allow controllers to directly erase a 4KB page without all the extra hassle? What are the obstacles that prevent manufacturers from achieving this today?
    Reply
  • bji - Tuesday, September 01, 2009 - link

    It is my understanding that flash memory has already been pushed to its limit of efficiency in terms of silicon usage in order to allow for the lowest possible per-GB price. It is much cheaper to implement sophisticated controllers that hide the erase penalty as much as possible than it is to "fix" the issue in the flash memory itself.

    It is absolutely possible to make flash memory that has the characteristics you describe - 4K erase blocks - but it would require a very large number of extra gates in silicon and this would push the cost up per GB quite a bit. Just pulling numbers out of the air, let's say it would cost 2x as much per GB for flash with 4K erase blocks. People already complain about the high cost per GB of SSD drives (well I don't - because I don't steal software/music/movies so I have trouble filling even a 60 GB drive), I can't imagine that it would make market sense for any company to release an SSD based on flash memory that costs $7 per GB, especially when incredible performance can be achieved using standard flash, which is already highly optimized for price/performance/size as much as possible, as long as a sufficiently smart controller is used.

    Also - you should read up on NOR flash. This is a different technology that already exists, that has small erase blocks and is probably just what you're asking for. However, it uses 66% more silicon area than equivalent NAND flash (the flash used in SSD drives), so it is at least 66% more expensive. And no one uses it in SSDs (or other types of flash drives AFAIK) for this reason.
    Reply
  • bji - Tuesday, September 01, 2009 - link

    Oh, I just noticed in the Wikipedia article on NOR flash that typical NOR flash erase block sizes are also 64, 128, or 256KB. So the erase blocks are just as problematic there as in NAND flash. However, NOR flash is more easily bit-addressable, so it would avoid some of the other penalties associated with NAND that the smart controllers have to work around.

    So making a NAND or NOR flash with 4K erase blocks would probably make them both 2x-4x more expensive. No one is going to do that; it would push the price back out to where SSDs were not viable, just as they were a few years ago.
    Reply
  • siliq - Tuesday, September 01, 2009 - link

    Amazing answers! Thank you very much Reply
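The page-vs-erase-block mismatch discussed in this exchange can be put in numbers. A minimal sketch (a toy model of a naive controller with no remapping tricks, not any shipping design, assuming the 4KB pages and 512KB erase blocks mentioned above):

```python
PAGE = 4 * 1024                   # smallest unit NAND can read or program
BLOCK = 512 * 1024                # smallest unit NAND can erase
PAGES_PER_BLOCK = BLOCK // PAGE   # 128 pages share one erase block

def naive_amplification(dirty_pages: int) -> float:
    """Write amplification when a naive controller updates `dirty_pages`
    pages inside one block in place: it must read the whole block,
    erase it, and write all 128 pages back."""
    host_bytes = dirty_pages * PAGE
    nand_bytes = BLOCK  # the entire block is rewritten regardless
    return nand_bytes / host_bytes

# Touching a single 4KB page costs a full 512KB rewrite (128x amplification);
# only when all 128 pages change does amplification drop to 1x.
```

This is why real controllers go to such lengths (spare area, remapping, TRIM) to avoid the read-modify-erase-write cycle rather than shrinking the erase block in silicon.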
  • morrie - Monday, August 31, 2009 - link

    My laptop is limited to 4 GB swap. While that's enough for 99% of Linux users, I don't shut down my laptop, it's used as a desktop with dozens of apps running and hundreds of browser tabs. Therefore, after a few months of uptime, memory usage climbs above 4 GB. I have two hard drives in the laptop, and set up a software raid0 1GB swap partition, but I went with software raid1 for the other swap partition. So once the ram is used up for swap, the laptop slows noticeably, but after the raid0 swap partition fills up, the raid1 partition really slows it down. Once that fills up, it hits swap files (non raid) which slow it down more. But thanks to the kernel and the way swappiness works, once about 4 GB of Ram plus about 3 GB of physical swap is used, it really slows. I can gain a bit of speed by adding some physical swap files to increase the ratio of physical swap to ram swap (thus changing swappiness through other means), but this only works for another 1 GB of ram.

    No lectures or advice please, on how I'm using up memory or about how 4GB is more than sufficient, my uptimes are in the hundreds of days on this laptop and thanks to ADD/limited attention span, intermittent printer availability for printing out saved browser tabs and other reasons (old habits dying hard being one), my memory usage is what it is.

    So, the big question is, since the laptop has an eSATA port, can I install one of these SSDs in an external SATA tray, connected via eSATA to the laptop, and move the physical swap partitions to the SSD? I believe that swap on the SSD would be a lot faster even over the eSATA wire than swap on the drives in the laptop (they're 7200 rpm drives btw). I'm aware that using the SSD for swap would shorten its life, but if it lasts a year until faster laptops with more memory are available (and I get used to virtual machines and saving state so I can limit open browser windows), I'll be happy.

    Buying two of the drives and using them raided in the laptop is too costly right now, when prices drop that'll be a solution for this current laptop.

    External SSD over eSATA for Linux swap on a laptop? Faster than my current setup?
    Reply
  • hpr - Monday, August 31, 2009 - link

    Sounds like you have some very small memory leak going on there.

    Have you tried that Firefox plugin that lets you keep your tabs without actually having each tab open in memory?


    TooManyTabs
    https://addons.mozilla.org/en-US/firefox/addon/942...

    Have fun filling up thousands of tabs and having low memory usage.
    Reply
  • gstrickler - Monday, August 31, 2009 - link

    You should be able to use an SSD in an eSATA case, and yes, it should be faster than using your internal 7200 RPM drives. You probably want to use an Intel SSD for that (see page 19 of the article and note that the Intel drives don't drop off dramatically with usage).

    If you don't need the storage of your two internal 7200 RPM drives (or if you can get a sufficiently large SSD), you might be better off replacing one of them with an SSD and reconsidering how you're allocating all your storage.

    As for printer availability, seems to me it would make more sense to use a CUPS based setup to create PDFs rather than having jobs sit in a print queue indefinitely. Then, print the PDFs at your convenience when you have a printer available. I don't know how your printing setup currently works, but it sounds like doing so would reduce your swap space usage.
    Reply
  • sunbear - Monday, August 31, 2009 - link

    Even though most laptops are now SATA-300 compatible, the majority are not able to actually exceed SATA-150 transfer speeds, according to some people who have tried. I would imagine that sequential read/write performance would be important for swap, but SATA-150 will be the limiting factor for any of the SSDs mentioned in Anand's article in this case.


    Here's the situation with Thinkpads:
    http://blogs.technet.com/keithcombs/archive/2008/1...

    The new MacBookPro is also limited to SATA-150.
    Reply
  • smartins - Tuesday, September 01, 2009 - link

    Actually, the ThinkPad T500/T400/W500 are fully SATA-300 compatible; it's only the drives that ship with the machines that are SATA-150 capped.
    I have a Corsair P64 in my T500 and get an average of 180MB/s reads, which is consistent with all the reviews of this drive.
    Reply
  • mczak - Monday, August 31, 2009 - link

    article says you shouldn't expect it soon, but I don't think so. Several dealers already list it, though not exactly in stock (http://ht4u.net/preisvergleich/a444071.html). The price tag, to put it nicely, is a bit steep though. Reply
  • Seramics - Monday, August 31, 2009 - link

    Another great article from AnandTech. Kudos, guys at AT, you're my no. 1 hardware site! Anyway, it's really great that we have a truly viable competitor to Intel: Indilinx. They really deserve the praise. Now we can buy a non-Intel SSD and have no nonsensical stuttering issues! Overall, Intel is still the leader, but it's completely nonsensical how bad their sequential write speed is. I mean, it's even slower than a mechanical hard disk! That's just not acceptable given that the gap in performance is so large, and Intel's SSDs can actually suffer significantly worse real-world performance when sequential write speed matters. Intel, fix your sequential write speed nonsense, please! Reply
  • Seramics - Monday, August 31, 2009 - link

    Sorry for the double post. It was unintentional and I don't know how to delete the second one. Reply
  • Shadowmaster625 - Monday, August 31, 2009 - link

    Subtle. Very subtle. Good article though.

    3 questions:

    1. Is there any way to read the individual page history off the SSD device so I can construct a WinDirStat style graphical representation of the remaining expected life of the flash? Or better yet is there already a program that does this?

    2. Suppose I had a 2 gigabyte movie file on my 60gb vertex drive. And suppose I had 40GB of free space. If I were to make 20 copies of that movie file, then delete them all, would that be the same as running Wiper?

    3. Any guesses as to which of these drives will perform best when we make the move to SATA-III?

    4. (Bonus) What is stopping Intel from buying Indilinx (and pulling their plug)? (Or just pulling their plug without buying them...)

    Reply
  • SRSpod - Thursday, September 03, 2009 - link

    3. These drives will perform just as they do now when connected to a 6Gbps SATA controller. In order to communicate at the higher speed, both the drive and the controller need to support it. So you'll need new 6Gbps drives to connect to your 6Gbps controller before you'll see any benefit from the new interface. Reply
  • heulenwolf - Monday, August 31, 2009 - link

    Yeah, once the technology matures a little more and drives become more commoditized, I'd like to see more features in terms of feedback on drive life, reliability, etc. When I got my refurb Samsung drives from Dell, for example, they could have been on the verge of dying or they could have been almost new. There's no telling. The controller could know exactly where the drive stands, however. Some kind of controller-tracked indication of drive life left would be a feature that might distinguish comparable drives from one another in a crowded marketplace.

    While they're at it, a tool to allow adjusting of values such as the amount of space not reported to the OS with output in terms of write amplification and predicted drive life would be really nifty.

    Sure, it's over the top, but we can always hope.
    Reply
  • nemitech - Monday, August 31, 2009 - link

    I picked up an Agility 120GB for $234 last week ($270 list price - 6% Bing cashback - $20 PayPal discount). I am sure there will be similar deals around Black Friday. $2 per GB is possible for a good SSD. Reply
  • nemitech - Monday, August 31, 2009 - link

    Oops - not eBay - it was NEWEGG. Reply
  • Loki726 - Monday, August 31, 2009 - link

    Thanks a ton for including the Pidgin compile benchmarks. I didn't think that HD performance would make much of a difference (linking large builds might be a different story), but it is great to have numbers to back up that intuition. Keep it up. Reply
  • torsteinowich - Monday, August 31, 2009 - link

    Hi

    You write that the Indilinx wiper tool collects a free-page list from the OS, then wipes those pages. This sounds like a dangerous operation to me, since the OS might allocate some of these blocks after the tool collects the list but before they are wiped.

    Have you received a good explanation from Indilinx about how they ensure file system integrity? As far as I know, Windows cannot temporarily switch an active file system to read-only mode (at least not on the system drive). The only way I could see this tool working safely would be by booting off different media and accessing the file system to be trimmed offline, with a tool that correctly identifies the unused pages for the particular file system being used. I could be wrong, of course; maybe Windows 7 has a system call to temporarily freeze FS writes, but I doubt it.
    Reply
  • has407 - Monday, August 31, 2009 - link

    It: (1) creates a large temporary file (wiper.dat) which gobbles up all (or most) of the free space; (2) determines the LBAs occupied by that file; (3) tells the SSD to TRIM those LBAs; and then (4) deletes the temporary file (wiper.dat).

    From the OS/filesystem perspective, it's just another app and another file. (A similar technique is used by, e.g., Sysinternals' SDelete app on Windows to zero free space. For Windows you could also probably use the hooks used by defrag utilities to accomplish it, but that would be a lot more work.)
    Reply
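The four steps above can be sketched as a toy simulation (all names hypothetical; a real tool queries the filesystem for the file's actual LBA extents and issues real TRIM commands, which this sketch does not):

```python
def wiper_sketch(all_lbas: set, used_lbas: set) -> set:
    """Simulate the wiper.dat approach on a toy block device.
    all_lbas: every LBA on the drive; used_lbas: LBAs held by real files.
    Returns the set of LBAs that would be sent to the SSD via TRIM."""
    free_lbas = all_lbas - used_lbas
    # Step 1: wiper.dat grows until it has swallowed the free space,
    # so the filesystem allocates it exactly the free LBAs.
    wiper_dat = set(free_lbas)
    # Steps 2-3: look up the LBAs the file occupies and TRIM them.
    trimmed = set(wiper_dat)
    # Step 4: delete wiper.dat; its LBAs return to the free pool,
    # now known-erased by the SSD.
    return trimmed

# Integrity holds because, to the OS, wiper.dat is just a file: any block
# the OS allocates in the meantime is simply never part of wiper.dat.
```

The key design point is that the tool never touches blocks behind the filesystem's back; it only TRIMs space the filesystem has explicitly handed to its own temporary file.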
  • cghebert - Monday, August 31, 2009 - link

    Anand,

    Great article. Once again you have outclassed pretty much every other site out there with the depth of content in this review. You should start marketing t-shirts that say "Everything I learned about SSDs I learned from AnandTech"

    I did have a question about gaming benchmarks, since you made this statement:

    " but as you'll see later on in my gaming tests the benefits of an SSD really vary depending on the game"

    But I never saw any gaming benchmarks. Did I miss something?
    Reply
  • nafhan - Monday, August 31, 2009 - link

    Just wanted to say awesome review.
    I've been reading Anandtech since 2000, and while other sites have gone downhill or (apparently) succumbed to pressure from advertisers, you guys have continued to give in depth, critical reviews.
    I also appreciate that you do some real analysis instead of just throwing 10 pages of charts online.
    Thanks, and keep up the good work!
    Reply
  • zysurge - Monday, August 31, 2009 - link

    Awesome amazing article. So much information, presented clearly.

    Question, though? I have an Intel G2 160GB drive coming in the next few days for my Dell D830 laptop, which will be running Windows 7 x64.

    Do I set the controller to ATA and use the Intel Matrix driver, or set it to AHCI and use Microsoft's driver? Will either provide an advantage? I realize neither will provide TRIM until Q4, but after the firmware update, both should, right?

    Thanks in advance!
    Reply
  • ggathagan - Wednesday, September 16, 2009 - link

    From page 15 (Early Trim support...):
    Under Windows 7 that means you have to use a Microsoft made IDE or AHCI driver (you can't install chipset drivers from anyone else).
    Reply
  • Mumrik - Monday, August 31, 2009 - link

    but I can't live with less than 300GB on that drive, and SSDs in usable sizes still cost more than high end video cards :-(

    I really hope I'll be able to pick up a 300GB drive for 100-200 bucks in a year or so, but it is probably a bit too optimistic.
    Reply
  • Simen1 - Monday, August 31, 2009 - link

    This is simply wrong. Ask anyone over 10 years old whether this mathematical statement is true or false: 80 can never equal 74.5.

    Now, some claim that 1 GB = 10^9 B and others claim that 1 GB is 2^30 B. Who is really right? What do the G and the B mean? Who defines that?

    The answers are easy to find and document. B means byte. G stands for giga and means 10^9, not 2^30. Giga is defined in the International System of Units, SI.

    No standardization organization has _ever_ defined giga to be 2^30. But the IEC, the International Electrotechnical Commission, has defined "Gi" as 2^30. This is supposed to be used for digital storage so people won't be confused by all the misunderstandings around this. Misunderstandings that mainly come from Microsoft and quite a few other big software vendors: companies that ignore the mathematical error in their software when they claim that 80 GB = 74.5 GB, and ignore the international standards on how to abbreviate large numbers.
    Reply
  • GourdFreeMan - Tuesday, September 01, 2009 - link

    You would, in fact, be incorrect. I refer you to ANSI/IEEE Std 1084-1986, which defines kilo, mega, etc. as powers of two when used to refer to sizes of computer storage. It was common practice to use such definitions in Computer Science from the 1970s until standards were changed in 1991. As many people reading AnandTech received their formal education during this time period, it is understandable that the usage is still commonplace. Reply
  • Undersea - Monday, August 31, 2009 - link

    Where was this article two weeks ago, before I bought my OCZ Summit? I hope this little article will jump-start Samsung.

    Thanks for all the hard work :)
    Reply
  • FrancoisD - Monday, August 31, 2009 - link

    Hi Anand,

    Great article, as always. I've been following your site since the beginning and it's still the best one out there today!

    I mainly use Macs these days and was wondering if you knew anything about Apple's plans for TRIM?

    Thanks for all the fantastic work, very technical yet easy to understand.

    François
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    Thanks for your support over the years :)

    No word on Apple's plans for TRIM yet, I am digging though...

    Take care,
    Anand
    Reply
  • Dynotaku - Monday, August 31, 2009 - link

    Amazing article as always. Now I just need one that shows me how to install just Win 7 and my Steam folder to the SSD and move Program Files and "My Documents" (or whatever it's called in Win 7) to a mechanical disk. Reply
  • GullLars - Monday, August 31, 2009 - link

    A really great article with loads of data.
    I only have one complaint. The 4KB random read/write tests in IOmeter were done with QD=3; this simulates a really light workload and does not allow the controllers to make use of the potential of all their flash channels. I've seen Intel's X25-M scale up to 130-140 MB/s of 4KB random reads @ QD=64 (medium load) with AHCI activated. I have not yet tested my Vertex SSDs or Mtron Pros, but I suspect they also scale well beyond QD=3.

    It would also be useful to compare the different tests in the HDD suite in PCMark Vantage instead of only the total score.
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    The reason I chose a queue depth of 3 is because that's, on average, what I found when I tried heavily (but realistically) loading some Windows desktop machines. I rarely found a queue depth over 5. The super high QDs are great for enterprise workloads but I don't believe they do a good job at showcasing single user desktop/notebook performance.

    I agree about the individual HDD suite tests, I was just trying to cut down on the number of graphs everyone had to mow through :)

    Take care,
    Anand
    Reply
  • heulenwolf - Monday, August 31, 2009 - link

    Anand,

    I'd like to add my thanks to the many in the comments. Your articles really do stand out in their completeness and clarity. Well done.

    I'm hoping you or someone else in the forums can shed some light on a problem I'm having. I got talked into getting a Dell "Ultraperformance" SSD for my new work system last year. It's a Samsung-branded SLC SSD, 64 GB capacity. As your results predict, it's really snappy when first loaded, and performance degrades after a few months with the drive ~3/4 full. One thing I haven't seen predicted, though, is that the drives have only lasted 6 months. The first system I received was so unstable, without explanation, that we convinced Dell to replace the entire machine. Since then, I'm now on my second refurb SSD replacement under warranty. In both SSD failures, the drive worked normally for ~6 months, then performance dropped to 5-10 MB/s, Vista boot times went up to ~15 minutes, and I paid dearly in time for every single click and keypress. Once everything finally loaded, the system behaved almost normally. Dell's own diagnostics pointed to bad drives, yet in each case the bad SSD continued to work, just at super slow speeds. I was careful to disable Vista's automatic defrag with every install.

    My IT staff has blamestormed first Vista (we're still mostly an XP shop) and now SSDs in general as the culprit. They want me to turn in the SSD and replace it with a magnetic hard drive. So, my question is how to explain this:
    A) Am I that 1 in a bazillion case of having gotten a bad system followed by a bad drive followed by another bad drive
    B) Is there something about Vista - beyond auto defrag - that accelerates the wear and tear on these drives
    C) Is there something about Samsung's early SSD controllers that drops them to a lower speed under certain conditions (e.g. poorly implemented SMART diagnostics)
    D) Is my IT department right and all SSDs are evil ;)?
    Reply
  • Ardax - Monday, August 31, 2009 - link

    Well, first you could point them to this article to point out how bad the Samsung SSDs are. Replace it with an Intel or Indilinx-based drive and you should be fine. Anecdotes so far indicate that people have been beating on them for months.

    As far as configuring Vista for SSD usage goes, MS posted in the Engineering Windows 7 Blog about what they're doing for SSDs: http://blogs.msdn.com/e7/archive/2009/05/05/suppor...

    The short version of it is this: Disable Defrag, SuperFetch, ReadyBoost, and Application and Boot Prefetching. All these technologies were created to work around the low random read/write performance of traditional HDs and are unnecessary (or unhealthy, in the case of defrag) with SSDs.
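    That short list can be scripted. A rough sketch (mine, not from the article or the blog post), using the service, registry, and task names as they appear on Vista/Windows 7; run it from an elevated command prompt and verify each name on your own build first:

```shell
:: Sketch only -- double-check names before running on your machine.

:: SuperFetch (service name "SysMain" on Vista/7)
sc stop SysMain
sc config SysMain start= disabled

:: Application and boot prefetching (0 = disabled)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters" /v EnablePrefetcher /t REG_DWORD /d 0 /f

:: Scheduled defrag task
schtasks /Change /TN "Microsoft\Windows\Defrag\ScheduledDefrag" /Disable

:: ReadyBoost is per-device: simply don't dedicate a flash drive to it.
```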
    Reply
  • heulenwolf - Monday, August 31, 2009 - link

    Thanks for the reply, Ardax. Unfortunately, the choice of SSD brand was Dell's. As Anand points out, OEM sales are where Samsung seems to have a corner on the market. The choices are: Samsung "Ultraperformance" SSD, Samsung not-so-ultraperformance SSD, magnetic HDD, or void the warranty by installing a non-Dell part. I could ask that we buy a non-Dell SSD, but since installing it would preclude further warranty support from Dell, and all SSDs have become the scapegoat, I doubt my request would be accepted. Additionally, the article doesn't say much about drive reliability, which is the fundamental problem in my case.

    I'll look into the linked recommendations on Win 7 and SSDs. I had already done some research on these features and found the general consensus to be that leaving any of them enabled (with the exception of defrag) should do no harm.
    Reply
  • Ardax - Tuesday, September 01, 2009 - link

    Installing a non-OEM drive is not going to void the warranty on the rest of the system. And as the other commenter posted, your problem isn't reliability, it's performance. Anand's excellent article shows the performance dropoff of the Samsung drives.

    Finally, if you do get another SSD (or still have one currently), definitely disable Prefetching. SuperFetch and ReadyBoost are read-only as far as the SSD is concerned, but Prefetch optimizations do write to the drive. It selectively fragments files so that booting the system and launching the profiled applications do as much sequential reading of the HD as possible. Letting prefetch reorganize all those files is bad on any SSD, and extra bad on one where you're seeing write penalties.

    ...

    And "One More Thing" (apologies to Steve Jobs)! Check out FlashFire (http://flashfire.org/). It's a program designed to help out with low-end SSDs. At a very basic view, what it does is use some of your RAM as a massive write-coalescing cache and put that between the OS and your SSD. It collects a series of small random writes from the OS and applications and tries to turn them into a large sequential write for your SSD. It's beta, and I've never attempted to use it, but if it works for you it might be a life-saver.
    Reply
  • heulenwolf - Friday, September 04, 2009 - link

    Thanks again for the feedback, Ardax. Duly noted about the Dell warranty. They will continue to warrant the rest of the laptop, AFAIK, even if we install a 3rd party drive.

    Can you point to your source for the statements about how prefetch fragments files on the drive? Nothing I've read about it describes it as write-intensive.

    I'd like to point out that this SSD is not a low-performance unit, the kind FlashFire is supposed to help with. It was one of the fastest drives available last year, before Intel's drives came out and set the curve. When it's performing normally, this system boots Vista in ~30 seconds. It uses SLC flash, with an order of magnitude more write cycles than comparable MLC-based drives. Were standard Windows installs the cause of these failures, we would have heard about MLC drives failing similarly within the first month.

    It's also a business machine, so loading alpha-rev software on it for performance optimization isn't really an option. The known issues on FlashFire's site make it not worth the risk until it's more mature.
    Reply
  • ggathagan - Wednesday, September 16, 2009 - link

    One possible work-around:

    I know that, for instance, if I buy a drive from Dell for one of my servers, that drive is covered under the Dell warranty for that server.


    Dell sells a Kingston re-branded Intel X25-E drive and a Corsair Extreme SSD drive.
    The Corsair Extreme series is an Indilinx SSD drive:
    http://www.corsair.com/products/ssd_extreme/defaul...

    I don't know if this applies to non Dell-branded products that Dell sells, but it might be worth looking into.
    Reply
  • TGressus - Wednesday, September 02, 2009 - link

    "Disable Defrag, SuperFetch, ReadyBoost, and Application and Boot Prefetching"

    This is keen advice, especially for the OP's laptop SSD usage scenario. Definitely disable these services.

    I'd also suggest disabling Automatic Restore Points on the SSD volume(s) from the System Protection task in the "System" Control Panel. When enabled, this setting can generate a lot of file I/O and will add to block fragmentation and eventual garbage collection. http://en.wikipedia.org/wiki/System_Restore

    Regular external backup and disk imaging should be just as effective, without the resource penalties.
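    For what it's worth, the Control Panel step can also be done from an elevated command prompt. A hedged sketch (my addition, not part of TGressus's instructions; the DisableSR policy value and vssadmin syntax are as documented for Vista/Windows 7, but the Control Panel route above is the safer option):

```shell
:: Sketch only -- verify on your own build before running.

:: Turn System Restore off machine-wide (policy value DisableSR = 1)
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\SystemRestore" /v DisableSR /t REG_DWORD /d 1 /f

:: Reclaim the space already used by existing restore points
vssadmin delete shadows /for=C: /all /quiet
```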
    Reply
  • heulenwolf - Friday, September 04, 2009 - link

    TGressus, thanks for the feedback. With the exception of defrag, however, I can't find any real-world data about these services that leads me to believe they place undue wear on the SSD. Sure, disabling them may free up marginal system resources, but I haven't heard anything leading me to believe they're the cause of the repeated failures I saw. Since none of those features can be disabled with a simple checkbox (again, beyond defrag) and instead seem to require custom registry hacks, I'm not comfortable performing them on my business machine purely for performance optimization.

    I guess I've narrowed it down to options A or C from my original question:
    A) Am I that 1 in a bazillion case of having gotten a bad system followed by a bad drive followed by another bad drive
    B) Is there something about Vista - beyond auto defrag - that accelerates the wear and tear on these drives
    C) Is there something about Samsung's early SSD controllers that drops them to a lower speed under certain conditions (e.g. poorly implemented SMART diagnostics)
    D) Is my IT department right and all SSDs are evil ;)?
    Reply
  • gstrickler - Monday, August 31, 2009 - link

    Drive reliability does not appear to be the problem in your case. There is nothing in your description that indicates the drives failed, only that performance dropped to unacceptable levels. That is exactly the situation described by Anand's earlier tests of SSDs, especially with earlier firmware revisions (any brand) or with heavily used Samsung drives.

    I presume your swap space (paging file) is on the SSD? The symptoms you describe would occur if writing to the swap space (which Windows will do during boot up/login) is slow. You might be able to regain some performance and/or delay the reappearance of symptoms by simply moving your swap space off the SSD.

    For your purposes, it sounds like your best solution would be to switch to an Intel or Indilinx drive, probably an SLC drive, but a larger MLC drive might work well also. Dell won't warranty the new drive, but it won't "void" your warranty either. You'll still have the remainder of the warranty from Dell on everything except the new SSD, which will be under warranty from the company making the drive. If you have a support contract with Dell, they might try to point to the non-Dell SSD as an issue, but at least with the Gold/Enterprise support group, I have not found Dell to do that type of finger pointing.

    The Intel drives are now good at automatically cleaning up with repeated writing, while with an Indilinx drive, you may need to occasionally (perhaps every 6 months) run the "Wiper" utility to restore performance.

    Also, you indicate your drive is about 3/4 full; if you can reduce that, you may see less of a performance hit. You can do that by removing some data, moving data to a secondary drive (HD or SSD), or buying a larger SSD.

    If you're working with large data files that you're not accessing and updating randomly (e.g. you're not primarily working with a large database), then you might benefit from having your OS and applications on the SSD, but use a HD for your data files and/or temp/swap space. Of course, make sure you have sufficient RAM to minimize any swapping regardless of whether you're using a HD or an SSD.
    Reply
  • heulenwolf - Tuesday, September 01, 2009 - link

    gstrickler - duly noted about the Dell warranty.

    I have to disagree that drive reliability is not the issue for two reasons, only the first of which I'd mentioned before:
    1) Dell's diagnostics failed on the SSD
    2) Anand's test results show major slowdowns, but not from 100 MB/s to 5 MB/s for both read and write operations. No matter what I did, even writing my own scripts to just read files as fast as possible, I couldn't get read access over 10 MB/s peak, with the average around 5. It's like the drive switched to an old PIO interface or something. The kinds of slowdowns in Anand's results do not lead to 15-minute boot times.

    It's a laptop with only one drive bay, so yes, the page file is on the SSD, and a second drive isn't really an option. According to the Windows 7 engineering blog, linked by Ardax above, SSDs are a great place to store page files. Since the system has 4 GB of RAM, it's not like the system has undue swap writing going on.

    I can't imagine Samsung or Dell selling a drive with a 3 year warranty that would have to be replaced every 6 months under relatively normal OS wear and tear (swapping, prefetch). Vista was well past SP1 at the time the system was bought so they'd had plenty of time to qualify the drive for such uses. They'd both be out of business were this the case.

    Agree that the best bet would be to switch brands but I'm kinda stuck on what's wrong with this one. Thanks for the feedback.
    Reply
  • gstrickler - Thursday, September 03, 2009 - link

    That it failed Dell's diagnostics might indicate a problem, but do you know for certain that Dell's diagnostics pass on a properly functioning Samsung SSD? I don't know the answer to that, but it needs to be answered.

    While Anand's tests don't show 90% drops in performance, his tests didn't continue to fragment the drive for months, so performance might continue to drop with increasing fragmentation.

    More importantly, I've experienced problems with the Windows page file before, it does slow the system dramatically. Furthermore, the Windows page file does get written to as part of the boot process, so any performance problems with the page file will notably slow the boot, 15 minutes with Vista is not difficult to believe. While I haven't verified this on Vista, with NT4/W2k/XP the caching algorithm will allow reads to trigger paging other items out to disk, so even a simple read test can cause writes to the page file if the amount of data read approaches "free RAM". Again, performance problems with the page file could dramatically affect your results, even for a "read-only" test. Don't be so certain of your diagnosis unless you've eliminated these factors.

    You should try removing the page file (or setting it to 2MB), then see what happens to performance.
    Reply
  • heulenwolf - Friday, September 04, 2009 - link

    I had the same thought about Dell's diagnostics. I've run them again on the latest refurb drive and found that it passes all drive-related tests. Unfortunately, Dell's diagnostics simply spit out an error code that only they hold the secret decoder ring for, so I have no idea what the diagnosis was. This result isn't conclusive, but it's another data point.

    To more fully describe the testing I did: I wrote a script that generated a random data matrix and timed writing it to a file. I then read the data back in, timing only the read, and compared the two datasets to ensure no read/write errors. I looped through this process hundreds to thousands of times with file sizes from 4 KB up to 50 MB. Since I was using an interpreted language, I don't put much stock in the absolute times; however, I was using the lowest-level read and write functions available. Additionally, my 5-10 MB/s numbers come from watching Vista's Resource Monitor while the test was running, not from the program. No other measured component of the system was taxed, so I don't think the CPU or being near the physical memory limit, for example, was holding it up.
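    (For anyone wanting to reproduce this kind of check, here is a rough Python sketch of the procedure described above. It is my illustration, not the original script, and note that the OS page cache can inflate the read figure unless the file is larger than free RAM.)

```python
import os
import tempfile
import time

def throughput_test(path, size_bytes):
    """Write random data to `path`, time it, read it back, verify integrity.
    Returns (write_MBps, read_MBps)."""
    data = os.urandom(size_bytes)

    t0 = time.perf_counter()
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())          # force the write out to the device
    write_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    with open(path, "rb") as f:
        readback = f.read()
    read_s = time.perf_counter() - t0

    assert readback == data, "read/write mismatch"
    mb = size_bytes / 1e6
    return mb / write_s, mb / read_s

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        path = tmp.name
    try:
        for size in (4 * 1024, 1024 * 1024):   # 4 KB and 1 MB files
            w, r = throughput_test(path, size)
            print(f"{size:>8} B  write {w:6.1f} MB/s  read {r:6.1f} MB/s")
    finally:
        os.remove(path)
```

    The os.fsync call matters: without it you are timing the OS write cache, not the drive.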
    Reply
  • donjuancarlos - Monday, August 31, 2009 - link

    I have not found many articles on the net about SSDs, and this one is even easy to understand.

    The only negative part about this article is the Lenovo T400 I am typing on (it has a Samsung drive :( ). And I have to agree, startup times are nothing special.
    Reply
  • mtoma - Monday, August 31, 2009 - link

    Here is an issue I think deserves to be addressed: could a conventional HDD (with 2, 3, or 4 platters) slow down the performance of a PC, even if that PC boots from an excellent SSD like an Intel X25-M? Let's say that on the SSD lies only the operating system, and on the conventional HDD lies the movie and music archive. But both drives run at the same time, and it is a well-known fact that a PC runs at the speed of its slowest component (in our case the conventional HDD).
    I have not found ANYWHERE on the Web a review, or even an opinion, regarding this issue.
    I would appreciate a competent answer.
    Thanks a lot!
    Reply
  • gstrickler - Monday, August 31, 2009 - link

    That's a good question, and I too would like to see a report from someone who has done it.

    Some of your assertions/assumptions are not quite accurate. A PC doesn't "run at the speed of the slowest component"; rather, its performance is limited by the slowest component. Depending upon your usage patterns, a slow component may have very little effect on performance, or it may make the machine nearly unusable. I think that's probably what you meant, I'm just clarifying it.

    As for putting the OS on an SSD and user files on a HD, you would want to have not only the OS, but also your applications (at least your frequently used ones) installed on the SSD. Put user data (especially large files such as .jpg, music, video, etc.), and less frequently used applications and data on the HD. Typical user documents (.doc, .xls, .pdf) can be on either drive, but access might be better with them on the SSD so that you don't have to wait for the HD to spin-up. In that case, the HD might stay spun-down (low power idle) most of the time, which might improve battery life a bit.

    Databases are a bit trickier. It depends upon how large the database is, how much space you have available on the SSD, how complex the data relations are, how complex the queries are, how important performance is, how much RAM is available, how well indexes are used, and how well the database program can take advantage of caching. Performance should be as good or better with the database on the SSD, but the difference may be so small that it's not noticeable, or it might be dramatically faster. That one is basically "try it and see".

    Where to put the paging file/swap space? That's a tough one to answer. Putting it on the SSD might be slightly faster if your SSD has high write speeds; however, that will increase the amount of writing to the SSD and could potentially shorten its usable life. It also seems like a waste to use expensive SSD storage for swap space. You should be able to minimize both issues by using a permanent swap space of the smallest practical size for your environment.

    However, putting the swap space on a less costly HD means the HD will be spun up (active idle) and/or active more often, possibly costing you some battery life. Also, while the HD may have very good streaming write speeds, its streaming read speed and random access (read or write) speed will be slower than most SSDs', so you're likely to have slightly slower overall response and slightly shorter battery life than you would by putting the swap space on the SSD.

    On a desktop machine with a very fast HD, it might make sense to put the paging file on the HD (or to put a small swap space on the SSD and some more on the HD), but on a machine where battery life is an important consideration, it might be better to have the swap space on the SSD, even though it's "expensive".
    Reply
  • Pirks - Monday, August 31, 2009 - link

    just turn the page file off, and get yourself 4 or 8 gigs of RAM Reply
  • gstrickler - Monday, August 31, 2009 - link

    Windows doesn't like to operate without a page file. Reply
  • smartins - Tuesday, September 01, 2009 - link

    Actually, I've been running without a page file for a while and have never had any problems. Windows feels much more responsive. You do have to have plenty of RAM; I have 6GB on this machine. Reply
  • mtoma - Thursday, September 03, 2009 - link

    In my case, it's not a problem of RAM (I have 12 GB of RAM and a Core i7 920); it's a problem of throwing, or not throwing, 300 dollars out the window (on an Intel SSD). Currently I have a 1.5 TB Seagate Barracuda, 11th generation, on which I store ONLY movies, music, and photos. My primary drive (OS plus programs) is a 300 GB VelociRaptor.
    Do you think different versions of Windows behave differently if you remove the page file? It seems to me that if I remove the page file, I walk onto a minefield, and I don't want to do that.
    Besides that, my real problem is whether to use (when I purchase the Intel drive) the Seagate Barracuda in an external HDD enclosure OR internally, and thus possibly slow down my PC.
    Reply
  • SRSpod - Thursday, September 03, 2009 - link

    Adding a slow hard drive to your system will not slow your system down (well, apart from a slight delay at POST when it detects the drive). The only difference in speed will be that when you access something on the HDD instead of the SSD, it will be slower than if you were accessing it on the SSD. You won't notice any difference until you access data from the HDD, and if it's only music, movies and photos, and you're not doing complex editing of those files, then a regular HDD will be fast enough to view and play those files without issues.
    If you don't plan to remove it from your system, then attach it internally. Introducing a USB connection between the HDD and your system will only slow things down compared to using SATA.

    Removing the pagefile can cause problems in certain situations and with certain programs (Photoshop, for example). If you have enough RAM, then you shouldn't be hitting the pagefile much anyway, so where it's stored won't make so much of a difference. Personally, I'd put it on the SSD, so that when you do need it, it's fast.
    Reply
  • samssf - Friday, September 18, 2009 - link

    Won't Windows write to the page file regardless of how much RAM you have? I was under the impression Windows will swap out memory that it determines isn't being used / needed at the moment.

    If you absolutely need to have a page file, I would use available RAM to create a RAM disk, and place your page file on this virtual disk. That way you're setting aside RAM you know you don't need for the page file, since Windows will write to that file anyway.

    If you can, just turn it off.
    Reply
  • minime - Monday, August 31, 2009 - link

    Would someone please have the courtesy to test these things in a business environment? I'm talking about servers: database, web application, Java, etc. Reliability? Maybe even enrich the article with a PCI-E SSD (Fusion-io)? Reply
  • ciukacz - Monday, August 31, 2009 - link

    http://it.anandtech.com/IT/showdoc.aspx?i=3532

    Reply
  • minime - Tuesday, September 01, 2009 - link

    Thanks for that, but still, this is not quite a real business test, right? Reply
  • Live - Monday, August 31, 2009 - link

    Great article! Again I might add.

    Just a quick question:

    In the article it says all Indilinx drives are basically the same. But there are 2 controllers:
    Indilinx IDX110M00-FC
    Indilinx IDX110M00-LC

    What's the difference?
    Reply
  • yacoub - Monday, August 31, 2009 - link

    If Idle Garbage Collection cannot be turned off, how can it be called "[Another] option that Indilinx provides its users"? If it's not optional, it's not an option. :( Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    Well it's sort of optional since you had to upgrade to the idle GC firmware to enable it. That firmware has since been pulled and I've informed at least one of the companies involved of the dangers associated with it. We'll see what happens...

    Take care,
    Anand
    Reply
  • helloAnand - Monday, August 31, 2009 - link

    Anand,

    The best way to test compiler performance is compiling the compiler itself ;). GCC has an enormous test suite (I/O bound) to boot. Building it on windows is complicated, so you can try compiling the latest version on the mac.
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    Hmm I've never played with the gcc test suite, got any pointers you can email me? :)

    Take care,
    Anand
    Reply
  • UNHchabo - Tuesday, September 01, 2009 - link

    Compiling almost anything on Visual Studio also tends to be IO-bound, so you could try that as well. Reply
  • CMGuy - Wednesday, September 02, 2009 - link

    We've got a few big java apps at work and the compile times are heavily I/O bound. Like it takes 30 minutes to build on a 15 disk SAN array (The cpu usage barely gets above 30%). Got a 160Gig G2 on order, very much looking forward to benchmarking the build on it! Reply
  • CMGuy - Sunday, October 11, 2009 - link

    Finally got an X25-m G2 to benchmark our builds on. What was previously a 30 minute build on a 15 disk SAN array in a server has become a 6.5 minute build on my laptop.
    The real plus has come when running multiple builds simultaneously. Previously 2 builds running together would take around 50 minutes to complete (not great for Continuous Integration). With the intel SSD - 10 minutes and the bottleneck is now the CPU. I see more cores and hyperthreading in my future...
    Reply
  • Ipatinga - Monday, August 31, 2009 - link

    Another great article about SSDs, Anand. Big as always, but this is not just an SSD review or roundup. It's an SSD class.

    Here are my points about some stuff:

    1 - Correct me if I'm wrong, but as far as capacity goes, this is what I know:

    - Manufacturers say their drive has 80GB because it actually has 80GB. GB comes from giga, which is a decimal unit (base 10).

    - Microsoft is dumb, and Windows follows it: while the OS says your 80GB drive has 74,5GB, it should say 80GB (giga). When Windows says 74,5, it should use Gi (gibi), which is a binary unit.

    - To sum up, with an 80GB drive, Windows should say it has 80GB or 74,5GiB.

    - An SSD from Intel has more space than its advertised 80GB (or 74,5GiB), and that extra space is used as a spare area. That's it. Intel is smart for doing this (since the spare area is, well, big and does a good job for wear and performance over time).

    2 - I wonder why Intel is holding back on the 320GB X25-M... only Intel knows... there must be something dark behind it...

    Maybe, just maybe, like in a dream, Intel could be working on a 320GB X25-M that comes with a second controller (like a mirror of the single-sided PCB it has now). This would be awesome... like the best RAID 0 of two 160GB drives, in one X25-M.

    3 - Indilinx seems to be doing a good job... even without TRIM support at its best, the garbage collection system is another good tool to add to an SSD. Maybe with TRIM around, garbage collection will become more like an "SSD defrag".

    4 - As far as the firmware update procedure on Indilinx SSDs goes, as far as I know, some manufacturers use the no-jumper scheme to make the user's life easier, while others offer the jumper scheme (like G.Skill on its Falcon SSD) for better safety: if the user is using the jumper and the firmware update goes bad, the user can keep flashing the firmware without any problem. Without the jumper scheme, you'd better get lucky if things don't go well on the first try. Nevertheless, G.Skill could put the SSD pins closer to the edge... putting a jumper on those pins today is a pain in the @$$.

    5 - I must ask you, Anand: did you get any huge variations in the SSD benchmarks? Even with a dirty drive, the G.Skill Falcon (I tested) sometimes performs better than when new (or after Wiper). The benchmarks are Vantage, CrystalMark, HD Tach, HD Tune... very weird. Also, when in a new state, my Vantage scores are all over the place across all 8 tests... sometimes it's 0, sometimes 50, sometimes 100, sometimes 150 (all in thousands)... very weird indeed.

    6 - The SSD race today is very interesting. Goodbye Seagate and WD, kings of the HD... Welcome Intel, Super Talent, G.Skill, Corsair, Patriot, bla bla bla. OCZ is also going hard on SSDs... and I like to see that. A very big line of SSD models to choose from, and they are doing a good job with Indilinx.

    7 - Samsung? Should be on the leading edge of SSDs, but managed to lose the race on the end-user side. No firmware update system? You've got to be kidding, right? Thank goodness for Indilinx (and Intel, but there is no TRIM for the G1... another mistake).

    8 - And yes... SSDs rock (huge performance benefit on a notebook)... even though I had just one weekend with them. Forget about burst speed... SSDs crush hard drives where it matters, especially sequential read/write and low latency.

    - Let me finish here... this comment is freaking big.
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    Maybe I should compile these things into a book? :)

    Here are my answers about some stuff:

    1) There's a spec for how hard drive makers report capacity. They define 1GB as 1 billion bytes. This is technically correct (base 10 SI prefix as you correctly pointed out). The HDDs also physically have this much storage on them, they are made up of sequentially numbered sectors that are easily counted in a decimal number system.

    All other aspects of PC storage (e.g. cache, DRAM, NAND flash) however work in base 2 (like the rest of the PC). In these respects 1GB is defined as 1024^3 because we're dealing with a base 2 number system. There are reasons for this but it goes beyond the scope of what I'm posting :)

    Intel adheres to the same spec that the HDD makers use. But the X25-M is made up of flash, which as I just mentioned is addressed in a base 2 number system. There's more flash than user space on the drive, it's used as spare area, woohoo. I think we're both on the same page here, just saying things differently :)
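To put the two unit systems side by side, here is a quick sketch of the arithmetic above (nothing Intel-specific beyond the capacities already mentioned):

```python
GIB = 1024**3                   # base-2 "gibibyte", how NAND and the rest of the PC count

advertised_bytes = 80 * 10**9   # "80GB" per the HDD spec: 1GB = 1 billion bytes
raw_nand_bytes = 80 * GIB       # the drive's physical flash: 80GiB of NAND

user_space_gib = advertised_bytes / GIB                 # what the OS reports as usable
spare_gib = (raw_nand_bytes - advertised_bytes) / GIB   # left over as spare area

print(round(user_space_gib, 1))  # 74.5
print(round(spare_gib, 1))       # 5.5
```

So both statements are true at once: the drive honestly delivers its advertised 80 billion bytes of user space, and it still has roughly 5.5GiB of flash beyond that for spare area.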

    2) We'll see a 320GB drive, just not this year. I don't know that the demand is there especially given the weak economy.

    Dreams do sometimes come true... ;)

    3) Perhaps, but I don't like the idea of a drive doing anything but idling when it's supposed to be...idle. This does funny things to notebook battery life I'd think.

    4) This is true. There's also another thing you can do with the jumper (and perhaps some additional software): flash any indilinx drive with any firmware regardless of vendor :)

    5) I had to throw out a lot of data because of variations between runs. It ended up being a combination of immature drivers, immature benchmarks and some OS trickery. The setup I have now is very reliable and provides very repeatable results with very little variation. While I run everything three times, the runs are so close that you could technically do only one run per drive and still be fine.

    6) I wouldn't count WD and Seagate out just yet. It may take them a while but they won't go quietly...

    7) Samsung makes a ton of money from SSD sales to OEMs, they don't seem to care about the end user market as much. If end users start protesting Samsung drives however, things will change.

    In my opinion? Once Apple falls, the rest will follow. If Apple migrates to Intel (possible) or Indilinx (less likely), we'll see the same from the other OEMs and Samsung will be forced to change.

    Or I could be too pessimistic and we'll see better performance from Samsung before then.

    8) Agreed :)

    I'll finish here too :)

    Take care,
    Anand
    Reply
  • Reven - Monday, August 31, 2009 - link

    Anand, dont listen to the guys like blyndy who diss on the anthologies, I love them. You can find a basic review anywhere, its the in-depth yet simple to understand stuff like these anthologies that make me visit Anandtech all the time.

    Keep it up, dude!
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    Thank you :) Reply
  • EasterEEL - Monday, August 31, 2009 - link

    I have a couple of questions regarding the Intel® SATA SSD Firmware Update Tool (2832KB) v1.3 8/24/2009.

    Does this firmware enable TRIM within the SSD to work with Windows 7?

    If AHCI is enabled in the BIOS (but not RAID), does Windows 7 use its own driver with TRIM? Or does it load Intel's Matrix Storage Manager driver, which does not support TRIM as per the article note below?

    "Unfortunately if you’re running an Intel controller in RAID mode (whether non-member RAID or not), Windows 7 loads Intel’s Matrix Storage Manager driver, which presently does not pass the TRIM command. Intel is working on a solution to this and I'd expect that it'll get fixed after the release of Intel's 34nm TRIM firmware in Q4 of this year."

    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    That update does not enable TRIM. The TRIM firmware is in testing now and it will be out sometime in Q4 of this year (October - December).

    If AHCI is enabled in the BIOS and you haven't loaded Intel's MSM drivers then it will use the Windows 7 driver and TRIM will be supported.

    Take care,
    Anand
    Reply
  • uberowo - Monday, August 31, 2009 - link

    I do have a question however. :D

    I am building a gaming pc, and I am buying ssd disk/s. Would I benefit from getting 2x80gb intel gen2s and using raid0? Or should I stick with a single 160gb?
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    While I haven't tested 2 x 80GB drives in RAID-0, my feeling is that a single SSD is going to be better than two in RAID going forward. As of now I don't know that anyone's TRIM firmware is going to work if you've got two drives in RAID-0.

    The perceived performance gains in RAID-0 also aren't that great on SSDs from what I've seen.

    Take care,
    Anand
    Reply
  • Ardax - Monday, August 31, 2009 - link

    A naive guess would be that it depends on the workload. For lots of sequential transfers a RAID-0 should shine -- particularly on reads -- because you're spreading the transfers out over multiple SATA channels.

    Losing TRIM is a problem. Finding a controller that can handle the performance is entirely likely to be another.
    Reply
  • uberowo - Monday, August 31, 2009 - link

    Thanks a lot for taking the time to answer. Not to mention making this awesome site. :) Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    You guys take the time to read it and make some truly wonderful comments, it's the least I can do :)

    -A
    Reply
  • IPL - Monday, August 31, 2009 - link

    I first started reading anandtech when I got seriously interested in SSDs and honestly, you write the best SSD articles around! Thank you for all the help you gave me in deciding which SSD to buy.

    I ordered online the new G2 last week and should be getting it in a few days. I live in Greece and the re-launched G2 has been available here for about a week now.

    I am planning on replacing the HDD on my Feb 08 Macbook Pro (last refresh pre-unibody) as soon as I get it. I am just a consumer with a little bit of knowledge on tech but not a pro at all. I just thought of asking all a few questions that I have pre-drive swapping.

    1. Will TRIM be supported on macs? Any news if and when?
    2. When then new TRIM firmware is out, do I have to just install the firmware or will I need to format everything and start from fresh in order to get it to work?
    3. I have bought a 2,5'' SATA USB enclosure in order to put my G2 in there first, connect it to the laptop via the USB and install Snow Leopard from there. After I am done, I will remove the G2 from the enclosure, swap the drives and hopefully, everything will be working. Does this sound logical? I am worried about the h/w drivers to be honest.

    Thanks in advance for your help. I will post some non-scientific time results as soon as get this done. Cant wait.
    Reply
  • gstrickler - Monday, August 31, 2009 - link

    The simplest way to swap the HD on most Mac OS machines is:

    1. Connect both the old and the new drive to the machine (internally or in an external USB or FireWire case).
    2. Use Disk Utility (included in Mac OS X) to set the appropriate partitioning scheme (GUID for Intel based Macs, Apple Partition Scheme for PPC Macs) on the new drive.
    3. Partition and format the new drive.
    4. Use Carbon Copy Cloner (shareware) to clone the old drive to the new drive.
    5. Try booting off the new drive. Note that PPC Macs can't boot from USB drives, but Intel based Macs can. All PPC and Intel Macs with a built-in FireWire port can boot from a FireWire drive.
    6. If not already done, physically swap the drives to the desired locations, boot and set the preferred startup drive.

    Reply
  • IPL - Tuesday, September 01, 2009 - link

    Awesome, thanks for the help.

    I have checked Carbon Copy Cloner and it is already one of my options. Never tried it before but looked easy enough.

    I haven't decided yet which way I will do it (fresh install or clone existing drive) but I will make my mind up when everything is ready!
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    Thank you for reading and saying such wonderful things, I really do appreciate it :)

    1) I don't believe TRIM is presently supported in Snow Leopard. I've heard that Apple may be working on it but I don't think it's there now.

    2) From what I've seen, it should preserve your data. It's still worth backing up just in case something ridiculous happens.

    3) What you're describing should work, although if I were you I'd just swap the drives and install. Hook your old drive up via USB and pull any data you need off of that.

    Take care,
    Anand
    Reply
  • sunbear - Monday, August 31, 2009 - link

    Another fantastic article. I just wanted to draw your attention to recent reports that the majority of currently available laptops (including the MacBook Pro) are unable to support transfer rates greater than SATA-150 (http://www.hardmac.com/news/2009/06/16/new-macbook...).

    Since most laptops can't even use the full performance of these SSD's, do you have any recommendation regarding which one would be the best bang-for-the-buck to speed up a laptop?

    Personally, I am interested in putting SSD's in a laptop not only for the speed improvements, but I'm also hoping that it reduces the amount of heat that my laptop will put out so that I can finally find a laptop that you can use comfortably on your lap!

    Incidentally, it would be really great if laptop reviewers checked to see if they could comfortably work with a laptop at full load on their lap as a standard test.
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    Even on a SATA-150 interface, you're generally only going to be limiting your sequential read speed and perhaps your sequential write speed a bit. Random read/write speeds don't really go above 60MB/s so you're fine there.

    The recommendations remain the same: Intel at the top end, anything Indilinx MLC to save a bit. If anything, a SATA-150 interface makes the Intel drive look a bit better since its 80MB/s sequential write limit isn't as embarrassing :)

    Take care,
    Anand
    Reply
  • Dobs - Monday, August 31, 2009 - link

    I hope Seagate / Western Digital etc. bring even more innovation / competition in SSD's next year... and not just Enterprise products.

    And one thing I don't fully understand is why there aren't more dedicated 3.5" drives. Patriot has the adapter but what about the rest??? No money in desktops anymore???
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    OCZ is making a 3.5" Vertex drive, waiting on it for review :)

    Take care,
    Anand
    Reply
  • kisjoink - Monday, August 31, 2009 - link

    Now that the good performing SSDs are half the price of last year, I'd really like to see a 2xSSD in RAID 0 article! Reply
  • mgrmgr - Monday, August 31, 2009 - link

    I second the request for a 2xSSD RAID-0 article...with specific discussions about which applications it benefits (Photoshop?) and which ones it doesn't.

    Before October 22nd when I buy a new Win7 computer? Please. :-)
    Reply
  • kisjoink - Monday, August 31, 2009 - link

    "Intel doesn't need to touch the G1, the only thing faster than it is the G2."

    I've been an avid anandtech-reader since 1998, but this is the first time I'm commenting on an article. I just have to say how much I disagree with this quote!

    I've used the X25-M since its launch (December '08) and at first I was very happy with the drive. Although really expensive, I was convinced to buy one after reading your first previews and review. And the performance was great, at least for the first 4-5 months. At that time I started noticing some 'hiccups' (system freezes). At first they were few and short, but over time they became more noticeable and now they're a real pain. Sometimes my system can freeze for more than 15 seconds. It usually happens when I edit a picture in Photoshop, but it can also happen while writing something in Word, programming Java in Eclipse or just surfing the web.

    The problem? I'm pretty sure it's the Intel drive. After reading too many SSD articles I immediately suspected the X25-M when I started noticing the hiccups. So I got used to running the "Windows Resource Monitor" in the background - studying the disk activity after every hiccup. Just take a look at this example (just started Photoshop and did some light editing on a picture):
    http://img256.imageshack.us/img256/3073/hickup.jpg

    I'm sure there are many ways I could tune my system better. I've done a couple of things, like moving the internet temp folder to a mechanical drive etc. And the performance of the drive will probably recover if I do this special SSD format - but it's a real pain to have to do a complete OS installation 2-3 times a year when, as you point out, it would be possible for Intel to create new firmware with TRIM support. I mean - I really did pay a premium price for this product (close to $800 including VAT here in Norway for the 80GB version in December '08).

    So, to summarise - I was convinced to buy the drive after reading your articles (you write great reviews!) - and I understand that the problem I (and others, from what I've been reading on forums) am having is really difficult to recreate in a testing environment - but that doesn't mean the problem doesn't exist. I just wish you could point this out. The expensive G1 has some really big performance issues that might force you to do a complete reinstall of your system a couple of times a year - and although Intel could fix it they won't, because they have a new, better and cheaper product out - and people like me (although we feel really screwed over by Intel) will buy their next device (as long as it's the best device out there).
    Reply
  • IntelUser2000 - Monday, August 31, 2009 - link

    Is that after you installed firmware version 8820 or before? That reduces the problem a lot unless you've filled the drive to more than 70%. Reply
  • kisjoink - Monday, August 31, 2009 - link

    Yes, I forgot to mention that - it's after I upgraded to the 8820 firmware. I don't think I've ever filled it up with more than 80%; usually I have about 30GB of free space. Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    This is actually an interesting scenario that I've been investigating a bit myself. The 8820 firmware actually significantly changes the way the drive likes to store data compared to the original firmware. That's fine for a cleanly secure-erased drive, but what happens if you have data/fragmentation on the drive already?

    Every time you write to the drive the controller will look at the preferred state specified by the new firmware. It will see that your data is organized the way the old firmware liked it, but not the new firmware. Thus upon every...single...write it will try and reorganize the data until it gets to its happy state.

    I honestly have no idea how long this process will take, I can see it taking quite a bit of time but perhaps you could speed it up by writing a bunch of sequential files to fill up the drive? The safer bet would be to backup, secure erase and restore onto the drive. You shouldn't see it happen again.

    Think of it like this. I live in my house and I have everything organized a certain way. It takes me minimal time to find everything I need. Let's say tomorrow I leave my house and you move in. You look at how things are organized and it's quite different from how you like things setup. Whenever you go to grab a plate or book you try cleaning up a bit. Naturally it'll take a while before things get cleaned up and until then you won't be as quick as you're used to.

    Take care,
    Anand
    Reply
  • jimhsu - Friday, September 11, 2009 - link

    The G2's I've discovered REALLY don't like to be filled up more than 80% or so. When I had 8GB free on the 80GB drive, seq write performance basically plummeted at random intervals (to levels like 30MB/s.) Random writes sometimes dropped down to 4MB/s. Now that I've freed 20GB and tried writing and deleting large ISO files to the drive, the performance is coming back slowly. Reply
  • Dunk - Monday, August 31, 2009 - link

    Hi Anand,

    I'm blown away by your article series on SSD - absolutely fantastic.

    When new Intel firmware is launched with TRIM support for the G2, can I flash it without losing the drive and needing to reinstall everything?

    I'm happy using the out of the box MS driver for now in Win7, but would prefer to use Intel's TRIM version once available.

    Many thanks
    Duncan
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    If Intel follows the same pattern as what we saw with the G1's firmware update, you should be able to flash without destroying your data (although it's always a good idea to back up).

    Thank you for your comment :)

    Take care,
    Anand
    Reply
  • mgrmgr - Monday, August 31, 2009 - link

    Okay, single X25-M G2s can be updated without losing data. But I am considering two 80GB drives in RAID-0 to overcome the sequential write slowdown with Photoshop. How will updating work for the RAID-0 pair?

    Do you have an opinion about using a RAID pair for Photoshop?
    Reply
  • Noteleet - Monday, August 31, 2009 - link

    Fantastic article, I'm definitely planning on getting a SSD next time I upgrade.

    I'm fairly interested in seeing some reviews for the Solid 2. If OCZ can get the kinks worked out I think the Intel flash and the Indilinx controller would make a winning combination for price to performance.
    Reply
  • Visual - Monday, August 31, 2009 - link

    Where does the drive store the mapping between logical and physical pages and other system data it needs to operate? Does it use the same memory where user data is stored? If so, doesn't it need to write-balance that map data as well? And if that's true, doesn't it need to have a map for the map written somewhere? How is that circular logic broken?

    Or does the drive have some small amount of higher-quality, more reliable, maybe single-level-cell based flash memory for its system data?
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    The tables the drive needs to operate are also stored in a small amount of flash on the drive. The start of the circular logic happens in firmware which points to the initial flash locations, which then tells the controller how to map LBAs to flash pages.
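A toy illustration of how that bootstrap avoids infinite regress (purely hypothetical structure, not Intel's actual layout): the firmware bakes in one fixed, well-known flash address for the root of the mapping table, and everything else is reachable by following it.

```python
# Stand-in for raw NAND: physical page number -> stored data
FLASH = {}

# The one address the firmware knows a priori; this constant is what
# breaks the "map for the map" circularity.
ROOT_PAGE = 0

def format_drive():
    # The L2P (logical-to-physical) map itself lives in ordinary flash
    # pages; the root page just records where those pages currently are,
    # so the map can be wear-leveled like any other data.
    FLASH[ROOT_PAGE] = {"map_pages": [1]}
    FLASH[1] = {0: 100, 1: 101}   # LBA -> physical page

def read_lba(lba):
    root = FLASH[ROOT_PAGE]       # step 1: fixed address from firmware
    for p in root["map_pages"]:   # step 2: walk the map pages it names
        if lba in FLASH[p]:
            return FLASH[p][lba]  # step 3: translate LBA -> physical page
    raise KeyError(lba)

format_drive()
print(read_lba(1))  # -> 101
```

When the map pages wear-level to new locations, only the root page needs rewriting; the firmware's fixed pointer never moves.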

    Take care,
    Anand
    Reply
  • Bakkone - Monday, August 31, 2009 - link

    Any gossip about the new SATA? Reply
  • Zaitsev - Monday, August 31, 2009 - link

    Thanks for the great article, Anand! It's been quite entertaining thus far. Reply
  • cosmotic - Monday, August 31, 2009 - link

    The page about sizes (GB, GiB, spare areas, etc) is very confusing. It sounds very much like you are confusing the 'missing' space when converting from GB to GiB with the space the drive is using for its spare area.

    Is it the case that the drive has 80GiB internally, uses 5.5GiB for spare, and reports its size as 80GB to the OS, leaving the OS to show 74.5GiB as usable?
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    I tried to keep it simple by not introducing the gibibyte, but I see that I failed there :)

    You are correct, the drive has 80GiB internally, uses 5.5GiB for spare and reports that it has 156,301,488 sectors (or 74.5GiB) of user addressable space.
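As a back-of-the-envelope check on that sector count (assuming standard 512-byte sectors):

```python
SECTOR_BYTES = 512
sectors = 156_301_488                  # user-addressable sectors the drive reports

user_bytes = sectors * SECTOR_BYTES
print(user_bytes)                      # 80026361856 bytes
print(round(user_bytes / 10**9, 2))    # ~80.03 decimal GB, matching the label
print(round(user_bytes / 1024**3, 2))  # ~74.53 GiB, what the OS displays
```

Same capacity, two unit systems: the label speaks in decimal GB, the OS in binary GiB.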

    Take care,
    Anand
    Reply
  • sprockkets - Tuesday, September 01, 2009 - link

    Weird. So what you are saying is, the drive has 80GiB capacity, but then reports it has 80GB to the OS, is advertised as having an 80GB capacity, and the OS then says the capacity is 74.5GiB? Reply
  • sprockkets - Tuesday, September 01, 2009 - link

    As a quick followup, this whole SI vs binary thing needs to be clarified using the proper terms, as people like Microsoft and others have been saying GB when it really is GiB (or was the GiB term invented later?)

    For those who want a quick way to convert:

    http://converter.50webs.com
    Reply
  • ilkhan - Monday, August 31, 2009 - link

    So they are artificially bringing the capacity down, because the drive has the full advertised capacity and the user is getting the "normal" real capacity. :argh: Reply
  • Vozer - Monday, August 31, 2009 - link

    I tried looking for the answer, but haven't found it anywhere so here it is: There are 10 flash memory blocks on both Intel 160GB and 80GB X25-M G2, right? (and 20 blocks with the G1).

    So, is the 80GB version actually a 160GB with some bad blocks or do they actually produce two different kind of flash memory block to use on their drives?
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    While I haven't cracked open the 80GB G2 I have here, I don't believe the drives are binned for capacity. The 80GB model should have 10 x 8GB NAND flash devices on it, while the 160GB model should have 10 x 16GB NAND flash devices.

    Take care,
    Ananad
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    wow I misspelled my own name :) Time to sleep for real this time :)

    Take care,
    Anand

    Reply
  • IntelUser2000 - Monday, August 31, 2009 - link

    Looking at pure max TDP and idle power numbers and drawing conclusions about power consumption from those figures alone is wrong.

    Look here: http://www.anandtech.com/cpuchipsets...px?i=3403&a...

    Modern drives drop back to idle quickly, even in stretches the user never notices, and even at "load". A faster drive sees lower average power because it finishes its work sooner and returns to idle. This is why initial battery life tests showed the X25-M, with much higher active/idle power figures, getting better battery life than Samsung drives with lower active/idle power.

    Max power is important, but unless you are running that app 24/7 it's not realistic, especially since max power benchmarks are designed to get as close to TDP as possible.
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    I agree, it's more than just max power consumption. I tried to point that out with the last paragraph on the page:

    "As I alluded to before, the much higher performance of these drives than a traditional hard drive means that they spend much more time at an idle power state. The Seagate Momentus 5400.6 has roughly the same power characteristics of these two drives, but they outperform the Seagate by a factor of at least 16x. In other words, a good SSD delivers an order of magnitude better performance per watt than even a very efficient hard drive."

    I didn't have time to run through some notebook tests to look at impact on battery life but it's something I plan to do in the future.

    Take care,
    Anand
    Reply
  • IntelUser2000 - Monday, August 31, 2009 - link

    Thanks - people pay too much attention to just the max TDP and idle power alone. Properly done, no real app should ever sit at max TDP for 100% of the time it's running. Reply
  • cristis - Monday, August 31, 2009 - link

    page 6: "So we’re at approximately 36 days before I exhaust one out of my ~10,000 write cycles. Multiply that out and it would take 36,000 days" --- wait, isn't that 360,000 days = 986 years? Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    woops, you're right :) Either way your flash will give out in about 10 years and perfectly wear leveled drives with no write amplification aren't possible regardless.
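Redone, the arithmetic looks like this (using the article's ~36 days per full write cycle and an approximate 10,000-cycle MLC rating):

```python
days_per_cycle = 36        # estimate: one full drive-wide write cycle every ~36 days
rated_cycles = 10_000      # approximate MLC program/erase rating

total_days = days_per_cycle * rated_cycles
print(total_days)               # 360000 days, not 36,000
print(round(total_days / 365))  # ~986 years
```

Of course, as noted above, that assumes perfect wear leveling and zero write amplification, neither of which holds in practice.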

    Take care,
    Anand
    Reply
  • cdillon - Monday, August 31, 2009 - link

    I gather that you're saying it'll give out after 10 years because a flash cell will lose its stored charge after about 10 years, not because the write-life will be surpassed after 10 years, which doesn't seem to be the case. The 10-year charge life doesn't mean they become useless after 10 years, just that you need to refresh the data before the charge is lost. This makes flash less useful for data archival purposes, but for regular use, who doesn't re-format their system (and thus re-write 100% of the data) at least once every 10 years? :-)
    Reply
  • Zheos - Monday, August 31, 2009 - link

    "This makes flash less useful for data archival purposes, but for regular use, who doesn't re-format their system (and thus re-write 100% of the data) at least once every 10 years? :-)"

    I would like an input on that too, cuz thats a bit confusing.
    Reply
  • GourdFreeMan - Tuesday, September 01, 2009 - link

    Thermal energy (i.e. heat) allows the electrons trapped in the floating gate to overcome the potential well and escape, causing zeros (represented by a larger concentration of electrons in the floating gate) to eventually become ones (represented by a smaller concentration of electrons in the floating gate). Most SLC flash is rated at about 10 years of data retention at either 20C (68F) or 25C (77F). What Anand doesn't mention is that as a rule of thumb for every 9 degrees C (~16F) that the temperature is raised above that point, data retention lifespan is halved. (This rule of thumb only holds for human habitable temperatures... the exact relation is governed by the Arrhenius equation.)

    Wear leveling and error correction codes can be employed to mitigate this problem, which only gets worse as you try to store more bits per cell or use a smaller lithography process without changing materials or design.
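The rule of thumb above can be put into numbers (a sketch only; the exact relation is the Arrhenius equation, and the 10-year/25C rating point is the one assumed in the paragraph above):

```python
def retention_years(temp_c, rated_years=10.0, rated_temp_c=25.0, halving_c=9.0):
    """Approximate retention: halves for every ~9C above the rated temperature."""
    return rated_years * 0.5 ** ((temp_c - rated_temp_c) / halving_c)

for t in (25, 34, 43):
    print(f"{t}C: ~{retention_years(t):.1f} years")
# 25C: ~10.0 years, 34C: ~5.0 years, 43C: ~2.5 years
```

So a drive that runs warm inside a cramped laptop chassis has meaningfully less retention headroom than the datasheet figure suggests.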
    Reply
  • Zheos - Tuesday, September 01, 2009 - link

    Thank you GourdFreeMan for the additional input,

    But if we format, say, every year or so, doesn't the countdown on data retention restart from 0? Or after ~10 years (less, it seems, since as you said temperature affects it) will the SSD not only fail at times but become unusable? Or if we come to that point, would a format/reinstall resolve the problem?

    I don't care about losing data stored after 10 years; what I do care about is whether the drive assuredly becomes unusable after 10 years maximum. For drives that come at a premium price, I don't like this if it's the case.
    Reply
  • GourdFreeMan - Tuesday, September 01, 2009 - link

    Yes, rewriting a cell will refill the floating gate with trapped electrons to the proper voltage level unless the gate has begun to wear out, so backing up your data, secure erasing your drive and copying the data back will preserve the life (within reason) of even drives that use minimalistic wear leveling to safeguard data. Charge retention is only a problem for users if they intend to use the drive for archival storage, or operate the drive at highly elevated temperatures.

    It is a bigger problem for flash engineers, however, and one of the reasons why MLC cannot be moved easily to more bits per cell without design changes. To store n-bits in a single cell you need 2^n separate energy levels to represent them, and thus each bit is only has approximately 1/(2^(n-1)) the amount of energy difference between states when compared to SLC using similar designs and materials.
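The same relation in code form, for SLC through TLC:

```python
def charge_levels(bits_per_cell):
    # n bits per cell require 2^n distinguishable charge states in the floating gate
    return 2 ** bits_per_cell

def relative_margin(bits_per_cell):
    # spacing between adjacent states, relative to SLC's two-state design
    return 1 / 2 ** (bits_per_cell - 1)

for n in (1, 2, 3):  # SLC, MLC, TLC
    print(f"{n} bit(s): {charge_levels(n)} levels, {relative_margin(n):.2f}x SLC margin")
```

Each extra bit per cell halves the energy gap separating states, which is why retention and endurance get harder to engineer as density goes up.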
    Reply
  • Zheos - Tuesday, September 01, 2009 - link

    Man you seem to know a lot about what you're talking about :)

    Yeah, now I understand why SSDs for database and file storage servers could be quite a bad idea.

    But for personal Windows & everyday application storage, it seems like a pure win to me if you can afford one :)

    I was only worried about its life-span, but thanks to you and your quick replies (and for the maths and technical stuff about how it really works ;) I'm sold on the fact that I will buy one soon.

    The G2 from Intel seems like the best choice for now, but I'll just wait and see how things go once TRIM is enabled on almost every SSD, and I'll make my decision then in a couple of months =)


    Reply
  • GourdFreeMan - Wednesday, September 02, 2009 - link

    It isn't so much that SSDs make a bad storage server, but rather that you can't neglect to make periodic backups, as with any type of storage, if your data has great monetary or sentimental value. In addition to backups, RAID (1-6) is also an option if cost is no object and you want to use SSDs for long term storage in a running server. Database servers are a little more complicated, but SSDs can be an intelligent choice there as well if your usage patterns aren't continuous heavy small (i.e. <= 4K) writes.

    I plan on getting a G2 myself for my laptop after Intel updates the firmware to support TRIM and Anand reviews the effects in Windows 7, and I have already been using an Indilinx-based SLC drive in my home server.

    If you do anything that stresses your hard drive(s), or just like snappy boot times and application load times you will probably be impressed by the speeds of a new SSD. The cost per GB and lack of long term reliability studies are really the only things holding them back from taking the storage market by storm now.
    Reply
  • ninevoltz - Thursday, September 17, 2009 - link

    GourdFreeMan could you please continue your explanation? I would like to learn more. You have really dived deeply into the physical properties of these drives. Reply
  • GourdFreeMan - Tuesday, September 01, 2009 - link

    Minor correction to the second paragraph in my post above -- "each bit is only has" should read "each representation only has" in the last sentence. Reply
  • philosofool - Monday, August 31, 2009 - link

    Nice job. This has been a great series.

    I'm getting an SSD once I can get one at $1/GB. I want a system/program files drive of at least 80GB and then a conventional HDD (a tenth of the cost/GB) for user data.

    Would keeping user data on a conventional HDD affect these results? It would seem like it wouldn't, but I would like to see the evidence.

    I would really like to see more benchmarks for these drives that aren't synthetic. Have you tried things like Crysis or The Witcher load times? (Both seemed to me to have pretty slow loads for maps.) I don't know if these would be affected, but as real world applications, I think it makes sense to try them out.
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    Personally I keep docs on my SSD but I keep pictures/music on a hard drive. Neither gets touched all that often in the grand scheme of things, but one is a lot smaller :)

    In The SSD Anthology I looked at Crysis load times. Performance didn't really improve when going to an SSD.

    Take care,
    Anand
    Reply
  • Eeqmcsq - Monday, August 31, 2009 - link

    I would have thought that the read speed of an SSD would have helped cut down some of the compile time. Is there any tool that lets you analyze disk usage vs cpu usage during the compile time, to see what percentage of the compile was spent reading/writing to disk vs CPU processing?

    Is there any way you can add a temperature test between an HDD and an SSD? I read a couple of Newegg reviews that say their SSDs got HOT after use, though I think that may have just been 1 particular brand that I don't remember. Also, there was at least one article online that tested an SSD vs an HDD and the SSD ran a little warmer than the HDD.

    Also, garbage collection does have one advantage: It's OS independent. I'm still using Ubuntu 8.04 at work, and I'm stuck on 8.04 because my development environment WORKS, and I won't risk upgrading and destabilizing it. A garbage collecting SSD would certainly be helpful for my system... though your compiling tests are now swaying me against an SSD upgrade. Doh!

    And just for fun, have you thought about running some of your benchmarks on a RAM drive? I'd like to see how far SSDs and SATA have to go before matching the speed of RAM.

    Finally, any word from JMicron and their supposed update to the much "loved" JMF602 controller? I'd like to see some non-stuttering cheapo SSDs enter the market and really bring the $$$/GB down, like the Kingston V-series. Also, I'd like to see a refresh in the PATA SSD market.

    "Am I relieved to be done with this article? You betcha." And I give you a great THANK YOU!!! for spending the time working on it. As usual, it was a great read.
    Reply
  • Per Hansson - Monday, August 31, 2009 - link

    Photofast have released Indilinx based PATA drives;
    http://www.photofastuk.com/engine/shop/category/G-...">http://www.photofastuk.com/engine/shop/category/G-...
    Reply
  • aggressor - Monday, August 31, 2009 - link

Whatever happened to the price drops that OCZ announced when the Intel G2 drives came out? I want 128GB for $280! Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    I believe OCZ cut prices to distributors that day, but the retail prices will take time to fall. Once you see X25-M G2s in stock then I'd expect to see the Indilinx drives fall in price. Resellers won't give you a break unless they have to :)

    Take care,
    Anand
    Reply
  • bobjones32 - Monday, August 31, 2009 - link

    Another great AnandTech article, thanks for the read.

Just a heads-up on the 80GB X25-M Gen2 - A day before Newegg finally had them on sale, they bumped their price listing from $230 to $250. They sold at $250 for about 2 hours last Friday, went back out of stock until next week, and bumped the price again from $250 to $280.

    So....plain supply vs. demand is driving the price of the G2 roughly $50 higher than it was listed at a week ago. I have a feeling that if you wait a week or two, or shop around a bit, you'll easily find them selling elsewhere for the $230 price they were originally going for.
    Reply
  • AbRASiON - Monday, August 31, 2009 - link

    Correct, Newegg has gouged the 80gb from 229 to 279 and the 160gb from 449 to 499 :(

    Reply
  • Stan Zaske - Monday, August 31, 2009 - link

    Absolutely first rate article Anand and I thoroughly enjoyed reading it. Get some rest dude! LOL
    Reply
  • Jaramin - Monday, August 31, 2009 - link

I'm wondering, if I were to use a low capacity SSD to install my OS on, but install my programs to a HDD for space reasons, just how much would that spoil the SSD advantage? All OS reads and writes would still be on the SSD, and the paging file would also be there. I'm very curious about the amount of degradation one would see relative to different use routines and apps. Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    Putting all of your apps (especially frequently used ones) off of your SSD would defeat the purpose of an SSD. You'd be missing out on the ultra-fast app launch times.

    Pick a good SSD and you won't have to worry too much about performance degradation. As long as you don't stick it into a database server :)

    Take care,
    Anand
    Reply
  • swedishchef - Tuesday, September 01, 2009 - link

    What if you just put your photoshop cache on a pair of Velociraptors? Would it be the same loss of benefit?

    I have the same question regarding uncompressed HD video work, where I need write speeds well over the Intel x25-m ( over 240Mb/s). My assumption would be that I could enjoy the fast IO and App. launch of an SSD and increase CPU performance with the SSD while keeping the files on a fast external or internal raid configuration.


Thank you again for a brilliant article, Anand.
    I have been waiting for it for a long time. Yours are the only calm words out on the net.

    Grateful Geek /Also professional image creator.
    Reply
  • creathir - Monday, August 31, 2009 - link

    Great article Anand. I've been waiting for it...

    My only thoughts are, why can't Intel get their act together with the sequential business? Why can the others handle it, but they can't? To have such an awesome piece of hardware have such a nasty blemish is strange to me, especially on a Gen-2 product.

    I suppose there is some technical reason as to why, but it needs to be addressed.

    - Creathir
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    If Intel would only let me do a deep dive on their controller I'd be able to tell you :) There's more I'd like to say but I can't yet unfortunately.

    Take care,
    Anand
    Reply
  • shotage - Monday, August 31, 2009 - link

    Awesome article!

    I'm intrigued with the cap on the sequential reads that Intel has on the G2 drives as well. I always thought it was strange to see even on their first gen stuff.

    I'm assuming that this cap might be in place to somehow ensure the excellent performance they are giving with random read/writes. All until TRIM finally shows up and you'll have to write up another full on review (which I eagerly await!).

    I can't wait to see what 2010 brings to the table. What with the next version of SATA and TRIM just over the horizon, I could finally get the kind of performance out of my PC that I want!!
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    Intel insists it's not an artificial cap and I tend to believe the source that fed me that information.

    That being said, if it's not an artificial cap it's either:

    1) Designed that way and can't be changed without a new controller
    2) A bug and can be fixed with firmware
    3) A bug and can't be fixed without a new controller

    Or some combination of those items. We'll see :)

    Take care,
    Anand
    Reply
  • Adul - Monday, August 31, 2009 - link

    Another fine article anand :). Keep up the good work. Reply
  • CurseTheSky - Monday, August 31, 2009 - link

    This is absolutely the best article I've read in a very long time - not just from Anandtech - from anywhere.

    I've been collecting information and comparing benchmarks / testimonials for over a month, trying to help myself decide between Intel, Indilinx, and Samsung-based drives. While it was easy to see that one of the three trails the pack, it was difficult to decide if the Intel G2 or Indilinx drives were the best bang for the buck.

    This article made it all apparent: The Intel G2 drives have better random read / write performance, but worse sequential write performance. Regardless, both drives are perfectly acceptable for every day use, and the real world difference would be hardly noticeable. Now if only the Intel drives would come back in stock, close to MSRP.

    Thank you for taking the time to write the article.
    Reply
  • deputc26 - Monday, August 31, 2009 - link

    been waiting months for this one. Reply
  • therealnickdanger - Monday, August 31, 2009 - link

    Ditto! Thanks Anand! Now the big question... Intel G2 or Vertex Turbo? :) It's nice to have options! Reply
  • Hank Scorpion - Monday, August 31, 2009 - link

    Anand,

    YOU ARE A LEGEND!!! go and get some good sleep, thanks for answering and allaying my fears... i appreciate all your hard work!!!!

256GB OCZ Vertex is at the top of my list as soon as a validated Windows 7 TRIM firmware that doesn't need any work by me is released....

    once a firmware is organised then my new machine is born.... MUHAHAHAHAHAHA
    Reply
  • AbRASiON - Monday, August 31, 2009 - link

Vertex Turbo is a complete rip off; Anand clearly held back from saying so to avoid offending the guy at OCZ.
Now the other OCZ models, however, could be a different story.
    Reply
  • MikeZZZZ - Monday, August 31, 2009 - link

    I too love my Vertex. Running these things in RAID0 will blow your mind. I'm just waiting for some affordable enterprise-class drives for our servers.

    Mike
    http://solidstatedrivehome.com">http://solidstatedrivehome.com
    Reply
  • JPS - Monday, August 31, 2009 - link

I loved the first draft of the Anthology and this is a great follow-up. I have been running a Vertex in my workstation and laptop for months now and continue to be amazed at the difference when I boot up a comparable system still running standard HDDs. Reply
  • gigahertz20 - Monday, August 31, 2009 - link

    Another great article from Anand, now where can I get my Intel X-25M G2 :) Reply
  • jengeek - Wednesday, September 02, 2009 - link

    As of 09-02-09 from Toshiba Direct:

    80GB = $243
    160GB = $473

    http://www.toshibadirect.com/td/b2c/adet.to?poid=4...">http://www.toshibadirect.com/td/b2c/adet.to?poid=4...

    http://www.toshibadirect.com/td/b2c/adet.to?poid=4...">http://www.toshibadirect.com/td/b2c/adet.to?poid=4...
    Reply
  • gfody - Thursday, September 03, 2009 - link

    nice thank you, ordered mine from here
    screw Newegg! :D
    Reply
  • jengeek - Wednesday, September 02, 2009 - link

    Both are G2, in stock and ship the next day

    Both are retail box including the installation kit

    Best price I've found
    Reply
  • ARoyalF - Sunday, September 13, 2009 - link

Thank you for posting that!

    I was going to wait out that awful price hike over at the egg.

    You rock
    Reply
  • ElderTech - Tuesday, September 01, 2009 - link

    It's difficult to imagine the amount of time and effort that went into this article, Anand. Just the clean installs of Win7 took a fair amount of extra effort, let alone the other detailed diagrams and testing involved. From an old technology advocate over many years of working to keep pace with Moore's Law in a variety of research environments, your site provides the most satisfying learning experience of all. A sincere thank you!

    PS: As for the availability of the G2, it pops in and out of stock at a variety of online retailers, including Newegg, of course, as well as MWave. Both had it available for a short while at $249, Newegg on Friday and MWave today, Monday. However, it's out of stock presently as of midnight, EST 9-1-09 at both, with MWave still at $249 but Newegg going from there to $279 over the weekend and now at an amazing $499! OUCH. Sounds like supply and demand gouging if the price holds when they are next available! There is also some stock available in the distributor channel from small Intel Partners, as I confirmed by calling around the Chicago area. You might give this a try tomorrow. Good luck!
    Reply
  • blyndy - Monday, August 31, 2009 - link

    You really got performance anxiety because some high-profile people/sites liked your article and linked to it? It's hardly like it got printed in some prestigious science journal and the publishers are waiting on a follow-up.

It was just the first time that SSD operation had been detailed in plain English by a reputable website.

    Enough of this 'anthology' nonsense, I don't care if it's 1 page or 20, just tell me how some of the new SSDs perform (eg OCZ, Western Digital). You've already detailed how they work so now I want to know which ones do/will support TRIM and some details on the controller. Nothing to get anxious about.
    Reply
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    Indeed I did get performance anxiety after the last one, I even got it after the first X25-M. It's not so much the linkage, but the feedback from all of you guys. I received more positive feedback to the last SSD article than any one prior. More than anything I don't want to let you all down and I want to make sure I live up to everyone's expectations.

    As far as your interests go, all three manufacturers (Indilinx, Intel and Samsung) have confirmed support for TRIM. When? I'd say all three before December.

    Take care,
    Anand
    Reply
  • cacca - Thursday, September 03, 2009 - link

Dear Anand, I really thank you for your SSD articles; the improvements in this area seem tangible.
Can I ask you to test the Fusion-io ioXtreme? I am really curious to see how this other approach performs.
I know that it isn't a perfect apples-to-apples comparison, but at least we could compare the performance per dollar.

    Best regards

    Ca
    Reply
  • vol7ron - Monday, August 31, 2009 - link

    Good article.

    I have a follow-up question regarding your size suggestion.

    In more words you say, "get the size you need," but don't these drives perform that much better in a RAIDed system?

The cost per GB isn't that much more if, instead of a 160GB Intel drive, you get 2x 80GB drives.

SSDs are more reliable than HDDs, and you get the benefit of more cache: 2x 32MB for SSDs in RAID0.


    Curious to hear your thoughts,
    vol7ron
    Reply
  • StraightPipe - Tuesday, September 01, 2009 - link

    Since RAID cards aren't going to support TRIM commands for a while, I'd stick with a large, single SSD.

Anybody have any experience running these drives in RAID? I'd love to put some of these in my server, but I'm terrified of losing data through the complexities of RAID combined with SSD.

I'd love to do a simple RAID1 setup, but it looks like I may be better off waiting too.

In the meantime, these look like a great option for an OS disk.
    Reply
  • Jedi2155 - Monday, August 31, 2009 - link

    Anandtech has always been known for its in-depth analysis, you're just looking for a simple review list. I much prefer these detailed articles than just hearing the list of performance and simple recommendations that most people can write if provided the proper hardware.

I love how Anand always writes excellent, very well detailed articles that are still SIMPLE to understand. A number of other sites may offer similar levels of detail but are sometimes a bit too difficult to comprehend without a background in the same field.
    Reply
  • KommisMar - Sunday, April 04, 2010 - link

    Anand,

    I read your long series of articles on SSDs today, and just wanted to say thanks for writing the most informative and interesting series of tech articles I've read in years. I've been avoiding SSDs because my first experience with one was horrible. The sustained transfer rates were no better than a traditional hard drive, and the system halting for several seconds on each random write operation was too much for me to stand.

    I was so sick of the SSD coverage that I was reading on other websites because none of them seemed to answer my biggest question, which was "Which SSD won't bring my system to a screeching halt every time it needs to write a little data?"

    Thanks for answering that question and explaining what to look for and what to avoid. It sounds like it's a good time for me to give SSDs another shot.
    Reply
  • jamesy - Thursday, April 22, 2010 - link

That about sums it up: disappointment. Although this was a top-caliber SSD article, like I have come to love and expect from Anand, this article didn't make my buying decision any easier at all. In fact, it might have made it more complicated.

    I understand Intel, Indillinx, and Sandforce are good, but there are so many drives out there, and most suck. This article was amazing by most standards but the headline should be changed: removing the "Choosing the Best SSD."

    Maybe "Choosing the right controller before sorting through a hundred drives" would be an appropriate replacement.

Do I still go with the Intel 160GB X25-M G2?
Do I get the add-on SATA 6G card and get the C300?
Do I save the money and get an Indilinx drive? Is the extra money worth the Intel/C300 drive?

These are the main questions enthusiasts have, and while this article contained a great overview of the market in Q3 2009, SSD tech has progressed dramatically. Only now, I think, are we getting to the point where we could publish a buying guide and have it last a few months.

I trust AnandTech, I just wish they would flat-out make a buying guide and assign points in different categories (points for sequential read/write, random read/write, real-life or perceived performance, reliability, and price). Take all of these points, add 'em up, and make a table, please.

A few graphs can help, but the 200 included in each article is overwhelming, and does nothing to simplify things or make me confident in my purchase.

It's great to know how drives score and how they perform. But it's even more important to know that you bought the right drive.
    Reply
  • mudslinger - Monday, June 28, 2010 - link

    This article is dated 8/30/2009!!!!
    It’s ancient history
    Since then newer, faster SSD’s have been introduced to the market.
And their firmware has been updated to address known past issues.
    This article is completely irrelevant and should be taken down or updated.
    I’m constantly amazed at how old trash info is left lingering about the web for search engines like Google to find. Just because Google lists an article doesn’t make it legit.
    Reply
  • cklein - Monday, July 12, 2010 - link

    Actually I am trying to find a reason to use SSD.
    1. Server Environment
Whether it's a web server or a SQL server, I don't see a way we can use an SSD. My SERVER comes with plenty of RAM, 32GB or 64GB. The OS starts a little slowly, but that's OK, since it never stops once started. And everything is loaded into RAM; no page file is needed. So, really, why do we need an SSD here to boost OS or application start times?
For a SQL server database, it's even worse. Let's say I have a 10GB SQL server database, and it grows to 50GB after a year. Can you imagine how many random writes and updates happen in the process? I suspect this would wear out the SSD really quickly.

2. For desktop/laptop, I can probably say: install the OS and applications on the SSD, and leave everything else on other drives? And even create the page file on other drives? I feel an SSD is only good for read-mostly access; with frequent writes, it may wear out pretty quickly. I do development, and I am not even sure I should save source code on an SSD, since compiles and builds write a lot to it.

So overall, I don't see how it fits in a server environment, but for desktop/laptop, maybe? Even so, it's limited?

Someone correct me if I am wrong?
    Reply
  • TCQU - Thursday, July 29, 2010 - link

    Hi people

I'm up for getting a new MacBook Pro with an SSD.
BUT I heard something about the 128GB SSD for Apple's machines being made by Samsung. I was ready to buy it, but now I've heard that Apple's SSDs are much slower than the others on the market. Then I read this. So now I'm really confused.
What should I do?
Buy Apple's MacBook Pro with the 128GB SSD,
or should I buy it without one and replace it with another SSD? Thoughts? Please help me out.
Thanks
    Thomas
    Reply
  • marraco - Friday, August 13, 2010 - link

Why are SandForce controllers ignored?

I'm extremely disappointed with the compiler benchmark. Please test .NET (with lots of class source files and dependencies). It seems like nothing speeds up compilation: no CPU, no memory, no SSD. It makes no sense.
    Reply
  • sylvm - Thursday, October 07, 2010 - link

    I found this article of very good quality.

I was looking for a similar article about ExpressCard SSDs using the PCIe port, but found nothing about their rewrite performance.
The best I found is this review http://www.pro-clockers.com/storage/192-wintec-fil... which says nothing about it.

ExpressCard SSDs would allow a good speed/price compromise: buying a relatively small and cheap one for the OS and software, while keeping the HDD for data.

    Has anyone some info about it ?

    Best regards,

    Sylvain
    Reply
  • paulgj - Saturday, October 09, 2010 - link

    Well I was curious about the flash in my Agility 60GB so I opened it up and noted a different Intel part number - mine consisted of 8 x 29F64G08CAMDB chips whereas the pic above shows the 29F64G08FAMCI. I wonder what the difference is?

    -Paul
    Reply
  • Bonesdad - Sunday, October 10, 2010 - link

    Been over a year since this article was published...still very relevant. Any plans to update it with the latest products/drivers/firmware? There have been some significant updates, and it would be good to at least have updated comparisons.

    Well done, more more more!
    Reply
  • hescominsoon - Thursday, February 17, 2011 - link

Excellent article, but you left out SandForce. I'm curious if this was an oversight or a purposeful omission. Reply
  • PHT - Friday, September 28, 2012 - link

This article is fantastic, the best I've ever read about SSDs.
Any follow-up with new SATA III drives and new controllers like SandForce, the new Indilinx, etc.?
    I will be glad to see it.

    My Best
    Zygmunt
    Reply
  • lucasgonz - Wednesday, October 16, 2013 - link

    Hello everyone.
This post is quite old but I hope someone can answer.
I am concerned about the life of my SSD (SanDisk Extreme 240). I partitioned it without considering wear leveling. I've had it for a year with one 30GB partition and one 200GB partition. I wanted to use the large partition for data but never got around to it, and I only use the first 30GB partition. My question is whether the SSD may be damaged by using only a small segment of it. DiskInfo shows 10TB read and 18TB written.
Sorry for my poor English.
Thanks for any help.
    Reply
  • Ojaswin Singh - Monday, January 13, 2014 - link

Hey, this is the most informative article I have ever read. Can you please clear up some of my doubts:
1. Does playing video games or running programs add to writes on the SSD?
2. Is 1 write cycle = filling 120GB of the SSD once?
3. I really write to my drive a lot (seriously, a lot), so how much life can I expect from a Samsung 840 SSD (neither Pro nor EVO)? I mean, for how long can I expect it to remain writable?
Please help me, because I want the speed of an SSD but I want it to last too.
    Thanks,
    Ojaswin
    Reply
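[Editor's note: on question 2 above, one "write cycle" is conventionally counted as one full-capacity drive write. A back-of-the-envelope endurance estimate can then be sketched as below; every number here is an illustrative assumption, not a Samsung specification.]

```python
# Rough SSD endurance estimate. All figures are illustrative assumptions,
# not vendor specs for any particular drive.
capacity_gb = 120          # drive capacity
pe_cycles = 1000           # assumed NAND program/erase cycle rating
write_amplification = 2.0  # assumed controller write amplification
daily_writes_gb = 20       # assumed host writes per day

# Total host data the drive can absorb before the NAND rating is exhausted:
total_host_writes_gb = capacity_gb * pe_cycles / write_amplification

# Expected lifetime at the assumed daily write volume:
years = total_host_writes_gb / daily_writes_gb / 365

print(f"~{total_host_writes_gb / 1024:.0f} TB of host writes, ~{years:.1f} years")
```

With these assumed figures the drive absorbs roughly 59TB of host writes, lasting about eight years; real endurance depends on the actual NAND rating and the controller's write amplification for a given workload.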
