This is probably the most excited I've been about any SSD launch in quite a while. At CES this year, Crucial announced its M500 SSD - the world's first to use Micron's new 128Gbit MLC NAND die. Courtesy of the cost savings and density increase associated with this new 128Gbit NAND, the M500 would be available in a 960GB capacity, priced at $599. That works out to around $0.62 per GB for a truly gigantic drive by today's standards. It's exciting. For the past five years I've been learning to live off of less storage than I thought I needed, but the M500 had the potential to spoil me once again.

The M500 starts out with a familiar refrain: a Marvell controller with custom firmware from Crucial/Micron and of course, Micron NAND. All of these parts get updated though, some in more interesting ways than others. The controller is now Marvell’s 88SS9187, an updated version of the 9174 used in the m4. The 9187 is a speed/feature bump over the 9174 and is also used in Plextor’s M5 Pro. I should note that this time around both the Crucial (end user) and Micron (OEM) drives will feature the same M500 branding.

One of the benefits of Marvell’s 9187 is support for DDR3 memory, which we see exercised on the M500. In its largest configuration, the M500 features 1GB of DDR3-1600. Crucial claims only 2 - 4MB of user data ever ends up in this DRAM; the overwhelming majority of the DRAM is used to cache the page/indirection table that maps logical block addresses to pages in NAND. Like most SSD makers, Crucial won’t talk about the structure of its mapping table, but given the size of the DRAM I think it’s safe to assume that we’re looking at a relatively flat structure that should be easy to manage (more on this later).
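
As a rough sanity check on that assumption, here's a back-of-the-envelope sizing of a flat logical-to-physical map. The 4KB mapping granularity and 4-byte entry size are illustrative assumptions, not disclosed M500 parameters.

```python
def flat_map_bytes(user_capacity_bytes, map_granularity=4096, entry_bytes=4):
    """Back-of-the-envelope size of a flat logical-to-physical map.

    One fixed-size entry per mapped unit. The 4KB granularity and 4-byte
    entries are assumptions for illustration only; Crucial has not
    disclosed the M500's actual table layout.
    """
    entries = user_capacity_bytes // map_granularity
    return entries * entry_bytes

# A 960GB drive mapped at 4KB granularity needs on the order of 900MB of
# table, which lines up neatly with the 1GB of DDR3-1600 on the largest M500.
print(flat_map_bytes(960 * 1000**3) / 1024**2, "MiB")  # ~894 MiB
```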

Crucial / Micron M500 Specifications
                    120GB        240GB        480GB        960GB
Controller          Marvell 88SS9187
NAND                Micron 20nm 2bpc MLC NAND (128Gbit die)
Form Factor         2.5" 7mm/9.5mm, mSATA, M.2 (960GB: 2.5" 7mm/9.5mm only)
Sequential Read     500MB/s      500MB/s      500MB/s      500MB/s
Sequential Write    130MB/s      250MB/s      400MB/s      400MB/s
4KB Random Read     62K IOPS     72K IOPS     80K IOPS     80K IOPS
4KB Random Write    35K IOPS     60K IOPS     80K IOPS     80K IOPS
Drive Lifetime      72TB writes (90% full, 25/75% sequential/random IO: 50% 4KB, 40% 64KB, 10% 128KB)
Warranty            3 years

While the M500’s controller is nothing new, its NAND is. The M500 is the first drive to ship with the latest version of IMFT’s 20nm MLC NAND, featuring 128Gbit die. All previous NAND devices from IMFT (as well as its competitors) top out at 64Gbit (8GB) per 2-bit MLC NAND die. The move to larger die decreases the number of die/devices needed to hit each capacity point, and it also makes 1TB SSDs cost effective for the first time ever.

The cost savings come from the fact that these 128Gbit die aren’t simple doublings of last year’s 64Gbit devices; they include a few changes. The most prominent is a shift in page size from 8KB to 16KB. Larger page sizes are more desirable at smaller NAND geometries, which is why these transitions usually accompany major shifts in process technology (e.g. the 4KB to 8KB transition back at 25nm). The good news is that larger page sizes increase sequential throughput; the bad news is that they do so at the expense of latency. And given that NAND program times already increase at smaller geometries, once again the deck is stacked against manufacturers looking to increase performance as they exploit the benefits of Moore’s Law.
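
To put numbers on that trade-off, here's a quick comparison of per-die program bandwidth, using the page sizes and typical program times from the NAND evolution table below; it ignores multi-plane and multi-die interleaving, so it only illustrates the direction of the change.

```python
# Per-die sequential program bandwidth = page size / typical program time.
# Page sizes and program times come from the IMFT NAND table below.
configs = {
    "64Gbit die (8KB page, 1300 µs program)":  (8 * 1024, 1300e-6),
    "128Gbit die (16KB page, 1600 µs program)": (16 * 1024, 1600e-6),
}

for name, (page_bytes, t_prog_s) in configs.items():
    mb_per_s = page_bytes / t_prog_s / 1e6
    print(f"{name}: ~{mb_per_s:.1f} MB/s per die")

# ~6.3 MB/s vs ~10.2 MB/s per die: sequential program bandwidth goes up,
# but each individual small write now waits on a longer program time.
```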

The other big change with the 128Gbit implementation of IMFT’s 20nm process is the inclusion of ONFI 3.0 support. There are some power savings courtesy of ONFI 3.0 (lower voltages, on-die termination), but the big news here is an increase in max interface speed. The previous ONFI interface standard (2.x) topped out at around 200MB/s, while ONFI 3.0 kicks that up to 400MB/s. Crucial’s implementation seems to be limited to around 330MB/s, but the drive isn’t anywhere close to saturating that. Remember, the interface speed governs the maximum rate at which you can transfer data to/from a NAND device. Most NAND devices are capable of dual-channel operation, so in the higher capacity implementations we’re talking about a maximum NAND-to-controller transfer rate of over 600MB/s. There’s more than enough headroom here.
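
To see how much headroom that leaves, the quick check below uses only the figures already quoted here: the ~330MB/s observed per-link limit, dual-channel NAND operation, and the drive's 500MB/s rated sequential read.

```python
# Interface headroom check using figures quoted above: ~330MB/s per ONFI 3.0
# link in Crucial's implementation, dual-channel NAND operation, and the
# M500's rated 500MB/s sequential read.
onfi_link_mb_s = 330
links_in_parallel = 2            # dual-channel NAND operation
rated_seq_read_mb_s = 500        # from the spec table above

nand_side = onfi_link_mb_s * links_in_parallel
print(f"NAND-to-controller: ~{nand_side} MB/s")                            # ~660 MB/s
print(f"Margin over rated sequential read: ~{nand_side - rated_seq_read_mb_s} MB/s")
```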

Supporting the new controller, new NAND die, larger page sizes and ONFI 3.0 obviously requires new firmware, so the M500 ships with an evolution of what Crucial developed for the m4. The end result is vastly improved performance across the board; the big question is how well it compares to the rest of the market given how much has changed since the m4 first arrived.

The 20nm 128Gbit NAND: Larger Pages, Larger Blocks, Lower Performance & Cost?

Intel/Micron NAND Evolution
                             50nm      34nm      25nm      20nm      20nm
Single Die Max Capacity      16Gbit    32Gbit    64Gbit    64Gbit    128Gbit
Page Size                    4KB       4KB       8KB       8KB       16KB
Pages per Block              128       128       256       256       512
Read Page (max)              -         -         75 µs     100 µs    115 µs
Program Page (typical)       900 µs    1200 µs   1300 µs   1300 µs   1600 µs
Erase Block (typical)        -         -         3 ms      3 ms      3.8 ms
Die Size                     -         172mm²    167mm²    118mm²    202mm²
Gbit per mm²                 -         0.186     0.383     0.542     0.634
Rated Program/Erase Cycles   10000     5000      3000      3000      3000

There's a lot of data in the table above, but if you look closely you'll see a couple of trends. The obvious ones are increasing page and block sizes over time. NAND program latency has also climbed steadily over the years, while endurance has decreased. All in all, the picture looks pretty bleak; it's impressive that performance keeps going up each generation given how heavily the deck is stacked against continued improvement. The increase in program time gives you a preview of what we're going to see in the performance pages: small writes will take longer. Garbage collection routines on a full drive will also take longer to run, as each block that needs to be recycled has more pages and more data to deal with. Although Crucial uses a faster controller in the M500 than in the m4, the internal housekeeping it has to do goes up tremendously as well. The M500 isn't a drive built in pursuit of peak performance; instead it targets the mainstream.
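
To quantify the garbage collection point, the sketch below uses the page/block figures from the table above to compare how much data one block holds and how long a worst-case read/reprogram/erase cycle of a single block takes. It ignores plane and die parallelism, so treat it as a per-block illustration rather than a drive-level prediction.

```python
# Worst case for recycling one block: read and reprogram every page, then
# erase. Timings and geometry come from the IMFT NAND table above.
def block_recycle(pages, page_kb, t_read_s, t_prog_s, t_erase_s):
    data_mb = pages * page_kb / 1024
    worst_case_s = pages * (t_read_s + t_prog_s) + t_erase_s
    return data_mb, worst_case_s

old = block_recycle(256, 8, 100e-6, 1300e-6, 3e-3)     # 64Gbit 20nm die
new = block_recycle(512, 16, 115e-6, 1600e-6, 3.8e-3)  # 128Gbit 20nm die

print(f"64Gbit die:  {old[0]:.0f} MB per block, ~{old[1]*1000:.0f} ms worst case")
print(f"128Gbit die: {new[0]:.0f} MB per block, ~{new[1]*1000:.0f} ms worst case")
# Data per block quadruples (2MB -> 8MB); worst-case time per block more than doubles.
```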

Looking at the two 20nm NAND devices, moving to the larger page/block sizes delivers nearly a 17% increase in density. It's a remarkable improvement, especially when you consider the gains are decoupled from a new process node. Ultimately this is Micron's answer to TLC for the time being: rather than sacrificing endurance to get to lower price points, the 20nm 128Gbit 2bpc MLC NAND device at mature yields should deliver competitive pricing at higher endurance. Indeed, this is the message behind Crucial's M500. The company isn't targeting Samsung's SSD 840 Pro, but rather the TLC-based 840.
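
For reference, the ~17% figure can be reproduced directly from the die capacities and die sizes in the table above:

```python
# Density gain of the 128Gbit 20nm die over the 64Gbit 20nm die,
# using the capacity and die-size values from the table above.
density_64gbit_20nm = 64 / 118     # Gbit per mm^2 (~0.542)
density_128gbit_20nm = 128 / 202   # Gbit per mm^2 (~0.634)

gain = density_128gbit_20nm / density_64gbit_20nm - 1
print(f"Density improvement on the same 20nm process: {gain:.1%}")  # ~16.8%
```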

Price Comparison
                      120/128GB      240/256GB      480/512GB      960GB
Crucial M500          $129 ($129)    $219 ($202)    $399 ($442)    $599 ($570)
Intel SSD 335         $181           $220           -              -
Samsung SSD 840       $100           $169           $333           -
Samsung SSD 840 Pro   $139           $229           $463           -

The reality of it all is that the M500's MSRPs are closer to the 840 Pro's street prices than the 840's. MSRPs tend to run a bit high on SSDs, so I wouldn't be too surprised to see the M500 eventually settle down closer to the 840 (remember, the MSRPs for the 840/840 Pro at 250/256GB are $199 and $269, respectively). It's definitely a different approach to driving costs down than going to TLC, and one that can't necessarily be repeated each generation, but for now it works. I'm not sure how meaningful the added endurance is for most client users, although you could make an interesting case for the M500 in some enterprise workloads that the TLC-based 840 wouldn't be able to make it into.

 

The big news is of course the 960GB capacity point. At $599, the 960GB M500 is by far the cheapest drive available anywhere near that capacity. A quick search on Newegg reveals a $1000 Mushkin 960GB drive and a $3000 1TB OCZ Octane. Even the Phison-based 960GB BP4 from MyDigitalSSD weighs in at $799, and OWC's Mercury Electra MAX (3Gbps SATA) is still over $1000. At $0.62/GB, the 960GB M500 is a steal. To put the drive's excellent price in perspective, the 960GB M500 has roughly the same MSRP as Intel's 80GB X25-M had back in 2008. That's an order of magnitude more storage capacity at the same price in five years' time. Moore's Law makes me happy.
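
For the curious, here's the arithmetic behind the $/GB figure and the X25-M comparison; the $595 launch price for the 80GB X25-M is approximate, recalled from 2008 launch coverage.

```python
# Cost per GB, plus the five-year comparison against Intel's 80GB X25-M.
# The $595 X25-M launch MSRP is approximate (2008 launch pricing, from memory).
m500_960_price, m500_960_gb = 599, 960
x25m_price, x25m_gb = 595, 80

print(f"M500 960GB: ${m500_960_price / m500_960_gb:.2f}/GB")       # ~$0.62/GB
print(f"X25-M 80GB (2008): ${x25m_price / x25m_gb:.2f}/GB")         # ~$7.44/GB
print(f"Capacity at roughly the same price: {m500_960_gb / x25m_gb:.0f}x in five years")
```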

Comments

  • Solid State Brain - Saturday, April 13, 2013 - link

    In theory, the spare area can be only configured on a clean drive, which means one would have to secure erase it (and therefore lose all data) and then create a partition smaller than the drive's maximum user capacity. The remaining unused (raw, unpartitioned) capacity should then be used by the drive as spare area for wear leveling operations, in addition to the factory OP area (usually derived from the GiB->GB capacity difference). In practice it *should* be sufficient to notify the drive that the empty space is actually empty with a TRIM command before resizing the partition.

    In your case the Samsung Magician software allows you to double the drive's factory spare area (no other adjustment is possible, at least in version 4). It doesn't perform a secure erase, so perhaps it isn't really necessary after all.

    I don't know however if the Samsung 840 controller actually actively detects when a certain portion of the drive is "raw/unpartitioned". Theory dictates that it shouldn't be able to discern that without the OS somehow telling it so.

    If a partition-wide TRIM operation is enough, then one can increase overprovisioning manually on a live/used system by:

    1) Perform a full-system TRIM with the Windows 8 integrated "drive defrag/optimization" tool (or with the "fstrim" command line tool on Linux, although this works only on ext4 partitions), or with a dedicated third party utility (some commercial defragmentation software performs a system-wide TRIM on SSDs instead of a regular defrag).
    2) Resize the last partition manually with Computer Management > Disk Management > Shrink Partition.

    Anyway, in practice all this hassle is going to benefit you only if you routinely perform dozens of gigabytes of sustained writes per day in a possibly trim-less environment. I doubt very much that most users would be able to feel any difference with their workloads.
  • AlB80 - Saturday, April 13, 2013 - link

    "Total NAND on-board" and "DRAM" values are specified in "GB" and "MB", but it should be "GiB" and "MiB".
  • JellyRoll - Saturday, April 13, 2013 - link

    Shut up JohnW lol
  • JellyRoll - Saturday, April 13, 2013 - link

    There is a huge misstatement in the article..."I introduced a new method of characterizing performance: looking at the latency of individual operations over time."
    First: it isn't individual operations; several thousand are taking place per one-second interval.
    Second: Anand did not introduce this type of testing; it was a blatant copy of another tech website's testing.
  • twtech - Sunday, April 14, 2013 - link

    I think it's kind of interesting in the comments, people are looking at the performance figures and saying, "Oh, it doesn't perform as well as a Samsung 840 Pro, so I'm disappointed."

    I have a couple of computers booting off an M4 (slower than the M500), and one that has a Samsung 830 as the boot drive. The Samsung is quite a bit faster in benchmarks, but do I notice? Nope, not really. The jump to having any SSD at all is significant. The jump from one SSD to another - provided neither has something like firmware issues causing stuttering, as some old models did - is negligible.

    I think the more important factor here is that we have a nearly 1TB SSD for $600 - less than what 512GB drives were selling for 1 year ago. That's big enough that many users may not even need a separate mechanical storage drive.
  • JellyRoll - Sunday, April 14, 2013 - link

    Part of the issue is the unrealistic test parameters. Testing with such ridiculously severe workloads is not representative of real-world use.
  • Wolfpup - Monday, April 15, 2013 - link

    Unfortunately I couldn't wait for the launch of the M500...had to "make do" with a 512GB M4. Oh well, it's still a great drive!
  • random2 - Monday, April 15, 2013 - link

    I cannot imagine anyone who doesn't have some sort of tech background trying to read these articles. Granted, I am no certified IT professional, but I have been very interested in hardware and software for over a decade and have been a reader of AnandTech for almost as long. Which brings me to this: can we not have some of these terms, abbreviated or otherwise, hyperlinked to an article providing further explanation?

    Case in point: ONFI 3.0.
  • af3 - Tuesday, April 16, 2013 - link

    I was thinking of ordering a $350 256GB LaCie Thunderbolt Rugged external SSD for the purposes of booting another OS without needing to use space on my internal/main (SSD) drive.

    Can anyone tell me whether there might be a superior (in terms of performance and cost) alternative that might utilize something like one of these new Micron drives?

    Does anyone know whether or not the Lacie is fast and whether or not I might have something better by getting another external Thunderbolt device and installing one of these Micron drives?
