A few weeks ago I mentioned on Twitter that I had a new favorite SSD. This is that SSD, and surprisingly enough, it’s made by SanDisk.

The SanDisk part is unexpected because, until now, SanDisk hadn’t put out a particularly impressive drive. Much like Samsung in the early days of SSDs, SanDisk is best known for its OEM efforts. The U100 and U110 are quite common in Ultrabooks, and more recently even Apple adopted SanDisk as a source for its notebooks. Low power consumption, competitive pricing and solid validation kept SanDisk in the good graces of the OEMs. Unfortunately, SanDisk did little to push the envelope on performance, and definitely did nothing to prioritize IO consistency. Until now.

The previous-generation SanDisk Extreme SSD used a SandForce controller with largely unchanged firmware. This new drive, however, moves to a combination much more favorable to companies that have their own firmware development teams. Like Crucial’s M500, the Extreme II uses Marvell’s 88SS9187 (codename Monet) controller. SanDisk also rolls its own firmware, a combination we’ve seen in previous SanDisk SSDs (e.g. the SanDisk Ultra Plus). Rounding out the nearly vertical integration is the use of SanDisk’s 19nm eX2 ABL MLC NAND.

This is standard 2-bit-per-cell MLC NAND with a twist: a portion of each MLC NAND die is set to operate in SLC/pseudo-SLC mode. SanDisk calls this its nCache, and uses it as a lower-latency/higher-performance write buffer. In the Ultra Plus review, I pointed out that there simply wasn’t much NAND allocated to the nCache, since it is pulled from the ~7% spare area on the drive. With the Extreme II, SanDisk doubled the amount of spare area on the drive, which could increase the size of the nCache.
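
To make that concrete, here is a minimal sketch of the idea in Python, purely for illustration: small writes are staged in a fast SLC-mode region and migrated to the MLC portion in the background. The class name, buffer size and latency figures are all assumptions of mine, not anything SanDisk has published.

```python
from collections import OrderedDict

SLC_WRITE_US = 40    # assumed SLC-mode program latency (illustrative)
MLC_WRITE_US = 90    # assumed MLC-mode program latency (illustrative)

class PseudoSLCBuffer:
    """Toy model of an nCache-style write buffer: small writes land in
    the SLC-mode region first and migrate to MLC in the background."""

    def __init__(self, capacity_pages: int):
        self.capacity = capacity_pages
        self.staged = OrderedDict()           # logical page -> staged data

    def write(self, lpn: int, data: bytes, mlc: dict) -> int:
        """Stage a small write; returns the modeled latency in microseconds."""
        if len(self.staged) >= self.capacity:
            self.migrate_one(mlc)             # evict oldest page to make room
        self.staged[lpn] = data
        return SLC_WRITE_US

    def migrate_one(self, mlc: dict) -> int:
        """Move the oldest staged page into the MLC portion of the die."""
        lpn, data = self.staged.popitem(last=False)
        mlc[lpn] = data
        return MLC_WRITE_US

mlc_store: dict[int, bytes] = {}
ncache = PseudoSLCBuffer(capacity_pages=4)
for page in range(8):
    ncache.write(page, b"x" * 4096, mlc_store)
print(len(ncache.staged), "pages staged in SLC;", len(mlc_store), "migrated to MLC")
```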

SanDisk Extreme II Specifications

                     120GB             240GB             480GB
Controller           Marvell 88SS9187
NAND                 SanDisk 19nm eX2 ABL MLC
DRAM                 128MB DDR3-1600   256MB DDR3-1600   512MB DDR3-1600
Form Factor          2.5" 7mm
Sequential Read      550MB/s           550MB/s           545MB/s
Sequential Write     340MB/s           510MB/s           500MB/s
4KB Random Read      91K IOPS          95K IOPS          95K IOPS
4KB Random Write     74K IOPS          78K IOPS          75K IOPS
Drive Lifetime       80TB Written
Warranty             5 years
MSRP                 $129.99           $229.99           $439.99

Some small-file writes are supposed to be buffered in the nCache, but that didn’t seem to improve performance in the case of the Ultra Plus, leading me to doubt its effectiveness. However, SanDisk notes the nCache can be used to improve data integrity as well. The indirection/page table is stored in nCache, which SanDisk believes gives it a better chance of maintaining the integrity of that table in the event of sudden power loss (since writes to nCache complete more quickly than writes to the MLC portion of the NAND). The Extreme II itself doesn’t have any capacitor-based power-loss data protection.
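
Here is a toy sketch of why a faster region helps protect the table: if each mapping update is journaled to low-latency storage before the volatile copy is touched, a power cut loses at most the in-flight update. This is purely illustrative; SanDisk hasn’t detailed its actual scheme.

```python
# Toy model of journaling mapping updates to a fast NAND region
# (illustrative; not SanDisk's actual firmware). The journal stands in
# for SLC-mode nCache writes; l2p stands in for the DRAM-resident table.

journal: list[tuple[int, int]] = []   # persisted (logical, physical) deltas
l2p: dict[int, int] = {}              # volatile logical-to-physical table

def record_mapping(lpn: int, ppn: int) -> None:
    journal.append((lpn, ppn))        # fast, persistent journal append first
    l2p[lpn] = ppn                    # then update the volatile copy

def recover() -> dict[int, int]:
    """Rebuild the table after power loss by replaying the journal."""
    table: dict[int, int] = {}
    for lpn, ppn in journal:
        table[lpn] = ppn              # later entries win, as in a real replay
    return table

record_mapping(10, 1001)
record_mapping(10, 1002)              # logical page rewritten elsewhere
assert recover() == {10: 1002}
```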

Don't be too put off by the 80TB drive-writes rating. The larger drives should carry higher ratings (and they will last longer), but in order to claim higher endurance SanDisk would have to actually validate to that higher endurance specification. For client drives we often see SSD vendors provide a single endurance rating in order to keep validation costs low, despite the fact that larger drives can sustain more writes over their lifetime. SanDisk offers a 5-year warranty with the Extreme II.
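
For perspective, some quick back-of-the-envelope arithmetic on what that rating allows. The 80TB and 5-year figures come from the spec table above; the 240GB capacity is just a representative choice:

```python
# Endurance math derived from the rated figures; everything else is editorial.

TBW = 80                    # rated terabytes written
WARRANTY_YEARS = 5
CAPACITY_GB = 240           # representative mid-range capacity

gb_per_day = TBW * 1000 / (WARRANTY_YEARS * 365)
dwpd = gb_per_day / CAPACITY_GB        # drive writes per day

print(f"{gb_per_day:.1f} GB/day for 5 years ({dwpd:.2f} drive writes/day)")
# -> 43.8 GB/day (0.18 DWPD), well beyond typical client write volumes
```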

Despite the controller’s capabilities (as we’ve seen with the M500), SanDisk’s Extreme II doesn’t enable any sort of AES encryption or eDrive support.

With the Extreme II, SanDisk moved to a much larger amount of DRAM per capacity point. Similar to Intel’s S3700, SanDisk now uses around 1MB of DRAM per 1GB of NAND capacity. With a flat indirection/page table structure, sufficient DRAM and an increase in spare area, it would appear that SanDisk is trying to improve IO consistency. Let’s find out if it has.
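
As a sanity check on that DRAM sizing, here is the flat-table arithmetic. The 4KB logical page granularity and 4-byte table entry are my assumptions, not published SanDisk figures:

```python
# Flat logical-to-physical table sizing: one entry per 4KB logical page.
# A 4-byte entry (32-bit physical page address) is an assumption.

CAPACITY_GB = 240
PAGE_BYTES = 4096
ENTRY_BYTES = 4

entries = CAPACITY_GB * 1024**3 // PAGE_BYTES
table_mb = entries * ENTRY_BYTES / 1024**2

print(f"{entries:,} entries -> {table_mb:.0f} MB table")
# -> 62,914,560 entries -> 240 MB, a neat fit for the 256MB of DRAM on the
# 240GB drive and consistent with ~1MB of DRAM per 1GB of NAND.
```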

Comments

  • Quizzical - Monday, June 3, 2013 - link

    Good stuff, as usual. But at what point do SSD performance numbers cease to matter because all the drives are so fast that the differences are imperceptible?

    Back when there were awful JMicron SSDs that struggled along at 2 IOPS in some cases, the difference was extremely important. More recently, your performance consistency numbers offered a finer-grained way to say that some SSDs were flawed.

    But are we heading toward a future in which any test you can come up with shows all of the SSDs performing well? Does the difference between 10,000 and 20,000 IOPS really matter for any consumer use? How about the difference between 300 MB/s and 400 MB/s in sequential transfers? If so, do we declare victory and cease caring about SSD reviews?

    If so, then you could claim some part in creating that future, at least if you believe that vendors react to flaws that reviews point out, even if only because they want to avoid negative reviews of their own products.

    Or maybe it will be like power supply reviews, where mostly only good units get sent in for review, while bad ones just show up on Newegg and hope that some sucker will buy them, or occasionally get reviewed when a tech site buys one rather than receiving a review sample from the manufacturer?
  • Tukano - Monday, June 3, 2013 - link

    I feel the same way. You almost need an order-of-magnitude improvement to notice anything different.

    My question now is, where are the bottlenecks?

    What causes my PC to boot in 30 seconds as opposed to 10?

    I don't think I ever use the amount of throughput these SSDs offer.
    My 2500K @ 4.5GHz never seems to get stressed (I didn't notice a huge difference between stock and OC)

    Is it now limited to the connections between devices? i.e. transferring from SSD to RAM to CPU and vice versa?
  • talldude2 - Monday, June 3, 2013 - link

    Storage is still the bottleneck for performance in most cases. Bandwidth between the CPU and DDR3-1600 is 12.8GB/s. The fastest consumer SSDs are still ~25 times slower than that in a best-case scenario. Also, you have to take into account all the different latencies associated with any given process (e.g. fetch this from the disk, fetch that from the RAM, do an operation on them, etc.). The reduced latency is really what makes an SSD so much faster than an HDD.

    As for the tests - I think that the new 2013 test looks good in that it will show you real world heavy usage data. At this point it looks like the differentiator really is worst case performance - i.e. the drive not getting bogged down under a heavy load.
  • whyso - Monday, June 3, 2013 - link

    It's twice that if you have two RAM sticks.
  • Chapbass - Monday, June 3, 2013 - link

    I came in to post that same thing, talldude2. Remember why RAM is around in the first place: Storage is too slow. Even with SSDs, the latency is too high, and the performance isn't fast enough.

    Hell, I'm not a programmer, but perhaps more and more things could be coded differently if developers knew for certain that 90-95% of customers have a high-performance SSD. That changes a lot of the ways that things can be accessed, and perhaps frees up RAM for more important things. I don't know this for a fact, but if the possibility is there you never know.

    Either way, back to my original point: until RAM becomes redundant, we're not fast enough, IMO.
  • FunBunny2 - Monday, June 3, 2013 - link

    -- Hell, I'm not a programmer, but perhaps more and more things could be coded differently if they knew for certain that 90-95% of customers have a high performance SSD.

    It's called an organic normal form relational schema. Lots fewer bytes, lots more performance. But the coder types hate it because it requires so much less coding and so much more thinking (to build it, not use it).
  • crimson117 - Tuesday, June 4, 2013 - link

    > It's called an organic normal form relational schema

    I'm pretty sure you just made that up... or you read "Dr. Codd Was Right" :P
  • FunBunny2 - Tuesday, June 4, 2013 - link

    When I was an undergraduate, a freshman actually, whenever a professor (English, -ology, and such) would assign us to write a paper, we'd all cry out, "how long does it have to be????" One such professor replied, "organic length, as long as it has to be." Not very satisfying, but absolutely correct.

    When I was in grad school, a professor mentioned that he'd known one guy who's Ph.D. dissertation (economics, mathy variety) was one page long. An equation and its derivation. Not sure I believe that one, but it makes the point.
  • santiagoanders - Tuesday, June 4, 2013 - link

    I'm guessing you didn't get a graduate degree in English. "Whose" is possessive while "who's" is a contraction that means "who is."
  • FunBunny2 - Tuesday, June 4, 2013 - link

    Econometrics. But, whose counting?
