I'm continually amazed by Samsung's rise to power in the SSD space. If you compare its market-dominating products today to what we were reviewing from Samsung just a few years ago, you'd assume they came from a different company. The past three generations of Samsung consumer SSDs have been good, but if you focus exclusively on the past two (830/840) they've been really good.

Last year Samsung bifurcated its consumer SSD lineup by introducing the 840 Pro alongside the vanilla 840. We'd seen other companies explore a similar strategy, usually by playing with synchronous vs. asynchronous NAND or simply by using different NAND suppliers between lines. Samsung also used NAND to differentiate the two, but went even more extreme. The non-Pro version of the 840 was the first large-scale consumer SSD made with 3-bit-per-cell MLC NAND, more commonly known as TLC (triple-level-cell) NAND. Companies had toyed with the idea of going TLC well before the 840's release but were usually stopped by economic or endurance realities. The 840 changed all of that. Although it didn't come with tremendous cost savings initially, over time the Samsung SSD 840 proved to be one of the better values on the market - you just had to get over the worry of wearing out TLC NAND.

Despite having a far more limited lifespan than its 2-bit-per-cell MLC brethren, the TLC NAND Samsung used in the 840 turned out to be quite reliable. Even our own aggressive estimates pegged typical client write endurance on the 840 at more than 11 years for the 128GB model.
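
If you want to sanity check that endurance figure, a quick back-of-the-envelope calculation gets you into the same ballpark. The P/E cycle rating, daily host write volume and write amplification factor below are illustrative assumptions on our part, not Samsung's official numbers:

```python
# Rough endurance estimate for a 128GB TLC drive.
# All inputs are illustrative assumptions, not official Samsung figures.
capacity_gib = 128            # usable NAND capacity, in GiB
pe_cycles = 1000              # assumed TLC program/erase rating
host_writes_per_day_gib = 10  # assumed typical client workload
write_amplification = 3       # assumed controller write amplification

total_nand_writes_gib = capacity_gib * pe_cycles
nand_writes_per_day_gib = host_writes_per_day_gib * write_amplification

years = total_nand_writes_gib / nand_writes_per_day_gib / 365
print(f"Estimated lifespan: {years:.1f} years")  # ~11.7 years
```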


Samsung 19nm TLC NAND

We haven't seen Samsung's love of TLC embraced by other manufacturers. The most significant contrast actually comes from Micron, another NAND supplier turned SSD manufacturer, and its M500. Relying on 2bpc MLC NAND, the M500 gets its cost down through a combination of large page/block sizes (to reduce overall die area) and an aggressive embrace of the latest NAND manufacturing processes (in this case 20nm). That's always been the Intel/Micron way - spend all of your time getting to the next process node quickly, and drive down cost that way rather than going TLC. The benefit of the TLC approach is the potential for even more cost reduction, but the downside is that it usually takes a while for production TLC to reach high enough endurance to be viable for use in SSDs. The question of which route is quicker is pretty simple to answer: if we look at the 25nm and 20nm generations from IMFT, the manufacturer was able to get down to new process nodes quicker than Samsung could ship TLC in volume.

The discussion then shifts to whether or not TLC makes sense at that point, or if you'd be better off just transitioning to the next process node on MLC. Samsung clearly believes its mainstream TLC/high-end MLC split makes a lot of sense, and seeing how the 840 turned out last time I tend to agree. It's not the only solution, but given how supply constrained everyone is on the latest NAND processes this generation, any good solution to get more die per wafer is going to be well received. Samsung doesn't disclose die areas for its NAND, so we unfortunately can't tell just how much more area efficient its TLC approach is compared to IMFT's 128Gbit 20nm MLC NAND with its area-saving 16KB pages.

As with any other business in the tech industry, it turns out that a regular, predictable release cadence is a great way to build marketshare. Here we are, around 9 months after the release of the Samsung SSD 840 and we have its first successor: the 840 EVO.

As its name implies, Samsung's SSD 840 EVO is an evolution of last year's SSD 840. The EVO still uses 3-bit-per-cell TLC NAND, but it moves to a smaller process geometry. Samsung calls its latest NAND process 10nm-class (or 1x-nm), which can refer to feature sizes anywhere from 10nm to 19nm; we've also heard it referred to simply as 19nm TLC. The new 19nm TLC is available in capacities of up to 128Gbit per die, just like IMFT's latest 20nm MLC process. Unlike IMFT's 128Gbit offering, however, Samsung remains on an 8KB page size even with this latest generation of NAND. The number of pages per block also matches IMFT's previous 64Gbit 20nm MLC at 256:

IMFT vs. Samsung NAND Comparison

|                            | IMFT 20nm MLC (64Gbit) | IMFT 20nm MLC (128Gbit) | Samsung 19nm TLC | Samsung 21nm TLC | Samsung 21nm MLC |
|----------------------------|------------------------|-------------------------|------------------|------------------|------------------|
| Bits per Cell              | 2                      | 2                       | 3                | 3                | 2                |
| Single Die Max Capacity    | 64Gbit                 | 128Gbit                 | 128Gbit          | 128Gbit          | 64Gbit           |
| Page Size                  | 8KB                    | 16KB                    | 8KB              | 8KB              | 8KB              |
| Pages per Block            | 256                    | 512                     | 256              | 192              | 128              |
| Read Page (max)            | 100 µs                 | 115 µs                  | ?                | ?                | ?                |
| Program Page (typical)     | 1300 µs                | 1600 µs                 | ?                | ?                | ?                |
| Erase Block (typical)      | 3 ms                   | 3.8 ms                  | ?                | ?                | ?                |
| Die Size                   | 118mm²                 | 202mm²                  | ?                | ?                | ?                |
| Gbit per mm²               | 0.542                  | 0.634                   | ?                | ?                | ?                |
| Rated Program/Erase Cycles | 3000                   | 3000                    | 1000 - 3000      | 1000 - 3000      | 3000 (?)         |
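
The density and block geometry figures in the table are straightforward to cross check from the numbers that are actually disclosed. A quick sketch (Samsung's die sizes are unknown, so only the IMFT densities can be computed):

```python
# Cross-checking the NAND comparison table above.

# Areal density in Gbit per mm^2 (die sizes are only disclosed for the IMFT parts).
imft_64gbit_density = 64 / 118     # ~0.542 Gbit/mm^2
imft_128gbit_density = 128 / 202   # ~0.634 Gbit/mm^2

# Physical block size = page size x pages per block.
def block_size_mib(page_kib, pages_per_block):
    return page_kib * pages_per_block / 1024

imft_64gbit_block = block_size_mib(8, 256)       # 2 MiB
imft_128gbit_block = block_size_mib(16, 512)     # 8 MiB
samsung_19nm_tlc_block = block_size_mib(8, 256)  # 2 MiB, same as IMFT's 64Gbit die

print(imft_64gbit_density, imft_128gbit_density)
print(imft_64gbit_block, imft_128gbit_block, samsung_19nm_tlc_block)
```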

The high level specs, at least those Samsung gives us, point to an unwillingness to sacrifice latency any further in order to shrink die area. The decision makes sense since TLC is already expected to have 50% longer program times than 2bpc MLC. IMFT, on the other hand, has some latency to give up with its MLC NAND, which is why we see the move to 2x larger page and block sizes with its 128Gbit NAND die. Ultimately that's going to be the most interesting comparison - how Samsung's SSD 840 EVO with its 19nm TLC NAND stacks up against Crucial's M500, the first implementation of IMFT's 128Gbit 20nm MLC NAND.
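
Samsung doesn't publish program latencies for its TLC, but IMFT's own two generations show the trade at work: doubling the page size raises program latency, yet per-die program throughput still goes up. Samsung, already starting from slower TLC programming, has less room to make that trade. A quick illustration using only the IMFT figures from the table above:

```python
# Per-die program throughput = page size / typical program time.
# Samsung's TLC latencies are undisclosed, so this only illustrates the IMFT tradeoff.
def program_throughput_mibps(page_kib, program_us):
    return (page_kib / 1024) / (program_us / 1_000_000)

print(program_throughput_mibps(8, 1300))   # IMFT 64Gbit (8KB page):   ~6.0 MiB/s per die
print(program_throughput_mibps(16, 1600))  # IMFT 128Gbit (16KB page): ~9.8 MiB/s per die
```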

Modern Features

Along with the NAND update, the EVO also sees a pretty significant controller upgrade. The underlying architecture hasn't changed: Samsung's MEX controller is still based on the same triple-core Cortex R4 design as the previous-generation MDX controller. The cores now run at 400MHz compared to 300MHz previously, which helps enable some of the higher performance on the EVO. The MEX controller also sees an update to SATA 3.1, something we first saw with SanDisk's Extreme II. SATA 3.1 brings a number of features, one of the most interesting being support for queued TRIM commands.

The EVO boasts hardware AES-256 encryption and has its PSID printed on each drive label, like Crucial's M500. In the event that you set and then lose the drive's encryption key, you can use the PSID to unlock the drive (although all data will be lost). At launch the EVO doesn't support TCG Opal, and thus Microsoft's eDrive spec; however, Samsung tells us that a firmware update scheduled for September will enable both - again bringing the EVO to encryption feature parity with Crucial's M500.

With Samsung being one of the world's most prominent DRAM makers, it's no surprise to find a ton of DRAM used to cache the firmware and indirection table on the EVO. DRAM size scales with capacity, although Samsung tosses in a bit more than is necessary at a couple of capacity points (e.g. 250GB).

Samsung SSD 840 EVO DRAM

|           | 120GB             | 250GB             | 500GB             | 750GB           | 1TB             |
|-----------|-------------------|-------------------|-------------------|-----------------|-----------------|
| DRAM Size | 256MB LPDDR2-1066 | 512MB LPDDR2-1066 | 512MB LPDDR2-1066 | 1GB LPDDR2-1066 | 1GB LPDDR2-1066 |
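
That scaling follows from how much mapping data the controller has to keep around. As a rough sketch, if we assume a flat indirection table with a 4-byte entry per 4KB logical page (a common rule of thumb; Samsung doesn't document its actual mapping granularity), the required DRAM lines up nicely with the table above - including the extra headroom at 250GB:

```python
# Rough indirection table sizing: assumes a flat map with a 4-byte entry
# per 4KiB logical page (~1MB of DRAM per 1GB of NAND). Samsung's actual
# mapping granularity is not disclosed.
def mapping_table_mib(capacity_gb, entry_bytes=4, page_kib=4):
    logical_pages = capacity_gb * 1_000_000_000 / (page_kib * 1024)
    return logical_pages * entry_bytes / (1024 * 1024)

for cap in (120, 250, 500, 750, 1000):
    print(f"{cap}GB -> ~{mapping_table_mib(cap):.0f} MiB of mapping data")
# 120GB -> ~112 MiB, 250GB -> ~233 MiB, 500GB -> ~466 MiB,
# 750GB -> ~698 MiB, 1000GB -> ~931 MiB
```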

The move to 19nm 128Gbit TLC NAND die paves the way for some very large drive capacities. Similar to Crucial's M500, the 840 EVO is offered in configurations of up to 1TB.

Samsung SSD 840 EVO Specifications

|                       | 120GB    | 250GB    | 500GB    | 750GB    | 1TB      |
|-----------------------|----------|----------|----------|----------|----------|
| Controller, Interface | Samsung MEX, SATA 3.1 (all capacities)                |
| NAND                  | Samsung 19nm 3bpc TLC Toggle DDR 2.0 NAND             |
| Form Factor           | 2.5" 7mm                                              |
| Max Sequential Read   | 540MB/s  | 540MB/s  | 540MB/s  | 540MB/s  | 540MB/s  |
| Max Sequential Write  | 410MB/s  | 520MB/s  | 520MB/s  | 520MB/s  | 520MB/s  |
| Max 4KB Random Read   | 94K IOPS | 97K IOPS | 98K IOPS | 98K IOPS | 98K IOPS |
| Max 4KB Random Write  | 35K IOPS | 66K IOPS | 90K IOPS | 90K IOPS | 90K IOPS |
| Encryption            | AES-256 FDE, PSID printed on SSD label                |
| Warranty              | 3 years                                               |

I'll get to the dissection of the performance specs momentarily, but you'll notice some very high peak random and sequential performance out of these mainstream drives. The peak performance improvement over last year's 840 is beyond significant. The keyword there, of course, is peak.

Pricing

Samsung expects the 840 EVO to be available in the channel at the beginning of August. What we have in the table below are suggested MSRPs, which, as long as supply isn't limited, usually end up being higher than street prices:

SSD Pricing Comparison - 7/24/2013

|                     | 120/128GB | 240/250/256GB | 480/500/512GB | 750GB   | 960GB/1TB |
|---------------------|-----------|---------------|---------------|---------|-----------|
| Crucial M500        | $120.99   | $193.56       | $387.27       | -       | $599.99   |
| Intel SSD 335       | -         | $219.99       | -             | -       | -         |
| Samsung SSD 840     | $98.44    | $168.77       | $328.77       | -       | -         |
| Samsung SSD 840 EVO | $109.99   | $189.99       | $369.99       | $529.99 | $649.99   |
| Samsung SSD 840 Pro | $133.49   | $230.95       | $458.77       | -       | -         |
| SanDisk Extreme II  | $129.99   | $229.77       | $449.99       | -       | -         |
| SanDisk Ultra Plus  | $96.85    | $174.29       | -             | -       | -         |
| OCZ Vertex 450      | $129.99   | $246.84       | -             | -       | -         |

Prices are a bit higher than the outgoing Samsung SSD 840, which makes sense since we're looking at the beginning of the cost curve of a new process node. Crucial's highly sought after $600 960GB M500 finally seems to be back in stock, just in time for the EVO to go head to head with it. Samsung is expecting roughly a $50 premium for the 1TB EVO over the Crucial solution, but over time I'd expect that gap to shrink down to nothing (or tilt in Samsung's favor). The EVO is considerably more affordable than Samsung's 840 Pro, and the higher capacity points are at particularly tempting prices.
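
Normalizing those MSRPs to cost per gigabyte makes the value story a bit clearer. A quick sketch using the launch pricing from the table above:

```python
# Cost per GB at the 840 EVO's suggested launch pricing (from the table above).
evo_msrp = {120: 109.99, 250: 189.99, 500: 369.99, 750: 529.99, 1000: 649.99}

for capacity_gb, price in evo_msrp.items():
    print(f"{capacity_gb}GB: ${price / capacity_gb:.2f}/GB")

# For reference, Crucial's 960GB M500 at $599.99 works out to ~$0.62/GB,
# while the 1TB EVO lands at ~$0.65/GB - roughly the $50 premium noted above.
```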

Comments

  • ervinshiznit - Thursday, July 25, 2013 - link

    Typo? On the Turbowrite page you say "For most light use cases I can see TurboWrite being a great way to deliver more of an MLC experience but on a TLC drive."
    It should be deliver more of a SLC experience but on a TLC drive.
  • ciri - Sunday, July 28, 2013 - link

    SLC>MLC>TLC
  • Guspaz - Thursday, July 25, 2013 - link

    The fact that RAPID sees any performance improvement at all illustrates to me a failure of the operating system's disk caching subsystem. That's all that RAPID really is, after all, a replacement for the Windows disk cache.

    I'd be curious to see the performance results of RAPID compared to the disk caching subsystems on other platforms, such as Linux and ZFS (which even on Linux has its own cache called the "ARC"). Are the large improvements because Windows disk caching is particularly bad, or because RAPID is a better implementation than anybody else's?
  • themelon - Thursday, July 25, 2013 - link

    Windows is absolutely horrible at filesystem caching and I don't think it does any sort of block caching. It seems to use more of a FIFO algorithm that has no sequential write bypass no matter what you do. ZFS and the two block device caches that were recently integrated into the Linux kernel, bcache and dm-cache, use more of an LRU method. All of them have at least basic sequential bypass detection as well. bcache in particular is tunable to your load in almost all aspects of performance. Of course these are only block-side caching and currently have no filesystem-specific knowledge.

    There is some interesting work going on to track hot spots that will eventually allow for preemptive cache warming and/or hot relocation. Right now it is BTRFS specific but it is being integrated below the filesystem layer so any filesystem will eventually be able to take advantage of it.

    ZFS on Linux is a waste of time in my opinion. ZFS's L2ARC and SLOG are great but limited by some of what I feel are architectural flaws in zfs itself. I used to love zfs but the Linux kernel block stack has caught up to it in features and still offers all of the flexibility that it always has.
  • aicom - Friday, July 26, 2013 - link

    Windows' cache system is better than you give it credit for. It does support sequential bypass (see FILE_FLAG_SEQUENTIAL_SCAN flag). It works with filesystem drivers with the Cc* APIs in the kernel. It also supports caching files over a network, even with other clients modifying the files. It does standard read-ahead and write-behind and is supplemented by an adaptive prefetcher (SuperFetch).

    The reason we're seeing such huge gains is because the programs being tested explicitly ask NOT to be cached. The whole point is to test the drive, so they pass FILE_FLAG_NO_BUFFERING to disable caching on the files being accessed.
  • MrSpadge - Saturday, July 27, 2013 - link

    Excellent post!
  • Timur Born - Sunday, July 28, 2013 - link

    The question still arises: why is the Anand Storage Bench affected beneficially by RAPID?! Is it because ASB also asks for the Windows cache to be bypassed, is it because the Windows cache flushes parts of its pages every second, or does RAPID communicate with the drive (firmware) at a more fundamental level that allows further optimizations?
  • watersb - Friday, July 26, 2013 - link

    Excellent points. I stick with ZFS because I trust it (after many hardware failures but no data loss) and because it is cross-platform.

    Mac HFS does "hot relocation", I believe. And NTFS has always tried to keep hot files in the middle of the disk in order to reduce hard disk seek times. So maybe I don't understand what is meant by hot relocation.
  • piroroadkill - Thursday, July 25, 2013 - link

    I agree. I'm pretty sure Windows' own disk caching is terrible. It's pretty poor even on the server side. They really need to work on that shit.
  • tincmulc - Thursday, July 25, 2013 - link

    How is RAPID any better than SuperCache or FancyCache? Not only do they do the same thing, but they can also be configured to use more RAM or use OS-invisible memory (32-bit OS with more than 3GB of RAM), and they work for any drive, even HDDs.
