A TLC Refresher

Back in February, we published an article called Understanding TLC NAND, where we went in-depth on how NAND works and the differences between the various kinds of NAND (SLC, MLC, and TLC). Back then we didn't know when TLC SSDs would become publicly available or who would be the first manufacturer. OCZ had reportedly been interested in releasing TLC-based SSDs, but the supply of TLC NAND wasn't good enough for its needs. Samsung has the benefit of being a tier-one manufacturer that makes its own NAND, which gives it an advantage when dealing with new technologies because it controls the output of NAND. In this case, Samsung was able to ramp up production of TLC NAND when it wanted to, whereas OCZ has to live with whatever the NAND manufacturers are ready to sell it.

While we have covered TLC in detail already, we have some new details to add:

                  SLC          MLC          TLC
Bits per Cell     1            2            3
P/E Cycles        100,000      3,000        1,000
Read Time         25us         50us         ~75us
Program Time      200-300us    600-900us    ~900-1350us
Erase Time        1.5-2ms      3ms          ~4.5ms

Samsung would not tell us the exact read, program, and erase latencies, but it told us that its TLC is around 50% slower than its MLC NAND. We don't know the latencies for Samsung's MLC NAND either, so we have to go by general MLC NAND latencies, which vary a lot depending on the process. However, we were able to get the P/E cycle count for TLC, which is 1,000. Samsung did not specify the process node, but given that it listed MLC at 3,000 cycles, we are most likely talking about 27nm or 21nm. It wouldn't be surprising if Samsung rates its 21nm MLC NAND at 3,000 P/E cycles as well, since IMFT was able to keep endurance at the same level with its 20nm MLC NAND.
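As a rough illustration, Samsung's "around 50% slower" figure can be applied to typical MLC latencies to produce the TLC estimates in the table above. A minimal sketch; the MLC baseline numbers are generic industry figures, not Samsung-specific:

```python
# Estimate TLC NAND latencies by scaling typical MLC figures
# per Samsung's statement that its TLC is ~50% slower.
SLOWDOWN = 1.5  # TLC assumed ~50% slower than MLC

mlc_latencies = {            # typical MLC NAND timings (not Samsung's)
    "read_us": 50,
    "program_us": (600, 900),  # a range, as program time varies
    "erase_ms": 3.0,
}

def estimate_tlc(mlc):
    """Scale each MLC latency (single value or range) by the slowdown factor."""
    tlc = {}
    for name, value in mlc.items():
        if isinstance(value, tuple):
            tlc[name] = tuple(v * SLOWDOWN for v in value)
        else:
            tlc[name] = value * SLOWDOWN
    return tlc

print(estimate_tlc(mlc_latencies))
# read ~75us, program ~900-1350us, erase ~4.5ms -- matching the table
```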

Physically, TLC is similar to SLC and MLC. All three consist of similar transistors; the only difference is the number of bits each cell stores. SLC stores only one, whereas MLC stores two and TLC stores three. This actually creates a minor problem, as no power of two is a multiple of three. Unlike hard drive capacities, SSD capacities typically go in powers of two, such as 64GB, 128GB, and 256GB.

NAND is actually built on binary prefixes (mebi, gibi...) but is almost always referred to with metric prefixes (mega, giga...). For example, a 128GB SSD actually has 128GiB (~137.4GB) of raw NAND; the difference between the raw capacity and the advertised capacity is used as spare area.
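The binary-to-metric translation is simple arithmetic, sketched here for the 128GB example:

```python
# Convert binary (GiB) capacity to decimal (GB) capacity.
def gib_to_gb(gib):
    """Gibibytes (2**30 bytes) expressed in decimal gigabytes (10**9 bytes)."""
    return gib * 2**30 / 10**9

raw_gb = gib_to_gb(128)        # raw NAND on a "128GB" SSD
print(round(raw_gb, 1))        # 137.4
print(round(raw_gb - 128, 1))  # ~9.4GB left over for spare area
```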

If the raw NAND array has 17.2 billion transistors, you get 16Gib (17.2Gb) of storage with SLC NAND because each cell stores one bit of data. MLC yields 32Gib, which is still a nice power of two because all you're doing is adding one bit per cell. With TLC, however, you get 48Gib, which is not a power of two. Technically nothing is stopping manufacturers from making a 48Gib die, but from a marketing and engineering standpoint it's much easier to stick with powers of two, so a TLC die should be 32Gib just like MLC. To achieve that, the die is simply reduced in size to around 11.5 billion transistors. 32Gib isn't exactly divisible by three, but thanks to spare bits it doesn't have to be. The trick here is that a TLC die of a given capacity is smaller than the equivalent MLC die, which results in more dies per wafer and hence lower production costs.

Comments

  • xdrol - Monday, October 8, 2012 - link

    You sir need to learn how SSDs work. Static data is not static on the flash chip - the controller shuffles it around, exactly because of wear levelling.
  • name99 - Tuesday, October 9, 2012 - link

    "I think Kristian should have made this all more clear because too many people don't bother to actually read stuff and just look at charts."

    Kristian is not the problem.
    There is a bizarre fraction of the world of tech "enthusiasts" who are convinced that every change in the world is a conspiracy to screw them over.

    These people have been obsessing about the supposed fragility of flash memory from day one. We have YEARS of real world experience with these devices but it means nothing to them. We haven't been screwed yet, but with TLC it's coming, I tell you.
    The same people spent years insisting that non-replaceable batteries were a disaster waiting to happen.
    Fifteen years ago they were whining about the iMac not including a floppy drive, for the past few years they have been whining about recent computers not including an optical drive.
    A few weeks ago we saw the exact same thing regarding Apple's new Lightning connector.

    The thing you have to remember about these people is
    - evidence means NOTHING. You can tell them all the figures you want, about 0.1% failure rates, or minuscule return rates, or whatever. None of that counts against their gut feeling that this won't work, or, even better, an anecdote that some guy somewhere had a problem.
    - they have NO sense of history. Even if they lived through these transitions before, they cannot see how changes in 2000 are relevant to changes in 2012.
    - they will NEVER admit that they were wrong. The best you can possibly get out of them is a grudging acceptance that, yeah, Apple was right to get rid of floppy disks, but they did it too soon.

    In other words these are fools that are best ignored. They have zero knowledge of history, zero knowledge of the market, zero knowledge of the technology --- and the grandiose opinions that come from not actually knowing any pesky details or facts.
  • piiman - Tuesday, February 19, 2013 - link

    Then stick with Intel, not because their drives last longer, but because they have a great warranty (5 years). My drive went bad at about 3.5 years and Intel replaced it, no questions asked, and did it very quickly. I sent it in and had a new one 2 days after they received my old one. Great service!
  • GTRagnarok - Monday, October 8, 2012 - link

    This is assuming a very exaggerated amplification of 10x.
  • Kristian Vättö - Monday, October 8, 2012 - link

    Keep in mind that it's an estimation based on the example numbers. 10x write amplification is fairly high for consumer workloads; most usually have something between 1-3x (though it gets a bit bigger when taking wear leveling efficiency into account). Either way, we played it safe and used 10x.

    Furthermore, the reported P/E cycle counts are minimums. You have to be conservative when doing endurance ratings because every single die you sell must be able to achieve that number. Hence it's completely possible (and even likely) that TLC can do more than 1,000 P/E cycles. It may be 1,500 or 3,000, I don't know; but 1,000 is the minimum. There is a Samsung 830 at XtremeSystems (had to remove the link as our system thought it was spam, LOL) that has lasted for more than 3,000TiB, which would translate to over 10,000 P/E cycles (supposedly, that NAND is rated at 3,000 cycles).

    Of course, as mentioned at the end of the review, the 840 is something you would recommend to a light user (think about your parents or grandparents for instance), whereas the 840 Pro is the drive for heavier users. Those users are not writing a lot (heck, they may not use their system for days!), hence the endurance is not an issue.
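The back-of-the-envelope endurance estimate being debated in this thread can be sketched in a few lines. The inputs below (1,000 P/E cycles, the deliberately conservative 10x write amplification, 10GiB of host writes per day) are the example figures from this discussion, not measurements:

```python
# Rough SSD lifespan estimate from P/E cycles, capacity,
# write amplification, and daily host writes.
def lifespan_years(capacity_gib, pe_cycles, write_amp, daily_writes_gib):
    total_nand_writes = capacity_gib * pe_cycles     # total P/E budget in GiB
    host_writes = total_nand_writes / write_amp      # GiB the host can write
    return host_writes / daily_writes_gib / 365

# 128GB TLC drive, 1,000 P/E cycles, conservative 10x write
# amplification, 10GiB of host writes per day:
print(round(lifespan_years(128, 1000, 10, 10), 1))  # ~3.5 years
```

With a more typical 1-3x write amplification, the same drive lasts several times longer, which is the point Kristian makes above.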
  • A5 - Monday, October 8, 2012 - link

    Ah. I didn't know the 10x WA number was exceedingly conservative. Nevermind, then.
  • TheinsanegamerN - Friday, July 5, 2013 - link

    3.5 years assumes you are writing 36.5GB of data a day. If the computer it's sitting in is mostly used for online work or document editing, you'll get far more. The laptop would probably die long before the SSD did.
    Also, this only applies to the TLC SSDs. MLC SSDs last 3 times longer, so the 840 Pro would be better for a computer kept longer than 3 years.
  • Vepsa - Monday, October 8, 2012 - link

    Might just be able to convince the wife that this is the way to go for her computer and my computer.
  • CaedenV - Monday, October 8, 2012 - link

    That is how I did it. My wife's old 80GB system drive died a bit over a year ago, and it was one of those choices of $75 for a decent HDD, or $100 for an SSD that would be 'big enough' for her as a system drive (60GB at the time). So I spent the extra $25, and it made her ~5-year-old Core2Duo machine faster (for day-to-day workloads) than my brand new i7 monster that I had just built (but was still using a traditional HDD at the time).

    I eventually got so frustrated by the performance difference that I finally got one for myself, and then after my birthday came, I spent my fun money on a 2nd one for RAID0. It did not make a huge performance increase (I mean, it was faster in benchmarks, but doubling the speed of instant is still instant lol), but it did give me enough space to load all my programs on the SSD instead of splitting them between the SSD and HDD.
  • AndersLund - Sunday, November 25, 2012 - link

    Note that setting up a RAID with your SSDs might prevent the OS from seeing them as SSDs, so it won't send TRIM commands to the disks. My first (and current) gamer system consists of two Intel 80GB SSDs in a RAID0 setup, but the OS (and Intel's toolbox) does not recognize them as SSDs.
