The Intel SSD 600p (512GB) Review
by Billy Tallis on November 22, 2016 10:30 AM EST

Intel's SSD 600p was the first PCIe SSD using TLC NAND to hit the consumer market. It is Intel's first consumer SSD with 3D NAND, and it is by far the most affordable NVMe SSD: current pricing is on par with mid-range SATA SSDs. While most other consumer PCIe SSDs have been enthusiast-oriented products aiming to deliver the highest performance possible, the Intel 600p merely attempts to break the speed limits of SATA without breaking the bank.
The Intel SSD 600p has almost nothing in common with Intel's previous NVMe SSD for consumers, the Intel SSD 750. Where the Intel SSD 750 uses Intel's in-house enterprise SSD controller with consumer-oriented firmware, the Intel 600p uses a third-party controller. The SSD 600p is an M.2 PCIe SSD with peak power consumption only slightly higher than the SSD 750's idle. By comparison, the Intel SSD 750 is a high-power, high-performance drive that comes in PCIe expansion card and 2.5" U.2 form factors, both with sizable heatsinks.
Intel SSD 600p Specifications Comparison

| | 128GB | 256GB | 512GB | 1TB |
|---|---|---|---|---|
| Form Factor | single-sided M.2 2280 (all capacities) | | | |
| Controller | Intel-customized Silicon Motion SM2260 | | | |
| Interface | PCIe 3.0 x4 | | | |
| NAND | Intel 384Gb 32-layer 3D TLC | | | |
| SLC Cache Size | 4 GB | 8.5 GB | 17.5 GB | 32 GB |
| Sequential Read | 770 MB/s | 1570 MB/s | 1775 MB/s | 1800 MB/s |
| Sequential Write (SLC Cache) | 450 MB/s | 540 MB/s | 560 MB/s | 560 MB/s |
| 4KB Random Read (QD32) | 35k IOPS | 71k IOPS | 128.5k IOPS | 155k IOPS |
| 4KB Random Write (QD32) | 91.5k IOPS | 112k IOPS | 128k IOPS | 128k IOPS |
| Endurance | 72 TBW | 144 TBW | 288 TBW | 576 TBW |
| Warranty | 5 years (all capacities) | | | |
The Intel SSD 600p is our first chance to test Silicon Motion's SM2260, their first PCIe SSD controller. Silicon Motion's SATA SSD controllers have earned a strong reputation for being affordable, low-power, and good mainstream performers. One key to the power efficiency of Silicon Motion's SATA SSD controllers is their use of an optimized single-core ARC processor (licensed from Synopsys), but in order to meet the SM2260's performance target, Silicon Motion has finally switched to a dual-core ARM processor. The controller chip used on the SSD 600p has some customizations specifically for Intel and bears both Intel and SMI logos.
The 3D TLC NAND used on the Intel SSD 600p is the first-generation 3D NAND that Intel co-developed with Micron. We've already evaluated Micron's Crucial MX300 with the same 3D TLC and found it to be a great mainstream SATA SSD. The MX300 was unable to match the performance of Samsung's 3D TLC NAND as found in the 850 EVO, but the MX300 is substantially cheaper and remarkably power efficient, both in comparison to Samsung's SSDs and to other SSDs that pair the MX300's controller with planar NAND.
Intel uses the same 3D NAND flash die for its MLC and TLC parts. The MLC configuration, which has not yet found its way to the consumer SSD market, stores 256Gb (32GB) per die; operating the same die as TLC stores three bits per cell instead of two, raising capacity by half to 384Gb (48GB). Micron took advantage of this odd size to offer the MX300 in non-standard capacities, but for the SSD 600p Intel is offering normal power-of-two capacities with large fixed-size SLC write caches in the spare area. The ample spare area also allows for a write endurance rating of about 0.3 drive writes per day for the duration of the five-year warranty.
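That 0.3 figure is easy to reproduce from the rated endurance in the table above. A minimal sanity-check sketch in Python, where the only inputs are the rated TBW figures, the drive capacities, and the five-year warranty period:

```python
# Back-of-the-envelope check of the 600p's endurance ratings:
# drive writes per day (DWPD) = rated TBW / (capacity * warranty days).
WARRANTY_DAYS = 5 * 365  # five-year warranty

ratings_tbw = {128: 72, 256: 144, 512: 288, 1024: 576}  # capacity (GB) -> rated TBW

for capacity_gb, tbw in ratings_tbw.items():
    total_writes_gb = tbw * 1000  # TBW expressed in GB
    dwpd = total_writes_gb / (capacity_gb * WARRANTY_DAYS)
    print(f"{capacity_gb:>5} GB: {dwpd:.2f} DWPD")

# Every capacity works out to ~0.31 DWPD, matching the ~0.3 figure above.
```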
Intel 3D TLC NAND, four 48GB dies for a total of 192GB per package
The Intel SSD 600p shares its hardware with two other Intel products: the SSD Pro 6000p for business client computing and the SSD E 6000p for the embedded and IoT market. The Pro 6000p is the only one of the three to support encryption and Intel's vPro security features. The SSD 600p relies on the operating system's built-in NVMe driver and on Intel's consumer SSD Toolbox software, which was updated in October to support the 600p.
For this review, the primary comparisons will not be against high-end NVMe drives but against mainstream SATA SSDs, as those are ultimately the closest competitors to a mid-to-low range NVMe drive like the 600p. The Crucial MX300 has given us a taste of what the Intel/Micron 3D TLC can do, and it is currently one of the best value SSDs on the market. The Samsung 850 EVO is very close to the Intel SSD 600p in price and sets the bar for the performance the SSD 600p needs to provide in order to be a good value.
Because the Intel SSD 600p is targeting a more mainstream audience and a more modest level of performance than most other M.2 PCIe SSDs, I have additionally tested its performance in the M.2 slot built into the testbed's ASUS Z97 Pro motherboard. In this configuration the SSD 600p is limited to a PCIe 2.0 x2 link, compared to the PCIe 3.0 x4 link available in the ordinary testing process, where the drive is installed in an adapter in the primary PCIe x16 slot. This extra set of results does not include power measurements, but it may be more useful to desktop users who are considering adding a cheap NVMe SSD to an older but compatible existing system.
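To put those link widths in perspective, here is a minimal sketch of the theoretical ceilings, assuming the usual per-lane figures of roughly 500 MB/s for PCIe 2.0 (after 8b/10b encoding) and roughly 985 MB/s for PCIe 3.0 (after 128b/130b encoding); the rated sequential read speed comes from the specification table above:

```python
# Rough link-bandwidth ceiling for each test configuration, using the
# usual per-lane throughput after encoding overhead.
PER_LANE_MBPS = {"PCIe 2.0": 500, "PCIe 3.0": 985}

links = {
    "Z97 onboard M.2 (PCIe 2.0 x2)": ("PCIe 2.0", 2),
    "x16 slot adapter (PCIe 3.0 x4)": ("PCIe 3.0", 4),
}

RATED_SEQ_READ = 1775  # MB/s, rated sequential read of the 512GB 600p

for name, (gen, lanes) in links.items():
    ceiling = PER_LANE_MBPS[gen] * lanes
    verdict = "link-limited" if ceiling < RATED_SEQ_READ else "not link-limited"
    print(f"{name}: ~{ceiling} MB/s ceiling ({verdict})")

# The onboard slot tops out around 1000 MB/s, well under the 512GB
# drive's 1775 MB/s rating, so sequential transfers will be capped there.
```

In other words, sequential results from the onboard M.2 slot should be read as link-limited rather than drive-limited.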
| AnandTech 2015 SSD Test System | |
|---|---|
| CPU | Intel Core i7-4770K running at 3.5GHz (Turbo & EIST enabled, C-states disabled) |
| Motherboard | ASUS Z97 Pro (BIOS 2701) |
| Chipset | Intel Z97 |
| Memory | Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T) |
| Graphics | Intel HD Graphics 4600 |
| Desktop Resolution | 1920 x 1200 |
| OS | Windows 8.1 x64 |
- Thanks to Intel for the Core i7-4770K CPU
- Thanks to ASUS for the Z97 Pro motherboard
- Thanks to Corsair for the Vengeance 16GB DDR3-1866 DRAM kit, RM750 power supply, Carbide 200R case, and Hydro H60 CPU cooler
63 Comments
vFunct - Tuesday, November 22, 2016 - link
These would be great for server applications, if I could find PCIe add-in cards that have 4x M.2 slots.

I'd love to be able to stick 10 or 100 or so of these in a server, as an image/media store.
ddriver - Tuesday, November 22, 2016 - link
You should call intel to let them know they are marketing it in the wrong segment LOL

ddriver - Tuesday, November 22, 2016 - link
To clarify, this product is evidently the runt of the nvme litter. For regular users, it is barely faster than sata devices. And once it runs out of cache, it actually gets slower than a sata device. Based on its performance and price, I won't be surprised if its reliability is just as subpar. Putting such a device in a server is like putting a drunken hobo in a Lamborghini.

BrokenCrayons - Tuesday, November 22, 2016 - link
Assuming a media storage server scenario, you'd be looking at write once and read many, where the cache issues aren't going to pose a significant problem to performance. Using an array of them would also mitigate much of that write performance penalty using some form of RAID. Of course that applies to SATA devices as well, but there's a density advantage realized in NVMe.

vFunct - Tuesday, November 22, 2016 - link
bingo.

Now, how can I pack a bunch of these in a chassis?
BrokenCrayons - Tuesday, November 22, 2016 - link
I'd think the best answer to that would be a custom motherboard with the appropriate slots on it to achieve high storage densities in a slim (maybe something like a 1/2 1U rackmount) chassis. As for PCIe slot expansion cards, there are a few out there that would let you install 4x M.2 SSDs on a PCIe slot, but they'd add to the cost of building such a storage array. In the end, I think we're probably a year or three away from using NVMe SSDs in large storage arrays outside of highly customized and expensive solutions for companies that have the clout to leverage something that exotic.

ddriver - Tuesday, November 22, 2016 - link
So are you going to make that custom motherboard for him, or will he be making it for himself? While you are at it, you may also want to make a CPU with 400 PCIe lanes so that you can connect those 100 lousy budget 600ps.

Because I bet the industry isn't itching to make products for clueless and moneyless dummies. There is already a product that's unbeatable for media storage - an 8TB Ultrastar He8. As SSD for media storage - that makes no sense, and a 100 of those only makes a 100 times less sense :D
BrokenCrayons - Tuesday, November 22, 2016 - link
"So are you going to make that..."Sure, okay.
Samus - Tuesday, November 22, 2016 - link
ddriver, you are ignoring his specific application when judging his solution to be wrong. For imaging, sequential throughput is all that matters. I used to work part time in PC refurbishing for education, and we built a bench to image 64 PCs at a time over 1GbE, with a dual 10GbE fiber backbone to a server using an OCZ RevoDrive PCIe SSD, which was at the time the best option on the market. Even this drive was crippled by a single 10GbE connection, let alone dual 10GbE connections, which is why we eventually installed TWO of them in RAID 1.

This hackjob configuration allowed imaging 60+ PCs simultaneously over GbE in about 7 minutes when booting via PXE, running a diskpart script and imagex to uncompress a sysprep'd image.
The RevoDrives were not reliable. One would fail like clockwork almost annually, and eventually in 2015 after I had left I heard they fell back to a pair of Plextor M.2 2280s in a PCIe x4 adapter for better reliability. It was, and still is, however, very expensive to do this compared to what the 600p is offering.
Any high-throughput sequential reading application would greatly benefit from the performance and price the 600p is offering, not to mention Intel has class-leading reliability in the SSD sector, with a 0.3%/year failure rate according to their own internal 2014 data...there is no reason to think of all companies Intel won't keep reliability as a high priority. After all, they are still the only company to master the SandForce 2200, a controller that had incredibly high failure rates across every other vendor and effectively led to OCZ's bankruptcy.
ddriver - Tuesday, November 22, 2016 - link
So how does all this connect to, and I quote, "stick 10 or 100 or so of these in a server, as an image/media store"?

Also, he doesn't really have "his specific application", he just spat a bunch of nonsense he believed would be cool :D
Lastly, next time try multicasting; that way you can simultaneously send data to 64 hosts at 1 gbps without the need for dual 10gbit or an uber expensive switch, achieving full parallelism and an effective 64 gbps. In that case a regular sata ssd or even an hdd would have sufficed, as even mechanical drives have no problem saturating the 1 gbps lines to the targets. You could have done the same work, or even better, at like 1/10 of the cost. You could even do 1000 systems at a time, or as many as you want - just daisy chain more switches; terabit, petabit effective cumulative bandwidth is just as easily achievable.
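The multicast technique ddriver describes is straightforward to prototype. A minimal sending-side sketch in Python (the group address, port, chunk size, and image filename are arbitrary assumptions; raw UDP provides no sequencing or retransmission, which is why real imaging deployments use a multicast-aware tool such as udpcast):

```python
import socket
import struct

# Hypothetical multicast group and port; any address in the
# 239.0.0.0/8 organization-local range works on a LAN segment.
MCAST_GROUP = "239.1.2.3"
MCAST_PORT = 5007
CHUNK_SIZE = 1400  # stay under a typical Ethernet MTU

def multicast_image(path: str) -> None:
    """Send a disk image once; every subscribed host receives it in parallel."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL of 1 keeps the multicast traffic on the local segment.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                    struct.pack("b", 1))
    with open(path, "rb") as image:
        while chunk := image.read(CHUNK_SIZE):
            sock.sendto(chunk, (MCAST_GROUP, MCAST_PORT))

if __name__ == "__main__":
    multicast_image("win10_sysprep.wim")  # hypothetical image file
```

Receivers would join the group with the IP_ADD_MEMBERSHIP socket option and write the chunks out in order; the point is that the source sends the image once at 1 gbps while all 64 targets receive it simultaneously.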