116 Comments


  • Hurr Durr - Thursday, March 8, 2018 - link

    Hypetane!
  • iter - Thursday, March 8, 2018 - link

    optane = hypetane
    x-point = xtra-pointless

    It keeps getting worse and worse instead of getting better. The next x-point iteration may slip below nand even in the few strong points of the technology.

    Also, it doesn't seem that enterprise is very interested in intel's offering, seeing how they struggle to cram the product into market niches where it is xtra-pointless. I'd go out on a limb and assume that's not out of love for consumers or a desire to skip those fat enterprise product margins.

    Also, it seems that intel gave very misleading information not only in terms of performance, but also regarding the origin of the technology. The official story is its development began in 2012 as a joint venture between intel and micron.

    That however is not true; x-point can be traced back to a company, now erased from history, named Unity Semiconductors, which was flogging the tech back in 2009 under the CMOx moniker.

    Courtesy of archive.org, there is still some trace of that, along with several PDFs explaining the operational principle of what intel has been highly secretive about:

    https://web.archive.org/web/20120205085357/http://...

    All in all, the secrecy might have to do with intel's inability to deliver on the highly ambitious expectations of the actual designers of the tech. It is nowhere near 200% better than nand in density; in fact it seems that at the current manufacturing node it won't be possible to make more than 256 gb in the m2 form factor, which is 8 times less than mlc nand or 24 times less than what was projected in 2009. Performance is not all that stellar either, a tad lower than what slc was capable of back in 2012; thank the gods nobody makes slc anymore, so there's a ray of sunshine to make xtra-pointless hypetane look good on paper.
  • chrnochime - Thursday, March 8, 2018 - link

    Rambus renamed it ReRAM according to this 2015 article, so it would seem the tech lived on through Rambus after the acquisition of Unity Semi.

    https://www.eetimes.com/document.asp?doc_id=132552...

    But I'm not sure if it's the exact same tech as Intel's.
  • iter - Thursday, March 8, 2018 - link

    Check the PDFs, what little intel posted about it is all there. They may have licensed the tech from rambus. It is not like rambus does anything other than patent milking anyway.
  • iter - Thursday, March 8, 2018 - link

    "Coincidentally", rambus bought unity in 2012, exactly when intel allegedly started developing...
  • MDD1963 - Friday, March 23, 2018 - link

    Not everyone remembers a few sticks of RAMBUS RDIMMs for some Pentium 3 boards costing $500-$600 a stick back in '99-'00... and being outperformed by DDR. Nice job, RAMBUS!
  • tommo1982 - Thursday, March 8, 2018 - link

    Am I reading it right? Was Cross-point memory supposed to be cheaper than NAND?
  • WinterCharm - Thursday, March 8, 2018 - link

    Yes. But I guess we won't see that for a while.

    Latency and power consumption are great... but speed and capacity leave a lot to be desired. When MacBook Pros have NVMe drives capable of 3.2 GB/s (yes, gigabytes) at 2TB capacity... Optane is far behind.

    There are some advantages, but I expect that Intel will need to do a lot more work before these are cheaper, faster, and have higher capacity.
  • Reflex - Friday, March 9, 2018 - link

    That said, latency is what users notice. Max speed is a rarely encountered scenario in most user workloads.
  • iter - Saturday, March 10, 2018 - link

    No human notices microseconds. Delay becomes noticeable at about 10-20 ms, depending on the individual's reflexes, becomes annoying at about 50 ms, and becomes detrimental at 200+ ms.

    10 ms is 10,000 microseconds. Hypetane improves things in the double-digit microsecond range. Humans cannot notice that, not today, not in a million years.
  • patrickjp93 - Saturday, March 10, 2018 - link

    Until you start running databases, multi-tenant VM environments, and large swap files for map-reduce, inference drawing, etc..
  • xenol - Monday, March 12, 2018 - link

    Yeah, let me know when the average consumer uses that.
  • JohnBooty - Monday, March 26, 2018 - link

    Rrrrrrright. So, yes, a human wouldn't perceive... the latency improvement seen on a single IOP on Optane vs. an alternative.

    Bravo. That's about as insightful as pointing out that a human also wouldn't notice the difference in latency between a single operation on a 3GHz CPU vs. a 4GHz CPU. Because microseconds, amiirite?

    Just for your reference: this is 2018. Unless we're traveling back to 1985 and benchmarking the floppy drive on a Commodore 64, we usually measure the performance of modern storage by looking at the aggregate performance of thousands or even millions of operations.
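
    To put rough numbers on the aggregate argument (illustrative figures only, not measured values from this review; assume roughly 80 µs per 4K random read at QD1 for a NAND NVMe drive vs roughly 10 µs for Optane):

```python
# Assumed per-operation QD1 latencies in microseconds (round illustrative numbers)
nand_us = 80     # ballpark 4K random read latency for a NAND-based NVMe SSD
optane_us = 10   # ballpark 4K random read latency for an Optane SSD

for ops in (1, 1_000, 100_000):
    nand_ms = ops * nand_us / 1000
    optane_ms = ops * optane_us / 1000
    print(f"{ops:>7} dependent reads: NAND {nand_ms:8.2f} ms vs Optane {optane_ms:8.2f} ms"
          f" (saved {nand_ms - optane_ms:.2f} ms)")
```

    A single read saves 0.07 ms, which nobody can feel; a chain of a hundred thousand dependent reads saves about seven seconds.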
  • Ewitte12 - Monday, April 30, 2018 - link

    When something is loading it is rarely doing 1 task. Hundreds or thousands of tasks will definitely be noticeable. You would be surprised how much goes on just loading an app...
  • MrSpadge - Friday, March 9, 2018 - link

    No, I can't remember anyone saying that. It was supposed to be an intermediate step between NAND and DRAM.
  • ATC9001 - Friday, March 9, 2018 - link

    It very well may cost less than NAND, but no business in the world is going to sell a product that's faster for less than another one of its products (unless it simply doesn't sell, which is surely not the case for NAND).

    I'm pretty excited overall...sure it's hyped up a lot, but remember the first SSDs? Took a while to figure out the controllers and such.

    It's great to see the low latency...with DRAM prices the way they are *MAYBE* some enterprise customers will look to optane.

    As a gamer, I'm more concerned with sequential performance and higher capacities, but for prosumers with high random workloads this might make sense.
  • sharath.naik - Sunday, March 11, 2018 - link

    More importantly, wasn't XPoint supposed to have endurance many times that of SLC NAND? The spec here is not much better than MLC, let alone SLC.
  • chrnochime - Thursday, March 8, 2018 - link

    And FWIW, I value endurance (write cycles) far more than speed. So if it's as fast as SLC I'm okay with that.
  • iter - Thursday, March 8, 2018 - link

    Performance means the actual media performance, and that includes P/E latency and endurance. slc has great endurance, which I myself would be more than willing to pay for; alas, there is no such option, as no slc products have been made in the last few years.
  • MrSpadge - Friday, March 9, 2018 - link

    Did you ever have an SSD run out of write cycles? I've personally only witnessed one such case (an old 60 GB drive from 2010, old controller, almost full all the time), but numerous other SSD deaths (controller, Sandforce or whatever).
  • name99 - Friday, March 9, 2018 - link

    I have an SSD that SMART claims is at 42%. I'm curious to see how this plays out over the next three years or so.

    But yeah, I'd agree with your point. I've had two SSDs so far fail (many fewer than HDs, but of course I've owned many more HDs and for longer) and both those failures were inexplicable randomness (controller? RAM?) but they certainly didn't reflect the SSD running out of write cycles.

    I do have some very old (heavily used) devices that are flash based (iPod nano 3rd gen) and they are "failing" in the expected SSD fashion --- getting slower and slower, and can be goosed back up to speed for another year by giving them a bulk erase. Meaning that it does seem that SSD "wear-out" failure (when everything else is reliable) happens as claimed --- the device gets so slow that at some point you're better off just moving to a new one --- but it takes YEARS to get there, and you get plenty of warning, not unexpected medium failure.
  • MonkeyPaw - Monday, March 12, 2018 - link

    The original Nexus 7 had this problem, I believe. Those things aged very poorly.
  • 80-wattHamster - Monday, March 12, 2018 - link

    Was that the issue? I'd read/heard that Lollipop introduced a change to the cache system that didn't play nicely with Tegra chips.
  • sharath.naik - Sunday, March 11, 2018 - link

    The endurance listed here is barely better than MLC; it is nowhere close to even SLC.
  • Reflex - Thursday, March 8, 2018 - link

    https://www.theregister.co.uk/2016/02/01/xpoint_ex...

    I know ddriver can't resist continuing to use 'hypetane' but seriously looking at this article, Optane appears to be a win nearly across the board. In some cases quite significantly. And this is with a product that is constrained in a number of ways. Prices also are starting at a much better place than early SSD's did vs HDD's.

    Really fantastic early results.
  • iter - Thursday, March 8, 2018 - link

    You need to lay off whatever you are abusing.

    Fantastic results? None of the people who can actually benefit from its few strong points are rushing to buy. And for everyone else intel is desperately flogging it at, it is a pointless waste of money.

    Due to its failure to deliver on expectations and promises, it is doubtful intel will any time soon allocate the manufacturing capacity it would require to make it competitive with nand, especially given its awful density. At this time intel is merely trying to make up for the money they put into making it. Nobody denies the strong low queue depth reads, but that ain't enough to make it into a money maker. Especially not when a more performant alternative has been available since before intel announced xpoint.
  • Alexvrb - Thursday, March 8, 2018 - link

    Most people ignore or gloss over the strong low QD results, actually. Which is ironic given that most of the people crapping all over them for having the "same" performance (read: bars in extreme benchmarks) would likely benefit from improved performance at low QD.

    With that being said capacity and price are terrible. They'll never make any significant inroads against NAND until they can quadruple their current best capacity.
  • Reflex - Thursday, March 8, 2018 - link

    Alex - I'm sure they are aware of that. I just remember how consumer NAND drives launched, the price/perf was far worse than this compared to HDD's, and those drives still lost in some types of performance (random read/write for instance) despite the high prices. For a new tech, being less than 3x while providing across the board better characteristics is pretty promising.
  • Calin - Friday, March 9, 2018 - link

    SSDs never had a random R/W problem compared to magnetic disks, not even if you compared them by price to RAIDs and/or SCSI server drives. What problems they might have had at the beginning were in sequential read (and especially write) speed. Current sequential write speeds for hard drives are limited by the rpm of the drive, and they reach around 150MB/s for a 7200 rpm 1TB desktop drive. Meanwhile, the Samsung 480 EVO SSD at 120GB (a good second or third generation SSD) reaches some 170MB/s sequential write.
    Where magnetic rotational disk drives suffer a 100-times reduction in performance is random write, while SSDs hardly care. This is due to the awful access time of hard drives (move the heads and wait for the rotation of the disks to bring the data below the read/write heads) - that's 5-10 milliseconds of wait time for each new operation.
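
    A rough sketch of that arithmetic (assumed round figures, not measurements: ~8 ms average seek plus rotational latency per random 4 KiB operation on a 7200 rpm desktop drive, ~150 MB/s sequential, ~0.1 ms per random operation on an early SATA SSD):

```python
transfer_kib = 4          # size of each random operation, KiB

# Assumed round figures for illustration only
hdd_access_ms = 8.0       # seek + rotational latency, 7200 rpm desktop HDD
hdd_seq_mb_s = 150.0      # sequential throughput of the same HDD
ssd_access_ms = 0.1       # per-operation latency of an early SATA SSD

hdd_iops = 1000 / hdd_access_ms                   # ~125 random IOPS
hdd_random_mb_s = hdd_iops * transfer_kib / 1024  # ~0.5 MB/s
ssd_iops = 1000 / ssd_access_ms                   # ~10,000 random IOPS
ssd_random_mb_s = ssd_iops * transfer_kib / 1024  # ~39 MB/s

print(f"HDD random: {hdd_iops:.0f} IOPS = {hdd_random_mb_s:.2f} MB/s "
      f"({hdd_seq_mb_s / hdd_random_mb_s:.0f}x below its sequential speed)")
print(f"SSD random: {ssd_iops:.0f} IOPS = {ssd_random_mb_s:.1f} MB/s")
```

    The access time alone drops the hard drive to a fraction of a megabyte per second on random 4 KiB operations, a couple of orders of magnitude below its sequential speed, while even an early SSD stays at tens of megabytes per second at queue depth 1.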
  • Alexvrb - Saturday, March 10, 2018 - link

    Calin, you are obviously too young to remember some of the early "affordable" consumer NAND SSDs. Hammer them a bit and they stalled... producing worse results than a lot of fast HDDs... especially in random writes. Sequential speeds were never a major issue that I can recall.
  • The_Assimilator - Friday, March 9, 2018 - link

    Trying to equate a NAND-to-Optane transition to the mechanical-HDD-to-SSD transition is laughable.
  • wumpus - Friday, March 9, 2018 - link

    The moment pseudo-SLC in TLC showed up, Optane was pretty much dead in the SSD market. They would presumably compete with SLC (does anybody still make it?), but TLC is the coffin nail in consumer markets.

    From the moment the 3d-xpoint hypetrane started, it was clear that it would try to wedge itself into the memory hierarchy, presumably between flash and DRAM, with hopes of replacing flash.

    Flash isn't going anywhere, and 3d-xpoint hasn't shown the endurance needed for a fast-paging DRAM replacement. It certainly wouldn't replace *all* DRAM, but anyone who's seen a 4GB machine actually function (slow, but they do work) knows that nearly all that expensive (hopefully DDR4 will fall back to Earth) DRAM could be replaced by something sufficiently fast, but neither flash nor 3d-xpoint is quite there.

    To compound the problems, Intel decided that "Optane in a DDR4 slot" would be strictly proprietary. So there are marketing/political problems trying to get manufacturers to support it as well as technical issues to make the stuff.
  • name99 - Friday, March 9, 2018 - link

    Consumer NAND launched in an environment where it had SOME spaces where it was optimal, and so had the chance to grow. It started in phones and DAPs, then grew to ultra-laptops, and finally the desktop. Point is --- there were niches that could pay for on-going improvement.

    Optane is different because there is NO obvious niche that justifies continuing to pump money into it. The niche that was SUPPOSED to justify it (NV-DIMMs) is STILL MIA years after it was promised...
  • iwod - Friday, March 9, 2018 - link

    I am all for super fast QD1 results. But so far no application seems to benefit from it. At least not according to test results. I wonder whether we are testing it wrong, looking at the wrong thing, or the benefit of QD1 is overestimated and the bottleneck is somewhere else.

    And NAND continues to get bigger, better and faster. We may be looking at below-$100 250GB SSDs this year.
  • iter - Friday, March 9, 2018 - link

    Exactly. It is hilarious how them fanboys keep claiming that we overlook the advantages, when I explicitly state them almost every time.

    Workloads where those advantages can translate into a tangible improvement in real world performance are few and far between.

    When your bottleneck is a human being interacting via input devices, discrete savings of several dozen microseconds are simply not perceivable.

    Even cumulative savings are in fact not, because most of the time that data also has to be processed by the cpu, which is why, synthetics aside, real world applications show minuscule gains going from a decent ssd to a crazy fast nvme device.
  • sor - Friday, March 9, 2018 - link

    Probably has something to do with your name calling and “it keeps getting worse and worse” when that objectively isn’t true. You come off as having an axe to grind.

    It is not true that this is worse and worse. The power improvements shown here are quite impressive. Low QD performance is still better than NAND by an order of magnitude, and looks to have gotten a roughly 20% improvement. Sequential read now even beats NAND.

    You and others are falling over yourselves to crap on it for some strange reason, and clearly are ignoring the upsides. It’s just a product.
  • iter - Friday, March 9, 2018 - link

    "when that objectively isn’t true"

    It absolutely is. It is slower than the 900p. They improved power a bit - big whoop, especially considering it came at the cost of gutting the interface by 50%.

    118 GB? I bet enthusiasts all over the planet are drooling about that crazy capacity. Not to mention the smaller model...

    Nobody denies the strong points, it is just that they are way too little to make this a good product.

    Instead of getting bigger and faster it gets smaller and slower.

    And somehow the price per GB increases.

    Truly impressive.
  • nevcairiel - Friday, March 9, 2018 - link

    If you want to go down that road, at current consumer SSD speeds (say, a Samsung 960 Pro), I doubt any normal user would even notice if the performance suddenly doubled (or halved, for that matter).

    Does that mean we should not innovate? Perhaps consumer workloads aren't the main goal, but if you have the hardware, why not try to make a consumer product anyway?
  • MrSpadge - Friday, March 9, 2018 - link

    With decently fast SATA SSDs the bottleneck is almost entirely the CPU already, unless you've got a purely I/O load.
  • Reflex - Thursday, March 8, 2018 - link

    I'm sorry, you know how many are rushing out to buy a product that isn't available yet? I don't personally expect large volumes, since at the current capacities it isn't in the sweet spot for consumers in price/perf, but it's offering solid performance that bests NAND in almost every consumer scenario, in some cases significantly, while consuming less overall power. That's a win. As production ramps, costs will come down.

    And only the literacy challenged have chosen to read Intel's claims about 3D XPoint's potential as claims about its first generation products. Right now it's constrained by a number of things beyond the memory itself, such as PCIe bus speed.
  • iter - Friday, March 9, 2018 - link

    And I am sorry you don't possess common sense.

    Of course I am not talking about how the 800p sells; only a complete idiot could take that from my comments. I am talking about the non-existent demand for it in the enterprise, which the introduction of the 800p is another testament to.

    If intel was able to sell it at high enterprise margins they wouldn't be forcing it into the consumer world where it is pointless. Intel is not keen on losing money, and as overpriced as it is even as a consumer product, it is tremendously cheaper than what they could ask for it in the enterprise market. Instead they are marketing it to frigging gamers... which is 100% laughable.

    And of course, I don't expect anyone save for silly fanboys with rich mommies to buy it in the consumer world, because it can offer absolutely nothing for the price premium it comes at. No intelligent human being would pick a 118 gb 800p over a decent 256 gb nvme or 512 gb ssd drive. None whatsoever.

    Constrained by PCIe? It doesn't even come close to that. Neither in terms of bandwidth nor latency. But believe whatever it takes to reinforce your fake worldview.
  • Luckz - Wednesday, April 25, 2018 - link

    Is there actually a point to 256 gb NVMe though? I mean, performance of the small 960 Evo sucks balls compared to the bigger ones. Why go NVMe when you can have a nice SATA drive with much more capacity and not even much worse perf?
  • Adramtech - Saturday, March 10, 2018 - link

    iter, Lehi fabs are 100% dedicated to Xpoint and no longer NAND. They wouldn't commit billions to that if they didn't have a path outlined for improvement and scaling.
  • patrickjp93 - Saturday, March 10, 2018 - link

    Are you kidding? AWS Memcached, Lambda, and DynamoDB have their caching layers and indexing stored in Optane.
  • eddman - Saturday, March 10, 2018 - link

    We just found ddriver's long lost twin brother.
  • Alexvrb - Saturday, March 10, 2018 - link

    Cloning tech gone wrong.
  • Reflex - Saturday, March 10, 2018 - link

    It's not his twin, it's just his new account. This place had a much better community during the too brief time he was gone.
  • patrickjp93 - Saturday, March 10, 2018 - link

    3DXP is being made on the 90nm node right now. What did you expect? It's a vastly cheaper research node for something so complex.

    And the performance is stunningly better than everything else Samsung has EXCEPT for high Queue Depth sequential performance. All real world testing shows the 960Pro getting smashed.
  • Reflex - Saturday, March 10, 2018 - link

    Didn't you get the memo that because this first gen product isn't as good as the theoretical max performance discussed three years ago, it's all a big fail? /sarcasm
  • eddman - Monday, March 12, 2018 - link

    90? It is stated as 20nm in that table up there.
  • Nottheface - Monday, March 12, 2018 - link

    I was told these are not related in a previous article's posts: https://www.anandtech.com/comments/12136/the-intel...
  • Ewitte12 - Monday, April 30, 2018 - link

    They had difficulty keeping the enterprise drives in stock.

    The 2X quote was for RAM. Low queue depth obliterates NAND. Most other speeds are on par with NAND (with sustained a bit behind), but this is direct access to the storage. Most NAND drives have sophisticated RAM caching; they can still be writing well after the progress bar disappears off your screen.

    The biggest issue is pricing. Optane has high early adopter fees (which usually come with a few extra bugs). Also, anything under the 900p is kinda pointless. 3.0 x2 and low capacities??? Not worth it.
  • Gothmoth - Friday, March 9, 2018 - link

    intel hyped this like crazy and after reading the paper I was hyped too.

    but this seems like just another way for intel to push its stock market value with ridiculous claims.
  • hescominsoon - Friday, March 9, 2018 - link

    Semiaccurate had 3d x-point pegged from the beginning:

    https://www.semiaccurate.com/?s=point
  • Ashinjuka - Saturday, March 10, 2018 - link

    Optanic.
  • DanNeely - Thursday, March 8, 2018 - link

    Could we see results from Optane as cache + budget SSD and Optane as cache + high end SSD?

    I'm not sure it'd be worthwhile with a fast SSD since it only beats them in a subset of benches, but it looks capable of giving a decent boost to budget flash. Cost effectiveness vs just buying better flash'd be the harder question.
  • iter - Thursday, March 8, 2018 - link

    Cache only makes sense for an HDD. It would make no difference combining it with an SSD. Not in terms of real world application performance anyway.

    Spending on 118 gb of optane is pointless when you can get a decent 512 gb ssd for the same money. Over 200% higher capacity at 99% of the performance. It is a no brainer. Intel will have to resort to bribing OEMs once again if they are to score any design wins.
  • patrickjp93 - Saturday, March 10, 2018 - link

    Uh, think again on big data, where the indices for the databases you're running are way too big to fit in memory. AWS is just one cloud provider making extensive use of Optane, especially in DynamoDB, RDS, Memcached, and Lambda, where multi-tenant container environments definitely benefit from rapid spinup thanks to the much lower latency of 3DXP.
  • Billy Tallis - Thursday, March 8, 2018 - link

    All of our usual SSD tests are for the drive acting as a secondary drive, but Intel's Optane-specific cache software only supports the boot volume, so it's rather awkward to test.
  • 0ldman79 - Thursday, March 8, 2018 - link

    That is a pretty significant limitation.

    With SSDs, a lot of us have a small to mid-sized SSD as a boot drive and practically everything else resides on a spinner.

    If Optane can't cache the secondary drive then it is of even less use to me than the Kaby Lake-and-above limitation already makes it. That means that even if I built a Kaby Lake or Coffee Lake system I still won't get any benefit on anything aside from the OS. My games are all installed on a mechanical drive.
  • Lolimaster - Friday, March 9, 2018 - link

    Crucial MX500 2TB $499

    If you're an avid GTA V player, the 118GB should be a nice thing for the game install, plus your pagefile and the install/profile/cache of firefox/chrome.
  • TheWereCat - Friday, March 9, 2018 - link

    Micron 1100 2TB $370
    https://www.amazon.com/gp/aw/s/ref=is_s_ss_i_1_9?k...
  • Reflex - Friday, March 9, 2018 - link

    Yup, grabbed one of those a few weeks ago, it's a great drive for that price.
  • hescominsoon - Friday, March 9, 2018 - link

    I run only a Micron 1TB SSD in my machine for everything. I have a couple of friends who are into video editing and they use a spinning disk for temporary storage... but that's about it. :)
  • name99 - Friday, March 9, 2018 - link

    Any decent SSD that wants to boost itself with a cache can ALREADY do so by using some of the MLC or TLC flash as SLC. And thereby run faster than Optane. And without requiring a separate controller and a separate Optane die.

    Optane is not buying you anything in the sort of market you describe.
  • Gunbuster - Thursday, March 8, 2018 - link

    Capacity is still useless for any power user who would be shopping this.
  • iter - Thursday, March 8, 2018 - link

    It will be useful for pagefile spillover in case you have workloads that require more than the 32 or 64 gb of ram that most high end desktops come with.

    It will still massacre performance if you go paging, but it will be significantly better than nand, god forbid hdd.
  • jospoortvliet - Friday, March 9, 2018 - link

    That's an interesting use case, the first I read that seems reasonably useful... But it would still need more performance to really make it worth it, and even then only when you don't care about costs at all and your platform simply doesn't support more ram. I mean, as long as the system can handle another ram dimm, you'd go for that even with the insane prices atm...
  • iter - Saturday, March 10, 2018 - link

    The thing is many systems can't. 64 is currently a limit for high end, 128 for HEDT.

    It could get better by raiding more drives, but .... that's not an option on high end platforms due to the low PCIe lane count. You will have to give up on running a GPU if you want to snap in 4 of those drives.
  • beginner99 - Friday, March 9, 2018 - link

    Exactly. Anything below 240GB is not a workable solution nowadays. I remember my first Intel G2 80GB: constant micro-managing of where to put files and which app gets to be on the SSD and which not. Or for my parents, back then I got them a 64 gb drive. When the Win 10 update came it was not possible to update, because updating Windows 7 to 10 requires more than 64gb.
  • Calin - Friday, March 9, 2018 - link

    I do use a 120GB SSD on my desktop, and it works well enough with a 2TB hard drive. I even use a 90% partition, as early SSDs had performance problems when close to full.
  • sharath.naik - Thursday, March 8, 2018 - link

    Was Rapid Mode tried on the Samsung drives? Not sure that, with a large enough RAM, the difference in random performance would matter that much.
  • Billy Tallis - Thursday, March 8, 2018 - link

    Half the test suite is run on Linux, so Rapid Mode isn't an option. And in general, I don't approve of third-party software that second-guesses the decisions made by core parts of the OS like the virtual memory system—especially not when those tools put user data at risk without being absolutely clear about what they're really doing.
  • eddieobscurant - Friday, March 9, 2018 - link

    Billy, do you have any news on Micron's QuantX?
  • Dragonstongue - Thursday, March 8, 2018 - link

    Intel and Micron (IM) joint venture; Intel "branded" it as Optane, but either way it is 3D XPoint. As far as I understood, Micron decided to "drop it", so Intel is going about it all on their own. It was Unity Semiconductors who was bought out by Rambus in 2012, and that is likely not a good thing either, as they (RB) seem more inclined to sue people than to make a tangible product everyone wants (IMO).

    the above 3d x, optane, whatever, seems like another thing that "on paper" would be a decent thing, but the price factor puts it into "there are better options available" territory, options that offer similar performance or at the very least substantially better $/gb value.

    I think that is what Micron was seeing: no real way to get the "value" out of it without charging too high a price to make it worthwhile for them and the consumer. Intel is their own fish and they always (again IMO) charge a substantial price for a "do we really need this" type product (like Nvidia), cutting corners or cutting down performance from what could have been while still wanting top dollar, then "next year" come out with a more full fat version (that should have been the previous year's) and want more $ for the "upgrade" - planned obsolescence/upgrade path.

    for a loose example, Samsung 950 EVO M.2 250gb (pro faster but ofc more pricey)
    I see available for ~$160 CAD
    read/write 3200/1900
    QD1 Thread
    Random Read: 14,000 IOPS
    Random Write: 50,000 IOPS
    QD32 Thread
    Random Read: 380,000 IOPS
    Random Write: 360,000 IOPS

    their "power draw" and latency do not seem to be praiseworth either, so it still leads me to the same question "why bother"...also, I really wish M.2 drives were maybe a toned down speed version so it could be "less expensive" here I thought that by going smaller and smaller node and going from SLC to MLC to 3d etc price would drop and drop while performance would go up and up, seems that the only real thing that has changed is the less on the "board" the further they crank the speed give smaller capacity and increase the price *facepalm*
  • Lolimaster - Friday, March 9, 2018 - link

    10x less latency
    15x faster in QD1r
    4X faster in QD1w
  • Adramtech - Saturday, March 10, 2018 - link

    Micron has no plans to drop QuantX and are providing an update at their May tech conference.
  • shabby - Thursday, March 8, 2018 - link

    Leave it to intel to artificially cripple a product on purpose, who does this?
  • boeush - Thursday, March 8, 2018 - link

    Seems to me, if you really want super-fast, low-latency, high-endurance random read/write at low QD and capacities ~128GB for a lot of $$$, then just get a bunch of RAM and a UPS (to prevent data loss in case of power failure). No SSD technology will ever beat good ol' RAM in terms of performance. In this case, for mass storage you just need fast sequential reads and writes so you can quickly map your filesystem to/from RAM on system startup/shutdown, respectively...

    In light of which, until Intel comes out with their next-gen Optane at 512 GB+ capacities in an M.2 package, the current product feels like a solution in search of a problem.
  • boeush - Thursday, March 8, 2018 - link

    P.S. please pardon the "autocorrect"-induced typos... (in the year 2018, still wishing Anandtech would find a way to let us edit our posts...)
  • Calin - Friday, March 9, 2018 - link

    Unfortunately, if you already have a computer supporting only 32 GB of RAM, the $200 for an Intel 800p is peanuts compared to what you would have to pay for a system that supports more than 128GB of RAM - both in the cost of mainboard, CPU and especially RAM. I'd venture a guess of a $5,000 entry price (you might pay less for refurbished). It might very possibly be worth it, but it's still a $5k against a $200 investment.
  • The_Assimilator - Friday, March 9, 2018 - link

    Entry-level Intel Xeon + 1U motherboard with 8x DIMM slots = ~$600
    8x 32GB modules for 256GB RAM total = ~$3,200

    So not quite $5k, but still a lot more than $200 :)
  • mkaibear - Friday, March 9, 2018 - link

    ...plus a new case, plus a new PSU, plus a UPS...
  • boeush - Saturday, March 10, 2018 - link

    Yes, I did mention a lot of $$$...

    But that's the point: how badly do you really need the extreme random access performance to begin with - above and beyond what a good 1 TB SSD can deliver? Will you even be able to detect the difference? Most workloads are not of such a 'pure' synthetic-like nature, and any decent self-respecting OS will anyway cache your 'hot' files in RAM automatically for you (assuming you have sufficient RAM).

    So really, to benefit from such Optane drives (at a cost 4x the equivalent-sized NAND SSD) you'd need to have a very exotic corner-case of a workload - and if you're really into such super-exotic special cases, then likely for you performance trumps cost (and you aren't going to worry so much about +/- a few $thousand here or there...)
  • jjj - Friday, March 9, 2018 - link

    Yeah, not impressive at all. They can't reach mainstream price points at higher capacity, and that leads to less than stellar perf and a very limiting capacity.
    To some extent, the conversation should also include investing more in DRAM when building a system, but that's hard to quantify.
    Intel/Micron need the second gen and decent yields; it would be nice if that arrives next year. Just saying, it's not like they are providing much info on their plans. Gen 2 was initially scheduled for early 2017, but nobody is talking about roadmaps anymore.
  • jjj - Friday, March 9, 2018 - link

    Just to add something, NAND prices are coming down some and perf per $ is getting better as more folks join the higher perf party. It's not gonna be trivial to compete with NAND in consumer.
  • CheapSushi - Friday, March 9, 2018 - link

    Hardware "enthusiasts" have sure become jaded, cynical, grumpy assholes.
  • Reflex - Friday, March 9, 2018 - link

    No shit. I think people are confusing their anger at Intel with whether or not this is a good tech advancement. I am wondering if they even are looking at the article I saw. The vast majority of the charts showed Optane products in the lead, power consumption lower, latency lower, etc. Only a few places showed it behind, most around scenarios that are not typical.

    It is fair to point out it's not worth 3x the cost. I'm building a system now and am not going with Optane at this price. It is fair to point out that the capacity is not there yet. That is another part of why I'm not using it. Those are valid criticisms. They are also things that are likely to be remedied very soon.

    What is not fair is to bash it incessantly for reasons imagined in their own minds (OMG IT DOES NOT HIT THE NUMBERS IN A PAPER ABOUT THE POTENTIAL IN ITS FIRST GEN PRODUCTS!), or ignore the fact that we finally have a potentially great storage alternative to NAND which has a number of limitations we have run up against. This is a great thing.
  • Adramtech - Saturday, March 10, 2018 - link

    Agreed, Reflex. In 2 years Optane Gen 2 is likely going to look a lot better and impress. Criticizing Gen 1 tech is ridiculous.
  • Reflex - Saturday, March 10, 2018 - link

    I also think people forget how crappy & expensive gen 1 and gen 2 SSDs were.
  • Drazick - Friday, March 9, 2018 - link

    We really need these in U.2 / SATA Express form.
    Desktop users shouldn't use M.2 with all its thermal limitations.
  • jabber - Friday, March 9, 2018 - link

    Whichever connector you use or whatever the thermals, once you go above 600MBps the real world performance difference is very hard to tell in most cases. We just need SATA4 and we can dump all these U2/SATA Express sockets. M.2 for compactness and SATA4 for everything else non Enterprise. Done.
  • Reflex - Friday, March 9, 2018 - link

    U2 essentially is next gen SATA. There is no SATA4 on the way. SATA is at this point an 18 year old specification ripe for retirement. There is also nothing wrong with M.2 even in desktops. Heat spreaders aren't a big deal in that scenario. All that's inside a SATA drive is the same board you'd see in M.2 form factor more or less.
  • leexgx - Saturday, March 10, 2018 - link

    Apart from that, you're limited to 0-2 slots per board (most come with 6 SATA ports).

    I agree that a newer SATA revision supporting NVMe would be nice, but U.2 would be nice too, if anyone would adopt it, make the ports standard, and ship U.2 SSDs.
  • jabber - Friday, March 9, 2018 - link

    I am amazed that no one has decided to just do the logical thing and slap a 64GB Flash cache in a 4TB+ HDD and be done with it. One unit and done.
  • iter - Friday, March 9, 2018 - link

    They have; Seagate has a hybrid drive, not all that great really.

    The reason is that caching algorithms suck. They are usually FIFO - first in, first out - and don't take into account actual usage patterns. Meaning you get good performance only if your work is confined to a data set that doesn't exceed the cache. If you exceed it, it starts bringing in garbage, wearing down the flash for nothing. Go watch a movie that you are only gonna watch once - it will cache that, because you accessed it. And now you have gigabytes of pointless writes to the cache, displacing data that actually made sense to cache.

    Which is why I personally prefer to have separate drives rather than a cache. Because I know what can benefit from flash and what makes no sense there. Automatic tiering is pathetic, even in crazy expensive enterprise software.
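
    A minimal sketch of that difference (a toy model with a hypothetical admission policy, not any vendor's actual algorithm): a FIFO-style cache admits every block it sees, so a one-off movie read pollutes it, while a frequency-gated cache only admits blocks after repeated reads:

```python
from collections import Counter, OrderedDict

class FIFOCache:
    """Admit every block on first access; evict the oldest entry when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()
        self.flash_writes = 0            # blocks copied into the cache (wear)

    def access(self, block):
        if block in self.blocks:
            return                       # cache hit, nothing to do
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)   # evict oldest admitted block
        self.blocks[block] = True
        self.flash_writes += 1

class FrequencyGatedCache(FIFOCache):
    """Only admit a block after it has been read `threshold` times (toy policy)."""
    def __init__(self, capacity, threshold=3):
        super().__init__(capacity)
        self.threshold = threshold
        self.reads = Counter()

    def access(self, block):
        if block in self.blocks:
            return
        self.reads[block] += 1
        if self.reads[block] >= self.threshold:
            super().access(block)        # admit only once it looks "hot"

for cache in (FIFOCache(100), FrequencyGatedCache(100)):
    hot = [f"app-{i}" for i in range(50)]        # OS/app blocks read over and over
    movie = [f"movie-{i}" for i in range(500)]   # watched once, never read again
    for _ in range(5):
        for b in hot:
            cache.access(b)
    for b in movie:
        cache.access(b)
    kept = sum(b in cache.blocks for b in hot)
    print(f"{type(cache).__name__}: {cache.flash_writes} blocks written to cache, "
          f"{kept}/50 hot blocks still cached after the movie")
```

    Real controllers are more elaborate than either toy, but that is the gap: the FIFO variant copies the entire movie into flash and evicts the hot set, while the gated one does neither.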
  • jabber - Friday, March 9, 2018 - link

    Yeah I was using SSHD drives when they first came out but 8GB of flash doesn't really cut it. I'm sure after all this time 64GB costs the same as 8GB did back then (plus it would be space enough for several apps and data sets to be retained) and the algorithms will have improved. If Intel thinks caches for HDDs have legs then why not just combine them in one simple package?
  • wumpus - Friday, March 9, 2018 - link

    Presumably, there's no market. People who buy spinning rust are either buying capacity (for media, and using SSD for the rest) or cheaping out and not buying SSDs.

    What surprises me is that drives still include 64MB of DRAM, you would think that companies who bothered to make these drives would have switched to TLC (and pseudo-SLC) for their buffer/caches (writing on power off must be a pain). Good luck finding someone who would pay for the difference.

    Intel managed to shove this tech into the chipsets (presumably a software driver that looked for the hardware flag, similar to RAID) in 2011-2012, but apparently dropped that soon afterward. Too bad; reserving 64GB of flash to cache a hard drive (no idea if you could do this with a RAID array) sounds like something that is still useful (not that you need the performance, just that the flash is so cheap). Just make sure the cache is set to "write through" [if this kills performance it shouldn't be on rust] to avoid doubling your chances of drive loss. Apparently the support costs weren't worth the bother.
  • leexgx - Saturday, March 10, 2018 - link

    8GB should be plenty for an SSHD, and the current generation has cache eviction protection (which I think is 3rd gen): say an LBA block is read 10 times, it will assume that is something you open often, or it's a system file or a startup item, so 2-3GB of data will not get evicted easily (so windows, office, browsers and other startup items will always be in the nand cache). The rest of the caching is dynamic: if a block has had more than 2-4 reads it gets cached to the nand.

    The current generation SSHDs from Seagate (don't know how others do it) split the cache into 3 sections, so there is easy, harder, and very hard to evict from the read cache. With the first gen SSHDs from Seagate, just defragmenting the drive would end up evicting your normally used stuff, as any 2 reads would be cached right away; that does not happen any more.

    If you expect it to make your games load faster you need to look elsewhere, as they are meant to boost commonly used applications, the OS, and startup programs, while still leaving the space for storage.

    That said, I really dislike HDDs as boot drives; if a 250gb SSD did not cost £55 I would put them in for free.
  • name99 - Friday, March 9, 2018 - link

    Not QUITE true.
    Apple has done it (IMHO very successfully) in part because
    - they understand something of the data patterns and
    - already had tech in the file system to move hot data (hot file system data AND hot files) to the fastest part of the medium and
    - they were willing to include ENOUGH flash (128GB) and fast flash; they didn't cheap out.

    But yeah, the solutions sold by Seagate were not (in my experience) very impressive, especially considering the ridiculous premium Seagate charged for them.

    What you CAN do on Apple systems (and I have done, very successfully, multiple times) is to fuse external SSDs with other drives (either other external or an internal HD) and this behaves just like a native fusion drive, you can even boot off it. This means you can retrofit fusion even to old macs (eg I have a 2007 iMac running a fusion system based on an SSD in an external FW-800 enclosure, fused with the internal 320GB drive).
  • zepi - Friday, March 9, 2018 - link

    Sounds like Apple Fusion drive. Very difficult to do well on drive-level, much easier to do well with some OS support and filesystem level.

    Afaik people have been relatively happy with their Fusion drives, though personally I find them horribly expensive. Then again, that applies to all Apple storage options, they always feel insanely expensive.
  • PeachNCream - Friday, March 9, 2018 - link

    Optane performance is good in some ways and disappointing in others. I'd like to see the technology improve since NAND endurance is a problem that warrants a solution. Maybe Optane isn't that solution.
  • Reflex - Friday, March 9, 2018 - link

    Optane basically is a variation of Phase-Change Memory. It's been around a long time, but Micron/Intel have finally managed to make it in large enough capacities to productize it out of niche markets. There are other contenders for next gen memory & storage, ranging from MRAM (magnetic memory) to ReRAM to racetrack memory (HP has claimed to be on the edge of productizing that for about four years now).

    I am just happy one finally got out there, and it is in pretty good shape for a first gen product. Hoping this gets others to get serious about bringing alternative storage methods to market soon.
  • Lolimaster - Saturday, March 10, 2018 - link

    At least the 860 EVO and Pro improved endurance a lot for consumer.

    600TB 860 EVO 1TB
    1.2PB 860 Pro 1TB
  • leexgx - Sunday, March 11, 2018 - link

    they can easily do 4x that, especially the Pro drive (they were being really conservative before, mainly so it did not affect the sales of their enterprise drives)

    heck, the 840 Pro did 2PB before it died suddenly (and it did all that with 0 read errors)
  • Araemo - Friday, March 9, 2018 - link

    Can we get the consistency scatter plots for this drive? Those are an awesome tool to gauge the real world 'feel' of the drive.
  • Billy Tallis - Friday, March 9, 2018 - link

    They're an awesome tool to exaggerate the impact of garbage collection pauses on flash-based SSDs. Real-world usage doesn't involve constant writes to a full drive. Those random write consistency graphs often show interesting things about how drives handle GC, but they're a horrible way of ranking real-world performance of SSDs.
  • Zinabas - Saturday, March 10, 2018 - link

    As a thought, the best case to use these in would be an AMD Ryzen system with FuzeDrive, the new software that manages all the drives as one volume. The small capacity would be auto-managed by the software and would be swapped to fit whatever you're playing at the time.
  • emvonline - Monday, March 12, 2018 - link

    So there doesn't seem to be a clear difference in real world applications. It's faster with lower latency, but that does not always show up. Could you clearly pick the optane drive vs the samsung 960 in a blind test every time, running games and office apps?
  • AnnonymousCoward - Wednesday, March 14, 2018 - link

    No. My understanding is that most load times are CPU bound, and there's a negligible difference from most 500MB/s SATA III drives vs the Samsung 950/960 vs Optane. That makes it completely pointless for almost all users.
  • emvonline - Friday, March 16, 2018 - link

    So any reasonable calculation says the chip can be cycled 7-10k times (mlc nand is spec'd at 10k). And the total tbw is less than most of the competition's 960 pro ssds. Is this true? So it's around enterprise mlc in endurance???
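
    For what it's worth, the arithmetic behind that kind of estimate is just the rated TBW divided by capacity, scaled by whatever write amplification you assume; a quick sketch with made-up placeholder inputs (not the 800p's actual rating):

```python
def implied_pe_cycles(tbw, capacity_gb, write_amplification=1.0):
    """Media program/erase cycles implied by a host-writes TBW rating."""
    return tbw * 1000 * write_amplification / capacity_gb

# Hypothetical inputs for illustration only -- substitute the rated figures from the spec table
for tbw, cap in [(365, 118), (1000, 118)]:
    print(f"{cap} GB rated for {tbw} TBW -> ~{implied_pe_cycles(tbw, cap):,.0f} cycles per cell")
```

    Whether that lands near MLC or SLC territory depends entirely on the TBW figure you plug in from the spec sheet.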
  • mkozakewich - Sunday, March 18, 2018 - link

    Why is everyone so bad at figuring out when to use things? This would be great in any environment where you're prioritizing low-queue-depth transfers. Obviously it's not going to replace your computer's SSD.
  • MDD1963 - Friday, March 23, 2018 - link

    When these things cost less than and/or exceed the performance of the 960 EVO, perhaps then they will begin to sell....
  • DocNo - Thursday, April 12, 2018 - link

    Use this as an L2 cache with PrimoCache and prepare to be amazed. I have the 64GB Optane paired with PrimoCache and the performance difference is notable - even with my primary drive being a Samsung M.2 Pro series SSD. If I wasn't so happy with my current setup I'd be all over this new drive as an L2 cache. And unlike Intel's caching software for Optane, PrimoCache is trivial to install with no special requirements for BIOS or partitioning.

    http://www.romexsoftware.com/en-us/primo-cache/ind...

    Primocache is also the most inexpensive way I have found to accelerate Windows server too. I'm a huge fan!
  • Chongsboy - Monday, August 27, 2018 - link

    Not sure where you guys work, or what it is you have against Optane, but under server conditions, a new mb design would be wonderful for servers, i.e. cloud servers. Where workers cost a fricken arm and a leg, hardware cost is not that much of an issue, especially with the special discounts Intel will give to big players. I feel that most people commenting negatively here don't actually run real life servers dealing with thousands, or maybe even hundreds, of requests per minute. Reliability and speed trump price for these servers; in case you don't realize it, AWS and all the cloud providers are ripping people off, and that's why these crooked companies are earning billions. Anyway, I deal with these cloud systems in my work. I for one expect all large companies to move to Optane because it's just that fast and reliable: i.e. facebook, aws, azure, what not. Dunno why people are going nuts... we got Intel haters galore here...
