54 Comments

  • 420baller - Tuesday, June 11, 2013 - link

    First! Reply
  • kenour - Tuesday, June 11, 2013 - link

    Fir... DAMN IT! Reply
  • DigitalFreak - Tuesday, June 11, 2013 - link

    Retard! Reply
  • UltraTech79 - Tuesday, June 11, 2013 - link

    What are you, fucking 12? Get the fuck out of here. Reply
  • Makaveli - Tuesday, June 11, 2013 - link

    I agree with UltraTech.

    420 No real girl baller its past your bed time GTFO.
    Reply
  • Spunjji - Wednesday, June 12, 2013 - link

    Fail! Reply
  • BMNify - Tuesday, June 11, 2013 - link

    F... bummer! Reply
  • texasti89 - Tuesday, June 11, 2013 - link

    Great work Anand, indeed these last two weeks have been really hectic for tech sites. Great new techs have been announced. I'm happy for all the technological evolution that has taken place in the last 4 years, especially on the flash storage side. I'm only hoping that the price/GB for these new enterprise SSDs will soon drop below $1/GB. Reply
  • ImSpartacus - Tuesday, June 11, 2013 - link

    Yes, I'm impressed. Even this brief look at the new Intel drive is very helpful. I'll definitely be back for the rest of it. Reply
  • petertoth - Tuesday, June 11, 2013 - link

    The write endurance seems to be pretty low though! 450,000GB/800GB = 562 cycles. Others manage something like 2,000-3,000! Reply
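The endurance arithmetic above can be sketched as a quick calculation. This is a rough floor implied by the rating, not a measurement of actual NAND endurance:

```python
# Back-of-the-envelope P/E cycle estimate from a rated endurance figure.
# Ignores write amplification and drive maintenance, so it only reflects
# the guaranteed total-bytes-written rating, not real NAND wear.
def rated_cycles(endurance_written_gb, capacity_gb):
    """Full-drive write cycles implied by a total-bytes-written rating."""
    return endurance_written_gb / capacity_gb

# 450TB rated endurance on the 800GB model:
print(rated_cycles(450_000, 800))  # 562.5
```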
  • Minion4Hire - Tuesday, June 11, 2013 - link

    I believe that's just the writes they guarantee the drive for. There's write amplification and maintenance to consider there as well. Reply
  • ShieTar - Wednesday, June 12, 2013 - link

    Well, they have to keep the S3700 useful enough to sell both. So they tailor the specs a bit in order to push customers into buying the "right" drive. Reply
  • ShieTar - Wednesday, June 12, 2013 - link

    Then again, if this is guaranteed for the whole range, it's an impressive number for the small 80GB drive. Reply
  • pesos - Tuesday, June 11, 2013 - link

    How about performance over time in virtualization scenarios? Wondering how well these SSDs hold up when they have nothing on them but virtual hard disks... Reply
  • dealcorn - Tuesday, June 11, 2013 - link

    In Part 2, could you kindly note whether the drive supports DEVSLP? Depending on usage pattern, writing the drive off for mobile use based on idle power requirements alone may be inappropriate. Reply
  • sunbear - Tuesday, June 11, 2013 - link

    Looking at the consistency comparison against the Seagate 600 Pro, it looks like the Intel S3500 is more consistent, but unfortunately it's consistently slower in every metric. I'd rather have a Seagate 600 Pro with inconsistent performance if the minimum performance of that drive is better than the maximum performance of the S3500. Reply
  • beginner99 - Wednesday, June 12, 2013 - link

    I had the same thought. Agreed. Reply
  • hrrmph - Friday, June 14, 2013 - link

    As an individual drive maybe.

    For RAID, the slowest drive in the array will probably control the overall I/O rate. In that case, I don't see an advantage for Seagate over Intel.

    As I see it, the S3500 is a pro-sumer high-end workstation drive for RAID arrays, and a mid-range enterprise class drive. The S3700 is clearly a full-on high-end enterprise class drive.

    We'll have to wait for Part 2 of the article and hope that Anand gives us some comparisons to the consumer 520 series to see if there is any reason to buy an S3500 instead of a 520.

    Intel is being suspiciously quiet about the upcoming 530 series SSDs. I expect that we'll be looking at another low power consumption, high performance, relatively affordable SSD using a non-Intel controller. But, it would be nice if we could have all of that with an Intel controller instead.

    Reply
  • rs2 - Wednesday, June 12, 2013 - link

    What's the deal with the first slide from Intel shown in the conclusion? Specifically, how is a 12x800GB (9.6 TB) deployment comparable to a 500x300GB (150 TB) deployment?

    The only way you can get 500 VMs on such a deployment is if you allocate only ~20 GB per VM. That's anemic. And at that allocation size, the 500x300GB deployment can support over 7500 VMs.

    So...yeah, not seeing how a valid comparison is being made. Intel should be quoting figures based on ~192 SSDs, because that's how many it takes to reach the same storage capacity as the solution it's being compared to.
    Reply
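The arithmetic behind this objection can be sketched quickly. All figures below are the slide numbers as quoted in the comment; the ~20GB-per-VM allocation is inferred, not stated by Intel:

```python
# Capacity math for Intel's 12x800GB SSD vs 500x300GB HDD comparison.
ssd_gb = 12 * 800          # 9,600 GB of SSD in the slide
hdd_gb = 500 * 300         # 150,000 GB of HDD in the slide

per_vm_gb = ssd_gb / 500           # 19.2 GB/VM if 500 VMs fit on the SSDs
vms_on_hdd = hdd_gb / per_vm_gb    # ~7,812 VMs at that same allocation
ssds_for_parity = hdd_gb / 800     # 187.5 drives for raw-capacity parity

print(per_vm_gb, vms_on_hdd, ssds_for_parity)
```

The ~192 figure cited above presumably rounds the 187.5 up to a whole number of drives with some formatting overhead.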
  • flyingpants1 - Wednesday, June 12, 2013 - link

    I noticed the same thing. Reply
  • ShieTar - Wednesday, June 12, 2013 - link

    I think the metric is supposed to show that with mechanical HDDs you need a dedicated drive per VM, but that one of these SSDs can support 12 VMs by itself without slowing down. Having 12 VMs access the same physical HDD can drive access times into not-funny territory.
    The 20GB per VM can be enough if you have a specific kernel and very little software. Think about a "dedicated" web server. Granted, the comparison assumes a quite specific usage scenario, but knowing Intel they probably did go out and retrieve that scenario from an actual commercial user. So it is a valid comparison for somebody, if maybe not the most convincing one to a broad audience.
    Reply
  • Death666Angel - Wednesday, June 12, 2013 - link

    Read the conclusion page. That just refers to the fact that those 2 setups have the same random IO performance. Nothing more, nothing less. Reply
  • FunBunny2 - Wednesday, June 12, 2013 - link

    Well, there's that other vector to consider: if you're enamoured of sequential VSAM type applications, then you'd need all that HDD footprint. OTOH, if you're into 3NF RDBMS, you'd need substantially less. So, SSD reduces footprint and speeds up the access you do. Kind of a win-win. Reply
  • jimhsu - Wednesday, June 12, 2013 - link

    Firstly, the 500 SAS drives are almost certainly short-stroked (otherwise, how do you sustain 200 IOPS, even on 15K drives?). That cuts capacity by at least 2x. Secondly, the large majority of web service/database/enterprise apps are IO-limited, not storage-limited, hence all those TBs are basically worthless if you can't get data in and out fast enough. For certain applications, though (I'm thinking image/video storage for one), obviously you'd use an HDD array. But their comparison metric is valid. Reply
  • rs2 - Wednesday, June 12, 2013 - link

    That doesn't mean it's not also confusing. The primary purpose of a "SW SAN Solution" is storage, not IOPS, so one SAN is not comparable to another SAN unless they both offer the same storage capacity.

    In the specific use-case of virtualization, IOPS are generally more important than storage space. But if what they want to compare across solutions is IOPS performance, then they shouldn't label either column a "SAN".

    So yes, on the one hand it's valid, but on the other it's definitely presented in a confusing way.
    Reply
  • thomas-hrb - Wednesday, June 12, 2013 - link

    It is a typical example of a vendor highlighting the statistics they want you to remember and ignoring the ones they hope are not important. That is the reason why technical people exist. Any fool can read and present excellent arguments for one side or the other. It is the understanding of these parameters, and what they actually mean in a real-world usage scenario, that is the bread and butter of our industry. I don't know if this is typical for most modern SANs. I am using an IBM v7000 (a very popular SAN from IBM). But the v7000 comes with Auto Tiering, which moves "hot blocks" from normal HDD storage to SSD, so having a solid-performing, consistent random-IO SSD is essential to how this type of SAN works. Reply
  • Jaybus - Monday, June 17, 2013 - link

    Well, but look at it another way. You can put 120 SSDs in 20U and have 200 GB per VM using half the rack space and a tenth the power, but with FAR higher performance, and for less cost.

    Also, the ongoing cost of power and rack space is more important. In the same 42U space you can have a 252 SSD SAN (201,600 GB) and still use less than a fifth the power and have far, far greater performance.
    Reply
  • thomas-hrb - Wednesday, June 12, 2013 - link

    They are comparing IOPS. There are a few use cases where having large amounts of storage is the main target (databases, mailbox datastores, etc.), but typically application servers are less than 20GB in size. Even web servers will typically be less than 10GB (*nix based) in size. Ultimately any storage system will have a blend of both technologies in a tiered setup: traditional HDDs to cover the capacity, and somewhere between 5-7% of that capacity as high-performance SSDs to cover the small subset of data blocks that are "hot" and require significantly more IOPS. This new SSD simply gives storage professionals an added level of flexibility in their designs. Reply
  • androticus - Wednesday, June 12, 2013 - link

    Why is "performance consistency" supposed to be so good... when the *lowest* performance number of the Seagate 600 is about the same as the *consistent* number for Intel? The *average* of the Seagate looks much higher. I could see this as an advantage if the competitor's numbers also dropped well below Intel's consistent number, but not in this case. Reply
  • Lepton87 - Wednesday, June 12, 2013 - link

    Compared to the Seagate's random write performance, this is not unlike a GPU that delivers an almost constant 60fps versus a card that delivers 60-500fps, so what's the big deal? Cap the performance at whatever level the Intel SSD delivers and you will have the same consistency, but what's the point? Consistency only matters if the drives deliver comparable performance but one is a roller-coaster and the other is very steady, which is not the case in this comparison. Allocate more spare area to the Seagate, even 25%, and it will mop the floor with this drive, and the price per GB will still be FAR lower. Very unimpressed with this drive, but because it's an Intel product we are talking about on Anandtech, it's lauded and praised like there's no tomorrow. Reply
  • oyabun - Wednesday, June 12, 2013 - link

    I made the same observation: the Seagate has at a minimum the performance of the Intel drive and then skyrockets. Reply
  • cheeselover - Wednesday, June 12, 2013 - link

    Umm... isn't it the other way around? The 600 Pro already has overprovisioning at 28% and the S3500 has it at 9%. Reply
  • btb - Wednesday, June 12, 2013 - link

    No Windows 8 Secure boot support? Reply
  • btb - Wednesday, June 12, 2013 - link

    oops typo, meant Microsoft eDrive support Reply
  • lyeoh - Wednesday, June 12, 2013 - link

    Anand, do you have IOPS/latency-over-time graphs for random reads as well? Or are random reads quite stable, so we can derive them from the 4KB random read Iometer scores? I notice the SandForce drives seem to find random reads harder, so I'm wondering if there are any latency spikes for various drives. Reply
  • lucasbakker - Wednesday, June 12, 2013 - link

    What about capacitors on this controller? Why is it that in reviews nowadays I no longer see any mention of supercapacitors and data loss when losing power? Reply
  • ShieTar - Wednesday, June 12, 2013 - link

    Well, those are enterprise drives; Intel probably assumes that their customers will implement their own emergency power plans in their data centers, so the drives themselves don't have to.
    And on consumer drives, the potential data loss from a power outage is rather acceptable for most people. I've personally experienced one real power outage and one blown fuse over the past 25 years, so that's not really a relevant scenario for my PC buying decisions.
    Reply
  • lucasbakker - Wednesday, June 12, 2013 - link

    It used to be a big issue in reviews. For databases a capacitor can be pretty important, even when taking emergency power setups into account. Furthermore, I guess that with laptops sudden power drops are a little more common. Reply
  • thomas-hrb - Wednesday, June 12, 2013 - link

    Somehow I don't see this making it into too many laptops, and enterprise SANs etc. have power failure protection. I think it is just a feature from the S3700 that they did not disable in this unit; it all helps with the prosumer targeting. Reply
  • zanon - Wednesday, June 12, 2013 - link

    Um, Anand? Why no mention of your own research showing how key overprovisioning is and the immense difference it can make in performance consistency? The S3500 is significantly more expensive than other prosumer drives like the 840 Pro, Corsair Neutron, etc., and by "significant" I mean the magical "25%". That means that someone could instead choose to get another drive (or multiple other drives) and then assign 25% spare area for each, at which point, from your own tests, it looks like the S3500 gets SLAUGHTERED.

    Please do not throw softballs to Intel, they are big boys and can and should be expected to produce competitive, top tier stuff with no asterisks. If for some reason the far higher IOPS with better consistency produced by drives like the Corsair aren't worth the same as the Intel drive, please explain why. If there are other special features being factored in, please mention them. But even for a brief, high level overview this didn't feel like it set the proper context. You spent a great deal of time testing and discussing this stuff in the past, so to suddenly have it vanish from the conversation feels pretty weird.
    Reply
  • nathanddrews - Wednesday, June 12, 2013 - link

    The difference is that the S3500 comes over provisioned and the others don't. While you and I have the knowledge and skill to do it ourselves, most people - even IT staff - would have zero clue or interest in how to do something like that. Reply
  • zanon - Wednesday, June 12, 2013 - link

    Give me a break, "most people" aren't interested in an S3500, period, or even a prosumer drive; their primary focus would be capacity and cost (since at that level any modern SSD at all will be great). By definition, anyone interested in this or other such drives isn't "most people". "IT staff" or prosumers can perfectly well format/partition a drive; an easy GUI for it comes with every OS they'd use, and it's hardly the kind of technical operation that'd make it a rare case. And since it only ever needs to be done once and can then be ignored forever, it can even be set up by someone else.

    Anand has considered it important enough to spend significant time on and test in all other recent reviews, and I think that speaks for itself. It's of direct relevance.
    Reply
  • cheeselover - Wednesday, June 12, 2013 - link

    Does increasing overprovisioning on the Intel drive change the performance much? This article compares the S3500 to the 600 Pro, but overprovisioning is much higher on the Seagate drive (512GB of flash to get 400GB of storage). The Intel drive is listed as 264GB of flash for 240GB, which translates to roughly 512GB of flash for 480GB.

    Also wondering how the pricing works out, considering that for the same amount of flash the Seagate drives get 20% less storage space.
    Reply
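Part of the "28% vs 9%" confusion in this thread may be that overprovisioning can be quoted against either usable or raw capacity. A small sketch, using the raw/usable figures quoted in the comments rather than verified spec-sheet numbers:

```python
# Two common ways to quote SSD overprovisioning from raw vs usable capacity.
def op_pct_of_usable(raw_gb, usable_gb):
    """Spare area as a percentage of user-visible capacity."""
    return (raw_gb - usable_gb) / usable_gb * 100

def op_pct_of_raw(raw_gb, usable_gb):
    """Spare area as a percentage of total raw flash."""
    return (raw_gb - usable_gb) / raw_gb * 100

# Seagate 600 Pro: 512GB of flash for 400GB of storage (as quoted above)
print(op_pct_of_usable(512, 400))  # 28.0
# Intel S3500 240GB: 264GB of flash for 240GB of storage (as quoted above)
print(op_pct_of_raw(264, 240))     # ~9.1
```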
  • sallgeud - Wednesday, June 12, 2013 - link

    As of right now it's been nearly 6 weeks since the last retailer and wholesaler received their shipments of S3700s. The word from most of them is that we're at least 6 more weeks away from the next expected deliveries. For those of us in the server world, it would be great if they could just produce and ship what they already make... and thus far throwing money at my monitor has done nothing. Reply
  • mtoma - Wednesday, June 12, 2013 - link

    Regarding the testing methodology: on page 3, Mr. Shimpi said (as usual) the following: "To generate the data below I took a freshly secure erased SSD and filled it with sequential data". OK, so how EXACTLY did he do that? I mean, secure erasing the Intel SSD. I was in a couple of very frustrating positions when I tried to secure erase Intel and Samsung SSDs, following the kind (read: DUMB) suggestions of Samsung SSD Magician and Intel SSD Toolbox. On the Samsung drive I finally did it; I secure erased the drive. On the Intel, no way. Intel SSD Toolbox kept saying that I must power down the drive and then power it on, but that didn't work. I noticed a lot of angry users of Intel SSDs who could not secure erase their drives.
    So allow me to repeat the question: HOW DID MR. SHIMPI SECURE ERASE THE DRIVE? Thanks!
    Reply
  • alainiala - Wednesday, June 12, 2013 - link

    Interesting, the comment about the high idle power usage making this drive not ideal for consumer use... Our channel partner was recommending this as a replacement for the 320 Series for our laptops. Reply
  • mjz - Thursday, June 13, 2013 - link

    Why would you even have to upgrade the SSDs in the laptops? I think your channel partner is just trying to make some money. The Intel 320 SSD, when used in a laptop, is good for 98% of tasks. Reply
  • neodan - Thursday, June 13, 2013 - link

    Unrelated question, but if you guys had a choice between the Crucial M500 480GB and the Samsung 830 512GB for the same price, which would you pick overall? Reply
  • Wolfpup - Thursday, June 13, 2013 - link

    I continue to be a firm believer in Micron/Crucial's and Intel's drives: quality, reliability and non-flakiness over (sometimes) better performance. ANY decent SSD for years now has provided crazy performance. As far as I'm concerned, that's now a moot point, save for drives that weirdly dip super low.

    What I care about is reliability and the testing these two companies do compared to other companies. I mean, whoopdedo if one company makes an SSD that's 400 bajillion MB/s and another does 400 bajillion + 20 MB/s, if the latter is going to corrupt my data after six months.

    I've currently got two Intel drives and a Crucial in active use (one in my PlayStation 3) and all of them have run great with zero issues. Thrilled that Intel's using their own controllers again and not the "we spent an entire year fixing SandForce's gigantic bugs and it still has gigantic bugs" SandForce stuff.

    Hmm, I guess actually I have a Samsung in my Macbook which has been okay too.
    Reply
  • Juddog - Thursday, June 13, 2013 - link

    Excellent job Anand! I just hope Intel can keep up with supplying these things; I tried to get my hands on an S3700 after they came out and they were all completely sold out everywhere. Reply
  • toyotabedzrock - Thursday, June 13, 2013 - link

    If my math is correct, excluding the spare area, this MLC can only be written to 700 times? Reply
  • ShieTar - Friday, June 14, 2013 - link

    Your math is unrealistically simplified. You could fill up 75% of the drive with data that you never change; then you can write the remaining 25% of the space 2800 times before you reach the 450TB written.

    Also, Intel only wants to guarantee 450TB written. That could still mean that the average drive survives much longer; it just is not meant as a major selling point for this drive.
    Reply
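The static-data scenario above can be sketched numerically. The ~640GB usable figure is inferred from the "700 times" estimate in the question, and real drives add write amplification and wear leveling on top:

```python
# If most of the drive holds static data, the rated endurance
# concentrates on the region that actually gets rewritten.
endurance_gb = 450_000    # rated total bytes written, in GB
usable_gb = 640           # inferred usable capacity behind the "700x" figure
active_frac = 0.25        # only 25% of the drive is ever rewritten

full_drive_cycles = endurance_gb / usable_gb                  # ~703
hot_region_cycles = endurance_gb / (usable_gb * active_frac)  # ~2812

print(full_drive_cycles, hot_region_cycles)
```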
  • jhh - Friday, June 14, 2013 - link

    I don't understand why the review says latency measurements are done, when the chart shows IOPS. Latency is measured in milliseconds, not IOPS. I want to know how long it takes for the drive to complete an operation after it gets the command. Even more interesting is how that measurement changes as the queue is bigger or smaller. Any chance of getting measurements like this?

    I'm not sure how this works in Windows, but in Linux, when an application wants to be sure data is persistently stored, this operation translates into a filesystem barrier, which does not return until the drive has written the data (or stored it in a place where it's safe from power failure). The faster the barrier completes, the faster the application runs. This is why I would like to know latency in milliseconds. While IOPS has its value, so does milliseconds.
    Reply
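The two views are convertible: at a fixed queue depth, Little's law ties them together (mean latency = outstanding IOs / IOPS), so latency in milliseconds can be recovered from an IOPS-over-time chart when the queue depth is known. A sketch with illustrative numbers, not measurements from the review:

```python
# Little's law for storage: outstanding_ios = IOPS * mean_latency.
# Rearranged to derive average latency from an IOPS figure at fixed QD.
def mean_latency_ms(iops, queue_depth):
    """Average completion latency (ms) implied by a fixed queue depth."""
    return queue_depth / iops * 1000.0

print(mean_latency_ms(25_000, 32))  # 32 outstanding IOs at 25K IOPS
print(mean_latency_ms(25_000, 1))   # the same IOPS rate at QD1
```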
  • mk8 - Tuesday, January 14, 2014 - link

    Anand, I think one thing that you don't mention at all in the article is IF the S3500 needs or benefits from overprovisioning. I guess the performance benefits would be minor, but what about write amplification? I look forward to "Part 2" of the article. Thanks Reply
