56 Comments
bradcollins - Thursday, November 22, 2012 - link
One question: all of the tests at AnandTech for SandForce performance after the drive has been hammered for a period of time are run over the entire drive. Do the drives maintain their performance if the random writes only cover 50% or 75% of the LBAs on the drive? Very few people actually fill up their SSD, so I wonder if it is a truly relevant test?
Impulses - Wednesday, November 28, 2012 - link
You will always eventually use all of your SSD; wear leveling algorithms will spread data across all NAND packages... A certain portion will always be marked "empty" if you haven't filled it to capacity, but that space has been issued and it's subject to the performance degradation conditions AT tests for.
Bullwinkle J Moose - Thursday, November 22, 2012 - link
Time for an article on cache analysis?
It looks to me as though the 1GB DRAM cache on the Intel DC S3700 is mainly responsible for smoothing out those peaks and valleys to deliver "consistent" performance across the drive.
As for TRIM...
It's time to start with a fresh perspective on SSDs.
Bullwinkle J Moose - Thursday, November 22, 2012 - link
I know Intel claimed otherwise on the DRAM usage, but I don't buy it. Sounds more likely they are just sending the competition on a wild goose chase.
Bullwinkle J Moose - Thursday, November 22, 2012 - link
Reread the DC S3700 review again: 256MB of the 1GB is used for cache.
OK, my bad
Bullwinkle J Moose - Thursday, November 22, 2012 - link
Doh. If I read it 5 more times, I'll get it right eventually.
extide - Saturday, November 24, 2012 - link
If it were that easy, don't you think the other guys would have drives like the S3700 out?
Kristian Vättö - Friday, November 23, 2012 - link
The problem with caching in general is that there is no good way to test how much write/read caching the drive is doing. All we have is what the manufacturers tell us, which may or may not be accurate.
mayankleoboy1 - Friday, November 23, 2012 - link
With each review, the Samsung 840 Pro looks better and better.
JellyRoll - Friday, November 23, 2012 - link
This information has already been hashed over by several sites, in particular TweakTown. They have been educating the public for months about the lack of TRIM with SandForce SSDs.
Other sites have also noticed the read degradation and commented on it ad nauseam.
FunnyTrace - Wednesday, November 28, 2012 - link
Yes, I did read an article on TweakTown about this in August 2012.
JellyRoll - Friday, November 23, 2012 - link
OMG... SandForce does not do dedupe (deduplication). It does not "have to check if the data is used by something else"!!
The drive is unaware of the actual file usage above the device level. That is a host-level responsibility.
I cannot believe that this article was not vetted before it was posted.
Kristian Vättö - Friday, November 23, 2012 - link
SandForce does deduplication at the device level. It doesn't look for actual files like the host does, because it's all ones and zeros to the controller. However, what it does do is look for similar data patterns.
For example, if you have two very similar photos of 5MB each, the controller may not write 10MB. Instead, it will only write, say, 8MB to the NAND, because some of the data is duplicated and the whole idea of deduplication is to minimize NAND writes.
If you go and delete one of these photos, the OS sends a TRIM command that indicates the LBA is no longer in use and its data can be erased. What makes SandForce more complicated is the fact that the photos don't necessarily have their own LBAs, so what you need to do is check that the data behind the LBA you're about to erase is not mapped to any other LBA. Otherwise you might end up erasing a portion of the other photo as well.
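A minimal sketch of that bookkeeping, assuming a hypothetical reference-counted mapping table (SandForce's real design is proprietary and not public):

```python
# Hypothetical sketch: a controller that can map several LBAs to one physical
# block must check a reference count before erasing anything on a TRIM.
class DedupFTL:
    def __init__(self):
        self.lba_to_phys = {}   # logical block address -> physical block id
        self.refcount = {}      # physical block id -> number of LBAs using it

    def write(self, lba, phys_block):
        self.trim(lba)          # drop any previous mapping for this LBA first
        self.lba_to_phys[lba] = phys_block
        self.refcount[phys_block] = self.refcount.get(phys_block, 0) + 1

    def trim(self, lba):
        """TRIM one LBA: erase the physical block only when no other LBA maps to it."""
        phys = self.lba_to_phys.pop(lba, None)
        if phys is None:
            return
        self.refcount[phys] -= 1
        if self.refcount[phys] == 0:
            del self.refcount[phys]
            print(f"erasing physical block {phys}")   # safe, nothing else uses it
        else:
            print(f"keeping physical block {phys}, still referenced")

ftl = DedupFTL()
ftl.write(lba=1, phys_block=42)   # photo A
ftl.write(lba=2, phys_block=42)   # photo B shares the duplicated data
ftl.trim(1)                       # keeps block 42: photo B still needs it
ftl.trim(2)                       # now block 42 can actually be erased
```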
JellyRoll - Friday, November 23, 2012 - link
I challenge you to offer one document that supports your assertion that SandForce does deduplication. There isn't any, as it does not. Feel free to link to the technical document that supports your claims in your reply.
SandForce supports compression, not deduplication.
Here is a link to documentation and product data sheets.
http://www.lsi.com/products/storagecomponents/Page...
Kristian Vättö - Friday, November 23, 2012 - link
SandForce/LSI has published very little about the technology behind DuraWrite and how it works, but what they have told us is that it is a combination of technologies, including compression and deduplication.
http://www.anandtech.com/show/2899/3
JellyRoll - Friday, November 23, 2012 - link
Linking an AnandTech article is not proof that SandForce does deduplication. A quick Google search will reveal that there is no other source, outside of AnandTech, that claims they offer deduplication on the current series of processors.
As a matter of fact, that article is the only other reference to deduplication and SandForce that can be found.
There was a mistake made in that article.
JellyRoll - Friday, November 23, 2012 - link
As a matter of fact, if deduplication were to apply to the SandForce series of processors, then incompressible data would also experience decreases in write amplification. SandForce is very public that they have "to follow the same rules" with incompressible data as everyone else, i.e., they suffer the same amount of write amplification.
Since SandForce controllers only exhibit performance-enhancing and endurance-increasing benefits with compressible data, that alone indicates that deduplication is not in use.
Deduplication can be applied regardless of the compressibility of the data.
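For reference, write amplification is simply the ratio of NAND writes to host writes; a toy calculation with made-up numbers (not SandForce's figures) shows why compression can push it below 1 while incompressible data follows the usual rules:

```python
# Illustration with made-up numbers (not SandForce figures):
# write amplification = bytes physically written to NAND / bytes written by the host.
def write_amplification(host_gb, nand_gb):
    return nand_gb / host_gb

# Compressible workload: the controller shrinks 100 GB of host writes to ~60 GB of NAND writes.
print(write_amplification(100, 60))    # 0.6 -> write amplification below 1

# Incompressible workload: nothing shrinks, and garbage collection adds overhead on top.
print(write_amplification(100, 130))   # 1.3 -> "the same rules as everyone else"
```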
extide - Saturday, November 24, 2012 - link
Incompressible data generally doesn't have duplications in it... that's kinda what makes it incompressible... I mean, the whole POINT of compression is removing duplications!
JellyRoll - Saturday, November 24, 2012 - link
If you have two matching sets of data, be they incompressible or not, they would be subject to deduplication. It would merely require mapping to the same LBA addresses.
For instance, if you have two files that consist of largely incompressible data, but they are still carbon copies of each other, they are still subject to data deduplication.
extide - Wednesday, November 28, 2012 - link
That could also be considered compression. Take 2 copies of the same MP3 file and put them into a zip file: how big is the zip file? Pretty close to the size of one copy...
Sivar - Saturday, November 24, 2012 - link
Do you understand how data deduplication works?
This is a rhetorical question. Those who have read your comments know the answer.
Please read the Wikipedia article on data deduplication, or some other source, before making further comments.
JellyRoll - Saturday, November 24, 2012 - link
I am repeating the comments above for you. Since you referenced the Wiki, I would kindly suggest that you have a look at it yourself before commenting further:
"the intent of storage-based data deduplication is to inspect large volumes of data and identify large sections – such as entire files or large sections of files – that are identical, in order to store only one copy of it."
This happens without any regard to whether data is compressible or not.
If you have two matching sets of data, be they incompressible or not, they would be subject to deduplication. It would merely require mapping to the same LBA addresses.
For instance, if you have two files that consist of largely incompressible data, but they are still carbon copies of each other, they are still subject to data deduplication.
'nar - Monday, November 26, 2012 - link
You contradict yourself, dude. You are regurgitating the words, but their meaning isn't sinking in. If you have two identical sets of incompressible data, then you have just made it compressible, i.e. 2=1.
When the drive is hammered with incompressible data, there is only one set of data. If there were two or more sets of identical data, then it would be compressible. De-duplication is a form of compression. If you have incompressible data, it cannot be de-duped.
Write amplification improvements come from compression, as in 2 files = 1 file. Write less, lower amplification. Compressible data exhibits this, but incompressible data cannot, because no two files are identical. Write amp is still high with incompressible data, like everyone else's. Your conclusion is backwards. De-duplication can only be applied to compressible data.
The previous article that Anand himself wrote suggested dedupe; it did not state that it was used, as that was not divulged. Either way, dedupe is similar to compression, hence the description. Although vague, it's the best we got from SandForce to describe what they do.
What SandForce uses is speculation anyhow, since it deals with trade secrets. If you really want to know, you will have to ask SandForce yourself. Good luck with that. :)
JellyRoll - Tuesday, November 27, 2012 - link
If you were to write 100 exact copies of a file, with each file consisting of incompressible data and 100MB in size, deduplication would only write ONE file and link back to it repeatedly. The other 99 instances of the same file would not be written again.
That is the very essence of deduplication.
SandForce processors do not exhibit this characteristic, be it 100 files or even only two similar files.
Of course SandForce doesn't disclose their methods, but flat-out terming it dedupe is misleading at best.
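For reference, a minimal sketch of that behaviour using generic content-hash deduplication (the technique in the abstract, not a claim about SandForce's firmware); compressibility of the payload is irrelevant, only equality of content matters:

```python
import hashlib, os

# Generic content-hash deduplication: identical writes are stored once, and
# compressibility does not matter -- only whether the content is the same.
class DedupStore:
    def __init__(self):
        self.blocks = {}    # sha256 digest -> data actually stored
        self.names = {}     # file name -> digest (a cheap pointer back to the data)

    def write(self, name, data):
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:   # first copy is really written
            self.blocks[digest] = data
        self.names[name] = digest       # every later copy is just a reference

    def stored_bytes(self):
        return sum(len(b) for b in self.blocks.values())

store = DedupStore()
payload = os.urandom(1024 * 1024)       # 1 MB of random, incompressible data
for i in range(100):
    store.write(f"copy_{i}", payload)   # "write" 100 exact copies

print(store.stored_bytes())             # ~1 MB kept, not 100 MB
```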
extide - Wednesday, November 28, 2012 - link
DeDuplication IS a form of compression, dude. Period!!
FunnyTrace - Wednesday, November 28, 2012 - link
SandForce presumably uses some sort of differential information update. When a block is modified, you find the difference between the old data and the new data. If the difference is small, you can just encode it over a smaller number of bits in the flash page. If you do the difference encoding, you cannot GC the old data unless you reassemble and rewrite the new data to a different location.
Difference encoding requires more time (extra read, processing, etc.), so you must not do it when the write buffer is close to full. You can always choose whether or not you do differential encoding.
It is definitely not deduplication. You can think of it as compression.
A while back, my prof and some of my labmates tried to guess their "DuraWrite" (*rolls eyes*) technology, and this is the best guess we have come up with. We didn't have the resources to reverse engineer their drive. We only surveyed published literature (papers, patents, presentations).
Oh, and here's their patent: http://www.google.com/patents/US20120054415
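A toy sketch of that delta-update idea (our guess at the general technique, assuming fixed-size blocks; not reverse-engineered SandForce code):

```python
import os

# Toy delta-update write path: store a small diff against the old block when it
# is worth it, otherwise fall back to writing the full block.
def diff_runs(old: bytes, new: bytes):
    """Return (offset, replacement_bytes) runs where new differs from old."""
    runs, i, n = [], 0, min(len(old), len(new))
    while i < n:
        if old[i] != new[i]:
            j = i
            while j < n and old[j] != new[j]:
                j += 1
            runs.append((i, new[i:j]))
            i = j
        else:
            i += 1
    return runs

def encode_update(old: bytes, new: bytes, threshold=0.25):
    """A real controller would also skip the delta path when the write buffer is
    nearly full (the diff costs an extra read plus processing time), and it could
    not GC the old block while a delta still depends on it."""
    delta = diff_runs(old, new)
    delta_size = sum(len(r) for _, r in delta) + 8 * len(delta)   # rough metadata cost
    if delta_size <= threshold * len(new):
        return ("delta", delta)
    return ("full", new)

old = bytes(4096)
new = bytearray(old)
new[100:110] = b"X" * 10                              # small in-place change
print(encode_update(old, bytes(new))[0])              # "delta"
print(encode_update(old, os.urandom(4096))[0])        # "full": almost everything changed
```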
JellyRoll - Friday, November 30, 2012 - link
Hallelujah!
Thanks FunnyTrace, I had a strong suspicion that it was data differencing. The linked patent document mentions it 44 times. Maybe that many repetitions will sink in for those who still believe it is deduplication?
Also, here is a link to data differencing for those who wish to learn:
http://en.wikipedia.org/wiki/Data_differencing
Radoslav Danilak is listed as the inventor, which is not surprising; I believe he was SandForce employee #2. He is now running Skyera, and he is an excellent speaker, btw.
extide - Saturday, November 24, 2012 - link
It's no different from SANs and ZFS and other enterprise-level storage solutions doing block-level de-duplication. It's not magic, and it's not complicated. Why is it so hard to believe? I mean, you are correct that the drive has no idea what bytes go to what file, but it doesn't have to. As long as the controller sends the same data back to the host for a given read on an LBA as the host sent to write, it's all gravy. It doesn't matter what ends up on the flash.
JellyRoll - Saturday, November 24, 2012 - link
Absolutely correct. However, they have much more powerful processors. You are talking about a very low-wattage processor that cannot handle deduplication on this scale. SandForce also does not make the statement that they actually DO deduplication.
FunBunny2 - Saturday, November 24, 2012 - link
here: http://thessdreview.com/daily-news/latest-buzz/ken...
"Speaking specifically on SF-powered drives, Kent is keen to illustrate that the SF approach to real time compression/deduplication gives several key advantages."
Kent being the LSI guy.
JellyRoll - Saturday, November 24, 2012 - link
Entertaining that you would link to thessdreview, which is pretty much unanimously known as the home of misinformation. Here is a link to the actual slide deck from that presentation, which never mentions deduplication:
http://www.flashmemorysummit.com/English/Collatera...
CeriseCogburn - Saturday, December 29, 2012 - link
LOL - good job, I will continue to read and see if all the "smart" people have finally shut the H up.
I was hoping one would come by, apologize, and thank you.
Of course I know better.
*Happy the consensus is NOT the final word.*
dishayu - Friday, November 23, 2012 - link
Get Kristian on to the next episode of the podcast and make him talk!!
popej - Friday, November 23, 2012 - link
What exactly does it mean: "I TRIM'ed the drive after our 20 minute torture"?
Shouldn't the TRIM function be executed by the OS all the time during the torture test?
Kristian Vättö - Friday, November 23, 2012 - link
Most of our tests are run without a partition, meaning that the OS has no access to the drive. After the torture, I created a partition, which formats the drive, and then deleted it. Formatting the drive is the same as TRIMing all user-accessible LBAs, since it basically tells the controller to get rid of all the data on the drive.
popej - Friday, November 23, 2012 - link
Does it mean that there was no TRIM command executed at all?
Not when torturing the drive, because there was no TRIM-supporting partition. Not when you "TRIM'ed" the drive, because it was a format.
While I agree that you can notice some weird effects, why do you describe them as TRIM problems? Sorry, but I don't see how your test could be relevant to standard use of an SSD, where TRIM is active all the time.
Kristian Vättö - Friday, November 23, 2012 - link
Formatting is the same as issuing a TRIM command to the whole drive. If I disable TRIM and format the drive, its performance won't be restored, since the drive still thinks the data is in use and hence you'll have to do read-modify-write when writing to the drive.
They are problems in the sense that performance should fully restore after formatting. If it doesn't, then TRIM does not function properly. Using an extreme scenario like we do is the best way to check if there is a problem; how that affects real-world usage is another question. With light usage there shouldn't be a problem, but you may notice the degradation in performance if your usage is write-intensive.
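To make the read-modify-write point concrete, here is a rough, generic model (flash behaviour in the abstract, not any particular controller's firmware):

```python
# NAND pages can't be overwritten in place, so updating a page inside a block the
# drive still believes is in use means reading the whole block, merging the change,
# erasing, and programming it all again. A TRIMmed block avoids all of that.
PAGE_BYTES = 4096
PAGES_PER_BLOCK = 128

def host_write(block_pages, page_index, new_page, block_believed_free):
    """Return how many bytes the drive physically writes for one 4 KB host write."""
    if block_believed_free:                      # TRIMmed: program just the one page
        block_pages[page_index] = new_page
        return PAGE_BYTES
    merged = list(block_pages)                   # read the whole block
    merged[page_index] = new_page                # modify
    block_pages[:] = merged                      # erase + reprogram (modelled crudely)
    return PAGES_PER_BLOCK * PAGE_BYTES

block = [bytes(PAGE_BYTES)] * PAGES_PER_BLOCK
print(host_write(block, 5, b"x" * PAGE_BYTES, block_believed_free=True))    # 4096
print(host_write(block, 6, b"x" * PAGE_BYTES, block_believed_free=False))   # 524288
```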
popej - Friday, November 23, 2012 - link
Based on your test, I would say that a format is not enough to restore drive performance after using it without TRIM. Quite possibly the state of the drive after torture without TRIM is very different from anything you can get when TRIM is active.
It would be interesting to compare your test to a real-life scenario, with an NTFS partition and working TRIM.
Kristian Vättö - Friday, November 23, 2012 - link
With most SSDs, formatting the drive will fully restore its performance, so the behavior we're seeing here is not completely normal.
Remember that even if TRIM is active at all times, sending a TRIM command to the controller does not mean the data will be erased immediately. If you're constantly writing to the SSD, the controller may not have time to do garbage collection in real time, and hence the SSD may be pushed to a very fragmented state, as in our test where, as we can see, TRIM doesn't work perfectly.
I know that our test may not translate to the real world in most cases, but it's still a possible scenario if the drive is hammered enough.
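As a simplified illustration of why a TRIM does not mean an immediate erase, here is a hypothetical model of the bookkeeping (not any vendor's actual firmware):

```python
# TRIM only marks pages stale; the erase happens later, when garbage collection
# gets idle time. A drive hammered with writes may never get that idle time.
PAGES_PER_BLOCK = 128

class SimpleFTL:
    def __init__(self, num_blocks):
        self.stale_pages = [0] * num_blocks   # per-block count of trimmed pages
        self.free_blocks = set()

    def trim_block(self, block):
        self.stale_pages[block] = PAGES_PER_BLOCK   # cheap bookkeeping, no erase yet

    def garbage_collect(self, idle_slots):
        """Erase at most `idle_slots` fully-stale blocks; return how many were reclaimed."""
        reclaimed = 0
        for b, stale in enumerate(self.stale_pages):
            if reclaimed == idle_slots:
                break
            if stale == PAGES_PER_BLOCK:
                self.free_blocks.add(b)      # erase and return to the free pool
                self.stale_pages[b] = 0
                reclaimed += 1
        return reclaimed

ftl = SimpleFTL(num_blocks=1000)
for b in range(1000):
    ftl.trim_block(b)                        # the whole drive was just "formatted"
print(ftl.garbage_collect(idle_slots=50))    # only 50 blocks reclaimed before new writes arrive
```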
JellyRoll - Friday, November 23, 2012 - link
If the majority of your tests are conducted without a partition, does that mean none of the Storage Bench results are with TRIM?
Kristian Vättö - Friday, November 23, 2012 - link
That is correct, Storage Bench tests are run on a drive without a partition.
Running tests on a drive with a partition vs. without one is something I've discussed with other storage editors quite a bit, and there isn't really an optimal way to test things. We prefer to test without a partition because that is the only way we can ensure the OS doesn't cause any additional anomalies, but it means the drive may behave slightly differently with a file system than what you see in our tests.
JellyRoll - Friday, November 23, 2012 - link
Well, personally I think that testing devices in the environment they are designed to operate in is important. You are testing SSDs that are designed for a filesystem and TRIM, without a filesystem and TRIM. This means that the traces you are running aren't indicative of real performance at all; the drives are functioning without the benefit of their most important aspect, TRIM. This explains why AnandTech is just now reporting the lack of TRIM support when other sites have been reporting it for months.
Testing in an unrealistic environment, with different datasets than those actually used when recording (your tools do not use the actual data; they use substituted data that is highly compressible), in a TRIM-free environment, is like testing a Formula One car in a school zone.
This is the problem with proprietary traces. Readers have absolutely no idea if these results are valid, and surprise, they are not!
extide - Saturday, November 24, 2012 - link
The drive has NO IDEA if there is a partition on it or not. All the drive has to do is store data at a bunch of different addresses. That's it. Whether there is a partition or not makes no difference; it's all just 0's and 1's to the drive.
JellyRoll - Saturday, November 24, 2012 - link
It IS all ones and zeros, my friend, but TRIM is a command issued by the operating system, NOT the drive. This is why XP does not support TRIM, for instance, and several older operating systems also do not support it. That is merely because they do not issue the TRIM command. The OS issues the TRIM commands, but only as a function of the file system that is managing the drive. :)
No file system = no TRIM.
JellyRoll - Saturday, November 24, 2012 - link
Excerpts from the definition of TRIM from the Wiki:
Because of the way that file systems typically handle delete operations, storage media (SSDs, but also traditional hard drives) generally do not know which sectors/pages are truly in use and which can be considered free space.
Since a common SSD has no access to the file system structures, including the list of unused clusters, the storage medium remains unaware that the blocks have become available.
popej - Friday, November 23, 2012 - link
Different drives, different algorithms and different results. But since you are testing the drive well outside normal use, you should draw conclusions with care; not all of them may be relevant to standard applications.
I've read everything (AFAIK) on AT on SSDs over the past few years, and the power-saving features used to be disabled in reviews. They, at least at some point, significantly affected performance. Back then I bought an Intel 80GB Postville SSD, and all the tests I ran confirmed that these settings have quite a big impact.
I currently have an Intel 520 (though sadly limited by a 3Gbps SATA controller on my old Core i7 920 platform), and I never thought of turning everything on again, so I wonder whether the problem is solved with newer drives. Did I miss something, or why aren't these settings disabled anymore? Hopefully it's not a feature of newer platforms.
It would be nice if the next big SSD piece would cover this (or feel free to point me to an older one that does :)). I'd really like this to be clarified, if possible.
Bullwinkle J Moose - Friday, November 23, 2012 - link
I was kinda wondering something similar.
AHCI might give you a slight performance boost, but does it affect the "consistency" of the test results, having NCQ enabled, or the power-saving features of AHCI or the OS itself?
I always test my SSDs in worst-case scenarios to find the bottom:
XP
No TRIM (not even O&O Defrag Pro's manual TRIM)
Heavy defragging to test reliability while still under the return policy
Yanking the drive's power while running
Stuff like that
I predicted that my Vertex 2 would die if I ever updated the firmware, as I have been telling people for the past few years, and YES, it finally died right after the firmware update.
It was still under warranty but I seriously do not want another one
Time for me to thrash a Samsung 256GB 840 Pro
I feel sorry for it already
Sniff
Bullwinkle J Moose - Friday, November 23, 2012 - link
I forgot:
Misaligned partitions and firmware updates will also be used for testing any of my new drives while they are still under the return policy.
I don't trust my data to a drive I haven't tested in a worst-case scenario.
Kristian Vättö - Friday, November 23, 2012 - link
EIST and Turbo were disabled when we ran our tests on a Nehalem-based platform (that was before AnandTech Storage Bench 2011), but they have been enabled since then. Some of our older SSD reviews have a typo in the table which suggests that those would be disabled in our current setup as well, but that's because Anand forgot to remove those lines when updating the spec table for our Sandy Bridge build.
R3dox - Friday, November 23, 2012 - link
I see, but that doesn't really answer my question :P.
Is there still a performance hit, and do you just choose to test under normal rather than optimal conditions, or is this a thing of the past?
Kristian Vättö - Friday, November 23, 2012 - link
I tested this quickly a while back but there was no significant difference in performance (small variation always occurs anyway):
http://forums.anandtech.com/showthread.php?t=22721...
R3dox - Friday, November 23, 2012 - link
Thanks for the replies :).
You say "AFAIK it affected performance with some older SandForce SSDs but when I started testing SSD and asked Anand for all the settings, he just told me to leave it on since it doesn't matter anymore".
But it's a clear difference on my old Postville. Granted, the SSD is my boot disk during those tests, but isn't that their most likely use anyway? TBH, I'd be interested to know why enabling those power-saving features apparently impacts performance only when the drive is used as a boot disk. When I say "impact", I mean based on multiple runs, of course.
Lastly, it seems that C-states are the most impactful setting and that one isn't mentioned in the reviews. I suppose you've left those on as well?
JellyRoll - Wednesday, December 5, 2012 - link
That wasn't testing.
Schugy - Tuesday, November 27, 2012 - link
Don't buy MLC rubbish. SLC is really worth it.
FunnyTrace - Wednesday, November 28, 2012 - link
Nice going SandForce.
1) BSOD problem
2) AES-256 hardware doesn't work (seriously??? hardware doesn't work???)
3) TRIM has not been working properly (what, you failed to GC blocks properly?)
As a lot of people mention, these SSD makers want to use early adopters and PC building enthusiasts as guinea pigs.