Mushkin Atlas mSATA (240GB & 480GB) Review
by Kristian Vättö on December 16, 2013 1:10 PM EST

Performance Consistency
In our Intel SSD DC S3700 review Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show that. The reason we don't have consistent IO latency with SSDs is because inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying that can result in higher peak performance at the expense of much lower worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.
To generate the data below we take a freshly secure erased SSD and fill it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. We run the test for just over half an hour, nowhere near what we run our steady state tests for but enough to give a good look at drive behavior once all spare area fills up.
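If you want to approximate this workload yourself, a rough equivalent can be scripted around fio on Linux. The sketch below is only an illustration under assumptions that aren't part of our test setup (fio with libaio, and a hypothetical /dev/sdX scratch device whose contents will be destroyed):

```python
# Rough approximation of the consistency workload described above.
# Assumes fio is installed and /dev/sdX (hypothetical) is a scratch drive.
import subprocess

DEV = "/dev/sdX"  # hypothetical device under test -- all data will be destroyed

# Step 1: fill every user-accessible LBA with sequential data.
subprocess.run([
    "fio", "--name=seqfill", f"--filename={DEV}",
    "--rw=write", "--bs=128k", "--direct=1",
    "--ioengine=libaio", "--iodepth=32",
], check=True)

# Step 2: 4KB random writes across all LBAs at QD=32 with incompressible
# data, logging averaged IOPS once per second for ~2000 seconds.
subprocess.run([
    "fio", "--name=randwrite", f"--filename={DEV}",
    "--rw=randwrite", "--bs=4k", "--direct=1",
    "--ioengine=libaio", "--iodepth=32",
    "--norandommap", "--refill_buffers",
    "--time_based", "--runtime=2000",
    "--log_avg_msec=1000", "--write_iops_log=atlas",
], check=True)
```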
We record instantaneous IOPS every second for the duration of the test and then plot IOPS vs. time and generate the scatter plots below. Each set of graphs features the same scale. The first two sets use a log scale for easy comparison, while the last set of graphs uses a linear scale that tops out at 40K IOPS for better visualization of differences between drives.
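If you log IOPS with fio as in the sketch above, the resulting per-second log (e.g. atlas_iops.1.log; the exact file name depends on the fio version) can be turned into a comparable scatter plot with a few lines of matplotlib. Again, this is just an illustrative sketch, not the tooling behind the charts below:

```python
# Plot per-second IOPS from a fio log on a log scale, similar to the
# scatter plots below. In fio's log format the first field is elapsed
# time in milliseconds and the second is the IOPS value.
import matplotlib.pyplot as plt

times, iops = [], []
with open("atlas_iops.1.log") as f:
    for line in f:
        fields = line.split(",")
        times.append(int(fields[0]) / 1000.0)  # ms -> seconds
        iops.append(int(fields[1]))

plt.scatter(times, iops, s=4)
plt.yscale("log")  # the first two chart sets use a log scale
plt.xlabel("Time (s)")
plt.ylabel("4KB random write IOPS (QD32)")
plt.title("Performance consistency")
plt.show()
```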
The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews however, we vary the percentage of the drive that gets filled/tested depending on the amount of spare area we're trying to simulate. The buttons are labeled with the advertised user capacity had the SSD vendor decided to use that specific amount of spare area. If you want to replicate this on your own all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here but not all controllers are guaranteed to behave the same way.
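As a rough worked example of the partitioning math, assuming a hypothetical drive with 256GiB of raw NAND and treating over-provisioning as spare area relative to user capacity (conventions vary, so adjust to taste):

```python
# Sketch of the spare-area arithmetic behind the "25% OP" buttons.
# The 256GiB raw NAND figure is an assumption for illustration, not a
# number taken from the drives reviewed here.
GIB = 1024 ** 3

raw_nand_bytes = 256 * GIB   # assumed physical NAND on the drive
op_fraction = 0.25           # desired over-provisioning (spare / user capacity)

user_bytes = raw_nand_bytes / (1 + op_fraction)
spare_bytes = raw_nand_bytes - user_bytes

print(f"Partition size: {user_bytes / GIB:.1f} GiB "
      f"({user_bytes / 1e9:.1f} GB as marketed)")
print(f"Spare area:     {spare_bytes / GIB:.1f} GiB")
```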
The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
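As a toy illustration of that read-modify-write penalty (the page size, block size, and the 80%-valid victim block are assumptions for illustration, not Atlas internals):

```python
# Toy model of why performance drops once spare area is exhausted.
# Assumes 4KB pages, 512 pages per erase block, and a garbage-collection
# victim block that is still 80% valid when it gets recycled.
pages_per_block = 512
valid_fraction = 0.80   # share of still-valid pages in the victim block

# Recycling one victim block frees (1 - valid_fraction) * pages_per_block
# pages for new host writes, but first costs valid_fraction * pages_per_block
# page relocations (the read-modify-write part).
host_pages = (1 - valid_fraction) * pages_per_block
nand_pages = pages_per_block  # relocated pages plus host writes fill the block
write_amplification = nand_pages / host_pages

print(f"Write amplification once spare area is gone: {write_amplification:.1f}x")
# -> 5.0x: every 4KB the host writes turns into ~20KB of NAND writes
```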
The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.
[Interactive chart: IOPS over the full 2000 second run — Mushkin Atlas 240GB | Mushkin Atlas 480GB | Intel SSD 525 | Plextor M5M | Samsung SSD 840 EVO 250GB; Default and 25% OP configurations]
Quite surprisingly, the 240GB model has great IO consistency, but performance is significantly lower at 480GB. We haven't tested any 480GB SandForce SSDs in years, so I'm not sure whether this is typical behavior or unique to the 480GB Atlas. Performance can drop as more NAND dies are added because there are more pages and blocks to track, which requires more processing power and cache. The SF-2281 silicon is over two years old, so I suspect it was never really optimized for capacities above 256GB, even though the controller is capable of supporting up to 512GB with 64Gb/die NAND. The 480GB model is still okay, though: even at steady state its IOPS stays around 5,000, whereas the Plextor M5M, for example, has moments where IOPS drops to zero.
[Interactive chart: IOPS at the onset of steady state (t=1400s), log scale — same drive and over-provisioning selection]
[Interactive chart: IOPS at the onset of steady state (t=1400s), linear scale — same drive and over-provisioning selection]
TRIM Validation
To test TRIM, I first filled all user-accessible LBAs with sequential data and then tortured the drive with 4KB random writes (100% LBA space, QD=32) for 60 minutes. After torturing the drive, I measured sequential write performance with Iometer (128KB IO size, incompressible data, 100% LBA space, QD=1, 60 seconds). Next I TRIM'ed the drive (quick format in Windows 7/8) and reran Iometer.
Mushkin Atlas Resiliency - Iometer Incompressible Sequential Write
                    | Clean     | Dirty    | After TRIM
Mushkin Atlas 240GB | 189.2MB/s | 35.2MB/s | 106.6MB/s
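Putting the table in relative terms (a trivial calculation, but it makes the shortfall explicit):

```python
# Recovery relative to the clean-drive baseline, using the table above.
clean, dirty, after_trim = 189.2, 35.2, 106.6  # MB/s

print(f"Dirty:      {dirty / clean:.0%} of clean performance")
print(f"After TRIM: {after_trim / clean:.0%} of clean performance")
# Dirty:      19% of clean performance
# After TRIM: 56% of clean performance
```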
As expected, performance doesn't fully recover. I've heard SandForce actually has a fix for this, but it's still in validation and will be implemented once it's given the green light.
Comments
kwrzesien - Monday, December 16, 2013 - link
Avago to buy storage chipmaker LSI for $6.6 billion: http://www.cnbc.com/id/101275289
CharonPDX - Tuesday, December 17, 2013 - link
Spammer seems to be on the increase here - is there any easy way to report spam comments? (I can't find one.)

hojnikb - Monday, December 16, 2013 - link
Yey another sandforce drive -.-
Although one interesting point comes from all this...
Sandforce is actually working on fixing trim, which is nice to hear.
Gunbuster - Monday, December 16, 2013 - link
I've got a 240GB Mushkin mSATA in my Precision M4700. Runs like a champ.

jrs77 - Monday, December 16, 2013 - link
mSATA is only of interest when talking either about switching the storage in your ultrabook or your thin mITX system. For everything else a standard SATA SSD is better in price/performance. And for those ultrabooks or thin clients, performance isn't the first question, but price and silent operation.
So I'd say that this drive pretty much loses on all fronts, especially vs the Crucial M500 240GB, which is currently available for $144.99.
lmcd - Monday, December 16, 2013 - link
Or when talking about the mSATA in a larger notebook as the boot drive.

MrSpadge - Tuesday, December 17, 2013 - link
I disagree: outfitting a regular laptop with one mSATA baby and a 1 or 2 TB 9.5mm-height 2.5" HDD could be very welcome to power users not wanting 17" laptops with 2 drive bays. But of course these mSATA drives have to be priced competitively - there's no reason for them to cost more for the same capacity.

Hrel - Monday, December 16, 2013 - link
Plextor still seems to be the way to go here. Good to see Mushkin offering a legitimate alternative, but Plextor gets my recommendation for now.

whyso - Monday, December 16, 2013 - link
Pretty poor drive. The high power consumption kills it in the mobile space.