We have lately seen SSD manufacturers paying more and more attention to the retail mSATA SSD market. For a long time, the retail mSATA SSD market was controlled by only a few players: Crucial, ADATA and Mushkin were the only ones with widely available models. Intel also had a few models available at retail, but those were all rather small and outdated (SATA 3Gbps and aimed mainly at caching with Intel Smart Response Technology). In the OEM market there are mSATA SSDs from major brands such as Samsung and Toshiba, but unfortunately many manufacturers have decided not to push their mSATA SSDs into the retail market.

Like I've said before, the market for retail mSATA SSDs isn't exactly alluring, but on the other hand, the market can't grow if the products available are not competitive. With only a few manufacturers playing in the field, it was clear that there wasn't enough competition, especially compared to the 2.5" SATA market. A short while ago, Intel brought the SSD 525, and with it some much-needed presence from a big SSD manufacturer, to the mSATA retail market. Now we have another player, Plextor, joining the chorus.

Plextor already showcased its M5M mSATA SSD at CES, but the actual release took place in mid-February. Architecturally the M5M is similar to Plextor's M5 Pro Xtreme: both use Marvell's 88SS9187 controller, 19nm Toshiba NAND and Plextor's custom firmware. The only substantial difference is four NAND packages instead of eight or sixteen, which is due to mSATA's space constraints.

                   M5M (256GB)   M5 Pro Xtreme (256GB)
Sequential Read    540MB/s       540MB/s
Sequential Write   430MB/s       460MB/s
4KB Random Read    80K IOPS      100K IOPS
4KB Random Write   76K IOPS      86K IOPS

Performance-wise, the M5M is slightly behind the M5 Pro Xtreme, but given the limited NAND channels, the performance is very good for an mSATA SSD. Below are the complete specs for each capacity of the M5M:

Plextor M5M mSATA Specifications

Capacity           64GB        128GB       256GB
Controller         Marvell 88SS9187
NAND               Toshiba 19nm Toggle-Mode MLC
Cache (DDR3)       128MB       256MB       512MB
Sequential Read    540MB/s     540MB/s     540MB/s
Sequential Write   160MB/s     320MB/s     430MB/s
4KB Random Read    73K IOPS    80K IOPS    79K IOPS
4KB Random Write   42K IOPS    76K IOPS    77K IOPS
Warranty           3 years

The M5M tops out at 256GB because that's the maximum capacity you can currently achieve with four NAND packages and 8GB dies (4 packages x 8 dies x 8GB). It's possible that we'll see a 512GB model later once 16GB-per-die NAND is more widely available.
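
If you want to double-check that math, here's a minimal sketch of the capacity arithmetic; the package and die counts are the ones discussed above, and the helper function is purely illustrative:

    # Raw NAND capacity for a given package/die configuration (illustrative helper).
    def max_capacity_gb(packages, dies_per_package, die_size_gb):
        return packages * dies_per_package * die_size_gb

    print(max_capacity_gb(4, 8, 8))    # 256 - the M5M's current ceiling
    print(max_capacity_gb(4, 8, 16))   # 512 - possible once 16GB dies are common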

Similar to Plextor's other SSDs, the M5M uses DRAM from Nanya and NAND from Toshiba. A single 512MB DDR3-1333 chip acts as a cache and is accompanied by four 64GB (8x 8GB die) MLC NAND packages. The small chip you're seeing is an 85MHz 8Mb serial NOR flash chip from Macronix, which is used to house the drive's firmware. This isn't anything new as Plextor has always used NOR flash to store the firmware; only the package is different in order to meet mSATA dimension requirements.

Removing the sticker reveals the heart of the M5M: The Marvell 88SS9187. 

I discovered a weird bug while testing the M5M. Every once in a while, the drive would drop to SATA 3Gbps speeds (~220MB/s in Iometer) after a secure erase, and performance wouldn't recover until another secure erase command was issued. I couldn't find any logic behind the bug as the slowdowns were totally random; sometimes the drive went through a dozen cycles (secure erase, test, repeat) without issue, while on other occasions the problem occurred after nearly every secure erase. At first I thought it was my mSATA to SATA 6Gbps adapter, so I asked Plextor for a new adapter and sample to make sure we were not dealing with defective hardware. However, the bug persisted. I've noticed similar behavior in the M5 Pro Xtreme (though not in the original M5 Pro), which is why I'm guessing the bug is firmware related (a hardware issue would be much harder to fix).

To date, Plextor has not been able to reproduce the bug, although I'm still working with their engineers to replicate our testing methodology as closely as possible. I don't think the bug will be a huge issue for most buyers, as there's rarely a need to secure erase the drive, but it's still something to keep in mind when looking at the M5M.
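
As a side note for anyone who wants to check whether a drive in their own system has quietly negotiated a 3Gbps link: the sketch below is not part of our test methodology (we test under Windows with Iometer); it assumes a Linux machine where libata exposes the negotiated SATA link speed through sysfs.

    # Linux-only sketch: read the negotiated SATA link speed for every ATA link
    # the kernel exposes and flag anything that has fallen back to 3Gbps.
    import glob

    def negotiated_sata_speeds():
        speeds = {}
        for path in glob.glob("/sys/class/ata_link/link*/sata_spd"):
            link = path.split("/")[-2]
            with open(path) as f:
                speeds[link] = f.read().strip()   # e.g. "6.0 Gbps" or "3.0 Gbps"
        return speeds

    for link, speed in sorted(negotiated_sata_speeds().items()):
        note = "  <- running at 3Gbps" if speed.startswith("3.0") else ""
        print(link + ": " + speed + note)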

Test System

CPU                  Intel Core i5-2500K running at 3.3GHz (Turbo and EIST enabled)
Motherboard          AsRock Z68 Pro3
Chipset              Intel Z68
Chipset Drivers      Intel 9.1.1.1015 + Intel RST 10.2
Memory               G.Skill RipjawsX DDR3-1600 2 x 4GB (9-9-9-24)
Video Card           XFX AMD Radeon HD 6850 XXX (800MHz core clock; 4.2GHz GDDR5 effective)
Video Drivers        AMD Catalyst 10.1
Desktop Resolution   1920 x 1080
OS                   Windows 7 x64

 

Comments

  • JellyRoll - Thursday, April 18, 2013 - link

    The consistency testing and all trace-based testing used by this site are run without partitions or filesystems, and with no TRIM functionality. This has been disclosed by the staff in the comment sections of previous reviews.
  • bobsmith1492 - Wednesday, April 17, 2013 - link

    Hi Kristian,
    Let me know the regulator part number and I can calculate the loss in the regulator. The main difference is whether it is a switching or linear part. A linear part will waste (5 - 3.3)/5 of the power, or 34%, neglecting the usually small quiescent current. A switcher will waste less, usually 10-20%.
  • Kristian Vättö - Wednesday, April 17, 2013 - link

    It's a Micrel 29150 as far as I know. Here's the datasheet: http://www.micrel.com/_PDF/mic29150.pdf
  • Ashaw - Wednesday, April 17, 2013 - link

    That is a linear part. Current in = current out + the ground pin current. See the graph on page 10. The ground current is about 1/50 of the output current in this part, so the input current is a good approximation of the output current.
  • Ashaw - Wednesday, April 17, 2013 - link

    So the powers in the graphs above should be approx 0.41W, 2.75W and 2.98W respectively. (Maybe slightly less in the lower digit if I were to include regulator losses.)
  • bobsmith1492 - Wednesday, April 17, 2013 - link

    Agreed, the SSD is using approximately 66% of the measured power on the 5V rail.
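
For reference, a minimal sketch of the arithmetic being discussed here, assuming an ideal linear regulator with negligible ground-pin current; the 3.0W input figure is purely illustrative, not a measurement from the review:

    # Split power measured on the 5V rail into SSD power and regulator loss,
    # assuming a linear regulator dropping 5V to 3.3V with negligible ground current.
    V_IN, V_OUT = 5.0, 3.3

    def ssd_power(p_measured_5v):
        return p_measured_5v * (V_OUT / V_IN)            # ~66% reaches the SSD

    def regulator_loss(p_measured_5v):
        return p_measured_5v * (V_IN - V_OUT) / V_IN     # ~34% is lost as heat

    print(f"{ssd_power(3.0):.2f}W to the SSD, {regulator_loss(3.0):.2f}W lost")   # 1.98W, 1.02W
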
  • JellyRoll - Wednesday, April 17, 2013 - link

    There are two problems with this statement:
    "In our Intel SSD DC S3700 review Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time."

    1. Anand did not introduce this testing; another website did.
    2. It isn't looking at individual operations; thousands of operations are happening per second, hence the term 'IOPS' (I/Os Per Second).
  • JellyRoll - Wednesday, April 17, 2013 - link

    Actually, there is a third problem with the statement: it isn't looking at latency either. It is looking at IOPS, which is quite different from latency. There are no latency numbers in this test.
  • JPForums - Thursday, April 18, 2013 - link

    There are no latency numbers displayed directly in the results, but latencies are implicit in the IOPS measurement. You may not be getting individual operation latencies, but IOPS is the inverse of average operation latency. Just divide 1 by the IOPS number and you'll get your average operation latency.

    In general, I give reviewers the benefit of the doubt and try to put aside small slip ups in nomenclature or semantics as long as it is relatively easy to understand the points they are trying to make. That said, you seem to have it out for Kristian (or perhaps Anandtech as a whole), giving no slack and even reading things into statements that I'm not sure are there. I have no vested interest in Anandtech beyond the interest of reading good reviews, but I have to ask, did Kristian kick your dog or something? I'm honestly interested if you have a legitimate grievance.
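
A minimal sketch of the conversion being described here; strictly, 1/IOPS equals the average per-operation latency only at a queue depth of one (at higher queue depths it is the mean completion interval rather than the per-operation latency):

    # Average time per completed I/O implied by an IOPS figure.
    def avg_latency_ms(iops):
        return 1000.0 / iops

    # Using the M5M's rated 4KB random write figure (77K IOPS) as an example:
    print(round(avg_latency_ms(77000), 4), "ms")   # ~0.013 ms per operation
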
  • JellyRoll - Thursday, April 18, 2013 - link

    Pointing out numerous problems with methodology is simply that; in particular, the consistency tests are wildly misleading for a number of reasons, not the least of which is an unrealistic workload. I will not resort to replying to thinly veiled flamebait attempts.
