The Plextor M3

Plextor sent us a 256GB model of their M3 series. Below is a table containing the specifications of their M3 line.

Plextor M3 Specifications
Model PX-64M3 PX-128M3 PX-256M3 PX-512M3
Raw NAND Capacity 64GiB 128GiB 256GiB 512GiB
Formatted Capacity 59.6GiB 119.2GiB 238.4GiB 476.8GiB
Number of NAND Packages 8 8 8 8
Number of die per Package 1 2 4 8
Sequential Read 520MB/s 510MB/s 510MB/s 525MB/s
Sequential Write 175MB/s 210MB/s 360MB/s 445MB/s
4K Random Read 55K IOPS 70K IOPS 70K IOPS 56K IOPS
4K Random Write 40K IOPS 50K IOPS 65K IOPS 30K IOPS
Cache (DDR3) 128MB 256MB 512MB 512MB
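
The formatted capacities follow directly from the decimal-vs-binary unit mismatch: the advertised capacity is in decimal gigabytes (10^9 bytes), while the operating system reports binary gibibytes (2^30 bytes). A quick sketch of the conversion:

```python
# Advertised capacity uses decimal gigabytes (10^9 bytes per GB);
# operating systems report binary gibibytes (2^30 bytes per GiB).
def gb_to_gib(gb):
    return gb * 10**9 / 2**30

for gb in (64, 128, 256, 512):
    print(f"{gb}GB -> {gb_to_gib(gb):.1f}GiB")
# -> 59.6, 119.2, 238.4, 476.8
```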

The Plextor M3 is available in all the standard capacities. In light of the performance specifications, the M3 looks very promising: it beats its closest match, the Crucial m4, in all aspects, and it's competitive even with SandForce based SSDs. The stated random read figures in particular are great.

NewEgg Price Comparison (4/2/2012)
  64GB 128GB 256GB 512GB
Plextor M3 $110 $180 $340 $660
Crucial m4 $88 $155 $315 $630
Intel 520 Series $110 $180 $345 $800
Samsung 830 Series $105 $185 $300 $780
OCZ Vertex 3 $90 $178 $340 $770

Price-wise, the M3 is not the cheapest SSD, especially in the smaller capacities. There is about a $10-15 premium on the 64GB and 128GB models, while the 256GB and 512GB models are more competitively priced. Crucial's m4, however, comes in cheaper than the M3 at every capacity, so that will be a key matchup where Plextor has to win on performance or come down in price.
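
Cost per gigabyte makes the comparison with the m4 easier to see at a glance. A quick sketch using the NewEgg prices from the table above:

```python
# NewEgg prices (4/2/2012) from the table above, in USD, keyed by capacity in GB.
prices = {
    "Plextor M3": {64: 110, 128: 180, 256: 340, 512: 660},
    "Crucial m4": {64: 88, 128: 155, 256: 315, 512: 630},
}

# Cost per advertised gigabyte for each drive and capacity point.
for drive, caps in prices.items():
    per_gb = {cap: round(price / cap, 2) for cap, price in caps.items()}
    print(drive, per_gb)
```

At 256GB, for example, this works out to about $1.33/GB for the M3 versus $1.23/GB for the m4.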

The external design of the Plextor M3 is very solid. When I first saw it, it reminded me of the Samsung 830 with its brushed metal finish. Only the Plextor logo is printed on the front; the model and other important information are on a sticker on the back of the drive. The package includes a 3.5" bracket, a quick installation guide, and a software CD with a clone/backup utility and a performance analyzer. Plextor backs the M3 with a top-notch 5-year warranty as well.

Each of the main components (controller, NAND devices, and DRAM) has its own thermal pad. Since the chassis is also made of metal, heat dissipation should not be a problem.

Inside we find Marvell’s 88SS9174-BLD2 controller (or just 9174). This is the same controller that's in Crucial's m4, but the firmware is custom developed by Plextor. It’s actually a bit surprising, yet very refreshing, to see a Marvell based SSD for a change. Everyone seems to have a SandForce solution these days. We have seen that Marvell can be competitive; you just need to take the time to customize the firmware to get good performance. The stock SandForce firmware is fast enough, so it's obvious that many companies choose to go with the easiest option.

Flip the PCB over and we find eight Toshiba 24nm 2-bit-per-cell MLC NAND devices. That's coupled with two 256MB DDR3-1333 chips from Nanya, giving a total of 512MB of DDR3 cache.

Toshiba uses a Toggle-Mode interface and the current iteration (2.0) of Toggle-Mode NAND is good for up to 400MT/s per interface. Rating speed by transfers is a bit annoying as it doesn't tell us the actual bandwidth—for that we need the width of the channel and transfers per second. The channel in this case is 8 bits wide, so that works out to be 3.2Gbps per interface, or 400MB/s. With eight NAND packages, the maximum throughput works out to be 3200MB/s, over four times more than what SATA 6Gb/s can provide. Of course, reading from NAND and dumping the data into a register is one thing; it's another matter to actually transfer the data to a host controller over the interface.
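
The arithmetic above can be sketched out; the 750MB/s figure used here is the raw SATA 6Gb/s line rate, before encoding overhead:

```python
# Toggle-Mode 2.0 NAND interface throughput, per the paragraph above.
transfers_per_s = 400e6  # 400 MT/s per interface
bus_width_bits = 8       # each channel is 8 bits wide
packages = 8             # eight NAND packages in parallel

# 400 MT/s * 8 bits = 3.2 Gb/s = 400 MB/s per interface
per_interface_mbps = transfers_per_s * bus_width_bits / 8 / 1e6
assert per_interface_mbps == 400.0

aggregate_mbps = per_interface_mbps * packages
assert aggregate_mbps == 3200.0

# Raw SATA 6Gb/s line rate is 750 MB/s, so the NAND can supply
# over four times what the host interface can move.
print(aggregate_mbps / 750)  # ~4.3x
```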

Before we get to the benchmarks, a quick word about firmware updates. Our drive came with FW 1.01, the latest at the time of testing, and all of our tests were run with it. Plextor has since released FW 1.02, which is supposed to fix some issues, though the release notes do not claim any performance improvements. The update process itself is very simple: download a small ISO (~3MB) from Plextor's site, burn it to a CD or USB stick, and boot from that. Press Enter and it automatically flashes the drive. I even had all my other drives plugged in and there was no problem.

The Test

CPU: Intel Core i5-2500K running at 3.3GHz (Turbo and EIST enabled)
Motherboard: ASRock Z68 Pro3
Chipset: Intel Z68
Chipset Drivers: Intel 9.1.1.1015 + Intel RST 10.2
Memory: G.Skill RipjawsX DDR3-1600 2 x 4GB (9-9-9-24)
Video Card: XFX AMD Radeon HD 6850 XXX (800MHz core clock; 4.2GHz GDDR5 effective)
Video Drivers: AMD Catalyst 10.1
Desktop Resolution: 1920 x 1080
OS: Windows 7 x64

Our regular readers may notice that my testbed is not exactly the same as Anand's. Anand's setup is based on an Intel motherboard with the H67 chipset, whereas mine is an ASRock board with Intel's Z68 chipset. The important bit is that both feature native SATA 6Gb/s support and both setups use the same drivers. Other features and components don't really have an effect on SSD testing; for example, the average CPU usage during write speed tests is less than 5%.


113 Comments


  • jwilliams4200 - Thursday, April 5, 2012 - link

    I know it is Anand's fault and you are just parroting his erroneous statements, but you guys really need to do better with your steady-state testing. Sandforce is actually among the worst at steady-state performance, and Plextor M3(P) is the best of the consumer SSDs at steady-state performance.

    anandtech.com should use some version of the SNIA steady-state testing protocol.

    Using HDTach is just crazy, since it writes a stream of zeros that is easily compressed by Sandforce SSDs, and thus does not give a good indication of steady-state performance (which SNIA specifies should be tested with random data streams). Besides, the workload of sequential writes spaced across the entire SSD is not realistic at all.

    Here are a couple reviews that do a decent job of steady-state testing (could be better, but at least they are far superior to anandtech.com's terrible testing protocols):

    scroll down to "Enterprise Synthetic Benchmarks" and look at the "... steady average speed" graphs for steady-state performance:
    http://www.storagereview.com/plextor_pxm3p_ssd_rev...

    http://www.xbitlabs.com/articles/storage/display/m...
  • bji - Thursday, April 5, 2012 - link

    Jarred and Kristin, I know you guys are reading these comments ... I think you would do very well to respond to this comment. You guys are doing great articles but this looks like something you should definitely consider if you want to be more accurate on steady-state performance.

    I personally very much care about this issue as the last thing I want is for my drive to fall into JMicron style performance holes. One of the factors that I used in deciding to get the Intel 520s that I got a few weeks ago was the fact that your tests showed that under torture situations the performance is still good. If your tests are not accurate, then I think you really need to address this.
  • Beenthere - Thursday, April 5, 2012 - link

    I use a variety of sources for SSD reviews. StorageReview uses some different metrics that may be of interest to those trying to make sense of SSD performance, as the benches often do NOT mirror real-world performance.

    To me the Plextor M3 just isn't where it needs to be. The M3 Pro should be the entry-level Plextor SSD IMO. Its performance is a little better but currently it's over-priced. It should be priced as the M3 is now.

    http://www.storagereview.com/reviews
  • Anand Lal Shimpi - Thursday, April 5, 2012 - link

    Note that we don't use the HDTach approach for SandForce TRIM testing and instead fill the drive with incompressible data, throw incompressible random writes at the drive, and then use AS-SSD to measure incompressible write speed afterwards.

    Note that fully random data patterns are absolutely not indicative of client workloads at all. What you are saying is quite correct for certain enterprise applications, but not true in the consumer client space (this is also why we have a different enterprise SSD testing suite). IOs in the consumer space end up being a combination of pseudo-random and sequential, but definitely not fully random and definitely not fully random over 100% of the LBA space.

    SandForce actually behaves very well over the long run for client workloads as we've mentioned in the past. We have seen write amplification consistently below 1x for client workloads, which is why the SF drives do so very well in client systems where TRIM isn't present.

    Our current recommendation for an environment like OS X however continues to be Samsung's SSD 830. Its firmware tends to be a lot better behaved under OS X (for obvious reasons given Samsung's close relationship with Apple), regardless of write amplification and steady state random write behavior.

    Take care,
    Anand
  • jwilliams4200 - Thursday, April 5, 2012 - link

    "Note that we don't use the HDTach approach for SandForce TRIM testing and instead fill the drive with incompressible data, throw incompressible random writes at the drive, and then use AS-SSD to measure incompressible write speed afterwards."

    What?

    Are you really saying that you test Sandforce SSDs differently from non-Sandforce SSDs, and then you compare the results?

    Surely the first rule any decent tester learns is that all devices must be tested in the same way if you are to have a prayer of comparing results.
  • Anand Lal Shimpi - Thursday, April 5, 2012 - link

    We don't directly compare the TRIM/torture-test results; they are simply used as a tool to help us characterize the drive and understand the controller's garbage collection philosophies. HDTach (or an equivalent) is typically used for that on non-SF drives because you can actually visualize high-latency GC routines (dramatic peaks/valleys).

    The rest of the numbers are directly comparable.

    Take care,
    Anand
  • jwilliams4200 - Thursday, April 5, 2012 - link

    So your reviews should not make comments comparing the steady-state performance of Sandforce drives to non-Sandforce drives, since you have no objective basis of comparison.

    SNIA guidelines for SSD testing clearly state that the "tests shall be run with a random data pattern". Other review sites that do steady-state testing comply with this protocol.

    anandtech.com is urgently in need of improving its steady-state test protocols and complying with industry standard testing guidelines, since currently anandtech.com is making misleading statements about the relative performance of SSDs in steady-state tests.
  • Anand Lal Shimpi - Thursday, April 5, 2012 - link

    As I mentioned before, we have done extensive long term analysis of SandForce drives and came away with a very good understanding of their behavior in client workloads - that's the feedback that's folded into reviews. For client workloads, SF drives have extremely good steady-state characteristics since a lot of data never gets written to NAND (I've mentioned this in previous articles, pointing to sub-1x write amplification factors after several months of regular use).

    We use both incompressible and compressible data formats in our tests, as well as have our own storage suites that provide a mixture of both. No client system relies on 100% random data patterns or 100% random data access, it's simply not the case. We try our best to make our client tests representative of client workloads.

    Our enterprise test suite does look different however, and included within it is a random write steady state test scenario. Even within the enterprise world it is not representative of all workloads, but there are some where it's an obvious fit.

    Take care,
    Anand
  • jwilliams4200 - Thursday, April 5, 2012 - link

    "As I mentioned before, we have done extensive long term analysis of SandForce drives and came away with a very good understanding of their behavior in client workloads - that's the feedback that's folded into reviews."

    And as I have explained before, your tests are flawed. You do NOT have a good understanding, because you are unable to specify the actual data that was written to the SSDs during your testing. You are just guessing.

    All other studies that have looked at compressibility of data written to Sandforce SSDs in typical consumer workloads have shown that most data is incompressible. The only common data that is compressible is OS and program installs, but that is only done once for most users. Probably your testers were installing lots of programs and OS's and running benchmarks that write easily compressible data, but that is not typical of most consumers. But the bottom line is that you seem to have no idea of what was actually written in your "analysis". So you really do not have a good understanding.

    Day to day, most home users write Office documents (automatically compressed before saving), MP3 files, JPGs, compressed video files, and hibernation files (automatically compressed in Win7). All of these are incompressible to sandforce.

    But none of that is really relevant to the question of how to test SSDs. The fact is that the only non-arbitrary way to do it is to use random, incompressible data patterns. There is a reason the industry standard SSD test protocols defined by SNIA specify mandatory random data patterns -- because that is the only completely objective test.
  • Anand Lal Shimpi - Thursday, April 5, 2012 - link

    Again - we do use incompressible data patterns for looking at worst case performance on SF drives.

    There's no impact on incompressible vs. compressible data with these other controllers, so the precondition, high-QD torture, HDTach pass is fine for other drives.

    As far as our internal experiment goes - we did more than just install/uninstall programs for 3 - 8 months. Each editor was given a SandForce drive and many of them used the drives as their boot/application drive for the duration of the study. My own personal workstation featured a SF drive for nearly a year, average write amplification over the course of that year was under 0.7x. My own workload involves a lot of email, video editing, photo editing, web browsing, HTML work, some software development, Excel, lots of dealing with archives, presentations, etc... I don't know that I even installed a single application during the test period as I simply cloned my environment over.

    We also measured fairly decent write amplification for our own server workloads with Intel's SSD 520.

    Take care,
    Anand
