Enterprise Storage Bench - Microsoft SQL WeeklyMaintenance

Our final enterprise storage bench test once again comes from our own internal databases. We're looking at the stats DB again; however, this time we're running a trace of our Weekly Maintenance procedure. This procedure runs a consistency check on the 30GB database, followed by an index rebuild on all tables to eliminate fragmentation. As its name implies, we run this procedure weekly against our stats DB.
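
For anyone curious what such a routine looks like in practice, below is a minimal, hypothetical sketch of a weekly maintenance pass (a full consistency check followed by an index rebuild on every user table), written in Python with pyodbc. The connection string, database name, and function name are assumptions for illustration, not details taken from our trace.

```python
# Hypothetical sketch of a weekly maintenance pass: consistency check followed
# by an index rebuild on every user table. Connection details are made up.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=statsdb-host;DATABASE=StatsDB;Trusted_Connection=yes"
)

def weekly_maintenance():
    conn = pyodbc.connect(CONN_STR, autocommit=True)
    cur = conn.cursor()

    # Consistency check on the whole database.
    cur.execute("DBCC CHECKDB ('StatsDB') WITH NO_INFOMSGS")

    # Rebuild all indexes on every user table to eliminate fragmentation.
    cur.execute(
        "SELECT s.name, t.name FROM sys.tables t "
        "JOIN sys.schemas s ON t.schema_id = s.schema_id"
    )
    for schema, table in cur.fetchall():
        conn.cursor().execute(f"ALTER INDEX ALL ON [{schema}].[{table}] REBUILD")

    conn.close()

if __name__ == "__main__":
    weekly_maintenance()
```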

The read:write ratio here remains around 3:1 but we're dealing with far more operations: approximately 1.8M reads and 1M writes. Average queue depth is up to 5.43.
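
As a rough illustration of how figures like these are derived, the sketch below tallies reads, writes, their ratio, and a simple average queue depth from an I/O trace. The CSV column names and file name are hypothetical; the actual trace format used for our benchmark differs.

```python
# Hypothetical trace summary: read/write counts, their ratio, and a simple
# per-record average queue depth. The CSV layout is an assumption.
import csv

def summarize_trace(path):
    reads = writes = qd_sum = samples = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: op, queue_depth
            if row["op"].lower() == "read":
                reads += 1
            else:
                writes += 1
            qd_sum += int(row["queue_depth"])
            samples += 1
    print(f"reads={reads:,}  writes={writes:,}  "
          f"ratio={reads / writes:.2f}:1  avg_qd={qd_sum / samples:.2f}")

summarize_trace("weekly_maintenance_trace.csv")
```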

Microsoft SQL WeeklyMaintenance - Average Data Rate

We see the same 44% performance advantage for the 910 over the P320h in our second SQL benchmark. The P320h remains ahead of the rest of the competition, however.

Average IO latency continues to be a clear strength of the P320h.

Microsoft SQL WeeklyMaintenance - Disk Busy Time

Microsoft SQL WeeklyMaintenance - Average Service Time

Comments

  • JellyRoll - Monday, October 15, 2012 - link

    Of course you have absolutely no experience with virtualization, which would mean that for your archaic workloads you wouldn't need something of this nature.
    Users that purchase this will not be running one database at such low queue depths; that would be an insane waste of money.
    This is designed for high-load OLTP and virtualized environments, not to run the database of one website.
    You may be in IT at some small company, but you haven't seen anything on a datacenter scale, apparently.
  • DataC - Tuesday, October 16, 2012 - link

    JellyRoll is correct. I work for Micron, and we developed the P320h’s controller and firmware through collaboration with enterprise OEMs—which is why we optimized for higher queue depths. When the P320h is run in these environments (which are common in datacenters), you’ll see significantly higher performance than what’s shown in the charts above.
  • jospoortvliet - Tuesday, October 16, 2012 - link

    Yup. And it should be tested on a proper enterprise platform - this test is like running a NASCAR vehicle with the handbrake on.

    Time for an upgrade to a real OS, Anand.
  • Denithor - Monday, October 15, 2012 - link

    Would have liked to see the fastest consumer-grade drive thrown in just to see exactly how much faster enterprise drives go. Also would like to see how this drive would perform in the standard Light and Heavy Bench tests.
  • FunBunny2 - Monday, October 15, 2012 - link

    Actually, against a Fusion-io part, the closest example.
  • jwilliams4200 - Monday, October 15, 2012 - link

    Right, enterprise drives should get all the standard consumer SSD tests run on them in addition to the enterprise tests.
  • mckirkus - Wednesday, October 17, 2012 - link

    And I'd argue a RAMDisk should be included just to get a sense of relative performance.
  • Kevin G - Monday, October 15, 2012 - link

    I'm kinda surprised that there wasn't more discussion about the effects of the native PCI-e controller. Lower latency results do crop up in various benchmarks here. I wonder if the impact is merely 'benchmark only' and not anything that'd be noticeable in more real-world tests.

    By going with 34 nm SLC, they have limited capacity, but this article seems to indicate that the controller is capable of supporting MLC in the 20 to 30 nm range. That would allow it to hit the 4 TB maximum capacity of the controller. I'm also curious how such a change would perform. The current P320h does need a PCI-e 2.0 8x connection, as some of the benchmarks are (barely) exceeding what a PCI-e 2.0 4x link can provide (rough link-bandwidth math is sketched after the comments below). With faster NAND, a move to PCI-e 3.0 8x or PCI-e 2.0 16x may be warranted.

    I'm also curious if multiple P320h's can be used in a system behind a RAID. Overkill the overkill?

    Now for a few general comments about NVMe. I'd love to see NAND chips on DIMMs at the enterprise level. If the controller detects NAND failure or chips reaching their maximum endurance, they could potentially be swapped out. This is akin to current ECC DIMMs. Along those same lines it would be nice to see a SAS or SATA port on the board so that it could fail over to a hard drive in the event of multiple impending NAND failures. The main reasoning I can see to avoid DIMMs would simply be physical space.

    This is also a good preview of what to expect with SATA-Express drives next year. They won't reach such bandwidth figures as they'll be limited to two PCI-e lanes but the latency improvements should carry over with a good controller.
  • PCTC2 - Monday, October 15, 2012 - link

    You could probably just do an OS-level software stripe (like in Linux). I think that would be more beneficial in terms of usable capacity than in raw performance. However, the performance increase could be tangible, depending on your workload.

    As for the link, I think performance is constrained more by the controller than by the NAND. I don't think we need PCIe 3.0 or a PCIe 2.0 x16 link for this iteration of the controller; it wouldn't saturate the link. As you said, some of the tests don't even saturate a PCIe x4 link, if you don't include overhead (and there is overhead).

    Also, Anand did point out a 25nm eMLC version is coming out in the future.

    As for putting chips on DIMMs, for an HH/HL PCIe card that is a waste of space, as you said yourself. Between the controller, DRAM, and then the NAND, the sockets would just take up space. The daughterboard approach allows a much more compact, proprietary layout depending on the board itself. If you wanted an FH/HL card, I'm sure DIMMs would be more feasible.
  • FunBunny2 - Monday, October 15, 2012 - link

    Check out the Sun/Oracle flash appliance. Other niche enterprise flash storage products also exist.
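
On the PCIe link question raised above by Kevin G and PCTC2, here is a rough sketch of per-direction link bandwidth for the configurations mentioned. It accounts only for line-encoding overhead (8b/10b for PCIe 2.0, 128b/130b for PCIe 3.0); packet and protocol overhead would lower the usable figures further.

```python
# Rough per-direction PCIe bandwidth, counting only line-encoding overhead.
GT_PER_LANE = {"2.0": 5.0, "3.0": 8.0}           # gigatransfers/s per lane
ENCODING    = {"2.0": 8 / 10, "3.0": 128 / 130}  # 8b/10b vs 128b/130b

def link_gbs(gen, lanes):
    """Approximate usable GB/s for a PCIe generation and lane count."""
    return GT_PER_LANE[gen] * ENCODING[gen] * lanes / 8  # bits -> bytes

for gen, lanes in [("2.0", 4), ("2.0", 8), ("2.0", 16), ("3.0", 8)]:
    print(f"PCIe {gen} x{lanes}: ~{link_gbs(gen, lanes):.1f} GB/s")
```

This is consistent with the comments above: a PCIe 2.0 x4 link tops out around 2 GB/s, which some results here barely exceed, while 2.0 x16 or 3.0 x8 (roughly 8 GB/s) would only matter with substantially faster NAND behind the controller.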
