Testbed Setup and Testing Methodology

Our rackmount NAS testbed uses the same infrastructure and methodology as our reviews of tower form factor units. Performance is evaluated under both single and multiple client scenarios. In the multiple client scenario, we run tests with all available network ports teamed using 802.3ad dynamic link aggregation. For these tests, we use the SMB / SOHO NAS testbed described earlier. This is the first 10 GbE-equipped NAS we have evaluated, and special mention must be made of the Netgear ProSafe GSM7352S-200 switch in our setup, which provided the infrastructure necessary to properly evaluate the capabilities of the Synology RS10613xs+.

AnandTech NAS Testbed Configuration
Motherboard: Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB
CPU: 2 x Intel Xeon E5-2630L
Coolers: 2 x Dynatron R17
Memory: G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8 x 8GB) CAS 10-10-10-30
OS Drive: OCZ Technology Vertex 4 128GB
Secondary Drive: OCZ Technology Vertex 4 128GB
Tertiary Drive: OCZ RevoDrive Hybrid (1TB HDD + 100GB NAND)
Other Drives: 12 x OCZ Technology Vertex 4 64GB (offline in the host OS)
Network Cards: 6 x Intel ESA I-340 Quad-GbE Port Network Adapter
Chassis: SilverStoneTek Raven RV03
PSU: SilverStoneTek Strider Plus Gold Evolution 850W
OS: Windows Server 2008 R2
Network Switch: Netgear ProSafe GSM7352S-200

Thank You!

We thank the following companies for helping us out with our rackmount NAS evaluation:

Supermicro was gracious enough to loan us their mini rack (CSE-RACK14U). An interesting aspect of the mini rack is that its height matches that of a standard workplace desk (30.64"). This allowed us to easily use our existing NAS testbed (tower form factor) and power measurement unit alongside the rackmount components (the NAS under test, the Netgear ProSafe switch, etc.).

We have been using Western Digital 4TB RE (WD4000FYYZ) disks as test hard drives for NAS reviews. As we saw in our previous reviews, RAID rebuilds with these drives take days to complete. With a large number of bays, using hard disks would have been very cumbersome, and hard disks simply don't bring out the performance potential of rackmount units. Therefore, the Synology RS10613xs+ was evaluated with a RAID-5 volume built out of twelve OCZ Vector 120 GB SSDs. Tests were also run with the Intel SSD 520 240 GB drives that Synology supplied along with the review unit. However, to keep benchmark results consistent across different NAS units, the results we present are those obtained with the OCZ Vector SSDs.


In order to evaluate single client performance, we booted up one VM in our testbed and ran Intel NASPT on a CIFS share on the NAS. iSCSI support was evaluated in a similar manner, with a 250 GB iSCSI LUN mapped on the VM. For NFS, we ran IOMeter benchmarks from Linux. To evaluate multiple client performance, we accessed a CIFS share from multiple VMs simultaneously using IOMeter and gathered data on how performance changed with the number of clients and the access pattern (a rough sketch of how such per-client results roll up into the multi-client numbers follows below). Without further digression, let us move on to the performance numbers.
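
As a reference point, here is a minimal sketch of how per-client benchmark output of this kind could be rolled up into multi-client figures. The file layout and column names (client_*.csv, access_pattern, throughput_mbps) are hypothetical stand-ins, not the actual IOMeter export format used for the review:

    import csv
    import glob
    from collections import defaultdict

    def aggregate(results_glob="client_*.csv"):
        # Sum the reported throughput for each access pattern across all clients.
        totals = defaultdict(float)
        clients = 0
        for path in glob.glob(results_glob):
            clients += 1
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    totals[row["access_pattern"]] += float(row["throughput_mbps"])
        return clients, dict(totals)

    if __name__ == "__main__":
        n, totals = aggregate()
        for pattern, mbps in sorted(totals.items()):
            print(f"{pattern}: {mbps:.1f} MB/s total across {n} clients")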

Comments

  • mfenn - Friday, December 27, 2013

    The 802.3ad testing in this article is fundamentally flawed. 802.3ad does NOT, repeat NOT, create a virtual link whose throughput is the sum of its components. What it does is provide a mechanism for automatically selecting which link in a set (bundle) to use for a particular packet based on its source and destination. The definition of "source and destination" depends on the particular hashing algorithm you choose, but the common algorithms will all hash a network file system client / server pair to the same link.

    In a 4 x 1 Gb/s + 2 x 10 Gb/s 802.3ad link aggregation group, you would expect two-thirds of the clients to get hashed to the 1 Gb/s links and one-third to the 10 Gb/s links. In a situation where all clients are running in lock-step (i.e. everyone must complete their tests before moving on to the next), you would expect the 10 Gb/s clients to be limited by the 1 Gb/s ones, thus providing a ~6 Gb/s line rate, or roughly a 600 MB/s user data result.

    Since 2 * 10 Gb/s > 6 * 1 Gb/s, I recommend retesting with only the two 10 Gb/s links in the 802.3ad aggregation group.
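
    A toy illustration of the per-flow link pinning described above, assuming a simple XOR hash over the last octet of the source and destination MAC addresses (real switch hash algorithms differ in detail, and the MAC addresses here are made up, but the pinning behaviour is the same):

        def select_link(src_mac, dst_mac, n_links):
            # Hash the last octet of each MAC; a given client/server pair
            # always lands on the same member link.
            last = lambda mac: int(mac.split(":")[-1], 16)
            return (last(src_mac) ^ last(dst_mac)) % n_links

        link_speeds = [1, 1, 1, 1, 10, 10]   # 4 x 1 Gb/s + 2 x 10 Gb/s bundle
        nas_mac = "00:11:32:aa:bb:01"        # hypothetical NAS MAC
        clients = [f"00:1b:21:00:00:{i:02x}" for i in range(6)]  # hypothetical client MACs

        for mac in clients:
            link = select_link(mac, nas_mac, len(link_speeds))
            print(f"client {mac} -> link {link} ({link_speeds[link]} Gb/s cap)")

    With clients running in lock-step, whichever clients get pinned to the 1 Gb/s members become the bottleneck, which is where the ~6 Gb/s aggregate estimate comes from.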
  • Marquis42 - Friday, December 27, 2013

    Indeed, that's what I was going to get at when I asked more about the particulars of the setup in question. Thanks for just laying it out, saved me some time. ;)
  • ganeshts - Friday, December 27, 2013

    mfenn / Marquis42,

    Thanks for the note. I indeed realized this issue after processing the data for the Synology unit. Our subsequent 10GbE reviews, which are slated to go out over the next week or so (the QNAP TS-470 and the Netgear ReadyNAS RN-716), have been evaluated with only the 10GbE links in aggregated mode (and the 1 GbE links disconnected).

    I will repeat the Synology multi-client benchmark with RAID-5 / 2 x 10Gb 802.3ad and update the article tomorrow.
  • ganeshts - Saturday, December 28, 2013

    I have updated the piece with the graphs obtained by just using the 2 x 10G links in 802.3ad dynamic link aggregation. I believe the numbers don't change too much compared to teaming all the 6 ports together.

    Just for more information on our LACP setup:

    We are using the GSM7352S's SFP+ ports teamed with link trap and STP mode enabled, in dynamic link aggregation mode, obviously. The Hash Mode is set to 'Src/Dest MAC, VLAN, EType, Incoming Port'.

    I did face problems in evaluating other units where having the 1 Gb links active and connected to the same switch while the 10G ports were link-aggregated would bring down the benchmark numbers. I have since resolved that by completely disconnecting the 1G links in multi-client mode for the 10G-enabled NAS units.
  • shodanshok - Saturday, December 28, 2013

    Hi all,
    while I understand that RAID5 surely has its uses, RAID10 is generally a far better choice, both for redundancy and performance.

    The RAID5 read-modify-write penalty presents itself in a pretty heavy way with anything that does many small writes, such as databases and virtual machines. So the only case where I would create a RAID5 array is when it will be used as a storage archive (eg: a fileserver).

    On the other hand, many, many sysadmins create RAID5 arrays "by default", expecting to consolidate many virtual machines on them. Unless you have a very high-end RAID controller (with 512+ MB of NVCache), those machines will suffer badly from RAID5 and alignment issues, which are basically non-existent on RAID10.

    One exception can be made for SSD arrays: in that case, a parity-based scheme (RAID5 or, better, RAID6) can do its work very well, as SSDs have no seek latency and tend to be of lower capacity than mechanical disks. However, alignment issues remain significant, and need to be taken into account when creating both the array and the virtual machines on top of it.

    Regards.
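
    A minimal sketch of the small-write penalty described above, using the classic cost model of 4 back-end I/Os per small random write for RAID5 versus 2 for RAID10 (the disk count and per-disk IOPS below are made-up example numbers; controller caching and full-stripe writes can soften the penalty in practice):

        def raid5_write_iops(n_disks, per_disk_iops):
            # Read old data + read old parity, then write new data + new parity.
            return n_disks * per_disk_iops / 4

        def raid10_write_iops(n_disks, per_disk_iops):
            # Each logical write is simply mirrored to two members.
            return n_disks * per_disk_iops / 2

        n, iops = 12, 180   # e.g. twelve 7200 rpm drives at ~180 random IOPS each
        print(f"RAID5 : ~{raid5_write_iops(n, iops):.0f} random write IOPS")
        print(f"RAID10: ~{raid10_write_iops(n, iops):.0f} random write IOPS")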
  • sebsta - Saturday, December 28, 2013

    Since the introduction of 4k sector size disks things have changed a lot,
    at least in the ZFS world. Everyone who is thinking about building their
    storage system with ZFS and RaidZ should see this Video.

    http://zfsday.com/zfsday/y4k/

    Starting at 17:00 comes the bad stuff for RaidZ users.
    Here one of the co-creators of ZFS basically tells you:

    Stay away from RaidZ if you are using 4k sector disks.
  • hydromike - Sunday, December 29, 2013

    Whether this is a current problem depends on the OS implementation. Many of the commercial ZFS vendors have had this fixed for a while, 18 to 24 months. FreeNAS has fixed this issue in its latest release, 9.2.0. ZFS has been a command-line heavy operation where you really need to understand the drive setup and tune it for the best speed.
  • sebsta - Sunday, December 29, 2013

    I don't know much about FreeNAS, but like FreeBSD they get their ZFS from Illumos.
    The Illumos ZFS implementation has no fix. What is ZFS supposed to do if you write 8k to a RaidZ with 4 data disks when the sector size of a disk is 4k?

    The video explains what happens on Illumos. You will end up with something like this:

    1st 4k data -> disk1
    2nd 4k data -> disk2
    1st 4k data -> disk3
    2nd 4k data -> disk4
    Parity -> disk5

    So you have written the same data twice, plus parity. Much like mirroring, but with the additional overhead of calculating and writing the parity. Has FreeNAS changed the ZFS implementation in that regard?
  • sebsta - Sunday, December 29, 2013

    I did a quick search, and at least as of January this year FreeBSD had the same issues.
    See here https://forums.freebsd.org/viewtopic.php?&t=37...
  • shodanshok - Monday, December 30, 2013

    Yes, this is true, but for this very same reason many enterprise disks remain at 512 bytes per sector.
    Take the "enterprise drives" from WD:
    - the low-cost WD SE are Advanced Format ones (4K sector size)
    - the somewhat pricey WD RE have classical 512B sectors
    - the top-of-the-line WD XE have 512B sectors

    So the 4K-formatted disks are positioned for storage archiving duties, while for VMs and DBs the 512B disks remain the norm.

    Regards.
