Miscellaneous Factors and Final Words

The Synology RS10613xs+ is a 10-bay NAS, so many disk configurations are possible (JBOD / RAID-0 / RAID-1 / RAID-5 / RAID-6 / RAID-10). Most users looking for a balance between performance and redundancy will choose RAID-5. Hence, we performed all our expansion / rebuild duration testing as well as our power consumption measurements with the unit configured in RAID-5 mode. The disks used for benchmarking (OCZ Vector 120 GB) were also used in this section. The table below presents the average power consumption of the unit as well as the time taken for various RAID-related activities.

RS10613xs+ RAID Expansion and Rebuild / Power Consumption
| Activity | Duration (HH:MM:SS) | Outlet 1 Power (W) | Outlet 2 Power (W) | Total Power (W) |
|---|---|---|---|---|
| Diskless | | 52.9 | 67.4 | 120.3 |
| Single Disk Initialization | | 46.5 | 61.61 | 108.11 |
| RAID-0 to RAID-1 (116 GB to 116 GB / 1 to 2 drives) | 00:30:05 | 44.4 | 59.37 | 103.77 |
| RAID-1 to RAID-5 (116 GB to 233 GB / 2 to 3 drives) | 00:37:53 | 49.82 | 65.91 | 115.73 |
| RAID-5 Expansion (233 GB to 350 GB / 3 to 4 drives) | 00:24:10 | 54.42 | 70.98 | 125.4 |
| RAID-5 Expansion (350 GB to 467 GB / 4 to 5 drives) | 00:21:40 | 57.61 | 74.29 | 131.9 |
| RAID-5 Expansion (467 GB to 584 GB / 5 to 6 drives) | 00:21:10 | 61.1 | 78.29 | 139.39 |
| RAID-5 Expansion (584 GB to 700 GB / 6 to 7 drives) | 00:21:10 | 63.77 | 81.23 | 145 |
| RAID-5 Expansion (700 GB to 817 GB / 7 to 8 drives) | 00:20:41 | 66.8 | 85 | 151.8 |
| RAID-5 Expansion (817 GB to 934 GB / 8 to 9 drives) | 00:22:41 | 67.92 | 86.16 | 154.08 |
| RAID-5 Expansion (934 GB to 1051 GB / 9 to 10 drives) | 00:25:11 | 69.34 | 87.36 | 156.7 |
| RAID-5 Rebuild (1168 GB to 1285 GB / 9 to 10 drives) | 00:19:33 | 59.78 | 76.6 | 136.38 |

Unlike with Atom-based units, RAID expansion and rebuild times do not grow progressively longer as the number of disks increases.
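For reference, the volume sizes in the table follow the usual RAID-5 capacity rule of (n - 1) times the per-disk capacity. The short sketch below works through that arithmetic; the ~116.8 GB of usable space per 120 GB OCZ Vector drive is an assumption inferred from the table, with partitioning and DSM system overhead accounting for the difference from the raw capacity.

```python
# Expected RAID-5 volume size as the array grows. PER_DISK_GB is an
# assumption inferred from the table above, not a measured value.

PER_DISK_GB = 116.8  # assumed usable capacity per member disk

def raid5_capacity(num_disks: int) -> float:
    """RAID-5 reserves one disk's worth of parity: usable = (n - 1) * disk."""
    if num_disks < 3:
        raise ValueError("RAID-5 needs at least 3 disks")
    return (num_disks - 1) * PER_DISK_GB

for n in range(3, 11):
    print(f"{n:2d} drives -> ~{raid5_capacity(n):.0f} GB usable")
```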

Coming to the business end of the review, the Synology RS10613xs+ manages to tick all the right boxes in its market segment. Support for both SAS and SATA disks ensures compatibility with the requirements of a wide variety of SMBs and SMEs. We have not even covered some of the exciting SMB-targeted features in DSM, such as Synology High Availability (which uses a dedicated second unit as a seamless failover replacement) and official support for multiple virtualization solutions, including VMware, Citrix and Hyper-V.

A couple of weeks back, Synology introduced the follow-up SATA-only RS3614xs+ with 12 bays and slots for up to two 10G NICs. Compared to the advertised 2000 MBps of the RS10613xs+, the RS3614xs+ can go up to 3200 MBps and 620K IOPS. Given Synology's commitment to this lineup, SMBs looking for enterprise features in their storage server would do little wrong in going with Synology's xs+ series as the perfect mid-point between a NAS and a SAN.

Comments

  • mfenn - Friday, December 27, 2013 - link

    The 802.3ad testing in this article is fundamentally flawed. 802.3ad does NOT, repeat NOT, create a virtual link whose throughput is the sum of its components. What it does is provide a mechanism for automatically selecting which link in a set (bundle) to use for a particular packet based on its source and destination. The definition of "source and destination" depends on the particular hashing algorithm you choose, but the common algorithms will all hash a network file system client / server pair to the same link.

    In a 4 x 1 Gb/s + 2 x 10 Gb/s 802.3ad link aggregation group, you would expect two-thirds of the clients to get hashed to the 1 Gb/s links and one-third to the 10 Gb/s links (see the sketch after this comment). In a situation where all clients run in lock-step (i.e. everyone must complete their tests before moving on to the next), you would expect the 10 Gb/s clients to be limited by the 1 Gb/s ones, thus giving a ~6 Gb/s line rate, or roughly a 600 MB/s user-data result.

    Since 2 * 10 Gb/s > 6 * 1 Gb/s, I recommend retesting with only the two 10 Gb/s links in the 802.3ad aggregation group.
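To make the hashing behaviour concrete, here is a minimal sketch of 802.3ad-style link selection. The hash (XOR of the two MAC addresses modulo the link count) and all of the MAC addresses are illustrative assumptions; real switches use vendor-specific algorithms, but the key property is the same: a given client/server pair always lands on one member link.

```python
# Illustrative 802.3ad-style link selection (not any vendor's actual hash).
# A flow's source/destination MAC pair always hashes to the same member
# link, so no single client/server pair can exceed one link's speed.

def select_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % num_links

server = "00:11:32:aa:bb:01"                      # hypothetical NAS MAC
clients = [f"00:25:90:00:00:{i:02x}" for i in range(12)]

# Hypothetical 6-link bundle: members 0-3 are 1 Gb/s, members 4-5 are 10 Gb/s.
for mac in clients:
    link = select_link(mac, server, 6)
    speed = "10 Gb/s" if link >= 4 else "1 Gb/s"
    print(f"client {mac} -> member link {link} ({speed})")
```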
  • Marquis42 - Friday, December 27, 2013 - link

    Indeed, that's what I was going to get at when I asked more about the particulars of the setup in question. Thanks for just laying it out, saved me some time. ;)
  • ganeshts - Friday, December 27, 2013 - link

    mfenn / Marquis42,

    Thanks for the note. I indeed realized this issue after processing the data for the Synology unit. Our subsequent 10GbE reviews, which are slated to go out over the next week or so (the QNAP TS-470 and the Netgear ReadyNAS RN-716), were evaluated with only the 10GbE links in aggregated mode (and the 1 GbE links disconnected).

    I will repeat the Synology multi-client benchmark with RAID-5 / 2 x 10Gb 802.3ad and update the article tomorrow.
  • ganeshts - Saturday, December 28, 2013 - link

    I have updated the piece with the graphs obtained by using just the 2 x 10G links in 802.3ad dynamic link aggregation. I believe the numbers don't change much compared to teaming all six ports together.

    Just for more information on our LACP setup:

    We are using the GSM7352S's SFP+ ports teamed with link trap and STP mode enabled, in dynamic link aggregation mode, obviously. The Hash Mode is set to 'Src/Dest MAC, VLAN, EType, Incoming Port'.

    I did face problems when evaluating other units: having the 1 Gb links active and connected to the same switch while the 10G ports were link-aggregated would bring down the benchmark numbers. I have since resolved that by completely disconnecting the 1G links in multi-client mode for the 10G-enabled NAS units.
  • shodanshok - Saturday, December 28, 2013 - link

    Hi all,
    while I understand that RAID5 surely has its place, RAID10 is generally a much better choice, both for redundancy and for performance.

    The RAID5 read-modify-write penalty presents itself in a pretty heavy way with anything that does many small writes, such as databases and virtual machines (see the sketch after this comment). So the only case where I would create a RAID5 array is when it will be used as a storage archive (e.g. a fileserver).

    On the other hand, many, many sysadmins create RAID5 arrays "by default" and expect to consolidate many virtual machines on them. Unless you have a very high-end RAID controller (with 512+ MB of NVCache), those machines will suffer badly from RAID5 and alignment issues, which are basically non-existent on RAID10.

    One exception can be made for SSD arrays: in that case, a parity-based scheme (RAID5 or, better, RAID6) can do its work very well, as SSDs have no seek latency and tend to be of lower capacity than mechanical disks. However, alignment issues remain significant, and need to be taken into account when creating both the array and the virtual machines on top of it.

    Regards.
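As a rough illustration of the read-modify-write penalty described above, here is a minimal sketch comparing the back-end disk operations generated by one small random write on RAID-5 versus RAID-10. The per-disk IOPS figure is a hypothetical example, and the model ignores controller caching, which is exactly the mitigation mentioned in the comment.

```python
# Back-of-the-envelope small-write penalty, ignoring any controller cache.

def raid5_small_write_ios() -> int:
    # Read old data + read old parity, then write new data + new parity.
    return 4

def raid10_small_write_ios() -> int:
    # Write the block to both halves of the mirror.
    return 2

DISKS = 10
IOPS_PER_DISK = 150  # hypothetical 7200 RPM drive

raid5_ceiling = DISKS * IOPS_PER_DISK / raid5_small_write_ios()
raid10_ceiling = DISKS * IOPS_PER_DISK / raid10_small_write_ios()

print(f"RAID-5  random-write ceiling: ~{raid5_ceiling:.0f} IOPS")
print(f"RAID-10 random-write ceiling: ~{raid10_ceiling:.0f} IOPS")
```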
  • sebsta - Saturday, December 28, 2013 - link

    Since the introduction of 4k sector size disks things have changed a lot,
    at least in the ZFS world. Everyone who is thinking about building their
    storage system with ZFS and RaidZ should see this Video.

    http://zfsday.com/zfsday/y4k/

    The bad stuff for RaidZ users starts at 17:00. Here one of the co-creators of ZFS basically tells you:

    Stay away from RaidZ if you are using 4k sector disks.
  • hydromike - Sunday, December 29, 2013 - link

    Whether this is still a problem depends on the OS implementation. Many of the commercial ZFS vendors have had this fixed for a while now, 18 to 24 months. FreeNAS has fixed this issue in its latest release, 9.2.0. ZFS has always been a command-line-heavy operation where you really need to understand the drive setup and tune it for the best speed.
  • sebsta - Sunday, December 29, 2013 - link

    I don't know much about FreeNAS, but like FreeBSD they get their ZFS from Illumos, and the Illumos ZFS implementation has no fix. What is ZFS supposed to do if you write 8k to a RaidZ with 4 data disks when the sector size of a disk is 4k?

    The video explains what happens on Illumos. You will end up with something like this (a simplified overhead sketch follows this comment):

    1st 4k data -> disk1
    2nd 4k data -> disk2
    1st 4k data -> disk3
    2nd 4k data -> disk4
    Parity -> disk5

    So you have written the same data twice plus parity: much like mirroring, but with the additional overhead of calculating and writing the parity. Has FreeNAS changed the ZFS implementation in that regard?
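To put that example in rough numbers, here is a minimal sketch of RaidZ1 space overhead for a single record. It follows the commonly described allocation rules (one parity sector per stripe row, plus padding to a multiple of parity + 1 sectors) and is a simplification rather than the exact ZFS allocator; the 8K record and 5-disk vdev mirror the scenario above.

```python
# Simplified RaidZ1 allocation model: data sectors, one parity sector per
# stripe row, and padding so each allocation is a multiple of (parity + 1)
# sectors. Ignores compression, metadata and gang blocks.
import math

def raidz1_sectors(record_bytes: int, sector_bytes: int, disks: int) -> int:
    data = math.ceil(record_bytes / sector_bytes)
    parity = math.ceil(data / (disks - 1))   # each stripe row spans (disks - 1) data sectors
    total = data + parity
    return total + (-total) % 2              # pad to a multiple of 2 for RaidZ1

for sector in (512, 4096):
    used = raidz1_sectors(8 * 1024, sector, disks=5) * sector
    overhead = used / (8 * 1024) - 1
    print(f"{sector:4d}B sectors: an 8K record occupies {used // 1024}K ({overhead:.0%} overhead)")
```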
  • sebsta - Sunday, December 29, 2013 - link

    I did a quick search, and at least as of January this year FreeBSD had the same issue.
    See here: https://forums.freebsd.org/viewtopic.php?&t=37...
  • shodanshok - Monday, December 30, 2013 - link

    Yes, this is true, but for this very same reason many enterprise disks remain at 512 bytes per sector.
    Take the "enterprise drives" from WD:
    - the low-cost WD SE are Advanced Format ones (4K sector size)
    - the somewhat pricey WD RE have classic 512B sectors
    - the top-of-the-line WD XE have 512B sectors

    So the 4K-formatted disks are positioned for storage-archiving duties, while for VMs and DBs the 512B disks remain the norm.

    Regards.
