Miscellaneous Factors and Final Words

The Synology RS10613xs+ is a 10-bay NAS, and there are many applicable disk configurations (JBOD / RAID-0 / RAID-1 / RAID-5 / RAID-6 / RAID-10). Most users looking for a balance between performance and redundancy are going to choose RAID-5. Hence, we performed all our expansion / rebuild duration and power consumption testing with the unit configured in RAID-5 mode. The disks used for benchmarking (OCZ Vector 120 GB) were also used in this section. The table below presents the average power consumption of the unit as well as the time taken for various RAID-related activities.

RS10613xs+ RAID Expansion and Rebuild / Power Consumption

| Activity | Duration (HH:MM:SS) | Outlet 1 (W) | Outlet 2 (W) | Total (W) |
|----------|---------------------|--------------|--------------|-----------|
| Diskless | — | 52.9 | 67.4 | 120.3 |
| Single Disk Initialization | — | 46.5 | 61.61 | 108.11 |
| RAID-0 to RAID-1 (116 GB to 116 GB / 1 to 2 Drives) | 00:30:05 | 44.4 | 59.37 | 103.77 |
| RAID-1 to RAID-5 (116 GB to 233 GB / 2 to 3 Drives) | 00:37:53 | 49.82 | 65.91 | 115.73 |
| RAID-5 Expansion (233 GB to 350 GB / 3 to 4 Drives) | 00:24:10 | 54.42 | 70.98 | 125.4 |
| RAID-5 Expansion (350 GB to 467 GB / 4 to 5 Drives) | 00:21:40 | 57.61 | 74.29 | 131.9 |
| RAID-5 Expansion (467 GB to 584 GB / 5 to 6 Drives) | 00:21:10 | 61.1 | 78.29 | 139.39 |
| RAID-5 Expansion (584 GB to 700 GB / 6 to 7 Drives) | 00:21:10 | 63.77 | 81.23 | 145 |
| RAID-5 Expansion (700 GB to 817 GB / 7 to 8 Drives) | 00:20:41 | 66.8 | 85 | 151.8 |
| RAID-5 Expansion (817 GB to 934 GB / 8 to 9 Drives) | 00:22:41 | 67.92 | 86.16 | 154.08 |
| RAID-5 Expansion (934 GB to 1051 GB / 9 to 10 Drives) | 00:25:11 | 69.34 | 87.36 | 156.7 |
| RAID-5 Rebuild (1168 GB to 1285 GB / 9 to 10 Drives) | 00:19:33 | 59.78 | 76.6 | 136.38 |

Unlike with Atom-based units, RAID expansion and rebuild times do not grow progressively longer as the number of disks increases.
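As a quick sanity check on the table above: a RAID-5 rebuild writes roughly one disk's worth of data to the replacement drive, and the table's per-step capacity increments suggest about 117 GB of usable space per disk. A minimal sketch of the implied effective write rate (the 117 GB figure is inferred, not stated by Synology):

```python
# Effective throughput implied by the rebuild line of the table.
# Assumption: a rebuild writes ~117 GB (one disk's worth, inferred
# from the ~117 GB capacity step per added drive) to the new disk.

def mb_per_s(gb, duration):
    """Convert a GB figure and an HH:MM:SS duration into MB/s."""
    h, m, s = (int(x) for x in duration.split(":"))
    return gb * 1000 / (h * 3600 + m * 60 + s)

# 9-to-10 drive RAID-5 rebuild completed in 00:19:33:
print(round(mb_per_s(117, "00:19:33")))  # → 100 (MB/s, approx.)
```

Around 100 MB/s of effective rebuild throughput is a rough estimate only, since the controller also recomputes parity while serving any foreground I/O.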

Coming to the business end of the review, the Synology RS10613xs+ manages to tick all the right boxes in its market segment. Support for both SAS and SATA disks ensures compatibility with the requirements of a wide variety of SMBs and SMEs. We have not even covered some of the exciting SMB-targeted features in DSM, such as Synology High Availability (which uses a dedicated second unit as a seamless failover replacement) and official support for multiple virtualization solutions, including VMware, Citrix, and Hyper-V.

A couple of weeks back, Synology introduced the follow-up SATA-only RS3614xs+, with 12 bays and slots for up to two 10G NICs. Compared to the advertised 2000 MBps of the RS10613xs+, the RS3614xs+ can go up to 3200 MBps and 620K IOPS. Given Synology's commitment to this lineup, SMBs looking for enterprise features in their storage server can do little wrong in going with Synology's xs+ series as the perfect mid-point between a NAS and a SAN.

Comments

  • Gigaplex - Saturday, December 28, 2013 - link

    No, you recover from backup. RAID is there to increase availability in the enterprise; it is not a substitute for a backup.
  • P_Dub_S - Thursday, December 26, 2013 - link

    Please read that 3rd link and tell me if RAID 5 makes any sense with today's drive sizes and costs.
  • Gunbuster - Thursday, December 26, 2013 - link

    Re: that 3rd link. Who calls it resilvering? Sounds like what a crusty old unix sysadmin with no current hardware knowledge would call it.
  • P_Dub_S - Thursday, December 26, 2013 - link

    Whatever the name, it doesn't really matter; it's the numbers that count, and at today's TB drive sizes RAID 5 makes zero sense.
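[Ed: the argument being invoked here is the unrecoverable-read-error (URE) arithmetic. A rough sketch, assuming the 10^-14 errors-per-bit-read rate commonly quoted on consumer drive datasheets; the 4 TB drive size is purely illustrative:]

```python
import math

def p_clean_rebuild(array_read_tb, ure_per_bit=1e-14):
    """Probability of reading the surviving disks with no URE
    during a RAID 5 rebuild (Poisson approximation)."""
    bits = array_read_tb * 1e12 * 8   # TB -> bits read
    return math.exp(-ure_per_bit * bits)

# Losing one disk in a hypothetical 4 x 4 TB RAID 5 array means
# reading ~12 TB from the three survivors to rebuild:
print(round(p_clean_rebuild(12), 2))  # → 0.38
```

By this arithmetic, a consumer-class 12 TB rebuild read has only a ~38% chance of completing without hitting a URE, which is the basis of the "RAID 5 is dead at TB sizes" claim; enterprise drives rated at 10^-15 or better change the numbers considerably.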
  • Kheb - Saturday, December 28, 2013 - link

    No it doesn't. Not at all. First, you are taking into account only huge arrays used to store data and not to run applications (so basically only mechanical SATA, that is). Second, you are completely ignoring costs (RAID 5 or RAID 6 vs RAID 10). Third, you are assuming the RAID 5 itself is not backed up or without some sort of software/hardware redundancy or tiering at lower levels (see SANs).

    So while I can agree that THEORETICALLY having RAID 10 everywhere would indeed be safer, the costs (HDDs + enclosures + controllers + backplanes) make this, and this time for real, the option that makes zero sense.
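[Ed: the capacity side of this cost comparison is simple arithmetic; a sketch for a hypothetical 10-disk shelf of 4 TB drives:]

```python
# Usable capacity for n identical disks of size_tb under each level.
# RAID 5 sacrifices one disk to parity, RAID 6 two, RAID 10 half.
def usable_tb(n, size_tb, level):
    return {
        "raid5": (n - 1) * size_tb,
        "raid6": (n - 2) * size_tb,
        "raid10": (n // 2) * size_tb,
    }[level]

for level in ("raid5", "raid6", "raid10"):
    print(level, usable_tb(10, 4, level))
# → raid5 36, raid6 32, raid10 20 (TB usable)
```

For the same usable capacity, RAID 10 here needs nearly twice the spindles of RAID 5, which is the cost gap the comment is pointing at.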
  • Ammaross - Thursday, December 26, 2013 - link

    "Resilvering" is the ZFS term for rebuilding data on a volume. It's very much a current term still, but it does give us an insight into the current bias of the author, who apparently favors ZFS for his storage until something he proposes as better is golden.
  • hydromike - Thursday, December 26, 2013 - link

    How many times have you had to rebuild a RAID5 in your lifetime? I have over 100 times on over 10 major HARDWARE RAID vendors.

    "And when you go to rebuild that huge RAID 5 array and another disk fails your screwed."

    The other drive failing is a very small possibility in an enterprise environment that I was talking about, because of enterprise grade drives vs consumer. That is why most have either the raid taken offline for a much faster rebuild. Besides during that rebuild the RAID is still functional just degraded.

    Also, my point is that lots of us still have hardware that is 2-5 years old and still just working. The newest arrays that I have set up as of late are 20 to 150 TB in size, and we went with Freenas with ZFS, which puts all others to shame. NetApp storage appliances' rebuild times are quite fast: 6-12 hours for 40 TB LUNs. It all depends upon the redundancy that you need. Saying that RAID 5 needs to die is asinine. What if the data you are storing is all available in the public domain, but having a local copy speeds up data access? The rebuild is faster with a degraded LUN vs retrieving all of the data from the public domain again. There are many use cases for each RAID level; just because one level does not fit YOUR uses does not mean it needs to die!
  • P_Dub_S - Thursday, December 26, 2013 - link

    So if you were to buy this NAS for a new implementation, would you even consider throwing 10-12 disks in it and building a RAID 5 array? Just asking. Even in your own post you state how you use Freenas with ZFS for your new arrays. RAID 5 is the dodo here; let it go extinct.
  • Ammaross - Thursday, December 26, 2013 - link

    For all you know, he's running ZFS using raidz1 (RAID5 essentially). Also, saying RAID5 needs to die, one must then assume you also think RAID0 is beyond worthless, since it has NO redundancy? Obviously, you can (hopefully) cite the use-cases for RAID0. Your bias just prevents you from seeing the usefulness of RAID5.
  • xxsk8er101xx - Friday, December 27, 2013 - link

    It does happen though. I've had to rebuild 2 servers alone this year because of multiple drive failures. One server had 3 drives fail. But that's because of neglect. We engineers only have so much time, especially with the introduction of lean manufacturing.

    RAID 5 + global spare though is usually a pretty safe bet if it's a critical app server. Otherwise RAID 5 is perfectly fine.
