Miscellaneous Factors and Final Words

The Synology RS10613xs+ is a 10-bay NAS, and there are many applicable disk configurations (JBOD / RAID-0 / RAID-1 / RAID-5 / RAID-6 / RAID-10). Most users looking for a balance between performance and redundancy are going to choose RAID-5. Hence, we performed all our expansion / rebuild duration testing as well as power consumption recording with the unit configured in RAID-5 mode. The disks used for benchmarking (OCZ Vector 120 GB) were also used in this section. The table below presents the average power consumption of the unit as well as time taken for various RAID-related activities.

RS10613xs+ RAID Expansion and Rebuild / Power Consumption

Activity                                                Duration (HH:MM:SS)  Outlet 1 (W)  Outlet 2 (W)  Total (W)
Diskless                                                -                    52.9          67.4          120.3
Single Disk Initialization                              -                    46.5          61.61         108.11
RAID-0 to RAID-1 (116 GB to 116 GB / 1 to 2 drives)     00:30:05             44.4          59.37         103.77
RAID-1 to RAID-5 (116 GB to 233 GB / 2 to 3 drives)     00:37:53             49.82         65.91         115.73
RAID-5 Expansion (233 GB to 350 GB / 3 to 4 drives)     00:24:10             54.42         70.98         125.4
RAID-5 Expansion (350 GB to 467 GB / 4 to 5 drives)     00:21:40             57.61         74.29         131.9
RAID-5 Expansion (467 GB to 584 GB / 5 to 6 drives)     00:21:10             61.1          78.29         139.39
RAID-5 Expansion (584 GB to 700 GB / 6 to 7 drives)     00:21:10             63.77         81.23         145
RAID-5 Expansion (700 GB to 817 GB / 7 to 8 drives)     00:20:41             66.8          85            151.8
RAID-5 Expansion (817 GB to 934 GB / 8 to 9 drives)     00:22:41             67.92         86.16         154.08
RAID-5 Expansion (934 GB to 1051 GB / 9 to 10 drives)   00:25:11             69.34         87.36         156.7
RAID-5 Rebuild (1168 GB to 1285 GB / 9 to 10 drives)    00:19:33             59.78         76.6          136.38

Unlike Atom-based units, the RS10613xs+'s RAID expansion and rebuild durations don't grow progressively longer as the number of disks increases.
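As a rough sanity check on the table's figures, each expansion step adds one ~117 GB drive's worth of capacity; dividing that by the step's duration gives a per-step rate that stays in the same ballpark regardless of drive count. A back-of-the-envelope sketch (helper names are our own, not from the review):

```python
def parse_hms(s: str) -> int:
    """Convert a H:MM:SS or HH:MM:SS string to seconds."""
    h, m, sec = (int(p) for p in s.split(":"))
    return h * 3600 + m * 60 + sec

def step_rate_mb_s(gb_added: float, duration: str) -> float:
    """Approximate per-step rate in MB/s (decimal units)."""
    return gb_added * 1000 / parse_hms(duration)

# e.g. the 7-to-8 drive step: 117 GB added in 20 minutes 41 seconds
print(round(step_rate_mb_s(117, "00:20:41"), 1))  # -> 94.3
```

Running the same calculation over every expansion row shows the rate holding steady rather than degrading as drives are added.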

Coming to the business end of the review, the Synology RS10613xs+ manages to tick all the right boxes in its market segment. Support for both SAS and SATA disks ensures compatibility with the requirements of a wide variety of SMBs and SMEs. We have not even covered some exciting SMB-targeted features in DSM such as Synology High Availability (which uses a dedicated second unit as a seamless failover replacement) and official support for multiple virtualization solutions including VMware, Citrix and Hyper-V.

A couple of weeks back, Synology introduced the follow-up SATA-only RS3614xs+ with 12 bays and slots for up to two 10G NICs. Compared to the advertised 2000 MBps for the RS10613xs+, the RS3614xs+ can go up to 3200 MBps and 620K IOPS. Given Synology's commitment to this lineup, SMBs looking for enterprise features in their storage server would do little wrong in going with Synology's xs+ series as the perfect mid-point between a NAS and a SAN.

Comments (51)

  • iAPX - Thursday, December 26, 2013 - link

    2000+ MB/s Ethernet interface (2x10Gb/s), 10 hard drives able to deliver at least 500MB/s EACH (grand total of 5000MB/s), Xeon quad-core CPU, and tested with ONE client, it delivers less than 120MB/s?!?
    That's what I expect from a USB 3 2.5" external hard drive, not a SAN at this price, it's totally deceptive!
  • Ammaross - Thursday, December 26, 2013 - link

    Actually, 120MB/s is almost exactly what I would expect from a fully-saturated 1Gbps link (120MB/s * 8 bits = 960Mbps). Odd how that works out.
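The arithmetic in this comment checks out: NIC line rates are quoted in decimal megabits, so converting the measured throughput shows it sits right at the gigabit ceiling. An illustrative snippet (names are our own):

```python
def mb_per_s_to_mbps(mb_per_s: float) -> float:
    """1 MB/s = 8 Mb/s in the decimal units NIC line rates are quoted in."""
    return mb_per_s * 8

LINE_RATE_MBPS = 1000  # 1GbE line rate

throughput = mb_per_s_to_mbps(120)
print(throughput, throughput / LINE_RATE_MBPS)  # -> 960.0 0.96
```

At 96% of line rate, the remaining few percent is plausibly eaten by Ethernet, IP, and TCP framing overhead.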
  • xxsk8er101xx - Friday, December 27, 2013 - link

    That's because the PC only has a gigabit NIC. That's actually what you should expect.
  • BrentfromZulu - Thursday, December 26, 2013 - link

    For the few who know, I am the Brent that brought up Raid 5 on the Mike Tech Show (saying how it is not the way to go in any case)

    Raid 10 is the performance king, Raid 1 is great for cheap redundancy, and Raid 10, or OBR10, should be what everyone uses in big sets. If you need all the disk capacity, use Raid 6 instead of Raid 5, because with Raid 5, if a second drive fails during a rebuild, you lose everything. Raid 6 is better because you can lose a drive and still rebuild. Rebuilding is a scary process with Raid 5, but with Raid 1 or 10 it is literally copying data from one disk to another.

    Raid 1 and Raid 10 FTW!
  • xdrol - Thursday, December 26, 2013 - link

    From the drives' perspective, rebuilding a RAID 5 array is exactly the same as rebuilding a RAID 1 or 10 array: read the whole disk(s) (or, to be more exact, the sectors with data) once, and write the whole target disk once. It is only different for the controller. I fail to see why one is scarier than the other.

    If your drive fails while rebuilding a RAID 1 array, you are exactly as screwed. The only reason R5 is worse here is that you have n-1 disks unprotected while rebuilding, not just one, giving you approximately (i.e., negligibly less than) n-1 times the chance of data loss.
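This exposure argument can be made concrete with a hypothetical per-disk failure probability over the rebuild window (the model and numbers below are our own illustration, not from the thread):

```python
def loss_prob(p: float, unprotected: int) -> float:
    """Chance that at least one of the unprotected disks fails during the rebuild."""
    return 1 - (1 - p) ** unprotected

p = 0.01  # assumed per-disk failure probability during the rebuild window

print(round(loss_prob(p, 1), 4))  # RAID-1 rebuild, 1 disk exposed  -> 0.01
print(round(loss_prob(p, 9), 4))  # 10-disk RAID-5, 9 disks exposed -> 0.0865
```

The RAID-5 figure comes out just under 9x the RAID-1 figure, matching the "approximately n-1 times" claim.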
  • BrentfromZulu - Friday, December 27, 2013 - link

    Rebuilding a Raid 5 array requires reading data from all of the other disks, whereas Raid 10 requires reading from just one other drive. Raid 1 and Raid 10 rebuilds are not complex; Raid 5/6 rebuilds are, requiring activity from all the other disks, and because of that complexity they have a higher chance of failure.
  • xxsk8er101xx - Friday, December 27, 2013 - link

    You take a big hit on performance with RAID 6.
  • Ajaxnz - Thursday, December 26, 2013 - link

    I've got one of these with 3 extra shelves of disks and 1TB of SSD cache.
    There's a limit of 3 shelves in a single volume, but 120TB (3 shelves of twelve 4TB disks, RAID-5 on each shelf) with the SSD cache performs pretty well.
    For reference, NFS performance is substantially better than CIFS or iSCSI.

    It copes fine with the 150 virtual machines that support a 20 person development team.

    So much cheaper than a NetApp or similar - but I haven't had a chance to test the multi-NAS failover to see if you truly get enterprise-quality resilience.
  • jasonelmore - Friday, December 27, 2013 - link

    well at least half a dozen morons got schooled on the different types of RAID arrays. gg, always glad to see the experts put the "less informed" (okay i'm getting nicer) ppl in their place.
  • Marquis42 - Friday, December 27, 2013 - link

    I'd be interested in knowing greater detail on the link aggregation setup. There's no mention of the load balancing configuration in particular. The reason I ask is because it's probably *not* a good idea to bond 1Gbps links with 10Gbps links in the same bundle unless you have access to more advanced algorithms (and even then I wouldn't recommend it). The likelihood of limiting a single stream to ~1Gbps is fairly good, and may limit overall throughput depending on the number of clients. It's even possible (though admittedly statistically unlikely) that you could limit the entirety of the system's network performance to saturating a single 1Gbe connection.
