23 Comments

  • dropadrop - Monday, December 11, 2006 - link

    How was the noise level measured for the device? I was considering the Intel since it's easily available here, but previous reviews have stated it was far from silent. Here's a quote from Tom's:

    quote:

    The back of the SS4000E includes a large cooling fan for the hard drives on top, with a smaller fan at the bottom for the power supply. Given the two fans, noise levels are about what you'd expect - ok for a noisy office, but home users will want to park the box in a closet, basement or other unoccupied space.


    I'm pretty allergic to noise; currently the noisiest thing I have is a Buffalo LinkStation, which I have already stuffed in a closet and end up turning off when not used.
  • eaglemasher - Monday, December 11, 2006 - link

    I've had an Infrant ReadyNAS NV running for about 10 months, and I am impressed with the overall experience. The access speeds are much faster than my other NAS device, the firmware updates are very regular, the support staff are responsive, and it's quiet. Add the fact that it supports pretty much any network file system you want to throw at it, along with long path lengths, and I am one very satisfied customer. Perhaps Infrant's hardware is hit and miss for some people, but in my case I haven't had a hiccup in 10 months of constant use as a backup device for 18 users.

    My other RAID NAS device is a TeraStation I've run for about 20 months. My overall assessment is stay the heck away. The only good thing I can say about it is that it hasn't failed yet, but the speeds are pretty dismal, the interface is very limiting, and the path-length limitations make it unusable as a direct-copy backup device. Add to that the fact that Buffalo has not updated the U.S. firmware in a year and a half (though the Japanese version gets regular updates), and it's extremely disappointing, especially when contrasted with the ReadyNAS. Maybe with the firmware updates the Japanese actually get a useful NAS.
  • archcommus - Wednesday, December 06, 2006 - link

    I'm not too familiar with these devices. Can someone tell me what advantage(s) they hold over a home-built file server PC? Something cheap and slow but with a large hardware-based RAID array that simply sits in a room with no monitor attached and does its job. It seems that would be easier and more upgradeable, and probably faster too.
  • yyrkoon - Wednesday, December 06, 2006 - link

    They hold no real benefit over home-built solutions, except that, like buying a Dell PC, you don't have to build or support it yourself. For a home-brew solution, you can use whatever you like, however you like, and don't have to worry about proprietary hardware. Granted, OEMs have more experience in this arena, so when you do build your own, you may have to learn what works and what doesn't on the fly.

    Anyhow, that's the way I see it; maybe someone else can answer further if I missed something.
  • TheBeagle - Wednesday, December 06, 2006 - link

    I know that evaluators are often constrained by time limits for their work, but I believe you guys missed a very good NAS box offering along the way. I'm speaking about the latest offering from U.S. Robotics, the Model 8700. It is a four-bay box with a gigabit Ethernet connection and two additional USB 2.0 ports, and it is very well constructed. It comes without any drives, which allows the vendor/user to select their own (WD500YS drives work great in it), and it can array four 500GB drives into a RAID 5 setup that works like a charm. It has good software, including client backup software. You really ought to evaluate this NAS box as well. It's a winner!
  • mziegler - Tuesday, December 05, 2006 - link

    I'm really glad to see this type of review, as I have been looking at these devices. However, the review left out any drive-failure scenarios. I would like to see the review of the Hammer system include restoring or rebuilding a RAID array using the Z-FS file system. Also, since that system uses a proprietary file system, a test of grabbing data off a drive using Dataplow's SFSExtract.exe DOS utility.

    This review focused solely on performance, which is only about half the reason someone would purchase one of these devices. Redundancy, which is arguably the most important factor, was practically ignored.

    A review of this type of product, in my opinion, must answer the following questions:
    - What happens if a drive fails?
    - What happens if the NAS device fails?
    - How easy and how long is the recovery process?
    - What's the relative performance?
    - How do the features compare to a traditional Unix/Linux/WS2003 NAS head?
  • LoneWolf15 - Wednesday, December 06, 2006 - link

    Insightful post. I agree with the parent poster on this one.
  • aikend - Tuesday, December 05, 2006 - link

    It would have been really nice if the features table had included the maximum number of drives, and the maximum total capacity, for each of the units. Sure, I can go to each vendor's website and track it down myself, but I would think disk capacity would be a pretty important measure for lots of people.
  • JarredWalton - Tuesday, December 05, 2006 - link

    Disk capacity will often be determined by the largest HDD available. Right now, that's the Seagate 750GB, so you can do up to 3TB of storage in a four-drive unit (which most of these are). When someone makes a larger HDD, there's a pretty good chance all of these NAS units will support it. I'm not sure where the next "barrier" is on SATA/BIOS/OS drive sizes, but after the 128GB limitation was removed, I think the next maximum HDD size went up into the many-TB range.
  • yyrkoon - Tuesday, December 05, 2006 - link

    I've been told that the next limit barrier of 48-bit LBA is more than 2TB per disk, although I haven't personally read any specifications concerning this. This isn't to say that other factors couldn't come into play either (a manufacturer using cheaper electronics, and thus somehow reducing the overall limit).
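    For reference, the addressing math behind those barriers can be sketched like this (assuming 512-byte sectors; the "more than 2TB" figure typically comes from 32-bit sector counts, e.g. in MBR partition tables, rather than from 48-bit LBA itself):

```python
# Capacity ceilings implied by sector-address width, assuming 512-byte sectors.
SECTOR_BYTES = 512

def lba_capacity_bytes(address_bits):
    """Largest capacity addressable with the given LBA width."""
    return (2 ** address_bits) * SECTOR_BYTES

print(lba_capacity_bytes(28) / 1000**3)  # 28-bit LBA: ~137.4 GB (the old barrier)
print(lba_capacity_bytes(32) / 1000**4)  # 32-bit sector counts: ~2.2 TB
print(lba_capacity_bytes(48) / 1000**5)  # 48-bit LBA: ~144 PB, far past 2 TB/disk
```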
  • LoneWolf15 - Tuesday, December 05, 2006 - link

    Did AnandTech actually test Active Directory support with these test units?

    I ask because, after setting up multiple Buffalo TeraStation Pros, it isn't an easy task. In our case, our units shipped with a firmware version (1.2) that isn't even offered for download and that had multiple issues. I had to call Buffalo (by the way, on-hold times are forever with this company), who told me I had to back the firmware down to the version on their website. I did this and still had glitches, so they sent me a beta firmware (1.4) which fixed those, but it requires using IP addresses (UNC pathnames are not currently supported).

    That was this summer. The Buffalo tech indicated a 1.5 firmware in testing that would be released this fall; that time has come and gone. The units work for what we need, but I'm far from impressed with their support, and I would encourage AnandTech to make sure things like Active Directory support actually work as advertised.
  • smalenfant - Tuesday, December 05, 2006 - link

    I would have liked to see the power consumption of these devices. Currently I have a MythTV backend server (Duron 1.6GHz) in which I installed four disks (no array). I couldn't justify buying a NAS and letting it sit there powered up all day. My server uses about 110W (when all disks are spinning and the ATSC capture card is running). I took out my NSLU2 because it was so slow, but that doesn't compare here.
  • arswihart - Tuesday, December 05, 2006 - link

    I don't see spin-down mentioned for any of these units. This is a big contributor to disk life, as a lot of wear can be saved if the drives are spun down when not in use. This may be a non-issue for some, but if you use them all day long and not at all at night, you are already cutting spin time by 50%, possibly doubling your disks' lifespan.
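    The 50% figure is just duty-cycle arithmetic; a quick sketch (assuming, as the post does, that wear scales with hours spent spinning):

```python
# Fraction of spin time saved by spinning drives down outside active hours.
def spin_time_saved(active_hours_per_day):
    return 1 - active_hours_per_day / 24

print(spin_time_saved(12))  # in use 12h/day, spun down at night -> 0.5 (50%)
```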
  • Deanodxb - Wednesday, December 06, 2006 - link

    The latest firmware for the ReadyNAS NV supports disk spin-down.
  • yyrkoon - Tuesday, December 05, 2006 - link

    You know, the Thecus model looks like a rip-off of Mashie's UDAT mod (www.mashie.org).

    Anyhow, these results are very disappointing. You can purchase a four-disk enclosure from Addonics that will output four ATA drives over one USB connection and perform very similarly to these "superior" products (and cost a hell of a lot less: $150 USD, minus drives). Granted, the only RAID option with the Addonics 4x ATA-to-USB controller is JBOD. I think most of these manufacturers could have saved themselves some money (thus passing it on to the customer) by using older-technology equipment in their systems. Also, I'm still trying to figure out WHY the Hammer-x system didn't just opt for a Linux iSCSI target configuration, since at least Vista Ultimate will ship with MS's initiator client (at least judging by the RC2 5744 build), and the MS initiator is also currently free from MS for XP.

    I guess the only way the home enthusiast, such as probably most of the people who read these comments, would most likely be better off suited buying their own hardware, and putting it together. So much for having high hopes eh ?

    However, I still have high hopes for this product: http://www.accusys.com.tw/eng/products_deskraid_77... It's not available yet, but I've been in contact with the company through email, and it sounds as though they are finishing up the firmware and are close to the production phase.
  • yyrkoon - Tuesday, December 05, 2006 - link

    err . . .

    quote:

    I guess the only way the home enthusiast, such as probably most of the people who read these comments, would most likely be better off suited buying their own hardware, and putting it together.


    What I was TRYING to say was: "The home enthusiast concerned about performance, would be better off building their own"

  • Deanodxb - Tuesday, December 05, 2006 - link

    I've looked at this too. What I'm going to do is put a small system together with an Areca controller, Addonics 4XSA drive bays, and an Open-E XSR SMB NAS system (basically a NAS-specific OS which comes on a CompactFlash card that plugs straight into the IDE connector on your mobo). You don't need a very powerful CPU for this, just a mobo with onboard video, preferably a gigabit NIC, and PCI-E for the controller. Open-E supports the Areca controllers (as well as many others). Check out Open-E: http://www.open-e.com/nasxsr/network_attached_stor...

    Addonics also offers solutions similar to the Accusys setup: get an eSATA card and one of their storage tower units with a 5:1 SATA port multiplier (http://www.addonics.com/products/raid_system/ast4....). If you want to run any RAID configuration on this, though, it will eat up CPU cycles, as the eSATA card doesn't have a dedicated hardware RAID processor, unlike the Areca.

    I currently run a mix of an 8-port Areca (3.5TB), one (working) ReadyNAS NV (1TB), and an Addonics eSATA removable/mobile rack with a bunch of cartridges (250GB to 750GB drives). This works well. And before you ask, I use this setup to store HD movies...

    ...p0rn is burnt to DVD ;)
  • yyrkoon - Tuesday, December 05, 2006 - link

    Yeah, I've known about Open-E for some time, and I wouldn't even consider their product, as it is too expensive for what it is. It wouldn't be that hard for someone such as myself, who already knows a good bit about Linux, to build their own Linux iSCSI target. This isn't to say I know it all (I don't), but that's what's so great about the internet and choosing the right distro: there are a lot of people who have already done it and have documented what they've done.

    As for Addonics having something similar to the Accusys system, I don't know 100%, but I think this is incorrect. The Accusys system requires no host driver (well, partially; obviously the host would need eSATA drivers for the eSATA connectivity), uses 0% host CPU, and performs all RAID functions inside the enclosure. A port multiplier using current technology requires a host with either an HBA or an onboard SiI3132 (or equivalent) chipset, SiI RAID utilities on the host, and of course eSATA connectivity. Either way, using a SATA port multiplier *would* be cheaper, but at the cost of at least a little performance, and the Accusys solution at current is only SATA (192MB/s max), not SATA II (384MB/s max).
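    As a side note on those link-rate figures: SATA uses 8b/10b line coding, so the usable data rate is lower than the raw bit rate suggests. A back-of-the-envelope sketch (the encoding overhead is a general SATA property, not something from this review):

```python
# Effective SATA data rate after 8b/10b line coding
# (every 8 data bits travel as 10 bits on the wire).
def effective_mb_per_s(line_rate_gbit):
    data_bits = line_rate_gbit * 1e9 * 8 / 10  # usable bits per second
    return data_bits / 8 / 1e6                 # bits -> decimal megabytes

print(effective_mb_per_s(1.5))  # SATA 1.5Gb/s -> 150.0 MB/s
print(effective_mb_per_s(3.0))  # SATA 3Gb/s ("SATA II") -> 300.0 MB/s
```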

    Anyhow, all this being said, I'm still not sure exactly what I want/need. I mean, do I *really* need a storage system that is (potentially) capable of 384MB/s? Would it really make all that much difference if the RAID were handled by the host versus the enclosure? Which solution would be more cost-efficient? And lastly, can I even afford either solution?

    I do know that I don't like to wait when transferring files, and I need something that will hold large amounts of data and be reliable. *Right now*, all my stored data is on USB-enclosed HDDs, which seems to be working *OK*, but what I would really like is a system that holds four or more disks and uses removable cartridges, so that when I run out of space I don't have to buy another system, but rather just buy a new cartridge and HDD and pop it in. Personally, I still have a lot of thought to put into what I'm going to buy, and I've already been thinking about it for a long time.
  • Deanodxb - Tuesday, December 05, 2006 - link

    Nice article. I must, however, share my experience with the ReadyNAS NV units.

    I bought two of these earlier this year. One works fine (250GB drives), although it is VERY slow at reads and writes, even with a gigabit switched connection (all hardware approved/recommended by Infrant).

    The other is a complete and utter lemon. It keeps dropping the network connection after 20 seconds or so (this seems to be a fairly common fault; see the Infrant forum) and has so far killed three 320GB drives, all new, from different batches. I had two disks fail on me in this unit, one shortly after the other, and I lost around 800GB of data. It is now a very expensive doorstop (I live in Dubai; it would cost me more to ship it back to the US than the unit is worth). Whilst the units look sturdy, in real-world usage they are anything but.

    These units are VERY hit and miss. I would not recommend ReadyNAS NV units to anyone who cares about their data and fast access to it. Caveat emptor.

    I would suggest going with an Areca RAID card instead. I did, and I am much happier.
  • dillytaint - Wednesday, December 06, 2006 - link

    I bought an NV because it was Linux-based and would save me some time, since I could just plug it in and be done. I knew it wasn't blindingly fast, but the performance really is terrible. Rebuild/init times are terrible, averaging about 5 hours for four 320GB drives for me. Performance is especially bad with small files, if you have your Maildir on it for example. Jumbo frames only work in one direction, and NFS only works over UDP. I had problems with CIFS/NFS user permissions and UIDs, since the UIDs I use on my machines were in a restricted range. I had trouble with good drives always being reported as bad in the same slot, and was constantly rebuilding. On large file transfers it would hang and require a reset to bring it back online. The 256MB of memory is only expandable via a small list of supported SO-DIMMs, and is non-ECC. Some of these issues may be fixed now, but I was not willing to wait and beta test.

    In the end I returned it and built my own Linux box with the same drives, and it's been rock solid. It cost me $200 more in hardware, and I got 2GB of ECC RAM, a workstation-class motherboard, a dual-core CPU, and six SATA II connectors. And I can run anything else I want on the box. I get 48MB/s sequential writes (even after filling the buffer cache) over gigabit with jumbo frames, and 60MB/s sequential reads. I am using LVM2 and ReiserFS, so I have a lot of flexibility in how I use my space, and I can also export space as iSCSI targets.

    If you have experience with such things, resist the urge to get one of these boxes to save time. You'll likely end up saving more time doing it yourself, and end up with more reliability and better performance.
  • yyrkoon - Friday, December 08, 2006 - link

    My problem is this: I want redundancy, but I also do not want to be limited to GbE transfer rates. I've been in communication with many people via different channels (email, IRC, forums, etc.), and the best results I've seen anyone get on GbE are around 90MB/s, using specific NICs (Intel Pro series, PCI-E).

    The options here are rather limited. I like Linux; however, I refuse to use Ethernet channel bonding (thus forcing the use of Linux on all my machines), or possibly a combination of Ethernet channel bonding with a very expensive 802.3ad switch. 10GbE is an option, but is way out of my price range, and 4Gb FC doesn't seem to be much better. From my limited understanding of their product, I think Intel Pro cards come with software for aggregate load balancing, but I'm not 100% sure of this, and unless I used crossover cables from one machine to another, I would again be forced into paying $300 USD or possibly more for an 802.3ad switch. I've looked into all these options, plus 1394b FireWire teaming and SATA port multipliers. Port multiplier technology looks promising, but it is dependent on motherboard RAID (unless you shell out for an HBA); from what I do know about it, you couldn't just plug it into an Areca card and have it work at full performance (someone correct me if I'm wrong, please; I'd love to learn otherwise).
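    That ~90MB/s result is consistent with what gigabit Ethernet can deliver once framing overhead is counted. The sketch below uses standard Ethernet/IP/TCP header sizes (generic protocol math, not measurements from this thread); the remaining gap between theory and the measured ~90MB/s would come from driver, disk, and file-protocol overhead rather than the wire itself:

```python
# Theoretical TCP payload throughput on a 1Gb/s Ethernet link.
LINK_BITS_PER_S = 1e9

def tcp_throughput_mb_s(mtu):
    # On-wire overhead per frame: preamble+SFD (8) + Ethernet header (14)
    # + FCS (4) + inter-frame gap (12) = 38 bytes; plus IP (20) + TCP (20).
    wire_bytes = mtu + 38
    payload_bytes = mtu - 40
    return LINK_BITS_PER_S / 8 * payload_bytes / wire_bytes / 1e6

print(round(tcp_throughput_mb_s(1500), 1))  # standard frames -> 118.7 MB/s
print(round(tcp_throughput_mb_s(9000), 1))  # jumbo frames    -> 123.9 MB/s
```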


    My goal is to have a reliable storage solution with minimal wait times when transferring files. At some point, having too much would be overkill, and this also needs to be realized.
  • peternelson - Tuesday, December 12, 2006 - link


    It sounds like your needs would be solved by using a Fibre Channel fabric.

    You need an FC NIC (or two) in each of your clients, then one or more FC switches, e.g. from Brocade or OEMs of their switches. Finally, you need drive arrays to connect FC or regular drives onto the FC fabric.

    It isn't cheap, but it gives fantastic redundancy. FC speeds are 1/2/4 gigabits per second.
  • yyrkoon - Tuesday, December 05, 2006 - link

    I've been giving Areca a lot of thought lately. What I was considering was using a complete system for storage, with loads of disk space and an Areca RAID controller. The only problem I personally have with my idea is: how do I get a fast link to the desktop PC?

    I've been debating back and forth with a friend of mine about using FireWire. From what he says, you can use multiple FireWire links, teamed, along with some "hack" for raising 1394b to 1000Mbit/s, to achieve what seems like outstanding performance. Assuming what my friend says is accurate, you could easily team four 1394b ports and get 500MB/s.
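    The arithmetic behind that estimate, taking the claimed 1000Mbit/s-per-link hack at face value (standard 1394b is nominally 800Mbit/s, included for comparison; teaming overhead is ignored):

```python
# Raw aggregate bandwidth of teamed links, ignoring any teaming overhead.
def teamed_mb_per_s(links, mbit_per_link):
    return links * mbit_per_link / 8  # megabits -> decimal megabytes

print(teamed_mb_per_s(4, 1000))  # four "hacked" 1394b links     -> 500.0 MB/s
print(teamed_mb_per_s(4, 800))   # four stock FireWire 800 links -> 400.0 MB/s
```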
