RAID 0

RAID 0 takes two or more disk drives and writes data in a "stripe" across each disk. Data is accessed by requesting the stripe from the array, with the disks feeding their portions of the data back to the controller more or less simultaneously. The overall capacity of the array is equal to the sum of the formatted capacities of all drives, and disk usage is spread roughly evenly among all drives in the array.
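To make the striping pattern concrete, here is a minimal sketch of how a logical byte offset could map onto member disks. The 64KB chunk size, two-disk layout, and the raid0_map helper are illustrative assumptions rather than any particular controller's implementation.

```python
# Illustrative RAID 0 address mapping (assumed 64KB chunks, two disks).
CHUNK_SIZE = 64 * 1024   # bytes written to one disk before moving to the next
NUM_DISKS = 2

def raid0_map(logical_offset):
    """Map a logical byte offset to (disk index, byte offset on that disk)."""
    chunk_index = logical_offset // CHUNK_SIZE
    within_chunk = logical_offset % CHUNK_SIZE
    disk = chunk_index % NUM_DISKS                        # chunks rotate across disks
    disk_offset = (chunk_index // NUM_DISKS) * CHUNK_SIZE + within_chunk
    return disk, disk_offset

# A 256KB sequential read touches four chunks, two per disk, so both
# drives stream their halves of the data at more or less the same time.
for offset in range(0, 256 * 1024, CHUNK_SIZE):
    print(offset, raid0_map(offset))
```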


The net result is that the system will see much faster sustained transfer rates for both read and write operations compared to a single drive. File access time, however, is not measurably improved by spreading data across multiple disks, which means that systems requiring frequent access to small, non-contiguous files (as is often the case in desktop configurations) generally do not benefit from RAID 0.

RAID 0 is an excellent choice for video editing and large-scale "solving" applications, where large files need to be read and written in a continuous manner.

Perhaps the greatest drawback to RAID 0 is that the array is rendered inaccessible when a single drive in it fails. In that sense, RAID 0 isn't actually RAID at all, as it lacks the "Redundant" part of the equation. The odds of losing data also grow with every drive added to a RAID 0 setup, since the failure of any one drive destroys the entire array, so unless frequent backups are made - or the data is not regarded as even remotely important - RAID 0 should be approached with caution.
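A rough back-of-the-envelope calculation shows how quickly the odds stack up. The 3% annual failure rate per drive below is an assumed figure purely for illustration, and failures are treated as independent.

```python
# Sketch: chance of losing a RAID 0 array within a year, assuming an
# illustrative 3% annual failure rate per drive and independent failures.
# Any single drive failure takes the whole array with it.
per_drive_afr = 0.03

for n in range(1, 7):
    array_loss = 1 - (1 - per_drive_afr) ** n
    print(f"{n} drive(s): {array_loss:.1%} chance of losing the array per year")
```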

Pros:
  • Excellent streaming performance
  • Maximum capacity available for users (sum of all disks)
Cons:
  • No redundancy of data
  • Negligible performance benefits for many users
RAID 1

RAID 1 sits at the other extreme of the spectrum. It makes a continuous copy of all data from one disk (which is written to and read from by the system) onto another physical disk which is in "standby" mode. This "standby" disk is held in reserve by the controller for when a failure is detected on the first disk. At that point in time, the controller "fails over" to the second disk in the system, with all data still available to the user.
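As a rough sketch of that behaviour, the toy classes below mirror every write to both disks and fail reads over to the surviving copy when the primary dies. The Disk and Raid1 names are hypothetical stand-ins for what a real controller does in firmware or a software RAID layer does in the driver.

```python
# Toy model of RAID 1: every write goes to both members, and reads
# fail over to the mirror if the primary has failed. Illustrative only.
class Disk:
    def __init__(self):
        self.blocks = {}
        self.failed = False

    def write(self, lba, data):
        if not self.failed:
            self.blocks[lba] = data

    def read(self, lba):
        if self.failed:
            raise IOError("disk failed")
        return self.blocks[lba]

class Raid1:
    def __init__(self, primary, mirror):
        self.primary, self.mirror = primary, mirror

    def write(self, lba, data):
        self.primary.write(lba, data)    # both copies stay in sync
        self.mirror.write(lba, data)

    def read(self, lba):
        try:
            return self.primary.read(lba)
        except IOError:                  # primary is gone: fail over
            return self.mirror.read(lba)

array = Raid1(Disk(), Disk())
array.write(0, b"payroll")
array.primary.failed = True              # simulate a drive failure
print(array.read(0))                     # data is still available
```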


While RAID 1 usually offers no performance benefits (and indeed, it often slightly degrades performance in some situations), it does increase the uptime of the host computer by allowing it to remain online even after a disk in the system has failed. This makes it an extremely popular option for mirroring operating systems on enterprise-class servers, and for small office users who do not need massive amounts of data storage but do require constant uptime.

Higher quality RAID 1 controllers can outperform single drive implementations by making both drives active for read operations. This can in theory reduce file access times (requests are sent to whichever drive is closer to the desired data) as well as potentially doubling data throughput on reads (both drives can read different data simultaneously). Most consumer RAID 1 controllers do not provide this level of sophistication, however, resulting in performance that is at best slightly worse than what would be achieved with a single drive. Software RAID 1 solutions also lack support for reading from both drives in a RAID 1 set simultaneously.
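One plausible way such a controller could split reads between the two members is a shortest-seek policy: send each request to whichever drive's head is nearer the target address. The sketch below assumes that policy for illustration; real firmware behaviour varies and is rarely documented.

```python
# Sketch of read balancing on a RAID 1 pair: dispatch each read to the
# member whose head is currently closest to the requested LBA (assumed policy).
head_pos = [0, 0]    # current head position (as an LBA) of each member

def pick_disk_for_read(lba):
    distances = [abs(lba - pos) for pos in head_pos]
    disk = distances.index(min(distances))
    head_pos[disk] = lba             # that member's head moves to the request
    return disk

# Two reads in very different regions land on different members, so each
# head stays near its own working area instead of seeking back and forth.
print(pick_disk_for_read(800_000))   # -> 0
print(pick_disk_for_read(1_000))     # -> 1
```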

Pros:
  • Redundancy of data
  • Lowest cost data redundancy available (one additional disk)
  • Simple operation makes it easy to implement a solution using software only
Cons:
  • Poor usage of drive capacity (only 50% of purchased hard drive capacity available)
  • Typically no performance benefit over a single hard disk
Comments

  • alecweder - Wednesday, February 4, 2015

    The biggest issue with RAID is unrecoverable read errors.
    If you lose a drive, the RAID has to read 100% of the remaining drives even if there is no data on portions of them. If you get an error during the rebuild, the entire array will die.

    http://www.enterprisestorageforum.com/storage-mana...

    A UER on SATA of 1 in 10^14 bits read means a read failure every 12.5 terabytes. A 500 GB drive has 0.04E14 bits, so in the worst case rebuilding that drive in a five-drive RAID-5 group means transferring 0.20E14 bits. This means there is a 20% probability of an unrecoverable error during the rebuild. Enterprise-class disks are less prone to this problem:

    http://www.lucidti.com/zfs-checksums-add-reliabili...
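
For what it's worth, the arithmetic in the comment checks out. The quick sketch below uses the commenter's own assumptions (a 1-in-10^14 UER and 500 GB drives read in full across a five-drive group) and reproduces the roughly 20% estimate.

```python
# Quick check of the numbers quoted in the comment above (a sketch using
# the commenter's assumptions: 1-in-10^14 UER, 500 GB drives, five-drive group).
uer = 1e-14                            # unrecoverable error rate per bit read
drive_bits = 500e9 * 8                 # one 500 GB drive, in bits
rebuild_bits = 5 * drive_bits          # 0.20E14 bits, as quoted above

print(rebuild_bits * uer)              # linear estimate: 0.2, i.e. ~20%
print(1 - (1 - uer) ** rebuild_bits)   # "at least one error" estimate: ~0.18
```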
