Monday, December 20, 2010

RAID 0+1 vs. RAID 1+0 and SVM

Six years ago, when I first started working on Solaris Volume Manager's earlier incarnation (known as SDS), I was confused about whether it implemented RAID 0+1 or RAID 1+0. The answer turned out to be more complicated than simply one or the other. The same implementation has carried forward into the current version of SVM. Since this question still comes up with some regularity, I thought it was worth spending some time describing how this particular part of SVM works.
Background
RAID stands for 'Redundant Array of Inexpensive Disks', and the different numbers correspond to different ways of placing data on the disks. There are two basic RAID levels that pertain to this subject, plus an additional logical device type that's involved when you're dealing with SVM.
RAID0 is striping, in which data is spread across multiple spindles (disks). Data written to the RAID0 device is broken up into chunks of a specified size (the interlace value) and spread across the disks:
RAID0:
+--------+ +--------+
| Chunk 1| |Chunk 2 |
+--------+ +--------+
| Chunk 3| |Chunk 4 |
+--------+ +--------+
| Chunk 5| |Chunk 6 |
+--------+ +--------+
  Disk A     Disk B
    |           |
    +--Stripe---+
In this case the term RAID is a misnomer, as no redundancy is gained. The address space is 'interlaced' across the disks, improving performance for I/Os bigger than the interlace (chunk) size, since a single I/O is spread across multiple disks. In the example above, with an interlace size of 16k, the first 16k of data would reside on Disk A, the second 16k on Disk B, the third 16k back on Disk A, and so on. The interlace size must be chosen when the logical device is created and can't be changed later without recreating the logical device from scratch.
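To make the interlace example concrete, here's a minimal sketch of how a two-disk stripe with a 16k interlace might be created with SVM's metainit command (the d10 metadevice name and the c1t1d0s0/c1t2d0s0 slices are hypothetical placeholders):
    # metainit d10 1 2 c1t1d0s0 c1t2d0s0 -i 16k
The '1 2' arguments describe one stripe of two slices; -i sets the interlace, which is why it has to be decided up front.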
RAID1 is mirroring. Every byte of data written to the mirror is duplicated on both disks:
RAID1:
+--------+     +--------+
| Copy A | <-> | Copy B |
+--------+     +--------+
  Disk A         Disk B
    |              |
    +----Mirror----+
The advantages are that you can lose either disk and the data will still be accessible, and reads can be alternated between the two disks to improve performance. The drawbacks are that you've doubled your storage costs and incurred additional overhead by having to generate two writes to physical devices for every one that's done to the logical mirror device.
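SVM builds mirrors out of submirrors (a point covered in more detail later in this post), so a simple two-disk mirror like the one pictured might be set up roughly as follows; the metadevice and slice names are hypothetical:
    # metainit d21 1 1 c1t1d0s0        (first submirror, a 1-way concat/stripe)
    # metainit d22 1 1 c1t2d0s0        (second submirror)
    # metainit d20 -m d21              (create mirror d20 with d21 as its first submirror)
    # metattach d20 d22                (attach d22; SVM resyncs it from d21)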
One additional type of logical device that's involved when combining stripes and mirrors in SVM is a concatenation. There is no RAID nomenclature associated with a 'concat':
Concatenation:
+--------+
| Disk A |
+--------+
| Disk B |
+--------+
| Disk C |
+--------+
Concat
A concatenation aggregates several smaller physical devices into one large logical device. Unlike a stripe, the address space isn't interlaced across the underlying devices. This means there's no performance gain from using a concatenated device.
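As a rough illustration, a concatenation of three slices could be built with a single metainit invocation by specifying three rows of one slice each (the metadevice and slice names are hypothetical):
    # metainit d30 3 1 c1t1d0s0 1 c1t2d0s0 1 c1t3d0s0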
Since RAID0 improves performance, and RAID1 provides redundancy, someone came up with the idea to combine them. Fast and reliable. Two great tastes that taste great together!
When combining these two types of 'logical' devices there's a choice to be made -- do you mirror two stripes, or do you stripe across multiple mirrors? There are pros and cons to each approach:
In RAID 0+1, the stripes are created first, then are mirrored together. Logically, the resulting device looks like:
RAID 0+1:
+----------------------------------+     +----------------------------------+
| +--------+ +--------+ +--------+ |     | +--------+ +--------+ +--------+ |
| | Chunk 1| |Chunk 2 | |Chunk 3 | |     | | Chunk 1| |Chunk 2 | |Chunk 3 | |
| +--------+ +--------+ +--------+ |     | +--------+ +--------+ +--------+ |
| | Chunk 4| |Chunk 5 | |Chunk 6 | |     | | Chunk 4| |Chunk 5 | |Chunk 6 | |
| +--------+ +--------+ +--------+ |<--->| +--------+ +--------+ +--------+ |
| | Chunk 7| |Chunk 8 | |Chunk 9 | |     | | Chunk 7| |Chunk 8 | |Chunk 9 | |
| +--------+ +--------+ +--------+ |     | +--------+ +--------+ +--------+ |
|   Disk A     Disk B     Disk C   |     |   Disk D     Disk E     Disk F   |
+----------------------------------+     +----------------------------------+
              Stripe 1                                 Stripe 2
                 |                                         |
                 +-----------------Mirror------------------+
Advantage: Simple administrative model. Issue one command to create the first stripe, a second command to create the second stripe, and a third command to mirror them. Three commands and you're done, regardless of the number of disks in the configuration.
Disadvantage: An error on any one of the disks kills redundancy for all disks. For instance, a failure on Disk B above 'breaks' the Stripe 1 side of the mirror. As a result, should Disk D, E, or F fail as well, the entire mirror becomes unusable.
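In SVM terms (SVM's administrative model is discussed below), the six-disk RAID 0+1 example might look something like the following sketch. The metadevice and slice names are hypothetical, and note that SVM conventionally attaches the second submirror with metattach so that a resync is performed, which makes it four commands in practice rather than three:
    # metainit d11 1 3 c1t1d0s0 c2t1d0s0 c3t1d0s0     (Stripe 1: Disks A, B, C)
    # metainit d12 1 3 c4t1d0s0 c5t1d0s0 c6t1d0s0     (Stripe 2: Disks D, E, F)
    # metainit d10 -m d11                              (create the mirror using Stripe 1)
    # metattach d10 d12                                (attach Stripe 2 as the second submirror)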
In RAID 1+0 the opposite approach is taken. The disks are mirrored first, then the mirrors are combined together into a stripe:
RAID 1+0:
+-----------------+   +-----------------+   +-----------------+
|     Chunk 1     |   |     Chunk 2     |   |     Chunk 3     |
+-----------------+   +-----------------+   +-----------------+
|     Chunk 4     |   |     Chunk 5     |   |     Chunk 6     |
+-----------------+   +-----------------+   +-----------------+
|     Chunk 7     |   |     Chunk 8     |   |     Chunk 9     |
+-----------------+   +-----------------+   +-----------------+
Disk A <---> Disk B   Disk C <---> Disk D   Disk E <---> Disk F
     Mirror 1              Mirror 2              Mirror 3
         |                     |                     |
         +---------------------+---------------------+
                            Stripe
Advantage: A failure on one disk only impacts redundancy for the chunks of the stripe that are located on that disk. For instance, a failure on Disk B above loses redundancy only for every third chunk (1, 4, 7, etc.). Redundancy for the other stripe chunks is unaffected, so a second disk failure could be tolerated as long as the second failure wasn't on Disk A.
Disadvantage: More complicated from an administrative standpoint. The administrator needs to issue one creation command per mirror, then a command to stripe across the mirrors. The six-disk example above would require four commands to create, while a twelve-disk configuration would require seven commands.
SVM specifics
So, does SVM do RAID 0+1 or RAID 1+0? The answer is, "Yes." So does it give you a choice between the two? The answer is, "No."
Obviously further explanation is necessary...
In SVM, mirror devices cannot be created from "bare" disks. You are required to create the mirror on top of another type of SVM metadevice, known as a concat/stripe*. SVM combines concatenations and stripes into a single metadevice type, in which one or more stripes are concatenated together. When used to build a mirror these concat/stripe logical devices are known as submirrors. If you want to expand the size of a mirror device you can do so by concatenating additional stripe(s) onto the concat/stripe devices that are serving as submirrors.
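For example, growing an existing mirror might look roughly like this: attach an additional slice (or stripe) to each submirror's concat/stripe, and the mirror picks up the new space once both sides have grown. The device names below are hypothetical, and a UFS filesystem on the mirror would then be expanded separately (e.g. with growfs):
    # metattach d11 c7t1d0s0           (concatenate a new slice onto submirror d11)
    # metattach d12 c8t1d0s0           (do the same for the other submirror, d12)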
So, in SVM, you are always required to set up a stripe (concat/stripe) in order to create a mirror. On the surface this makes it appear that SVM does RAID 0+1. However, once you understand a bit about the SVM mirror code, you'll find RAID 1+0 lurking under the covers.
SVM mirrors are logically divided into regions. The state of each mirror region is recorded in state database replicas** stored on disk. By individually recording the state of each region in the mirror, SVM can be smart about how it performs a resync. Following a disk failure or an unusual event (e.g. a power failure that occurs after the first side of a mirror has been written but before the matching write to the second side can be completed), SVM can determine which regions are out of sync and synchronize only those regions, not the entire mirror. This is known as an optimized resync.
The optimized resync mechanisms allow SVM to gain the redundancy benefits of RAID 1+0 while keeping the administrative benefits of RAID 0+1. If one of the drives in a concat/stripe device fails, only those mirror regions that correspond to data stored on the failed drive will lose redundancy. The SVM mirror code understands the layout of the concat/stripe submirrors and can therefore determine which resync regions reside on which underlying devices. For all regions of the mirror not affected by the failure, SVM will continue to provide redundancy, so a second disk failure won't necessarily prove fatal.
So, in a nutshell, SVM provides a RAID 0+1 style administrative interface but effectively implements RAID 1+0 functionality. Administrators get the best of each type, the relatively simple administration of RAID 0+1 plus the greater resilience of RAID 1+0 in the case of multiple device failures.

* concat/stripe logical devices (metadevices)
The following example shows a concat/stripe metadevice that's serving as a submirror of a mirror metadevice. Note that the metadevice is a concatenation of three separate stripes:
Stripe 0 is a 1-way stripe (so not really striped at all) on disk slice c1t11d0s0.
Stripe 1 is a 1-way stripe on disk slice c1t12d0s0.
Stripe 2 is a 2-way stripe with an interlace size of 32 blocks on disk slices c1t13d0s1 and c1t14d0s2.
d1: Submirror of d0
    State: Okay
    Size: 78003 blocks (38 MB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c1t11d0s0          0     No            Okay   Yes
    Stripe 1:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c1t12d0s0          0     No            Okay   Yes
    Stripe 2: (interlace: 32 blocks)
        Device     Start Block  Dbase        State Reloc Hot Spare
        c1t13d0s1          0     No            Okay   Yes
        c1t14d0s2          0     No            Okay   Yes
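For reference, a concat/stripe with the layout shown above could be created with something like the following metainit invocation (three rows: two 1-way stripes plus a 2-way stripe with a 32-block interlace), and then used as a submirror of d0:
    # metainit d1 3 1 c1t11d0s0 1 c1t12d0s0 2 c1t13d0s1 c1t14d0s2 -i 32b
    # metainit d0 -m d1                (make d1 the first submirror of mirror d0)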
** State database replicas
SVM stores configuration and state information in a 'state database' in memory. Copies of this state database are stored on disk, where they are referred to as state database replicas. The primary purpose of the state database replicas is to provide non-volatile copies of the state database so that the SVM configuration is persistent across reboots. A secondary purpose of the replicas is to provide a 'scratch pad' to keep track of mirror region states.
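Replicas live on dedicated disk slices and are created with the metadb command. For example, an initial set of three replicas might be created with something like the following (the slice name is hypothetical):
    # metadb -a -f -c 3 c0t0d0s7       (-f is required when creating the very first replicas)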
