mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sde

mdadm: Defaulting to version 1.2 metadata

mdadm: array /dev/md0 started.

mdadm --query /dev/md0

/dev/md0: 8.00GiB raid0 2 devices, 0 spares. Use mdadm --detail for more detail.

mdadm --detail /dev/md0

/dev/md0:

        Version : 1.2

  Creation Time : Thu Sep 30 15:21:15 2010

     Raid Level : raid0

     Array Size : 8388480 (8.00 GiB 8.59 GB)

   Raid Devices : 2

  Total Devices : 2

    Persistence : Superblock is persistent

    Update Time : Thu Sep 30 15:21:15 2010

          State : active

 Active Devices : 2

Working Devices : 2

 Failed Devices : 0

  Spare Devices : 0

     Chunk Size : 512K

           Name : squeeze:0  (local to host squeeze)

           UUID : 0012a273:cbdb8b83:0ee15f7f:aec5e3c3

         Events : 0

    Number   Major   Minor   RaidDevice State

       0       8        0        0      active sync   /dev/sda

       1       8       64        1      active sync   /dev/sde

mkfs.ext4 /dev/md0

mke2fs 1.41.12 (17-May-2010)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=0 blocks, Stripe width=0 blocks

524288 inodes, 2097152 blocks

104857 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=2147483648

64 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

mkdir /srv/raid-0

mount /dev/md0 /srv/raid-0

df -h /srv/raid-0

Filesystem            Size  Used Avail Use% Mounted on

/dev/md0              8.0G  249M  7.4G   4% /srv/raid-0

The mdadm --create command requires several parameters: the name of the volume to create (/dev/md*, with MD standing for Multiple Device), the RAID level, the number of disks (which is compulsory despite being mostly meaningful only with RAID-1 and above), and the physical drives to use. Once the device is created, we can use it like we would a normal partition: create a filesystem on it, mount that filesystem, and so on. Note that creating this RAID-0 volume on md0 is pure coincidence; the number assigned to an array need not match the chosen level of redundancy.
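For the array to be reassembled automatically at boot, its description is usually recorded in mdadm's configuration file. A minimal sketch, assuming the standard Debian location of that file (/etc/mdadm/mdadm.conf):

# Append an ARRAY line describing every currently assembled array
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# Refresh the initramfs so that arrays needed early at boot are known there too
update-initramfs -u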

Creating a RAID-1 volume follows a similar pattern; the differences only become noticeable after the creation:

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdg2 /dev/sdh

mdadm: largest drive (/dev/sdg2) exceeds size (4194240K) by more than 1%

Continue creating array? y

mdadm: array /dev/md1 started.

mdadm --query /dev/md1

/dev/md1: 4.00GiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.

mdadm --detail /dev/md1

/dev/md1:

        Version : 1.2

  Creation Time : Thu Sep 30 15:39:13 2010

     Raid Level : raid1

     Array Size : 4194240 (4.00 GiB 4.29 GB)

  Used Dev Size : 4194240 (4.00 GiB 4.29 GB)

   Raid Devices : 2

  Total Devices : 2

    Persistence : Superblock is persistent

    Update Time : Thu Sep 30 15:39:26 2010

          State : active, resyncing

 Active Devices : 2

Working Devices : 2

 Failed Devices : 0

  Spare Devices : 0

 Rebuild Status : 10% complete

           Name : squeeze:1  (local to host squeeze)

           UUID : 20a8419b:41612750:b9171cfe:00d9a432

         Events : 27

    Number   Major   Minor   RaidDevice State

       0       8       98        0      active sync   /dev/sdg2

       1       8      112        1      active sync   /dev/sdh

mdadm --detail /dev/md1

/dev/md1:

[...]

          State : active

[...]

TIP RAID, disks and partitions

As illustrated by our example, RAID devices can be constructed out of disk partitions, and do not require full disks.

A few remarks are in order. First, mdadm notices that the physical elements have different sizes; since this implies that some space will be lost on the bigger element, a confirmation is required.
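When arrays are created from a script rather than interactively, this confirmation question gets in the way; mdadm's --run option is meant to suppress it. A sketch reusing the device names from the example above:

# Non-interactive creation: --run tells mdadm not to ask for confirmation
mdadm --create /dev/md1 --level=1 --raid-devices=2 --run /dev/sdg2 /dev/sdh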

More importantly, note the state of the mirror. The normal state of a RAID mirror is that both disks have exactly the same contents. However, nothing guarantees this is the case when the volume is first created. The RAID subsystem therefore provides that guarantee itself, and a synchronisation phase starts as soon as the RAID device is created. After some time (the exact amount will depend on the actual size of the disks…), the RAID array switches to the “active” state. Note that during this reconstruction phase, the mirror is in a degraded mode, and redundancy isn't assured: a disk failing during that risk window could lead to losing all the data. Large amounts of critical data, however, are rarely stored on a freshly created RAID array before its initial synchronisation. Note that even in degraded mode, the /dev/md1 device is usable: a filesystem can be created on it and data can already be copied onto it.
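The progress of this initial synchronisation can be followed from another terminal; a short sketch of the usual commands (nothing here is specific to this example beyond the array name):

# The kernel exposes the state of all MD arrays, including resync progress
cat /proc/mdstat
# Refresh the view every two seconds until synchronisation completes
watch -n 2 cat /proc/mdstat
# Alternatively, block until any resync/recovery on the array has finished
mdadm --wait /dev/md1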

TIP Starting a mirror in degraded mode

Sometimes two disks are not immediately available when one wants to set up a RAID-1 mirror, for instance because one of the disks to be included already holds the data that is to be moved onto the array. In such circumstances, it is possible to deliberately create a degraded RAID-1 array by passing missing instead of a device file as one of the arguments to mdadm. Once the data have been copied to the “mirror”, the old disk can be added to the array. A synchronisation then takes place, giving us the redundancy that was wanted in the first place.
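A sketch of that procedure, reusing the device names of the example above (here /dev/sdh is assumed to be the free disk and /dev/sdg2 the partition still holding the data):

# Create a deliberately degraded mirror, with the second member missing
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdh missing
# ... create a filesystem on /dev/md1, mount it, copy the data over ...
# then attach the old disk; synchronisation starts automatically
mdadm --add /dev/md1 /dev/sdg2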