This document concentrates on setting up and managing software RAID and Logical Volume Management (LVM2) on Linux.
Why use this combination?
Because IDE disks are cheap, and because certain IDE “RAID” controllers are not real RAID controllers at all, but plain controllers with a Windows driver that makes them look like real RAID controllers.
Software RAID
Install mdadm if it is not already installed. On Gentoo Linux you can do it with emerge mdadm.
Make the RAID – First make sure that mdadm looks at all your disks:
#echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' >> /etc/mdadm.conf
Now make the actual RAID – I have used /dev/hda5 and /dev/hda6 to build the RAID on, but those should of course be on separate disks if it is to be used for anything useful… Start by making the device nodes in /dev; they will be created automatically once the RAID has been created and the machine rebooted, but we need them now.
#mknod /dev/md0 b 9 0
#mknod /dev/md1 b 9 1
#mknod /dev/md2 b 9 2
#mknod /dev/md3 b 9 3
Now mdadm will be happy to use the devices and create the RAID:
#mdadm --create /dev/md0 --chunk=64 --level=raid1 --raid-devices=2 /dev/hda5 /dev/hda6
Or
#mdadm -C /dev/md0 -l 1 -n 2 /dev/hda5 /dev/hda6
Update /etc/mdadm.conf with
#mdadm --detail --scan >> /etc/mdadm.conf
That’s it! Now there should be a /dev/md0 device, which can be formatted and mounted (see the example below). The RAID itself starts out by synchronizing the devices (even though you have not put any data on them yet).
#cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hda6[1] hda5[0]
      20579136 blocks [2/2] [UU]

unused devices: <none>
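If you are not going to put LVM on top of the array (as we do later in this article), /dev/md0 can be used directly once it exists. For example, to put an ext3 filesystem on it and mount it (the filesystem type and the /mnt/raid mount point are just arbitrary choices for illustration):

#mke2fs -j /dev/md0
#mkdir /mnt/raid
#mount /dev/md0 /mnt/raid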
Get more info on the RAID with mdadm
#mdadm --query /dev/md0
/dev/md0: 19.63GiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.
/dev/md0: No md super block found, not an md component.
#mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Sun May  1 00:25:08 2005
     Raid Level : raid1
     Array Size : 20579136 (19.63 GiB 21.07 GB)
    Device Size : 20579136 (19.63 GiB 21.07 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun May  1 02:00:10 2005
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 69c37901:e04cc127:02340757:2472df82
         Events : 0.36

    Number   Major   Minor   RaidDevice State
       0       3        5        0      active sync   /dev/hda5
       1       3        6        1      active sync   /dev/hda6
When disks die (And they will)
The first task is to determine which physical disk has died. This might seem obvious at the time of installation, but…
The second task is to replace the dead disk with a not-so-dead disk, which has to be the same size or larger. Mirror the partition layout by issuing the command:
#sfdisk -d /dev/hda | sfdisk /dev/hdb
This makes sure that /dev/hdb gets the same partition layout as /dev/hda.
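Putting the pieces together, a rough sketch of a complete replacement might look like this. Here I assume a mirror over /dev/hda6 and /dev/hdb6, that the /dev/hdb disk is the one that died and has been physically replaced, and that /dev/hda is the surviving disk; the device names are only an example:

#mdadm --fail /dev/md0 /dev/hdb6
#mdadm --remove /dev/md0 /dev/hdb6
#sfdisk -d /dev/hda | sfdisk /dev/hdb
#mdadm --add /dev/md0 /dev/hdb6

Often the kernel will already have marked the partition as faulty, in which case the --fail step is not needed. The array then rebuilds onto the new partition, and the progress can be followed in /proc/mdstat.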
To return to our example setup: I decided to see what would happen if another drive was added to the game, so I added a third “disk” to the setup:
#mdadm --add /dev/md0 /dev/hda7
mdadm: hot added /dev/hda7
#cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hda7[2](S) hda6[1] hda5[0]
      20579136 blocks [2/2] [UU]
This gives us a spare in the RAID. mdadm --detail /dev/md0 will also show that Spare Devices is 1. So far so good. Notice that it does not start rebuilding onto the spare device right away. To see whether it works, I marked one of the working devices as faulty:
#mdadm --fail /dev/md0 /dev/hda6
mdadm: set /dev/hda6 faulty in /dev/md0
Resulting in more noise from the computer as it started to rebuild the RAID:
#cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hda7[2] hda6[3](F) hda5[0]
      20579136 blocks [2/1] [U_]
      [================>....]  recovery = 80.7% (16608256/20579136) finish=5.3min speed=12347K/sec

unused devices: <none>
#mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.02
  Creation Time : Sun May  1 00:25:08 2005
     Raid Level : raid1
     Array Size : 20579136 (19.63 GiB 21.07 GB)
    Device Size : 20579136 (19.63 GiB 21.07 GB)
   Raid Devices : 2
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Jan 19 00:51:12 2006
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 1

 Rebuild Status : 80% complete

           UUID : 69c37901:e04cc127:02340757:2472df82
         Events : 0.1346

    Number   Major   Minor   RaidDevice State
       0       3        5        0      active sync   /dev/hda5
       1       0        0        -      removed
       2       3        7        1      spare rebuilding   /dev/hda7
       3       3        6        -      faulty   /dev/hda6
So it worked as expected. Great: we can now sleep tight at night, knowing that our files are safe, at least from disk crashes.
To clean our act up a bit, remove the failed device from our RAID:
#mdadm --remove /dev/md0 /dev/hda6
mdadm: hot removed /dev/hda6
It is worth mentioning that it is not possible to remove a device that has not either failed or been marked as a spare. For example:
#mdadm --remove /dev/md0 /dev/hda7
mdadm: hot remove failed for /dev/hda7: Device or resource busy
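If you really do want to take an active device out of a healthy array, you first have to mark it as failed yourself, something like this (using the spare-turned-active /dev/hda7 from the example above):

#mdadm --fail /dev/md0 /dev/hda7
#mdadm --remove /dev/md0 /dev/hda7

Bear in mind that this leaves the mirror degraded until a new device is added.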
Logical Volume Management
There is no problem in using LVM2 without any kind of RAID underneath it. But I would not advise using it across several physical disks unless they are part of an underlying RAID. That is, as long as you have any kind of relationship with your data: if one disk goes up in smoke, you should not expect to be able to use the data on the other disks in the same volume group.
Install LVM2
#emerge sys-fs/lvm2
Activate LVM
#vgscan
Reading all physical volumes. This may take a while...
No volume groups found
If you already have a volume group, it will show up here. But vgscan will not create the device nodes for your logical volumes in /dev – vgchange will take care of that:
#vgchange -a y
5 logical volume(s) in volume group “vg” now active
Make the LVM devices
Here you assign which physical devices are to be used by LVM2.
#pvcreate /dev/md0
Physical volume "/dev/md0" successfully created
Make the Volume Group
Volume groups span the physical devices, and they are the pool of space from which you hand out pieces to the logical volumes.
#vgcreate vg /dev/md0
Volume group "vg" successfully created
Make the Logical Volumes
#lvcreate -L 5G -n files vg
Logical volume "files" created
Check out the result
#vgdisplay -v vg | more
    Using volume group(s) on command line
    Finding volume group "vg"
  --- Volume group ---
  VG Name               vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               19.62 GB
  PE Size               4.00 MB
  Total PE              5024
  Alloc PE / Size       1280 / 5.00 GB
  Free PE / Size        3744 / 14.62 GB
  VG UUID               Gt5Mk1-DrIU-2o5p-3SXY-VulR-Ukv1-Z4dsIS

  --- Logical volume ---
  LV Name                /dev/vg/files
  VG Name                vg
  LV UUID                eZHCw2-TK7K-Q2B3-pPN1-oBeS-BtVl-8SfzWe
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                5.00 GB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           252:0

  --- Physical volumes ---
  PV Name               /dev/md0
  PV UUID               t6yv7M-Zuk2-38pt-x7tD-9yBl-1KrT-HJU5IR
  PV Status             allocatable
  Total PE / Free PE    5024 / 3744

#ls -l /dev/vg/
total 0
lrwxrwxrwx  1 root root 20 May  1 01:22 files -> /dev/mapper/vg-files
Make a filesystem on your new logical volume
#mkreiserfs /dev/vg/files
mkreiserfs 3.6.19 (2003 www.namesys.com)

A pair of credits:

BigStorage (www.bigstorage.com) contributes to our general fund every month,
and has done so for quite a long time.

Lycos Europe (www.lycos-europe.com) had a support contract with us that
consistently came in just when we would otherwise have missed payroll, and that
they kept doubling every year. Much thanks to them.

Guessing about desired format.. Kernel 2.6.11-gentoo-r6 is running.
Format 3.6 with standard journal
Count of blocks on the device: 1310720
Number of blocks consumed by mkreiserfs formatting process: 8251
Blocksize: 4096
Hash function used to sort names: “r5”
Journal Size 8193 blocks (first block 18)
Journal Max transaction length 1024
inode generation number: 0
UUID: 3e4bc306-42c2-45b7-b731-0de54439569c
ATTENTION: YOU SHOULD REBOOT AFTER FDISK!
ALL DATA WILL BE LOST ON ‘/dev/vg/files’!
Continue (y/n):y
Initializing journal – 0%….20%….40%….60%….80%….100%
Syncing..ok

Tell your friends to use a kernel based on 2.4.18 or later, and especially not a
kernel based on 2.4.9, when you use reiserFS. Have fun.

ReiserFS is successfully created on /dev/vg/files.
Mount the filesystem
#mkdir /mnt/files
#mount /dev/vg/files /mnt/files
Make sure to add your new filesystem to /etc/fstab – a sample entry is shown below. This also makes sure, at least on Gentoo Linux, that LVM gets activated nicely at boot.
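What such an entry could look like, assuming the reiserfs filesystem and the mount point used above (adjust the options to your liking):

/dev/vg/files   /mnt/files   reiserfs   noatime   0 0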
The fun part
Expand the files LV by 5GB:
#lvextend -L+5G /dev/vg/files
Extending logical volume files to 10.00 GB
Logical volume files successfully resized
Adjust the size of the filesystem on /dev/vg/files. We let it use all the available space by not specifying a size. This can be done on a mounted filesystem, as long as you are extending it.
#df /mnt/files
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/vg/files          5242716     32852   5209864   1% /mnt/files
#resize_reiserfs /dev/vg/files
resize_reiserfs 3.6.19 (2003 www.namesys.com)
resize_reiserfs: On-line resizing finished successfully.
#df /mnt/files
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/vg/files         10485436     32852  10452584   1% /mnt/files
Extension of the volume group – for when you have gotten yourself a new disk or split your LVM over several partitions. Here /dev/hda7 is simulating the new disk's partition.
#pvcreate /dev/hda7
Physical volume "/dev/hda7" successfully created
#vgextend vg /dev/hda7
Volume group "vg" successfully extended
#vgdisplay -v vg
    Using volume group(s) on command line
    Finding volume group "vg"
  --- Volume group ---
  VG Name               vg
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               39.25 GB
  PE Size               4.00 MB
  Total PE              10048
  Alloc PE / Size       2560 / 10.00 GB
  Free PE / Size        7488 / 29.25 GB
  VG UUID               Gt5Mk1-DrIU-2o5p-3SXY-VulR-Ukv1-Z4dsIS

  --- Logical volume ---
  LV Name                /dev/vg/files
  VG Name                vg
  LV UUID                eZHCw2-TK7K-Q2B3-pPN1-oBeS-BtVl-8SfzWe
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                10.00 GB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           252:0

  --- Physical volumes ---
  PV Name               /dev/md0
  PV UUID               t6yv7M-Zuk2-38pt-x7tD-9yBl-1KrT-HJU5IR
  PV Status             allocatable
  Total PE / Free PE    5024 / 2464

  PV Name               /dev/hda7
  PV UUID               HS1dsy-r4d6-hWCq-V31m-avEr-rRPk-XKdzO4
  PV Status             allocatable
  Total PE / Free PE    5024 / 5024
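With the extra space in the volume group, the logical volume and its filesystem can then be grown into it exactly as before. For example (the extra 20GB is just an arbitrary number that fits within the free space shown above):

#lvextend -L+20G /dev/vg/files
#resize_reiserfs /dev/vg/files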
Tips
Boot from the RAID
If we assume that you already have a running installation on /dev/hda and decide to move it to a new RAID starting on /dev/hdb, the following would be a good thing to do regarding grub.
#grub
grub>device (hd0) /dev/hdb
grub>root (hd0,0)
grub>setup (hd0)
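Before this grub step the running system of course has to be copied onto the new array somehow. A rough sketch, assuming the new RAID is /dev/md0, it already carries a filesystem, and /mnt/newroot is just a temporary mount point chosen for this example:

#mkdir /mnt/newroot
#mount /dev/md0 /mnt/newroot
#cp -ax / /mnt/newroot

Remember to adjust /etc/fstab and the grub configuration on the copy so they point at the new devices before rebooting.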
During installation
If you are installing directly onto a new RAID, make sure (after everything is unmounted again) to deactivate your volume groups:
#vgchange -a n
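If you also want to shut the array itself down cleanly (assuming the /dev/md0 array from the example above), mdadm can do that too:

#mdadm --stop /dev/md0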