Why use this combination?
Because IDE disks are cheap, and because certain IDE "RAID" controllers are not real controllers, just plain controllers with a Windows driver that makes them look like a real RAID controller.
Software RAID
Install mdadm if it is not already installed. On Gentoo Linux this is done with emerge mdadm.
Create the RAID. First make sure mdadm looks at all your disks:
# echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' >> /etc/mdadm.conf
Then create the RAID itself. I have used /dev/hda5 and /dev/hda6 to run the RAID on, but it should of course be on separate disks if it is to be of any real use… Start by creating device nodes in /dev. These will be created automatically once the RAID has been created and the machine rebooted.
# mknod /dev/md0 b 9 0
# mknod /dev/md1 b 9 1
# mknod /dev/md2 b 9 2
# mknod /dev/md3 b 9 3
Then the RAID itself:
# mdadm --create /dev/md0 --chunk=64 --level=raid1 --raid-devices=2 /dev/hda5 /dev/hda6
Or
# mdadm -C /dev/md0 -l 1 -n 2 /dev/hda5 /dev/hda6
Update /etc/mdadm.conf with
# mdadm --detail --scan >> /etc/mdadm.conf
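The line appended by --scan will look something like this (the exact format varies a little between mdadm versions; the UUID is the one reported by --detail further down and will be different on your system):
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=69c37901:e04cc127:02340757:2472df82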
That is more or less it. There should now be a /dev/md0 device that can be formatted and mounted. The RAID will initially be synchronised (even if there is no data on the drive yet).
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hda6[1] hda5[0]
      20579136 blocks [2/2] [UU]

unused devices: <none>
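While the initial sync is running, /proc/mdstat shows a progress indicator. An easy way to keep an eye on it:
# watch -n 5 cat /proc/mdstat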
More information about the RAID can be obtained with mdadm:
# mdadm --query /dev/md0
/dev/md0: 19.63GiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.
/dev/md0: No md super block found, not an md component.
# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Sun May  1 00:25:08 2005
     Raid Level : raid1
     Array Size : 20579136 (19.63 GiB 21.07 GB)
    Device Size : 20579136 (19.63 GiB 21.07 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun May  1 02:00:10 2005
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 69c37901:e04cc127:02340757:2472df82
         Events : 0.36

    Number   Major   Minor   RaidDevice State
       0       3        5        0      active sync   /dev/hda5
       1       3        6        1      active sync   /dev/hda6
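Should one half of the mirror die later on, the replacement goes roughly like this (a sketch using /dev/hda5 as the failed device; adapt it to your own layout):
# mdadm /dev/md0 --fail /dev/hda5
# mdadm /dev/md0 --remove /dev/hda5
(swap the disk and recreate the partition)
# mdadm /dev/md0 --add /dev/hda5
The array then resynchronises onto the new device, which can be followed in /proc/mdstat as above.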
Logical Volume Management
LVM2 can perfectly well be used without any kind of software RAID set up first. However, I would not recommend using it across several physical disks unless they are part of a hardware RAID. That is, as long as you care about your data: if a disk dies, do not expect to be able to use the data on the others.
Install LVM2
# emerge sys-fs/lvm2
Activate LVM
# vgscan
  Reading all physical volumes.  This may take a while...
  No volume groups found
Create the LVM devices
# pvcreate /dev/md0
Physical volume "/dev/md0" successfully created
Create the volume group
# vgcreate vg /dev/md0
Volume group "vg" successfully created
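As the vgdisplay output below shows, vgcreate defaults to a physical extent size of 4 MB. A different extent size can be chosen at creation time with -s (an optional variation, not used in this setup):
# vgcreate -s 32M vg /dev/md0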
Create logical volumes
# lvcreate -L 5G -n files vg
Logical volume "files" created
Check the result
# vgdisplay -v vg | more
    Using volume group(s) on command line
    Finding volume group "vg"
  --- Volume group ---
  VG Name               vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               19.62 GB
  PE Size               4.00 MB
  Total PE              5024
  Alloc PE / Size       1280 / 5.00 GB
  Free  PE / Size       3744 / 14.62 GB
  VG UUID               Gt5Mk1-DrIU-2o5p-3SXY-VulR-Ukv1-Z4dsIS

  --- Logical volume ---
  LV Name                /dev/vg/files
  VG Name                vg
  LV UUID                eZHCw2-TK7K-Q2B3-pPN1-oBeS-BtVl-8SfzWe
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                5.00 GB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           252:0

  --- Physical volumes ---
  PV Name               /dev/md0
  PV UUID               t6yv7M-Zuk2-38pt-x7tD-9yBl-1KrT-HJU5IR
  PV Status             allocatable
  Total PE / Free PE    5024 / 3744

# ls -l /dev/vg/
total 0
lrwxrwxrwx 1 root root 20 May  1 01:22 files -> /dev/mapper/vg-files
Create the filesystem
# mkreiserfs /dev/vg/files
mkreiserfs 3.6.19 (2003 www.namesys.com)

A pair of credits:
BigStorage (www.bigstorage.com) contributes to our general fund every month,
and has done so for quite a long time.

Lycos Europe (www.lycos-europe.com) had a support contract with us that
consistently came in just when we would otherwise have missed payroll, and that
they kept doubling every year. Much thanks to them.

Guessing about desired format.. Kernel 2.6.11-gentoo-r6 is running.
Format 3.6 with standard journal
Count of blocks on the device: 1310720
Number of blocks consumed by mkreiserfs formatting process: 8251
Blocksize: 4096
Hash function used to sort names: "r5"
Journal Size 8193 blocks (first block 18)
Journal Max transaction length 1024
inode generation number: 0
UUID: 3e4bc306-42c2-45b7-b731-0de54439569c
ATTENTION: YOU SHOULD REBOOT AFTER FDISK!
        ALL DATA WILL BE LOST ON '/dev/vg/files'!
Continue (y/n):y
Initializing journal - 0%....20%....40%....60%....80%....100%
Syncing..ok

Tell your friends to use a kernel based on 2.4.18 or later, and especially not a
kernel based on 2.4.9, when you use reiserFS. Have fun.

ReiserFS is successfully created on /dev/vg/files.
Mount the filesystem
# mkdir /mnt/files
# mount /dev/vg/files /mnt/files
Make sure to add your new partition to /etc/fstab. On Gentoo Linux at least, this also ensures that LVM is started cleanly at boot.
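An entry for it could look something like this (noatime is just my preference):
/dev/vg/files   /mnt/files   reiserfs   noatime   0 0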
The fun part
Extend files by 5 GB:
# lvextend -L+5G /dev/vg/files
  Extending logical volume files to 10.00 GB
  Logical volume files successfully resized
Resize the filesystem (let it use all the available space by not specifying a size). This can be done on a mounted filesystem, as long as it is being grown.
# df /mnt/files
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/vg/files          5242716     32852   5209864   1% /mnt/files

# resize_reiserfs /dev/vg/files
resize_reiserfs 3.6.19 (2003 www.namesys.com)
resize_reiserfs: On-line resizing finished successfully.

# df /mnt/files
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/vg/files         10485436     32852  10452584   1% /mnt/files
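Shrinking works too, but unlike growing it has to be done with the filesystem unmounted, and the filesystem must be shrunk before the logical volume (a sketch with an arbitrary 2 GB; doing it in the wrong order will destroy the filesystem):
# umount /mnt/files
# resize_reiserfs -s -2G /dev/vg/files
# lvreduce -L-2G /dev/vg/files
# mount /dev/vg/files /mnt/files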
Extending the volume group: for when you have acquired more disks, or have spread your LVM over several partitions. Here /dev/hda7 is the new disk.
# pvcreate /dev/hda7
  Physical volume "/dev/hda7" successfully created

# vgextend vg /dev/hda7
  Volume group "vg" successfully extended

# vgdisplay -v vg
    Using volume group(s) on command line
    Finding volume group "vg"
  --- Volume group ---
  VG Name               vg
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               39.25 GB
  PE Size               4.00 MB
  Total PE              10048
  Alloc PE / Size       2560 / 10.00 GB
  Free  PE / Size       7488 / 29.25 GB
  VG UUID               Gt5Mk1-DrIU-2o5p-3SXY-VulR-Ukv1-Z4dsIS

  --- Logical volume ---
  LV Name                /dev/vg/files
  VG Name                vg
  LV UUID                eZHCw2-TK7K-Q2B3-pPN1-oBeS-BtVl-8SfzWe
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                10.00 GB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           252:0

  --- Physical volumes ---
  PV Name               /dev/md0
  PV UUID               t6yv7M-Zuk2-38pt-x7tD-9yBl-1KrT-HJU5IR
  PV Status             allocatable
  Total PE / Free PE    5024 / 2464

  PV Name               /dev/hda7
  PV UUID               HS1dsy-r4d6-hWCq-V31m-avEr-rRPk-XKdzO4
  PV Status             allocatable
  Total PE / Free PE    5024 / 5024
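The reverse, taking a disk out of the volume group again, also works as long as the remaining physical volumes have room for the data (a sketch; pvmove can take a long while):
# pvmove /dev/hda7
# vgreduce vg /dev/hda7
# pvremove /dev/hda7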
Tips
Booting from RAID
Assuming you already have a Linux installation running on /dev/hda and decide to move it to a new RAID starting on /dev/hdb, the following is a good thing to do as far as grub is concerned.
# grub
grub> device (hd0) /dev/hdb
grub> root (hd0,0)
grub> setup (hd0)
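If /boot itself lives on a RAID1 mirror, it is worth repeating the setup on the second disk as well, so the machine can still boot if the first one dies (a sketch, assuming /dev/hdc is the other half of the mirror):
# grub
grub> device (hd0) /dev/hdc
grub> root (hd0,0)
grub> setup (hd0)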
During installation
If you are doing this as part of an installation, make sure (after everything has been unmounted again) to stop your volume groups cleanly.
# vgchange -a n
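At the next boot the LVM init scripts will activate everything again; done by hand, the counterpart is simply:
# vgchange -a y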