Lab requirements:
1) Create a RAID1 array;
2) Add a hot spare disk;
3) Simulate a disk failure and have the spare take over automatically;
4) Remove the failed disk from the RAID1 array.
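Before creating the array, it helps to confirm the intended member disks exist and carry no partitions or mounts (an optional check, assuming the three disks are sdd, sde, and sdf as in this lab):
lsblk /dev/sdd /dev/sde /dev/sdf
# Each disk should appear with no child partitions and an empty MOUNTPOINT column.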
1. Create the RAID1 array with two active disks and one spare:
mdadm -C -v /dev/md1 -l 1 -n 2 -x 1 /dev/sd[d,e,f]
[root@192 ~]# mdadm -C -v /dev/md1 -l 1 -n 2 -x 1 /dev/sd[d,e,f]
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 20954112K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
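The flags mean: -C create, -v verbose, -l 1 RAID level 1, -n 2 active devices, -x 1 spare. Note that /dev/sd[d,e,f] is shell globbing, not mdadm syntax; the shell expands it to the existing devices before mdadm runs, which can be confirmed with echo (an illustrative check, not part of the original lab):
echo /dev/sd[d,e,f]
# prints: /dev/sdd /dev/sde /dev/sdf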
2. Save the array configuration:
mdadm -Dsv > /etc/mdadm.conf
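mdadm -Dsv (--detail --scan --verbose) prints one ARRAY line per array, so the saved file should contain entries roughly like the following (the UUID matches the -D output below; exact fields vary by mdadm version, and a similar line exists for md0):
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=1.2 spares=1 name=192.168.74.128:1 UUID=af8a3ec5:715b9882:5ae40383:db213061
   devices=/dev/sdd,/dev/sde,/dev/sdf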
3. View the array details:
mdadm -Dsv or mdadm -D /dev/md1
The output shows the resync progress.
[root@192 ~]# mdadm -Dsv > /etc/mdadm.conf
[root@192 ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Dec 15 03:07:16 2020
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Dec 15 03:08:49 2020
             State : clean, resyncing
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

     Resync Status : 35% complete

              Name : 192.168.74.128:1  (local to host 192.168.74.128)
              UUID : af8a3ec5:715b9882:5ae40383:db213061
            Events : 5

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       64        1      active sync   /dev/sde

       2       8       80        -      spare   /dev/sdf
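To check just the state and progress without reading the whole report, grep the relevant fields (a convenience one-liner, not part of the original lab):
mdadm -D /dev/md1 | grep -E 'State :|Resync Status'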
4. View the sync progress in /proc/mdstat:
cat /proc/mdstat
[root@192 ~]# cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 sdf[2](S) sde[1] sdd[0]
      20954112 blocks super 1.2 [2/2] [UU]
      [=========>...........]  resync = 49.2% (10327424/20954112) finish=2.1min speed=82512K/sec

md0 : active raid0 sdc[1] sdb[0]
      41908224 blocks super 1.2 512k chunks

unused devices: <none>
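To follow the resync continuously instead of re-running cat, watch(1) can poll the file (press Ctrl+C to exit):
watch -n 2 cat /proc/mdstat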
5. Format the array with XFS:
mkfs.xfs /dev/md1
[root@192 ~]# mkfs.xfs /dev/md1
meta-data=/dev/md1               isize=512    agcount=4, agsize=1309632 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5238528, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
6. Create a mount point and mount the array:
mkdir /raid1
mount /dev/md1 /raid1
(The first mount attempt below fails because the mount point was mistyped as /dev/raid1; the correct path is /raid1.)
[root@192 ~]# mkdir /raid1
[root@192 ~]# mount /dev/md1 /dev/raid1
mount: mount point /dev/raid1 does not exist
[root@192 ~]# mount /dev/md1 /raid1
[root@192 ~]#
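Optionally, to make the mount survive reboots, an fstab entry can be added (a sketch, not part of the original lab; production setups often use the filesystem UUID from blkid instead of the device name):
echo '/dev/md1 /raid1 xfs defaults 0 0' >> /etc/fstab
mount -a    # re-reads fstab; an error here means the entry is wrong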
7. Write test data:
cp /etc/passwd /raid1/
cp -r /boot/grub /raid1/
8. Check the filesystem size and confirm the data is there:
df -h
(The first two cp attempts fail because copying a directory requires -r.)
[root@192 ~]# cp /boot/grub/ /raid1/
cp: omitting directory ‘/boot/grub/’
[root@192 ~]# cp /boot/grub /raid1/
cp: omitting directory ‘/boot/grub’
[root@192 ~]# cp -r /boot/grub /raid1/
[root@192 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 898M     0  898M   0% /dev
tmpfs                    910M     0  910M   0% /dev/shm
tmpfs                    910M  9.6M  901M   2% /run
tmpfs                    910M     0  910M   0% /sys/fs/cgroup
/dev/mapper/centos-root   17G  1.3G   16G   8% /
/dev/md0                  40G   33M   40G   1% /raid0
/dev/sda1               1014M  151M  864M  15% /boot
tmpfs                    182M     0  182M   0% /run/user/0
/dev/md1                  20G   33M   20G   1% /raid1
9. Simulate a failure by marking /dev/sde as faulty:
mdadm /dev/md1 -f /dev/sde
[root@192 ~]# mdadm /dev/md1 -f /dev/sde
mdadm: set /dev/sde faulty in /dev/md1
[root@192 ~]#
10. Verify that the hot spare has automatically taken over and is resyncing:
mdadm -D /dev/md1
[root@192 ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Dec 15 03:07:16 2020
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Dec 15 03:16:45 2020
             State : clean, degraded, recovering
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 11% complete

              Name : 192.168.74.128:1  (local to host 192.168.74.128)
              UUID : af8a3ec5:715b9882:5ae40383:db213061
            Events : 22

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       2       8       80        1      spare rebuilding   /dev/sdf

       1       8       64        -      faulty   /dev/sde
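To block until the rebuild finishes before doing anything else, mdadm can wait on the array, or a small loop can poll /proc/mdstat (a sketch; the grep matches the "recovery" progress line shown during a spare rebuild):
mdadm -W /dev/md1                                   # --wait: returns when resync/recovery is done
# or:
while grep -q recovery /proc/mdstat; do sleep 10; done
mdadm -D /dev/md1 | grep 'State :'                  # should then report clean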
11. After any change, save the array configuration again:
mdadm -Dsv > /etc/mdadm.conf
12. Verify that no data was lost:
ls /raid1/
[root@192 ~]# ls /raid1/
grub  passwd
[root@192 ~]#
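ls only proves the names survived; comparing contents against the sources is a stronger check (an optional verification, not part of the original lab):
md5sum /etc/passwd /raid1/passwd    # the two checksums should match
diff -r /boot/grub /raid1/grub      # no output means the copies are identical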
13. Remove the failed disk from the array and verify:
mdadm -r /dev/md1 /dev/sde
mdadm -D /dev/md1
[root@192 ~]# mdadm -r /dev/md1 /dev/sde
mdadm: hot removed /dev/sde from /dev/md1
[root@192 ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Dec 15 03:07:16 2020
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Dec 15 03:17:52 2020
             State : clean, degraded, recovering
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 36% complete

              Name : 192.168.74.128:1  (local to host 192.168.74.128)
              UUID : af8a3ec5:715b9882:5ae40383:db213061
            Events : 31

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       2       8       80        1      spare rebuilding   /dev/sdf
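If a removed disk is being retired rather than reused, its md superblock can be wiped so it is no longer recognized as an array member (destructive; do not run it in this lab, since sde is added back in the next step):
mdadm --zero-superblock /dev/sde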
14. Add device sde back as a spare and verify:
mdadm -a /dev/md1 /dev/sde
mdadm -D /dev/md1
[root@192 ~]# mdadm -a /dev/md1 /dev/sde
mdadm: added /dev/sde
[root@192 ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Dec 15 03:07:16 2020
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Dec 15 03:18:48 2020
             State : clean, degraded, recovering
    Active Devices : 1
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 2

Consistency Policy : resync

    Rebuild Status : 55% complete

              Name : 192.168.74.128:1  (local to host 192.168.74.128)
              UUID : af8a3ec5:715b9882:5ae40383:db213061
            Events : 36

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       2       8       80        1      spare rebuilding   /dev/sdf

       3       8       64        -      spare   /dev/sde
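In production, failures should be noticed without anyone running -D by hand. mdadm can monitor all configured arrays and send mail on events (a sketch, assuming local mail delivery works; on CentOS the mdmonitor systemd service provides the same function):
echo 'MAILADDR root' >> /etc/mdadm.conf
mdadm --monitor --scan --daemonise --delay=60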
Results:
Disks sdd and sde form the RAID1 array md1; sdf serves as a hot spare and automatically replaces a failed member.
Analysis: Comparing Experiment 1 (RAID0) with Experiment 2 (RAID1), RAID0 has 100% disk utilization and an array size equal to the sum of its member disks (two 20 GB disks yield the 40 GB md0 above), while RAID1 utilizes only 50% because every block is mirrored (two 20 GB disks yield the 20 GB md1), which is why it is called a mirror set.
**RAID1 is commonly used for databases and system disks, where data safety is the priority.**