
# 《Linux就该这么学》 Chapter 7: RAID and LVM Disk Array Technology

Created: 2021-02-24 19:54:11
Updated: 2021-02-24 19:54:11

RAID and LVM make it possible to use hard disks much more flexibly.

## RAID Arrays

A comparison of the common RAID levels (n is the number of disks):

| RAID level | Min. disks | Usable capacity | Read/write performance | Characteristics |
| --- | --- | --- | --- | --- |
| 0 | 2 | n | n | Maximizes capacity and speed; if any one disk fails, all data is lost. |
| 1 | 2 | n/2 | n | Maximizes safety; as long as one disk in the array survives, the data is unaffected. |
| 5 | 3 | n-1 | n-1 | Balances cost against capacity, speed, and safety; one disk may fail without affecting the data. |
| 10 | 4 | n/2 | n/2 | Combines the strengths of RAID 1 and RAID 0 for both speed and safety; up to half of the disks may fail (as long as they are not in the same mirror pair) without affecting the data. |

### RAID 0

RAID 0 is used to increase read/write speed: different pieces of data are read from and written to multiple disks at the same time. It provides no data redundancy or backup capability.

(Figure: RAID 0 striping data across multiple disks)

### RAID 1

RAID 1 is used for data redundancy: identical data is written to multiple disks, so when one disk fails, no data is lost.

(Figure: RAID 1 mirroring identical data to multiple disks)

### RAID 5

RAID 5 consists of at least three disks. Data blocks and their corresponding parity information are distributed across all the disks, with the parity for any given data always stored on a different disk than the data itself. RAID 5 can therefore tolerate the failure of one disk.

(Figure: RAID 5 distributing data and parity blocks across the disks)

How RAID 5 stores data and computes parity:

(Figure: layout of data blocks and parity in a RAID 5 stripe)

When a disk fails and a new disk is installed to replace it:

(Figure: rebuilding the replacement disk from the surviving data and parity)

The principle of the RAID 5 parity algorithm:

P = D1 xor D2 xor D3 … xor Dn  (D1, D2, D3 … Dn are the data blocks, P is the parity block, and xor is the exclusive-or operation)

The truth table of XOR (exclusive OR):

| A | B | A xor B |
| --- | --- | --- |
| 0 | 0 | 0 |
| 1 | 0 | 1 |
| 0 | 1 | 1 |
| 1 | 1 | 0 |
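
To make the recovery property concrete, here is a small bash sketch (my illustration, not from the book; the byte values are made up) showing that the parity block lets any single lost data block be rebuilt from the survivors:

```shell
# Made-up byte values standing in for three data blocks of one stripe
D1=0xA5; D2=0x3C; D3=0x0F
P=$(( D1 ^ D2 ^ D3 ))                     # parity block: XOR of all data blocks
printf 'parity    P  = 0x%02X\n' "$P"
# If the disk holding D2 fails, XOR the surviving blocks with the parity:
printf 'recovered D2 = 0x%02X\n' $(( D1 ^ D3 ^ P ))   # prints 0x3C again
```
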
Tip: as the number of disks and the capacity of each disk grow, the odds of successfully rebuilding a RAID 5 array drop, so RAID 5 is rarely used in real production environments.

### RAID 10

RAID 10 requires at least four disks: pairs of disks are mirrored as RAID 1, and the RAID 1 pairs are then striped together as RAID 0.

(Figure: RAID 10 striping across mirrored pairs)

Tip: with RAID 01 (two RAID 0 stripes mirrored together), data is lost as soon as a disk fails on each side at the same time, so it is less safe than RAID 10.

### Deploying a RAID 10 Array

RAID array devices are conventionally named starting with md (for example, /dev/md0).

First, add four hard disks to the virtual machine.

(Screenshot: four new disks added to the virtual machine)

```shell
# Create an array
mdadm -Cv {array device name} -n {number of disks} -l {RAID level} {disk devices ...}
# Check the array status
mdadm -D {array device name}
```


```shell
>>>
[root@localhost dev]# ls | grep ^sd
sda
sdb
sdc
sdd
# Create the disk array
[root@localhost dev]# mdadm -Cv /dev/md0 -n 4 -l 10  /dev/sd[a-d]
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 5237760K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@localhost dev]# ls md0
md0
# Show a summary
[root@localhost dev]# mdadm -Q /dev/md0
/dev/md0: 9.99GiB raid10 4 devices, 0 spares. Use mdadm --detail for more detail.
# Format the array
[root@localhost dev]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=16, agsize=163712 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=2618880, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost dev]# mkdir /myraid10
[root@localhost dev]# mount /dev/md0 /myraid10
[root@localhost dev]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               969M     0  969M   0% /dev
tmpfs                  984M     0  984M   0% /dev/shm
tmpfs                  984M   18M  966M   2% /run
tmpfs                  984M     0  984M   0% /sys/fs/cgroup
/dev/mapper/rhel-root   17G  3.9G   14G  23% /
/dev/nvme0n1p1        1014M  152M  863M  15% /boot
tmpfs                  197M   20K  197M   1% /run/user/42
tmpfs                  197M  3.5M  194M   2% /run/user/1000
/dev/sr0               6.7G  6.7G     0 100% /run/media/linuxprobe/RHEL-8-0-0-BaseOS-x86_64
/dev/md0                10G  105M  9.9G   2% /myraid10
[root@localhost dev]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Feb 17 23:10:13 2021
        Raid Level : raid10
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Wed Feb 17 23:12:44 2021
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 12f52dd8:4c558aa8:fc3814be:070dde0e
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync set-A   /dev/sda
       1       8       16        1      active sync set-B   /dev/sdb
       2       8       32        2      active sync set-A   /dev/sdc
       3       8       48        3      active sync set-B   /dev/sdd
# Make the mount persistent across reboots
[root@localhost dev]# echo "/dev/md0 /myraid10 xfs defaults 0 0" >> /etc/fstab
[root@localhost dev]# mount -a
```
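
Beyond the fstab entry above, the array definition itself can also be recorded so that it is reassembled under the same name at boot; a common practice (not shown in the book's run; /etc/mdadm.conf is the RHEL default path):

```shell
# Append the scanned array definition to mdadm's config file
mdadm -D --scan >> /etc/mdadm.conf
```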

Commonly used mdadm parameters:

| Parameter | Purpose |
| --- | --- |
| -a | add a device to the array |
| -n | specify the number of devices |
| -l | specify the RAID level |
| -C | create an array |
| -v | show verbose output |
| -f | simulate a device failure |
| -r | remove a device |
| -Q | show summary information |
| -D | show detailed information |
| -S | stop the RAID array |
| -x | specify the number of spare disks |
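
As a quick illustration of the -f and -r parameters from the table, a failure can be simulated and the disk swapped out; the device names below are assumed from the RAID 10 run above:

```shell
# Mark /dev/sdb as faulty to simulate a failure, then remove it from the array
mdadm /dev/md0 -f /dev/sdb
mdadm /dev/md0 -r /dev/sdb
# After installing a replacement disk, add it back and let the array resync
mdadm /dev/md0 -a /dev/sdb
mdadm -D /dev/md0
```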

### Deploying RAID 5 with a Hot Spare

```shell
# Add a single device (such as a hot spare) to an existing array
mdadm {array device} -a {device path to add}
```

```shell
>>>
# Three disks form the RAID 5 array; the fourth is a hot spare
[root@localhost dev]# mdadm -Cv /dev/md0 -n 3 -l 5 -x 1 /dev/sd[a-d]
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 5237760K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
# Check the array status
[root@localhost dev]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Feb 18 23:47:51 2021
        Raid Level : raid5
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Feb 18 23:48:20 2021
             State : clean 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : b6294f16:92e4c84c:45bb47b2:6fb30a4f
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
       4       8       32        2      active sync   /dev/sdc

       3       8       48        -      spare   /dev/sdd
[root@localhost dev]# mkfs.ext4 /dev/md0
mke2fs 1.44.3 (10-July-2018)
Creating filesystem with 2618880 4k blocks and 655360 inodes
Filesystem UUID: 86d66a98-b7e1-4b95-a841-c6e1dd067a51
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 

[root@localhost dev]# mkdir /myraid5
[root@localhost dev]# echo "/dev/md0 /myraid5 ext4 defaults 0 0" >> /etc/fstab
[root@localhost dev]# mount -a
[root@localhost dev]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               969M     0  969M   0% /dev
tmpfs                  984M     0  984M   0% /dev/shm
tmpfs                  984M   18M  966M   2% /run
tmpfs                  984M     0  984M   0% /sys/fs/cgroup
/dev/mapper/rhel-root   17G  3.9G   14G  23% /
/dev/nvme0n1p1        1014M  152M  863M  15% /boot
tmpfs                  197M   20K  197M   1% /run/user/42
tmpfs                  197M  3.5M  194M   2% /run/user/1000
/dev/sr0               6.7G  6.7G     0 100% /run/media/linuxprobe/RHEL-8-0-0-BaseOS-x86_64
/dev/md0               9.8G   37M  9.3G   1% /myraid5

# After removing disk sdc from the VM, the spare sdd automatically starts rebuilding
[root@localhost dev]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Feb 18 23:47:51 2021
        Raid Level : raid5
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Thu Feb 18 23:51:57 2021
             State : clean, degraded, recovering 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 11% complete

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : b6294f16:92e4c84c:45bb47b2:6fb30a4f
            Events : 21

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
       3       8       48        2      spare rebuilding   /dev/sdd
[root@localhost dev]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Feb 18 23:47:51 2021
        Raid Level : raid5
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Thu Feb 18 23:52:02 2021
             State : clean, degraded, recovering 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 26% complete

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : b6294f16:92e4c84c:45bb47b2:6fb30a4f
            Events : 24

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
       3       8       48        2      spare rebuilding   /dev/sdd
```
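
Instead of polling mdadm -D as above, the kernel's md status file reports the same rebuild progress; for example:

```shell
# Refresh the rebuild progress once per second
watch -n 1 cat /proc/mdstat
```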

## Using LVM

(Figure: LVM architecture)

LVM pools multiple physical disks into one abstract volume, which can then be carved into multiple logical volumes.

Logical volumes are usually formatted as ext4, because xfs does not support shrinking.

LVM makes it very flexible to adjust disk space, but it provides no redundancy of its own. LVM can be layered on top of RAID, though the performance cost of doing so is considerable.

(Figure: physical volumes grouped into a volume group and carved into logical volumes)

### Deploying Logical Volumes

| Function | Physical volume management | Volume group management | Logical volume management |
| --- | --- | --- | --- |
| Scan | pvscan | vgscan | lvscan |
| Create | pvcreate | vgcreate | lvcreate |
| Display | pvdisplay | vgdisplay | lvdisplay |
| Remove | pvremove | vgremove | lvremove |
| Extend |  | vgextend | lvextend |
| Reduce |  | vgreduce | lvreduce |

```shell
#1. Initialize the disks for LVM (create physical volumes)
pvcreate {disk1} {disk2}

#2. Add the disks to a volume group
vgcreate {VG name} {disk1} {disk2}

#3. Carve a logical volume out of the group; its path will be /dev/{VG name}/{LV name}
## -l takes a number of physical extents; the PE Size shown by vgdisplay is the size of one extent
## -L takes a size with units
lvcreate -n {LV name} -l {number of extents} {VG name}

#4. Format the volume and mount it
```

```shell
>>>
[root@localhost dev]# pvcreate sda sdb
  Physical volume "sda" successfully created.
  Physical volume "sdb" successfully created.
[root@localhost dev]# vgcreate mylvm sda sdb
  Volume group "mylvm" successfully created
[root@localhost dev]# vgdisplay
  --- Volume group ---
  VG Name               rhel
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <19.00 GiB
  PE Size               4.00 MiB
  Total PE              4863
  Alloc PE / Size       4863 / <19.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               cTKWWM-t7yh-r342-JkSD-2bPU-Wjf0-tRExs8
   
  --- Volume group ---
  VG Name               mylvm
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               9.99 GiB
  PE Size               4.00 MiB
  Total PE              2558
  Alloc PE / Size       0 / 0   
  Free  PE / Size       2558 / 9.99 GiB
  VG UUID               AzUk8Y-x1Y1-wMEn-wgbk-Lqru-WAWM-0zK19A
   
[root@localhost dev]# lvcreate -n mylvm01 -L 10M mylvm
  Rounding up size to full physical extent 12.00 MiB
  Logical volume "mylvm01" created.

[root@localhost dev]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/rhel/swap
  LV Name                swap
  VG Name                rhel
  LV UUID                1Ug78x-W3LQ-e6DK-lJc4-OyO2-Q9E3-YusxNX
  LV Write Access        read/write
  LV Creation host, time localhost, 2021-01-04 03:37:52 +0800
  LV Status              available
  # open                 2
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/rhel/root
  LV Name                root
  VG Name                rhel
  LV UUID                SyF0lV-U0KX-uFq6-8fdc-6kWa-d23n-qnIhY1
  LV Write Access        read/write
  LV Creation host, time localhost, 2021-01-04 03:37:53 +0800
  LV Status              available
  # open                 1
  LV Size                <17.00 GiB
  Current LE             4351
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/mylvm/mylvm01
  LV Name                mylvm01
  VG Name                mylvm
  LV UUID                DxqLeJ-SQLP-aAi3-jEZc-7Oma-hdTY-vo9OAA
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2021-02-19 21:04:31 +0800
  LV Status              available
  # open                 0
  LV Size                12.00 MiB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
   
[root@localhost /]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               969M     0  969M   0% /dev
tmpfs                  984M     0  984M   0% /dev/shm
tmpfs                  984M   18M  966M   2% /run
tmpfs                  984M     0  984M   0% /sys/fs/cgroup
/dev/mapper/rhel-root   17G  3.9G   14G  23% /
/dev/nvme0n1p1        1014M  152M  863M  15% /boot
tmpfs                  197M   20K  197M   1% /run/user/42
tmpfs                  197M  3.5M  194M   2% /run/user/1000
/dev/sr0               6.7G  6.7G     0 100% /run/media/linuxprobe/RHEL-8-0-0-BaseOS-x86_64
[root@localhost /]# mkdir /mylvmdir01
[root@localhost /]# mkfs.ext4 /dev/mylvm/mylvm01 
mke2fs 1.44.3 (10-July-2018)
Creating filesystem with 12288 1k blocks and 3072 inodes
Filesystem UUID: 3215ccc8-82ef-4b3a-9577-91cc03d86567
Superblock backups stored on blocks: 
    8193

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done

[root@localhost /]# mount /dev/mylvm/mylvm01 /mylvmdir01/
[root@localhost /]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                   969M     0  969M   0% /dev
tmpfs                      984M     0  984M   0% /dev/shm
tmpfs                      984M   18M  966M   2% /run
tmpfs                      984M     0  984M   0% /sys/fs/cgroup
/dev/mapper/rhel-root       17G  3.9G   14G  23% /
/dev/nvme0n1p1            1014M  152M  863M  15% /boot
tmpfs                      197M   20K  197M   1% /run/user/42
tmpfs                      197M  3.5M  194M   2% /run/user/1000
/dev/sr0                   6.7G  6.7G     0 100% /run/media/linuxprobe/RHEL-8-0-0-BaseOS-x86_64
/dev/mapper/mylvm-mylvm01   11M  204K  9.6M   3% /mylvmdir01
```
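
For comparison with the -l (extent count) form used above, a volume can also be requested by size; a hypothetical sketch (the names mylvm02 and mylvm03 are made up):

```shell
# Request an absolute size; it is rounded up to whole 4 MiB extents
lvcreate -n mylvm02 -L 100M mylvm
# Or hand a volume all remaining free extents in the group
lvcreate -n mylvm03 -l 100%FREE mylvm
```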

### Extending a Logical Volume

Unmount the volume before extending it, so that no data is written mid-resize and lost.

```shell
# 1. Unmount the mount point
umount {device or mount point}

# 2. Extend the volume
lvextend -L {size after extension} {LV path}

# 3. Check the filesystem's integrity
e2fsck -f {LV path}

# 4. Resize the filesystem to the new capacity (inform the system)
resize2fs {LV path}

# 5. Remount the volume
mount {device} {directory}
```

```shell
>>>
[root@localhost /]# umount /mylvmdir01 
[root@localhost /]# lvextend -l 5 /dev/mylvm/mylvm01
  Size of logical volume mylvm/mylvm01 changed from 12.00 MiB (3 extents) to 20.00 MiB (5 extents).
  Logical volume mylvm/mylvm01 successfully resized.
[root@localhost /]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/rhel/swap
  LV Name                swap
  VG Name                rhel
  LV UUID                1Ug78x-W3LQ-e6DK-lJc4-OyO2-Q9E3-YusxNX
  LV Write Access        read/write
  LV Creation host, time localhost, 2021-01-04 03:37:52 +0800
  LV Status              available
  # open                 2
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/rhel/root
  LV Name                root
  VG Name                rhel
  LV UUID                SyF0lV-U0KX-uFq6-8fdc-6kWa-d23n-qnIhY1
  LV Write Access        read/write
  LV Creation host, time localhost, 2021-01-04 03:37:53 +0800
  LV Status              available
  # open                 1
  LV Size                <17.00 GiB
  Current LE             4351
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/mylvm/mylvm01
  LV Name                mylvm01
  VG Name                mylvm
  LV UUID                DxqLeJ-SQLP-aAi3-jEZc-7Oma-hdTY-vo9OAA
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2021-02-19 21:04:31 +0800
  LV Status              available
  # open                 0
  LV Size                20.00 MiB
  Current LE             5
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
   
[root@localhost /]# e2fsck -f /dev/mylvm/mylvm01
e2fsck 1.44.3 (10-July-2018)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mylvm/mylvm01: 11/3072 files (0.0% non-contiguous), 1621/12288 blocks
[root@localhost /]# resize2fs /dev/mylvm/mylvm01
resize2fs 1.44.3 (10-July-2018)
Resizing the filesystem on /dev/mylvm/mylvm01 to 20480 (1k) blocks.
The filesystem on /dev/mylvm/mylvm01 is now 20480 (1k) blocks long.

[root@localhost /]# mount /dev/mylvm/mylvm01 /mylvmdir01/
[root@localhost /]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                   969M     0  969M   0% /dev
tmpfs                      984M     0  984M   0% /dev/shm
tmpfs                      984M   18M  966M   2% /run
tmpfs                      984M     0  984M   0% /sys/fs/cgroup
/dev/mapper/rhel-root       17G  3.9G   14G  23% /
/dev/nvme0n1p1            1014M  152M  863M  15% /boot
tmpfs                      197M   20K  197M   1% /run/user/42
tmpfs                      197M  3.5M  194M   2% /run/user/1000
/dev/sr0                   6.7G  6.7G     0 100% /run/media/linuxprobe/RHEL-8-0-0-BaseOS-x86_64
/dev/mapper/mylvm-mylvm01   19M  204K   17M   2% /mylvmdir01
```
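
As a side note, lvextend can also resize the filesystem in the same step with -r (--resizefs), which invokes the appropriate resize tool itself; a sketch with an assumed target size:

```shell
# Grow the LV and its ext4 filesystem in one step (no manual e2fsck/resize2fs)
lvextend -r -L 30M /dev/mylvm/mylvm01
# An xfs-formatted volume would instead be grown while mounted, using xfs_growfs
```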

### Shrinking a Logical Volume

Shrinking carries a higher risk of data loss, so check the filesystem's integrity first and repair any errors before proceeding.

```shell
# 1. Unmount
# 2. Check the filesystem's integrity
e2fsck -f {LV path}
# 3. Shrink the filesystem to the target size first
resize2fs {LV path} {size}
# 4. Then shrink the volume
lvreduce -L {size} {LV path}
# 5. Remount
```

```shell
>>>
[root@localhost /]# umount /mylvmdir01 
[root@localhost /]# e2fsck -f /dev/mylvm/mylvm01 
e2fsck 1.44.3 (10-July-2018)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mylvm/mylvm01: 11/4608 files (0.0% non-contiguous), 1815/20480 blocks
[root@localhost /]# resize2f /dev/mylvm/mylvm01 15M
bash: resize2f: command not found...
Failed to search for file: Cannot update read-only repo
[root@localhost /]# resize2fs /dev/mylvm/mylvm01 15M
resize2fs 1.44.3 (10-July-2018)
Resizing the filesystem on /dev/mylvm/mylvm01 to 15360 (1k) blocks.
The filesystem on /dev/mylvm/mylvm01 is now 15360 (1k) blocks long.

[root@localhost /]# lvreduce -L 15M /dev/mylvm/mylvm01 
  Rounding size to boundary between physical extents: 16.00 MiB.
  WARNING: Reducing active logical volume to 16.00 MiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce mylvm/mylvm01? [y/n]: y
  Size of logical volume mylvm/mylvm01 changed from 20.00 MiB (5 extents) to 16.00 MiB (4 extents).
  Logical volume mylvm/mylvm01 successfully resized.
[root@localhost /]# mount /dev/mylvm/mylvm01 /mylvmdir01/
[root@localhost /]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                   969M     0  969M   0% /dev
tmpfs                      984M     0  984M   0% /dev/shm
tmpfs                      984M   18M  966M   2% /run
tmpfs                      984M     0  984M   0% /sys/fs/cgroup
/dev/mapper/rhel-root       17G  3.9G   14G  23% /
/dev/nvme0n1p1            1014M  152M  863M  15% /boot
tmpfs                      197M   20K  197M   1% /run/user/42
tmpfs                      197M  3.5M  194M   2% /run/user/1000
/dev/sr0                   6.7G  6.7G     0 100% /run/media/linuxprobe/RHEL-8-0-0-BaseOS-x86_64
/dev/mapper/mylvm-mylvm01   14M  204K   13M   2% /mylvmdir01
```
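
lvreduce accepts the same -r flag, shrinking the filesystem and then the volume in the correct order in one command; a sketch with an assumed size:

```shell
# Shrinks the ext4 filesystem first, then the LV, prompting before data is put at risk
lvreduce -r -L 12M /dev/mylvm/mylvm01
```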

### Logical Volume Snapshots

A snapshot records the current state of a logical volume so that it can be restored later. Two things to note:

  1. The snapshot volume must be the same size as the source logical volume.
  2. A snapshot works only once: after it is merged back, it is deleted automatically.

```shell
# 1. Create the snapshot
lvcreate -L {size} -s -n {snapshot name} {source LV path}
# 2. Unmount the directory
# 3. Restore (merge) the snapshot
lvconvert --merge {snapshot path}
```

```shell
>>>
[root@localhost /]# cd /mylvmdir01/
[root@localhost mylvmdir01]# ls
lost+found
[root@localhost mylvmdir01]# echo "1" >> a.txt
[root@localhost mylvmdir01]# echo "1" >> b.txt
[root@localhost mylvmdir01]# lvcreate -L 16M -s -n mybackup /dev/mylvm/mylvm01
  Logical volume "mybackup" created.
[root@localhost mylvmdir01]# echo "2" >> c.txt
[root@localhost mylvmdir01]# ls
a.txt  b.txt  c.txt  lost+found
[root@localhost mylvmdir01]# cd /
[root@localhost /]# umount /mylvmdir01 
[root@localhost /]# lvconvert --merge /dev/mylvm/mybackup 
  Merging of volume mylvm/mybackup started.
  mylvm/mylvm01: Merged: 100.00%
[root@localhost /]# mount /dev/mylvm/mylvm01 /mylvmdir01/
[root@localhost /]# cd /mylvmdir01/
[root@localhost mylvmdir01]# ls
a.txt  b.txt
```
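
Before merging, it is worth checking how full the snapshot is; lvs reports a snapshot's copy-on-write usage in its Data% column:

```shell
# List the volumes in the mylvm group; the snapshot row shows its Data% usage
lvs mylvm
```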

### Removing Logical Volumes

```shell
# 1. Unmount the directory (skip if it is already unmounted)
# 2. Remove the logical volume
lvremove {LV path}
# 3. Remove the volume group
vgremove {VG name}
# 4. Remove the physical volumes
pvremove {device1} {device2}
```

```shell
>>>
[root@localhost mylvmdir01]# umount /mylvmdir01 
[root@localhost mylvmdir01]# lvremove /dev/mylvm/mylvm01 
Do you really want to remove active logical volume mylvm/mylvm01? [y/n]: y
  Logical volume "mylvm01" successfully removed
[root@localhost mylvmdir01]# vgremove /dev/mylvm
  Volume group "mylvm" successfully removed
[root@localhost mylvmdir01]# pvremove /dev/sd[a-b]
  Labels on physical volume "/dev/sda" successfully wiped.
  Labels on physical volume "/dev/sdb" successfully wiped.
```
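
A quick way to confirm that nothing is left behind is to rescan all three layers with the scan commands from the table above:

```shell
# None of these should list mylvm01, mylvm, or the two wiped physical volumes
lvscan
vgscan
pvscan
```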