CentOS

Grow LVM RAID5 with an identical disk, not enough extents

  • September 12, 2020

I have an existing LVM RAID5 array on a CentOS 8 machine made up of 3x 4TB drives. The array is starting to run low on space, so I want to add an identical 4TB drive to it to increase the total capacity. However, when I run lvextend /dev/storage/raidarray /dev/sda, I get the following output:

Converted 100%PVS into 953861 physical extents.
Using stripesize of last segment 64.00 KiB
Archiving volume group "storage" metadata (seqno 35).
Extending logical volume storage/raidarray to <10.92 TiB
Insufficient free space: 1430790 extents needed, but only 953861 available

Here is the output of pvs:

PV         VG      Fmt  Attr PSize   PFree
/dev/sda   storage lvm2 a--   <3.64t  <3.64t
/dev/sdb3  cl      lvm2 a--  221.98g      0
/dev/sdc   storage lvm2 a--   <3.64t      0
/dev/sdd   storage lvm2 a--   <3.64t      0
/dev/sde   storage lvm2 a--   <3.64t      0
/dev/sdf           lvm2 ---  119.24g 119.24g

lvs -o +devices:

LV        VG      Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
home      cl      -wi-a----- <164.11g                                                     /dev/sdb3(12800)
root      cl      -wi-ao----   50.00g                                                     /dev/sdb3(0)
swap      cl      -wi-ao----   <7.88g                                                     /dev/sdb3(54811)
raidarray storage rwi-aor---   <7.28t                                    100.00           raidarray_rimage_0(0),raidarray_rimage_1(0),raidarray_rimage_2(0)

pvdisplay:

--- Physical volume ---
PV Name               /dev/sdb3
VG Name               cl
PV Size               221.98 GiB / not usable 3.00 MiB
Allocatable           yes (but full)
PE Size               4.00 MiB
Total PE              56827
Free PE               0
Allocated PE          56827
PV UUID               MM6j63-1V3E-YWXl-61ro-f3bB-7ysd-c1DGQv

--- Physical volume ---
PV Name               /dev/sdc
VG Name               storage
PV Size               <3.64 TiB / not usable <3.84 MiB
Allocatable           yes (but full)
PE Size               4.00 MiB
Total PE              953861
Free PE               0
Allocated PE          953861
PV UUID               rmqBBu-DD8U-d7WW-yzKW-R97b-1M4r-RYb1Qx

--- Physical volume ---
PV Name               /dev/sdd
VG Name               storage
PV Size               <3.64 TiB / not usable <3.84 MiB
Allocatable           yes (but full)
PE Size               4.00 MiB
Total PE              953861
Free PE               0
Allocated PE          953861
PV UUID               TBn2He-cRTU-eybT-fuBM-REbO-YNfr-Ca86gU

--- Physical volume ---
PV Name               /dev/sde
VG Name               storage
PV Size               <3.64 TiB / not usable <3.84 MiB
Allocatable           yes (but full)
PE Size               4.00 MiB
Total PE              953861
Free PE               0
Allocated PE          953861
PV UUID               wHZOf0-KTK9-2qLW-USl9-Gkgz-6MjV-D3gWrH

--- Physical volume ---
PV Name               /dev/sdf
VG Name               storage
PV Size               119.24 GiB / not usable <4.34 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              30525
Free PE               30525
Allocated PE          0
PV UUID               MWWaUJ-UC2h-YT29-bMol-fWoQ-5Chl-uKBB4O

--- Physical volume ---
PV Name               /dev/sda
VG Name               storage
PV Size               <3.64 TiB / not usable <3.84 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              953861
Free PE               953861
Allocated PE          0
PV UUID               vzGHi9-TF42-EFx9-uLch-EioJ-DI35-RuZuJt

lsblk

NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0   3.7T  0 disk
sdb                            8:16   0 223.6G  0 disk
├─sdb1                         8:17   0   600M  0 part /boot/efi
├─sdb2                         8:18   0     1G  0 part /boot
└─sdb3                         8:19   0   222G  0 part
  ├─cl-root                  253:0    0    50G  0 lvm  /
  └─cl-swap                  253:1    0   7.9G  0 lvm  [SWAP]
sdc                            8:32   0   3.7T  0 disk
├─storage-raidarray_rmeta_0  253:7    0     4M  0 lvm
│ └─storage-raidarray        253:14   0   7.3T  0 lvm  /home
└─storage-raidarray_rimage_0 253:8    0   3.7T  0 lvm
  └─storage-raidarray        253:14   0   7.3T  0 lvm  /home
sdd                            8:48   0   3.7T  0 disk
├─storage-raidarray_rmeta_1  253:9    0     4M  0 lvm
│ └─storage-raidarray        253:14   0   7.3T  0 lvm  /home
└─storage-raidarray_rimage_1 253:10   0   3.7T  0 lvm
  └─storage-raidarray        253:14   0   7.3T  0 lvm  /home
sde                            8:64   0   3.7T  0 disk
├─storage-raidarray_rmeta_2  253:11   0     4M  0 lvm
│ └─storage-raidarray        253:14   0   7.3T  0 lvm  /home
└─storage-raidarray_rimage_2 253:12   0   3.7T  0 lvm
  └─storage-raidarray        253:14   0   7.3T  0 lvm  /home
sdf                            8:80   0 119.2G  0 disk
sdg                            8:96   1  14.8G  0 disk
└─sdg1                         8:97   1  14.8G  0 part

I have been searching for an answer to this, but I can hardly find anything written about LVM RAID; only about mdadm. Does anyone know of a way I can extend the RAID array without buying additional drives and without losing data?

I don't normally use LVM RAID, so forgive me if my reproduction of your situation is a little imperfect; the numbers will look a bit odd.

Consider a 3-device RAID 5 as you would have it in mdadm. In LVM terminology, this is called a raid5 with 2 stripes (the parity is not counted):

# lvs -o +devices HDD/raidtest
 LV       VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                       
 raidtest HDD rwi-a-r--- 256.00m                                    100.00           raidtest_rimage_0(0),raidtest_rimage_1(0),raidtest_rimage_2(0)
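
For reference, a test LV like that could have been created with something along the following lines (just a sketch; the answer does not show this step, and the VG name HDD and the 256M size are simply taken from the output above):

# lvcreate --type raid5 --stripes 2 --size 256M --name raidtest HDD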

Adding another stripe works like this:

# lvconvert --stripes 3 HDD/raidtest
 Using default stripesize 64.00 KiB.
 WARNING: Adding stripes to active logical volume HDD/raidtest will grow it from 4 to 6 extents!
 Run "lvresize -l4 HDD/raidtest" to shrink it or use the additional capacity.
Are you sure you want to add 1 images to raid5 LV HDD/raidtest? [y/n]: maybe
[... this takes a while ...]
 Logical volume HDD/raidtest successfully converted.

Things to watch out for: the WARNING message should make it clear that the device is growing, not shrinking.

Also, I did not specify which PV to use for the extension, so LVM picked one on its own. In your case that is likewise optional and should work fine (there is no other eligible PV), but feel free to specify it anyway so there are no surprises.
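
For your array the equivalent would presumably look like the following (hypothetical; it assumes the new disk is /dev/sda as in your question and that the array goes from 2 to 3 data stripes):

# lvconvert --stripes 3 storage/raidarray /dev/sda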

The result:

# lvs -o +devices HDD/raidtest
 LV       VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                                            
 raidtest HDD rwi-a-r--- 384.00m                                    100.00           raidtest_rimage_0(0),raidtest_rimage_1(0),raidtest_rimage_2(0),raidtest_rimage_3(0)

In this case the filesystem is not grown; you can either do that separately, or use lvresize to shrink the LV back to its previous size (now simply spread across more drives). I suppose the latter is useful when running several RAID LVs side by side, rather than dedicating whole disks to a single LV as you appear to be doing.
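
As a sketch of the first option (growing the filesystem into the new space), assuming your raidarray LV carries an XFS filesystem mounted at /home as the lsblk output suggests (the question does not state the filesystem type):

# xfs_growfs /home

If it is ext4 instead, resize2fs /dev/storage/raidarray would be the corresponding command.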

Quoted from: https://unix.stackexchange.com/questions/569472