Lvm
Adding a disk to grow an LVM raid5
I have an LVM VG with a single LV, which is a raid5 volume across three PVs. I want to add an additional PV to the volume group and extend the raid5 LV to use it.
Here I'm practicing with four 100 MB files as test disks.
$ sudo vgs
  WARNING: Not using lvmetad because a repair command was run.
  VG     #PV #LV #SN Attr   VSize   VFree
  testvg   4   1   0 wz--n- 384.00m 96.00m
$ sudo pvs
  WARNING: Not using lvmetad because a repair command was run.
  PV           VG     Fmt  Attr PSize  PFree
  /dev/loop0p1 testvg lvm2 a--  96.00m      0
  /dev/loop1p1 testvg lvm2 a--  96.00m      0
  /dev/loop2p1 testvg lvm2 a--  96.00m      0
  /dev/loop3p1 testvg lvm2 a--  96.00m 96.00m
$ sudo lvs
  WARNING: Not using lvmetad because a repair command was run.
  LV       VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  testraid testvg rwi-a-r--- 184.00m                                    100.00
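As a sanity check (my own arithmetic, not from the post), the reported sizes follow the usual raid5 layout: each 96 MB PV holds 24 extents of 4 MB, one extent per PV goes to the rmeta sub-LV, and one image's worth of capacity is parity. A small sketch, assuming a 4 MB extent size:

```python
# Sanity-check the raid5 sizes reported above (assumes 4 MB extents).
extent_mb = 4
pv_extents = 24        # 96 MB PV / 4 MB extent size
meta_extents = 1       # one extent for the rmeta sub-LV on each PV
pvs = 3                # PVs actually striped into the raid5 so far

# Usable extents: (n - 1) data images, each image losing one extent to rmeta.
lv_extents = (pvs - 1) * (pv_extents - meta_extents)
print(lv_extents, lv_extents * extent_mb)   # 46 extents -> 184 MB, matching lvs

# With a fourth PV striped in, the same formula gives the 69 extents
# that the lvconvert warning below mentions.
lv_extents_4 = (pvs + 1 - 1) * (pv_extents - meta_extents)
print(lv_extents_4)                          # 69
```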
If I try to change the stripe count to bring in the extra disk, the command returns an error, but the new PV now shows sub-LVs, and the LV reports the increased space as available. However, the new sub-LV shows an out-of-sync attribute, and running a repair on the LV fails.
$ sudo lvconvert --stripes 3 /dev/testvg/testraid
  Using default stripesize 64.00 KiB.
  WARNING: Adding stripes to active logical volume testvg/testraid will grow it from 46 to 69 extents!
  Run "lvresize -l46 testvg/testraid" to shrink it or use the additional capacity.
Are you sure you want to add 1 images to raid5 LV testvg/testraid? [y/n]: y
  Insufficient free space: 4 extents needed, but only 0 available
  Failed to allocate out-of-place reshape space for testvg/testraid.
  Insufficient free space: 4 extents needed, but only 0 available
  Failed to allocate out-of-place reshape space for testvg/testraid.
  Reshape request failed on LV testvg/testraid.

$ sudo pvs -a -o +pv_pe_count,pv_pe_alloc_count
  PV           VG     Fmt  Attr PSize  PFree PE Alloc
  /dev/loop0p1 testvg lvm2 a--  96.00m    0  24 24
  /dev/loop1p1 testvg lvm2 a--  96.00m    0  24 24
  /dev/loop2p1 testvg lvm2 a--  96.00m    0  24 24
  /dev/loop3p1 testvg lvm2 a--  96.00m    0  24 24

$ sudo lvs -a
  LV                  VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  testraid            testvg rwi-a-r--- 276.00m                                    100.00
  [testraid_rimage_0] testvg iwi-aor---  92.00m
  [testraid_rimage_1] testvg iwi-aor---  92.00m
  [testraid_rimage_2] testvg iwi-aor---  92.00m
  [testraid_rimage_3] testvg Iwi-aor---  92.00m
  [testraid_rmeta_0]  testvg ewi-aor---   4.00m
  [testraid_rmeta_1]  testvg ewi-aor---   4.00m
  [testraid_rmeta_2]  testvg ewi-aor---   4.00m
  [testraid_rmeta_3]  testvg ewi-aor---   4.00m

$ sudo lvconvert --repair /dev/testvg/testraid
  WARNING: Not using lvmetad because of repair.
  Active raid has a wrong number of raid images!
  Metadata says 4, kernel says 3.
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
  WARNING: Disabling lvmetad cache for repair command.
  Unable to repair testvg/testraid.  Source devices failed before the RAID could synchronize.
  You should choose one of the following:
  1) deactivate testvg/testraid, revive failed device, re-activate LV, and proceed.
  2) remove the LV (all data is lost).
  3) Seek expert advice to attempt to salvage any data from remaining devices.
  Failed to replace faulty devices in testvg/testraid.
What steps should I take to grow my LV with an additional, identical disk?
My notes from working through this are here: https://wiki.archlinux.org/index.php/User:Ctag/Notes#Growing_LVM_Raid5
In the end I added a new disk and migrated to Raid6.
If I remember correctly, the problem was that the new disk was a few sectors smaller than the others, and the overhead of the necessary LVM/raid metadata also grows slightly as new disks are added (so even identical disks wouldn't have worked). The fix for both issues was to slightly under-allocate every disk by a few extents, leaving headroom for the metadata and for future variation between disks.
# pvs -a -o +pv_pe_count,pv_pe_alloc_count
  PV                     VG      Fmt  Attr PSize  PFree    PE     Alloc
  /dev/mapper/cryptslow1 cryptvg lvm2 a--  <1.82t   20.00m 476931 476931
  /dev/mapper/cryptslow2 cryptvg lvm2 a--  <1.82t   20.00m 476931 476931
  /dev/mapper/cryptslow3 cryptvg lvm2 a--  <2.73t <931.52g 715395 476931
  /dev/mapper/cryptslow4 cryptvg lvm2 a--  <1.82t   <1.82t 476927 0
See above how the new disk has only "476927" extents rather than "476931"? That's the problem. We need LVM to allocate that smaller number of extents (or fewer) per PV for the raid5 so that this new disk can be used.
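A quick check of that arithmetic (my reading of the numbers, not from the post): with a three-PV raid5 there are two data images plus one parity image, so each PV carries half of the LV's extents plus one extent for its rmeta sub-LV. That explains both the 476931 figure and why shrinking by 10 extents is enough:

```python
# Per-PV extent usage of a 3-PV raid5, assuming 2 data images + 1 parity
# and 1 rmeta extent per PV (matches the pvs output above).
lv_extents = 953860             # LV size reported by lvs before the resize
per_pv = lv_extents // 2 + 1    # extents consumed on each of the 3 PVs
print(per_pv)                   # 476931 -- more than the new PV's 476927

# Shrinking the LV by 10 extents (lvresize -l -10) drops the per-PV
# allocation below the new disk's capacity:
per_pv_after = (lv_extents - 10) // 2 + 1
print(per_pv_after)             # 476926, which fits on the 476927-extent PV
```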
# lvresize -r -l -10 /dev/cryptvg/raid
fsck from util-linux 2.34
/dev/mapper/cryptvg-raid: clean, 913995/240320512 files, 686703011/961280000 blocks
resize2fs 1.45.3 (14-Jul-2019)
Resizing the filesystem on /dev/mapper/cryptvg-raid to 976742400 (4k) blocks.
The filesystem on /dev/mapper/cryptvg-raid is now 976742400 (4k) blocks long.
  Size of logical volume cryptvg/raid changed from <3.64 TiB (953860 extents) to <3.64 TiB (953850 extents).
  Logical volume cryptvg/raid successfully resized.

# pvs -a -o +pv_pe_count,pv_pe_alloc_count
  PV                     VG      Fmt  Attr PSize  PFree    PE     Alloc
  /dev/mapper/cryptslow1 cryptvg lvm2 a--  <1.82t   20.00m 476931 476926
  /dev/mapper/cryptslow2 cryptvg lvm2 a--  <1.82t   20.00m 476931 476926
  /dev/mapper/cryptslow3 cryptvg lvm2 a--  <2.73t <931.52g 715395 476926
  /dev/mapper/cryptslow4 cryptvg lvm2 a--  <1.82t   <1.82t 476927 0
Now we can go ahead and add our new disk, and this time it works.