Recovered VG has shifted volumes
I am recovering a PV on an mdadm RAID 1; it holds a single VG containing several LVs.
The underlying devices have some bad sectors (one only a few, the other a great many), and a stupid typo made it necessary to recover the LVM configuration by grepping the raw device. Luckily I found it, and the recovered configuration looks like the original one.
The only problem is that the logical volumes contain no valid filesystems. Using e2sl, I discovered that one of the superblocks of my target fs sits in the wrong logical volume. Sadly, I don't know how to correct or work around this.
```
root@rescue ~/e2sl # ./ext2-superblock -d /dev/vg0/tmp | grep 131072000
Found: block 20711426 (cyl 1369, head 192, sector 50), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
root@rescue ~/e2sl # ./ext2-superblock -d /dev/vg0/home | grep 131072000
Found: block 2048 (cyl 0, head 32, sector 32), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 526336 (cyl 34, head 194, sector 34), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 1050624 (cyl 69, head 116, sector 36), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 1574912 (cyl 104, head 38, sector 38), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 2099200 (cyl 138, head 200, sector 40), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 6293504 (cyl 416, head 56, sector 56), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 6817792 (cyl 450, head 218, sector 58), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 12584960 (cyl 832, head 81, sector 17), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 20973568 (cyl 1387, head 33, sector 49), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 32507904 (cyl 2149, head 238, sector 30), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 63440896 (cyl 4195, head 198, sector 22), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 89655296 (cyl 5929, head 139, sector 59), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
^C
```
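The block numbers above translate directly into byte offsets. An ext2/3/4 filesystem keeps its primary superblock 1024 bytes past the filesystem start, so a superblock reported at block 2048 with a 4096-byte block size points at a filesystem beginning just before that block. A minimal sketch, using the first hit from the /dev/vg0/home scan (assuming that hit is the primary superblock, not a backup copy):

```shell
# Numbers from the /dev/vg0/home scan above; SB_GAP is the fixed 1 KB
# offset of the ext2/3/4 primary superblock from the filesystem start.
BLOCK=2048
BLOCKSIZE=4096
SB_GAP=1024

SB_BYTES=$(( BLOCK * BLOCKSIZE ))
FS_START=$(( SB_BYTES - SB_GAP ))
echo "superblock at byte $SB_BYTES; candidate filesystem starts at byte $FS_START"
```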
I feel I'm only an inch away from getting access to my filesystem again and recovering some un-backed-up data.
The LVM configuration:
```
root@rescue ~ # pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/md1   vg0  lvm2 a--  2.71t 767.52g
root@rescue ~ # vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg0    1   5   0 wz--n- 2.71t 767.52g
root@rescue ~ # lvs
  LV        VG   Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  backup    vg0  -wi-a--- 500.00g
  container vg0  -wi-a--- 500.00g
  home      vg0  -wi-a--- 500.00g
  root      vg0  -wi-a--- 500.00g
  tmp       vg0  -wi-a---  10.00g
```
The VG configuration:
```
# Generated by LVM2 version 2.02.95(2) (2012-03-06): Sun Oct 13 23:56:33 2013

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing 'vgs'"

creation_host = "rescue"	# Linux rescue 3.10.12 #29 SMP Mon Sep 23 13:18:39 CEST 2013 x86_64
creation_time = 1381701393	# Sun Oct 13 23:56:33 2013

vg0 {
	id = "7p0Aiw-pBpd-rn6Y-geFb-jyZe-gide-Anc9ag"
	seqno = 19
	format = "lvm2"	# informational
	status = ["RESIZEABLE", "READ", "WRITE"]
	flags = []
	extent_size = 8192	# 4 Megabytes
	max_lv = 0
	max_pv = 0
	metadata_copies = 0

	physical_volumes {

		pv0 {
			id = "GBIwI4-AxBa-6faf-aLfB-UZiP-iSS9-FaOrhH"
			device = "/dev/md1"	# Hint only

			status = ["ALLOCATABLE"]
			flags = []
			dev_size = 5824875134	# 2.71242 Terabytes
			pe_start = 384
			pe_count = 711044	# 2.71242 Terabytes
		}
	}

	logical_volumes {

		root {
			id = "1e3gvq-IJnX-Aimz-ziiY-zucE-soCO-YU2ayp"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 128000	# 500 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 0
				]
			}
		}

		tmp {
			id = "px8JAy-JnkP-Amry-uHtf-lCUB-rfdx-Z8y11y"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 2560	# 10 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 128000
				]
			}
		}

		home {
			id = "e0AZbd-22Ss-RLrF-TgvF-CSDN-Nw6w-Gj7dal"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 128000	# 500 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 130560
				]
			}
		}

		backup {
			id = "ZXNcbK-gYKj-LJfm-f193-Ozsi-Rm3Y-kZL37c"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			creation_host = "new.bountin.net"
			creation_time = 1341852222	# 2012-07-09 18:43:42 +0200
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 128000	# 500 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 258560
				]
			}
		}

		container {
			id = "X9wheh-3ADB-Fiau-j7SR-pcH9-hXne-K2NVAc"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			creation_host = "new.bountin.net"
			creation_time = 1341852988	# 2012-07-09 18:56:28 +0200
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 128000	# 500 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 386560
				]
			}
		}
	}
}
```
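The recovered metadata is enough to compute where each LV starts on the PV: data begins pe_start = 384 sectors into /dev/md1, each extent is extent_size = 8192 sectors (4 MiB), and the second field of every stripes entry is the LV's starting extent. A small sketch of that arithmetic (512-byte sectors assumed, as LVM uses):

```shell
# Values copied from the metadata above.
SECTOR=512
PE_START=384          # sectors
EXTENT_SECTORS=8192   # sectors per extent (4 MiB)

lv_offset_bytes() {
    # $1 = starting extent of the LV on pv0 (from its "stripes" entry)
    echo $(( (PE_START + $1 * EXTENT_SECTORS) * SECTOR ))
}

echo "root starts at byte $(lv_offset_bytes 0)"
echo "tmp  starts at byte $(lv_offset_bytes 128000)"
echo "home starts at byte $(lv_offset_bytes 130560)"
```

If the recovered starting extents are off by even one extent, every superblock inside the LV shifts by 4 MiB, which matches the symptom of superblocks turning up in the wrong logical volume.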
For anyone with a similar problem:
I used e2sl [1] to find candidates for the actual filesystem directly on one of the RAID member devices, and mounted the filesystem via a loop device [2], skipping LVM and the software RAID entirely. I had to fiddle with the offset a bit (the superblock sits 1 KB from the start of the partition!), but in the end I managed. From there the rescue was easy: mount the loop device on a mount point, and everything could be copied off.

[1] http://schumann.cx/e2sl/
[2] mount --loop; also see losetup
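The rescue described above can be sketched roughly as follows. The device name and superblock block number here are placeholders, not values from this system; substitute whatever e2sl reports when scanning your own RAID member device, and note that losetup and mount require root:

```shell
# Assumed values: /dev/sdb2 stands in for one RAID member device, and
# SB_BLOCK/BLOCKSIZE for what e2sl reported when scanning that raw device.
SB_BLOCK=2048
BLOCKSIZE=4096
FS_START=$(( SB_BLOCK * BLOCKSIZE - 1024 ))   # superblock sits 1 KB into the fs

# Attach a read-only loop device at the filesystem start, bypassing
# mdadm and LVM entirely (root required):
#   losetup --read-only --offset "$FS_START" /dev/loop0 /dev/sdb2
#   fsck.ext4 -n /dev/loop0             # sanity-check before mounting
#   mkdir -p /mnt/rescue
#   mount -o ro /dev/loop0 /mnt/rescue  # then copy everything off
echo "loop device offset: $FS_START"
```

Mounting read-only is deliberate: the RAID members are degraded and the LVM layout is suspect, so nothing should write to them until the data is copied elsewhere.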