Mount

Mounting an LVM volume to recover data: wrong fs type, bad option, bad superblock

  • February 3, 2022

How can I mount an LVM disk used by a VM on the host (Proxmox), so that I can copy files off it?

The virtual machine (Xpenology) is broken and no longer boots. I have two 4 TB disks set up in RAID 1 and I want to get my data back, but I cannot mount the LVM.

(I am only interested in Disk_1 and Disk_2.)

Here is some information:

root@pr0xm0x:~# lvs
 LV            VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
 vm-100-disk-0 Disk_1 -wi-a-----    3.55t
 vm-100-disk-0 Disk_2 -wi-a-----    3.55t
 data          pve    twi-aotz-- <181.69g             66.51  3.92
 root          pve    -wi-ao----   69.50g
 swap          pve    -wi-ao----    8.00g
 vm-100-disk-0 pve    Vwi-a-tz--   16.00g data        12.24
 vm-100-disk-1 pve    Vwi-a-tz--   52.00m data        57.21
 vm-103-disk-1 pve    Vwi-a-tz--    6.00g data        27.56
 vm-200-disk-0 pve    Vwi-a-tz--  120.00g data        97.66
 vm-200-disk-1 pve    Vwi-a-tz--  100.00g data        0.00



root@pr0xm0x:~# parted /dev/Disk_1/vm-100-disk-0 print
Model: Linux device-mapper (linear) (dm)
Disk /dev/dm-0: 3908GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
1      1049kB  2551MB  2550MB  ext4                  raid
2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
3      4832MB  3908GB  3903GB                        raid



root@pr0xm0x:~# lsblk
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                             8:0    0 279.4G  0 disk
├─sda1                          8:1    0  1007K  0 part
├─sda2                          8:2    0   512M  0 part
└─sda3                          8:3    0 278.9G  0 part
 ├─pve-swap                  253:2    0     8G  0 lvm  [SWAP]
 ├─pve-root                  253:3    0  69.5G  0 lvm  /
 ├─pve-data_tmeta            253:4    0   1.9G  0 lvm
 │ └─pve-data-tpool          253:6    0 181.7G  0 lvm
 │   ├─pve-data              253:7    0 181.7G  0 lvm
 │   ├─pve-vm--200--disk--0  253:8    0   120G  0 lvm
 │   ├─pve-vm--100--disk--0  253:9    0    16G  0 lvm
 │   ├─pve-vm--100--disk--1  253:10   0    52M  0 lvm
 │   ├─pve-vm--200--disk--1  253:11   0   100G  0 lvm
 │   └─pve-vm--103--disk--1  253:12   0     6G  0 lvm
 └─pve-data_tdata            253:5    0 181.7G  0 lvm
   └─pve-data-tpool          253:6    0 181.7G  0 lvm
     ├─pve-data              253:7    0 181.7G  0 lvm
     ├─pve-vm--200--disk--0  253:8    0   120G  0 lvm
     ├─pve-vm--100--disk--0  253:9    0    16G  0 lvm
     ├─pve-vm--100--disk--1  253:10   0    52M  0 lvm
     ├─pve-vm--200--disk--1  253:11   0   100G  0 lvm
     └─pve-vm--103--disk--1  253:12   0     6G  0 lvm
sdb                             8:16   0   3.7T  0 disk
└─Disk_2-vm--100--disk--0     253:1    0   3.6T  0 lvm
sdc                             8:32   0   3.7T  0 disk
└─Disk_1-vm--100--disk--0     253:0    0   3.6T  0 lvm
 ├─Disk_1-vm--100--disk--0p1 253:13   0   2.4G  0 part
 ├─Disk_1-vm--100--disk--0p2 253:14   0     2G  0 part
 └─Disk_1-vm--100--disk--0p3 253:15   0   3.6T  0 part
sdd                             8:48   0   3.7T  0 disk
sde                             8:64   0   1.8T  0 disk
└─sde1                          8:65   0   1.8T  0 part
sdf                             8:80   1  14.4G  0 disk
├─sdf1                          8:81   1   2.9G  0 part
├─sdf2                          8:82   1   3.9M  0 part
└─sdf3                          8:83   1  11.6G  0 part
sr0                            11:0    1  1024M  0 rom

root@pr0xm0x:~# lvdisplay
 --- Logical volume ---
 LV Path                /dev/Disk_1/vm-100-disk-0
 LV Name                vm-100-disk-0
 VG Name                Disk_1
 LV UUID                Hek0vC-VCjH-9BhS-i1Va-5X3d-0mzC-FK3bbM
 LV Write Access        read/write
 LV Creation host, time pr0xm0x, 2020-01-23 08:50:40 +0100
 LV Status              available
 # open                 3
 LV Size                3.55 TiB
 Current LE             931840
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           253:0

 --- Logical volume ---
 LV Path                /dev/Disk_2/vm-100-disk-0
 LV Name                vm-100-disk-0
 VG Name                Disk_2
 LV UUID                M6dzfZ-6wXt-dyvI-pSL8-3hky-aROy-JfWZUC
 LV Write Access        read/write
 LV Creation host, time pr0xm0x, 2020-01-23 08:50:55 +0100
 LV Status              available
 # open                 0
 LV Size                3.55 TiB
 Current LE             931840
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           253:1

 --- Logical volume ---
 LV Path                /dev/pve/swap
 LV Name                swap
 VG Name                pve
 LV UUID                JogsLv-1xic-2cK2-rBRX-EHt5-buYg-pcrWJM
 LV Write Access        read/write
 LV Creation host, time proxmox, 2019-12-07 11:10:23 +0100
 LV Status              available
 # open                 2
 LV Size                8.00 GiB
 Current LE             2048
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           253:2

 --- Logical volume ---
 LV Path                /dev/pve/root
 LV Name                root
 VG Name                pve
 LV UUID                Ukw2fX-Dcf1-RueD-mx6e-spEw-GdrV-fvxnjB
 LV Write Access        read/write
 LV Creation host, time proxmox, 2019-12-07 11:10:23 +0100
 LV Status              available
 # open                 1
 LV Size                69.50 GiB
 Current LE             17792
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           253:3

 --- Logical volume ---
 LV Name                data
 VG Name                pve
 LV UUID                LZmHdO-0rZX-XfGy-6fRz-j9bm-VmJz-yS2CQd
 LV Write Access        read/write
 LV Creation host, time proxmox, 2019-12-07 11:10:24 +0100
 LV Pool metadata       data_tmeta
 LV Pool data           data_tdata
 LV Status              available
 # open                 6
 LV Size                <181.69 GiB
 Allocated pool data    66.51%
 Allocated metadata     3.92%
 Current LE             46512
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           253:6

 --- Logical volume ---
 LV Path                /dev/pve/vm-200-disk-0
 LV Name                vm-200-disk-0
 VG Name                pve
 LV UUID                vRF4uB-WzMy-B2Nm-LDcy-T8BN-ghjF-PqPVKS
 LV Write Access        read/write
 LV Creation host, time pr0xm0x, 2019-12-17 11:03:55 +0100
 LV Pool name           data
 LV Status              available
 # open                 0
 LV Size                120.00 GiB
 Mapped size            97.66%
 Current LE             30720
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           253:8

 --- Logical volume ---
 LV Path                /dev/pve/vm-100-disk-0
 LV Name                vm-100-disk-0
 VG Name                pve
 LV UUID                3yGcBF-rhHJ-EMhC-Ft8o-okne-YdVg-ll3D4f
 LV Write Access        read/write
 LV Creation host, time pr0xm0x, 2020-01-23 08:40:48 +0100
 LV Pool name           data
 LV Status              available
 # open                 0
 LV Size                16.00 GiB
 Mapped size            12.24%
 Current LE             4096
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           253:9

 --- Logical volume ---
 LV Path                /dev/pve/vm-100-disk-1
 LV Name                vm-100-disk-1
 VG Name                pve
 LV UUID                3YV9J4-mLv3-yHg3-Sv2f-kklP-cvPt-1H5Zc0
 LV Write Access        read/write
 LV Creation host, time pr0xm0x, 2020-01-23 08:48:19 +0100
 LV Pool name           data
 LV Status              available
 # open                 0
 LV Size                52.00 MiB
 Mapped size            57.21%
 Current LE             13
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           253:10

 --- Logical volume ---
 LV Path                /dev/pve/vm-200-disk-1
 LV Name                vm-200-disk-1
 VG Name                pve
 LV UUID                3TWqbr-RO52-chRo-ubLf-zzzx-4QGg-Z21cuq
 LV Write Access        read/write
 LV Creation host, time pr0xm0x, 2020-02-01 13:59:13 +0100
 LV Pool name           data
 LV Status              available
 # open                 0
 LV Size                100.00 GiB
 Mapped size            0.00%
 Current LE             25600
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           253:11

 --- Logical volume ---
 LV Path                /dev/pve/vm-103-disk-1
 LV Name                vm-103-disk-1
 VG Name                pve
 LV UUID                4e22Xm-P40c-NaxA-TttF-5eBQ-F3CR-IcK2DP
 LV Write Access        read/write
 LV Creation host, time pr0xm0x, 2022-01-30 16:47:57 +0100
 LV Pool name           data
 LV Status              available
 # open                 0
 LV Size                6.00 GiB
 Mapped size            27.56%
 Current LE             1536
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           253:12

And here is the result of trying to mount:

root@pr0xm0x:~# mount /dev/Disk_1/vm-100-disk-0 /mnt/Disk_1/
mount: /mnt/Disk_1: wrong fs type, bad option, bad superblock on /dev/mapper/Disk_1-vm--100--disk--0, missing codepage or helper program, or other error.

The names of the logical volumes suggest the LVs are used as virtual disks for VMs, so each LV probably contains a partition table and one or more partitions - your parted output proves that is exactly what is happening.

Since it has partitions defined inside it, you won't be able to mount /dev/Disk_1/vm-100-disk-0 directly as anything, any more than you could mount /dev/sda directly. When you mount something, the filesystem driver expects the actual filesystem to start at the first block of the device you are mounting, not at some distance into the device (i.e. after the partition table and possibly other partitions).
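If you want to confirm what sits at the start of the device, file -s reads its first blocks and should report a partition table rather than a filesystem (a hedged illustration; the exact output wording varies by version):

file -s /dev/Disk_1/vm-100-disk-0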

First, run losetup -f: it will report the first unused /dev/loopN device. Use the actual device it reports in place of /dev/loopN in all subsequent commands.
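For example (the reported device name varies from system to system; /dev/loop0 below is purely illustrative):

root@pr0xm0x:~# losetup -f
/dev/loop0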

Bind the LV to the loop device and ask losetup to automatically create partition devices for it:

losetup -P /dev/loopN /dev/Disk_1/vm-100-disk-0

This will create devices such as /dev/loopNp1, /dev/loopNp2 and so on. Through these devices you will be able to access each individual partition of the virtual disk contained in the LV.
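A minimal sketch of the sequence, assuming losetup -f reported /dev/loop0 and using partition 1 (shown as ext4 in the parted output above); /mnt/Disk_1 is the mount point from the question:

losetup -P /dev/loop0 /dev/Disk_1/vm-100-disk-0
lsblk /dev/loop0                        # should now list loop0p1, loop0p2 and loop0p3
mkdir -p /mnt/Disk_1
mount -o ro /dev/loop0p1 /mnt/Disk_1    # read-only is safer for data recovery

Note that the parted output flags all three partitions as raid; if a partition turns out to be a Linux RAID member rather than a plain filesystem, mounting it directly may still fail and the array would have to be assembled first (e.g. with mdadm).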

(If your version of losetup is too old to recognize the -P option, the kpartx command can be used as a replacement, as Bravo suggested in the comments. Depending on your distribution, kpartx may be available as a separate package or bundled with the device-mapper-multipath tools.)
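A hedged sketch of the kpartx route, again assuming /dev/loop0:

losetup /dev/loop0 /dev/Disk_1/vm-100-disk-0    # bind without -P
kpartx -a /dev/loop0                            # creates /dev/mapper/loop0p1, loop0p2, ...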

If the virtual disk contains regular partitions, you should now be able to mount them. However, if the virtual disk also contains LVM physical volumes, you must first activate the LVM volume group(s) before you can access their LVs: vgchange -ay should be enough to activate all detectable LVM volume groups.
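A sketch for the nested-LVM case; the inner VG and LV names (vg_inner/lv_data) are hypothetical placeholders, not names taken from the output above:

vgscan                                          # rescan now that the partition devices exist
vgchange -ay                                    # activate all detectable volume groups
lvs                                             # identify the inner LVs
mount -o ro /dev/vg_inner/lv_data /mnt/Disk_1   # hypothetical inner VG/LV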

Once you are done accessing the virtual disk, remember to undo, in the correct order, any steps that were needed to access its partitions (a combined sketch follows this list):

  1. Unmount any partitions/LVs you mounted from the virtual disk.
  2. If the virtual disk contained LVM volumes, deactivate any LVM volume groups you activated, using vgchange -an <name of the VG>. If you are unsure of the volume group's name, the output of the pvs command should help.
  3. If you used kpartx, run kpartx -d /dev/loopN to remove the partition devices as a separate step.
  4. Use losetup -d /dev/loopN to unbind the loop device (and any partition devices created by losetup -P).
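Putting those four steps together as one hedged sketch (assuming /dev/loop0 and the hypothetical inner VG vg_inner from the earlier examples; skip any step that does not apply to your setup):

umount /mnt/Disk_1        # 1. unmount anything mounted from the virtual disk
vgchange -an vg_inner     # 2. only if you activated an inner VG (vg_inner is hypothetical)
kpartx -d /dev/loop0      # 3. only if you used kpartx instead of losetup -P
losetup -d /dev/loop0     # 4. release the loop device and its partition devices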

Never mount a VM's virtual disk on the host system like this while the VM that owns the disk is actually running: it would make the host's and the VM's filesystem caches fall out of sync and conflict with each other, quickly leading to data corruption on the virtual disk.

Source: https://unix.stackexchange.com/questions/688946