Data-Recovery
RAID-1 won't assemble, members listed as SPARE: how do I recover?
I recently moved my Kubuntu 12.04 workstation to a new location. It was shut down cleanly, but when I booted it at its new home, the RAID-1 array /dev/md0 was gone, and its members are now listed as spares?(!)
The RAID-1 array /dev/md0 holds only critical files; the operating system lives on its own HDD.
Both members of the array appear healthy and are listed as: Linux raid autodetect.
Output of fdisk -l:
```
# fdisk -l

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000669b6

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1              63  1953520064   976760001   fd  Linux raid autodetect

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000f142

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63  1953520064   976760001   fd  Linux raid autodetect
```
Output of mdadm --examine:
```
# mdadm --examine /dev/sdb1 /dev/sdc1
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : f3d0cc70:52dfd786:d81c7e2d:1c12b06d
           Name : forsaken:0
  Creation Time : Tue Sep  3 04:52:19 2013
     Raid Level : -unknown-
   Raid Devices : 0

 Avail Dev Size : 1953517954 (931.51 GiB 1000.20 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : 75c525b2:cdfbc3c4:918ac90a:a1bedfd0

    Update Time : Thu Nov 20 16:50:46 2014
       Checksum : ff0eb2ba - correct
         Events : 1

   Device Role : spare
   Array State :  ('A' == active, '.' == missing)
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : f3d0cc70:52dfd786:d81c7e2d:1c12b06d
           Name : forsaken:0
  Creation Time : Tue Sep  3 04:52:19 2013
     Raid Level : -unknown-
   Raid Devices : 0

 Avail Dev Size : 1953517954 (931.51 GiB 1000.20 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : e62e960a:3adf4b5e:f1fb773f:a7a80cfa

    Update Time : Thu Nov 20 16:50:46 2014
       Checksum : 4ee25b00 - correct
         Events : 1

   Device Role : spare
   Array State :  ('A' == active, '.' == missing)
```
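The telling fields above are `Raid Level : -unknown-`, `Raid Devices : 0`, and `Device Role : spare` on both members, even though both checksums are reported as correct. A quick way to pull those diagnostic fields out side by side, as a sketch (the file name `examine.txt` is just an example):

```shell
# Save the superblock dump, then extract only the fields that
# matter for diagnosing why the array will not assemble.
mdadm --examine /dev/sdb1 /dev/sdc1 > examine.txt
awk -F' : ' '/Array UUID|Raid Level|Raid Devices|Events|Device Role/ {
    gsub(/^ +/, "", $1); print $1 " = " $2
}' examine.txt
```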
I stopped the array and attempted auto-assembly:
```
# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
# mdadm --assemble -v --scan --uuid=f3d0cc70:52dfd786:d81c7e2d:1c12b06d
mdadm: looking for devices for /dev/md/0
mdadm: cannot open device /dev/sr0: No medium found
mdadm: no RAID superblock on /dev/sdb
mdadm: no RAID superblock on /dev/sde1
mdadm: no RAID superblock on /dev/sde
mdadm: no RAID superblock on /dev/sdd1
mdadm: no RAID superblock on /dev/sdd
mdadm: no RAID superblock on /dev/sdc
mdadm: no RAID superblock on /dev/sda9
mdadm: no RAID superblock on /dev/sda8
mdadm: no RAID superblock on /dev/sda7
mdadm: no RAID superblock on /dev/sda6
mdadm: no RAID superblock on /dev/sda5
mdadm: no RAID superblock on /dev/sda4
mdadm: no RAID superblock on /dev/sda3
mdadm: no RAID superblock on /dev/sda2
mdadm: no RAID superblock on /dev/sda1
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sdb1 is identified as a member of /dev/md/0, slot -1.
mdadm: /dev/sdc1 is identified as a member of /dev/md/0, slot -1.
mdadm: added /dev/sdc1 to /dev/md/0 as -1
mdadm: added /dev/sdb1 to /dev/md/0 as -1
mdadm: /dev/md/0 assembled from 0 drives and 2 spares - not enough to start the array.
```
I then tried to assemble the array and force it to run:
```
# mdadm --assemble -v --scan --force --run --uuid=f3d0cc70:52dfd786:d81c7e2d:1c12b06d
mdadm: looking for devices for /dev/md/0
mdadm: cannot open device /dev/sr0: No medium found
mdadm: no RAID superblock on /dev/sdb
mdadm: no RAID superblock on /dev/sde1
mdadm: no RAID superblock on /dev/sde
mdadm: no RAID superblock on /dev/sdd1
mdadm: no RAID superblock on /dev/sdd
mdadm: no RAID superblock on /dev/sdc
mdadm: no RAID superblock on /dev/sda9
mdadm: no RAID superblock on /dev/sda8
mdadm: no RAID superblock on /dev/sda7
mdadm: no RAID superblock on /dev/sda6
mdadm: no RAID superblock on /dev/sda5
mdadm: no RAID superblock on /dev/sda4
mdadm: no RAID superblock on /dev/sda3
mdadm: no RAID superblock on /dev/sda2
mdadm: no RAID superblock on /dev/sda1
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sdb1 is identified as a member of /dev/md/0, slot -1.
mdadm: /dev/sdc1 is identified as a member of /dev/md/0, slot -1.
mdadm: added /dev/sdc1 to /dev/md/0 as -1
mdadm: added /dev/sdb1 to /dev/md/0 as -1
mdadm: failed to RUN_ARRAY /dev/md/0: Invalid argument
mdadm: Not enough devices to start the array.
```
That still did not work. How can I reassemble my RAID-1 array and get access to my data back?
Somehow the RAID metadata got corrupted. How did that happen? Once you have fixed whatever caused it (misconfiguration, a stray script, a hardware problem, and so on), try mounting each member read-only:
```
mkdir /mnt/{sdb1,sdc1}
mount -o ro,loop,offset=$((2048*512)) /dev/sdb1 /mnt/sdb1
mount -o ro,loop,offset=$((2048*512)) /dev/sdc1 /mnt/sdc1
```
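The `offset=` value here is the superblock's Data Offset converted from 512-byte sectors to bytes, since `mount` expects bytes; with the 2048-sector offset reported by `mdadm --examine` above, that works out to 1 MiB:

```shell
# Data Offset in `mdadm --examine` output is given in 512-byte
# sectors; mount's offset= option wants bytes.
data_offset_sectors=2048                 # from the examine output
echo $((data_offset_sectors * 512))      # 1048576 bytes = 1 MiB
```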
See whether either one mounts, verify some files of known type, and compare the two sides to see whether either has corrupted files. This is also a good time to make a backup.
Once you have decided which side to keep, use it to create a new RAID.
Unmount first:

```
umount /mnt/{sdb1,sdc1}
```
If /proc/mdstat shows any /dev/md* device using either member, stop it:

```
mdadm --stop /dev/md0
```
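That check can be scripted as a small guard, as a sketch (device names assumed from the question above):

```shell
# Stop any md device that currently claims either member before
# re-creating the array (member names sdb1/sdc1 are assumptions
# taken from this particular setup).
if grep -qE 'sd[bc]1' /proc/mdstat 2>/dev/null; then
    mdadm --stop /dev/md0
fi
```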
Create a new RAID-1, in this example using /dev/sdb1. Note that you must use the correct metadata version and the correct data offset, so only do this if the read-only mount above actually worked; otherwise you first have to determine the correct offset.

```
mdadm --create /dev/md0 --metadata=1.2 --data-offset=2048 \
    --level=1 --raid-devices=2 /dev/sdb1 missing
```
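One way to double-check the offset independently, assuming the array held an ext2/3/4 filesystem: the ext superblock starts at byte 1024 of the filesystem and its magic number 0xEF53 sits 56 bytes into it, stored little-endian, so it should show up at `data_offset*512 + 1080` on the raw member:

```shell
# Sanity-check the data offset before running --create (assumes an
# ext filesystem lived on the array). 1080 = 1024 (superblock
# start) + 56 (magic field); 0xEF53 is stored little-endian.
dd if=/dev/sdb1 bs=1 skip=$((2048*512 + 1080)) count=2 2>/dev/null \
    | od -An -tx1
# an ext filesystem at this offset prints: 53 ef
```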
Again, mount it read-only to verify that it actually works as expected:

```
mount -o ro,loop /dev/md0 /mnt/sdb1
```
If everything checks out, finally add the missing device back to your RAID:

```
mdadm /dev/md0 --add /dev/sdc1
```
This will overwrite /dev/sdc1 with the data on /dev/sdb1 and, with luck, bring your RAID back in sync.
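The resync progress is visible in /proc/mdstat; a sketch for keeping an eye on it:

```shell
# Show the md0 stanza, which includes the recovery progress bar
# while the rebuild is running.
grep -A 3 '^md0' /proc/mdstat
# or follow it live:  watch -n 5 cat /proc/mdstat
```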
On a side note: your partitions start at sector 63. That is fine as long as your disks have 512-byte sectors; however, most newer disks use 4K sectors, so if you ever have to replace a disk in this RAID, you may also have to pay attention to partition alignment on the new disk.
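A partition is safe for 4K-sector disks when its start offset in bytes is a multiple of 4096. The old start sector 63 fails that test, while the common modern default of 2048 passes; a quick check, as a sketch:

```shell
# Check whether a partition start (given in 512-byte sectors) is
# aligned to a 4096-byte boundary.
is_4k_aligned() {
    [ $(( $1 * 512 % 4096 )) -eq 0 ] && echo aligned || echo misaligned
}
is_4k_aligned 63     # misaligned (63*512 = 32256 bytes)
is_4k_aligned 2048   # aligned   (2048*512 bytes = 1 MiB)
```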