
Deleted md0 and md1 and cannot recreate them

  • July 25, 2013

I previously had a working RAID 1 array. I believe that by running grub-install /dev/sdb and grub-install /dev/sdc I somehow wiped out md0 and md1 on my machine.

I need to set them up again. When I try to create the md0 array, I get the following error.

/dev# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing -f
mdadm: device /dev/sda1 not suitable for any style of array

It seems Debian thinks the drive is already part of an array, but it is not.
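
As the answer below points out, the partition is actually busy because it is mounted as the root filesystem; a quick way to see that the device is held open (a diagnostic sketch, run as root):

grep sda /proc/mounts     # shows /dev/sda1 mounted on /
fuser -vm /dev/sda1       # lists the processes keeping that filesystem busy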

# fdisk -l

Disk /dev/sda: 250.0 GB, 250000000000 bytes
255 heads, 63 sectors/track, 30394 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000080

  Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1       30064   241489048+  fd  Linux raid autodetect
/dev/sda2           30065       30394     2650725    5  Extended
/dev/sda5           30065       30394     2650693+  fd  Linux raid autodetect

# cat /proc/mdstat 
Personalities : [raid1] 
unused devices: <none>

Edit: here is the output of mount, as requested. It seems to show md0 mounted on /, but then why doesn't it appear in /proc/mdstat?

/dev/md0 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)

cat /proc/mounts 
rootfs / rootfs rw 0 0
none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
none /proc proc rw,nosuid,nodev,noexec,relatime 0 0
none /dev devtmpfs rw,relatime,size=4143896k,nr_inodes=204530,mode=755 0 0
none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
/dev/sda1 / ext3 rw,relatime,errors=remount-ro,data=ordered 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,relatime,mode=755 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0

file -s /dev/sda*
/dev/sda:  x86 boot sector; GRand Unified Bootloader, stage1 version 0x3, stage2 address 0x2000, stage2 segment 0x200; partition 1: ID=0xfd, active, starthead 1, startsector 63, 482978097 sectors; partition 2: ID=0x5, starthead 254, startsector 482978160, 5301450 sectors, code offset 0x48, OEM-ID "      м", Bytes/sector 190, sectors/cluster 124, reserved sectors 191, FATs 6, root entries 185, sectors 64514 (volumes <=32 MB) , Media descriptor 0xf3, sectors/FAT 20644, heads 6, hidden sectors 309755, sectors 2147991229 (volumes > 32 MB) , physical drive 0x7e, dos < 4.0 BootSector (0x0)
/dev/sda1: Linux rev 1.0 ext3 filesystem data, UUID=38daaa54-a108-4224-9104-016d5b4ee12c (needs journal recovery) (large files)
/dev/sda2: x86 boot sector; partition 1: ID=0xfd, starthead 254, startsector 63, 5301387 sectors, extended partition table (last)\011, code offset 0x0
/dev/sda5: Linux/i386 swap file (new style), version 1 (4K pages), size 662655 pages, no label, UUID=f635267e-37f8-43d0-ad01-d25969570a8f

More information: my working RAID array had md0 and md1 on drives sdb and sdc. I ran those grub-install commands. A few days later I tried to reboot and got the following error (which I believe corresponds to md0):

Gave up waiting for boot device
ALERT /dev/disk/by-uuid/38[...] does not exist

So I unplugged my 2nd and 3rd HDs. GRUB would not boot while pointed at md0, so I pointed it at /dev/sda1 instead, and that is the state my machine is in now.

mdadm --assemble --scan -v -v
mdadm: looking for devices for /dev/md0
mdadm: /dev/sda5 has wrong uuid.
mdadm: no recogniseable superblock on /dev/sda2
mdadm: /dev/sda2 has wrong uuid.
mdadm: cannot open device /dev/sda1: Device or resource busy
mdadm: /dev/sda1 has wrong uuid.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda has wrong uuid.
mdadm: looking for devices for /dev/md1
mdadm: /dev/sda5 requires wrong number of drives.
mdadm: no recogniseable superblock on /dev/sda2
mdadm: /dev/sda2 has wrong uuid.
mdadm: cannot open device /dev/sda1: Device or resource busy
mdadm: /dev/sda1 has wrong uuid.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda has wrong uuid.


sfdisk -d /dev/sda
# partition table of /dev/sda
unit: sectors

/dev/sda1 : start=       63, size=482978097, Id=fd, bootable
/dev/sda2 : start=482978160, size=  5301450, Id= 5
/dev/sda3 : start=        0, size=        0, Id= 0
/dev/sda4 : start=        0, size=        0, Id= 0
/dev/sda5 : start=482978223, size=  5301387, Id=fd

/dev/sda1 is mounted. You cannot do anything to it while it is mounted. Reboot into a live CD.
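
From the live environment, a quick sanity check before touching the disk (a sketch; some live CDs auto-mount any partitions they find):

grep sda1 /proc/mounts && umount /dev/sda1   # make sure the live system has not auto-mounted it
cat /proc/mdstat                             # nothing should be claiming sda1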

You can create a RAID 1 volume from an existing filesystem without losing the data. It has to use the 0.9 or 1.0 superblock format, because the default 1.2 format places the superblock near the beginning of the device, so the filesystem could not start in the same place. For a complete walkthrough, see How to set up disk mirroring (RAID-1).
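
If you want to confirm which superblock format (and therefore which superblock location) an existing member uses, mdadm can report it; /dev/sdb1 below is only an assumed example member from the old array:

mdadm --examine /dev/sdb1 | grep -iE 'version|super offset'
# 0.90 and 1.0 superblocks sit at the end of the device; 1.1 and 1.2 sit near the start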

You need to make sure there is enough room for the superblock at the end of the device. The superblock lives in the last 64kB-aligned 64kB of the device, so depending on the device size it can sit anywhere from 64kB to 128kB before the end. Run tune2fs -l /dev/sda1 and multiply the "Block count" value by the "Block size" value to get the filesystem size in bytes. The block device is 241489048.5 kB, so you need the filesystem to come down to at most 241488960 kB. If it is larger than that, run resize2fs /dev/sda1 241488960K before running mdadm --create.
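
A worked version of that calculation (the block count and block size below are hypothetical example values; use whatever tune2fs prints for your filesystem):

tune2fs -l /dev/sda1 | grep -E '^Block (count|size)'
# Block count:              60372262        (hypothetical)
# Block size:               4096
# 60372262 * 4096 = 247284785152 bytes = 241489048 kB, which is above 241488960 kB,
# so shrink the filesystem first (resize2fs refuses to shrink without a prior fsck):
e2fsck -f /dev/sda1
resize2fs /dev/sda1 241488960K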

Once the filesystem is small enough, you can create the RAID 1 device with a suitable metadata format:

mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 missing
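
After the degraded array exists, the usual follow-up on Debian looks roughly like this (a sketch; the walkthrough linked above covers these steps in detail, and /dev/sdb1 is an assumed name for the second disk's partition):

cat /proc/mdstat                                 # md0 should now be running, degraded
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record the array (from the installed system or a chroot)
update-initramfs -u                              # so the initramfs can assemble md0 at boot
mdadm /dev/md0 --add /dev/sdb1                   # once booting from md0 works, attach the mirror half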

Quoted from: https://unix.stackexchange.com/questions/84199