
mdadm raid doesn't mount

  • January 13, 2018

I have RAID arrays defined in /etc/mdadm.conf like this:

ARRAY /dev/md0 devices=/dev/sdb6,/dev/sdc6
ARRAY /dev/md1 devices=/dev/sdb7,/dev/sdc7

But when I try to mount them, I get this:

# mount /dev/md0 /mnt/media/
mount: special device /dev/md0 does not exist
# mount /dev/md1 /mnt/data
mount: special device /dev/md1 does not exist

Meanwhile, /proc/mdstat says:

# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md125 : inactive dm-6[0](S)
     238340224 blocks

md126 : inactive dm-5[0](S)
     244139648 blocks

md127 : inactive dm-3[0](S)
     390628416 blocks

unused devices: <none>
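In /proc/mdstat, the (S) suffix marks a member that md is holding as a spare, and "inactive" means the array has not been started. What md knows about such a half-assembled array can be inspected with, for example:

mdadm --detail /dev/md127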

So I tried this:

# mount /dev/md126 /mnt/data
mount: /dev/md126: can't read superblock
# mount /dev/md125 /mnt/media
mount: /dev/md125: can't read superblock

The fs on the partitions is ext3, and when I specify the fs with -t, I get:

mount: wrong fs type, bad option, bad superblock on /dev/md126,
      missing codepage or helper program, or other error
      (could this be the IDE device where you in fact use
      ide-scsi so that sr0 or sda or so is needed?)
      In some cases useful info is found in syslog - try
      dmesg | tail  or so

How can I mount my RAID arrays? They worked before.

Edit 1

# mdadm --detail --scan
mdadm: cannot open /dev/md/127_0: No such file or directory
mdadm: cannot open /dev/md/0_0: No such file or directory
mdadm: cannot open /dev/md/1_0: No such file or directory

Edit 2

# dmsetup ls
isw_cabciecjfi_Raid7    (252:6)
isw_cabciecjfi_Raid6    (252:5)
isw_cabciecjfi_Raid5    (252:4)
isw_cabciecjfi_Raid3    (252:3)
isw_cabciecjfi_Raid2    (252:2)
isw_cabciecjfi_Raid1    (252:1)
isw_cabciecjfi_Raid     (252:0)
# dmsetup table
isw_cabciecjfi_Raid7: 0 476680617 linear 252:0 1464854958
isw_cabciecjfi_Raid6: 0 488279484 linear 252:0 976575411
isw_cabciecjfi_Raid5: 0 11968362 linear 252:0 1941535638
isw_cabciecjfi_Raid3: 0 781257015 linear 252:0 195318270
isw_cabciecjfi_Raid2: 0 976928715 linear 252:0 976575285
isw_cabciecjfi_Raid1: 0 195318207 linear 252:0 63
isw_cabciecjfi_Raid: 0 1953519616 mirror core 2 131072 nosync 2 8:32 0 8:16 0 1 handle_errors
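The isw_* names show that these device-mapper nodes come from dmraid's Intel firmware RAID (isw) metadata: the last line is a device-mapper mirror of the two whole disks (8:16 is /dev/sdb, 8:32 is /dev/sdc), and the Raid1 to Raid7 entries are linear segments, i.e. the partitions, carved out of that mirror. If the dmraid tool is installed, the discovered sets can be listed with:

dmraid -s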

Edit 3

# file -s -L /dev/mapper/*
/dev/mapper/control:              ERROR: cannot read `/dev/mapper/control' (Invalid argument)
/dev/mapper/isw_cabciecjfi_Raid:  x86 boot sector
/dev/mapper/isw_cabciecjfi_Raid1: Linux rev 1.0 ext4 filesystem data, UUID=a8d48d53-fd68-40d8-8dd5-3cecabad6e7a (needs journal recovery) (extents) (large files) (huge files)
/dev/mapper/isw_cabciecjfi_Raid3: Linux rev 1.0 ext4 filesystem data, UUID=3cb24366-b9c8-4e68-ad7b-22449668f047 (extents) (large files) (huge files)
/dev/mapper/isw_cabciecjfi_Raid5: Linux/i386 swap file (new style), version 1 (4K pages), size 1496044 pages, no label, UUID=f07e031f-368a-443e-a21c-77fa27adf795
/dev/mapper/isw_cabciecjfi_Raid6: Linux rev 1.0 ext3 filesystem data, UUID=0f0b401a-f238-4b20-9b2a-79cba56dd9d0 (large files)
/dev/mapper/isw_cabciecjfi_Raid7: Linux rev 1.0 ext3 filesystem data, UUID=b2d66029-eeb9-4e4a-952c-0a3bd0696159 (large files)
# 

Also, there is an extra disk, /dev/mapper/isw_cabciecjfi_Raid, in my system now. I tried to mount one of its partitions but got:

# mount /dev/mapper/isw_cabciecjfi_Raid6 /mnt/media
mount: unknown filesystem type 'linux_raid_member'
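Without an explicit -t, mount asks blkid for the filesystem type, and blkid answers linux_raid_member here because the device also carries an md superblock (an 0.90 superblock sits at the end of the device, which is why file -s still sees the ext3 data at the start). In other words, the device is an md member that should be assembled rather than mounted directly; the detection can be double-checked with:

blkid /dev/mapper/isw_cabciecjfi_Raid6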

I rebooted and confirmed that the RAID is set up in my BIOS.

I tried to force a mount, which seems to succeed, but the content of the partition is inaccessible, so it still doesn't work as expected:

# mount -ft ext3 /dev/mapper/isw_cabciecjfi_Raid6 /mnt/media
# ls -l /mnt/media/
total 0
# mount -ft ext3 /dev/mapper/isw_cabciecjfi_Raid /mnt/data
# ls -l /mnt/data
total 0
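Note that -f tells mount to fake the operation (everything is done except the actual mount(2) call), so the empty listings above are expected and say nothing about the data; a real attempt would drop that flag:

mount -t ext3 /dev/mapper/isw_cabciecjfi_Raid6 /mnt/media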

Edit 4

After running the suggested command, all I get is:

$ sudo mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7
mdadm: cannot open /dev/sd[bc]6: No such file or directory
mdadm: cannot open /dev/sd[bc]7: No such file or directory
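That failure is expected here: the whole disks are claimed by dmraid, so the member partitions do not appear as /dev/sd[bc]6 and /dev/sd[bc]7 but as device-mapper nodes. The equivalent check against those nodes would be something like:

mdadm --examine /dev/dm-*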

Edit 5

/dev/md127 is mounted now, but /dev/md0 and /dev/md1 are still inaccessible:

# mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7
mdadm: cannot open /dev/sd[bc]6: No such file or directory
mdadm: cannot open /dev/sd[bc]7: No such file or directory



root@regDesktopHome:~# mdadm --stop /dev/md12[567]
mdadm: stopped /dev/md127
root@regDesktopHome:~# mdadm --assemble --scan
mdadm: /dev/md127 has been started with 1 drive (out of 2).
root@regDesktopHome:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md127 : active raid1 dm-3[0]
     390628416 blocks [2/1] [U_]

md1 : inactive dm-6[0](S)
     238340224 blocks

md0 : inactive dm-5[0](S)
     244139648 blocks

unused devices: <none>
root@regDesktopHome:~# ls -l /dev/mapper
total 0
crw------- 1 root root  10, 236 Aug 13 22:43 control
brw-rw---- 1 root disk 252,   0 Aug 13 22:43 isw_cabciecjfi_Raid
brw------- 1 root root 252,   1 Aug 13 22:43 isw_cabciecjfi_Raid1
brw------- 1 root root 252,   2 Aug 13 22:43 isw_cabciecjfi_Raid2
brw------- 1 root root 252,   3 Aug 13 22:43 isw_cabciecjfi_Raid3
brw------- 1 root root 252,   4 Aug 13 22:43 isw_cabciecjfi_Raid5
brw------- 1 root root 252,   5 Aug 13 22:43 isw_cabciecjfi_Raid6
brw------- 1 root root 252,   6 Aug 13 22:43 isw_cabciecjfi_Raid7
root@regDesktopHome:~# mdadm --examine
mdadm: No devices to examine
root@regDesktopHome:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md127 : active raid1 dm-3[0]
     390628416 blocks [2/1] [U_]

md1 : inactive dm-6[0](S)
     238340224 blocks

md0 : inactive dm-5[0](S)
     244139648 blocks

unused devices: <none>
root@regDesktopHome:~# mdadm --examine /dev/dm-[356]
/dev/dm-3:
         Magic : a92b4efc
       Version : 0.90.00
          UUID : 124cd4a5:2965955f:cd707cc0:bc3f8165
 Creation Time : Tue Sep  1 18:50:36 2009
    Raid Level : raid1
 Used Dev Size : 390628416 (372.53 GiB 400.00 GB)
    Array Size : 390628416 (372.53 GiB 400.00 GB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 127

   Update Time : Sat May 31 18:52:12 2014
         State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0
      Checksum : 23fe942e - correct
        Events : 167


     Number   Major   Minor   RaidDevice State
this     0       8       35        0      active sync

  0     0       8       35        0      active sync
  1     1       8       19        1      active sync
/dev/dm-5:
         Magic : a92b4efc
       Version : 0.90.00
          UUID : 91e560f1:4e51d8eb:cd707cc0:bc3f8165
 Creation Time : Tue Sep  1 19:15:33 2009
    Raid Level : raid1
 Used Dev Size : 244139648 (232.83 GiB 250.00 GB)
    Array Size : 244139648 (232.83 GiB 250.00 GB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 0

   Update Time : Fri May  9 21:48:44 2014
         State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0
      Checksum : bfad9d61 - correct
        Events : 75007


     Number   Major   Minor   RaidDevice State
this     0       8       38        0      active sync

  0     0       8       38        0      active sync
  1     1       8       22        1      active sync
/dev/dm-6:
         Magic : a92b4efc
       Version : 0.90.00
          UUID : 0abe503f:401d8d09:cd707cc0:bc3f8165
 Creation Time : Tue Sep  8 21:19:15 2009
    Raid Level : raid1
 Used Dev Size : 238340224 (227.30 GiB 244.06 GB)
    Array Size : 238340224 (227.30 GiB 244.06 GB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 1

   Update Time : Fri May  9 21:48:44 2014
         State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0
      Checksum : 2a7a125f - correct
        Events : 3973383


     Number   Major   Minor   RaidDevice State
this     0       8       39        0      active sync

  0     0       8       39        0      active sync
  1     1       8       23        1      active sync
root@regDesktopHome:~# 

Edit 6

I stopped them with mdadm --stop /dev/md[01] and confirmed that /proc/mdstat no longer shows them, then ran mdadm --assemble --scan and got:

# mdadm --assemble --scan
mdadm: /dev/md0 has been started with 1 drives.
mdadm: /dev/md1 has been started with 2 drives.

But if I try to mount either array, I still get:

root@regDesktopHome:~# mount /dev/md1 /mnt/data
mount: wrong fs type, bad option, bad superblock on /dev/md1,
      missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try
      dmesg | tail  or so

Meanwhile, I found that my superblocks appear to be damaged (PS: I have confirmed with tune2fs and fdisk that I am dealing with ext3 partitions):

root@regDesktopHome:~# e2fsck /dev/md1
e2fsck 1.42.9 (4-Feb-2014)
The filesystem size (according to the superblock) is 59585077 blocks
The physical size of the device is 59585056 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes
root@regDesktopHome:~# e2fsck /dev/md0
e2fsck 1.42.9 (4-Feb-2014)
The filesystem size (according to the superblock) is 61034935 blocks
The physical size of the device is 61034912 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes
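The discrepancy is small in both cases (23 and 21 blocks respectively): the filesystem believes it is slightly larger than the md device, which can happen when a filesystem created directly on a partition is later accessed through an md device that reserves space for its superblock. The two sizes can be compared directly, for example:

dumpe2fs -h /dev/md0 | grep 'Block count'
blockdev --getsize64 /dev/md0    # bytes; divide by the 4096-byte block size to compare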

However, both partitions have backup superblocks:

root@regDesktopHome:~# mke2fs -n /dev/md0
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
15261696 inodes, 61034912 blocks
3051745 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1863 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
       32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
       4096000, 7962624, 11239424, 20480000, 23887872

root@regDesktopHome:~# mke2fs -n /dev/md1
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
14901248 inodes, 59585056 blocks
2979252 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1819 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
       32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
       4096000, 7962624, 11239424, 20480000, 23887872

What do you think - should I try restoring the backup at 23887872 on both arrays? I suppose I could do that with e2fsck -b 23887872 /dev/md[01] - would you recommend giving it a try?

I don't necessarily want to try something I don't fully understand that could destroy the data on my disks… man e2fsck doesn't exactly call it dangerous, but maybe there is another, more conservative way to repair the superblock...?
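A comparatively safe first step is to let e2fsck use a backup superblock in read-only mode: -n opens the filesystem read-only and answers 'no' to every question, -b names a backup superblock, and -B gives the block size (4096 according to the mke2fs -n output above):

e2fsck -n -b 32768 -B 4096 /dev/md0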


Final update for the community


I used resize2fs to get my superblocks back in order, and my drives are mounted again! (resize2fs /dev/md0 and resize2fs /dev/md1 did it for me!) Long story, but it finally worked! I learned a lot about mdadm along the way! Thanks @IanMacintosh
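For later readers, a sketch of the sequence that update describes (resize2fs with no size argument resizes the filesystem to the current size of the device; it may insist on a forced check first):

e2fsck -f /dev/md0    # resize2fs usually requires a prior forced check
resize2fs /dev/md0    # shrink the fs to match the md device
mount /dev/md0 /mnt/media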

Answer

Your arrays are not started correctly. Remove them from the running configuration with:

mdadm --stop /dev/md12[567]

Now try the automatic scan and assemble feature:

mdadm --assemble --scan

Assuming that works, save your configuration with the following commands (assuming a Debian derivative; this overwrites your config, so we make a backup first):

mv /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.old
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
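mkconf builds the new file from mdadm --detail --scan, which identifies arrays by UUID rather than by member device paths, so the resulting lines look roughly like this (UUIDs taken from the --examine output above):

ARRAY /dev/md0 UUID=91e560f1:4e51d8eb:cd707cc0:bc3f8165
ARRAY /dev/md1 UUID=0abe503f:401d8d09:cd707cc0:bc3f8165

On Debian derivatives the initramfs carries its own copy of mdadm.conf, so it is usually worth regenerating it as well so that boot-time assembly picks up the change:

update-initramfs -u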

That should fix reboots, so the arrays are assembled automatically at every boot.

If not, post the output of:

mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7

It will be a bit long, but it will show everything you need to know about the arrays and their member disks, their state, and so on.

By the way, your life will be simpler if you don't create multiple RAID arrays on one disk (i.e. /dev/sd[bc]6 and /dev/sd[bc]7 as separate arrays). Instead, create a single array and then, if you need them, create partitions on the array. In most cases though, LVM is a better way to partition an array.
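As an illustration of that layout (a sketch only, with hypothetical device names on a fresh setup - mdadm --create destroys existing data): one mirror across the disks, with LVM supplying the 'partitions':

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
pvcreate /dev/md0                 # make the array an LVM physical volume
vgcreate vg0 /dev/md0             # hypothetical volume group name
lvcreate -L 100G -n media vg0     # carve out a logical volume
mkfs.ext3 /dev/vg0/media          # the filesystem goes on the LV, not the array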

Quoted from: https://unix.stackexchange.com/questions/148062