Raid

Curious soft RAID 5 setup inconsistency

  • September 14, 2011

I recently bit the bullet and upgraded my OS from Fedora 11 to Fedora 15, and I have been trying to work out why Fedora 15 cannot see the RAID setup I created under Fedora 11. I figure I must be missing something, so I'm appealing to the collective wisdom here.

When I upgraded, I used a new boot drive for Fedora 15, so I can physically swap boot drives and boot into either Fedora 11 or Fedora 15. Fedora 11 still sees the RAID and everything works fine. Fedora 15 shows something very strange.

[Edited to add the output requested by @psusi]

On Fedora 11

I have a normal boot drive (/dev/sda) and an LVM sitting on top of a RAID 5 (/dev/sdb, /dev/sdc, /dev/sdd).

Specifically, the RAID device /dev/md/127_0 is built from /dev/sdb1, /dev/sdc1 and /dev/sdd1, where each partition takes up the whole disk.

The boot drive's volume group (/dev/vg_localhost/) is irrelevant here. The volume group I created on the RAID device is called /dev/lvm-tb-storage/.

Here is the setup as reported by the system (mdadm, pvscan, lvscan, etc.):

[root@localhost ~]# cat /etc/mdadm.conf 

[root@localhost ~]# pvscan
 PV /dev/md127   VG lvm-tb-storage   lvm2 [1.82 TB / 0    free]
 PV /dev/sda5    VG vg_localhost        lvm2 [61.44 GB / 0    free]
 Total: 2 [1.88 TB] / in use: 2 [1.88 TB] / in no VG: 0 [0   ]

[root@localhost ~]# lvscan
 ACTIVE            '/dev/lvm-tb-storage/tb' [1.82 TB] inherit
 ACTIVE            '/dev/vg_localhost/lv_root' [54.68 GB] inherit
 ACTIVE            '/dev/vg_localhost/lv_swap' [6.77 GB] inherit

[root@localhost ~]# vgdisplay
 --- Volume group ---
 VG Name               lvm-tb-storage
 System ID             
 Format                lvm2
 Metadata Areas        1
 Metadata Sequence No  6
 VG Access             read/write
 VG Status             resizable
 MAX LV                0
 Cur LV                1
 Open LV               1
 Max PV                0
 Cur PV                1
 Act PV                1
 VG Size               1.82 TB
 PE Size               4.00 MB
 Total PE              476839
 Alloc PE / Size       476839 / 1.82 TB
 Free  PE / Size       0 / 0   
 VG UUID               wqIXsb-KRZQ-eRnH-JvuP-VdHk-XJTG-DSWimc

 --- Volume group ---
 VG Name               vg_localhost
 System ID             
 Format                lvm2
 Metadata Areas        1
 Metadata Sequence No  3
 VG Access             read/write
 VG Status             resizable
 MAX LV                0
 Cur LV                2
 Open LV               2
 Max PV                0
 Cur PV                1
 Act PV                1
 VG Size               61.44 GB
 PE Size               4.00 MB
 Total PE              15729
 Alloc PE / Size       15729 / 61.44 GB
 Free  PE / Size       0 / 0   
 VG UUID               IVIpCV-C4qg-Lii7-zwkz-P3si-MXAZ-WYUSe6

[root@localhost ~]# vgscan
 Reading all physical volumes.  This may take a while...
 Found volume group "lvm-tb-storage" using metadata type lvm2
 Found volume group "vg_localhost" using metadata type lvm2

[root@localhost ~]# mdadm --detail --scan
ARRAY /dev/md/127_0 metadata=0.90 UUID=bebfd467:cb6700d9:29bdc0db:c30228ba

[root@localhost ~]# ls -al /dev/md
total 0
drwxr-xr-x.  2 root root   60 2011-09-13 03:14 .
drwxr-xr-x. 19 root root 5180 2011-09-13 03:15 ..
lrwxrwxrwx.  1 root root    8 2011-09-13 03:14 127_0 -> ../md127

[root@localhost ~]# mdadm --detail /dev/md/127_0 
/dev/md/127_0:
       Version : 0.90
 Creation Time : Wed Nov  5 18:26:25 2008
    Raid Level : raid5
    Array Size : 1953134208 (1862.65 GiB 2000.01 GB)
 Used Dev Size : 976567104 (931.33 GiB 1000.00 GB)
  Raid Devices : 3
 Total Devices : 3
Preferred Minor : 127
   Persistence : Superblock is persistent

   Update Time : Tue Sep 13 03:28:51 2011
         State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
 Spare Devices : 0

        Layout : left-symmetric
    Chunk Size : 64K

          UUID : bebfd467:cb6700d9:29bdc0db:c30228ba
        Events : 0.671154

   Number   Major   Minor   RaidDevice State
      0       8       17        0      active sync   /dev/sdb1
      1       8       49        1      active sync   /dev/sdd1
      2       8       33        2      active sync   /dev/sdc1

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md127 : active raid5 sdb1[0] sdc1[2] sdd1[1]
     1953134208 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

[root@localhost ~]# mdadm --examine /dev/sdb1
/dev/sdb1:
         Magic : a92b4efc
       Version : 0.90.00
          UUID : bebfd467:cb6700d9:29bdc0db:c30228ba
 Creation Time : Wed Nov  5 18:26:25 2008
    Raid Level : raid5
 Used Dev Size : 976567104 (931.33 GiB 1000.00 GB)
    Array Size : 1953134208 (1862.65 GiB 2000.01 GB)
  Raid Devices : 3
 Total Devices : 3
Preferred Minor : 127

   Update Time : Tue Sep 13 03:29:50 2011
         State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
 Spare Devices : 0
      Checksum : f1ddf826 - correct
        Events : 671154

        Layout : left-symmetric
    Chunk Size : 64K

     Number   Major   Minor   RaidDevice State
this     0       8       17        0      active sync   /dev/sdb1

  0     0       8       17        0      active sync   /dev/sdb1
  1     1       8       49        1      active sync   /dev/sdd1
  2     2       8       33        2      active sync   /dev/sdc1

[root@localhost ~]# fdisk -lu 2>&1
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1 doesn't contain a valid partition table
Disk /dev/md127 doesn't contain a valid partition table
Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/sda: 250.0 GB, 250000000000 bytes
255 heads, 63 sectors/track, 30394 cylinders, total 488281250 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000080

 Device Boot      Start         End      Blocks   Id  System
/dev/sda1              63      610469      305203+  83  Linux
/dev/sda2          610470   359004554   179197042+  83  Linux
/dev/sda3   *   359004555   359414154      204800   83  Linux
/dev/sda4       359422245   488279609    64428682+   5  Extended
/dev/sda5       359422308   488278371    64428032   8e  Linux LVM

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0xb03e1980

 Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63  1953134504   976567221   da  Non-FS data

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x7db522d5

 Device Boot      Start         End      Blocks   Id  System
/dev/sdc1              63  1953134504   976567221   da  Non-FS data

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x20af5840

 Device Boot      Start         End      Blocks   Id  System
/dev/sdd1              63  1953134504   976567221   da  Non-FS data

Disk /dev/dm-0: 58.7 GB, 58707673088 bytes
255 heads, 63 sectors/track, 7137 cylinders, total 114663424 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000


Disk /dev/dm-1: 7264 MB, 7264534528 bytes
255 heads, 63 sectors/track, 883 cylinders, total 14188544 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000


Disk /dev/md127: 2000.0 GB, 2000009428992 bytes
2 heads, 4 sectors/track, 488283552 cylinders, total 3906268416 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000


Disk /dev/dm-2: 2000.0 GB, 2000007725056 bytes
255 heads, 63 sectors/track, 243153 cylinders, total 3906265088 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000

The kernel boot parameters I have:

kernel /vmlinuz-2.6.30.10-105.2.23.fc11.x86_64 ro root=/dev/mapper/vg_localhost-lv_root rhgb quiet

On Fedora 15

I installed Fedora 15 on a new boot drive; the installer also created an LVM for me (/dev/vg_20110912a/), but again that is irrelevant.

Under Fedora 15, pvscan and vgscan see nothing but the irrelevant boot drive's LVM. mdadm, however, shows something very strange: the original RAID has been split up into separate arrays that are combined in a very puzzling way.

[root@localhost ~] # cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all

[root@localhost ~]# pvscan
 PV /dev/sda2   VG vg_20110912a   lvm2 [59.12 GiB / 0  free]
 Total: 1 [59.12 GiB] / in use: 1 [59.12 GiB] / in no VG: 0 [0   ]

[root@localhost ~]# lvscan
 ACTIVE            '/dev/vg_20110912a/lv_home' [24.06 GiB] inherit
 ACTIVE            '/dev/vg_20110912a/lv_swap' [6.84 GiB] inherit
 ACTIVE            '/dev/vg_20110912a/lv_root' [28.22 GiB] inherit

[root@localhost ~]# vgdisplay
 --- Volume group ---
 VG Name               vg_20110912a
 System ID          
 Format                lvm2
 Metadata Areas        1
 Metadata Sequence No  4
 VG Access             read/write
 VG Status             resizable
 MAX LV                0
 Cur LV                3
 Open LV               3
 Max PV                0
 Cur PV                1
 Act PV                1
 VG Size               59.12 GiB
 PE Size               32.00 MiB
 Total PE              1892
 Alloc PE / Size       1892 / 59.12 GiB
 Free  PE / Size       0 / 0   
 VG UUID               8VRJyx-XSQp-13mK-NbO6-iV24-rE87-IKuhHH

[root@localhost ~]# vgscan
 Reading all physical volumes.  This may take a while...
 Found volume group "vg_20110912a" using metadata type lvm2

[root@localhost ~]# mdadm --detail --scan
ARRAY /dev/md/0_0 metadata=0.90 UUID=153e151b:8c717565:fd59f149:d2ea02c9
ARRAY /dev/md/127_0 metadata=0.90 UUID=bebfd467:cb6700d9:29bdc0db:c30228ba

[root@localhost ~]# ls -l /dev/md
total 4
lrwxrwxrwx. 1 root root   8 Sep 13 02:39 0_0 -> ../md127
lrwxrwxrwx. 1 root root  10 Sep 13 02:39 0_0p1 -> ../md127p1
lrwxrwxrwx. 1 root root   8 Sep 13 02:39 127_0 -> ../md126
-rw-------. 1 root root 120 Sep 13 02:39 md-device-map

[root@localhost ~]# cat /dev/md/md-device-map
md126 0.90 bebfd467:cb6700d9:29bdc0db:c30228ba /dev/md/127_0
md127 0.90 153e151b:8c717565:fd59f149:d2ea02c9 /dev/md/0_0

[root@localhost ~]# mdadm --detail /dev/md/0_0
/dev/md/0_0:
       Version : 0.90
 Creation Time : Tue Nov  4 21:45:19 2008
   Raid Level : raid5
   Array Size : 976762496 (931.51 GiB 1000.20 GB)
 Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 127
   Persistence : Superblock is persistent

   Update Time : Wed Nov  5 09:04:28 2008
       State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0

       Layout : left-symmetric
   Chunk Size : 64K

       UUID : 153e151b:8c717565:fd59f149:d2ea02c9
       Events : 0.2202

   Number   Major   Minor   RaidDevice State
   0       8       48      0   active sync   /dev/sdd
   1       8       16      1   active sync   /dev/sdb

[root@localhost ~]# mdadm --detail /dev/md/127_0
/dev/md/127_0:
       Version : 0.90
 Creation Time : Wed Nov  5 18:26:25 2008
   Raid Level : raid5
   Array Size : 1953134208 (1862.65 GiB 2000.01 GB)
 Used Dev Size : 976567104 (931.33 GiB 1000.00 GB)
  Raid Devices : 3
 Total Devices : 2
Preferred Minor : 126
   Persistence : Superblock is persistent

   Update Time : Tue Sep 13 00:39:51 2011
       State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0

       Layout : left-symmetric
   Chunk Size : 64K

       UUID : bebfd467:cb6700d9:29bdc0db:c30228ba
       Events : 0.671154

   Number   Major   Minor   RaidDevice State
   0   259     0       0   active sync   /dev/md/0_0p1
   1       0       0       1   removed
   2       8       33      2   active sync   /dev/sdc1

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md126 : active (auto-read-only) raid5 md127p1[0] sdc1[2]
   1953134208 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]

md127 : active (auto-read-only) raid5 sdb[1] sdd[0]
   976762496 blocks level 5, 64k chunk, algorithm 2 [2/2] [UU]

unused devices: <none>

[root@localhost ~]# mdadm --examine /dev/sdb1
/dev/sdb1:
       Magic : a92b4efc
       Version : 0.90.00
       UUID : bebfd467:cb6700d9:29bdc0db:c30228ba
 Creation Time : Wed Nov  5 18:26:25 2008
   Raid Level : raid5
 Used Dev Size : 976567104 (931.33 GiB 1000.00 GB)
   Array Size : 1953134208 (1862.65 GiB 2000.01 GB)
  Raid Devices : 3
 Total Devices : 3
Preferred Minor : 127

   Update Time : Tue Sep 13 00:39:51 2011
       State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
 Spare Devices : 0
   Checksum : f1ddd04f - correct
       Events : 671154

       Layout : left-symmetric
   Chunk Size : 64K

   Number   Major   Minor   RaidDevice State
this    0       8       17      0   active sync   /dev/sdb1

  0    0       8       17      0   active sync   /dev/sdb1
  1    1       8       49      1   active sync   /dev/sdd1
  2    2       8       33      2   active sync   /dev/sdc1

[root@localhost ~]# fdisk -lu 2>&1
Disk /dev/mapper/vg_20110912a-lv_swap doesn't contain a valid partition table
Disk /dev/mapper/vg_20110912a-lv_root doesn't contain a valid partition table
Disk /dev/md127 doesn't contain a valid partition table
Disk /dev/mapper/vg_20110912a-lv_home doesn't contain a valid partition table

Disk /dev/sda: 64.0 GB, 64023257088 bytes
255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001aa2f

 Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048   125044735    62009344   8e  Linux LVM

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb03e1980

 Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63  1953134504   976567221   da  Non-FS data

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x7db522d5

 Device Boot      Start         End      Blocks   Id  System
/dev/sdc1              63  1953134504   976567221   da  Non-FS data

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x20af5840

 Device Boot      Start         End      Blocks   Id  System
/dev/sdd1              63  1953134504   976567221   da  Non-FS data

Disk /dev/mapper/vg_20110912a-lv_swap: 7348 MB, 7348420608 bytes
255 heads, 63 sectors/track, 893 cylinders, total 14352384 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_20110912a-lv_root: 30.3 GB, 30299652096 bytes
255 heads, 63 sectors/track, 3683 cylinders, total 59179008 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/md127: 2000.0 GB, 2000009428992 bytes
2 heads, 4 sectors/track, 488283552 cylinders, total 3906268416 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 131072 bytes
Disk identifier: 0x00000000


Disk /dev/md126: 1000.2 GB, 1000204795904 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953524992 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disk identifier: 0x20af5840

    Device Boot      Start         End      Blocks   Id  System
/dev/md126p1              63  1953134504   976567221   da  Non-FS data
Partition 1 does not start on physical sector boundary.

Disk /dev/mapper/vg_20110912a-lv_home: 25.8 GB, 25836912640 bytes
255 heads, 63 sectors/track, 3141 cylinders, total 50462720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

My kernel boot parameters:

kernel /vmlinuz-2.6.40.4-5.fc15.x86_64 ro root=/dev/mapper/vg_20110912a-lv_root rd_LVM_LV=vg_20110912a/lv_root rd_LVM_LV=vg_20110912a/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet rdblacklist=nouveau nouveau.modeset=0 nodmraid

Finally, mdadm --examine /dev/sdb1 shows exactly the same result as it does under Fedora 11, but I don't understand why mdadm --detail /dev/md/0_0 shows only /dev/sdb and /dev/sdd, while mdadm --detail /dev/md/127_0 shows /dev/sdc1 and /dev/md/0_0p1.

Since mdadm --examine /dev/sdb1 shows the correct result, Fedora 15 can apparently reach the RAID somehow, but I don't know how to proceed. Should I create/assemble a new RAID /dev/md2 and hope that the LVM volume group I created on it magically shows up?
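
A rough sketch of what that assemble attempt might look like, using the device names from the output above. Unlike --create, --assemble only reads the existing superblocks, so it should not overwrite any data; the two arrays Fedora 15 auto-assembled would have to be stopped first:

mdadm --stop /dev/md126     # stop the degraded three-disk array first (it uses md127p1 as a member)
mdadm --stop /dev/md127     # then stop the stray two-disk array built from the whole sdb and sdd
mdadm --assemble /dev/md2 /dev/sdb1 /dev/sdc1 /dev/sdd1    # assemble from the partition superblocks only
pvscan                      # LVM should now see the lvm-tb-storage PV again
vgchange -ay lvm-tb-storage # activate the volume group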

Thanks in advance.

It looks like you have some old, stale RAID superblocks lying around. The array you are actually using has three disks and a UUID of bebfd467:cb6700d9:29bdc0db:c30228ba, and was created on November 5, 2008. Fedora 15 is recognizing another RAID array with only two disks, created the day before, that uses the whole disks rather than the first partitions. Fedora 15 appears to have brought up that old array and then tried to use it as one of the components of the correct array, and this has made a mess.
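
One quick way to see the two different superblocks side by side is to compare what mdadm reports for the whole disk versus the first partition (both commands are read-only):

mdadm --examine /dev/sdb     # whole-disk superblock: should show the two-disk array (UUID 153e151b:8c717565:fd59f149:d2ea02c9, created Nov 4 2008)
mdadm --examine /dev/sdb1    # partition superblock: the real three-disk array (UUID bebfd467:cb6700d9:29bdc0db:c30228ba, created Nov 5 2008)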

I think you need to clear out the old bogus superblocks:

mdadm --zero-superblock /dev/sdb /dev/sdd

You do have current backups, right? ;)
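
For what it's worth, a minimal sketch of how the whole cleanup might be sequenced, assuming the diagnosis above is correct. A 0.90 superblock lives near the end of its device, and on these disks the whole-disk copy falls in the unpartitioned space beyond the end of /dev/sdb1 and /dev/sdd1, so zeroing it should leave the partition data alone; still, this is exactly the kind of operation the backup question is about:

mdadm --stop /dev/md126                      # the degraded array comes down first (md127p1 is one of its members)
mdadm --stop /dev/md127                      # then the stray whole-disk array
mdadm --examine /dev/sdb /dev/sdd            # double-check that these still carry the old two-disk superblock
mdadm --zero-superblock /dev/sdb /dev/sdd    # clear only the whole-disk superblocks
mdadm --assemble /dev/md127 /dev/sdb1 /dev/sdc1 /dev/sdd1    # re-assemble from the partition superblocks
pvscan                                       # the lvm-tb-storage PV should reappear
vgchange -ay lvm-tb-storage                  # re-activate the volume group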

Quoted from: https://unix.stackexchange.com/questions/20609