Linux

Why is a RAID 5 array's capacity not equal to the sum of its disks?

  • September 7, 2018

I am trying to create a RAID 5 array from four disks:

Disk /dev/sdc: 8001.6 GB, 8001563222016 bytes
/dev/sdc1            2048  4294967294  2147482623+  fd  Linux raid autodetect
Disk /dev/sdb: 8001.6 GB, 8001563222016 bytes
/dev/sdb1            2048  4294967294  2147482623+  fd  Linux raid autodetect
Disk /dev/sdd: 24003.1 GB, 24003062267904 bytes
/dev/sdd1            2048  4294967294  2147482623+  fd  Linux raid autodetect
Disk /dev/sde: 8001.6 GB, 8001563222016 bytes
/dev/sde1            2048  4294967294  2147482623+  fd  Linux raid autodetect

However, after creating it I only ended up with 6T of space (roughly the size of one of my disks):

/dev/md0         ext4   6.0T  184M  5.7T   1% /mnt/raid5

Here is some additional information about how I created it.

Output of mdadm -E /dev/sd[b-e]1:

/dev/sdb1:
         Magic : a92b4efc
       Version : 1.2
   Feature Map : 0x0
    Array UUID : 8953b4f1:61212c46:b0a63144:25eb4a7d
          Name : node7:0  (local to host node7)
 Creation Time : Fri Sep  7 09:16:42 2018
    Raid Level : raid5
  Raid Devices : 4

Avail Dev Size : 4294703103 (2047.87 GiB 2198.89 GB)
    Array Size : 6442053120 (6143.62 GiB 6596.66 GB)
 Used Dev Size : 4294702080 (2047.87 GiB 2198.89 GB)
   Data Offset : 262144 sectors
  Super Offset : 8 sectors
         State : clean
   Device UUID : 2fcb3346:9ed69eab:64c6f851:0bcc39c4

   Update Time : Fri Sep  7 13:17:38 2018
      Checksum : c701ff7e - correct
        Events : 18

        Layout : left-symmetric
    Chunk Size : 512K

  Device Role : Active device 0
  Array State : AAAA ('A' == active, '.' == missing)
/dev/sdc1:
         Magic : a92b4efc
       Version : 1.2
   Feature Map : 0x0
    Array UUID : 8953b4f1:61212c46:b0a63144:25eb4a7d
          Name : node7:0  (local to host node7)
 Creation Time : Fri Sep  7 09:16:42 2018
    Raid Level : raid5
  Raid Devices : 4

Avail Dev Size : 4294703103 (2047.87 GiB 2198.89 GB)
    Array Size : 6442053120 (6143.62 GiB 6596.66 GB)
 Used Dev Size : 4294702080 (2047.87 GiB 2198.89 GB)
   Data Offset : 262144 sectors
  Super Offset : 8 sectors
         State : clean
   Device UUID : 6f13c9f0:de2d4c6f:cbac6b87:67bc483e

   Update Time : Fri Sep  7 13:17:38 2018
      Checksum : e4c675c2 - correct
        Events : 18

        Layout : left-symmetric
    Chunk Size : 512K

  Device Role : Active device 1
  Array State : AAAA ('A' == active, '.' == missing)
/dev/sdd1:
         Magic : a92b4efc
       Version : 1.2
   Feature Map : 0x0
    Array UUID : 8953b4f1:61212c46:b0a63144:25eb4a7d
          Name : node7:0  (local to host node7)
 Creation Time : Fri Sep  7 09:16:42 2018
    Raid Level : raid5
  Raid Devices : 4

Avail Dev Size : 4294703103 (2047.87 GiB 2198.89 GB)
    Array Size : 6442053120 (6143.62 GiB 6596.66 GB)
 Used Dev Size : 4294702080 (2047.87 GiB 2198.89 GB)
   Data Offset : 262144 sectors
  Super Offset : 8 sectors
         State : clean
   Device UUID : 4dab38e6:94c5052b:06d6b6b0:34a41472

   Update Time : Fri Sep  7 13:17:38 2018
      Checksum : f306b65f - correct
        Events : 18

        Layout : left-symmetric
    Chunk Size : 512K

  Device Role : Active device 2
  Array State : AAAA ('A' == active, '.' == missing)
/dev/sde1:
         Magic : a92b4efc
       Version : 1.2
   Feature Map : 0x0
    Array UUID : 8953b4f1:61212c46:b0a63144:25eb4a7d
          Name : node7:0  (local to host node7)
 Creation Time : Fri Sep  7 09:16:42 2018
    Raid Level : raid5
  Raid Devices : 4

Avail Dev Size : 4294703103 (2047.87 GiB 2198.89 GB)
    Array Size : 6442053120 (6143.62 GiB 6596.66 GB)
 Used Dev Size : 4294702080 (2047.87 GiB 2198.89 GB)
   Data Offset : 262144 sectors
  Super Offset : 8 sectors
         State : clean
   Device UUID : b04d152e:0448fe56:3b22a2d6:b2504d26

   Update Time : Fri Sep  7 13:17:38 2018
      Checksum : 40ffd3e7 - correct
        Events : 18

        Layout : left-symmetric
    Chunk Size : 512K

  Device Role : Active device 3
  Array State : AAAA ('A' == active, '.' == missing)

Output of mdadm --detail /dev/md0:

/dev/md0:
       Version : 1.2
 Creation Time : Fri Sep  7 09:16:42 2018
    Raid Level : raid5
    Array Size : 6442053120 (6143.62 GiB 6596.66 GB)
 Used Dev Size : 2147351040 (2047.87 GiB 2198.89 GB)
  Raid Devices : 4
 Total Devices : 4
   Persistence : Superblock is persistent

   Update Time : Fri Sep  7 13:17:38 2018
         State : clean 
Active Devices : 4
Working Devices : 4
Failed Devices : 0
 Spare Devices : 0

        Layout : left-symmetric
    Chunk Size : 512K

          Name : node7:0  (local to host node7)
          UUID : 8953b4f1:61212c46:b0a63144:25eb4a7d
        Events : 18

   Number   Major   Minor   RaidDevice State
      0       8       17        0      active sync   /dev/sdb1
      1       8       33        1      active sync   /dev/sdc1
      2       8       49        2      active sync   /dev/sdd1
      4       8       65        3      active sync   /dev/sde1

Output of mkfs.ext4 /dev/md0:

mke2fs 1.41.9 (22-Aug-2009)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
402628608 inodes, 1610513280 blocks
80525664 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
49149 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
   32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
   4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
   102400000, 214990848, 512000000, 550731776, 644972544

Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: 
done

This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Then mkdir /mnt/raid5 and mount /dev/md0 /mnt/raid5/.

6 TB would be (4 - 1) * 2 TB, where 4 is the number of devices you have, the minus 1 accounts for parity, and 2 TB is the size of the partitions you appear to have.
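The same number can be reproduced from the mdadm -E output above. A minimal check, using bash arithmetic and the figures reported there:

# Each member contributes Used Dev Size = 4294702080 sectors of 512 bytes,
# and RAID 5 stores data on (n - 1) of the n = 4 devices.
echo $(( 4294702080 * 512 * (4 - 1) ))
# 6596662394880 bytes = 6596.66 GB = 6143.62 GiB, i.e. the ~6.0T that df reports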

Assuming the first output is from the fdisk utility, the fields are probably:

partition name       start        end       length  type
/dev/sdc1            2048  4294967294  2147482623+  fd  Linux raid autodetect

In units of 512-byte sectors, the partition runs for 2 TB from start to end. (The + at the end of the length field seems to hint that the actual length is greater than what is shown, so I ignored that field.) My fdisk utility also shows partition sizes in human-readable units, but 2 TB is the limit of what an old-style MBR partition table can provide, so check that you are not using one instead of GPT.
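As a quick way to check which partition table type is in use, a small sketch (assuming parted is installed; substitute your own device names):

parted -s /dev/sdb print | grep "Partition Table"
# "msdos" means an old-style MBR table (with the 2 TB partition limit); "gpt" means GPT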

Some older versions of fdisk may not know about GPT partition tables, so you may need to use another tool (or get a newer version). You actually do not even need to use partitions at all; you can give mdadm the whole disks, /dev/sd[bcde]. But note that because of the RAID-5 layout, the smallest drive (or partition) sets the size of the array, so a single larger disk is partially wasted.
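If you want to go that route, a rough sketch might look like the following. This destroys the existing array and any data on it, and the device names and parameters are only assumptions taken from the output above, so double-check everything before running it:

umount /mnt/raid5                        # unmount the filesystem
mdadm --stop /dev/md0                    # stop the existing array
mdadm --zero-superblock /dev/sd[bcde]1   # wipe the old RAID metadata from the partitions
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0                       # new filesystem on the rebuilt array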

Source: https://unix.stackexchange.com/questions/467480