Linux
Can I increase the capacity of my RAID 1 by converting it to RAID 5?
My original RAID setup was a 2x2TB RAID 1 using mdadm.
I bought a third 2TB drive, hoping to use mdadm to upgrade the array's total capacity to 4TB.
I have run the following two commands, but I don't see any change in capacity:
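As a quick sanity check on the 4TB figure: RAID 5 reserves one drive's worth of space for parity, so the usable capacity is (number of devices - 1) times the drive size. A minimal sketch of that arithmetic:

```shell
# RAID 5 usable capacity = (devices - 1) * device size;
# with three 2 TB drives that is (3 - 1) * 2 = 4 TB.
disks=3
size_tb=2
echo "usable: $(( (disks - 1) * size_tb )) TB"   # prints: usable: 4 TB
```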
sudo mdadm --grow /dev/md0 --level=5
sudo mdadm --grow /dev/md0 --add /dev/sdd --raid-devices=3
Details from mdadm:
$ sudo mdadm --detail /dev/md0
[sudo] password for userd:
/dev/md0:
           Version : 1.2
     Creation Time : Wed Jul 5 19:59:17 2017
        Raid Level : raid5
        Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
     Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed May 22 17:58:37 2019
             State : clean, reshaping
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : bitmap

    Reshape Status : 5% complete
     Delta Devices : 1, (2->3)

              Name : userd:0 (local to host userd)
              UUID : 986fca95:68ef5344:5136f8af:b8d34a03
            Events : 13557

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
       2       8       48        2      active sync   /dev/sdd
Update: the reshape has now finished, but only 2TB of the 4TB is usable.
/dev/md0:
           Version : 1.2
     Creation Time : Wed Jul 5 19:59:17 2017
        Raid Level : raid5
        Array Size : 3906766976 (3725.78 GiB 4000.53 GB)
     Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu May 23 23:40:16 2019
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : bitmap

              Name : userd:0 (local to host userd)
              UUID : 986fca95:68ef5344:5136f8af:b8d34a03
            Events : 17502

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
       2       8       48        2      active sync   /dev/sdd
How do I get mdadm to use all 4TB instead of only 2TB?
The answer, via GParted, was to run a file-system check so the extra space could be used.
To fix this, I had to:
- Unmount the file system.
- Open GParted.
- Select the RAID device (/dev/md0 in my case).
- Run a check (Partition -> Check).
This successfully resized the file system on md0 to use all of the available space.
The exact operation output from GParted is below:
GParted 0.33.0 --enable-libparted-dmraid --enable-online-resize
Libparted 3.2
Check and repair file system (ext4) on /dev/md0  00:03:51    ( SUCCESS )
    calibrate /dev/md0  00:00:00    ( SUCCESS )
        path: /dev/md0 (device)
        start: 0
        end: 7813533951
        size: 7813533952 (3.64 TiB)
    check file system on /dev/md0 for errors and (if possible) fix them  00:02:43    ( SUCCESS )
        e2fsck -f -y -v -C 0 '/dev/md0'  00:02:43    ( SUCCESS )
            Pass 1: Checking inodes, blocks, and sizes
            Inode 30829505 extent tree (at level 1) could be shorter. Optimize? yes
            Inode 84025620 extent tree (at level 1) could be narrower. Optimize? yes
            Inode 84806354 extent tree (at level 2) could be narrower. Optimize? yes
            Pass 1E: Optimizing extent trees
            Pass 2: Checking directory structure
            Pass 3: Checking directory connectivity
            /lost+found not found. Create? yes
            Pass 4: Checking reference counts
            Pass 5: Checking group summary information
            StorageArray0: ***** FILE SYSTEM WAS MODIFIED *****
            5007693 inodes used (4.10%, out of 122093568)
            23336 non-contiguous files (0.5%)
            2766 non-contiguous directories (0.1%)
            # of inodes with ind/dind/tind blocks: 0/0/0
            Extent depth histogram: 4942467/2090/2
            458492986 blocks used (93.89%, out of 488345872)
            0 bad blocks
            52 large files
            4328842 regular files
            612231 directories
            0 character device files
            0 block device files
            3 fifos
            1396 links
            66562 symbolic links (63077 fast symbolic links)
            45 sockets
            ------------
            5009079 files
            e2fsck 1.45.1 (12-May-2019)
    grow file system to fill the partition  00:01:08    ( SUCCESS )
        resize2fs -p '/dev/md0'  00:01:08    ( SUCCESS )
            Resizing the filesystem on /dev/md0 to 976691744 (4k) blocks.
            The filesystem on /dev/md0 is now 976691744 (4k) blocks long.
            resize2fs 1.45.1 (12-May-2019)
========================================
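The log shows that GParted simply ran e2fsck followed by resize2fs, so the same fix works from a terminal. Here is a minimal sketch of the check-then-grow sequence on a throwaway file-backed image (a hypothetical demo.img, so no real array or root access is needed); on the real array you would unmount it and target /dev/md0 instead:

```shell
# Simulate "the device grew, but the filesystem didn't" on a file image.
truncate -s 64M demo.img        # create a small backing file
mkfs.ext4 -F -q demo.img        # make an ext4 filesystem on it
truncate -s 128M demo.img       # grow the "device", like the RAID reshape did
e2fsck -f -y demo.img           # check the filesystem first, as GParted does
resize2fs demo.img              # grow the filesystem to fill the new space
```

On the real array the equivalent is `sudo e2fsck -f /dev/md0` followed by `sudo resize2fs /dev/md0`, run while /dev/md0 is unmounted.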
Look at the reshape status:
   Update Time : Wed May 22 17:58:37 2019
         State : clean, reshaping
...
Reshape Status : 5% complete
 Delta Devices : 1, (2->3)
You won't get any extra storage space until it finishes, and the report you posted shows it is currently only 5% complete.
While this reshape is in progress, do not interrupt it or attempt another shape change.
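A reshape on 2TB drives can take many hours, but its progress can be watched safely with read-only commands. A small sketch, assuming the array is /dev/md0 (the guard on /proc/mdstat is only there so the line also behaves on machines without the md driver loaded):

```shell
# Read-only status checks; neither command disturbs a running reshape.
# /proc/mdstat exists only when the md driver is loaded, so guard the read:
[ -r /proc/mdstat ] && cat /proc/mdstat || echo "md driver not loaded"
# For a specific array, mdadm reports the same information, e.g.:
# sudo mdadm --detail /dev/md0 | grep -i reshape
```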