RAID array becomes read-only
I created a virtual machine running on KVM, purely for testing and learning purposes. During installation I configured RAID 1 arrays with 3 disks for root and 3 disks for boot. After some playing around and testing, I decided to write zeros to one of the drives and see what would happen:
dd if=/dev/zero of=/dev/vdc2
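(For context, the equivalent of what the installer set up could be created by hand roughly like this; a sketch using the device names from my VM, not the installer's exact commands:)

mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/vda1 /dev/vdb1 /dev/vdc1  # /boot mirror
mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/vda2 /dev/vdb2 /dev/vdc2  # root mirror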
After that the system went read-only, but there were no errors at all from mdadm.
dmesg:
[ 2177.091939] RAID1 conf printout:
[ 2177.091947]  --- wd:2 rd:3
[ 2177.091954]  disk 0, wo:0, o:1, dev:vda2
[ 2177.091956]  disk 1, wo:0, o:1, dev:vdb2
[ 2177.091958]  disk 2, wo:1, o:1, dev:vdc2
[ 2177.095315] md: recovery of RAID array md1
[ 2177.095321] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[ 2177.095323] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 2177.095330] md: using 128k window, over a total of 9792512k.
[ 2217.132610] RAID1 conf printout:
[ 2217.132616]  --- wd:2 rd:3
[ 2217.132622]  disk 0, wo:0, o:1, dev:vda1
[ 2217.132625]  disk 1, wo:0, o:1, dev:vdb1
[ 2217.132626]  disk 2, wo:1, o:1, dev:vdc1
[ 2217.135129] md: delaying recovery of md0 until md1 has finished (they share one or more physical units)
[ 2225.567664] md: md1: recovery done.
[ 2225.572072] md: recovery of RAID array md0
[ 2225.572081] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[ 2225.572083] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 2225.572087] md: using 128k window, over a total of 682432k.
[ 2225.574833] RAID1 conf printout:
[ 2225.574836]  --- wd:3 rd:3
[ 2225.574904]  disk 0, wo:0, o:1, dev:vda2
[ 2225.574906]  disk 1, wo:0, o:1, dev:vdb2
[ 2225.574908]  disk 2, wo:0, o:1, dev:vdc2
[ 2229.036805] md: md0: recovery done.
[ 2229.042732] RAID1 conf printout:
[ 2229.042736]  --- wd:3 rd:3
[ 2229.042740]  disk 0, wo:0, o:1, dev:vda1
[ 2229.042742]  disk 1, wo:0, o:1, dev:vdb1
[ 2229.042744]  disk 2, wo:0, o:1, dev:vdc1
[ 5241.129626] md/raid1:md1: Disk failure on vdc2, disabling device.
md/raid1:md1: Operation continuing on 2 devices.
[ 5241.131639] RAID1 conf printout:
[ 5241.131642]  --- wd:2 rd:3
[ 5241.131645]  disk 0, wo:0, o:1, dev:vda2
[ 5241.131647]  disk 1, wo:0, o:1, dev:vdb2
[ 5241.131648]  disk 2, wo:1, o:0, dev:vdc2
[ 5241.131655] RAID1 conf printout:
[ 5241.131656]  --- wd:2 rd:3
[ 5241.131658]  disk 0, wo:0, o:1, dev:vda2
[ 5241.131684]  disk 1, wo:0, o:1, dev:vdb2
[ 5326.850032] md: unbind<vdc2>
[ 5326.850050] md: export_rdev(vdc2)
[ 5395.301755] md: export_rdev(vdc2)
[ 5395.312985] md: bind<vdc2>
[ 5395.315022] RAID1 conf printout:
[ 5395.315024]  --- wd:2 rd:3
[ 5395.315027]  disk 0, wo:0, o:1, dev:vda2
[ 5395.315029]  disk 1, wo:0, o:1, dev:vdb2
[ 5395.315031]  disk 2, wo:1, o:1, dev:vdc2
[ 5395.318161] md: recovery of RAID array md1
[ 5395.318168] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[ 5395.318170] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 5395.318174] md: using 128k window, over a total of 9792512k.
[ 5443.707445] md: md1: recovery done.
[ 5443.712678] RAID1 conf printout:
[ 5443.712682]  --- wd:3 rd:3
[ 5443.712686]  disk 0, wo:0, o:1, dev:vda2
[ 5443.712688]  disk 1, wo:0, o:1, dev:vdb2
[ 5443.712689]  disk 2, wo:0, o:1, dev:vdc2
[ 8017.777012] EXT4-fs error (device md1): ext4_lookup:1584: inode #36: comm systemd-sysv-ge: deleted inode referenced: 135
[ 8017.782244] Aborting journal on device md1-8.
[ 8017.785487] EXT4-fs (md1): Remounting filesystem read-only
[ 8017.876415] EXT4-fs error (device md1): ext4_lookup:1584: inode #36: comm systemd: deleted inode referenced: 137
cat /proc/mdstat:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 vdb2[1] vda2[0]
      9792512 blocks super 1.2 [3/2] [UU_]

md0 : active raid1 vdc1[2] vdb1[1] vda1[0]
      682432 blocks super 1.2 [3/3] [UUU]

unused devices: <none>
I tried to remount the root filesystem read-write, without success:
mount -o remount /
Segmentation fault (core dumped)
Then:
fsck -Af
fsck from util-linux 2.27.1
Segmentation fault (core dumped)
I was hoping to get it resynced without removing the vdc2 drive, but I was wrong. So the damaged drive was failed and removed:
mdadm --manage /dev/md1 --fail /dev/vdc2
mdadm --manage /dev/md1 --remove /dev/vdc2
and I tried to delete and recreate the partition with fdisk or cfdisk, but I ran into the same error: Segmentation fault (core dumped).
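(For reference, cycling the wiped member back into the array would normally look something like the following sketch, using the device names from above; --zero-superblock discards the stale md metadata first:)

mdadm --zero-superblock /dev/vdc2          # wipe the old member metadata
mdadm --manage /dev/md1 --add /dev/vdc2    # re-add the partition and let it resync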
Here is the status of md1 and of the removed drive, as reported by mdadm:
mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Mon Nov 7 21:22:29 2016
     Raid Level : raid1
     Array Size : 9792512 (9.34 GiB 10.03 GB)
  Used Dev Size : 9792512 (9.34 GiB 10.03 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Nov 8 02:38:26 2016
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : ubuntu-raid:1 (local to host ubuntu-raid)
           UUID : c846618f:d77238fe:95edac3d:dd19e295
         Events : 108

    Number   Major   Minor   RaidDevice State
       0     253        2        0      active sync   /dev/vda2
       1     253       18       1      active sync   /dev/vdb2
       4       0        0        4      removed
mdadm -E /dev/vdc2
/dev/vdc2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c846618f:d77238fe:95edac3d:dd19e295
           Name : ubuntu-raid:1 (local to host ubuntu-raid)
  Creation Time : Mon Nov 7 21:22:29 2016
     Raid Level : raid1
   Raid Devices : 3

 Avail Dev Size : 19585024 (9.34 GiB 10.03 GB)
     Array Size : 9792512 (9.34 GiB 10.03 GB)
    Data Offset : 16384 sectors
   Super Offset : 8 sectors
   Unused Space : before=16296 sectors, after=0 sectors
          State : clean
    Device UUID : 25a823f7:a301598a:91f9c66b:cc27d311

    Update Time : Tue Nov 8 02:20:34 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : d6d7fc77 - correct
         Events : 101

    Device Role : Active device 2
    Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
OS: Ubuntu 16.04.1
Kernel: 4.4.0-31-generic
mdadm version: v3.3
So I have two questions. First, why did this happen: what is the root cause of the array no longer being mountable read-write? Second, how can I prevent this from happening in the future? Of course this is a test environment, but I'm looking for a way to fix it without rebooting or anything like that.
The md system relies on the component drives of a RAID array to either return good data or to return no data. In a real-world failure situation that is a reasonable assumption: disks carry error-correcting information, and it is extremely unlikely for a bad sector to corrupt itself in a way that goes undetected. By writing zeros to the disk, you bypassed this protection. The md system thinks the data is still good, so it passes the corrupted data up to the filesystem layer, which reacts badly. Since you are using RAID 1, md balances reads across all the drives to improve performance; the segmentation faults you hit in mount and fsck happened because their reads were served from the bad drive.
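(One way to surface this kind of silent divergence before the filesystem trips over it is an explicit consistency check through md's sysfs interface; a sketch, with md1 being the array from the question:)

echo check > /sys/block/md1/md/sync_action    # compare the mirror legs block by block
cat /sys/block/md1/md/mismatch_cnt            # non-zero means the copies disagree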
To recover, remove the failed disk from the system entirely (since you are using a VM, do this with the VM's management tools; if this were a physical system, you would unplug the drive). That forces the md system to realize that the drive has failed and to stop reading from it; you can then perform whatever filesystem-level recovery is needed, as sketched below. If you want to play this kind of game with your disks, format them with ZFS or Btrfs instead: those filesystems do not make the "good data or no data" assumption, and they use checksums to catch bad data coming off the disk (a second sketch below shows the Btrfs variant).
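A minimal sketch of that recovery, assuming the guest is managed with libvirt and the domain is named ubuntu-raid (both assumptions; adjust to your setup):

virsh detach-disk ubuntu-raid vdc    # hot-unplug the zeroed disk from the guest
fsck -f /dev/md1                     # repair the filesystem; reads now come only from the good mirrors
mount -o remount,rw /                # put the root filesystem back read-write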
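And a hedged Btrfs equivalent of such a mirror, assuming two spare test disks vdb and vdc; here a scrub detects (and, given redundancy, repairs) checksum errors instead of passing bad data up:

mkfs.btrfs -m raid1 -d raid1 /dev/vdb /dev/vdc    # mirror both metadata and data
mount /dev/vdb /mnt
btrfs scrub start -B /mnt                         # verify checksums on every copy, in the foreground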