RAID1 read-only after upgrade to Ubuntu 17.10
I'm stumped. I had a smoothly running RAID1 setup on 16.10. After upgrading to 17.10, it automagically detected the array and re-created md0. All my files are fine, but when I mount md0, it reports that the array is read-only:
cat /proc/mdstat
Personalities : [raid1]
md0 : active (read-only) raid1 dm-0[0] dm-1[1]
      5860390464 blocks super 1.2 [2/2] [UU]
      bitmap: 0/44 pages [0KB], 65536KB chunk

unused devices: <none>

sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Jul  9 23:54:40 2016
     Raid Level : raid1
     Array Size : 5860390464 (5588.90 GiB 6001.04 GB)
  Used Dev Size : 5860390464 (5588.90 GiB 6001.04 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Nov  4 23:16:18 2017
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : x6:0  (local to host x6)
           UUID : baaccfeb:860781dd:eda253ba:6a08916f
         Events : 11596

    Number   Major   Minor   RaidDevice State
       0     253        0        0      active sync   /dev/dm-0
       1     253        1        1      active sync   /dev/dm-1
There are no errors in /var/log/kern.log or dmesg.
I can stop it and re-assemble it, to no effect:
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --scan
I don't understand why it worked fine before but the array is now read-only, with nothing I can detect to explain it. This is the same array that was automatically re-assembled when I upgraded from 16.04 to 16.10.
Researching this problem, I found an article about an issue with /sys being mounted read-only, and mine indeed is:
ls -ld /sys
dr-xr-xr-x 13 root root 0 Nov  5 22:28 /sys
But neither of these fixes it, since /sys stays read-only:
sudo mount -o remount,rw /sys
sudo mount -o remount,rw -t sysfs sysfs /sys
ls -ld /sys
dr-xr-xr-x 13 root root 0 Nov  5 22:29 /sys
Can anyone offer some insight into what I'm missing?
Edited to include /etc/mdadm/mdadm.conf:
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0  metadata=1.2 UUID=baaccfeb:860781dd:eda253ba:6a08916f name=x6:0

# This configuration was auto-generated on Sun, 05 Nov 2017 15:37:16 -0800 by mkconf
The device mapper files, which appear to be writable:
ls -l /dev/dm-*
brw-rw---- 1 root disk 253, 0 Nov  5 16:28 /dev/dm-0
brw-rw---- 1 root disk 253, 1 Nov  5 16:28 /dev/dm-1
And something else that Ubuntu or Debian has changed; I have no idea what these osprober files are doing here. I thought they were only used at install time:
ls -l /dev/mapper/
total 0
crw------- 1 root root 10, 236 Nov  5 15:34 control
lrwxrwxrwx 1 root root       7 Nov  5 16:28 osprober-linux-sdb1 -> ../dm-0
lrwxrwxrwx 1 root root       7 Nov  5 16:28 osprober-linux-sdc1 -> ../dm-1
parted information:
sudo parted -l
Model: ATA SanDisk Ultra II (scsi)
Disk /dev/sda: 960GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  81.9GB  81.9GB  ext4
 2      81.9GB  131GB   49.2GB  linux-swap(v1)
 3      131GB   131GB   99.6MB  fat32                 boot, esp
 4      131GB   960GB   829GB   ext4


Model: ATA WDC WD60EZRZ-00R (scsi)
Disk /dev/sdb: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  6001GB  6001GB                     raid


Model: ATA WDC WD60EZRZ-00R (scsi)
Disk /dev/sdc: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  6001GB  6001GB                     raid


Error: /dev/mapper/osprober-linux-sdc1: unrecognised disk label
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/osprober-linux-sdc1: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:

Error: /dev/mapper/osprober-linux-sdb1: unrecognised disk label
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/osprober-linux-sdb1: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:

Model: Linux Software RAID Array (md)
Disk /dev/md0: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  6001GB  6001GB  ext4
Device mapper information:
$ sudo dmsetup table
osprober-linux-sdc1: 0 11721043087 linear 8:33 0
osprober-linux-sdb1: 0 11721043087 linear 8:17 0

$ sudo dmsetup info
Name:              osprober-linux-sdc1
State:             ACTIVE (READ-ONLY)
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 1
Number of targets: 1

Name:              osprober-linux-sdb1
State:             ACTIVE (READ-ONLY)
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 0
Number of targets: 1
strace output from attempting to set the array to rw (with some context):
openat(AT_FDCWD, "/dev/md0", O_RDONLY)  = 3
fstat(3, {st_mode=S_IFBLK|0660, st_rdev=makedev(9, 0), ...}) = 0
ioctl(3, RAID_VERSION, 0x7fffb3813574)  = 0
fstat(3, {st_mode=S_IFBLK|0660, st_rdev=makedev(9, 0), ...}) = 0
ioctl(3, RAID_VERSION, 0x7fffb38134c4)  = 0
ioctl(3, RAID_VERSION, 0x7fffb38114bc)  = 0
fstat(3, {st_mode=S_IFBLK|0660, st_rdev=makedev(9, 0), ...}) = 0
readlink("/sys/dev/block/9:0", "../../devices/virtual/block/md0", 199) = 31
openat(AT_FDCWD, "/sys/block/md0/md/metadata_version", O_RDONLY) = 4
read(4, "1.2\n", 4096)                  = 4
close(4)                                = 0
openat(AT_FDCWD, "/sys/block/md0/md/level", O_RDONLY) = 4
read(4, "raid1\n", 4096)                = 6
close(4)                                = 0
ioctl(3, GET_ARRAY_INFO, 0x7fffb3813580) = 0
ioctl(3, RESTART_ARRAY_RW, 0)           = -1 EROFS (Read-only file system)
write(2, "mdadm: failed to set writable fo"..., 66mdadm: failed to set writable for /dev/md0: Read-only file system
) = 66
This doesn't explain why your array ended up in read-only mode, but
mdadm --readwrite /dev/md0
should return it to normal. In your case it doesn't, and the reason isn't entirely obvious: if the component devices are themselves read-only, the RAID array is read-only (which matches both the behaviour you're seeing and the code path taken when you try to re-enable read-write).
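You can check that flag on the component devices directly: blockdev --getro prints 1 when the kernel treats a block device as read-only, and sysfs exposes the same bit (dm-0 and dm-1 are the component devices shown in your mdadm --detail output):

sudo blockdev --getro /dev/dm-0 /dev/dm-1   # 1 means the kernel marks the device read-only
cat /sys/block/dm-0/ro /sys/block/dm-1/ro   # same flag via sysfs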
The dmsetup table information strongly hints at what is going on: osprober (I imagine, given the devices' names) is finding the real RAID components, and for some reason it is creating device-mapper devices on top of them, and those are what get picked up and used for the RAID device. Since the only device-mapper devices are the two osprober ones, the simplest solution is to stop the RAID device, stop the DM devices, and then re-scan the RAID arrays so that the underlying component devices are used; the full sequence is sketched below. To stop the DM devices, run dmsetup remove_all as root.
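Put together, the recovery could look roughly like this; it is a sketch that assumes nothing else is holding the array or the osprober mappings open, and the final --readwrite step is only needed if the re-assembled array still comes up read-only:

sudo umount /dev/md0             # only if the array is currently mounted
sudo mdadm --stop /dev/md0       # release the osprober DM devices
sudo dmsetup remove_all          # remove the osprober-linux-sdb1/sdc1 mappings
sudo mdadm --assemble --scan     # re-assemble from the real partitions (sdb1/sdc1)
cat /proc/mdstat                 # md0 should now show "active" rather than "active (read-only)"
sudo mdadm --readwrite /dev/md0  # only if it is still read-only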