Filesystems

ext3 root filesystem becomes read-only after journal abort, even after repair

  • October 8, 2014

Short version: the ext3 root filesystem on a Rackspace (Xen) VM detects an aborted journal at boot and mounts read-only. I have tried to repair this from a rescue environment with tune2fs and e2fsck, as prescribed in the many articles I have read, but the error keeps recurring.

Update: So, per this article, I added "barrier=0" to the entry for that filesystem in /etc/fstab, and on the next boot it mounted r/w just fine. I am led to believe this is a paravirtualization thing, but I would love it if someone fully understands what is going on here and can explain it.
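For illustration, the changed line might look like the sketch below; the UUID is taken from the tune2fs -l output later in this post, while the remaining fields are assumptions based on a stock Ubuntu root entry:

# /etc/fstab -- root filesystem entry with write barriers disabled
UUID=68910771-4026-4588-a62a-54eb992f4c6e  /  ext3  defaults,barrier=0  0  1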

Long version:

The Rackspace VM was just upgraded from Ubuntu 11.10 to 12.04.2.

Errors in the dmesg output:

[   14.701446] blkfront: barrier: empty write xvda op failed
[   14.701452] blkfront: xvda: barrier or flush: disabled
[   14.701460] end_request: I/O error, dev xvda, sector 28175816
[   14.701473] end_request: I/O error, dev xvda, sector 28175816
[   14.701487] Aborting journal on device xvda1.
[   14.704186] EXT3-fs (xvda1): error: ext3_journal_start_sb: Detected aborted journal
[   14.704199] EXT3-fs (xvda1): error: remounting filesystem read-only
[   14.940734] init: dmesg main process (763) terminated with status 7
[   18.425994] init: mongodb main process (769) terminated with status 1
[   21.940032] eth1: no IPv6 routers present
[   23.612044] eth0: no IPv6 routers present
[   27.147759] [UFW BLOCK] IN=eth0 OUT= MAC=40:40:73:00:ea:12:c4:71:fe:f1:e1:3f:08:00 SRC=98.143.36.192 DST=50.56.240.11 LEN=40 TOS=0x00 PREC=0x00 TTL=242 ID=37934 DF PROTO=TCP SPT=30269 DPT=8123 WINDOW=512 RES=0x00 SYN URGP=0 
[   31.025920] [UFW BLOCK] IN=eth0 OUT= MAC=40:40:73:00:ea:12:c4:71:fe:f1:e1:3f:08:00 SRC=116.6.60.9 DST=50.56.240.11 LEN=40 TOS=0x00 PREC=0x00 TTL=101 ID=256 PROTO=TCP SPT=6000 DPT=1433 WINDOW=16384 RES=0x00 SYN URGP=0 
[  493.974612] EXT3-fs (xvda1): error: ext3_remount: Abort forced by user
[  505.887555] EXT3-fs (xvda1): error: ext3_remount: Abort forced by user

From the rescue OS, I tried:

tune2fs -O ^has_journal /dev/xvdb1  # remove the journal (device is xvdb1 in rescue, but xvda1 in the real OS)
e2fsck -f /dev/xvdb1                # force a full filesystem check
tune2fs -j /dev/xvdb1               # recreate the journal
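One way to confirm the journal was actually recreated (a sanity check, not a step from the original post) is to re-read the feature list:

tune2fs -l /dev/xvdb1 | grep -i features  # "Filesystem features:" should list has_journal again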

I have also run e2fsck -p, e2fsck -f, and tune2fs -e continue, followed by tune2fs -l.
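Spelled out against the rescue-mode device node (assuming /dev/xvdb1 as above), those invocations were:

e2fsck -p /dev/xvdb1            # "preen" mode: automatically fix only safe problems
e2fsck -f /dev/xvdb1            # force a full check even if the filesystem is marked clean
tune2fs -e continue /dev/xvdb1  # set errors behavior: continue rather than remount read-only
tune2fs -l /dev/xvdb1           # print the superblock; its output: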

tune2fs 1.41.14 (22-Dec-2010)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          68910771-4026-4588-a62a-54eb992f4c6e
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype sparse_super large_file
Filesystem flags:         signed_directory_hash 
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              1245184
Block count:              4980480
Reserved block count:     199219
Free blocks:              2550830
Free inodes:              1025001
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      606
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Filesystem created:       Thu Oct 20 21:34:53 2011
Last mount time:          Mon Apr  8 23:01:13 2013
Last write time:          Mon Apr  8 23:08:09 2013
Mount count:              0
Maximum mount count:      29
Last checked:             Mon Apr  8 23:04:49 2013
Check interval:           15552000 (6 months)
Next check after:         Sat Oct  5 23:04:49 2013
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      1e07317a-6301-41d9-8885-0e3e837f2a38
Journal backup:           inode blocks

While in rescue mode, I also grepped some lines with additional error information out of /var/log/syslog:

Apr  8 19:47:06 dev kernel: [26504959.895754] blkfront: barrier: empty write xvda op failed
Apr  8 19:47:06 dev kernel: [26504959.895763] blkfront: xvda: barrier or flush: disabled
Apr  8 20:19:33 dev kernel: [    0.000000] Command line: root=/dev/xvda1 console=hvc0 ro quiet splash 
Apr  8 20:19:33 dev kernel: [    0.000000] Kernel command line: root=/dev/xvda1 console=hvc0 ro quiet splash 
Apr  8 20:19:33 dev kernel: [    0.240303] blkfront: xvda: barrier: enabled
Apr  8 20:19:33 dev kernel: [    0.249960]  xvda: xvda1
Apr  8 20:19:33 dev kernel: [    0.250356] xvda: detected capacity change from 0 to 20401094656
Apr  8 20:19:33 dev kernel: [    5.684101] EXT3-fs (xvda1): mounted filesystem with ordered data mode
Apr  8 20:19:33 dev kernel: [  140.547468] blkfront: barrier: empty write xvda op failed
Apr  8 20:19:33 dev kernel: [  140.547477] blkfront: xvda: barrier or flush: disabled
Apr  8 20:19:33 dev kernel: [  140.709985] EXT3-fs (xvda1): using internal journal
Apr  8 21:18:12 dev kernel: [    0.000000] Command line: root=/dev/xvda1 console=hvc0 ro quiet splash 
Apr  8 21:18:12 dev kernel: [    0.000000] Kernel command line: root=/dev/xvda1 console=hvc0 ro quiet splash 
Apr  8 21:18:12 dev kernel: [    1.439023] blkfront: xvda: barrier: enabled
Apr  8 21:18:12 dev kernel: [    1.454307]  xvda: xvda1
Apr  8 21:18:12 dev kernel: [    6.799014] EXT3-fs (xvda1): recovery required on readonly filesystem
Apr  8 21:18:12 dev kernel: [    6.799020] EXT3-fs (xvda1): write access will be enabled during recovery
Apr  8 21:18:12 dev kernel: [    6.839498] blkfront: barrier: empty write xvda op failed
Apr  8 21:18:12 dev kernel: [    6.839505] blkfront: xvda: barrier or flush: disabled
Apr  8 21:18:12 dev kernel: [    6.854814] EXT3-fs (xvda1): warning: ext3_clear_journal_err: Filesystem error recorded from previous mount: IO failure
Apr  8 21:18:12 dev kernel: [    6.854820] EXT3-fs (xvda1): warning: ext3_clear_journal_err: Marking fs in need of filesystem check.
Apr  8 21:18:12 dev kernel: [    6.855247] EXT3-fs (xvda1): recovery complete
Apr  8 21:18:12 dev kernel: [    6.855902] EXT3-fs (xvda1): mounted filesystem with ordered data mode
Apr  8 21:18:12 dev kernel: [  143.505890] EXT3-fs (xvda1): using internal journal

At this point, I think this is most likely an instance of Debian Bug 637234. Since this is a cloud VM, the hypervisor's kernel is not under my control. The workaround is to use barrier=0 in /etc/fstab for the root filesystem. The long-term fix is to rebuild the box as a next-generation Rackspace Cloud instance instead of a first-generation Xen-based one.

The "barrier=0" in /etc/fstab probably comes too late (and only takes effect once the filesystem is remounted read-write at a later stage of boot).

"barrier=off" as an additional kernel parameter should take effect earlier and work better.

Give it a try. If your DomU is booted by pygrub from the Dom0 (which is the "usual" way), you can put it into the grub kernel configuration line inside the DomU.
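A minimal sketch of such an entry, assuming a classic /boot/grub/menu.lst read by pygrub. The kernel version and file names are hypothetical, and rootflags=barrier=0 is used here instead of a bare barrier=off, since rootflags= is the documented way to hand a mount option to the root filesystem on the kernel command line:

# /boot/grub/menu.lst inside the DomU (hypothetical entry)
title  Ubuntu 12.04.2 LTS
root   (hd0,0)
kernel /boot/vmlinuz-3.2.0-40-virtual root=/dev/xvda1 console=hvc0 ro quiet splash rootflags=barrier=0
initrd /boot/initrd.img-3.2.0-40-virtual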

Quoted from: https://unix.stackexchange.com/questions/71758