Linux

How do I fix intermittent “No space left on device” errors during mv when the device has plenty of space?

  • April 18, 2019
  • Ubuntu 14.04 on the desktop
  • Source drive: /dev/sda1: 5TB ext4 single-drive volume

  • Destination volume: /dev/mapper/archive-lvarchive: raid6 (mdadm) 18TB volume with an lvm partition and ext4

There are roughly 15 million files to move, and some may be duplicates (I don't want to overwrite the duplicates).

The command used (from the source directory) was:

ls -U |xargs -i -t mv -n {} /mnt/archive/targetDir/{}
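Note that ls -U | xargs -i spawns one mv per file and breaks on filenames containing whitespace. A more robust and faster variant (a sketch assuming GNU find, xargs, and mv, which support -print0, -0, and -t) would be:

# Batch many files per mv invocation; NUL-delimited names are safe
# even if a filename contains spaces or newlines.
find . -maxdepth 1 -type f -print0 | xargs -0 mv -n -t /mnt/archive/targetDir/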

As expected, this has been churning along for days, but the errors are coming more and more frequently. When it started, the destination drive was about 70% full; now it is about 90%. It used to be that roughly 1 in 200 moves would error out; now it is about 1 in 5. None of the files are over 100Mb; most are around 100k.

Some info:

$ df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/sdb3                      155G  5.5G  142G   4% /
none                           4.0K     0  4.0K   0% /sys/fs/cgroup
udev                           3.9G  4.0K  3.9G   1% /dev
tmpfs                          797M  2.9M  794M   1% /run
none                           5.0M  4.0K  5.0M   1% /run/lock
none                           3.9G     0  3.9G   0% /run/shm
none                           100M     0  100M   0% /run/user
/dev/sdb1                       19G   78M   18G   1% /boot
/dev/mapper/archive-lvarchive   18T   15T  1.8T  90% /mnt/archive
/dev/sda1                      4.6T  1.1T  3.3T  25% /mnt/tmp

$ df -i
Filesystem                       Inodes    IUsed     IFree IUse% Mounted on
/dev/sdb3                      10297344   222248  10075096    3% /
none                            1019711        4   1019707    1% /sys/fs/cgroup
udev                            1016768      500   1016268    1% /dev
tmpfs                           1019711     1022   1018689    1% /run
none                            1019711        5   1019706    1% /run/lock
none                            1019711        1   1019710    1% /run/shm
none                            1019711        2   1019709    1% /run/user
/dev/sdb1                       4940000      582   4939418    1% /boot
/dev/mapper/archive-lvarchive 289966080 44899541 245066539   16% /mnt/archive
/dev/sda1                     152621056  5391544 147229512    4% /mnt/tmp

Here is my output:

mv -n 747265521.pdf /mnt/archive/targetDir/747265521.pdf 
mv -n 61078318.pdf /mnt/archive/targetDir/61078318.pdf 
mv -n 709099107.pdf /mnt/archive/targetDir/709099107.pdf 
mv -n 75286077.pdf /mnt/archive/targetDir/75286077.pdf 
mv: cannot create regular file ‘/mnt/archive/targetDir/75286077.pdf’: No space left on device
mv -n 796522548.pdf /mnt/archive/targetDir/796522548.pdf 
mv: cannot create regular file ‘/mnt/archive/targetDir/796522548.pdf’: No space left on device
mv -n 685163563.pdf /mnt/archive/targetDir/685163563.pdf 
mv -n 701433025.pdf /mnt/archive/targetDir/701433025.pd

I have found plenty of posts about this error, but none of the diagnoses fit: things like “your drive is actually full”, “you have run out of inodes”, or even “your /boot volume is full”. Mostly, though, they deal with third-party software causing problems because of how it handles files, and the failures are constant, meaning every single move fails.

Thanks.

Edit: Here is a sample file that fails and one that succeeds:

Fails (still on the source drive)

ls -lhs 702637545.pdf
16K -rw-rw-r-- 1 myUser myUser 16K Jul 24 20:52 702637545.pdf

Succeeds (on the destination volume)

ls -lhs /mnt/archive/targetDir/704886680.pdf
104K -rw-rw-r-- 1 myUser myUser 103K Jul 25 01:22 /mnt/archive/targetDir/704886680.pdf

Also, while not all files fail, a file that fails will always fail. If I retry it over and over, it is consistent.
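One quick way to narrow this down (a hypothetical check; the -renamed name is arbitrary) is to copy a failing file under a different name; if that succeeds, the failure is tied to the filename itself rather than the file's contents or the volume's free space:

# If this works while the original name reliably fails, the name is the problem.
cp 702637545.pdf /mnt/archive/targetDir/702637545-renamed.pdf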

Edit: Some additional command output, as requested by @mjturner

$ ls -ld /mnt/archive/targetDir
drwxrwxr-x 2 myUser myUser 1064583168 Aug 10 05:07 /mnt/archive/targetDir

$ tune2fs -l /dev/mapper/archive-lvarchive
tune2fs 1.42.10 (18-May-2014)
Filesystem volume name:   <none>
Last mounted on:          /mnt/archive
Filesystem UUID:          af7e7b38-f12a-498b-b127-0ccd29459376
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr dir_index filetype needs_recovery extent 64bit flex_bg sparse_super huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              289966080
Block count:              4639456256
Reserved block count:     231972812
Free blocks:              1274786115
Free inodes:              256343444
First block:              0
Block size:               4096
Fragment size:            4096
Group descriptor size:    64
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         2048
Inode blocks per group:   128
RAID stride:              128
RAID stripe width:        512
Flex block group size:    16
Filesystem created:       Thu Jun 25 12:05:12 2015
Last mount time:          Mon Aug  3 18:49:29 2015
Last write time:          Mon Aug  3 18:49:29 2015
Mount count:              8
Maximum mount count:      -1
Last checked:             Thu Jun 25 12:05:12 2015
Check interval:           0 (<none>)
Lifetime writes:          24 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:           256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      3ea3edc4-7638-45cd-8db8-36ab3669e868
Journal backup:           inode blocks

$ tune2fs -l /dev/sda1
tune2fs 1.42.10 (18-May-2014)
Filesystem volume name:   <none>
Last mounted on:          /mnt/tmp
Filesystem UUID:          10df1bea-64fc-468e-8ea0-10f3a4cb9a79
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              152621056
Block count:              1220942336
Reserved block count:     61047116
Free blocks:              367343926
Free inodes:              135953194
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      732
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         4096
Inode blocks per group:   256
Flex block group size:    16
Filesystem created:       Thu Jul 23 13:54:13 2015
Last mount time:          Tue Aug  4 04:35:06 2015
Last write time:          Tue Aug  4 04:35:06 2015
Mount count:              3
Maximum mount count:      -1
Last checked:             Thu Jul 23 13:54:13 2015
Check interval:           0 (<none>)
Lifetime writes:          150 MB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:           256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      a266fec5-bc86-402b-9fa0-61e2ad9b5b50
Journal backup:           inode blocks

This is a bug in the implementation of the dir_index ext4 feature that you are using on your destination filesystem.
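For reference, the feature is visible in the tune2fs -l output above; a quick way to check any ext4 volume:

# Prints "dir_index" if the feature is enabled on the volume.
tune2fs -l /dev/mapper/archive-lvarchive | grep -o dir_index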

Solution: recreate the filesystem without dir_index, or disable the feature using tune2fs (some care is required; see the related link Novell SuSE 10/11: Disable H-Tree Indexing on an ext3 Filesystem, which concerns ext3 but may call for similar caution):

(get a really good backup made of the filesystem)
(unmount the filesystem)
tune2fs -O ^dir_index /dev/foo
e2fsck -fDvy /dev/foo
(mount the filesystem)
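Once e2fsck completes, it is worth confirming that the feature is actually gone before remounting (a sketch; /dev/foo is the same placeholder device used in the steps above):

# grep finds nothing once dir_index has been removed from the feature list.
tune2fs -l /dev/foo | grep dir_index || echo "dir_index is off"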

ext4 has a feature called dir_index, enabled by default, which is quite susceptible to hash collisions.

……

ext4 has the ability to hash the filenames of its contents. This enhances performance, but has one “small” problem: the hash table does not grow as ext4 starts to fill up. Instead, it returns -ENOSPC, or “No space left on device”.

Quoted from: https://unix.stackexchange.com/questions/222221
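If rebuilding or re-tuning the filesystem is not an option, a possible workaround (a sketch assuming bash and the flat numeric .pdf names shown above; the two-character bucket scheme is arbitrary) is to fan the files out across subdirectories, so that no single directory's hash tree has to index 15 million entries:

# Bucket each file by the first two characters of its name.
for f in *.pdf; do
  d="/mnt/archive/targetDir/${f:0:2}"
  mkdir -p "$d" && mv -n "$f" "$d/"
done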