Arch-Linux
ZFS pool stuck in a permanent resilver loop, unable to detach/remove devices
My pool:
❯ zpool status
  pool: wdblack
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Oct  5 13:35:14 2020
        2.84T scanned at 586M/s, 852G issued at 172M/s, 9.52T total
        66.5M resilvered, 8.74% done, 0 days 14:44:07 to go
remove: Removal of vdev 0 copied 5.99T in 15h17m, completed on Sat Jun  6 03:15:41 2020
        29.5M memory used for removed device mappings
config:

        NAME                                     STATE     READ WRITE CKSUM
        wdblack                                  ONLINE       0     0     0
          wwn-0x50014ee26390e982                 ONLINE       0     0    82
          wwn-0x5000cca27ec99833-part1           ONLINE       0     0  264K  (resilvering)
          ata-WDC_WD80EZAZ-11TDBA0_JEH52XAN      ONLINE       0     0     0
          ata-GOODRAM_C100_FF180744082600124110  ONLINE       0     0     0
          sdg                                    ONLINE       0     0     0
The -part1 device has been resilvering in a loop for a week now. The resilver restarts itself every time it finishes.
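One way to watch the loop happening in real time is to follow the pool's event stream; this is a minimal sketch, assuming the pool name wdblack from above:

# Follow ZFS kernel events as they arrive and keep only the resilver-related ones;
# a looping pool shows resilver_finish immediately followed by another resilver_start.
sudo zpool events -f wdblack | grep --line-buffered resilver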
ZFS properties:
NAME     PROPERTY              VALUE                  SOURCE
wdblack  type                  filesystem             -
wdblack  creation              Sat Mar  9 10:54 2019  -
wdblack  used                  9.56T                  -
wdblack  available             10.5T                  -
wdblack  referenced            96K                    -
wdblack  compressratio         1.00x                  -
wdblack  mounted               yes                    -
wdblack  quota                 none                   default
wdblack  reservation           none                   default
wdblack  recordsize            128K                   local
wdblack  mountpoint            /home/agilob/disk      local
wdblack  sharenfs              off                    default
wdblack  checksum              on                     default
wdblack  compression           off                    default
wdblack  atime                 off                    local
wdblack  devices               on                     default
wdblack  exec                  on                     default
wdblack  setuid                on                     default
wdblack  readonly              off                    default
wdblack  zoned                 off                    default
wdblack  snapdir               hidden                 default
wdblack  aclinherit            restricted             default
wdblack  createtxg             1                      -
wdblack  canmount              on                     default
wdblack  xattr                 on                     default
wdblack  copies                1                      default
wdblack  version               5                      -
wdblack  utf8only              off                    -
wdblack  normalization         none                   -
wdblack  casesensitivity       sensitive              -
wdblack  vscan                 off                    default
wdblack  nbmand                off                    default
wdblack  sharesmb              off                    default
wdblack  refquota              none                   default
wdblack  refreservation        none                   default
wdblack  guid                  5626685650647801653    -
wdblack  primarycache          all                    default
wdblack  secondarycache        all                    default
wdblack  usedbysnapshots       0B                     -
wdblack  usedbydataset         96K                    -
wdblack  usedbychildren        9.56T                  -
wdblack  usedbyrefreservation  0B                     -
wdblack  logbias               latency                default
wdblack  objsetid              51                     -
wdblack  dedup                 off                    default
wdblack  mlslabel              none                   default
wdblack  sync                  standard               default
wdblack  dnodesize             legacy                 default
wdblack  refcompressratio      1.00x                  -
wdblack  written               96K                    -
wdblack  logicalused           9.56T                  -
wdblack  logicalreferenced     42K                    -
wdblack  volmode               default                default
wdblack  filesystem_limit      none                   default
wdblack  snapshot_limit        none                   default
wdblack  filesystem_count      none                   default
wdblack  snapshot_count        none                   default
wdblack  snapdev               hidden                 default
wdblack  acltype               off                    default
wdblack  context               none                   default
wdblack  fscontext             none                   default
wdblack  defcontext            none                   default
wdblack  rootcontext           none                   default
wdblack  relatime              off                    local
wdblack  redundant_metadata    all                    default
wdblack  overlay               off                    default
wdblack  encryption            off                    default
wdblack  keylocation           none                   default
wdblack  keyformat             none                   default
wdblack  pbkdf2iters           0                      default
wdblack  special_small_blocks  0                      default
I can't detach any of the devices, not even the ones that are not resilvering:
❯ sudo zpool detach wdblack /dev/sdg
cannot detach /dev/sdg: only applicable to mirror and replacing vdevs
❯ sudo zpool detach wdblack wwn-0x5000cca27ec99833-part1
cannot detach wwn-0x5000cca27ec99833-part1: only applicable to mirror and replacing vdevs
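The detach failures are expected on their own: zpool detach only applies to a child of a mirror or of a temporary replacing vdev, and in this pool every disk is its own single-disk top-level vdev. A hedged illustration of where detach would apply (the pool and device names below are hypothetical, not this pool):

# Hypothetical pool with a mirror-0 vdev made of sda and sdb; detaching one side
# works because the remaining side still holds a complete copy of the data.
sudo zpool status somepool
sudo zpool detach somepool sdb
# In wdblack there is no mirror, so there is nothing for detach to split off.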
Nor can I remove them:
❯ sudo zpool remove wdblack wwn-0x5000cca27ec99833-part1
cannot remove wwn-0x5000cca27ec99833-part1: Pool busy; removal may already be in progress
❯ sudo zpool remove wdblack sdg
cannot remove sdg: invalid config; all top-level vdevs must have the same sector size and not be raidz.
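The second error complains about mismatched sector sizes across top-level vdevs. A hedged way to check that (assuming zdb can read the imported pool's cached configuration) is to compare the ashift recorded for each vdev:

# Dump the cached pool configuration and show each vdev's path and ashift
# (sector-size exponent: 9 = 512-byte sectors, 12 = 4K sectors).
sudo zdb -C wdblack | grep -E 'ashift|path'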
If I remember correctly, the resilvering started right after I started a scrub, and it can't be cancelled:
❯ sudo zpool scrub -s wdblack
cannot cancel scrubbing wdblack: currently resilvering
❯ sudo zpool scrub wdblack
cannot scrub wdblack: currently resilvering
zpool status wdblack reports data errors, but the number changes "randomly": it is usually somewhere between 7 (the most common value) and over 3000000, and right now it says errors: 259134 data errors, use '-v' for a list. Anything over 300000 is already more than the total number of files I have in the pool! So it currently reports over 259k errors, but when I run sudo zpool status wdblack -v it only lists 7 files, not 259k. At the time of writing I am running Arch Linux:
Linux 5.8.12-arch1-1 #1 SMP PREEMPT Sat, 26 Sep 2020 21:42:58 +0000 x86_64 GNU/Linux

❯ zfs version
zfs-0.8.4-1
zfs-kmod-0.8.4-1
Edit:
The resilver starts again immediately after it finishes:
Oct  6 2020 18:41:03.798523850 sysevent.fs.zfs.resilver_finish
        version = 0x0
        class = "sysevent.fs.zfs.resilver_finish"
        pool = "wdblack"
        pool_guid = 0x509d876228a22ecc
        pool_state = 0x0
        pool_context = 0x0
        time = 0x5f7cac2f 0x2f9881ca
        eid = 0x27e4e

Oct  6 2020 18:41:03.798523850 sysevent.fs.zfs.history_event
        version = 0x0
        class = "sysevent.fs.zfs.history_event"
        pool = "wdblack"
        pool_guid = 0x509d876228a22ecc
        pool_state = 0x0
        pool_context = 0x0
        history_hostname = "slave"
        history_internal_str = "errors=159477"
        history_internal_name = "starting deferred resilver"
        history_txg = 0x7246f3
        history_time = 0x5f7cac2f
        time = 0x5f7cac2f 0x2f9881ca
        eid = 0x27e4f

Oct  6 2020 18:41:09.005195427 sysevent.fs.zfs.resilver_start
        version = 0x0
        class = "sysevent.fs.zfs.resilver_start"
        pool = "wdblack"
        pool_guid = 0x509d876228a22ecc
        pool_state = 0x0
        pool_context = 0x0
        time = 0x5f7cac35 0x4f46a3
        eid = 0x27e50
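The "starting deferred resilver" history entry points at the resilver_defer pool feature introduced in OpenZFS 0.8. A hedged way to check its state on this pool (feature naming assumes 0.8.x):

# Show the resilver_defer feature state; "active" means a deferred resilver
# is currently queued, "enabled" means no resilver is deferred right now.
sudo zpool get feature@resilver_defer wdblack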
This happened to me once. When one of the resilver loops finished, I scrubbed the pool. The scrub found no errors, and the resilver stopped looping. After that I was able to remove and replace the drive.
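A minimal sketch of that sequence, assuming you catch the pool in the window after a resilver pass has genuinely finished:

# Wait until "resilver in progress" no longer appears, then scrub.
sudo zpool status wdblack
sudo zpool scrub wdblack
# Verify the scrub completed and see which files, if any, are still flagged.
sudo zpool status -v wdblack
# Optionally reset the accumulated per-device error counters afterwards.
sudo zpool clear wdblack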
Update:
Well, well, would you look at that. Three hours after your last comment, they fixed it. It had been in the works since May 4th, and they have now released 0.8.5 with the fix: https://github.com/openzfs/zfs/pull/10291
Unfortunately, you will have to build ZFS yourself to get this fix.
Or find a trusted source that provides a 0.8.5 package.
You can get the ZFS source here: https://github.com/openzfs/zfs/releases/tag/zfs-0.8.5
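A hedged sketch of building from that release follows; the tarball asset name, configure flags, and module handling below are assumptions and vary by distribution (on Arch, a DKMS-style package such as zfs-dkms from the AUR may be the more comfortable route once it picks up 0.8.5):

# Fetch and unpack the 0.8.5 release (assumes the usual zfs-<version>.tar.gz asset name).
curl -LO https://github.com/openzfs/zfs/releases/download/zfs-0.8.5/zfs-0.8.5.tar.gz
tar xf zfs-0.8.5.tar.gz
cd zfs-0.8.5
# Release tarballs ship a pre-generated configure script, so autogen.sh is not needed.
./configure
make -j"$(nproc)"
sudo make install
# Refresh module dependencies and load the rebuilt kernel module.
sudo depmod -a
sudo modprobe zfs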
This is your lucky day! :)