FreeBSD
Replacing a disk when using FreeBSD ZFS zroot (ZFS on a partition)?
How do I replace a broken disk with a new one when using ZFS on root?
I have a four-disk RAIDZ2 pool using zroot, which means ZFS runs on a separate partition on each disk rather than on the whole disk. I could not find any documentation on how to replace a disk in this situation, or the information I found was out of date. The pool was generated automatically by the installer.
camcontrol devlist:
% doas camcontrol devlist -v
scbus0 on mpt0 bus 0:
<>                                 at scbus0 target -1 lun ffffffff ()
scbus1 on ahcich0 bus 0:
<>                                 at scbus1 target -1 lun ffffffff ()
scbus2 on ahcich1 bus 0:
<>                                 at scbus2 target -1 lun ffffffff ()
scbus3 on ahcich2 bus 0:
<ST2000DM001-1CH164 CC43>          at scbus3 target 0 lun 0 (pass0,ada0)
<>                                 at scbus3 target -1 lun ffffffff ()
scbus4 on ahcich3 bus 0:
<ST2000DM001-1CH164 CC43>          at scbus4 target 0 lun 0 (pass1,ada1)
<>                                 at scbus4 target -1 lun ffffffff ()
scbus5 on ahcich4 bus 0:
<ST2000DM001-1CH164 CC43>          at scbus5 target 0 lun 0 (pass2,ada2)
<>                                 at scbus5 target -1 lun ffffffff ()
scbus6 on ahcich5 bus 0:
<SAMSUNG HD204UI 1AQ10001>         at scbus6 target 0 lun 0 (pass3,ada3)
<>                                 at scbus6 target -1 lun ffffffff ()
scbus7 on ahciem0 bus 0:
<AHCI SGPIO Enclosure 1.00 0001>   at scbus7 target 0 lun 0 (pass4,ses0)
<>                                 at scbus7 target -1 lun ffffffff ()
scbus-1 on xpt0 bus 0:
<>                                 at scbus-1 target -1 lun ffffffff (xpt0)
gpart of an existing disk:
% gpart show ada0
=>        40  3907029088  ada0  GPT  (1.8T)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048     4194304     2  freebsd-swap  (2.0G)
     4196352  3902832640     3  freebsd-zfs  (1.8T)
  3907028992         136        - free -  (68K)
zpool status:
% zpool status zroot
  pool: zroot
 state: DEGRADED
status: One or more devices has been removed by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub repaired 28K in 0h41m with 0 errors on Thu Sep 27 17:58:02 2018
config:

        NAME                      STATE     READ WRITE CKSUM
        zroot                     DEGRADED     0     0     0
          raidz2-0                DEGRADED     0     0     0
            ada0p3                ONLINE       0     0     0
            ada1p3                ONLINE       0     0     0
            ada2p3                ONLINE       0     0     0
            15120424524672854601  REMOVED      0     0     0  was /dev/ada3p3

errors: No known data errors
Taking the failed disk offline:
% doas zpool offline zroot 15120424524672854601
I tried copying the first few GiB from ada0 to ada3 with dd, but both zpool attach and zpool replace gave the error: /dev/ada3p3 is part of active pool 'zroot'. Even the force flag did not help. I suspect the disk UUIDs are conflicting. What are the steps to copy/clone the p1-p3 partitions from one of ada0-ada2 to the new disk (ada3) and replace the failed drive? And which commands did the automated installer run to create these partitions in the first place?
First of all: remember to take the broken drive offline, and make sure the new drive is not mounted or in use in any way.
Copy the partition table from the old disk ada0 to the new disk ada3:

% doas gpart backup ada0 | doas gpart restore -F ada3
Now ada3 has the same three partitions as ada0:

% gpart show ada3
=>        40  3907029088  ada3  GPT  (1.8T)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048     4194304     2  freebsd-swap  (2.0G)
     4196352  3902832640     3  freebsd-zfs  (1.8T)
  3907028992         136        - free -  (68K)
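This also answers the question of which commands the installer ran originally: the sketch below approximates what bsdinstall's ZFS mode does per disk. Only the partition types and sizes are confirmed by the gpart output above; the alignment flags are assumptions, and the installer additionally assigns GPT labels that are omitted here.

```shell
# Hedged sketch of the installer's per-disk partitioning (not needed when
# cloning the table with gpart backup/restore as shown above).
doas gpart create -s gpt ada3                   # fresh GPT scheme
doas gpart add -t freebsd-boot -s 512k ada3     # p1: boot code
doas gpart add -t freebsd-swap -s 2g -a 1m ada3 # p2: 2 GiB swap
doas gpart add -t freebsd-zfs -a 1m ada3        # p3: ZFS, rest of disk
doas gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada3
```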
Remove the old ZFS metadata (note the p3 partition):
% doas dd if=/dev/zero of=/dev/ada3p3
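Zeroing the entire 1.8 T partition works but takes hours. ZFS keeps four 256 KiB labels per vdev, two at the front and two at the back of the device, so wiping just those regions, or using zpool labelclear where available, is usually enough. A sketch; the seek value is derived from the 3902832640-sector partition size shown above:

```shell
# Preferred where available: clear the ZFS labels directly.
doas zpool labelclear -f /dev/ada3p3

# Manual alternative: zero the first 512 KiB (front pair of labels) ...
doas dd if=/dev/zero of=/dev/ada3p3 bs=512k count=1
# ... and the last 512 KiB (back pair). The partition is 3902832640
# 512-byte sectors = 3902832640 / 1024 = 3811360 blocks of 512 KiB,
# so the final block is at seek=3811359.
doas dd if=/dev/zero of=/dev/ada3p3 bs=512k seek=3811359 count=1
```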
Replace the drive (note the p3 partition):
% doas zpool replace -f zroot 15120424524672854601 /dev/ada3p3
Make sure to wait until resilver is done before rebooting.

If you boot from pool 'zroot', you may need to update
boot code on newly attached disk '/dev/ada3p3'.

Assuming you use GPT partitioning and 'da0' is your new boot disk
you may use the following command:

        gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
Run the suggested command to update the boot code on the new disk:
% doas gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada3
partcode written to ada3p1
bootcode written to ada3
The UUIDs are now different:
% gpart list ada0 | grep uuid | sort
   rawuuid: 7f842536-bcd0-11e8-b271-00259014958c
   rawuuid: 7fbe27a9-bcd0-11e8-b271-00259014958c
   rawuuid: 7fe24f3e-bcd0-11e8-b271-00259014958c
% gpart list ada3 | grep uuid | sort
   rawuuid: 9c629875-c369-11e8-a2b0-00259014958c
   rawuuid: 9c63d063-c369-11e8-a2b0-00259014958c
   rawuuid: 9c66f76e-c369-11e8-a2b0-00259014958c
% gpart list ada0 | grep efimedia | sort
   efimedia: HD(1,GPT,7f842536-bcd0-11e8-b271-00259014958c,0x28,0x400)
   efimedia: HD(2,GPT,7fbe27a9-bcd0-11e8-b271-00259014958c,0x800,0x400000)
   efimedia: HD(3,GPT,7fe24f3e-bcd0-11e8-b271-00259014958c,0x400800,0xe8a08000)
% gpart list ada3 | grep efimedia | sort
   efimedia: HD(1,GPT,9c629875-c369-11e8-a2b0-00259014958c,0x28,0x400)
   efimedia: HD(2,GPT,9c63d063-c369-11e8-a2b0-00259014958c,0x800,0x400000)
   efimedia: HD(3,GPT,9c66f76e-c369-11e8-a2b0-00259014958c,0x400800,0xe8a08000)
The drive is now resilvering:
% zpool status zroot
  pool: zroot
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Sep 29 01:01:24 2018
        64.7G scanned out of 76.8G at 162M/s, 0h1m to go
        15.7G resilvered, 84.22% done
config:

        NAME                        STATE     READ WRITE CKSUM
        zroot                       DEGRADED     0     0     0
          raidz2-0                  DEGRADED     0     0     0
            ada0p3                  ONLINE       0     0     0
            ada1p3                  ONLINE       0     0     0
            ada2p3                  ONLINE       0     0     0
            replacing-3             OFFLINE      0     0     0
              15120424524672854601  OFFLINE      0     0     0  was /dev/ada3p3/old
              ada3p3                ONLINE       0     0     0
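Since you should not reboot until the resilver finishes, a small poll loop saves re-running zpool status by hand. A minimal sketch, assuming sh:

```shell
# Print the progress lines once a minute until zpool status no longer
# reports a resilver in progress.
while zpool status zroot | grep -q 'resilver in progress'; do
    zpool status zroot | grep -E 'scanned|resilvered'
    sleep 60
done
```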
After the resilver:
% zpool status zroot
  pool: zroot
 state: ONLINE
  scan: resilvered 18.6G in 0h7m with 0 errors on Sat Sep 29 01:09:22 2018
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            ada0p3  ONLINE       0     0     0
            ada1p3  ONLINE       0     0     0
            ada2p3  ONLINE       0     0     0
            ada3p3  ONLINE       0     0     0

errors: No known data errors
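Not part of the steps above, but as a final sanity check it is reasonable to scrub the pool once the resilver has finished and confirm it completes with zero errors:

```shell
doas zpool scrub zroot
# Later, check the result: the scan line should read
# "scrub repaired 0 ... with 0 errors".
zpool status zroot
```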