Debian

The correct way to activate LVM partitions on multipath during boot

  • April 23, 2019

I have a Debian 9 system with iSCSI and multipath successfully configured:

# multipath -ll /dev/mapper/mpathb
mpathb (222c60001556480c6) dm-2 Promise,Vess R2600xi
size=10T features='1 retain_attached_hw_handler' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 12:0:0:0 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
 `- 13:0:0:0 sdd 8:48 active ready running

/dev/mapper/mpathb is part of the LVM volume group vg-one-100:

# pvs
 PV         VG         Fmt  Attr PSize  PFree
 /dev/dm-2  vg-one-100 lvm2 a--  10,00t 3,77t
# vgs
 VG         #PV #LV #SN Attr   VSize  VFree
 vg-one-100   1  17   0 wz--n- 10,00t 3,77t

The vg-one-100 group contains several volumes:

# lvs
 LV          VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
 lv-one-0-1  vg-one-100 -wi-a----- 20,00g                                                    
 lv-one-1-0  vg-one-100 -wi-a-----  2,41g                                                    
 lv-one-10-0 vg-one-100 -wi------- 20,00g                                                    
 lv-one-11-0 vg-one-100 -wi------- 30,00g                                                    
 lv-one-12-0 vg-one-100 -wi-------  2,41g                                                    
 lv-one-13-0 vg-one-100 -wi-------  2,41g                                                    
 lv-one-14-0 vg-one-100 -wi-------  2,41g                                                    
 lv-one-15-0 vg-one-100 -wi-------  2,41g                                                    
 lv-one-16-0 vg-one-100 -wi-------  2,41g                                                    
 lv-one-17-0 vg-one-100 -wi------- 30,00g                                                    
 lv-one-18-0 vg-one-100 -wi------- 30,00g                                                    
 lv-one-23-0 vg-one-100 -wi------- 20,00g                                                    
 lv-one-31-0 vg-one-100 -wi------- 20,00g                                                    
 lv-one-8-0  vg-one-100 -wi------- 30,00g                                                    
 lv-one-9-0  vg-one-100 -wi------- 20,00g                                                    
 lvm_images  vg-one-100 -wi-a-----  5,00t                                                    
 lvm_system  vg-one-100 -wi-a-----  1,00t          

My lvm.conf contains the following filters:

# grep filter /etc/lvm/lvm.conf | grep -vE '^.*#'
   filter = ["a|/dev/dm-*|", "r|.*|"]
   global_filter = ["a|/dev/dm-*|", "r|.*|"]

lvmetad is disabled:

# grep use_lvmetad /etc/lvm/lvm.conf | grep -vE '^.*#'
   use_lvmetad = 0

If lvmetad is disabled, lvm2-activation-generator is used instead.

In my case lvm2-activation-generator generates all the required unit files, and they are executed during boot:

# ls -1 /var/run/systemd/generator/lvm2-activation*
/var/run/systemd/generator/lvm2-activation-early.service
/var/run/systemd/generator/lvm2-activation-net.service
/var/run/systemd/generator/lvm2-activation.service

# systemctl status lvm2-activation-early.service
● lvm2-activation-early.service - Activation of LVM2 logical volumes
  Loaded: loaded (/etc/lvm/lvm.conf; generated; vendor preset: enabled)
  Active: inactive (dead) since Thu 2019-03-28 17:20:48 MSK; 3 weeks 4 days ago
    Docs: man:lvm2-activation-generator(8)
Main PID: 897 (code=exited, status=0/SUCCESS)

systemd[1]: Starting Activation of LVM2 logical volumes...
systemd[1]: Started Activation of LVM2 logical volumes.
root@virt1:~# systemctl status lvm2-activation-net.service
● lvm2-activation-net.service - Activation of LVM2 logical volumes
  Loaded: loaded (/etc/lvm/lvm.conf; generated; vendor preset: enabled)
  Active: inactive (dead) since Thu 2019-03-28 17:21:24 MSK; 3 weeks 4 days ago
    Docs: man:lvm2-activation-generator(8)
Main PID: 1537 (code=exited, status=0/SUCCESS)

systemd[1]: Starting Activation of LVM2 logical volumes...
lvm[1537]:   4 logical volume(s) in volume group "vg-one-100" now active
systemd[1]: Started Activation of LVM2 logical volumes.
root@virt1:~# systemctl status lvm2-activation.service
● lvm2-activation.service - Activation of LVM2 logical volumes
  Loaded: loaded (/etc/lvm/lvm.conf; generated; vendor preset: enabled)
  Active: inactive (dead) since Thu 2019-03-28 17:20:48 MSK; 3 weeks 4 days ago
    Docs: man:lvm2-activation-generator(8)
Main PID: 900 (code=exited, status=0/SUCCESS)

systemd[1]: Starting Activation of LVM2 logical volumes...
systemd[1]: Started Activation of LVM2 logical volumes.

Here is the problem: I cannot get all LVM volumes activated automatically during boot, because lvm2-activation-net.service activates the volumes right after the iSCSI volumes are connected (logged in), i.e. on the plain iSCSI devices rather than on the multipath device (journalctl fragment):

. . .
kernel: sd 11:0:0:0: [sdc] 21474836480 512-byte logical blocks: (11.0 TB/10.0 TiB)
kernel: sd 10:0:0:0: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
kernel: sd 11:0:0:0: [sdc] Write Protect is off
kernel: sd 11:0:0:0: [sdc] Mode Sense: 97 00 10 08
kernel: sd 11:0:0:0: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA
kernel: sd 10:0:0:0: [sdb] Attached SCSI disk
kernel: sd 11:0:0:0: [sdc] Attached SCSI disk
iscsiadm[1765]: Logging in to [iface: default, target: iqn.2012-07.com.promise:alias.tgt0000.2000000155588d75, portal: 172.16.0.151,3260] (multiple)
iscsiadm[1765]: Logging in to [iface: default, target: iqn.2012-07.com.promise:alias.tgt0000.2000000155588d75, portal: 172.16.1.151,3260] (multiple)
iscsiadm[1765]: Login to [iface: default, target: iqn.2012-07.com.promise:alias.tgt0000.2000000155588d75, portal: 172.16.0.151,3260] successful.
iscsiadm[1765]: Login to [iface: default, target: iqn.2012-07.com.promise:alias.tgt0000.2000000155588d75, portal: 172.16.1.151,3260] successful.
systemd[1]: Started Login to default iSCSI targets.
systemd[1]: Starting Activation of LVM2 logical volumes...
systemd[1]: Starting Activation of LVM2 logical volumes...
multipathd[884]: sdb: add path (uevent)
systemd[1]: Started Activation of LVM2 logical volumes.
systemd[1]: Started Activation of LVM2 logical volumes.
systemd[1]: Reached target Remote File Systems (Pre).
systemd[1]: Mounting /var/lib/one/datastores/101...
systemd[1]: Mounting /var/lib/one/datastores/100...
multipathd[884]: mpathb: load table [0 21474836480 multipath 1 retain_attached_hw_handler 0 1 1 service-time 0 1 1 8:16 1]
multipathd[884]: mpathb: event checker started
multipathd[884]: sdb [8:16]: path added to devmap mpathb
multipathd[884]: sdc: add path (uevent)
multipathd[884]: mpathb: load table [0 21474836480 multipath 1 retain_attached_hw_handler 0 2 1 service-time 0 1 1 8:16 1 service-time 0 1 1 8:32 1]
. . .

The ordering dependencies of lvm2-activation-net.service look correct:

# grep After /var/run/systemd/generator/lvm2-activation-net.service 
After=lvm2-activation.service iscsi.service fcoe.service

How do I correctly activate all logical volumes during boot?

Since you seem to have only one physical volume, I really wonder how a partial activation can happen in your case; it should be all or nothing. But in any case, here are several issues to deal with:

  • You need persistent multipath device names. I am not sure where mpathb comes from, but for the sake of clarity I would recommend not enabling user_friendly_names in /etc/multipath.conf; either configure aliases there manually or use the WWIDs provided by the storage.
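
For illustration only, a minimal /etc/multipath.conf sketch along those lines, using the WWID from the question (the alias name vess_r2600 is an invented example):

defaults {
        user_friendly_names no
}

multipaths {
        multipath {
                # WWID as reported by multipath -ll in the question
                wwid  222c60001556480c6
                # optional fixed name; omit this line to use the WWID itself
                alias vess_r2600
        }
}

Note that with an alias the map appears as /dev/mapper/vess_r2600, so the LVM filter and the device unit names further down would then have to use that name instead of the WWID; the change takes effect once multipathd reloads its configuration or on the next boot.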
  • The LVM filters are regular expressions, not shell globs, so you need to change the syntax to something like
filter = ["a|^/dev/mapper/222c60001556480c6$|", "r|.|"]

(The global_filter is optional for correct operation, but it may affect startup time.)

  • You have to delay the activation until the multipath devices for all physical volumes have appeared. One possibility is to add
[Unit]
Requires=dev-mapper-222c60001556480c6.device
After=dev-mapper-222c60001556480c6.device

to /etc/systemd/system/lvm2-activation-net.service.d/wait_for_storage.conf. Another option is to create a dedicated activation service.
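
The answer does not spell that dedicated service out; a rough sketch, assuming that activating just vg-one-100 with vgchange is sufficient and using a made-up unit name, could look like this:

# /etc/systemd/system/activate-vg-one-100.service  (hypothetical name)
[Unit]
Description=Activate LVM2 volumes on the multipath device
Requires=dev-mapper-222c60001556480c6.device
After=dev-mapper-222c60001556480c6.device
Before=remote-fs-pre.target

[Service]
Type=oneshot
RemainAfterExit=yes
# activate only the volume group that lives on the multipath PV
ExecStart=/sbin/vgchange -ay vg-one-100

[Install]
WantedBy=multi-user.target

It would then be enabled with systemctl daemon-reload followed by systemctl enable activate-vg-one-100.service.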

  • iSCSI storage devices (and their multipath devices) can take a long time to appear. You may want to create /etc/systemd/system/dev-mapper-222c60001556480c6.device containing
[Unit]
JobTimeoutSec=3min

so that systemd does not time out while waiting for it. If you have several such devices, use symlinks pointing to a common file.
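
Whichever variant is used, systemd has to re-read the unit files, and the effective timeout can be checked afterwards; for example (assuming the device unit name above):

systemctl daemon-reload
systemctl show -p JobTimeoutUSec dev-mapper-222c60001556480c6.device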

Even if the above does not solve your problem right away, it should make debugging more tractable. Good luck!

Source: https://unix.stackexchange.com/questions/513969