How do I set up a bond0 interface with 2 eth devices in CentOS 7 on VirtualBox?
I want to set up a CentOS 7.x VM in VirtualBox so that I can experiment with bonded interfaces. How do I set this VM up so that it has the following interfaces:

- eth1 (private network - 192.168.56.101)
- eth2 (slave of bond0)
- eth3 (slave of bond0)
- bond0 (using LACP)

It would be helpful to use Vagrant to facilitate the setup, so that it's easy to reproduce.

**NOTE:** I want to do the setup by hand, so please show an example with NetworkManager disabled.
Setting up Vagrant
To start with, you can use the following Vagrantfile to build your VM:

```
$ cat Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.hostname = "box-101"
  config.ssh.forward_x11 = true
  config.vm.network "private_network", ip: "192.168.56.101"
  config.vm.network "public_network", bridge: "en0: Wi-Fi (Wireless)", auto_config: false
  config.vm.network "public_network", bridge: "en0: Wi-Fi (Wireless)", auto_config: false
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
  end
  config.vm.provision "shell", inline: <<-SHELL
    yum install -y git vim socat tcpdump wget sysstat
    yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
  SHELL
end
```
**NOTE:** The NIC I'm using for the `public_network` entries, `bridge: "en0: Wi-Fi (Wireless)"`, is the one on my Macbook. If you're running this on anything else, you'll need to change it to an appropriate NIC on the host system where you're running Vagrant/VirtualBox (there's a tip below on how to list the candidates). The file above defines 3 NICs, which get created when the VM starts up. Boot the VM and SSH into it:
```
$ vagrant up
$ vagrant ssh
```
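**NOTE:** If you're not sure what name to pass to `bridge:`, VirtualBox can list the host interfaces it's able to bridge to. A minimal check, assuming `VBoxManage` is on your host's PATH (the output shown is just illustrative of my Macbook):

```
# run this on the host, not inside the VM
$ VBoxManage list bridgedifs | grep ^Name
Name:            en0: Wi-Fi (Wireless)
```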
Initial network setup
If we take a look at the resulting network setup, we'll see the following:
```
$ ip a l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:c0:42:d5 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 85127sec preferred_lft 85127sec
    inet6 fe80::5054:ff:fec0:42d5/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:ce:88:39 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.101/24 brd 192.168.56.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fece:8839/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:d7:c2:ec brd ff:ff:ff:ff:ff:ff
    inet6 fe80::df68:9ee2:4b5:ad5f/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:59:b0:69 brd ff:ff:ff:ff:ff:ff
```
And the corresponding routes:
```
$ ip r
default via 10.0.2.2 dev eth0 proto dhcp metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.101 metric 102
```
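Notice that eth2 and eth3 came up with no IPv4 addresses: Vagrant only auto-configured eth0 and eth1, so only those should have ifcfg files at this point. As a quick sanity check before we create the new ones (the exact listing may differ slightly depending on the box version):

```
$ ls /etc/sysconfig/network-scripts/ifcfg-*
/etc/sysconfig/network-scripts/ifcfg-eth0
/etc/sysconfig/network-scripts/ifcfg-eth1
/etc/sysconfig/network-scripts/ifcfg-lo
```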
Disabling NetworkManager
For this VM, we're going to disable NetworkManager so that we can configure the bond interface and its slaves by hand.
```
$ for i in NetworkManager-dispatcher NetworkManager NetworkManager-wait-online; do
    systemctl disable $i && systemctl stop $i
  done
```
Confirm that NM is now disabled:
```
$ systemctl list-unit-files | grep NetworkManager
NetworkManager-dispatcher.service             disabled
NetworkManager-wait-online.service            disabled
NetworkManager.service                        disabled
```
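With NetworkManager out of the picture, the legacy network service (initscripts) is what will apply our ifcfg files, so it's worth confirming it's enabled. It normally is on the centos/7 box, but enabling it again is harmless:

```
$ systemctl is-enabled network
enabled
$ systemctl enable network    # only needed if the previous command didn't say "enabled"
```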
Setting up the bonding interface
To start, we'll construct 3 files: one for the bond0 interface, and one each for the 2 interfaces we'll be using as slaves (eth2 and eth3).
ifcfg-bond0
```
$ cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
NAME=bond0
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.232
PREFIX=24
GATEWAY=192.168.1.2
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"
```
**NOTE:** `mode=4` is 802.3ad, aka LACP. `miimon=100` is a link-check interval of 100 milliseconds, and `lacp_rate=1` requests the fast TX rate from the LACP partner. You can see all the parameters that the bonding module accepts with the command `modinfo bonding`.
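For example, to pull out just the three parameters we used above (the descriptions shown are from a stock CentOS 7 kernel and may be worded slightly differently on yours):

```
$ modinfo bonding | grep -E '^parm:\s+(mode|miimon|lacp_rate):'
parm:           miimon:Link check interval in milliseconds (int)
parm:           mode:Mode of operation; 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
parm:           lacp_rate:LACPDU tx rate to request from 802.3ad partner; 0 for slow, 1 for fast (charp)
```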
eth2
```
$ cat /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
IPV6INIT=no
MASTER=bond0
SLAVE=yes
```
eth3
```
$ cat /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE=eth3
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
IPV6INIT=no
MASTER=bond0
SLAVE=yes
```
**NOTE:** Above, I statically assigned the bond0 interface the IP address 192.168.1.232 and the gateway 192.168.1.2. You'll need to change these to values appropriate for your situation.
Bringing up the interfaces
At this point, the easiest way to bring the network up is to restart the network service:
```
$ systemctl restart network
```
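If you'd rather not bounce every interface on the box, the per-interface initscripts work too. In my experience `ifup bond0` on its own will usually enslave and raise eth2/eth3 as well, but bringing them up explicitly doesn't hurt:

```
$ ifup eth2
$ ifup eth3
$ ifup bond0
```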
If we take a look at the interfaces and routes now:
```
$ ip a l
..
..
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 08:00:27:d7:c2:ec brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 08:00:27:d7:c2:ec brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 08:00:27:d7:c2:ec brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.232/24 brd 192.168.1.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fed7:c2ec/64 scope link
       valid_lft forever preferred_lft forever

$ ip r
default via 10.0.2.2 dev eth0
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15
169.254.0.0/16 dev eth0 scope link metric 1002
169.254.0.0/16 dev eth1 scope link metric 1003
169.254.0.0/16 dev bond0 scope link metric 1006
192.168.1.0/24 dev bond0 proto kernel scope link src 192.168.1.232
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.101
```
Bonding details
We can also look at the bonding driver's proc file to get further details about the state of the interface:
```
$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 08:00:27:d7:c2:ec
Active Aggregator Info:
        Aggregator ID: 1
        Number of ports: 1
        Actor Key: 9
        Partner Key: 1
        Partner Mac Address: 00:00:00:00:00:00

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:d7:c2:ec
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: churned
Actor Churned Count: 0
Partner Churned Count: 1
details actor lacp pdu:
    system priority: 65535
    system mac address: 08:00:27:d7:c2:ec
    port key: 9
    port priority: 255
    port number: 1
    port state: 207
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00
    oper key: 1
    port priority: 255
    port number: 1
    port state: 3

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:59:b0:69
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
    system priority: 65535
    system mac address: 08:00:27:d7:c2:ec
    port key: 9
    port priority: 255
    port number: 2
    port state: 199
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00
    oper key: 1
    port priority: 255
    port number: 1
    port state: 3
```
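The same state is also exposed under sysfs as one-value-per-file attributes, which is handier for scripting than parsing the proc file above:

```
$ cat /sys/class/net/bond0/bonding/mode
802.3ad 4
$ cat /sys/class/net/bond0/bonding/slaves
eth2 eth3
$ cat /sys/class/net/bond0/bonding/miimon
100
```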
Verifying external connectivity
Below you can see the output from pinging bond0's IP address, which I ran from another box on my network. Once we restart the `network` service, we can see it become reachable:

```
$ ping 192.168.1.232
From 192.168.1.10 icmp_seq=7414 Destination Host Unreachable
From 192.168.1.10 icmp_seq=7415 Destination Host Unreachable
64 bytes from 192.168.1.232: icmp_seq=7416 ttl=64 time=886 ms
64 bytes from 192.168.1.232: icmp_seq=7417 ttl=64 time=3.58 ms
64 bytes from 192.168.1.232: icmp_seq=7418 ttl=64 time=3.52 ms
64 bytes from 192.168.1.232: icmp_seq=7419 ttl=64 time=3.46 ms
64 bytes from 192.168.1.232: icmp_seq=7420 ttl=64 time=3.15 ms
64 bytes from 192.168.1.232: icmp_seq=7421 ttl=64 time=3.50 ms
```
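To convince yourself that the bond actually tolerates losing a slave, you can down one slave inside the VM while that ping is still running; the replies should keep coming. A quick failure drill (output abbreviated, and a smoke test rather than a rigorous LACP failover test):

```
$ ip link set eth2 down
$ grep -A1 'Slave Interface: eth2' /proc/net/bonding/bond0
Slave Interface: eth2
MII Status: down
$ ip link set eth2 up
```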
Tip for reboots
On CentOS 7.x there appears to be a bug/issue with the bond0 interface coming up properly during boot. The workaround for this issue is to add the following to `/etc/rc.d/rc.local`:
$ echo "ifup bond0" >> /etc/rc.d/rc.local $ chmod +x /etc/rc.d/rc.local
This will guarantee that the `bond0` interface gets brought up properly during boot.