Overview: Building a Kolla-Ansible environment that installs OVS-DPDK on top of OVN
1. OpenStack configuration
- OS : ubuntu 22.04
- OpenStack Version : Bobcat
- Deployment Tool : Kolla-Ansible
- Node : deploy, controller, network, compute
- Network(ML2) : OVN
2. Network configuration
- external Network : External Network
- internal Network : Internal API
- tenant Network : Tenant Network
3. Pre-deployment settings
- [compute] GRUB settings (see the sketch after this list for applying and verifying them)
- iommu=pt intel_iommu=on default_hugepagesz=1G hugepagesz=1G hugepages=64
- [deploy] Edit the Ansible role defaults
vi /home/archiadmin/kvenv/share/kolla-ansible/role-ovsdpdk/defaults/main.yml
ovsdpdk_services:
  ovsdpdk-db:
    enabled: True
  ovsdpdk-vswitchd:
    enabled: True
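The GRUB parameters from the [compute] step above only take effect after GRUB is regenerated and the node is rebooted. A minimal sketch for Ubuntu 22.04, assuming the default /etc/default/grub layout (append the parameters to GRUB_CMDLINE_LINUX, then):
sudo update-grub
sudo reboot
# after the reboot, confirm the cmdline and the 1G hugepages
cat /proc/cmdline
grep -i huge /proc/meminfo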
4. Netplan configuration
- Compute
# This is the network config written by 'subiquity'
network:
  ethernets:
    eno1:
      dhcp4: true
    eno2:
      dhcp4: true
    eno3:
      dhcp4: true
    eno4:
      dhcp4: true
    enp23s0f0:
      dhcp4: true
    enp23s0f1:
      dhcp4: true
    enp24s0f0:
      dhcp4: true
    enp24s0f1:
      dhcp4: true
    enp59s0f0:
      dhcp4: true
    enp59s0f1:
      dhcp4: true
    enp94s0f0:
      dhcp4: true
    enp94s0f1:
      dhcp4: true
    enp95s0f0:
      dhcp4: true
    enp95s0f1:
      dhcp4: true
  version: 2
  bonds:
    bond2:
      dhcp4: false
      dhcp6: false
      interfaces: ['enp23s0f0', 'enp24s0f1']
      mtu: 9000
      parameters:
        mii-monitor-interval: "100"
        mode: active-backup
  vlans:
    internal:
      addresses:
        - 172.19.244.12/24
      dhcp4: false
      dhcp6: false
      id: 220
      link: eno2
      mtu: 9000
    tenant:
      addresses:
        - 172.19.240.12/24
      dhcp4: true
      dhcp6: true
      id: 224
      link: bond2
      mtu: 9000
  bridges:
    external:
      interfaces: ['enp23s0f1', 'enp24s0f0']
      mtu: 9000
      addresses: [172.19.217.12/24]
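After writing the netplan file, the configuration still has to be applied on the compute node. A short sketch, assuming the file lives under /etc/netplan/:
sudo netplan try      # rolls back automatically if connectivity breaks
sudo netplan apply
ip -br addr | grep -E 'internal|tenant|external|bond2'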
5. OpenStack deployment
- [deploy] Fix typos in the kolla-ansible ovs-dpdk role
vi /home/archiadmin/kolla-ansible/ansible/roles/ovs-dpdk/handlers/main.yml
- name: Restart ovsdpdk-db container
  vars:
    service_name: "ovsdpdk-db"
    service: "{{ ovsdpdk_services[service_name] }}"
  become: true
  kolla_container:
    action: "recreate_or_restart_container"
    common_options: "{{ docker_common_options }}"
    name: "{{ service.container_name }}"
    image: "{{ service.image }}"
    volumes: "{{ service.volumes }}"
    dimensions: "{{ service.dimensions }}"
  when:
    - kolla_action != "config"
  notify:
    - Waiting the ovs db service to be ready
    - Ensuring ovsdpdk bridges are properly setup indexed
    - Restart ovsdpdk-vswitchd container
    - Ensuring ovsdpdk bridges are properly setup named
    - Wait for dpdk tunnel ip
    # changed "wait" to "Wait" so the notify matches the handler name
    - OVS-DPDK gather facts
vi /home/archiadmin/kolla-ansible/ansible/roles/ovs-dpdk/defaults/main.yml
ovs_bridge_mappings: "{% for bridge in neutron_bridge_name.split(',') %}physnet{{ loop.index0 + 1 }}:{{ bridge }}{% if not loop.last %},{% endif %}{% endfor %}"
ovs_port_mappings: "{% for bridge in neutron_bridge_name.split(',') %} {{ neutron_external_interface.split(',')[loop.index0] }}:{{ bridge }}{% if not loop.last %},{% endif %}{% endfor %}"
tunnel_interface_network: "{{ hostvars[inventory_hostname].ansible_facts[dpdk_tunnel_interface]['ipv4']['network'] }}/{{ hostvars[inventory_hostname].ansible_facts[dpdk_tunnel_interface]['ipv4']['netmask'] }}"
tunnel_interface_cidr: "{{ dpdk_tunnel_interface_address }}/{{ tunnel_interface_network | ipaddr('prefix') }}"
ovs_cidr_mappings: "{% if neutron_bridge_name.split(',') | length != 1 %} {{ neutron_bridge_name.split(',')[0] }}:{{ tunnel_interface_cidr }} {% else %} {{ neutron_bridge_name }}:{{ tunnel_interface_cidr }} {% endif %}"
# changed { neutron_bridge_name.split(',')[0] } to {{ neutron_bridge_name.split(',')[0] }} (the original was missing a brace)
ovs_mem_channels: 4
ovs_socket_mem: 1024
ovs_hugepage_mountpoint: /dev/hugepages
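As a rough illustration (not taken from an actual deployment), with the compute group_vars used later in this post (neutron_bridge_name "br-ex", neutron_external_interface "enp24s0f0"), the templates above would render to something like:
physnet1:br-ex (ovs_bridge_mappings)
enp24s0f0:br-ex (ovs_port_mappings)
br-ex:<dpdk tunnel ip>/<prefix> (ovs_cidr_mappings)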
- [all] Install required packages
apt install crudini
- [deploy] Edit kolla-ansible globals.yml
vi /etc/kolla/globals.yml
workaround_ansible_issue_8743: yes
kolla_base_distro: "ubuntu"
openstack_release: "2023.2"
node_custom_config: "{{ node_config }}/config"
kolla_internal_vip_address: "<public ip>"
kolla_external_vip_address: "192.168.10.111"
network_interface: "internal"
kolla_external_vip_interface: "eno1"
api_interface: "internal"
tunnel_interface: "tenant"
neutron_bridge_name: "br-ex"
neutron_plugin_agent: "ovn"
enable_neutron_packet_logging: "yes"
enable_openstack_core: "yes"
enable_haproxy: "yes"
enable_horizon: "{{ enable_openstack_core | bool }}"
enable_neutron_dvr: "yes"
enable_neutron_agent_ha: "yes"
enable_neutron_provider_networks: "yes"
nova_compute_virt_type: "kvm"
nova_console: "novnc"
- group_vars
vi /home/archiadmin/group_vars/control.yml
neutron_external_interface: "external"
enable_ovs_dpdk: "no"
vi /home/archiadmin/group_vars/network.yml
neutron_external_interface: "external"
enable_ovs_dpdk: "no"
vi /home/archiadmin/group_vars/compute.yml
neutron_external_interface: "enp24s0f0"
enable_ovs_dpdk: "yes"
ovs_datapath: "netdev"
dpdk_tunnel_interface: "external"
dpdk_interface_driver: "vfio-pci"
- [deploy] Deploy OpenStack
- kolla-ansible -i ./multinode deploy
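Once the deploy finishes, it is worth a quick sanity check (not part of the official procedure) that the DPDK-enabled OVS containers actually came up on the compute node:
docker ps --filter name=ovsdpdk
docker logs --tail 20 ovsdpdk_vswitchd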
6. Post-deployment tasks
- [compute] Edit the ovsdpdk configuration
vi /etc/kolla/ovsdpdk-db/ovs-dpdkctl.conf
[ovs]
bridge_mappings = physnet1:br-ex
port_mappings = enp24s0f1:br-ex,enp23s0f0:br-ex
cidr_mappings = br-ex:172.19.216.14/24
ovs_coremask = 0x1
pmd_coremask = 0x2
ovs_mem_channels = 4
ovs_socket_mem = 1024
dpdk_interface_driver = vfio-pci
hugepage_mountpoint = /dev/hugepages
physical_port_policy = named
pci_whitelist = -a 0000:18:00.1,safe-mode-support=1 -a 0000:17:00.0,safe-mode-support=1
[enp23s0f0]
address = 0000:17:00.0
driver = vfio-pci
old_driver = ice
[enp23s0f1]
address = 0000:17:00.1
driver = ice
[enp24s0f0]
address = 0000:18:00.0
driver = ice
[enp24s0f1]
address = 0000:18:00.1
driver = vfio-pci
old_driver = ice
docker restart ovsdpdk_db
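After the restart, you can check whether vswitchd actually initialised DPDK; recent OVS releases expose this in the Open_vSwitch table (column availability varies by version, so treat this as a hint rather than a guarantee):
docker exec ovsdpdk_vswitchd ovs-vsctl get Open_vSwitch . dpdk_initialized
docker exec ovsdpdk_vswitchd ovs-vsctl get Open_vSwitch . dpdk_version
docker exec ovsdpdk_vswitchd ovs-vsctl get Open_vSwitch . other_config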
- [compute] dpdk-devbind setup
docker exec ovsdpdk_vswitchd dpdk-devbind.py -u 17:00.0
docker exec ovsdpdk_vswitchd dpdk-devbind.py -u 18:00.1
docker exec ovsdpdk_vswitchd dpdk-devbind.py -b vfio-pci 17:00.0
docker exec ovsdpdk_vswitchd dpdk-devbind.py -b vfio-pci 18:00.1
docker exec ovsdpdk_vswitchd dpdk-devbind.py -s
Network devices using DPDK-compatible driver
============================================
0000:17:00.0 'Ethernet Controller E810-XXV for SFP 159b' drv=vfio-pci unused=ice
0000:18:00.1 'Ethernet Controller E810-XXV for SFP 159b' drv=vfio-pci unused=ice
Network devices using kernel driver
===================================
0000:04:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno8303 drv=tg3 unused=vfio-pci *Active*
0000:04:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno8403 drv=tg3 unused=vfio-pci
0000:17:00.1 'Ethernet Controller E810-XXV for SFP 159b' if=enp23s0f1 drv=ice unused=vfio-pci
0000:18:00.0 'Ethernet Controller E810-XXV for SFP 159b' if=enp24s0f0 drv=ice unused=vfio-pci
0000:b1:00.0 'Ethernet Controller E810-XXV for SFP 159b' if=enp177s0f0 drv=ice unused=vfio-pci
0000:b1:00.1 'Ethernet Controller E810-XXV for SFP 159b' if=enp177s0f1 drv=ice unused=vfio-pci
0000:b2:00.0 'Ethernet Controller E810-XXV for SFP 159b' if=enp178s0f0 drv=ice unused=vfio-pci
0000:b2:00.1 'Ethernet Controller E810-XXV for SFP 159b' if=enp178s0f1 drv=ice unused=vfio-pci
0000:ca:00.0 'I350 Gigabit Network Connection 1521' if=enp202s0f0 drv=igb unused=vfio-pci
0000:ca:00.1 'I350 Gigabit Network Connection 1521' if=enp202s0f1 drv=igb unused=vfio-pci
0000:ca:00.2 'I350 Gigabit Network Connection 1521' if=enp202s0f2 drv=igb unused=vfio-pci
0000:ca:00.3 'I350 Gigabit Network Connection 1521' if=enp202s0f3 drv=igb unused=vfio-pci
No 'Baseband' devices detected
==============================
No 'Crypto' devices detected
============================
No 'DMA' devices detected
=========================
No 'Eventdev' devices detected
==============================
No 'Mempool' devices detected
=============================
No 'Compress' devices detected
==============================
No 'Misc (rawdev)' devices detected
===================================
No 'Regex' devices detected
===========================
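If the bind to vfio-pci fails or the ports do not show up under the DPDK-compatible driver, the usual suspects are the vfio-pci module and the IOMMU settings from section 3; a quick check on the compute node:
lsmod | grep vfio
sudo modprobe vfio-pci
dmesg | grep -i -e iommu -e vfio | tail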
- [compute] OVS bond setup
docker exec ovsdpdk_db ovs-vsctl add-bond br-ex enp24s0f0 enp24s0f0 enp23s0f1 lacp=active \
-- set interface enp24s0f0 type=dpdk options:dpdk-devargs=0000:18:00.0,safe-mode-support=1 \
-- set interface enp23s0f1 type=dpdk options:dpdk-devargs=0000:17:00.1,safe-mode-support=1
docker exec ovsdpdk_db ovs-vsctl show
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
        fail_mode: secure
        datapath_type: netdev
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port enp24s0f0
            Interface enp23s0f1
                type: dpdk
                options: {dpdk-devargs="0000:17:00.1,safe-mode-support=1"}
            Interface enp24s0f0
                type: dpdk
                options: {dpdk-devargs="0000:18:00.0,safe-mode-support=1"}
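The LACP/bond state of the DPDK bond can also be checked with ovs-appctl, which is handy while the switch side is still being configured (the bond name follows the Port name shown above):
docker exec ovsdpdk_vswitchd ovs-appctl bond/show enp24s0f0
docker exec ovsdpdk_vswitchd ovs-appctl lacp/show enp24s0f0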
- [compute] Change the datapath type
- docker exec ovsdpdk_db ovs-vsctl --no-wait -- --may-exist add-br br-int -- set Bridge br-int datapath_type=netdev
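To confirm the change took effect (a simple read-back, not strictly required):
docker exec ovsdpdk_db ovs-vsctl get Bridge br-int datapath_type
docker exec ovsdpdk_db ovs-vsctl get Bridge br-ex datapath_type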
- [compute] Set the system id (adjust the id to match each node, e.g. compute01)
- docker exec ovsdpdk_db ovs-vsctl --no-wait set Open_Vswitch . external_ids:system-id=compute01
- [compute] Check the Encap ID (fill in <system id> with the node's system id, e.g. compute01)
- docker exec ovn_sb_db ovn-sbctl get Chassis <system id> Encap
- [compute] Change the Encap settings (replace <get Chassis <system id>> with the Encap value returned by the check above)
- docker exec ovn_sb_db ovn-sbctl set Encap <get Chassis <system id>> options:enable-dpdk=True
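The resulting Encap options can be read back from the southbound DB to make sure the flag is set (a hedged check, listing all Encap records):
docker exec ovn_sb_db ovn-sbctl list Encap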
- [compute] Change the bridge mapping name
- docker exec ovsdpdk_vswitchd ovs-vsctl get open . external-ids:ovn-bridge-mappings
- docker exec ovsdpdk_vswitchd ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet1:br-ex
7. Tuning
- When a VM is created, as many cores as the VM is assigned are taken from the PMD range
- When a VM is created, the same number of cores is also taken from the nova-pinned range
- When logging is enabled, it is accounted for in the lcore range
- core-mask.py
#!/usr/bin/python
import sys

# Convert a hex CPU mask (e.g. "f400") into a comma-separated core list.
def hex_to_comma_list(hex_mask):
    binary = bin(int(hex_mask, 16))[2:]
    reversed_binary = binary[::-1]
    i = 0
    output = ""
    for bit in reversed_binary:
        if bit == '1':
            output = output + str(i) + ','
        i = i + 1
    return output[:-1]

# Convert a comma-separated core list (e.g. "10,12") into a hex CPU mask.
def comma_list_to_hex(cpus):
    cpu_arr = cpus.split(",")
    binary_mask = 0
    for cpu in cpu_arr:
        binary_mask = binary_mask | (1 << int(cpu))
    return format(binary_mask, '02x')

if len(sys.argv) != 2:
    sys.exit(2)
user_input = sys.argv[1]
try:
    # An argument that parses as hex is treated as a mask; anything else
    # (e.g. a comma-separated core list) falls through to the mask builder.
    print(hex_to_comma_list(user_input))
except ValueError:
    print(comma_list_to_hex(user_input))
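For reference, this is how the helper is used to produce the masks applied below (the same script is referred to as dpdk-pmd-mask.py later on); the first example matches the lcore mask in the applied settings:
python3 core-mask.py 10,12,13,14,15,44,45,46,47   # -> f0000000f400
python3 core-mask.py f0000000f400                 # -> 10,12,13,14,15,44,45,46,47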
- Applied settings
- nova pinning
  # compute node : nova-compute/nova.conf
  [DEFAULT]
  vcpu_pin_set = 24-31,56-63,96-127
- flavor metadata
  hw:cpu_policy=dedicated hw:mem_page_size=1GB hw:vif_multiqueue_enabled=true
- lcore mask (cores 10,12,13,14,15,44,45,46,47)
  python3 dpdk-pmd-mask.py 10,12,13,14,15,44,45,46,47
  # compute node :
  docker exec ovsdpdk_vswitchd ovs-vsctl --no-wait set Open_Vswitch . other_config:dpdk-lcore-mask=f0000000f400
- pmd mask (cores 16-23,48-55,64-95)
  python3 dpdk-pmd-mask.py 16,17,18,19,20,21,22,23,48,49,50,51,52,53,54,55,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95
  # compute node :
  docker exec ovsdpdk_vswitchd ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=ffffffff00ff000000ff0000
  # compute node :
  docker exec ovsdpdk_vswitchd ovs-appctl dpif-netdev/pmd-rxq-show
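Beyond pmd-rxq-show, PMD core utilisation can be inspected as well, which helps confirm the mask actually spreads load across the intended cores (standard OVS-DPDK appctl commands, shown here as a suggestion):
docker exec ovsdpdk_vswitchd ovs-appctl dpif-netdev/pmd-stats-clear
docker exec ovsdpdk_vswitchd ovs-appctl dpif-netdev/pmd-stats-show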
8. Verification
- Measure throughput with iperf3 to confirm the effect of the various tuning steps (example below)
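A minimal iperf3 run between two VMs on different compute nodes, assuming iperf3 is installed in the guests and 10.0.0.10 is the server VM's address (both the address and the options are placeholders):
# on the server VM
iperf3 -s
# on the client VM
iperf3 -c 10.0.0.10 -P 4 -t 30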