
OpenStack-Ansible

OpenStack-Ansible Swift-Based Installation

Guide document

 

Configuration

deploy node : 4core/8gb, 192.168.130.5, 172.28.236.5

controller node : 8core/24gb, 192.168.130.11, 172.28.236.11, 172.28.240.11, 172.28.244.11

compute node 1 : 8core/16gb, 192.168.130.21, 172.28.236.21, 172.28.240.21, 172.28.244.21

storage node 1 : 4core/8gb, 192.168.130.31, 172.28.236.31, 172.28.240.31, 172.28.244.31

storage node 1 disks : 50gb, 100gb, 20gb, 20gb, 20gb

 

Networks

default 192.168.130.0/24

mgmt 172.28.236.0/22

vxlan 172.28.240.0/22

storage 172.28.244.0/22

 

Installation steps

  1. Create the deploy node
    1. The commands below are run on the deploy node
  2. vi /etc/netplan/~.yaml (netplan configuration)
# This is the network config written by 'subiquity'
network:
  ethernets:
    enp1s0:
      addresses:
      - 192.168.130.5/24
      gateway4: 192.168.130.1
      nameservers:
        addresses:
        - 8.8.8.8
        - 8.8.4.4
    enp7s0:
      addresses:
      - 172.28.236.5/22
  version: 2
  1. netplan apply
    1. A reboot is required
  2. apt update
  3. apt dist-upgrade
  4. apt install build-essential git chrony openssh-server python3-dev sudo
  5. git clone -b 26.1.0 https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
  6. cd /opt/openstack-ansible
  7. scripts/bootstrap-ansible.sh
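A quick sanity check after the bootstrap script finishes, assuming it placed the Ansible wrappers on the PATH as it normally does:
# Confirm the bootstrap installed the tooling used in the rest of this guide.
which openstack-ansible
ansible --version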
  8. vi /etc/hosts (host entries)
127.0.0.1 localhost
127.0.1.1 devstack

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

# deploy
172.28.236.5       deploy

# controller
172.28.236.11      controller

# compute1
172.28.236.21      compute1

# ceph1
172.28.236.31      ceph1
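A minimal sketch for verifying that the /etc/hosts entries above resolve as expected, run from the deploy node:
# Each hostname should print the management IP defined in /etc/hosts.
for h in deploy controller compute1 ceph1; do
  getent hosts "$h"
done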
  1. Once the deploy node is configured, create the controller, compute, and storage nodes
  2. Configure the networks and, where needed, the storage on each node
  3. Configure /etc/hosts and netplan on each node
    1. The commands below are run on each node
    2. Use the same /etc/hosts entries as above
    3. For netplan, use the template below, filling in each node's assigned IPs
# This is the network config written by 'subiquity'
network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0:
      dhcp4: no
      addresses: [192.168.130.21/24]
      gateway4: 192.168.130.1
      nameservers:
        addresses:
        - 8.8.8.8
        - 8.8.4.4
    enp6s0:
      dhcp4: no
    enp7s0:
      dhcp4: no
    enp8s0:
      dhcp4: no
    enp9s0:
      dhcp4: no
  bridges:
    br-mgmt:
      interfaces: [enp7s0]
      addresses: [172.28.236.21/22]
    br-vxlan:
      interfaces: [enp9s0]
      addresses: [172.28.240.21/22]
    br-storage:
      interfaces: [enp8s0]
      addresses: [172.28.244.21/22]
  1. Change the host octet (11, 21, 31, etc.) to match each node
  2. netplan apply
    1. A reboot is required
  3. From each node, ping the other nodes' IPs to confirm connectivity; if it fails, check that each interface is attached to the correct IP range (see the sketch below)
    1. Use ifconfig or ip a to list the attached networks and confirm each interface carries the subnet it is supposed to
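A minimal connectivity check, assuming you are testing against the controller's addresses (.11); adjust the host octet for whichever node you are checking from:
# Ping one address on each of the mgmt, vxlan, and storage networks.
for ip in 172.28.236.11 172.28.240.11 172.28.244.11; do
  ping -c 3 "$ip" || echo "no reply from $ip: check the bridge/interface mapping"
done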
  4. Run the commands below on each node
    1. apt update
    2. apt dist-upgrade
apt install bridge-utils debootstrap openssh-server \
  tcpdump vlan python3
    1. apt install linux-modules-extra-$(uname -r)
  1. On the deploy node, run the commands below to set up passwordless SSH access
    1. ssh-keygen
    2. ssh-copy-id controller
    3. ssh-copy-id compute1
    4. ssh-copy-id ceph1
  2. From the deploy node, confirm the connections work with the commands below (or the loop that follows this list)
    1. ssh controller
    2. ssh compute1
    3. ssh ceph1
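A non-interactive variant of the check above, assuming the key copied with ssh-copy-id is the default identity:
# Each host should print its hostname without asking for a password.
for h in controller compute1 ceph1; do
  ssh -o BatchMode=yes "$h" hostname
done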
  3. Attach the 100/50/20/20/20 GB disks defined above to the storage node (ceph1) and run the commands below
    1. The commands below are run on the ceph1 node
    2. pvcreate --metadatasize 2048 /dev/vdc
    3. vgcreate cinder-volumes /dev/vdc
    4. Verify the physical volume and volume group with pvs, vgs, pvdisplay, etc. (see the sketch below)
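A short verification sketch, assuming /dev/vdc is the 100 GB disk reserved for cinder (confirm the device name with lsblk first):
# The volume group created above should report roughly 100 GB of free space.
lsblk /dev/vdc
pvs /dev/vdc
vgs cinder-volumes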
  4. On the deploy node: cp -r /opt/openstack-ansible/etc/openstack_deploy /etc/openstack_deploy
    1. cd /etc/openstack_deploy/
    2. cp openstack_user_config.yml.example openstack_user_config.yml
  5. Edit the generated openstack_user_config.yml into the form below (for a Ceph-based installation, refer to that guide instead)
---
cidr_networks:
  container: 172.28.236.0/22 # MGMT
  tunnel: 172.28.240.0/22 # VXLAN
  storage: 172.28.244.0/22

used_ips:
  - "172.28.236.1,172.28.236.50"
  - "172.28.240.1,172.28.240.50"
  - "172.28.244.1,172.28.244.50"

global_overrides:
  # The internal and external VIP should be different IPs, however they
  # do not need to be on separate networks.
  external_lb_vip_address: 192.168.130.11
  internal_lb_vip_address: 172.28.236.11
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
    - network:
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_openvswitch_agent
    - network:
        container_bridge: "br-ex"
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "eth12"
        type: "flat"
        net_name: "physnet"
        group_binds:
          - neutron_openvswitch_agent
    - network:
        container_bridge: "br-storage"
        container_type: "veth"
        container_interface: "eth2"
        ip_from_q: "storage"
        type: "raw"
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute
          - swift_proxy
#          - ceph-osd
#          - ceph-rgw

  swift:
    part_power: 8
    storage_network: 'br-storage'
    replication_network: 'br-storage'
    drives:
      - name: vdd # my storage name
      - name: vde
      - name: vdf
    mount_point: /srv/node
    storage_policies:
      - policy:
          name: default
          index: 0
          default: True

###
### Infrastructure
###

# galera, memcache, rabbitmq, utility
shared-infra_hosts:
  controller1:
    ip: 172.28.236.11

# repository (apt cache, python packages, etc)
repo-infra_hosts:
  controller1:
    ip: 172.28.236.11

# load balancer
haproxy_hosts:
  controller1:
    ip: 172.28.236.11

###
### OpenStack
###

# keystone
identity_hosts:
  controller1:
    ip: 172.28.236.11

# cinder api services
storage-infra_hosts:
  controller1:
    ip: 172.28.236.11

# glance
image_hosts:
  controller1:
    ip: 172.28.236.11

# placement
placement-infra_hosts:
  controller1:
    ip: 172.28.236.11

# nova api, conductor, etc services
compute-infra_hosts:
  controller1:
    ip: 172.28.236.11

# heat
orchestration_hosts:
  controller1:
    ip: 172.28.236.11

# horizon
dashboard_hosts:
  controller1:
    ip: 172.28.236.11

# neutron server, agents (L3, etc)
network_hosts:
  controller1:
    ip: 172.28.236.11

# nova hypervisors
compute_hosts:
  compute1:
    ip: 172.28.236.21

# cinder storage host (LVM-backed)
storage_hosts:
  storage1:
    ip: 172.28.236.31
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        lvm:
          volume_group: cinder-volumes
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_backend_name: LVM_iSCSI
          iscsi_ip_address: "172.28.244.31"

# swift
swift-proxy_hosts:
  storage1:
    ip: 172.28.236.31
    container_vars:
      swift_proxy_vars:
        limit_container_types: swift_proxy
        read_affinity: "r1=100"
        write_affinity: "r1"
        write_affinity_node_count: "1 * replicas"

swift_hosts:
  storage1:
    ip: 172.28.236.31
    container_vars:
      swift_vars:
        limit_container_types: swift
        zone: 0
        region: 1
  1. If you changed the subnets, adjust the IPs above accordingly
    1. Check the storage device names with fdisk -l (see the sketch below)
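A sketch for confirming which block devices back the swift drives; the vdd/vde/vdf names used in the config above are assumptions based on this particular disk layout:
# List every disk with its size so the 20 GB swift drives are easy to spot.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
fdisk -l | grep -E '^Disk /dev/vd'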
  2. user_variables.yml (for a Ceph-based installation, refer to that guide)
---
debug: True
ssh_delay: 10

#lxc_cache_prep_timeout: 3000

openstack_service_publicuri_proto: http
openstack_external_ssl: false
haproxy_ssl: true
rabbitmq_use_ssl: false

horizon_images_upload_mode: legacy

haproxy_keepalived_external_vip_cidr: "192.168.130.11/24"
haproxy_keepalived_internal_vip_cidr: "172.28.236.11/22"
haproxy_keepalived_external_interface: enp1s0 # my interface name
haproxy_keepalived_internal_interface: br-mgmt

neutron_plugin_base:
  - router

openstack_host_specific_kernel_modules:
  - name: "openvswitch"
    pattern: "CONFIG_OPENVSWITCH="
    group: "network_hosts"

neutron_plugin_type: ml2.ovs.dvr
neutron_l2_population: true
neutron_tunnel_types:  vxlan

neutron_provider_networks:
  network_flat_networks: "*"
  network_types: "vxlan, flat, vlan"
  network_vxlan_ranges: "10001:20000"
  network_mappings: "public:br-ex" # mapped to the network name configured in step 21.a
  network_interface_mappings: "br-ex:enp6s0"

# swift
swift_allow_all_users: true
glance_default_store: swift
glance_swift_store_auth_address: '{{ keystone_service_internalurl }}'
glance_swift_store_container: glance_images
glance_swift_store_endpoint_type: internalURL
glance_swift_store_key: '{{ glance_service_password }}'
glance_swift_store_region: RegionOne
glance_swift_store_user: 'service:glance'

# add zed
swift_storage_address: 172.28.244.31
swift_replication_address: 172.28.244.31
#interface_mapping: br-ex:ens3
neutron_ml2_drivers_type: "local,flat,vlan,vxlan"
horizon_network_provider_types: ['local', 'flat', 'vxlan', 'geneve', 'vlan']
  1. cd /opt/openstack-ansible
    1. ./scripts/pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml
    2. Confirm the output "Operation Complete, [ /etc/openstack_deploy/user_secrets.yml ] is ready"
  2. ssh ceph1 (these commands are for the Swift-based installation; skip them when installing on top of Ceph)
    1. mkfs.ext4 /dev/vdd
    2. mkfs.ext4 /dev/vde
    3. mkfs.ext4 /dev/vdf
    4. mkdir /srv/node/vdd
    5. mkdir /srv/node/vde
    6. mkdir /srv/node/vdf
    7. mount /dev/vdd /srv/node/vdd
    8. mount /dev/vde /srv/node/vde
    9. mount /dev/vdf /srv/node/vdf

  1. vi /etc/fstab (these entries are for the Swift-based installation; skip them when installing on top of Ceph)
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-P2MiJd50dDxneDzn20KFiOfk4GRR2h7ySWRuIRJG2I1XYZWKHEwUGmOgAU3W66R0 / ext4 defaults 0 1
# /boot was on /dev/vda2 during curtin installation
/dev/disk/by-uuid/c0595f2e-9cfc-405f-b8ec-eea1dbfabf1e /boot ext4 defaults 0 1
/swap.img       none    swap    sw      0       0

# swift disk
/dev/vdd /srv/node/vdd ext4 defaults 0 0
/dev/vde /srv/node/vde ext4 defaults 0 0
/dev/vdf /srv/node/vdf ext4 defaults 0 0
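A quick check that the swift drives are mounted and that the fstab entries above are valid:
# mount -a re-reads /etc/fstab; any error here points at a bad entry.
mount -a
for d in vdd vde vdf; do
  findmnt "/srv/node/$d" || echo "/srv/node/$d is not mounted"
done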
  1. Install postfix on every node
    1. apt install postfix
    2. Choose "No configuration", then OK
  2. On the deploy node: cd /opt/openstack-ansible/playbooks/
    1. The various installation playbooks live here
    2. Only the services defined in openstack_user_config.yml get installed
      1. To install additional services, add the corresponding entries to openstack_user_config.yml
    3. openstack-ansible setup-infrastructure.yml --syntax-check to confirm the configuration parses correctly
    4. openstack-ansible setup-hosts.yml to run the host setup
    5. Check the log at /openstack/log/ansible-logging/ansible.log
    6. openstack-ansible setup-infrastructure.yml to run the infrastructure setup
ansible galera_container -m shell \
  -a "mysql -h localhost -e 'show status like \"%wsrep_cluster_%\";'"

Check that the Galera cluster came up correctly; the expected output looks like this:

controller1_galera_container-658757fc | CHANGED | rc=0 >>
Variable_name   Value
wsrep_cluster_weight    1
wsrep_cluster_capabilities
wsrep_cluster_conf_id   1
wsrep_cluster_size      1
wsrep_cluster_state_uuid        0e4cf0fb-ee24-11ed-874c-12ee62ed49da
wsrep_cluster_status    Primary
    1. openstack-ansible setup-openstack.yml to install the OpenStack services
      1. Installs keystone, placement, glance, cinder, nova, neutron, heat, horizon, and swift
      2. If the run aborts because of insufficient disk space:
        1. Check whether any free space is left (see the sketch below)
        2. lvextend -L +24G /dev/mapper/ubuntu--vg-ubuntu--lv
        3. resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
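A hedged sketch for checking free space before extending the root LV; the +24G figure above is only an example and depends on the free extents the volume group actually reports (ubuntu-vg is the default Ubuntu LVM name seen in the fstab above):
# VFree in the vgs output is the maximum you can still hand to lvextend.
df -h /
vgs ubuntu-vg
lvs /dev/mapper/ubuntu--vg-ubuntu--lv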
  1. Once the installation finishes, try to reach horizon at 172.28.236.11:80
    1. It will most likely not work yet
  2. On the controller node, vi /etc/haproxy/haproxy.cfg
    1. Delete the horizon section and replace it with the block below
frontend horizon-redirect-front-1
    bind 192.168.130.11:443
    option httplog
    option forwardfor except 127.0.0.0/8
    option http-server-close
    mode tcp
    http-request add-header          X-Forwarded-Proto https
    timeout client 600s
    timeout server 600s
    default_backend horizon-back


frontend horizon-redirect-front-2
    bind 172.28.236.11:443
    option httplog
    option forwardfor except 127.0.0.0/8
    option http-server-close
    mode tcp
    http-request add-header          X-Forwarded-Proto https
    timeout client 600s
    timeout server 600s
    default_backend horizon-back

backend horizon-back
    mode tcp
    balance source
    stick store-request src
    stick-table type ip size 256k expire 30m
    option forwardfor
    option ssl-hello-chk
    timeout client 600s
    timeout server 600s
    server controller_horizon_container-73de3042 172.28.239.120:443 check port 443 inter 12000 rise 1 fall 1
      1. Check the horizon container's name and IP with lxc-ls -f and change the last server line to match your environment (see the sketch below)
      2. service haproxy restart
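A sketch for finding the horizon container and validating the edited config before restarting; the grep pattern assumes the default *_horizon_container naming:
# Grab the horizon container's address, then check the config syntax.
lxc-ls -f | grep horizon
haproxy -c -f /etc/haproxy/haproxy.cfg
service haproxy restart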
  1. On the controller node, attach to the utility container with lxc-attach -n controller1_utility; the openstack commands are available there
    1. An openrc file should be present (check with ls); source it with ". openrc" and then run openstack commands (see the sketch below)
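A few smoke tests from inside the utility container; the exact container name differs per deployment, so treat controller1_utility above as a placeholder for whatever lxc-ls -f reports:
# Source the credentials, then list the registered services and agents.
. openrc
openstack service list
openstack compute service list
openstack network agent list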
  2. 172.28.236.11:80 is now reachable, but problems remain, e.g. the admin network settings cannot be applied
    1. To fix this, on the deploy node comment out the last line of /etc/openstack_deploy/user_variables.yml and uncomment line 51
    2. Reinstall only horizon with openstack-ansible /opt/openstack-ansible/playbooks/os-horizon-install.yml
    3. After the install, set up the admin network, image, router, and interface
  3. Even after creating a VM the console will not work; on the controller node, edit /etc/haproxy/haproxy.cfg
    1. Find the 6080 (nova console) section, look up the backend IP, and replace the console IP with it (see the sketch below)
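A sketch for locating the console section and the nova containers; the grep patterns are assumptions about the generated haproxy config and container names:
# Show the frontend/backend around port 6080 and the nova-related containers.
grep -n -B2 -A8 '6080' /etc/haproxy/haproxy.cfg
lxc-ls -f | grep nova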

  1. On the compute and controller nodes, run ovs-vsctl show
    1. ovs-vsctl add-port br-ex enp6s0
      1. Add the name of an unused interface (see the sketch below)
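A quick verification that the port was added to the external bridge; enp6s0 is the unused interface in this topology and may differ on other hosts:
# br-ex should now list the physical interface alongside any patch ports.
ovs-vsctl list-ports br-ex
ovs-vsctl show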
  2. When creating the admin network for VMs, refer to this section of /etc/openstack_deploy/user_variables.yml:
neutron_provider_networks:
  network_flat_networks: "*"
  network_types: "vxlan, flat, vlan"
  network_vxlan_ranges: "10001:20000"
  network_mappings: "public:br-ex" # mapped to the network name configured in step 21.a
  1. Use the first part of network_mappings (here, public) as the physical network name when creating provider networks (see the sketch below)
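A hedged example of creating the external provider network from the utility container, assuming the flat physical network name is the one taken from network_mappings (public); the network/subnet names and the allocation range are placeholders:
# Flat external network on the 192.168.130.0/24 range used in this guide.
openstack network create --external --share \
  --provider-network-type flat --provider-physical-network public public-net
openstack subnet create --network public-net \
  --subnet-range 192.168.130.0/24 --gateway 192.168.130.1 \
  --allocation-pool start=192.168.130.100,end=192.168.130.200 public-subnet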