1
Information
1.1
Summary
1.1.1
Introduction
This document (26/5/2021) provides steps for installing Kolla Ansible and OpenStack Train on CentOS 7.9.
The installation creates a single-region or multi-region OpenStack.
Because the software repositories and guides referenced here were gathered from various transient sources, the steps may stop working in the future.
The system is set up in the SVTech JSC lab environment with ProLiant DL360 Gen10 servers and VMware vSphere virtualization.
1.1.2
Version
| Software | Version | rpm |
| python2 | 2.7.5 | python-2.7.5-90.el7.x86_64 |
| python2-pip | 8.1.2-14 | python2-pip-8.1.2-14.el7.noarch |
| pip | 20.3.4 | pip-20.3.4 |
| VMware ESXi | 6.7.0, 8169922 | |
| centos-release | CentOS 7.9.2009 (Core) | |
| kolla-ansible | 9.3.1 | (pip list) |
| ansible | 2.9.21 | ansible-2.9.21-1.el7.noarch |
1.1.3
References
a. Single Region
https://www.cnblogs.com/yyx66/p/14670840.html
https://my.oschina.net/u/4411210/blog/4486244
b. Multi Region
https://blog.csdn.net/zhujisoft/article/details/107198603
https://docs.openstack.org/kolla-ansible/latest/user/multi-regions.html
1.2
Design
1.2.1
Diagram
a.
Single Region
b.
Multi Region
1.2.2
IP Address
| Node | ens192 | ens224 | ens256 |
| control01 | 10.1.17.52 | | 192.168.126.52 |
| control02 | 10.1.17.53 | | 192.168.126.53 |
| compute01 | 10.1.17.37 | | 192.168.126.37 |
| compute02 | 10.1.17.38 | | 192.168.126.38 |
| vip1 | 10.1.17.50 | | |
| vip2 | 10.1.17.54 | | |
1.2.3
Openstack info
1.2.4
Kolla-ansible info
Always run the kolla-ansible deployment from control01.
2
Deploy Single Region
2.1
Environment
2.1.1
Config vSphere
a.
Config VM: CPU hardware-assisted virtualization enabled, 3 network interfaces, 2 HDDs
b.
Enable network promiscuous mode
2.1.2
Config OS node
https://www.cnblogs.com/yyx66/p/14670840.html
a.
OS
tee /etc/resolv.conf << EOF
nameserver 8.8.8.8
EOF
yum install gcc vim wget net-tools ntpdate git -y
b.
Firewall
2. Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
firewall-cmd --state
c.
cinder-volumes (for controller nodes)
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
d.
Key
ssh-keygen
ssh-copy-id root@control01
ssh-copy-id root@control02
ssh-copy-id root@compute01
ssh-copy-id root@compute02
e.
Selinux
3. Disable SELinux
sed -i '/^SELINUX=.*/c SELINUX=disabled' /etc/selinux/config
grep --color=auto '^SELINUX' /etc/selinux/config
setenforce 0
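The sed `c` (change-line) command above replaces the whole matching line. As a sanity check, it can be dry-run against a throwaway copy before touching /etc/selinux/config (the two-line file below is an assumed stock layout):

```shell
# Dry-run the SELinux edit on a temporary copy of the config.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
# Same edit as above: replace the entire SELINUX= line.
sed -i '/^SELINUX=.*/c SELINUX=disabled' "$cfg"
# SELINUXTYPE= does not match ^SELINUX= and is left untouched.
grep '^SELINUX' "$cfg"
```

Note that the config change takes full effect only after a reboot; `setenforce 0` covers the current boot.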
f.
hosts
4. Hostnames:
echo "
## compute
# public multi region (test1)
10.1.17.30 control
10.1.17.31 compute1
10.1.17.32 compute2
# public multi region (test2)
10.1.17.33 compute3
10.1.17.34 compute4
# kolla
10.1.17.35 compute5
10.1.17.36 compute6
# kolla china
10.1.17.37 compute01
10.1.17.38 compute02
## controller
# public multi region (test1)
10.1.17.40 control2
# public multi region (test2)
10.1.17.43 control3
10.1.17.44 control4
# packstack
10.1.17.45 control5
# kolla
10.1.17.46 control6
10.1.17.47 control7
10.1.17.50 vip1
10.1.17.54 vip2
# kolla china
10.1.17.51 node
10.1.17.52 control01
10.1.17.53 control02
" >> /etc/hosts
6. Adjust sshd keepalive settings
sed -i 's/#ClientAliveInterval 0/ClientAliveInterval 60/g' /etc/ssh/sshd_config
sed -i 's/#ClientAliveCountMax 3/ClientAliveCountMax 60/g' /etc/ssh/sshd_config
systemctl daemon-reload && systemctl restart sshd && systemctl status sshd
2.1.3
Config yum
sed -e 's|^mirrorlist=|#mirrorlist=|g' \
    -e 's|^#baseurl=http://mirror.centos.org/centos|baseurl=https://mirrors.ustc.edu.cn/centos|g' \
    -i.bak /etc/yum.repos.d/CentOS-Base.repo
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache
yum install -y epel-release
2.1.4
Install pip/python
11. Install python-pip
yum install python-pip -y
pip install --upgrade "pip < 21.0"
pip install pbr
https://my.oschina.net/u/4411210/blog/4486244
mkdir -p /root/.pip
tee /root/.pip/pip.conf << EOF
[global]
index-url = https://mirrors.aliyun.com/pypi/simple/
[install]
trusted-host=mirrors.aliyun.com
EOF
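The pip.conf written above is plain INI; if there is any doubt about the heredoc quoting, the file can be parsed back with python's configparser (a sketch against a temporary copy — python3 is used here for illustration; on the CentOS 7 nodes python2's ConfigParser behaves the same way):

```shell
# Write the same fragment to a temp file and parse it back as INI.
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
[global]
index-url = https://mirrors.aliyun.com/pypi/simple/
[install]
trusted-host = mirrors.aliyun.com
EOF
python3 - "$tmp" << 'PY'
import configparser, sys
cp = configparser.ConfigParser()
cp.read(sys.argv[1])
# Print the mirror URL back out to confirm the section/key parsed.
print(cp['global']['index-url'])
PY
```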
a.
update
12. Upgrade system packages
yum update -y
13. Reboot the system
reboot
2.2
Setup kolla
kolla-ansible/ansible/python/pip
2.2.1
Install ansible/kolla-ansible
# run on node1 (the deployment node)
1. Install dependency packages
yum install python2-devel libffi-devel openssl-devel libselinux-python -y
yum remove docker docker-common docker-selinux docker-engine -y
yum install yum-utils device-mapper-persistent-data lvm2 -y
2. Install ansible (kolla-ansible 9.x requires ansible >= 2.8 and < 2.10; EPEL provides 2.9.21, which satisfies this)
yum install -y ansible
3. Configure ansible.cfg (keep host_key_checking disabled so the deployment is not blocked by SSH prompts)
sed -i 's/#host_key_checking = False/host_key_checking = False/g' /etc/ansible/ansible.cfg
sed -i 's/#pipelining = False/pipelining = True/g' /etc/ansible/ansible.cfg
sed -i 's/#forks = 5/forks = 100/g' /etc/ansible/ansible.cfg
4. Install kolla-ansible
pip install kolla-ansible==9.3.1 --ignore-installed PyYAML
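These sed substitutions are silent no-ops if the commented defaults differ from a stock ansible.cfg, so it is worth dry-running them on a minimal copy first (shown here for the pipelining and forks edits; file contents are an assumed stock layout):

```shell
# Dry-run the ansible.cfg substitutions against a temporary copy.
cfg=$(mktemp)
cat > "$cfg" << 'EOF'
[defaults]
#forks = 5
[ssh_connection]
#pipelining = False
EOF
sed -i 's/#pipelining = False/pipelining = True/g' "$cfg"
sed -i 's/#forks = 5/forks = 100/g' "$cfg"
# Both lines should now be uncommented with the new values.
grep -E '^(forks|pipelining)' "$cfg"
```

If grep prints nothing for a key, the commented default in the real file does not match the sed pattern and the option must be edited by hand.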
2.2.2
Install docker-ce
yum install docker-ce -y
6. Copy the kolla-ansible example configuration into place
mkdir -p /etc/kolla
chown $USER:$USER /etc/kolla
cp -r /usr/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
cp /usr/share/kolla-ansible/ansible/inventory/* .
7. Configure Docker registry mirrors (Chinese mirror endpoints)
mkdir /etc/docker/
cat >> /etc/docker/daemon.json << EOF
{
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
  ]
}
EOF
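A malformed daemon.json stops the Docker daemon from starting, so it is worth validating the JSON before the restart in step 9 (`json.tool` is part of the python standard library; python3 is used here for illustration):

```shell
# Validate the registry-mirror JSON before handing it to Docker.
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
{
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
  ]
}
EOF
# json.tool exits non-zero on any syntax error (trailing comma, etc.).
python3 -m json.tool "$tmp" > /dev/null && echo "daemon.json: valid JSON"
```

On the deployment node, run the same check against /etc/docker/daemon.json itself.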
8. Enable Docker shared mount propagation
mkdir -p /etc/systemd/system/docker.service.d
cat >> /etc/systemd/system/docker.service.d/kolla.conf << EOF
[Service]
MountFlags=shared
EOF
9. Enable and start the Docker service
systemctl daemon-reload && systemctl enable docker && systemctl restart docker && systemctl status docker
2.3
Deploy
single region
2.3.1
multinode
vi multinode
[control]
# These hostnames must be resolvable from your deployment host
control01 ansible_connection=ssh ansible_user=root
# The above can also be specified as follows:
#control[01:03] ansible_user=kolla
# The network nodes are where your l3-agent and loadbalancers will run
# This can be the same as a host in the control group
[network]
control01 ansible_connection=ssh ansible_user=root
[compute]
compute01 ansible_connection=ssh ansible_user=root
[monitoring]
control01 ansible_connection=ssh ansible_user=root
# When compute nodes and control nodes use different interfaces,
# you need to comment out "api_interface" and other interfaces from the globals.yml
# and specify like below:
#compute01 neutron_external_interface=eth0 api_interface=em1 storage_interface=em1 tunnel_interface=em1
[storage]
control01 ansible_connection=ssh ansible_user=root
[deployment]
control01 ansible_connection=ssh
# Keep the following section unchanged
[baremetal:children]
control
network
compute
storage
monitoring
# You can explicitly specify which hosts run each project by updating the
# groups in the sections below. Common services are grouped together.
[chrony-server:children]
haproxy
…
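kolla-ansible only needs the group headers intact and each host entry on a single line; after hand-editing, a quick grep of the headers catches accidental line-wrapping (a sketch against a throwaway fragment — run it against the real multinode file on control01):

```shell
# List the group headers of an inventory fragment.
inv=$(mktemp)
cat > "$inv" << 'EOF'
[control]
control01 ansible_connection=ssh ansible_user=root
[network]
control01 ansible_connection=ssh ansible_user=root
[compute]
compute01 ansible_connection=ssh ansible_user=root
[baremetal:children]
control
network
compute
EOF
# Every group header starts with '[' at column one.
grep '^\[' "$inv"
```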
2.3.2
passwords.yml
kolla-genpwd
vi /etc/kolla/passwords.yml
keystone_admin_password: Admin123
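kolla-genpwd fills every empty `key:` field in passwords.yml with a random value; after editing keystone_admin_password by hand it is worth confirming no field was left blank (a sketch against a temporary fragment with one deliberately empty key; on control01 point the grep at /etc/kolla/passwords.yml):

```shell
# Flag any password keys that still have no value.
pw=$(mktemp)
cat > "$pw" << 'EOF'
keystone_admin_password: Admin123
database_password: QY3kZ8
empty_example_password:
EOF
# A line ending in ':' (plus optional whitespace) has no value set.
grep -E ':[[:space:]]*$' "$pw" || echo "all passwords set"
```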
2.3.3
globals.yml
[root@control01 ~]# cat /etc/kolla/globals.yml
kolla_base_distro: "centos"
kolla_install_type: "source"
openstack_release: "train"
node_custom_config: "/etc/kolla/config"
docker_namespace: "kolla"
kolla_internal_vip_address: "10.1.17.50"
network_interface: "ens192"
tunnel_interface: "ens256"
neutron_external_interface: "ens224"
neutron_plugin_agent: "openvswitch"
neutron_tenant_network_types: "vxlan,vlan,flat"
keepalived_virtual_router_id: "56"
openstack_logging_debug: "True"
enable_cinder: "yes"
enable_cinder_backend_lvm: "yes"
enable_cinder_backup: "no"
enable_heat: "no"
enable_neutron_dvr: "yes"
#enable_neutron_agent_ha: "yes"
enable_neutron_provider_networks: "yes"
enable_nova_ssh: "yes"
nova_compute_virt_type: "qemu"
nova_console: "novnc"
enable_haproxy: "yes"
enable_horizon: "yes"
enable_keystone: "yes"
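A mistyped or missing interface name in globals.yml is an easy mistake that only surfaces at deploy time; a minimal pre-flight check greps for the three interface keys (run here against a temporary fragment; point it at /etc/kolla/globals.yml and compare the values with `ip link` on each node):

```shell
# Check that every interface key in globals.yml has a quoted value.
cfg=$(mktemp)
cat > "$cfg" << 'EOF'
network_interface: "ens192"
tunnel_interface: "ens256"
neutron_external_interface: "ens224"
EOF
for key in network_interface tunnel_interface neutron_external_interface; do
    grep -q "^${key}: \".\+\"" "$cfg" && echo "${key}: ok"
done
```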
2.3.4
service config
a. neutron
mkdir -p /etc/kolla/config/neutron
vi /etc/kolla/config/neutron/ml2_conf.ini
[ml2]
tenant_network_types = vxlan,vlan
[ml2_type_vlan]
network_vlan_ranges = physnet1:1:4094
b. nova
mkdir -p /etc/kolla/config/nova
cat >> /etc/kolla/config/nova/nova-compute.conf << EOF
[libvirt]
virt_type = qemu
cpu_mode = none
EOF
2.4
Operate
2.4.1
install
openstack
Use byobu (installed in the steps above) to avoid losing the session while these long-running commands execute.
kolla-ansible -i multinode bootstrap-servers
kolla-ansible -i multinode prechecks
kolla-ansible -i multinode deploy
kolla-ansible -i multinode post-deploy
2.4.2
openstack
client
To run openstack commands, install the client and supporting packages and set up a suitable python environment.
yum install -y python3
pip install virtualenv
# set up a virtualenv using python3
virtualenv -p /usr/bin/python3 openstack
cat openstack/pyvenv.cfg
. openstack/bin/activate
pip install python-openstackclient python-glanceclient python-neutronclient
source /etc/kolla/admin-openrc.sh
#check
openstack token issue
2.4.3
create
instance
openstack flavor create --id 1 --ram 1024 --disk 1 --vcpus 1 tiny
openstack flavor create --id 2 --ram 4096 --disk 10 --vcpus 2 small
openstack flavor create --id 3 --ram 4096 --disk 30 --vcpus 2 medium
openstack flavor create --id 4 --ram 4096 --disk 50 --vcpus 2 medium2
ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub key1
openstack network create --share --provider-physical-network physnet1 --provider-network-type vlan --provider-segment=111 pro_vlan111
openstack subnet create --subnet-range 10.1.0.0/16 --gateway 10.1.0.1 --network pro_vlan111 --allocation-pool start=10.1.17.80,end=10.1.17.90 pro_vlan111_subnet1
openstack image create "cirros" --file /root/cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public
openstack network create --share --provider-physical-network physnet1 --provider-network-type vlan --provider-segment=126 pro_vlan126
openstack subnet create --subnet-range 192.168.126.0/24 --gateway 192.168.126.1 --network pro_vlan126 --allocation-pool start=192.168.126.80,end=192.168.126.90 pro_vlan126_subnet1
(openstack) [root@control01 ~]# openstack network list
+--------------------------------------+-------------+--------------------------------------+
| ID                                   | Name        | Subnets                              |
+--------------------------------------+-------------+--------------------------------------+
| 75d27032-3a3e-4679-89ec-aa33f43e61a1 | pro_vlan126 | 16794be6-48e4-4adc-bb2b-481fb16a6216 |
| 761ff68b-15e8-42c3-bca6-6bc794839072 | pro_vlan111 | f892645d-4fa2-43ae-9560-f51bb6403dd4 |
+--------------------------------------+-------------+--------------------------------------+
openstack server create --flavor 2 --image centos6 --nic net-id=761ff68b-15e8-42c3-bca6-6bc794839072 --key-name key1 inst1
openstack server create --flavor 1 --image cirros --nic net-id=761ff68b-15e8-42c3-bca6-6bc794839072 --key-name key1 inst1
3
Deploy Multi Region
https://blog.csdn.net/zhujisoft/article/details/107198603
https://docs.openstack.org/kolla-ansible/latest/user/multi-regions.html
3.1
Environment/Kolla
Follow the same 'Environment' steps on all nodes and the 'Setup kolla' steps on the deployment node (control01), as described in 'Deploy Single Region'.
3.2
Deploy
RegionOne
Deploy control01, compute01
3.2.1
Multinode
a.
multinode
[control]
# These hostnames must be resolvable from your deployment host
control01 ansible_connection=ssh ansible_user=root
# The above can also be specified as follows:
#control[01:03] ansible_user=kolla
# The network nodes are where your l3-agent and loadbalancers will run
# This can be the same as a host in the control group
[network]
control01 ansible_connection=ssh ansible_user=root
[compute]
compute01 ansible_connection=ssh ansible_user=root
[monitoring]
control01 ansible_connection=ssh ansible_user=root
# When compute nodes and control nodes use different interfaces,
# you need to comment out "api_interface" and other interfaces from the globals.yml
# and specify like below:
#compute01 neutron_external_interface=eth0 api_interface=em1 storage_interface=em1 tunnel_interface=em1
[storage]
control01 ansible_connection=ssh ansible_user=root
[deployment]
control01 ansible_connection=ssh
3.2.2
globals.yml
kolla_base_distro: "centos"
kolla_install_type: "source"
openstack_release: "train"
node_custom_config: "/etc/kolla/config"
kolla_internal_vip_address: "10.1.17.50"
docker_namespace: "kolla"
network_interface: "ens192"
tunnel_interface: "ens256"
neutron_external_interface: "ens224"
neutron_plugin_agent: "openvswitch"
neutron_tenant_network_types: "vxlan,vlan,flat"
keepalived_virtual_router_id: "56"
openstack_logging_debug: "True"
enable_cinder: "yes"
enable_cinder_backend_lvm: "yes"
enable_cinder_backup: "no"
enable_heat: "no"
enable_neutron_dvr: "yes"
# enable_neutron_agent_ha: "yes"
enable_neutron_provider_networks: "yes"
enable_nova_ssh: "yes"
nova_compute_virt_type: "qemu"
nova_console: "novnc"
openstack_region_name: "RegionOne"
multiple_regions_names:
  - "{{ openstack_region_name }}"
  - "RegionTwo"
enable_haproxy: "yes"
enable_horizon: "yes"
enable_keystone: "yes"
3.2.3
install
openstack
Use byobu (installed in the steps above) to avoid losing the session while these long-running commands execute.
kolla-ansible -i multinode bootstrap-servers
kolla-ansible -i multinode prechecks
kolla-ansible -i multinode deploy
kolla-ansible -i multinode post-deploy
3.3
Deploy
RegionTwo
Deploy control02, compute02
3.3.1
Multinode
b.
multinode2
[root@control01 ~]# cat multinode2 | more
# These initial groups are the only groups required to be modified. The
# additional groups are for more control of the environment.
[control]
# These hostnames must be resolvable from your deployment host
control02 ansible_connection=ssh ansible_user=root
# The above can also be specified as follows:
#control[01:03] ansible_user=kolla
# The network nodes are where your l3-agent and loadbalancers will run
# This can be the same as a host in the control group
[network]
control02 ansible_connection=ssh ansible_user=root
[compute]
compute02 ansible_connection=ssh ansible_user=root
[monitoring]
control02 ansible_connection=ssh ansible_user=root
# When compute nodes and control nodes use different interfaces,
# you need to comment out "api_interface" and other interfaces from the globals.yml
# and specify like below:
#compute01 neutron_external_interface=eth0 api_interface=em1 storage_interface=em1 tunnel_interface=em1
[storage]
control02 ansible_connection=ssh ansible_user=root
[deployment]
control02 ansible_connection=ssh
[baremetal:children]
control
network
….
3.3.2
globals.yml
kolla_base_distro: "centos"
kolla_install_type: "source"
openstack_release: "train"
node_custom_config: "/etc/kolla/config"
kolla_internal_vip_address: "10.1.17.54"
docker_namespace: "kolla"
network_interface: "ens192"
tunnel_interface: "ens256"
neutron_external_interface: "ens224"
neutron_plugin_agent: "openvswitch"
neutron_tenant_network_types: "vxlan,vlan,flat"
keepalived_virtual_router_id: "54"
openstack_logging_debug: "True"
enable_cinder: "yes"
enable_cinder_backend_lvm: "yes"
enable_cinder_backup: "no"
enable_heat: "no"
enable_neutron_dvr: "yes"
# enable_neutron_agent_ha: "yes"
enable_neutron_provider_networks: "yes"
enable_nova_ssh: "yes"
nova_compute_virt_type: "qemu"
nova_console: "novnc"
openstack_region_name: "RegionTwo"
# Use this option to define a list of region names - only needs to be configured
# in a multi-region deployment, and then only in the *first* region.
multiple_regions_names:
  - "{{ openstack_region_name }}"
  - "RegionTwo"
kolla_internal_fqdn_r1: 10.1.17.50
keystone_admin_url: "{{ admin_protocol }}://{{ kolla_internal_fqdn_r1 }}:{{ keystone_admin_port }}"
keystone_internal_url: "{{ internal_protocol }}://{{ kolla_internal_fqdn_r1 }}:{{ keystone_public_port }}"
openstack_auth:
  auth_url: "{{ admin_protocol }}://{{ kolla_internal_fqdn_r1 }}:{{ keystone_admin_port }}"
  username: "admin"
  password: "{{ keystone_admin_password }}"
  project_name: "admin"
  domain_name: "default"
enable_haproxy: "yes"
enable_horizon: "no"
enable_keystone: "no"
3.3.3
install
openstack
Use byobu (installed in the steps above) to avoid losing the session while these long-running commands execute.
kolla-ansible -i multinode2 bootstrap-servers
kolla-ansible -i multinode2 prechecks
kolla-ansible -i multinode2 deploy
kolla-ansible -i multinode2 post-deploy
3.4
Operate
3.4.1
openstack
client
To run openstack commands, install the client and supporting packages and set up a suitable python environment.
yum install -y python3
pip install virtualenv
# set up a virtualenv using python3
virtualenv -p /usr/bin/python3 openstack
cat openstack/pyvenv.cfg
. openstack/bin/activate
pip install python-openstackclient python-glanceclient python-neutronclient
#check
openstack token issue
3.4.2
create
instance
# RegionOne
. openstack/bin/activate
source /etc/kolla/1-admin-openrc.sh
openstack flavor create --id 1 --ram 1024 --disk 1 --vcpus 1 tiny
openstack flavor create --id 2 --ram 4096 --disk 10 --vcpus 2 small
openstack keypair create --public-key ~/.ssh/id_rsa.pub key1
openstack network create --share --provider-physical-network physnet1 --provider-network-type vlan --provider-segment=111 pro_vlan111
openstack subnet create --subnet-range 10.1.0.0/16 --gateway 10.1.0.1 --network pro_vlan111 --allocation-pool start=10.1.17.80,end=10.1.17.90 pro_vlan111_subnet1
openstack image create "cirros" --file /root/cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public
openstack network create --share --provider-physical-network physnet1 --provider-network-type vlan --provider-segment=126 pro_vlan126
openstack subnet create --subnet-range 192.168.126.0/24 --gateway 192.168.126.1 --network pro_vlan126 --allocation-pool start=192.168.126.80,end=192.168.126.90 pro_vlan126_subnet1
openstack network list
| 295117b1-4198-4dbe-895e-c5fb60fc1035 | pro_vlan111 | 543ec694-58fc-49b7-bc8f-40c15466bd58 |
| c9436e61-fd80-4921-9bf2-719374b81512 | pro_vlan126 | 9595ad64-9f7f-4986-ae00-5bdd59536e57 |
openstack server create --flavor 1 --image cirros --nic net-id=295117b1-4198-4dbe-895e-c5fb60fc1035 --key-name key1 inst1
# RegionTwo
. openstack/bin/activate
source /etc/kolla/2-admin-openrc.sh
openstack flavor create --id 3 --ram 1024 --disk 1 --vcpus 1 B-tiny
openstack flavor create --id 4 --ram 4096 --disk 10 --vcpus 2 B-small
openstack keypair create --public-key ~/.ssh/id_rsa.pub key2
openstack network create --share --provider-physical-network physnet1 --provider-network-type vlan --provider-segment=111 B-pro_vlan111
openstack subnet create --subnet-range 10.1.0.0/16 --gateway 10.1.0.1 --network B-pro_vlan111 --allocation-pool start=10.1.17.100,end=10.1.17.110 B-pro_vlan111_subnet1
openstack image create "cirros" --file /root/cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public
openstack network create --share --provider-physical-network physnet1 --provider-network-type vlan --provider-segment=126 B-pro_vlan126
openstack subnet create --subnet-range 192.168.126.0/24 --gateway 192.168.126.1 --network B-pro_vlan126 --allocation-pool start=192.168.126.100,end=192.168.126.110 B-pro_vlan126_subnet1
openstack network list
| 8b6f6f4f-0bdf-4951-a766-9c59f32130ba | B-pro_vlan126 | 3d55038c-c447-4790-9d19-95a12b7a0989 |
| dc57a6c0-d03d-4162-a17d-87595809d68f | B-pro_vlan111 | 706674b8-a78f-4f90-8217-1df598525d1d |
openstack server create --flavor 3 --image cirros --nic net-id=dc57a6c0-d03d-4162-a17d-87595809d68f --key-name key2 B-inst1