Deploy OpenStack Ocata on self-provisioned VirtualBox Ubuntu VMs manually
Today I am publishing the notes I used a while back to introduce OpenStack to colleagues at work. This is a deployment of the then-current Ocata release on Ubuntu from more than a year ago. It may help beginners, although the whole process is really just following the official installation guide.
Installation Guide
Latest version: installation-openstack-ubuntu-note.md hosted on Github.com/wey-gu
Ubuntu was chosen as host OS.
“It’s a good way to learn by installing it manually for as many services as you could :-) .”
Wey Gu
Host networking
ref: https://docs.openstack.org/install-guide/environment-networking.html
ref: https://help.ubuntu.com/lts/serverguide/network-configuration.html
The example architectures assume use of the following networks:
Management on 10.0.0.0/24 with gateway 10.0.0.1
This network requires a gateway to provide Internet access to all nodes for administrative purposes such as package installation, security updates, DNS, and NTP.
Provider on 203.0.113.0/24 with gateway 203.0.113.1
This network requires a gateway to provide Internet access to instances in your OpenStack environment.
My network solution
1 | Net0: |
Edit the /etc/network/interfaces file to contain the following. Replace INTERFACE_NAME with the actual interface name, for example eth1 or ens224.
1 | # The provider network interface |
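For reference, the provider interface stanza in the official guide assigns no IP address and simply brings the interface up; INTERFACE_NAME is a placeholder:

```
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
```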
Base Machine
- download image from https://launchpad.net/ubuntu/+mirror/mirrors.neusoft.edu.cn-release
Change root password
```
$ sudo su
# passwd
```
Allow root ssh with password
```
# vi /etc/ssh/sshd_config
PermitRootLogin yes
```
Check NIC names
```
root@ubuntu:~# dmesg | grep rename
[    2.799294] e1000 0000:00:09.0 enp0s9: renamed from eth2
[    2.800192] e1000 0000:00:0a.0 enp0s10: renamed from eth3
[    2.801072] e1000 0000:00:08.0 enp0s8: renamed from eth1
[    2.804067] e1000 0000:00:03.0 enp0s3: renamed from eth0
```
Configure the management network as a dummy one
```
# vi /etc/network/interfaces
auto enp0s3
iface enp0s3 inet static
address 10.20.0.11
netmask 255.255.255.0
```
NTP
Install chrony:
```
# apt install chrony
```
Edit the /etc/chrony/chrony.conf file and add, change, or remove these keys as necessary for your environment:
```
allow 10.20.0.0/24
```
Restart the service:
```
# service chrony restart
```
Install OpenStack packages
ref : https://docs.openstack.org/install-guide/environment-packages.html
Enable the OpenStack repository:
```
# apt install software-properties-common
# add-apt-repository cloud-archive:ocata
```
Upgrade the packages on all nodes:
Setting an apt proxy beforehand can save a lot of time:
```
# vi /etc/apt/apt.conf.d/90proxy
Acquire::http::Proxy "http://<>:8080";
Acquire::https::Proxy "http://<>:8080";
# sed -i -e 's/cn/us/g' /etc/apt/sources.list
# apt update && apt dist-upgrade -y
```
Install the OpenStack client:
```
# apt install python-openstackclient -y
```
Controller node actions
management network eth0 (enp0s3)
1 | # vi /etc/network/interfaces |
1 | # ifup enp0s3 |
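A minimal management-interface stanza for the controller, assuming the 10.20.0.10 address this note uses for the controller elsewhere:

```
# management network interface
auto enp0s3
iface enp0s3 inet static
    address 10.20.0.10
    netmask 255.255.255.0
```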
hostname and hosts
1 | # echo controller > /etc/hostname |
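The /etc/hosts file should resolve both node names; a sketch assuming the 10.20.0.10/10.20.0.20 addressing used in this note:

```
# controller
10.20.0.10   controller

# compute
10.20.0.20   compute
```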
SQL database
Install package
1 | # apt install mariadb-server python-pymysql -y |
Create and edit the /etc/mysql/mariadb.conf.d/99-openstack.cnf
file and complete the following actions:
Create a
[mysqld]
section, and set thebind-address
key to the management IP address of the controller node to enable access by other nodes via the management network. Set additional keys to enable useful options and the UTF-8 character set:
1 | [mysqld] |
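Per the official guide, the [mysqld] section typically ends up looking like this; bind-address here assumes this note's controller management IP (10.20.0.10):

```
[mysqld]
bind-address = 10.20.0.10

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```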
restart database service
1 | # service mysql restart |
Secure the database service by running the mysql_secure_installation
script. In particular, choose a suitable password for the database root
account:
1 | # mysql_secure_installation |
Message queue
Install the package:
1 | # apt install rabbitmq-server |
Add the openstack
user:
1 | # rabbitmqctl add_user openstack RABBIT_PASS |
Replace RABBIT_PASS
with a suitable password.
Permit configuration, write, and read access for the openstack
user:
1 | # rabbitmqctl set_permissions openstack ".*" ".*" ".*" |
Memcached
Install the packages:
1 | # apt install memcached python-memcache |
Edit the /etc/memcached.conf
file and configure the service to use the management IP address of the controller node. This is to enable access by other nodes via the management network:
1 | -l 10.20.0.10 |
Change the existing line that had
-l 127.0.0.1
.
Restart the Memcached service:
1 | # service memcached restart |
Compute node actions
management network eth0 (enp0s3)
1 | # vi /etc/network/interfaces |
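A minimal management-interface stanza for the compute node, assuming the 10.20.0.20 address referenced later in this note:

```
# management network interface
auto enp0s3
iface enp0s3 inet static
    address 10.20.0.20
    netmask 255.255.255.0
```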
configure NTP by editing /etc/chrony/chrony.conf
1 | server 10.20.0.10 iburst |
change hostname and hosts
1 | # echo compute > /etc/hostname |
Keystone installation
ref: https://docs.openstack.org/newton/install-guide-ubuntu/keystone.html
Keystone will be installed in
controller node
Before you configure the OpenStack Identity service, you must create a database and an administration token.
To create the database, complete the following actions:
Use the database access client to connect to the database server as the root user:
```
$ mysql -u root -p
```
On Ubuntu 16.04 LTS, local root access needs no username/password:
```
# mysql
```
Create the keystone database:
```
mysql> CREATE DATABASE keystone;
```
Grant proper access to the keystone database:
```
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
```
Replace KEYSTONE_DBPASS with a suitable password. Exit the database access client.
Run the following command to install the packages:
1 | # apt install keystone -y |
Edit the /etc/keystone/keystone.conf file and complete the following actions:
In the [database] section, configure database access:
```
[database]
...
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
```
Replace KEYSTONE_DBPASS with the password you chose for the database. Comment out or remove any other connection options in the [database] section.
In the [token] section, configure the Fernet token provider:
```
[token]
...
provider = fernet
```
Populate the Identity service database:
```
# su -s /bin/sh -c "keystone-manage db_sync" keystone
```
Initialize Fernet key repositories:
```
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```
Bootstrap the Identity service:
```
# keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:35357/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
```
Replace ADMIN_PASS with a suitable password for an administrative user.
Configure the Apache HTTP server
Edit the /etc/apache2/apache2.conf file and configure the ServerName option to reference the controller node:
```
ServerName controller
```
Finalize the installation
Restart the Apache service and remove the default SQLite database:
```
# service apache2 restart
# rm -f /var/lib/keystone/keystone.db
```
Configure the administrative account
```
$ export OS_USERNAME=admin
$ export OS_PASSWORD=ADMIN_PASS
$ export OS_PROJECT_NAME=admin
$ export OS_USER_DOMAIN_NAME=Default
$ export OS_PROJECT_DOMAIN_NAME=Default
$ export OS_AUTH_URL=http://controller:35357/v3
$ export OS_IDENTITY_API_VERSION=3
```
Replace ADMIN_PASS with the password used in the keystone-manage bootstrap command from the section called Install and configure.
Create a domain, projects, users, and roles
The Identity service provides authentication services for each OpenStack service. The authentication service uses a combination of domains, projects, users, and roles.
This guide uses a service project that contains a unique user for each service that you add to your environment. Create the service project:
```
$ openstack project create --domain default \
  --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 24ac7f19cd944f4cba1d77469b2a73ed |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | default                          |
+-------------+----------------------------------+
```
Regular (non-admin) tasks should use an unprivileged project and user. As an example, this guide creates the demo project and user. Create the demo project:
```
$ openstack project create --domain default \
  --description "Demo Project" demo
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Demo Project                     |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 231ad6e7ebba47d6a1e57e1cc07ae446 |
| is_domain   | False                            |
| name        | demo                             |
| parent_id   | default                          |
+-------------+----------------------------------+
```
Do not repeat this step when creating additional users for this project.
Create the demo user:
```
$ openstack user create --domain default \
  --password-prompt demo
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | aeda23aa78f44e859900e22c24817832 |
| name                | demo                             |
| password_expires_at | None                             |
+---------------------+----------------------------------+
```
Create the user role:
```
$ openstack role create user
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | 997ce8d05fc143ac97d83fdfb5998552 |
| name      | user                             |
+-----------+----------------------------------+
```
Add the user role to the demo project and user:
```
$ openstack role add --project demo --user demo user
```
Verify operation
For security reasons, disable the temporary authentication token mechanism:
Edit the /etc/keystone/keystone-paste.ini file and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.
Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:
1 | $ unset OS_AUTH_URL OS_PASSWORD |
As the admin
user, request an authentication token:
1 | $ openstack --os-auth-url http://controller:35357/v3 \ |
This command uses the password for the admin user. As set above, it is ADMIN_PASS.
As the demo
user, request an authentication token:
1 | $ openstack --os-auth-url http://controller:5000/v3 \ |
This command uses the password for the
demo
user and API port 5000 which only allows regular (non-admin) access to the Identity service API.
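For reference, the full token requests as given in the official guide look roughly like this (enter ADMIN_PASS and the demo password when prompted):

```
$ openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue

$ openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name demo --os-username demo token issue
```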
Create OpenStack client environment scripts
The previous section used a combination of environment variables and command options to interact with the Identity service via the openstack client. To increase efficiency of client operations, OpenStack supports simple client environment scripts, also known as OpenRC files. These scripts typically contain common options for all clients, but also support unique options. For more information, see the OpenStack End User Guide.
Creating the scripts
Create client environment scripts for the admin
and demo
projects and users. Future portions of this guide reference these scripts to load appropriate credentials for client operations.
Edit the admin-openrc file and add the following content:
```
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
```
Replace ADMIN_PASS with the password you chose for the admin user in the Identity service.
Edit the demo-openrc file and add the following content:
```
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
```
Replace OS_PASSWORD=demo with the password you chose for the demo user in the Identity service.
Using the scripts
To run clients as a specific project and user, you can simply load the associated client environment script prior to running them. For example:
Load the admin-openrc file to populate environment variables with the location of the Identity service and the admin project and user credentials:
```
$ . admin-openrc
```
Request an authentication token:
```
root@ubuntu:~# openstack token issue --max-width 70
+------------+-------------------------------------------------------+
| Field      | Value                                                 |
+------------+-------------------------------------------------------+
| expires    | 2017-08-23T14:00:10+0000                              |
| id         | gAAAAABZnXxaKuuwh9Kw-dbY1mSn8LeNNQMKmIj2EW8jyjO0NSy5H |
|            | QPno4Tj6NqqSkumKhRZW8lPS1nZC2pm5fCuH5XMtVfJTu89RX6Sba |
|            | -vSv-OZl5uHvRY4KOK03WH15Dnp1XbWN97xY8tR_kAhc-69       |
|            | -WvDe1DLS6vKr-bKbYDVXLqlLshE8E                        |
| project_id | 78c9c849237649a3a8c4526167427589                      |
| user_id    | d8efd16c30904a7992010abe4bdb9a2b                      |
+------------+-------------------------------------------------------+
```
Glance installation
ref: https://docs.openstack.org/newton/install-guide-ubuntu/glance.html
For simplicity, this guide describes configuring the Image service to use the file back end, which uploads and stores images in a directory on the controller node hosting the Image service. By default, this directory is /var/lib/glance/images/.
Before you proceed, ensure that the controller node has at least several gigabytes of space available in this directory. Keep in mind that since the file back end is often local to a controller node, it is not typically suitable for a multi-node glance deployment.
For information on requirements for other back ends, see Configuration Reference.
Install and configure
This section describes how to install and configure the Image service, code-named glance, on the controller node. For simplicity, this configuration stores images on the local file system.
Prerequisites
Before you install and configure the Image service, you must create a database, service credentials, and API endpoints.
To create the database, complete these steps:
Use the database access client to connect to the database server as the root user:
```
$ mysql
```
Create the glance database:
```
mysql> CREATE DATABASE glance;
```
Grant proper access to the glance database:
```
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY 'GLANCE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY 'GLANCE_DBPASS';
```
Replace GLANCE_DBPASS with a suitable password. Exit the database access client.
Source the admin credentials to gain access to admin-only CLI commands:
```
$ . admin-openrc
```
To create the service credentials, complete these steps:
Create the glance user:
```
$ openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 3f4e777c4062483ab8d9edd7dff829df |
| name                | glance                           |
| password_expires_at | None                             |
+---------------------+----------------------------------+
```
Add the admin role to the glance user and service project:
```
$ openstack role add --project service --user glance admin
```
This command provides no output.
Create the glance service entity:
```
$ openstack service create --name glance \
  --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
```
Create the Image service API endpoints:
```
$ openstack endpoint create --region RegionOne \
  image public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 340be3625e9b4239a6415d034e98aace |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne \
  image internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a6e4b153c2ae4c919eccfdbb7dceb5d2 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne \
  image admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 0c37ed58103f4300a84ff125a539032d |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
```
Install and configure components
Install the packages:
1 | # apt install glance -y |
Edit the /etc/glance/glance-api.conf file and complete the following actions:
In the [database] section, configure database access:
```
[database]
...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
```
Replace GLANCE_DBPASS with the password you chose for the Image service database.
In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:
```
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance

[paste_deploy]
...
flavor = keystone
```
Replace password = glance with the password you chose for the glance user in the Identity service.
Comment out or remove any other options in the [keystone_authtoken] section.
In the [glance_store] section, configure the local file system store and location of image files:
```
[glance_store]
...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```
Edit the /etc/glance/glance-registry.conf file and complete the following actions:
In the [database] section, configure database access:
```
[database]
...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
```
Replace GLANCE_DBPASS with the password you chose for the Image service database.
In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:
```
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance

[paste_deploy]
...
flavor = keystone
```
Replace password = glance with the password you chose for the glance user in the Identity service.
Comment out or remove any other options in the [keystone_authtoken] section.
Populate the Image service database:
1 | # su -s /bin/sh -c "glance-manage db_sync" glance |
Ignore any deprecation messages in this output.
Finalize installation
Restart the Image services:
1 | # service glance-registry restart |
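The official guide restarts both Image service daemons; restarting glance-api as well avoids running on stale configuration:

```
# service glance-registry restart
# service glance-api restart
```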
Verify operation
Verify operation of the Image service using CirrOS, a small Linux image that helps you test your OpenStack deployment.
For more information about how to download and build images, see OpenStack Virtual Machine Image Guide. For information about how to manage images, see the OpenStack End User Guide.
Source the admin credentials to gain access to admin-only CLI commands:
```
$ . admin-openrc
```
Download the source image:
```
$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
```
Tip: add a proxy to improve speed on an office network:
```
$ export http_proxy=http://<>:8080
# after wget
$ unset http_proxy
```
Install wget if your distribution does not include it.
Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so all projects can access it:
```
$ openstack image create "cirros" \
  --file cirros-0.3.5-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | 133eae9fb1c98f45894a4e60d8736619                     |
| container_format | bare                                                 |
| created_at       | 2015-03-26T16:52:10Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/cc5c6982-4910-471e-b864-1098015901b5/file |
| id               | cc5c6982-4910-471e-b864-1098015901b5                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | ae7a98326b9c455588edd2656d723b9d                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13200896                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2015-03-26T16:52:10Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
```
For information about the openstack image create parameters, see Create or update an image (glance) in the OpenStack User Guide.
For information about disk and container formats for images, see Disk and container formats for images in the OpenStack Virtual Machine Image Guide.
OpenStack generates IDs dynamically, so you will see different values in the example command output.
Confirm upload of the image and validate attributes:
```
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros | active |
+--------------------------------------+--------+--------+
```
Nova installation
ref: https://docs.openstack.org/newton/install-guide-ubuntu/nova.html
Nova install and configure controller node
Prerequisites
Before you install and configure the Compute service, you must create databases, service credentials, and API endpoints.
To create the databases, complete these steps:
Use the database access client to connect to the database server as the root user:
```
# mysql
```
Create the nova_api, nova, and nova_cell0 databases:
```
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
```
Grant proper access to the databases:
```
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
```
Replace NOVA_DBPASS with a suitable password.
Exit the database access client.
Source the admin credentials to gain access to admin-only CLI commands:
```
$ . admin-openrc
```
Create the Compute service credentials:
Create the nova user:
```
$ openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 8a7dbf5279404537b1c7b86c033620fe |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
```
Add the admin role to the nova user:
```
$ openstack role add --project service --user nova admin
```
This command provides no output.
Create the nova service entity:
```
$ openstack service create --name nova \
  --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 060d59eac51b4594815603d75a00aba2 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
```
Create the Compute API service endpoints:
```
$ openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 3c1caa473bfe4390a11e7177894bcc7b |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | e3c918de680746a586eac1f2d9bc10ab |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 38f7af91666a47cfb97b4dc790b94424 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
```
Create a Placement service user using your chosen PLACEMENT_PASS:
```
$ openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | fa742015a6494a949f67629884fc7ec8 |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
```
Add the Placement user to the service project with the admin role:
```
$ openstack role add --project service --user placement admin
```
This command provides no output.
Create the Placement API entry in the service catalog:
```
$ openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | 2d1a27022e6e4185b86adac4444c495f |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+
```
Create the Placement API service endpoints:
```
$ openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 2b1b2637908b4137a9c2e0470487cbc0 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 02bcda9a150a4bd7993ff4879df971ab |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 3d71177b9e0f406f98cbff198d74b182 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
```
Install and configure components
Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates potential default configuration options that you should retain.
Install the packages:
```
# apt install nova-api nova-conductor nova-consoleauth \
  nova-novncproxy nova-scheduler nova-placement-api
```
Edit the /etc/nova/nova.conf file and complete the following actions:
In the [api_database] and [database] sections, configure database access:
```
[api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
```
Replace NOVA_DBPASS with the password you chose for the Compute databases.
In the [DEFAULT] section, configure RabbitMQ message queue access:
```
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
```
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
In the [api] and [keystone_authtoken] sections, configure Identity service access:
```
[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
```
Replace password = nova with the password you chose for the nova user in the Identity service.
Comment out or remove any other options in the [keystone_authtoken] section.
In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the controller node:
```
[DEFAULT]
# ...
my_ip = 10.20.0.10
```
In the [DEFAULT] section, enable support for the Networking service:
```
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
```
By default, Compute uses an internal firewall driver. Since the Networking service includes a firewall driver, you must disable the Compute firewall driver by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
In the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node:
```
[vnc]
enabled = true
# ...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
```
In the [glance] section, configure the location of the Image service API:
```
[glance]
# ...
api_servers = http://controller:9292
```
In the [oslo_concurrency] section, configure the lock path:
```
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
```
Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.
In the [placement] section, configure the Placement API:
```
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
```
Replace PLACEMENT_PASS with the password you chose for the placement user in the Identity service. Comment out any other options in the [placement] section.
Populate the nova-api database:
```
# su -s /bin/sh -c "nova-manage api_db sync" nova
```
Ignore any deprecation messages in this output.
Register the cell0 database:
```
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
```
Create the cell1 cell:
```
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
109e1d4b-536a-40d0-83c6-5f121b82b650
```
Populate the nova database:
```
# su -s /bin/sh -c "nova-manage db sync" nova
```
Verify nova cell0 and cell1 are registered correctly:
```
# nova-manage cell_v2 list_cells
+-------+--------------------------------------+
| Name  | UUID                                 |
+-------+--------------------------------------+
| cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 |
| cell0 | 00000000-0000-0000-0000-000000000000 |
+-------+--------------------------------------+
```
Finalize installation
Restart the Compute services:
```
# service nova-api restart
# service nova-consoleauth restart
# service nova-scheduler restart
# service nova-conductor restart
# service nova-novncproxy restart
```
Nova Install and configure a compute node
This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or VMs. For simplicity, this configuration uses the QEMU hypervisor with the KVM extension on compute nodes that support hardware acceleration for virtual machines.
Install and configure components
Install the packages:
```
# apt install nova-compute
```
Edit the /etc/nova/nova.conf file and complete the following actions:
In the [DEFAULT] section, configure RabbitMQ message queue access:
```
[DEFAULT]
...
transport_url = rabbit://openstack:RABBIT_PASS@controller
```
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
```
[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova
```
Replace password = nova with the password you chose for the nova user in the Identity service.
Comment out or remove any other options in the [keystone_authtoken] section.
In the [DEFAULT] section, configure the my_ip option:
```
[DEFAULT]
...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
```
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the example architecture. Here our compute node is 10.20.0.20.
In the [DEFAULT] section, enable support for the Networking service. By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver:
```
[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
```
In the [vnc] section, enable and configure remote console access:
```
[vnc]
...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
```
The server component listens on all IP addresses and the proxy component only listens on the management interface IP address of the compute node.
The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node.
If the web browser to access remote consoles resides on a host that cannot resolve the controller hostname, you must replace controller with the management interface IP address of the controller node.
In the [glance] section, configure the location of the Image service API:
```
[glance]
...
api_servers = http://controller:9292
```
In the [oslo_concurrency] section, configure the lock path:
```
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
```
Due to a packaging bug, remove the log-dir option from the [DEFAULT] section.
In the [placement] section, configure the Placement API:
```
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = placement
```
Replace placement with the password you choose for the placement user in the Identity service. Comment out any other options in the [placement] section.
Finalize installation
Determine whether your compute node supports hardware acceleration for virtual machines:
1 | $ egrep -c '(vmx|svm)' /proc/cpuinfo |
If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration.
If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
Edit the [libvirt] section in the /etc/nova/nova-compute.conf file as follows:
```
[libvirt]
...
virt_type = qemu
```
Restart the Compute service:
1 | # service nova-compute restart |
Add the compute node to the cell database
Run the following commands on the controller node.
Source the admin credentials to enable admin-only CLI commands, then confirm there are compute hosts in the database:
```
$ . admin-openrc
$ openstack hypervisor list
+----+---------------------+-----------------+-----------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP   | State |
+----+---------------------+-----------------+-----------+-------+
|  1 | compute1            | QEMU            | 10.0.0.31 | up    |
+----+---------------------+-----------------+-----------+-------+
```
Discover compute hosts:
```
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
```
When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts
on the controller node to register those new compute nodes. Alternatively, you can set an appropriate interval in /etc/nova/nova.conf
:
1 | [scheduler] |
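Per the official guide, the interval option looks like this (300 seconds is the suggested value):

```
[scheduler]
discover_hosts_in_cells_interval = 300
```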
Neutron installation
ref: https://docs.openstack.org/newton/install-guide-ubuntu/neutron.html
This chapter explains how to install and configure the Networking service (neutron) using the provider networks.
For more information about the Networking service including virtual networking components, layout, and traffic flows, see the OpenStack Networking Guide.
Tenant/Project network configuration
On both the controller and compute nodes, the tenant/project network (eth1 in our design) needs to be configured:
# vi /etc/network/interfaces
Add below lines accordingly:
1 | ## provider network |
Neutron Install and configure controller node
Prerequisites
Before you configure the OpenStack Networking (neutron) service, you must create a database, service credentials, and API endpoints.
To create the database, complete these steps:
Use the database access client to connect to the database server as the root user:
```
$ mysql -u root -p
```
Create the neutron database:
```
mysql> CREATE DATABASE neutron;
```
Grant proper access to the neutron database, replacing NEUTRON_DBPASS with a suitable password:
```
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
```
Exit the database access client.
Source the admin credentials to gain access to admin-only CLI commands:
```
$ . admin-openrc
```
To create the service credentials, complete these steps:
Create the neutron user:
```
$ openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 319f34694728440eb8ffcb27b6dd8b8a |
| name                | neutron                          |
| password_expires_at | None                             |
+---------------------+----------------------------------+
```
Add the admin role to the neutron user:
```
$ openstack role add --project service --user neutron admin
```
This command provides no output.
Create the neutron service entity:
```
$ openstack service create --name neutron \
  --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | f71529314dab4a4d8eca427e701d209e |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
```
Create the Networking service API endpoints:
```
$ openstack endpoint create --region RegionOne \
  network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 85d80a6d02fc4b7683f611d7fc1493a3 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne \
  network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 09753b537ac74422a68d2d791cf3714f |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne \
  network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 1ee14289c9374dffb5db92a5c112fc4e |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
```
Configure networking options
You can deploy the Networking service using one of two architectures represented by options 1 and 2.
Option 1 deploys the simplest possible architecture that only supports attaching instances to provider (external) networks. No self-service (private) networks, routers, or floating IP addresses. Only the admin
or other privileged user can manage provider networks.
Here we choose Option 1.
Networking Option 1: Provider networks
Install and configure the Networking components on the controller node.
Install the components
1 | # apt install neutron-server neutron-plugin-ml2 \ |
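For reference, the full package list from the official guide for the provider-networks option is:

```
# apt install neutron-server neutron-plugin-ml2 \
  neutron-linuxbridge-agent neutron-dhcp-agent \
  neutron-metadata-agent
```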
Configure the server component
The Networking server component configuration includes the database, authentication mechanism, message queue, topology change notifications, and plug-in.
Edit the /etc/neutron/neutron.conf file and complete the following actions:
In the [database] section, configure database access:
```
[database]
...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
```
Replace NEUTRON_DBPASS with the password you chose for the database.
Comment out or remove any other connection options in the [database] section.
In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in and disable additional plug-ins:
```
[DEFAULT]
...
core_plugin = ml2
service_plugins =
```
In the [DEFAULT] section, configure RabbitMQ message queue access:
```
[DEFAULT]
...
transport_url = rabbit://openstack:RABBIT_PASS@controller
```
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
```
[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = neutron
```
Replace password = neutron with the password you chose for the neutron user in the Identity service.
Comment out or remove any other options in the [keystone_authtoken] section.
In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:
```
[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

[nova]
...
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = nova
```
Replace password = nova with the password you chose for the nova user in the Identity service.
Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual networking infrastructure for instances.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:
In the [ml2] section, enable flat and VLAN networks:
```
[ml2]
...
type_drivers = flat,vlan
```
In the [ml2] section, disable self-service networks:
```
[ml2]
...
tenant_network_types =
```
In the [ml2] section, enable the Linux bridge mechanism:
```
[ml2]
...
mechanism_drivers = linuxbridge
```
After you configure the ML2 plug-in, removing values in the type_drivers option can lead to database inconsistency.
In the [ml2] section, enable the port security extension driver:
```
[ml2]
...
extension_drivers = port_security
```
In the [ml2_type_flat] section, configure the provider virtual network as a flat network:
```
[ml2_type_flat]
...
flat_networks = provider
```
In the [securitygroup] section, enable ipset to increase efficiency of security group rules:
```
[securitygroup]
...
enable_ipset = True
```
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:
```
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
```
Replace PROVIDER_INTERFACE_NAME with the name of the underlying provider physical network interface. See Host networking for more information. In our case it is enp0s10, the bridged NIC of the controller network.
In the [vxlan] section, disable VXLAN overlay networks:
```
[vxlan]
enable_vxlan = False
```
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
```
[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```
Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:
In the [DEFAULT] section, configure the Linux bridge interface driver, Dnsmasq DHCP driver, and enable isolated metadata so instances on provider networks can access metadata over the network:
```
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
```
Return to Networking controller node configuration.
Configure the metadata agent
The metadata agent provides configuration information such as credentials to instances.
Edit the /etc/neutron/metadata_agent.ini file and complete the following actions:
In the [DEFAULT] section, configure the metadata host and shared secret:
```
[DEFAULT]
...
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
```
Replace METADATA_SECRET with a suitable secret for the metadata proxy.
Configure the Compute service to use the Networking service
Edit the /etc/nova/nova.conf file and perform the following actions:
In the [neutron] section, configure access parameters, enable the metadata proxy, and configure the secret:
```
[neutron]
...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
```
Replace password = neutron with the password you chose for the neutron user in the Identity service.
Replace METADATA_SECRET with the secret you chose for the metadata proxy.
Finalize installation
Populate the database:
```
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```
Database population occurs later for Networking because the script requires complete server and plug-in configuration files.
Restart the Compute API service:
```
# service nova-api restart
```
Restart the Networking services:
```
# service neutron-server restart
# service neutron-linuxbridge-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
```
Verify operation
1 | root@controller:~# openstack network agent list --max-width 50 |
Neutron Install and configure compute node
The compute node handles connectivity and security groups for instances.
Install the components
1 | # apt install neutron-linuxbridge-agent -y |
Configure the common component
The Networking common component configuration includes the authentication mechanism, message queue, and plug-in.
Edit the /etc/neutron/neutron.conf file and complete the following actions:
In the [database] section, comment out any connection options because compute nodes do not directly access the database.
In the [DEFAULT] section, configure RabbitMQ message queue access:
```
[DEFAULT]
...
transport_url = rabbit://openstack:RABBIT_PASS@controller
```
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
```
[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = neutron
```
Replace password = neutron with the password you chose for the neutron user in the Identity service.
Comment out or remove any other options in the [keystone_authtoken] section.
Configure networking options
Choose the same networking option that you chose for the controller node to configure services specific to it. Afterwards, return here and proceed to Configure the Compute service to use the Networking service.
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:
```
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
```
Replace PROVIDER_INTERFACE_NAME with the name of the underlying provider physical network interface. See Host networking for more information.
In the [vxlan] section, disable VXLAN overlay networks:
```
[vxlan]
enable_vxlan = False
```
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
```
[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```
Return to Networking compute node configuration.
Configure the Compute service to use the Networking service
Edit the /etc/nova/nova.conf file and complete the following actions:
In the [neutron] section, configure access parameters:
```
[neutron]
...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
```
Replace password = neutron with the password you chose for the neutron user in the Identity service.
Finalize installation
Restart the Compute service:
```
# service nova-compute restart
```
Restart the Linux bridge agent:
```
# service neutron-linuxbridge-agent restart
```
Verify operation
Perform these commands on the controller node.
Source the admin credentials to gain access to admin-only CLI commands:
```
$ . admin-openrc
```
List loaded extensions to verify successful launch of the neutron-server process:
```
$ neutron ext-list
+---------------------------+-----------------------------------------------+
| alias                     | name                                          |
+---------------------------+-----------------------------------------------+
| default-subnetpools       | Default Subnetpools                           |
| network-ip-availability   | Network IP Availability                       |
| network_availability_zone | Network Availability Zone                     |
| auto-allocated-topology   | Auto Allocated Topology Services              |
| ext-gw-mode               | Neutron L3 Configurable external gateway mode |
| binding                   | Port Binding                                  |
| agent                     | agent                                         |
| subnet_allocation         | Subnet Allocation                             |
| l3_agent_scheduler        | L3 Agent Scheduler                            |
| tag                       | Tag support                                   |
| external-net              | Neutron external network                      |
| net-mtu                   | Network MTU                                   |
| availability_zone         | Availability Zone                             |
| quotas                    | Quota management support                      |
| l3-ha                     | HA Router extension                           |
| flavors                   | Neutron Service Flavors                       |
| provider                  | Provider Network                              |
| multi-provider            | Multi Provider Network                        |
| address-scope             | Address scope                                 |
| extraroute                | Neutron Extra Route                           |
| timestamp_core            | Time Stamp Fields addition for core resources |
| router                    | Neutron L3 Router                             |
| extra_dhcp_opt            | Neutron Extra DHCP opts                       |
| dns-integration           | DNS Integration                               |
| security-group            | security-group                                |
| dhcp_agent_scheduler      | DHCP Agent Scheduler                          |
| router_availability_zone  | Router Availability Zone                      |
| rbac-policies             | RBAC Policies                                 |
| standard-attr-description | standard-attr-description                     |
| port-security             | Port Security                                 |
| allowed-address-pairs     | Allowed Address Pairs                         |
| dvr                       | Distributed Virtual Router                    |
+---------------------------+-----------------------------------------------+
```
List agents to verify successful launch of the neutron agents:
```
$ openstack network agent list
root@controller:~# openstack network agent list --max-width 70
+----------+------------+----------+-------------------+-------+-------+------------+
| ID       | Agent Type | Host     | Availability Zone | Alive | State | Binary     |
+----------+------------+----------+-------------------+-------+-------+------------+
| 143d7731 | Linux      | compute  | None              | True  | UP    | neutron-li |
| -9227-4b | bridge     |          |                   |       |       | nuxbridge- |
| af-9052- | agent      |          |                   |       |       | agent      |
| 292d7aea |            |          |                   |       |       |            |
| 6992     |            |          |                   |       |       |            |
| 1d661145 | Linux      | controll | None              | True  | UP    | neutron-li |
| -0941    | bridge     | er       |                   |       |       | nuxbridge- |
| -411d-9b | agent      |          |                   |       |       | agent      |
| 18-b3371 |            |          |                   |       |       |            |
| fe57c4b  |            |          |                   |       |       |            |
| 7502e1a3 | DHCP agent | controll | nova              | True  | UP    | neutron-   |
| -998d-   |            | er       |                   |       |       | dhcp-agent |
| 4aca-91e |            |          |                   |       |       |            |
| 4-ca17e1 |            |          |                   |       |       |            |
| b10c82   |            |          |                   |       |       |            |
| 7c47ac70 | Metadata   | controll | None              | True  | UP    | neutron-   |
| -5de2-44 | agent      | er       |                   |       |       | metadata-  |
| 42-8fc1- |            |          |                   |       |       | agent      |
| 91fe97ae |            |          |                   |       |       |            |
| 120f     |            |          |                   |       |       |            |
+----------+------------+----------+-------------------+-------+-------+------------+
```
The output should indicate three agents on the controller node and one agent on each compute node.
Congratulations! Let’s try booting an instance
Create provider network/subnetwork
ref: https://docs.openstack.org/newton/install-guide-ubuntu/launch-instance-networks-provider.html
The --provider:physical_network provider and --provider:network_type flat options connect the flat virtual network to the flat (native/untagged) physical network on the eth1 interface on the host.
Note: in the network creation below, the parameter:
1 | --provider-network-type flat \ |
corresponds to:
1 |
|
1 | root@controller:~# . admin-openrc |
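For completeness, a minimal sketch of the full network and subnet creation from the official guide is below; the allocation pool, DNS server, gateway and CIDR are the guide's 203.0.113.0/24 placeholders, to be swapped for your own provider network (here, the office ECN network):

```bash
# Sketch of the provider network/subnet creation (Ocata openstack CLI).
# All addresses below are placeholders from the official guide example.
. admin-openrc

openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider

openstack subnet create --network provider \
  --allocation-pool start=203.0.113.101,end=203.0.113.250 \
  --dns-nameserver 8.8.4.4 --gateway 203.0.113.1 \
  --subnet-range 203.0.113.0/24 provider
```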
Create flavor
The smallest default flavor consumes 512 MB memory per instance. For environments with compute nodes containing less than 4 GB memory, we recommend creating the m1.nano
flavor that only requires 64 MB per instance. Only use this flavor with the CirrOS image for testing purposes.
1 | $ openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano |
Add security group rules
By default, the default
security group applies to all instances and includes firewall rules that deny remote access to instances. For Linux images such as CirrOS, we recommend allowing at least ICMP (ping) and secure shell (SSH).
Add rules to the
default
security group:Permit ICMP (ping):
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22$ openstack security group rule create --proto icmp default
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2016-10-05T09:52:31Z |
| description | |
| direction | ingress |
| ethertype | IPv4 |
| headers | |
| id | 6ee8d630-9803-4d3d-9aea-8c795abbedc2 |
| port_range_max | None |
| port_range_min | None |
| project_id | 77ae8d7104024123af342ffb0a6f1d88 |
| project_id | 77ae8d7104024123af342ffb0a6f1d88 |
| protocol | icmp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | 4ceee3d4-d2fe-46c1-895c-382033e87b0d |
| updated_at | 2016-10-05T09:52:31Z |
+-------------------+--------------------------------------+Permit secure shell (SSH) access:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22$ openstack security group rule create --proto tcp --dst-port 22 default
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2016-10-05T09:54:50Z |
| description | |
| direction | ingress |
| ethertype | IPv4 |
| headers | |
| id | 3cd0a406-43df-4741-ab29-b5e7dcb7469d |
| port_range_max | 22 |
| port_range_min | 22 |
| project_id | 77ae8d7104024123af342ffb0a6f1d88 |
| project_id | 77ae8d7104024123af342ffb0a6f1d88 |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | 4ceee3d4-d2fe-46c1-895c-382033e87b0d |
| updated_at | 2016-10-05T09:54:50Z |
+-------------------+--------------------------------------+
Launch an instance
ref: Launch an instance on the provider network
Determine instance options
To launch an instance, you must at least specify the flavor, image name, network, security group, key, and instance name.
On the controller node, source the
demo
credentials to gain access to user-only CLI commands:1
$ . demo-openrc
A flavor specifies a virtual resource allocation profile which includes processor, memory, and storage.
List available flavors:
1
2
3
4
5
6
7$ openstack flavor list
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0 | m1.nano | 64 | 1 | 0 | 1 | True |
+----+---------+-----+------+-----------+-------+-----------+You can also reference a flavor by ID.
List available images:
1
2
3
4
5
6
7$ openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 390eb5f7-8d49-41ec-95b7-68c0d5d54b34 | cirros | active |
+--------------------------------------+--------+--------+This instance uses the
cirros
image.List available networks:
1
2
3
4
5
6
7
8$ openstack network list
+--------------------------------------+--------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+--------------+--------------------------------------+
| 4716ddfe-6e60-40e7-b2a8-42e57bf3c31c | selfservice | 2112d5eb-f9d6-45fd-906e-7cabd38b7c7c |
| b5b6993c-ddf9-40e7-91d0-86806a42edb8 | provider | 310911f6-acf0-4a47-824e-3032916582ff |
+--------------------------------------+--------------+--------------------------------------+This instance uses the
provider
provider network. However, you must reference this network using the ID instead of the name.
List available security groups:
1
2
3
4
5
6
7$ openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID | Name | Description | Project |
+--------------------------------------+---------+------------------------+----------------------------------+
| dd2b614c-3dad-48ed-958b-b155a3b38515 | default | Default security group | a516b957032844328896baa01e0f906c |
+--------------------------------------+---------+------------------------+----------------------------------+This instance uses the
default
security group.
Launch the instance
Launch the instance:
Replace
PROVIDER_NET_ID
with the ID of theprovider
provider network.If you chose option 1 and your environment contains only one network, you can omit the
--nic
option because OpenStack automatically chooses the only network available.1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33root@controller:~# openstack server create --flavor m1.nano --image cirros \
> --nic net-id=2a33434f-ba29-4645-9b5d-24f1509066f1 --security-group default provider-instance
+-----------------------------+-----------------------------------------------+
| Field | Value |
+-----------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | MnjXdXf3qHia |
| config_drive | |
| created | 2017-08-23T17:29:04Z |
| flavor | m1.nano (0) |
| hostId | |
| id | 02f54ef9-e867-4c1a-88f9-8eddd144da6f |
| image | cirros (c17e391e-93e1-4480-9cf3-bf8623063e61) |
| key_name | None |
| name | provider-instance |
| progress | 0 |
| project_id | cb015df53fb34d90b077e4c36ce35826 |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2017-08-23T17:29:05Z |
| user_id | cb98fad69e84459bb48f42130d5c0ce5 |
| volumes_attached | |
+-----------------------------+-----------------------------------------------+Check the status of your instance:
1
2
3
4
5
6
7
8
9
10
11
12
13
14root@controller:~# nova list
+--------------------------------------+-------------------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------+--------+------------+-------------+----------+
| 02f54ef9-e867-4c1a-88f9-8eddd144da6f | provider-instance | BUILD | scheduling | NOSTATE | |
+--------------------------------------+-------------------+--------+------------+-------------+----------+
root@controller:~# openstack server list
+--------------------------------------+-------------------+--------+----------+------------+
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+-------------------+--------+----------+------------+
| 02f54ef9-e867-4c1a-88f9-8eddd144da6f | provider-instance | BUILD | | cirros |
+--------------------------------------+-------------------+--------+----------+------------+The status changes from
BUILD
toACTIVE
when the build process successfully completes.
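A small convenience sketch for watching that transition from the controller, assuming the instance name used above:

```bash
# Poll only the status field every 2 seconds until it shows ACTIVE (or ERROR)
watch -n 2 "openstack server show provider-instance -c status -f value"
```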
Access the instance using the virtual console
Obtain a Virtual Network Computing (VNC) session URL for your instance and access it from a web browser:
1
2
3
4
5
6
7
8$ openstack console url show provider-instance
+-------+---------------------------------------------------------------------------------+
| Field | Value |
+-------+---------------------------------------------------------------------------------+
| type | novnc |
| url | http://controller:6080/vnc_auto.html?token=5eeccb47-525c-4918-ac2a-3ad1e9f1f493 |
+-------+---------------------------------------------------------------------------------+If your web browser runs on a host that cannot resolve the
controller
host name, you can replacecontroller
with the IP address of the management interface on your controller node.The CirrOS image includes conventional user name/password authentication and provides these credentials at the login prompt. After logging into CirrOS, we recommend that you verify network connectivity using
ping
.Verify access to the provider physical network gateway:
1
2
3
4
5
6
7
8
9
10
11$ ping -c 4 172.16.0.1
PING 203.0.113.1 (172.16.0.1) 56(84) bytes of data.
64 bytes from 172.16.0.1: icmp_req=1 ttl=64 time=0.357 ms
64 bytes from 172.16.0.1: icmp_req=2 ttl=64 time=0.473 ms
64 bytes from 172.16.0.1: icmp_req=3 ttl=64 time=0.504 ms
64 bytes from 172.16.0.1: icmp_req=4 ttl=64 time=0.470 ms
--- 172.16.0.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 msVerify access to the internet:
1
2
3
4
5
6
7
8
9
10
11$ ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms
64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms
64 bytes from 174.143.194.225: icmp_req=3 ttl=53 time=17.7 ms
64 bytes from 174.143.194.225: icmp_req=4 ttl=53 time=17.5 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms
Access the instance remotely
Verify connectivity to the instance from the controller node or any host on the provider physical network:
1
2
3
4
5
6
7
8
9
10
11$ ping -c 4 172.16.0.103
PING 203.0.113.103 (203.0.113.103) 56(84) bytes of data.
64 bytes from 203.0.113.103: icmp_req=1 ttl=63 time=3.18 ms
64 bytes from 203.0.113.103: icmp_req=2 ttl=63 time=0.981 ms
64 bytes from 203.0.113.103: icmp_req=3 ttl=63 time=1.06 ms
64 bytes from 203.0.113.103: icmp_req=4 ttl=63 time=0.929 ms
--- 203.0.113.103 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 msAccess your instance using SSH from the controller node or any host on the provider physical network:
1
2
3
4
5
6
$ ssh cirros@203.0.113.102
The authenticity of host '203.0.113.102 (203.0.113.102)' can't be established.
RSA key fingerprint is ed:05:e9:e7:52:a0:ff:83:68:94:c7:d1:f2:f8:e2:e9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '203.0.113.102' (RSA) to the list of known hosts.
If your instance does not launch or seem to work as you expect, see the Instance Boot Failures section in OpenStack Operations Guide for more information or use one of the many other options to seek assistance. We want your first installation to work!
Return to Launch an instance.
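Before the DHCP-specific troubleshooting in the next section, a few generic first checks usually narrow a failed boot down quickly; this is a sketch using the standard Ubuntu log locations, not an exhaustive procedure:

```bash
# Controller: full server details (an ERROR instance carries a fault message)
openstack server show provider-instance

# Controller: scheduler/conductor logs for placement or RPC problems
tail -n 50 /var/log/nova/nova-scheduler.log /var/log/nova/nova-conductor.log

# Compute node: hypervisor and Linux bridge agent logs
tail -n 50 /var/log/nova/nova-compute.log /var/log/neutron/neutron-linuxbridge-agent.log
```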
[ISSUE] Troubleshooting DHCP failure in the VM
ref: https://docs.openstack.org/neutron/pike/admin/intro-basic-networking.html
What it looks like
In the VM console (initial DHCP discover):
1 | $ ifup eth0 |
In the controller console (monitoring the log):
1 | root@controller:~# tail -f /var/log/syslog |
Running tcpdump on the controller bridge shows that the DHCPOFFER was sent towards the VM:
1 | root@controller:~# tcpdump -i brq2a33434f-ba -vv port 67 or port 68 -e -n |
Meanwhile, tcpdump on the same Linux bridge on the compute node shows it was never received:
1 | root@compute:~# tcpdump -i brq2a33434f-ba -vv port 67 or port 68 -e -n |
Conclusion:
The DHCP offer was sent out by the DHCP agent (dnsmasq), but the packet cannot be captured on the compute-host bridge that connects to the VM's eth0. The issue is therefore located in the provider network itself, i.e. the ECN router 146.11.40.1 in our office.
Searching online turns up a switch/router feature called DHCP snooping, which is designed to block additional DHCP servers on a LAN; that would explain the dropped offers.
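To double-check that the hosts themselves are wired correctly before blaming the physical network, it is worth confirming that the provider NIC (and, on the compute node, the instance's tap device) are members of the expected Linux bridge; a sketch, with the bridge name derived from the network ID shown above:

```bash
# Run on both controller and compute: list bridges and their member ports
brctl show

# The provider network bridge is named brq<first chars of the network id>,
# e.g. brq2a33434f-ba here; list its members explicitly with iproute2
ip link show master brq2a33434f-ba
```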
Cinder
Cinder on controller
Here we walk through a Cinder setup with an iSCSI (LVM) driver back end.
ref: https://docs.openstack.org/ocata/install-guide-ubuntu/cinder.html
The OpenStack Block Storage service (cinder) adds persistent storage to a virtual machine. Block Storage provides an infrastructure for managing volumes, and interacts with OpenStack Compute to provide volumes for instances. The service also enables management of volume snapshots, and volume types.
The Block Storage service consists of the following components:
cinder-api
Accepts API requests, and routes them to the
cinder-volume
for action.cinder-volume
Interacts directly with the Block Storage service, and processes such as the
cinder-scheduler
. It also interacts with these processes through a message queue. Thecinder-volume
service responds to read and write requests sent to the Block Storage service to maintain state. It can interact with a variety of storage providers through a driver architecture.cinder-scheduler daemon
Selects the optimal storage provider node on which to create the volume. A similar component to the
nova-scheduler
.cinder-backup daemon
The
cinder-backup
service provides backing up volumes of any type to a backup storage provider. Like thecinder-volume
service, it can interact with a variety of storage providers through a driver architecture.Messaging queue
Routes information between the Block Storage processes.
Install and configure controller node
This section describes how to install and configure the Block Storage service, code-named cinder, on the controller node. This service requires at least one additional storage node that provides volumes to instances.
Prerequisites
Before you install and configure the Block Storage service, you must create a database, service credentials, and API endpoints.
To create the database, complete these steps:
Use the database access client to connect to the database server as the
root
user:1
# mysql
Create the
cinder
database:1
MariaDB [(none)]> CREATE DATABASE cinder;
Grant proper access to the
cinder
database:``` MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ IDENTIFIED BY 'CINDER_DBPASS'; ``` Replace `CINDER_DBPASS` with a suitable password.
- Exit the database access client.
Source the
admin
credentials to gain access to admin-only CLI commands:1
2$ . admin-openrc
To create the service credentials, complete these steps:
Create a
cinder
user:1
2
3
4
5
6
7
8
9
10
11
12
13
14$ openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 9d7e33de3e1a498390353819bc7d245d |
| name | cinder |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+Add the
admin
role to thecinder
user:1
$ openstack role add --project service --user cinder admin
This command provides no output.
Create the
cinderv2
andcinderv3
service entities:1
2
3
4
5
6
7
8
9
10
11
12$ openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | eb9fd245bdbc414695952e93f29fe3ac |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+1
2
3
4
5
6
7
8
9
10
11
12$ openstack service create --name cinderv3 \
--description "OpenStack Block Storage" volumev3
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | ab3bbbef780845a1a283490d281e7fda |
| name | cinderv3 |
| type | volumev3 |
+-------------+----------------------------------+
The Block Storage services require two service entities.
Create the Block Storage service API endpoints:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50$ openstack endpoint create --region RegionOne \
volumev2 public http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 513e73819e14460fb904163f41ef3759 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
volumev2 internal http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 6436a8a23d014cfdb69c586eff146a32 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
volumev2 admin http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | e652cf84dd334f359ae9b045a2c91d96 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50$ openstack endpoint create --region RegionOne \
volumev3 public http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 03fa2c90153546c295bf30ca86b1344b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
volumev3 internal http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 94f684395d1b41068c70e4ecb11364b2 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
volumev3 admin http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 4511c28a0f9840c78bacb25f10f62c98 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+The Block Storage services require endpoints for each service entity.
Install and configure components
Install the packages:
1 | # apt install cinder-api cinder-scheduler |
Edit the /etc/cinder/cinder.conf
file and complete the following actions:
In the
[database]
section, configure database access:1
2
3[database]
# ...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinderReplace
CINDER_DBPASS
with the password you chose for the Block Storage database.In the
[DEFAULT]
section, configureRabbitMQ
message queue access:1
2
3[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controllerReplace
RABBIT_PASS
with the password you chose for theopenstack
account inRabbitMQ
.In the
[DEFAULT]
and[keystone_authtoken]
sections, configure Identity service access:1
2
3
4
5
6
7
8
9
10
11
12
13
14
15[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinderReplace
password
with the password you chose for thecinder
user in the Identity service.Comment out or remove any other options in the
[keystone_authtoken]
section.In the
[DEFAULT]
section, configure themy_ip
option to use the management interface IP address of the controller node:1
2
3[DEFAULT]
# ...
my_ip = 10.20.0.10
In the
[oslo_concurrency]
section, configure the lock path:1
2
3[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
Populate the Block Storage database:
1 | # su -s /bin/sh -c "cinder-manage db sync" cinder |
Ignore any deprecation messages in this output.
Configure Compute to use Block Storage
Edit the
/etc/nova/nova.conf
file and add the following to it:1
2
3[cinder]
os_region_name = RegionOne
Finalize installation
Restart the Compute API service:
1
# service nova-api restart
Restart the Block Storage services:
1
2# service cinder-scheduler restart
# service apache2 restart
Cinder on block storage node
configure storage network for compute
Check nic name
1 | root@compute:~# dmesg | grep renamed |
eth2
was renamed to enp0s9
Edit /etc/network/interfaces
1 | # storage network eth2 |
1 | # ifup enp0s9 |
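A sketch of what the truncated snippets above put into /etc/network/interfaces on the compute node; the 192.168.199.0/24 addressing (compute at .20) is an assumption inferred from the iSCSI session output later in this note:

```bash
# Sketch: add the storage network (eth2 -> enp0s9) and bring it up.
# 192.168.199.20/24 is an assumed address based on later iSCSI output.
cat >> /etc/network/interfaces <<'EOF'
# storage network eth2
auto enp0s9
iface enp0s9 inet static
    address 192.168.199.20
    netmask 255.255.255.0
EOF
ifup enp0s9
```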
Create cinder machine: storage
Storage node actions
Clone it from the base VM and add a virtual disk to the storage VM
Management net eth0 (enp0s3) and storage net eth2 (enp0s9)
Edit /etc/network/interfaces
1 | # management network eth0 |
1 | # storage network eth2 |
1 | //start the two nics |
configure NTP by editing /etc/chrony/chrony.conf
1 | server 10.20.0.10 iburst |
change hostname and hosts
1 | # echo storage > /etc/hostname |
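A sketch of the hostname/hosts step, mirroring what was done on the controller earlier; the controller's 10.20.0.10 appears elsewhere in this note, but the compute and storage management addresses below are assumptions used only for illustration:

```bash
# Sketch: hostname and static name resolution on the storage node.
# Only 10.20.0.10 (controller) is taken from this note; the other
# management IPs are assumed - replace them with your own.
echo storage > /etc/hostname
hostname storage
cat >> /etc/hosts <<'EOF'
10.20.0.10   controller
10.20.0.20   compute
10.20.0.30   storage
EOF
```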
Check that the new disk is already there
Checking with fdisk -l
, /dev/sdb is there :-).
1 | root@storage:~# fdisk -l |
Cinder on storage node
Install and configure a storage node
This section describes how to install and configure storage nodes for the Block Storage service. For simplicity, this configuration references one storage node with an empty local block storage device. The instructions use /dev/sdb
, but you can substitute a different value for your particular node.
The service provisions logical volumes on this device using the LVM driver and provides them to instances via iSCSI transport. You can follow these instructions with minor modifications to horizontally scale your environment with additional storage nodes.
Prerequisites
Before you install and configure the Block Storage service on the storage node, you must prepare the storage device.
Perform these steps on the storage node.
Install the supporting utility packages:
1
# apt install lvm2
Some distributions include LVM by default.
Create the LVM physical volume
/dev/sdb
:1
2
3# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully createdCreate the LVM volume group
cinder-volumes
:1
2
3# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully createdThe Block Storage service creates logical volumes in this volume group.
Only instances can access Block Storage volumes. However, the underlying operating system manages the devices associated with the volumes. By default, the LVM volume scanning tool scans the
/dev
directory for block storage devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects these volumes and attempts to cache them which can cause a variety of problems with both the underlying operating system and project volumes. You must reconfigure LVM to scan only the devices that contain thecinder-volumes
volume group. Edit the/etc/lvm/lvm.conf
file and complete the following actions:In the
devices
section, add a filter that accepts the/dev/sdb
device and rejects all other devices:1
2
3devices {
...
filter = [ "a/sdb/", "r/.*/"]Each item in the filter array begins with
a
for accept orr
for reject and includes a regular expression for the device name. The array must end withr/.*/
to reject any remaining devices. You can use the vgs -vvvv command to test filters.If your storage nodes use LVM on the operating system disk, you must also add the associated device to the filter. For example, if the
/dev/sda
device contains the operating system:1
2
3
4
5
6
7
8filter = [ "a/sda/", "a/sdb/", "r/.*/"]
```bash
Similarly, if your compute nodes use LVM on the operating system disk, you must also modify the filter in the`/etc/lvm/lvm.conf` file on those nodes to include only the operating system disk. For example, if the `/dev/sda`device contains the operating system:
```bash
filter = [ "a/sda/", "r/.*/"]
Install and configure components
Install the packages:
1 | # apt install cinder-volume -y |
Edit the
/etc/cinder/cinder.conf
file and complete the following actions:In the
[database]
section, configure database access:1
2
3[database]
# ...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinderReplace
CINDER_DBPASS
with the password you chose for the Block Storage database.In the
[DEFAULT]
section, configureRabbitMQ
message queue access:1
2
3[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controllerReplace
RABBIT_PASS
with the password you chose for theopenstack
account inRabbitMQ
.In the
[DEFAULT]
and[keystone_authtoken]
sections, configure Identity service access:1
2
3
4
5
6
7
8
9
10
11
12
13
14
15[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinderReplace
password
with the password you chose for thecinder
user in the Identity service.Comment out or remove any other options in the
[keystone_authtoken]
section.In the
[DEFAULT]
section, configure themy_ip
option:1
2
3[DEFAULT]
# ...
my_ip = STORAGE_INTERFACE_IP_ADDRESSReplace
STORAGE_INTERFACE_IP_ADDRESS
with the IP address of the storage network (eth2).
In the
[lvm]
section, configure the LVM back end with the LVM driver,cinder-volumes
volume group, iSCSI protocol, and appropriate iSCSI service:1
2
3
4
5
6[lvm]
# ...
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
In the
[DEFAULT]
section, enable the LVM back end:1
2
3[DEFAULT]
# ...
enabled_backends = lvmBack-end names are arbitrary. As an example, this guide uses the name of the driver as the name of the back end.
In the
[DEFAULT]
section, configure the location of the Image service API:``` [DEFAULT] # ... glance_api_servers = http://controller:9292 ```
In the
[oslo_concurrency]
section, configure the lock path:1
2
3[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
Finalize installation
Restart the Block Storage volume service including its dependencies:
1
2# service tgt restart
# service cinder-volume restart
Verify operation
Verify operation of the Block Storage service.
Perform these commands on the controller node.
Source the
admin
credentials to gain access to admin-only CLI commands:1
$ . admin-openrc
List service components to verify successful launch of each process:
1
2
3
4
5
6
7root@controller:~# openstack volume service list
+------------------+-------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+-------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up | 2017-08-24T14:35:44.000000 |
| cinder-volume | storage@lvm | nova | enabled | up | 2017-08-24T14:35:39.000000 |
+------------------+-------------+------+---------+-------+----------------------------+
Let’s try something on block storage!
Create a volume
Source the
demo
credentials to perform the following steps as a non-administrative project:1
$ . demo-openrc
Create a 1 GB volume:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25$ openstack volume create --size 1 volume1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-03-08T14:30:48.391027 |
| description | None |
| encrypted | False |
| id | a1e8be72-a395-4a6f-8e07-856a57c39524 |
| multiattach | False |
| name | volume1 |
| properties | |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | None |
| updated_at | None |
| user_id | 684286a9079845359882afc3aa5011fb |
+---------------------+--------------------------------------+After a short time, the volume status should change from
creating
toavailable
:1
2
3
4
5
6root@controller:~# openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 81ffed40-ed71-495d-bfa9-8fb8c72cf222 | volume1 | available | 1 | |
+--------------------------------------+--------------+-----------+------+-------------+
Where does the volume actually live? Check on the storage node:
1 | root@storage:~# lvdisplay |
Replace INSTANCE_NAME
with the name of the instance and VOLUME_NAME
with the name of the volume you want to attach to it.
Example
Attach the volume1
volume to the provider-instance
instance:
1 | $ openstack server add volume provider-instance volume1 |
This command provides no output.
- List volumes:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
root@controller:~# openstack volume list
+--------------------------------------+--------------+--------+------+--------------------------------------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+--------+------+--------------------------------------------+
| 81ffed40-ed71-495d-bfa9-8fb8c72cf222 | volume1 | in-use | 1 | Attached to provider-instance on /dev/vdb |
+--------------------------------------+--------------+--------+------+--------------------------------------------+
root@storage:~# lvdisplay
--- Logical volume ---
LV Path /dev/ubuntu-vg/root
LV Name root
VG Name ubuntu-vg
LV UUID NA7DgH-V0Sv-cH8E-wvej-aJmP-6EBO-joXO0C
LV Write Access read/write
LV Creation host, time ubuntu, 2017-08-23 16:30:36 +0800
LV Status available
# open 1
LV Size 45.52 GiB
Current LE 11653
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0
--- Logical volume ---
LV Path /dev/ubuntu-vg/swap_1
LV Name swap_1
VG Name ubuntu-vg
LV UUID Vtixi8-qKcP-f1LH-bHqM-E73h-NN7z-eSD2zk
LV Write Access read/write
LV Creation host, time ubuntu, 2017-08-23 16:30:36 +0800
LV Status available
# open 2
LV Size 4.00 GiB
Current LE 1024
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:1
--- Logical volume ---
LV Path /dev/cinder-volumes/volume-81ffed40-ed71-495d-bfa9-8fb8c72cf222
LV Name volume-81ffed40-ed71-495d-bfa9-8fb8c72cf222
VG Name cinder-volumes
LV UUID 6jbPGA-i3Eo-O4ng-8Mf3-IoeF-9WF7-g1DGEA
LV Write Access read/write
LV Creation host, time storage, 2017-08-24 22:38:28 +0800
LV Status available
# open 1
LV Size 1.00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:2
Access your instance using SSH or
virsh console
and use thefdisk
command to verify presence of the volume as the/dev/vdb
block storage device:1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20$ sudo fdisk -l
Disk /dev/vda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/vda1 * 16065 2088449 1036192+ 83 Linux
Disk /dev/vdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/vdb doesn't contain a valid partition table
Checking from the storage node, from the iSCSI target's point of view, we find:
- Initiator:
iqn.1993-08.org.debian:01:e7b693dedcab alias: compute
- LUN 1:
Backing store path: /dev/cinder-volumes/volume-81ffed40-ed71-495d-bfa9-8fb8c72cf222
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43root@storage:~# tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2010-10.org.openstack:volume-81ffed40-ed71-495d-bfa9-8fb8c72cf222
System information:
Driver: iscsi
State: ready
I_T nexus information:
I_T nexus: 1
Initiator: iqn.1993-08.org.debian:01:e7b693dedcab alias: compute
Connection: 0
IP Address: 192.168.199.20
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 1074 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/cinder-volumes/volume-81ffed40-ed71-495d-bfa9-8fb8c72cf222
Backing store flags:
Account information:
7jTnhhxXsVM4BwqxG979
ACL information:
ALL
Checking from the compute node via
virsh dumpxml <instance-id>
shows the device from the initiator's point of view:
1
2
3
4
5
6
7
8
9<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source dev='/dev/disk/by-path/ip-192.168.199.30:3260-iscsi-iqn.2010-10.org.openstack:volume-81ffed40-ed71-495d-bfa9-8fb8c72cf222-lun-1'/>
<backingStore/>
<target dev='vdb' bus='virtio'/>
<serial>81ffed40-ed71-495d-bfa9-8fb8c72cf222</serial>
<alias name='virtio-disk1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
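For reference, a sketch of how that XML snippet can be obtained on the compute node; the domain name instance-00000002 is a placeholder, take the real one from virsh list:

```bash
# On the compute node: find the libvirt domain, then dump its disk definition.
# "instance-00000002" is a placeholder name - read the real one from the list.
virsh list --all
virsh dumpxml instance-00000002 | grep -A 9 "<disk type='block'"
```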
Heat
Install and configure
This section describes how to install and configure the Orchestration service for Ubuntu 14.04 (LTS).
Our Ubuntu 16.04.3 LTS works fine as well.
Prerequisites
Before you install and configure Orchestration, you must create a database, service credentials, and API endpoints. Orchestration also requires additional information in the Identity service.
To create the database, complete these steps:
- Use the database access client to connect to the database server as the
root
user:
1
$ mysql
- Create the
heat
database:
1
CREATE DATABASE heat;
- Grant proper access to the
heat
database:
1
2
3
4
5
6
7
8```
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' \
IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' \
IDENTIFIED BY 'HEAT_DBPASS';
```
Replace `HEAT_DBPASS` with a suitable password.
- Exit the database access client.
Source the
admin
credentials to gain access to admin-only CLI commands:1
2
3$ . admin-openrc
To create the service credentials, complete these steps:
Create the
heat
user:1
2
3
4
5
6
7
8
9
10
11$ openstack user create --domain default --password-prompt heat
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | e0353a670a9e496da891347c589539e9 |
| enabled | True |
| id | ca2e175b851943349be29a328cc5e360 |
| name | heat |
+-----------+----------------------------------+Add the
admin
role to theheat
user:1
$ openstack role add --project service --user heat admin
This command provides no output.
Create the
heat
andheat-cfn
service entities:1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23$ openstack service create --name heat \
--description "Orchestration" orchestration
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Orchestration |
| enabled | True |
| id | 727841c6f5df4773baa4e8a5ae7d72eb |
| name | heat |
| type | orchestration |
+-------------+----------------------------------+
$ openstack service create --name heat-cfn \
--description "Orchestration" cloudformation
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Orchestration |
| enabled | True |
| id | c42cede91a4e47c3b10c8aedc8d890c6 |
| name | heat-cfn |
| type | cloudformation |
+-------------+----------------------------------+
Create the Orchestration service API endpoints:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47$ openstack endpoint create --region RegionOne \
orchestration public http://controller:8004/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 3f4dab34624e4be7b000265f25049609 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 727841c6f5df4773baa4e8a5ae7d72eb |
| service_name | heat |
| service_type | orchestration |
| url | http://controller:8004/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
$ openstack endpoint create --region RegionOne \
orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 9489f78e958e45cc85570fec7e836d98 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 727841c6f5df4773baa4e8a5ae7d72eb |
| service_name | heat |
| service_type | orchestration |
| url | http://controller:8004/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
$ openstack endpoint create --region RegionOne \
orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 76091559514b40c6b7b38dde790efe99 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 727841c6f5df4773baa4e8a5ae7d72eb |
| service_name | heat |
| service_type | orchestration |
| url | http://controller:8004/v1/%(tenant_id)s |
+--------------+-----------------------------------------+1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47$ openstack endpoint create --region RegionOne \
cloudformation public http://controller:8000/v1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | b3ea082e019c4024842bf0a80555052c |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | c42cede91a4e47c3b10c8aedc8d890c6 |
| service_name | heat-cfn |
| service_type | cloudformation |
| url | http://controller:8000/v1 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
cloudformation internal http://controller:8000/v1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 169df4368cdc435b8b115a9cb084044e |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | c42cede91a4e47c3b10c8aedc8d890c6 |
| service_name | heat-cfn |
| service_type | cloudformation |
| url | http://controller:8000/v1 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
cloudformation admin http://controller:8000/v1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 3d3edcd61eb343c1bbd629aa041ff88b |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | c42cede91a4e47c3b10c8aedc8d890c6 |
| service_name | heat-cfn |
| service_type | cloudformation |
| url | http://controller:8000/v1 |
+--------------+----------------------------------+Orchestration requires additional information in the Identity service to manage stacks. To add this information, complete these steps:
Create the
heat
domain that contains projects and users for stacks:1
2
3
4
5
6
7
8
9$ openstack domain create --description "Stack projects and users" heat
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Stack projects and users |
| enabled | True |
| id | 0f4d1bd326f2454dacc72157ba328a47 |
| name | heat |
+-------------+----------------------------------+Create the
heat_domain_admin
user to manage projects and users in theheat
domain:here i gave password: heat_domain_admin
1
2
3
4
5
6
7
8
9
10
11$ openstack user create --domain heat --password-prompt heat_domain_admin
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | 0f4d1bd326f2454dacc72157ba328a47 |
| enabled | True |
| id | b7bd1abfbcf64478b47a0f13cd4d970a |
| name | heat_domain_admin |
+-----------+----------------------------------+Add the
admin
role to theheat_domain_admin
user in theheat
domain to enable administrative stack management privileges by theheat_domain_admin
user:1
$ openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
This command provides no output.
Create the
heat_stack_owner
role:1
2
3
4
5
6
7
8$ openstack role create heat_stack_owner
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | 15e34f0c4fed4e68b3246275883c8630 |
| name | heat_stack_owner |
+-----------+----------------------------------+Add the
heat_stack_owner
role to thedemo
project and user to enable stack management by thedemo
user:1
$ openstack role add --project demo --user demo heat_stack_owner
This command provides no output.
You must add the
heat_stack_owner
role to each user that manages stacks.Create the
heat_stack_user
role:1
2
3
4
5
6
7
8$ openstack role create heat_stack_user
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | 88849d41a55d4d1d91e4f11bffd8fc5c |
| name | heat_stack_user |
+-----------+----------------------------------+The Orchestration service automatically assigns the
heat_stack_user
role to users that it creates during stack deployment. By default, this role restricts API <Application Programming Interface (API)> operations. To avoid conflicts, do not add this role to users with theheat_stack_owner
role.
Install and configure components
Install the packages:
1 | # apt-get install heat-api heat-api-cfn heat-engine |
Edit the
/etc/heat/heat.conf
file and complete the following actions:In the
[database]
section, configure database access:1
2
3[database]
...
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heatReplace
HEAT_DBPASS
with the password you chose for the Orchestration database.In the
[DEFAULT]
section, configureRabbitMQ
message queue access:1
2
3[DEFAULT]
...
transport_url = rabbit://openstack:RABBIT_PASS@controllerReplace
RABBIT_PASS
with the password you chose for theopenstack
account inRabbitMQ
.In the
[keystone_authtoken]
,[trustee]
and[clients_keystone]
sections, configure Identity service access:1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS
[trustee]
...
auth_type = password
auth_url = http://controller:35357
username = heat
password = heat
user_domain_name = default
[clients_keystone]
...
auth_uri = http://controller:5000Replace
password
with the password you chose for theheat
user in the Identity service.In the
[DEFAULT]
section, configure the metadata and wait condition URLs:1
2
3
4[DEFAULT]
...
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitconditionIn the
[DEFAULT]
section, configure the stack domain and administrative credentials:1
2
3
4
5[DEFAULT]
...
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = heat_domain_admin
stack_user_domain_name = heatReplace
heat_domain_admin
with the password you chose for theheat_domain_admin
user in the Identity service.
Populate the Orchestration database:
1
# su -s /bin/sh -c "heat-manage db_sync" heat
Ignore any deprecation messages in this output.
Finalize installation
Restart the Orchestration services:
1
2
3# service heat-api restart
# service heat-api-cfn restart
# service heat-engine restart
Verify operation
Verify operation of the Orchestration service.
Perform these commands on the controller node.
Source the
admin
tenant credentials:1
$ . admin-openrc
List service components to verify successful launch and registration of each process:
1
2
3
4
5
6
7
8
9$ openstack orchestration service list
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
| hostname | binary | engine_id | host | topic | updated_at | status |
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
| controller | heat-engine | 3e85d1ab-a543-41aa-aa97-378c381fb958 | controller | engine | 2015-10-13T14:16:06.000000 | up |
| controller | heat-engine | 45dbdcf6-5660-4d5f-973a-c4fc819da678 | controller | engine | 2015-10-13T14:16:06.000000 | up |
| controller | heat-engine | 51162b63-ecb8-4c6c-98c6-993af899c4f7 | controller | engine | 2015-10-13T14:16:06.000000 | up |
| controller | heat-engine | 8d7edc6d-77a6-460d-bd2a-984d76954646 | controller | engine | 2015-10-13T14:16:06.000000 | up |
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+This output should indicate four
heat-engine
components (the default is 4, or the number of CPUs on the host, whichever is greater) on the controller node.
[ISSUE] list heat service failure
An error occurred during openstack orchestration service list
; the initial output is not very informative, as shown below:
1 | # openstack orchestration service list |
Pass --debug
to get detailed information:
1 | REQ: curl -g -i -X GET http://controller:8004/v1/78c9c849237649a3a8c4526167427589/services -H "User-Agent: python-heatclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}d4b406278269babd78368ed572cbe50382938cb6" |
We can see it got a `503` when performing the API call on port `8004`.
1 | REQ: curl -g -i -X GET http://controller:8004/v1/78c9c849237649a3a8c4526167427589/services -H "User-Agent: python-heatclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}d4b406278269babd78368ed572cbe50382938cb6" |
It is confirmed to be the heat API (WSGI) service:
1 | # openstack endpoint list | grep 8004 |
Then let’s check 503
in /var/log/heat/heat-api.log
1 | 2017-08-25 14:24:48.962 2778 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}} |
It's a keystone 401
, meaning the credentials are wrong when requesting a token from keystone. Let's check the keystone logs:
1 | # grep "Authorization failed" /var/log/apache2/keyston*.log |
Solution
Where is the credential for heat's keystone calls configured? In /etc/heat/heat.conf
. It turned out we had set the wrong password for keystone: the user's actual password is heat
, while HEAT_PASS was left in the config. Change it as below and restart the services to resolve the issue.
1 | [keystone_authtoken] |
Restart the services to make it take effect:
1 | # service heat-api restart |
and verify it:
1 | root@controller:/var/log# openstack orchestration service list --max-width 85 |
Start orchestrating! Let's start with a single-instance HOT template
ref: https://docs.openstack.org/heat/latest/install/launch-instance.html
In environments that include the Orchestration service, you can create a stack that launches an instance.
Create a template
The Orchestration service uses templates to describe stacks. To learn about the template language, see the Template Guide in the Heat developer documentation.
Create the
HOT-demo.yml
file with the following content. You can fetch it from here to avoid formatting errors:
1
# wget https://github.com/wey-gu/workshops/raw/master/00-Openstack-Basic/HOT-demo.yml
1 | heat_template_version: 2015-10-15 |
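If you would rather not fetch the file, a minimal sketch of such a single-instance template (essentially the official guide's demo template, matching the NetID parameter and the instance_name/instance_ip outputs used below) can be written locally like this:

```bash
# Sketch: write a minimal single-instance HOT template as a heredoc.
cat > HOT-demo.yml <<'EOF'
heat_template_version: 2015-10-15
description: Launch a basic instance with the CirrOS image and m1.nano flavor.

parameters:
  NetID:
    type: string
    description: Network ID to use for the instance.

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.nano
      networks:
        - network: { get_param: NetID }

outputs:
  instance_name:
    description: Name of the instance.
    value: { get_attr: [server, name] }
  instance_ip:
    description: IP address of the instance.
    value: { get_attr: [server, first_address] }
EOF
```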
Create a stack
Create a stack using the HOT-demo.yml
template.
Source the
demo
credentials to perform the following steps as a non-administrative project:1
$ . demo-openrc
Determine available networks.
1
2
3
4
5
6root@controller:~# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+----------+--------------------------------------+
| 2a33434f-ba29-4645-9b5d-24f1509066f1 | provider | 9b118521-59b5-40ee-a439-9d59c3b392ea |
+--------------------------------------+----------+--------------------------------------+This output may differ from your environment.
Set the
NET_ID
environment variable to reflect the ID of a network. For example, using the provider network:1
$ export NET_ID=$(openstack network list | awk '/ provider / { print $2 }')
Create a stack of one CirrOS instance on the provider network:
1
2
3
4
5
6$ openstack stack create -t HOT-demo.yml --parameter "NetID=$NET_ID" stack
+--------------------------------------+------------+--------------------+---------------------+--------------+
| ID | Stack Name | Stack Status | Creation Time | Updated Time |
+--------------------------------------+------------+--------------------+---------------------+--------------+
| dbf46d1b-0b97-4d45-a0b3-9662a1eb6cf3 | stack | CREATE_IN_PROGRESS | 2015-10-13T15:27:20 | None |
+--------------------------------------+------------+--------------------+---------------------+--------------+After a short time, verify successful creation of the stack:
1
2
3
4
5
6$ openstack stack list
+--------------------------------------+------------+-----------------+---------------------+--------------+
| ID | Stack Name | Stack Status | Creation Time | Updated Time |
+--------------------------------------+------------+-----------------+---------------------+--------------+
| dbf46d1b-0b97-4d45-a0b3-9662a1eb6cf3 | stack | CREATE_COMPLETE | 2015-10-13T15:27:20 | None |
+--------------------------------------+------------+-----------------+---------------------+--------------+Show the name and IP address of the instance and compare with the output of the OpenStack client:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15root@controller:~# openstack stack output show --all stack
+---------------+-------------------------------------------------+
| Field | Value |
+---------------+-------------------------------------------------+
| instance_name | { |
| | "output_value": "stack-server-bigm7yoguexa", |
| | "output_key": "instance_name", |
| | "description": "Name of the instance." |
| | } |
| instance_ip | { |
| | "output_value": "146.11.41.233", |
| | "output_key": "instance_ip", |
| | "description": "IP address of the instance." |
| | } |
+---------------+-------------------------------------------------+1
2
3
4
5
6
7root@controller:~# openstack server list
+--------------------------------------+---------------------------+---------+------------------------+------------+
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+---------------------------+---------+------------------------+------------+
| a81fc7b9-b5c0-4a0f-85d3-3ff739f8e5ce | stack-server-bigm7yoguexa | ACTIVE | provider=146.11.41.233 | cirros |
| e73c64ac-3af3-47fd-abfe-e138e40f0a40 | provider-instance | SHUTOFF | provider=146.11.41.232 | cirros |
+--------------------------------------+---------------------------+---------+------------------------+------------+Delete the stack.
1
$ openstack stack delete --yes stack
How about designing a fake vAPG VNF and instantiating it?
ref: https://docs.openstack.org/heat/latest/template_guide/index.html
HOT
fetch it here:
1 # wget https://github.com/wey-gu/workshops/raw/master/00-Openstack-Basic/HOT-vAPG.yml
1 | heat_template_version: 2015-10-15 |
Instantiating vAPG
For simplicity we reuse the existing network
1 | # . demo-openrc |
Printouts during/after instantiation
1 | root@controller:~# openstack stack list |
Check resources afterwards
1 | root@controller:~# openstack volume list |
Horizon dashboard
The Dashboard (horizon) is a web interface that enables cloud administrators and users to manage various OpenStack resources and services.
This example deployment uses an Apache web server.
It's actually a Django-based OpenStack client front end
Install and configure
Install the packages:
1 | # apt install openstack-dashboard |
- Do not edit the `ALLOWED_HOSTS` parameter under the Ubuntu configuration section.
- `ALLOWED_HOSTS` can also be `['*']` to accept all hosts. This may be useful for development work, but is potentially insecure and should not be used in production. See the [Django documentation](https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts) for further information.
Configure the
memcached
session storage service:1
2
3
4
5
6
7
8
9SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
Comment out any other session storage configuration.
Enable the Identity API version 3:
1
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
Enable support for domains:
1
2
3
4
5
6
7
8
9
10
11
12OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
- Configure API versions:
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}Configure
Default
as the default domain for users that you create via the dashboard:1
2OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
Configure `user` as the default role for users that you create via the dashboard:

```python
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
```
If you chose networking option 1, disable support for layer-3 networking services:
```python
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_ipv6': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
```

Optionally, configure the time zone:

```python
TIME_ZONE = "<TIME_ZONE>"
```

Replace `TIME_ZONE` with an appropriate time zone identifier. For more information, see the list of time zones.
Finalize installation
Reload the web server configuration:
```bash
# service apache2 reload
```
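Since the dashboard sessions live in memcached on the controller (as configured above), it is also worth a quick check that the cache is reachable; otherwise the dashboard tends to fail at login. A small sanity check, not strictly required:

```bash
# confirm memcached is running and reachable on controller:11211
$ pgrep -a memcached
$ nc -zv controller 11211
```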
Verify operation
Verify operation of the dashboard.
Access the dashboard using a web browser at http://controller/horizon.
Authenticate using the `admin` or `demo` user and `default` domain credentials.
Monitor the whole process with `tail -f /var/log/apache2/*.log`.
[ISSUE] Horizon 500 internal error
Reproduce the fault in the CLI:

```bash
# curl http://10.20.0.10/horizon
```
It is an apache2-based web service; the error can be found in /var/log/apache2/error.log:

```bash
[Fri Aug 25 16:22:13.681394 2017] [wsgi:error] [pid 9401:tid 140055623386880] [remote 10.20.0.10:18248] mod_wsgi (pid=9401): Target WSGI script '/usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi' cannot be loaded as Python module.
```

Beautified, it reads:

```bash
mod_wsgi (pid=9401): Target WSGI script '/usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi' cannot be loaded as Python module.
```
The log shows that the file cannot be accessed by Horizon. Check its ownership and permissions:

```bash
root@controller:~# ll /var/lib/openstack-dashboard/secret_key
```
Try chown to `horizon:horizon`:

```bash
# chown -R horizon:horizon /var/lib/openstack-dashboard/secret_key
```
Retry with curl; it still returns 500, with the same error in the apache error log:

```bash
# curl http://10.20.0.10/horizon
```
We should identify the process owner and change the file ownership accordingly.
Checking the Horizon WSGI process shows that the user is www-data (the apache2 user):

```bash
root@controller:~# ps -aux | grep horizon
```
Solution
Change the owner to www-data:

```bash
# chown www-data /var/lib/openstack-dashboard/secret_key
```
Retry with `curl http://10.20.0.10/horizon`; no error came out :-).
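A handy way to confirm from the shell is to print only the HTTP status code; anything other than 500 means the WSGI script loads again:

```bash
# print only the HTTP status code returned by Horizon
$ curl -s -o /dev/null -w '%{http_code}\n' http://10.20.0.10/horizon
```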
Enable RabbitMQ web admin for studying
Enable the plugins for this feature:
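The original command block here is truncated; the management plugin is normally switched on with the standard rabbitmq-plugins call:

```bash
# enable the RabbitMQ management web UI (listens on port 15672 by default)
root@controller:~# rabbitmq-plugins enable rabbitmq_management
root@controller:~# service rabbitmq-server restart
```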
Check that the management UI is listening on its port:

```bash
root@controller:~# netstat -plunt | grep 15672
```
Browse to http://controller:15672/ from a web browser with:
- user: openstack
- password: RABBIT_PASS
Got a "Login failed" error; check the logs:

```bash
# less /var/log/rabbitmq/rabbit@controller.log
```
Add a new user with admin grants:

```bash
rabbitmqctl add_user admin admin
```
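The block above is truncated; for the web login to work, the new user also needs the administrator tag (and, optionally, permissions on the default vhost). The usual rabbitmqctl calls, following the admin/admin example above, are:

```bash
# tag the user as administrator so the management UI accepts the login
rabbitmqctl set_user_tags admin administrator
# optionally grant full permissions on the default vhost
rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"
```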
Accessed!
Reconfigure Neutron as Linux bridge with VLAN
ref: https://docs.openstack.org/kilo/networking-guide/deploy_scenario4b.html
Controller configuration
Configure the kernel to disable reverse path filtering. Edit the /etc/sysctl.conf file:

```bash
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
```

Load the new kernel configuration:

```bash
$ sysctl -p
```
Configure the ML2 plug-in and Linux bridge agent. Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file:

```ini
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
[ml2_type_flat]
flat_networks = ecn,provider
[ml2_type_vlan]
network_vlan_ranges = provider:100:200
```

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file:
```ini
[linux_bridge]
physical_interface_mappings = ecn:enp0s10,provider:enp0s8
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = True
enable_ipset = True
```

Restart the Networking services:

```bash
# service neutron-server restart
# service neutron-linuxbridge-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
```
Compute node configuration
Configure the kernel to disable reverse path filtering. Edit the /etc/sysctl.conf file:

```bash
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
```

Load the new kernel configuration:

```bash
$ sysctl -p
```
Configure the Linux bridge agent. Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file:

```ini
[linux_bridge]
physical_interface_mappings = ecn:enp0s10,provider:enp0s8
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = True
enable_ipset = True
```
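The note does not show a restart on the compute node, but the agent presumably needs to be restarted there as well to pick up the new mappings (same service name as on the controller):

```bash
# apply the new physical_interface_mappings on the compute node
root@compute:~# service neutron-linuxbridge-agent restart
```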
Verify operation
Source the administrative project credentials.
```bash
. admin-openrc
```
Verify presence and operation of the agents:
```bash
root@controller:~# neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 143d7731-9227-4baf-9052-292d7aea6992 | Linux bridge agent | compute    |                   | :-)   | True           | neutron-linuxbridge-agent |
| 1d661145-0941-411d-9b18-b3371fe57c4b | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |
| 7502e1a3-998d-4aca-91e4-ca17e1b10c82 | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| 7c47ac70-5de2-4442-8fc1-91fe97ae120f | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
```
```bash
root@controller:~# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 143d7731-9227-4baf-9052-292d7aea6992 | Linux bridge agent | compute    | None              | True  | UP    | neutron-linuxbridge-agent |
| 1d661145-0941-411d-9b18-b3371fe57c4b | Linux bridge agent | controller | None              | True  | UP    | neutron-linuxbridge-agent |
| 7502e1a3-998d-4aca-91e4-ca17e1b10c82 | DHCP agent         | controller | nova              | True  | UP    | neutron-dhcp-agent        |
| 7c47ac70-5de2-4442-8fc1-91fe97ae120f | Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
```
Create initial networks
This example creates a VLAN provider network. Change the VLAN ID and IP address range to values suitable for your environment.
Source the administrative project credentials.
```bash
. demo-openrc
```
Create a provider network:
```bash
root@controller:~# openstack network create --share --external --provider-physical-network provider --provider-network-type vlan --provider-segment 101 provider-vlan-101
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2017-08-28T03:25:26Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | 152a4a85-dc52-4c62-9bbd-742eb4f7b8fa |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| mtu                       | 1500                                 |
| name                      | provider-vlan-101                    |
| port_security_enabled     | True                                 |
| project_id                | 78c9c849237649a3a8c4526167427589     |
| provider:network_type     | vlan                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  | 101                                  |
| qos_policy_id             | None                                 |
| revision_number           | 4                                    |
| router:external           | External                             |
| segments                  | None                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| updated_at                | 2017-08-28T03:25:26Z                 |
+---------------------------+--------------------------------------+

// or neutron cli
$ neutron net-create provider-101 --shared --external \
    --provider:physical_network provider --provider:network_type vlan \
    --provider:segmentation_id 101
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 572a3fc9-ad1f-4e54-a63a-4bf5047c1a4a |
| name                      | provider-101                         |
| provider:network_type     | vlan                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  | 101                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | e0bddbc9210d409795887175341b7098     |
+---------------------------+--------------------------------------+
```

Note: The `shared` option allows any project to use this network.

Create a subnet on the provider network:
```bash
root@controller:~# openstack subnet create --network provider-vlan-101 --allocation-pool start=172.16.0.100,end=172.16.0.200 --gateway 172.16.0.1 --subnet-range 172.16.0.0/24 provider-vlan-101
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| allocation_pools  | 172.16.0.100-172.16.0.200            |
| cidr              | 172.16.0.0/24                        |
| created_at        | 2017-08-28T03:32:06Z                 |
| description       |                                      |
| dns_nameservers   |                                      |
| enable_dhcp       | True                                 |
| gateway_ip        | 172.16.0.1                           |
| host_routes       |                                      |
| id                | 61693a58-a984-4f7b-9097-dd5e489a88bd |
| ip_version        | 4                                    |
| ipv6_address_mode | None                                 |
| ipv6_ra_mode      | None                                 |
| name              | provider-vlan-101                    |
| network_id        | 2a33434f-ba29-4645-9b5d-24f1509066f1 |
| project_id        | 78c9c849237649a3a8c4526167427589     |
| revision_number   | 2                                    |
| segment_id        | None                                 |
| service_types     |                                      |
| subnetpool_id     | None                                 |
| updated_at        | 2017-08-28T03:32:06Z                 |
+-------------------+--------------------------------------+

// or via neutron cli
$ neutron subnet-create provider-vlan-101 172.16.0.0/24 \
    --name provider-101-subnet --gateway 172.16.0.1
```
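Before booting anything, the new network and subnet can be double-checked (names as created above):

```bash
$ openstack network show provider-vlan-101
$ openstack subnet show provider-vlan-101
```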
Verify network operation
On the controller node, verify creation of the `qdhcp` namespace:

```bash
$ ip netns
qdhcp-8b868082-e312-4110-8627-298109d4401c
```

Note: The `qdhcp` namespace might not exist until launching an instance.

Source the regular project credentials. The following steps use the `demo` project.

Create the appropriate security group rules to allow ping and SSH access to the instance.
For the OSC commands, refer to the chapters above; the nova CLI equivalents are sketched below.
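The nova CLI block itself is not shown in the note; the classic (now deprecated) commands for the same ping/SSH rules would be along these lines:

```bash
# allow ICMP (ping) and SSH from anywhere into the default security group
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
```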
```bash
root@controller:~# openstack network list
+--------------------------------------+-------------------+--------------------------------------+
| ID                                   | Name              | Subnets                              |
+--------------------------------------+-------------------+--------------------------------------+
| 152a4a85-dc52-4c62-9bbd-742eb4f7b8fa | provider-vlan-101 | 91d305ac-762a-4062-b1da-8a55d7ebd735 |
+--------------------------------------+-------------------+--------------------------------------+
root@controller:~# openstack server create --flavor m1.nano --image cirros --nic net-id=152a4a85-dc52-4c62-9bbd-742eb4f7b8fa --security-group default provider-vlan-instance
+-------------------------------------+-----------------------------------------------+
| Field                               | Value                                         |
+-------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                        |
| OS-EXT-AZ:availability_zone         |                                               |
| OS-EXT-SRV-ATTR:host                | None                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                          |
| OS-EXT-SRV-ATTR:instance_name       |                                               |
| OS-EXT-STS:power_state              | NOSTATE                                       |
| OS-EXT-STS:task_state               | scheduling                                    |
| OS-EXT-STS:vm_state                 | building                                      |
| OS-SRV-USG:launched_at              | None                                          |
| OS-SRV-USG:terminated_at            | None                                          |
| accessIPv4                          |                                               |
| accessIPv6                          |                                               |
| addresses                           |                                               |
| adminPass                           | e9YKhRWy7XmN                                  |
| config_drive                        |                                               |
| created                             | 2017-08-28T03:44:10Z                          |
| flavor                              | m1.nano (0)                                   |
| hostId                              |                                               |
| id                                  | 2b78d9ee-3476-4e6b-9c4e-09ad2b7f115e          |
| image                               | cirros (c17e391e-93e1-4480-9cf3-bf8623063e61) |
| key_name                            | None                                          |
| name                                | provider-vlan-instance                        |
| progress                            | 0                                             |
| project_id                          | 78c9c849237649a3a8c4526167427589              |
| properties                          |                                               |
| security_groups                     | name='default'                                |
| status                              | BUILD                                         |
| updated                             | 2017-08-28T03:44:10Z                          |
| user_id                             | d8efd16c30904a7992010abe4bdb9a2b              |
| volumes_attached                    |                                               |
+-------------------------------------+-----------------------------------------------+
```

Launch an instance with an interface on the provider network.
Note: This example uses a CirrOS image that was manually uploaded into the Image Service.
```bash
$ nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64-disk test_server
+--------------------------------------+-----------------------------------------------------------------+
| Property                             | Value                                                           |
+--------------------------------------+-----------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                          |
| OS-EXT-AZ:availability_zone          | nova                                                            |
| OS-EXT-SRV-ATTR:host                 | -                                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                               |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                                               |
| OS-EXT-STS:power_state               | 0                                                               |
| OS-EXT-STS:task_state                | scheduling                                                      |
| OS-EXT-STS:vm_state                  | building                                                        |
| OS-SRV-USG:launched_at               | -                                                               |
| OS-SRV-USG:terminated_at             | -                                                               |
| accessIPv4                           |                                                                 |
| accessIPv6                           |                                                                 |
| adminPass                            | h7CkMdkRXuuh                                                    |
| config_drive                         |                                                                 |
| created                              | 2015-07-22T20:40:16Z                                            |
| flavor                               | m1.tiny (1)                                                     |
| hostId                               |                                                                 |
| id                                   | dee2a9f4-e24c-444d-8c94-386f11f74af5                            |
| image                                | cirros-0.3.3-x86_64-disk (2b6bb38f-f69f-493c-a1c0-264dfd4188d8) |
| key_name                             | -                                                               |
| metadata                             | {}                                                              |
| name                                 | test_server                                                     |
| os-extended-volumes:volumes_attached | []                                                              |
| progress                             | 0                                                               |
| security_groups                      | default                                                         |
| status                               | BUILD                                                           |
| tenant_id                            | 5f2db133e98e4bc2999ac2850ce2acd1                                |
| updated                              | 2015-07-22T20:40:16Z                                            |
| user_id                              | ea417ebfa86741af86f84a5dbcc97cd2                                |
+--------------------------------------+-----------------------------------------------------------------+
```

Determine the IP address of the instance. The following step uses 203.0.113.3.
```bash
$ nova list
+--------------------------------------+-------------+--------+------------+-------------+--------------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks                 |
+--------------------------------------+-------------+--------+------------+-------------+--------------------------+
| dee2a9f4-e24c-444d-8c94-386f11f74af5 | test_server | ACTIVE | -          | Running     | provider-101=203.0.113.3 |
+--------------------------------------+-------------+--------+------------+-------------+--------------------------+
```

On the controller node or any host with access to the provider network, ping the IP address of the instance:
```bash
$ ping -c 4 203.0.113.3
PING 203.0.113.3 (203.0.113.3) 56(84) bytes of data.
64 bytes from 203.0.113.3: icmp_req=1 ttl=63 time=3.18 ms
64 bytes from 203.0.113.3: icmp_req=2 ttl=63 time=0.981 ms
64 bytes from 203.0.113.3: icmp_req=3 ttl=63 time=1.06 ms
64 bytes from 203.0.113.3: icmp_req=4 ttl=63 time=0.929 ms
--- 203.0.113.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
```

Obtain access to the instance.
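For example, over SSH with the CirrOS default user (the default password is shown on the VM console banner):

```bash
# log in to the test instance over the provider network
$ ssh cirros@203.0.113.3
```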
Test connectivity to the Internet:
```bash
$ ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms
64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms
64 bytes from 174.143.194.225: icmp_req=3 ttl=53 time=17.7 ms
64 bytes from 174.143.194.225: icmp_req=4 ttl=53 time=17.5 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms
```
[ISSUE] Troubleshooting VLAN DHCP
Console into the VM: it is blocked on the DHCP discover, and after the DHCP failure the metadata network is not OK either…

```bash
root@compute:~# virsh console 1
```

On the compute node, nothing is received after the discover was sent out:

```bash
root@compute:~# tcpdump -i brq152a4a85-dc -vv port 67 or port 68 -e -n
```

While the DHCP OFFER had been sent out from the DHCP agent:

```bash
tcpdump: listening on brq152a4a85-dc, link-type EN10MB (Ethernet), capture size 262144 bytes
```

The cause: the network adapter configured in VirtualBox was not in promiscuous mode. After allowing promiscuous mode on the adapter, DHCP succeeded:

```bash
udhcpc (v1.20.1) started
```
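For reference, when driving VirtualBox from the command line instead of the GUI, promiscuous mode can be allowed per NIC; the VM name and NIC index below are placeholders for your own setup:

```bash
# allow promiscuous mode on the provider NIC of the compute VM (example names)
$ VBoxManage modifyvm "compute" --nicpromisc2 allow-all
```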