This is the third post on the installation of OpenStack Rocky in an Ubuntu-based deployment.
In this post, I am explaining how to install the compute nodes. Please check the previous posts How to install OpenStack Rocky – part 1 and How to install OpenStack Rocky – part 2 to learn how to prepare the servers that will host our OpenStack Rocky platform and how to install the controller.
In the first post, we prepared the network for the compute elements, as described in the next figure:
And in the second post, we installed the controller, where we configured neutron to be able to create on-demand networks, and we also configured nova to automatically discover new compute elements (the discover_hosts_in_cells_interval value in nova.conf).
Installation of the compute elements
In this section, we are installing nova-compute and neutron in the compute elements.
Remember… we prepared one interface connected to the provider network without an IP address (enp1s0f0), and another interface connected to the management network with an IP address and with the ability to access the internet via NAT (enp1s0f1). We also disabled IPv6. We are also able to ping the controller using the management network (we configured the /etc/hosts file).
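Before continuing, it may be worth double-checking these prerequisites. A minimal sanity check could be the next one (assuming the interface names enp1s0f0 and enp1s0f1 used in the previous posts):

# the provider interface must be UP, but without an IP address
ip addr show enp1s0f0
# the management interface must have its management IP address
ip addr show enp1s0f1
# the controller must be reachable through the management network
ping -c 3 controller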
First, we need to install chrony:
$ apt install chrony
And now we have to update the file /etc/chrony/chrony.conf to set the controller as the NTP server and to disable the default NTP pools. The modified fragment of the file should look like the next one:
server controller iburst
#pool ntp.ubuntu.com iburst maxsources 4
#pool 0.ubuntu.pool.ntp.org iburst maxsources 1
#pool 1.ubuntu.pool.ntp.org iburst maxsources 1
#pool 2.ubuntu.pool.ntp.org iburst maxsources 2
At this point, we need to restart chrony and we’ll have all the prerequisites installed:
$ service chrony restart
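To make sure that the time synchronization works, we can ask chrony for its sources; the controller should appear as the only NTP source:

$ chronyc sources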
Activate the OpenStack packages
We have to enable the OpenStack repository and install the basic command-line client:
apt install software-properties-common
add-apt-repository cloud-archive:rocky
apt update && apt -y dist-upgrade
apt install -y python-openstackclient
Now we’ll install the nova-compute elements (we are installing the KVM subsystem):
apt install -y nova-compute nova-compute-kvm
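Since we are installing the KVM flavour of nova-compute, it is advisable to check that the CPU supports hardware virtualization. If the next command returns 0, the CPU does not support it, and virt_type should be set to qemu in the [libvirt] section of /etc/nova/nova-compute.conf:

$ egrep -c '(vmx|svm)' /proc/cpuinfo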
And now we create the configuration file /etc/nova/nova.conf with the next content:
[DEFAULT]
# BUG: log_dir = /var/log/nova
lock_path = /var/lock/nova
state_path = /var/lib/nova
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 192.168.1.241
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[api_database]
connection = sqlite:////var/lib/nova/nova_api.sqlite

[cells]
enable = False

[cinder]
os_region_name = RegionOne

[database]
connection = sqlite:////var/lib/nova/nova.sqlite

[glance]
api_servers = http://controller:9292

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS

[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://188.8.131.52:6080/vnc_auto.html
This is the basic content for our installation; the remaining variables will take their default values.
In this configuration file we MUST customize two important values (an example follows the list):
- my_ip, to match the management IP address of the compute element (in this case, the IP address of fh01).
- novncproxy_base_url, to match the URL that includes the public address of your horizon server. It is important not to use (e.g.) http://controller:6080…, because that is an internal address that will probably not be routable from the browser with which you will access horizon.
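As an illustration, both values could be adjusted from the command line as follows (PUBLIC_IP is just a placeholder: substitute it with the address of your horizon server that is reachable from the browsers):

# set my_ip to the management IP address of this compute element
sed -i 's/^my_ip = .*/my_ip = 192.168.1.241/' /etc/nova/nova.conf
# set novncproxy_base_url to a URL that is routable from the browsers
sed -i 's#^novncproxy_base_url = .*#novncproxy_base_url = http://PUBLIC_IP:6080/vnc_auto.html#' /etc/nova/nova.conf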
Once these values have been customized, we just need to restart nova:
service nova-compute restart
In the guides you will probably see that you are instructed to execute a command like this one (su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova) in the controller but, again, we configured the controller to periodically discover the compute elements. So executing that command is not needed, although it will not break our deployment.
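In any case, we can verify from the controller that the new compute element has been discovered; fh01 should appear in the list with its state up:

$ openstack compute service list --service nova-compute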
Finally, we need to install and configure neutron, to be able to manage the OpenStack network. We just need to issue the next command:
apt install neutron-linuxbridge-agent
Once the components are installed, we have to update the file /etc/neutron/neutron.conf with the following content:
[DEFAULT]
lock_path = /var/lock/neutron
core_plugin = ml2
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[agent]
root_helper = "sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf"

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
And also update the file /etc/neutron/plugins/ml2/linuxbridge_agent.ini with the next content:
[linux_bridge]
physical_interface_mappings = provider:enp1s0f0

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true

[vxlan]
enable_vxlan = true
local_ip = 192.168.1.241
l2_population = true
In this second file, we must configure the interface that is connected to the provider network (in my case, enp1s0f0), and set local_ip to the management IP address of the compute element (in my case, 192.168.1.241).
At this point, we just need to restart nova and the neutron agent:
# service nova-compute restart
# service neutron-linuxbridge-agent restart
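Again, we can verify from the controller that the linux bridge agent of the new compute element has been properly registered; a Linux bridge agent entry for fh01 should appear in the list with its state up:

$ openstack network agent list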
And that’s all, folks.
Now you should be able to execute commands using OpenStack, create virtual machines, create networks, etc., and to install horizon. You can find more information on using OpenStack in the official documentation.
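For example, once a flavor, an image and a network are available (the names below are just illustrative examples, not objects created in this guide), a virtual machine can be launched like this:

$ openstack server create --flavor m1.small --image ubuntu18.04 --network private myfirstvm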
In this blog, I am writing some other posts that will cover topics such as:
- Installing horizon and configuring it to use https.
- Updating noVNC to the latest release.
- Installing cinder, and using it as the backend for the images (this is very useful in case you have a SAN).
- Activating PCI Passthrough to be able to expose GPUs to the VMs.
Moreover, I will try to write some posts on debugging the OpenStack network and understanding how the VM images and volumes are connected and used in the VMs.