OpenStack - Centos Lab3 : Nova – Kilo
In this Lab we will deploy the OpenStack Compute Service, aka Nova.
Nova is a cloud compute controller, which is the core of any IaaS system. Nova interacts with Keystone for authentication, Glance for images, Neutron for network services (though it still has its own embedded option as well), and Horizon as a user and administrative graphical (web-based) interface. Nova can manage a number of different underlying compute, storage, and network services, and is in the process of adding the ability to manage physical non-virtualized compute components as well!
In this lab, we’ll focus on deploying the compute control components (API servers, etc.) as well as a compute agent that will run on the same server (All-In-One mode). In a later lab, we will add a second separate compute instance to highlight how additional services are added, and capacity in the cloud can be scaled.
Compute Service Installation
Step 1: As with the previous labs, you will need to SSH into the aio node.
If you have logged out, SSH into your AIO node:
ssh centos@aio151
If asked, the user password (as with the sudo password) is centos.
Then become root via sudo:
sudo su -
Then we’ll source the OpenStack administrative user credentials. As you’ll recall from the previous lab, this sets a set of environment variables (OS_USERNAME, etc.) that are then picked up by the command-line tools (like the openstack and nova tools we’ll be using in this lab) so that we don’t have to pass the equivalent --os-username style parameters for each command we run:
source ~/openrc.sh
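As a sketch of what sourcing the file does: it simply exports environment variables that the CLI tools read. The values below are illustrative assumptions; your openrc.sh contains this lab's actual credentials.

```shell
# Minimal sketch of an openrc-style file. The variable names are the ones the
# OpenStack CLIs look for; the values here are assumptions for illustration.
cat > /tmp/openrc_demo.sh <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=pass
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://aio151:35357/v2.0
EOF
. /tmp/openrc_demo.sh
# The CLI tools pick these up instead of per-command --os-* flags:
echo "$OS_USERNAME in $OS_TENANT_NAME against $OS_AUTH_URL"
```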
Install Compute Controller Service packages
Step 2: You will now install a number of nova packages that will provide the Compute services on the aio node:
yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient -y
You have just installed:
- openstack-nova-api: Accepts and responds to end-user Compute API calls.
- openstack-nova-cert: Manages x509 certificates.
- openstack-nova-conductor: Acts as an intermediary between compute nodes and the Nova database.
- openstack-nova-console: Authorizes tokens for users that console proxies provide.
- openstack-nova-novncproxy: Provides a proxy for accessing running instances through a VNC connection using a web browser.
- openstack-nova-scheduler: Determines how to dispatch compute and volume requests.
- python-novaclient: Client library for the OpenStack Compute API.
Install Compute Node packages
Step 3: While the previous step installed the service components, we also want to configure a local compute agent to manage our local KVM hypervisor, and we’ll install the sysfsutils package to add the required local tools for managing virtual disk connectivity.
yum install openstack-nova-compute sysfsutils -y
As with our previous steps, we’ll create the database for Nova to store its state in, and configure the nova user access credentials (again, the super-secret password pass):
Create Database for Compute Service
Step 4: Create the nova database for OpenStack Nova by logging into MariaDB with the password pass:
mysql -uroot -ppass
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'pass';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'pass';
exit
Step 5: Create a nova service user in Keystone
We need to create the user that Nova uses to authenticate with the Identity Service. As with Glance, we’ll add the nova user to the service tenant and give the user the admin role:
openstack user create nova --password pass
Associate the user with the tenant and role:
openstack role add --project service --user nova admin
While we’re at it, we also need to configure the service and endpoint catalog entries in Keystone.
The service endpoint is just like the one we created in Glance, but now we’re using the well-known name of nova, and the type tag of compute:
openstack service create --name nova --description "Compute service" compute
openstack endpoint create --publicurl http://aio151:8774/v2/%\(tenant_id\)s --internalurl http://aio151:8774/v2/%\(tenant_id\)s --adminurl http://aio151:8774/v2/%\(tenant_id\)s --region RegionOne compute
Example output:
+--------------+-------------------------------------+
| Field | Value |
+--------------+-------------------------------------+
| adminurl     | http://aio151:8774/v2/%(tenant_id)s |
| id           | a741f82c58ac475d8519cf8e9431ec0c    |
| internalurl  | http://aio151:8774/v2/%(tenant_id)s |
| publicurl    | http://aio151:8774/v2/%(tenant_id)s |
| region | RegionOne |
| service_id | c6f1f6c038f648448e560b6cb5075556 |
| service_name | nova |
| service_type | compute |
+--------------+-------------------------------------+
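A note on the backslashes in the endpoint create command above: they only protect the parentheses from the shell; the value Keystone actually stores is a Python-style %(tenant_id)s substitution template that Nova fills in per request. Echoing the escaped form shows the stored value:

```shell
# The \( and \) keep the shell from treating the parentheses as syntax;
# after shell processing the literal template string remains.
echo http://aio151:8774/v2/%\(tenant_id\)s
```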
Configure Compute Service
Step 6: Configure the common Compute service connections to the internal components of Nova (RabbitMQ), the database, and Keystone, along with a few less common settings.
As with Glance, we configure RabbitMQ connectivity to allow the nova processes to leverage the message queue for communication. We’ll also configure the database connection for those services that talk directly to the database (principally the API service, the Scheduler, and the Compute Conductor). We’ll also establish a connection to Keystone so that Nova can authenticate itself for communications with other services (e.g. talking to Glance), and accept and validate client communications (the nova CLI authenticating with Nova via Keystone).
We’ll also need to configure the VNC server (keyboard/video/mouse via web browser for “console” access to our virtual machines) and a connection to Glance (we’ll need to be able to retrieve the images that we store in Glance for our virtual machines).
In this case we’ll edit the nova.conf file using the openstack-config tool rather than editing the file directly. This reduces the likelihood that we place a value in the wrong location (e.g. under the wrong [heading]). We’ll operate on the /etc/nova/nova.conf file for the service configuration(s), and then modify the same configuration file for the compute service configuration.
First we’ll establish the required communications parameters for RabbitMQ. These parameters go in the [DEFAULT] section.
The format is:
openstack-config --set {config_file} {section} {parameter} {value}
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host aio151
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_password pass
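There is no magic in the openstack-config tool: each --set is just an INI edit that places a key=value pair under the named [section]. The sketch below mimics the effect on a scratch file (the path and values are illustrative, not the real /etc/nova/nova.conf):

```shell
# What "openstack-config --set FILE DEFAULT rabbit_host aio151" amounts to:
# the key lands under the [DEFAULT] section of the INI file.
cat > /tmp/nova_demo.conf <<'EOF'
[DEFAULT]
rpc_backend = rabbit
EOF
# Mimic the --set by appending under [DEFAULT] (the real tool also updates
# an existing key in place rather than duplicating it):
printf 'rabbit_host = aio151\n' >> /tmp/nova_demo.conf
grep -A2 '^\[DEFAULT\]' /tmp/nova_demo.conf
```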
Next we’ll configure the database communications, again using the openstack-config tool.
openstack-config --set /etc/nova/nova.conf database connection 'mysql://nova:pass@aio151/nova'
As should be obvious, this is a much more efficient method than manually editing the files, and does reduce the likelihood of “placement” errors. It’s still important to get the actual parameters right as well!
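The connection value itself is an SQLAlchemy-style URL of the form driver://user:password@host/database. A quick shell sketch pulling apart the value used in this lab:

```shell
# Decompose the database connection URL into its parts using POSIX
# parameter expansion (%% trims the longest suffix, # the shortest prefix).
url='mysql://nova:pass@aio151/nova'
driver=${url%%://*}     # mysql
rest=${url#*://}        # nova:pass@aio151/nova
user=${rest%%:*}        # nova
host_db=${rest#*@}      # aio151/nova
host=${host_db%%/*}     # aio151
db=${host_db#*/}        # nova
echo "driver=$driver user=$user host=$host db=$db"
```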
Next we’ll carry on with the Keystone configuration. Much as with Glance, we tell Nova “where” Keystone lives, but in this case we differentiate between the authorization and identity endpoints: one is a validation endpoint (“I’d like a token for myself, please”) and the other is used to verify client tokens (“is this client/token valid?”).
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri 'http://aio151:5000/v2.0'
openstack-config --set /etc/nova/nova.conf keystone_authtoken identity_uri 'http://aio151:35357'
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password pass
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
Next we’ll provide the configuration for the VNC proxy process, which provides a web based ‘Keyboard Video Mouse’ interface for interacting with the console of our virtual compute devices.
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip '10.1.64.151'
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen '10.1.64.151'
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address '10.1.64.151'
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url 'http://10.1.64.151:6080/vnc_auto.html'
Next we’ll add the pointer to Glance so that Nova can interoperate with the Image service.
openstack-config --set /etc/nova/nova.conf glance host 'aio151'
Modify the Hypervisor configuration for Nova-Compute
Step 7: Determine the hypervisor type
You must determine whether your system’s processor and/or hypervisor supports hardware acceleration for virtual machines, as this will determine whether we can use the KVM virtualization engine or whether we need instead to leverage the QEMU emulator. It turns out that the interfaces and management of these two systems are now identical, but there are backend differences, and it is in order to address those differences that we need to determine _what_ the right configuration is.
Run the following command to determine if KVM will function on your machine:
egrep -c '(vmx|svm)' /proc/cpuinfo
As our systems are already virtualized, we will get a value of zero, and so we must configure libvirt to use qemu instead of kvm in the [libvirt] section of /etc/nova/nova.conf. Again with the openstack-config client:
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
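If you’d rather script the decision than eyeball the count, a small sketch (assuming the same egrep test as above; it prints kvm on hardware with VT-x/AMD-V flags, otherwise qemu):

```shell
# Pick the virt_type automatically: kvm when the CPU exposes hardware
# virtualization flags, qemu otherwise.
if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -gt 0 ]; then
  virt_type=kvm
else
  virt_type=qemu
fi
echo "$virt_type"
# Then apply it, e.g.:
#   openstack-config --set /etc/nova/nova.conf libvirt virt_type "$virt_type"
```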
That should complete the “edits” we need to make to the configuration file. We have a few more tasks to complete now that the nova tools can find the right connection parameters for communications.
Step 8: Populate the database tables for the nova database.
We’ll use the same model we used with glance, and leverage the nova-manage
tool to migrate the database from nothing to “current” state.
su -s /bin/sh -c "nova-manage db sync" nova
Then we’ll enable and start (or re-start) the services that we’ve configured this far.
systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl status openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
And we also need to start the Nova compute services so that we can eventually turn on a VM!
sudo systemctl enable libvirtd.service openstack-nova-compute.service
sudo systemctl start libvirtd.service openstack-nova-compute.service
sudo systemctl status libvirtd.service openstack-nova-compute.service
Step 9: Verify Nova Operations:
Unfortunately, even though we have Nova properly configured, we can’t yet turn on a VM. This is because we have no network service yet, and we’ve not enabled the Nova Network model services at this point. In the next lab we’ll enable Neutron so that we finally have network functionality, and will then be able to actually _use_ this OpenStack environment. Until then, we can at least ensure that the OpenStack Compute service is healthy and ready to start serving us as soon as the network comes online.
First, we can see whether the services that make up Nova (api, scheduler, conductor, consoleauth, cert, and at least our first compute node) have checked in with the API service. This will let us know if our inter-process messaging (RabbitMQ), database (MariaDB), and Keystone connections are functional:
nova service-list
Example output:
+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-conductor | aio151 | internal | enabled | up | 2015-04-27T12:54:25.000000 | - |
| 2 | nova-consoleauth | aio151 | internal | enabled | up | 2015-04-27T12:54:25.000000 | - |
| 3 | nova-scheduler | aio151 | internal | enabled | up | 2015-04-27T12:54:25.000000 | - |
| 4 | nova-cert | aio151 | internal | enabled | up | 2015-04-27T12:54:25.000000 | - |
| 5 | nova-compute | aio151 | nova | enabled | up | 2015-04-27T12:54:19.000000 | - |
+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+
We had also previously configured a connection to Glance, and we should be able to ask Nova to ask Glance what images are available as in:
nova image-list
Example output:
+--------------------------------------+---------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| ff10d15d-d75d-4bda-b9bc-342213a95b03 | CirrOS 0.3.2 | ACTIVE | |
| 8f90a562-e995-4f86-a7c1-b76a901f12b5 | cirros_0.3.2_direct | ACTIVE | |
+--------------------------------------+---------------------+--------+--------+
If time permits, review the lab to get a reminder of what you have accomplished.
In the next Lab, we’ll install the Neutron controller, and connect Nova and Neutron together so that we can spin up a VM!