Centos Lab7 : Cinder – Kilo

 

The OpenStack Block Storage Service (Cinder) enables management of persistent volumes, volume snapshots, and volume types. It interacts with Compute to provide volumes for instances.

In this lab, we will leverage the Linux Volume Manager (LVM) as the backend for Cinder, and will deploy the cinder-volume management process on the same node as the LVM components. This is just one of many possible backend models, but it is one of the simplest to deploy, as it can leverage the same underlying resources as the rest of the platform infrastructure. LVM does not provide direct access to the VMs; instead, as is standard for Cinder, iSCSI is used as the connection between the volume manager (LVM) and the VM manager (KVM) for _all_ connections, even those where source and sink map back to the same host.

In a multi-node environment, such as our lab, you typically install the Cinder API and scheduler on the Control Node, which is the aio node in our lab environment. You would also normally have a separate Storage Node that contains the disks that serve volumes and may run the cinder-volume process to manage that storage.

In this particular lab, we will deploy the cinder API and scheduler processes, along with the volume manager, all on the aio node. We could (but in this case won’t) deploy the cinder volume manager on the compute node as well.

Cinder Installation on the AIO Node

Step 1: If you have not already accessed the lab environment, SSH to the AIO node and source your openrc file.

Enter the following commands and type centos as the sudo password:

Copy
ssh centos@aio151
sudo su -
source ~/openrc.sh
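
If you want to confirm that the credentials were loaded (an optional check, not part of the original lab steps), list the OS_* environment variables that the openrc file exports:

Copy
env | grep OS_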

Step 2: First we’ll install the cinder components:

Copy
yum install openstack-cinder python-cinderclient -y

You just installed:

  • openstack-cinder: The Cinder API, scheduler, and volume manager components.
  • python-cinderclient: The Python client for the OpenStack Cinder API.

As we are going to use LVM for our physical disk management, we need to install it as well. The lvm2 package includes the physical volume, volume group, and logical volume management tools:

Copy
yum install lvm2 -y
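
As a quick sanity check (optional, and not part of the original lab steps), you can confirm that all three packages are now installed:

Copy
rpm -q openstack-cinder python-cinderclient lvm2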

As with all our services, we will create a database for Cinder and a database user that allows access to it:

Create Database for Storage Service

Step 3: Create the cinder database and cinder user (with password pass):

Copy
mysql -u root -ppass <<EOF
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'pass';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'pass';
exit
EOF
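
To verify that the grants work (an optional check, not in the original lab), connect as the new database user and confirm that the cinder database is visible:

Copy
mysql -u cinder -ppass -e "SHOW DATABASES;" | grep cinder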

Create the cinder user and map it to the service tenant

Step 4: Create the cinder user that will be used to authenticate with Keystone (any placeholder email address will do):

Copy
openstack user create cinder --password pass --email cinder@example.com

Example output:

+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |        cinder@example.com        |
| enabled  |               True               |
|    id    | f138fecff5f14ac5b137f91e34178f83 |
|   name   |              cinder              |
| username |              cinder              |
+----------+----------------------------------+

Then add the cinder user to the service tenant with the admin role.

Copy
openstack role add --project service --user cinder admin
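
If you want to confirm the assignment (optional), inspect the user and, depending on your python-openstackclient version, filter the role list by user and project:

Copy
openstack user show cinder
openstack role list --project service --user cinder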

Define the volume service and create the service endpoint

Step 5: Register the Block Storage Service with the Identity Service so that the other OpenStack services can locate it.

First create the service named cinder with the service type volume.

Copy
openstack service create --name cinder --description "OpenStack Block Storage" volume

Example output:

+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Block Storage      |
|   enabled   |               True               |
|      id     | 842d968e68114225a31f6a79abeb51f9 |
|     name    |              cinder              |
|     type    |              volume              |
+-------------+----------------------------------+

Now create the endpoint and associate it to the Cinder volume service id.

Note: we are going to register both the v1 (volume) and v2 (volumev2) service types with their endpoints. The cinder client supports and uses the v2 interface; however, some versions of the nova client still expect the v1 service type to be registered, so we register both.

Copy
openstack endpoint create --publicurl http://aio151:8776/v2/%\(tenant_id\)s --internalurl http://aio151:8776/v2/%\(tenant_id\)s --adminurl http://aio151:8776/v2/%\(tenant_id\)s --region RegionOne volume

Example output:

+--------------+-------------------------------------+
| Field        | Value                               |
+--------------+-------------------------------------+
| adminurl     | http://aio151:8776/v2/%(tenant_id)s |
| id           | 5aa368a42e9147a289dbd2e7677d9b78    |
| internalurl  | http://aio151:8776/v2/%(tenant_id)s |
| publicurl    | http://aio151:8776/v2/%(tenant_id)s |
| region       | RegionOne                           |
| service_id   | 4488fc895e2d48f2897ee5657847b4c7    |
| service_name | cinder                              |
| service_type | volume                              |
+--------------+-------------------------------------+

Now register the v2 service and the service endpoint:

Copy
openstack service create --name cinderv2 --description "OpenStack Block Storage v2" volumev2

Example output:

+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |    OpenStack Block Storage v2    |
|   enabled   |               True               |
|      id     | d5cce7f3a6db46438c3a5d5dd049dcf6 |
|     name    |             cinderv2             |
|     type    |             volumev2             |
+-------------+----------------------------------+
Copy
openstack endpoint create --publicurl http://aio151:8776/v2/%\(tenant_id\)s --internalurl http://aio151:8776/v2/%\(tenant_id\)s --adminurl http://aio151:8776/v2/%\(tenant_id\)s --region RegionOne volumev2

Example output:

+--------------+-------------------------------------+
| Field        | Value                               |
+--------------+-------------------------------------+
| adminurl     | http://aio151:8776/v2/%(tenant_id)s |
| id           | a98614449e6f404ab4151a5076503abc    |
| internalurl  | http://aio151:8776/v2/%(tenant_id)s |
| publicurl    | http://aio151:8776/v2/%(tenant_id)s |
| region       | RegionOne                           |
| service_id   | 394086d9534547d38d12214d85c4eabf    |
| service_name | cinderv2                            |
| service_type | volumev2                            |
+--------------+-------------------------------------+
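
To double-check the registrations (optional), list the services and endpoints that Keystone now knows about:

Copy
openstack service list
openstack endpoint list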

Configure Cinder Service

Step 6: Update the cinder configuration file located at /etc/cinder/cinder.conf with the openstack-config tool. As with other services, we’ll configure the rabbit connection, the authentication strategy and authentication parameters (based on the user we created earlier in the lab), the database connection, and the local IP address (needed to support the cinder-volume target address):

Copy
cp -f /usr/share/cinder/cinder-dist.conf /etc/cinder/cinder.conf
Copy
chown -R cinder:cinder /etc/cinder/cinder.conf
Copy
openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/cinder/cinder.conf DEFAULT logdir /var/log/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT state_path /var/lib/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT volumes_dir /etc/cinder/volumes
openstack-config --set /etc/cinder/cinder.conf DEFAULT iscsi_helper lioadm
openstack-config --set /etc/cinder/cinder.conf DEFAULT rootwrap_config /etc/cinder/rootwrap.conf
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.1.64.151
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://aio151:5000/v2.0
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken identity_uri http://aio151:35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host 127.0.0.1
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password pass
openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:pass@aio151/cinder
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_host aio151
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid guest
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password pass
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit lock_path /var/lib/cinder/tmp
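
openstack-config can also read values back, so you can spot-check a setting before moving on (an optional step):

Copy
openstack-config --get /etc/cinder/cinder.conf database connection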

Now that our configuration is ready (and the database connection information is defined), we can populate the database using the cinder-manage tool:

Copy
su -s /bin/sh -c "cinder-manage db sync" cinder

Note: Ignore deprecation warnings.
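
If you want to confirm that the schema was created (an optional check), list the tables in the cinder database:

Copy
mysql -u cinder -ppass cinder -e "SHOW TABLES;"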

Before we start Cinder and the local Cinder volume-manager, we have to ensure that there is storage available to back the environment. We’ll configure LVM for this next.

Enable and start the LVM service

Start the LVM metadata service and configure it to start when the system boots:

Copy
sudo systemctl enable lvm2-lvmetad.service
sudo systemctl start lvm2-lvmetad.service
sudo systemctl status lvm2-lvmetad.service

Configure physical hard disks

Step 7: First we need to find out if our system has a “spare” disk available to use for Cinder. We can use fdisk’s list option (-l) to see what disks are attached to the system:

Copy
fdisk -l

Example Output:

Disk /dev/vda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000af71d

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    41929649    20963801   83  Linux

Disk /dev/vdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/vdc: 2 MB, 2097152 bytes, 4096 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

This output shows three disks: /dev/vda, which holds the operating system partition; /dev/vdb, which has nothing configured on it; and /dev/vdc, a very small (2 MB) device that is of no use here. We can safely use /dev/vdb because it has nothing configured on it. We’ll use the pvcreate command to write the LVM metadata onto the disk so that the LVM tools can then manage volume groups and logical volumes on it.

Copy
pvcreate -f /dev/vdb

Example output:

Physical volume "/dev/vdb" successfully created

Now that the physical disk is ready, we can create a volume group on it. We’ll use the name cinder-volumes, as that is the default the Cinder volume manager looks for. We could also change the name, but that would require reconfiguring the volume_group option in cinder.conf.

Copy
vgcreate "cinder-volumes" /dev/vdb

Example output:

Volume group "cinder-volumes" successfully created

We can validate that LVM sees both the physical volume and the volume group we just created. First the physical volume:

Copy
pvdisplay /dev/vdb

Example output:

--- Physical volume ---
  PV Name               /dev/vdb
  VG Name               cinder-volumes
  PV Size               10.00 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              2559
  Free PE               2559
  Allocated PE          0
  PV UUID               2AIG41-7Ozm-w9tH-oB2Y-eE2f-uw02-6GVwfB

And for the volume group:

Copy
vgdisplay cinder-volumes

Example output:

--- Volume group ---
  VG Name               cinder-volumes
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               10.00 GiB
  PE Size               4.00 MiB
  Total PE              2559
  Alloc PE / Size       0 / 0   
  Free  PE / Size       2559 / 10.00 GiB
  VG UUID               7otzC4-F15t-1jck-11qU-spom-adJ1-k9336p

The volume group therefore has 10 GiB of free space (2559 free physical extents × 4 MiB per extent ≈ 10 GiB), so we could create, for example, ten 1 GiB volumes or a single 10 GiB volume in this environment.

Step 8: Start Cinder Services:

Finally we can start the Cinder services, and enable them to run after a system reboot:

Copy
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service target.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service target.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service target.service
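
Once the units are active, the scheduler and volume manager should report in to the Cinder service list with state “up” (it can take a few seconds after start-up):

Copy
cinder service-list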

Test Cinder Service

Step 9: Create a 5GB cinder volume named Vol1:

Copy
cinder create --display-name Vol1 5

Example output:

+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2016-04-29T09:21:02.148261      |
| display_description |                 None                 |
|     display_name    |                 Vol1                 |
|      encrypted      |                False                 |
|          id         | 5b9069e0-f46b-4cb1-9635-c4b705b09816 |
|       metadata      |                  {}                  |
|     multiattach     |                false                 |
|         size        |                  5                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

You will notice that the “status” is listed as “creating”; this output only confirms that Cinder accepted the create request. We can now check whether the volume has actually been created and is ready for use:

Copy
cinder list

Example output:

+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ba91671f-576e-4f40-a067-eb1dfdd9809b | available |     Vol1     |  5   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
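
Behind the scenes, the Cinder LVM driver carves this volume out of the cinder-volumes volume group as a logical volume named volume-<id>. You can confirm that with the LVM tools on the aio node (an optional check):

Copy
lvs cinder-volumes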

Attach Cinder Volume to an instance

Once the volume status is “available”, we can use the volume we created (assuming you still have a virtual machine running from the previous lab). But first we’ll look at the current state of the disk environment on the test-aio machine.

Log into the test-aio VM via SSH so that we can validate the current disk state. Specifically, we expect to see a disk for the base OS, possibly a very small disk for the config-drive function, and nothing else:

Copy
test_aio_ip=$(nova floating-ip-list | grep `nova list | awk '/ test-aio / {print $2}'` | awk '/ | / { print $4}')
ssh cirros@${test_aio_ip}
sudo fdisk -l
exit

Example output:

$ sudo fdisk -l
Disk /dev/vda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors 
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start     End    Blocks   Id System
/dev/vda1 *    16065  2088449  1036192+ 83 Linux

Step 10: Attach your new volume to the test-aio VM:

To attach a volume to an instance, you need the volume name or id and the instance name or id of the test-aio instance that you created in the previous lab.

On the aio151 node, run the commands below, which extract the volume id from the cinder list output for the volume we created and use it to attach the volume to the test-aio instance.

Copy
volume_id=`cinder list  | awk '/ Vol1 / {print $2}'`
nova volume-attach test-aio ${volume_id} auto

Example output:

+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 49c4c51f-653c-4078-9f75-b606c18b2830 |
| serverId | 68e0d27d-c21b-4b62-8ffe-719f71f1e1c2 |
| volumeId | 49c4c51f-653c-4078-9f75-b606c18b2830 |
+----------+--------------------------------------+
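
The volume should now show a status of in-use and an attachment to the test-aio instance id:

Copy
cinder list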

Now an ssh session to the test-aio VM should show the newly attached volume, which we can validate by running the fdisk command again:

Copy
ssh cirros@${test_aio_ip}
sudo fdisk -l

Try to find the disk backed by the Cinder volume; specifically, we should see a disk that did not exist in the previous listing, in this case /dev/vdb.

$ sudo fdisk -l
Disk /dev/vda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors 
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start     End    Blocks   Id System
/dev/vda1 *    16065  2088449  1036192+ 83 Linux

Disk /dev/vdb: 5368 MB, 5368709120 bytes
16 heads, 63 sectors/track, 10402 cylinders, total 10485760 sectors 
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/vdb doesn't contain a valid partition table

In order to actually use this disk, we can first format and then mount the disk (these commands are again run on the test-aio VM):

Copy
sudo mkfs.ext4 /dev/vdb
sudo mount /dev/vdb /mnt
Copy
df -h
Copy
exit
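
Back on the aio node, you can optionally peek at the iSCSI plumbing that carries this attachment: the LIO target should export a LUN for the volume, and the local iSCSI initiator (used by nova-compute, which runs on the same host in this all-in-one setup) should have a session to it. The exact output depends on your environment:

Copy
targetcli ls
iscsiadm -m session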

At this point you have enabled and tested the basic functionality of Cinder. You now have an extra disk attached to the test-aio VM, and if time permits, try the following (the sketch after this list shows one possible set of commands):
1) write a file to the disk (e.g. “touch /mnt/file.txt”)
2) unmount the disk from test-aio
3) detach the volume from test-aio
4) attach the volume to test-compute
5) mount the disk (don’t format it again)
6) verify that the file is now available on test-compute
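
A minimal sketch of those steps, assuming test-compute is still running from the previous lab and the volume is still named Vol1; the mount/umount commands are run inside the respective instances and the nova/cinder commands on the aio node (the device name inside test-compute may differ, so check sudo fdisk -l first):

Copy
# inside test-aio: write a marker file, then cleanly unmount the volume
sudo touch /mnt/file.txt
sudo umount /mnt

# on the aio node: detach the volume from test-aio and attach it to test-compute
volume_id=`cinder list | awk '/ Vol1 / {print $2}'`
nova volume-detach test-aio ${volume_id}
nova volume-attach test-compute ${volume_id} auto

# inside test-compute: mount the existing filesystem and check for the file
sudo mount /dev/vdb /mnt
ls /mnt
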
Well done! You have created a block storage volume using LVM as backend storage and successfully attached it to one of your VMs.

If time permits, review the lab to get a reminder of what you have accomplished.

In the next lab you will install the OpenStack Dashboard and investigate its capabilities. As you do so, consider which of the actions you complete in Horizon you have already completed with the CLI. Is there functionality that is easier to achieve through the dashboard? Through the CLI? Is there any functionality that is not available to you in the dashboard?