
OpenStack - Centos Lab6 : Neutron GRE Tunnel – Kilo


Finally we have all the elements in place for us to actually spin up VMs. We have enabled:

  • Keystone: to allow users and the system itself to authenticate and authorize actions, and to discover RESTful communications endpoints
  • Glance: in order to store virtual machine images that we can have the Nova compute engine call on for our deployments
  • Nova: in order to orchestrate and manage the deployment of virtual machines, associate networks, create local “ephemeral disks”, etc.
  • Neutron: to support the management and deployment of virtual L2 networks and L3 subnets and routers, so that virtual machines can communicate with each other and with the rest of the Internet (if appropriate)

There are still a few more pieces of information we’ll want to collect for a normal deployment scenario, so we’ll do that and then get started with bringing compute nodes online!

Caution: In this lab we are switching back to the control node, so make sure you are working on aio and not on compute!

Step 1: Ensure you are logged in to your AIO node, and have elevated your privileges to the root user.

ssh centos@aio151
sudo su -

Before we spin up our first Virtual Machine, we want to ensure that we’ll be able to log into it once it has come online. As most cloud images do not have any passwords set (and in fact usually disable password access), we will need to pass an ssh “public” key to our instances. We pass the public key because it is intended to be placed into areas where secrecy is not a concern. As an example, many developers have accounts on the https://github.com service, and it is possible for _anyone_ to download a user’s public keys from GitHub by simply pointing a browser or HTTP tool at https://github.com/{username}.keys. This highlights how little protection a public key needs. The opposite is true for the private key: its security must be maintained, to limit the chance of someone acquiring the key and gaining access to services secured with your public key!
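For example, a plain HTTP client will happily hand over someone’s public keys (substitute a real GitHub username for the placeholder):

curl https://github.com/{username}.keys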

In the lab environment, we’ll create a new key pair (public and private), and we’ll keep the private key on our AIO node (this is our “secure” end), and we’ll inject our public key into the VMs we deploy. We’ll do this via “Cloud-Init” and the metadata service.
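As a rough sketch of what happens inside the guest: cloud-init fetches the key from the EC2-compatible metadata endpoint and appends it to the default user’s authorized_keys, conceptually something like:

curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key >> ~/.ssh/authorized_keys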

Step 2: Generate an SSH keypair and upload to Nova

On the AIO node, execute ssh-keygen to generate the keypair, passing the following parameters:

  • -t rsa: create an RSA algorithm key pair
  • -N '': we do not want a passphrase for this key (we would likely want one for a more secure setup)
  • -f ~/.ssh/id_rsa: place the file into the “default” location for keys on the machine
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

Note: If it asks whether to overwrite an existing key, answer “y”.

Example output:

Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
96:20:a5:ce:be:6e:f9:ef:3a:20:92:ce:e3:c0:99:81 root@aio151.novalocal
The key's randomart image is:
+--[ RSA 2048]----+
|      .          |
|     o           |
|    o .          |
|.  o . . .       |
|Eo  o   S        |
|+ =..  .         |
|+= ..o           |
|.+  o..          |
|...oo.o=o        |
+-----------------+

Next we ensure we’ve sourced our OpenStack credentials so that we can leverage the nova CLI tool to upload our newly minted public key into nova’s keypair storage:

source ~/openrc.sh
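If you want to confirm the credentials are loaded (assuming openrc.sh exports the usual OS_* variables), you can check with:

env | grep OS_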

And then we ask nova to store the keypair under the name mykey. We’ll use this name to tell nova which public key to inject into our VMs.

nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey

We can now list the keypairs to see what keys we have available (and to ensure ours was actually uploaded):

nova keypair-list

Example Output:

+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | 96:20:a5:ce:be:6e:f9:ef:3a:20:92:ce:e3:c0:99:81 |
+-------+-------------------------------------------------+

Step 3: Security groups and security group rules

We still have one more modification that we’ll likely want to make, and that is to modify the default security group, or to add a new one. The default group has rules designed to let communications originating from the VM occur (egress), but to disallow all incoming traffic; the only exception is that VMs in the same default group can talk to each other. We’re going to want to test connectivity with the ping utility (ICMP), connect to the VM with ssh (TCP port 22), and will likely want to allow HTTP traffic (TCP port 80) as well. To do this, we could either create a new group and include that in our VM boot process, or, since it is highly likely that we’d want these capabilities on all our VMs, we can simply modify the default group.
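If we preferred not to touch the default group, a minimal sketch of the new-group alternative might look like the following (the group name web is only an illustration):

nova secgroup-create web "web and ssh access"
nova secgroup-add-rule web tcp 22 22 0.0.0.0/0
nova secgroup-add-rule web tcp 80 80 0.0.0.0/0

We would then reference it at boot time, e.g. with nova boot --security-groups web ... . In this lab, though, we’ll simply modify the default group.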

First we’ll enable HTTP on port 80. We modify the ‘default’ group by adding a tcp protocol rule for port 80 (the port is given twice because the command accepts a range of ports, such as 80-120). We pass a source range of 0.0.0.0/0, meaning that “any” host is allowed to talk to VMs behind this group. We could also have specified a source group (and its IP range), but in this case we’re leaving it wide open.

nova secgroup-add-rule default tcp 80 80 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

We’ll do a similar rule for ICMP, but rather than only allowing ping, we’re allowing “all” ICMP traffic to pass through.

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

Then list the rules on the default group to verify:

nova secgroup-list-rules default

Example Output:

+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
|             |           |         |           | default      |
|             |           |         |           | default      |
+-------------+-----------+---------+-----------+--------------+
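Note the two rows at the bottom with Source Group default: these are the stock rules that let VMs in the default group reach each other. If we wanted to add a source-group rule of our own, it would look something like the following (assuming the classic nova secgroup-add-group-rule sub-command is available in this client version; port 3306 is purely an illustration):

nova secgroup-add-group-rule default default tcp 3306 3306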

Step 4: Launch an instance:

Well, we’re not quite there yet; now it’s a matter of collecting the appropriate information (a generic sketch of the resulting boot command follows this list), including:

  • Flavor: The memory/cpu/disk/network bandwidth/etc. model for the VM
  • Image: We uploaded an image to glance, but we need the exact name or ID
  • Network ID: Since we have two networks available (public and private), we need to be explicit about our network connectivity
  • Keypair: We just created it, and saw how to get it above, we’ll just use the name ‘mykey’
  • Security Group: We only modified the default, so we don’t have to pass anything this time
  • Config Drive: We _could_ use the Config_Drive option for Cloud Init, but we’ll leverage the MetaData service this time
  • Scheduling Hints: We are going to want to ensure we get one VM onto each host so we can check connectivity between the VMs, so we’ll want to pass a hint to the scheduler, and we’ll use an availability zone ‘hack’ to do that
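
Putting these together, the boot command we’re building toward has this general shape (the angle-bracket values are placeholders for orientation only; the real commands appear below):

nova boot --flavor <FLAVOR> --image <IMAGE> --nic net-id=<PRIVATE_NET_ID> --key-name mykey --availability-zone nova:<HOST> <INSTANCE_NAME>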

First get a list of flavors. We can use either the name or the ID.

nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Next, get the Image Name (we’ll have to escape any spaces in the name with a preceding backslash, e.g. CirrOS\ 0.3.2):

nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 8980f452-1b34-4bcc-a24a-3ccb85434187 | CirrOS 0.3.2        | ACTIVE |        |
| 12eeba8c-74b3-4aeb-86a1-3abd92693148 | cirros_0.3.2_direct | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+

Let’s grab the network ID for the private network:

neutron net-list
+--------------------------------------+-------------+----------------------------------------------------+
| id                                   | name        | subnets                                            |
+--------------------------------------+-------------+----------------------------------------------------+
| db4f3ec2-4528-499a-a99d-33d4ae4abdc8 | private-net | 43390c68-205f-4838-a0ed-0487fe41865d 10.10.10.0/24 |
| dd6ba027-6d1f-47ff-8519-02ed86e0857b | public      | f60b7950-b67d-4783-b943-8f2a3460c50a 10.1.65.0/24  |
+--------------------------------------+-------------+----------------------------------------------------+

We can grab the list of availability zones from nova:

nova availability-zone-list
+-----------------------+----------------------------------------+
| Name                  | Status                                 |
+-----------------------+----------------------------------------+
| internal              | available                              |
| |- aio151             |                                        |
| | |- nova-conductor   | enabled :-) 2015-05-11T06:46:00.000000 |
| | |- nova-consoleauth | enabled :-) 2015-05-11T06:46:02.000000 |
| | |- nova-cert        | enabled :-) 2015-05-11T06:46:02.000000 |
| | |- nova-scheduler   | enabled :-) 2015-05-11T06:45:57.000000 |
| nova                  | available                              |
| |- aio151             |                                        |
| | |- nova-compute     | enabled :-) 2015-05-11T06:46:00.000000 |
| |- compute161         |                                        |
| | |- nova-compute     | enabled :-) 2015-05-11T06:46:06.000000 |
+-----------------------+----------------------------------------+

From this we can see that there are internal resources and ‘nova’ resources (the availability zone we want), and under ‘nova’ there are aio151 and compute161, which are the hosts we want. We’ll pass them as nova:aio151 and nova:compute161 respectively in the availability-zone parameter below.

Step 5: Apply parameters.

We now have all the information at our fingertips, and normally we’d have to fill in all of those parameters by hand. Instead, we’ll use a little “bash” magic and extract the parameters from the same requests we made above:

nova boot --image `nova image-list | awk '/ CirrOS 0.3.2 / {print $2}'` --flavor 2 --availability-zone nova:`nova availability-zone-list | awk '/aio151/ {print $3}' | tail -1` --key-name mykey --nic net-id=`neutron net-list | awk '/ private-net / {print $2}'` test-aio

nova boot --image `nova image-list | awk '/ CirrOS 0.3.2 / {print $2}'` --flavor 2 --availability-zone nova:`nova availability-zone-list | awk '/compute161/ {print $3}' | tail -1` --key-name mykey --nic net-id=`neutron net-list | awk '/ private-net / {print $2}'` test-compute

We should get output similar to the following at the end of each command; it simply echoes back the information that the boot command is passing to Nova:

+--------------------------------------+-----------------------------------------------------+
| Property                             | Value                                               |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                              |
| OS-EXT-AZ:availability_zone          | nova                                                |
| OS-EXT-SRV-ATTR:host                 | -                                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                                   |
| OS-EXT-STS:power_state               | 0                                                   |
| OS-EXT-STS:task_state                | scheduling                                          |
| OS-EXT-STS:vm_state                  | building                                            |
| OS-SRV-USG:launched_at               | -                                                   |
| OS-SRV-USG:terminated_at             | -                                                   |
| accessIPv4                           |                                                     |
| accessIPv6                           |                                                     |
| adminPass                            | 4FAyXaNnyvjM                                        |
| config_drive                         |                                                     |
| created                              | 2015-05-11T06:57:07Z                                |
| flavor                               | m1.tiny (1)                                         |
| hostId                               |                                                     |
| id                                   | 879fbe8b-7e24-4dbf-b653-b95412730491                |
| image                                | CirrOS 0.3.2 (2ea64498-fb74-4c0d-a11b-84636ed1562d) |
| key_name                             | mykey                                               |
| metadata                             | {}                                                  |
| name                                 | test-aio                                            |
| os-extended-volumes:volumes_attached | []                                                  |
| progress                             | 0                                                   |
| security_groups                      | default                                             |
| status                               | BUILD                                               |
| tenant_id                            | 736a24d0a1e343388175c1f3b0a3b382                    |
| updated                              | 2015-05-11T06:57:07Z                                |
| user_id                              | 070a8bc2d1a44a00b75b1d6594b65909                    |
+--------------------------------------+-----------------------------------------------------+

We are more interested in whether it actually finished booting or not, and we can verify that with the nova list command, or get more information with the nova show {VM_NAME_OR_ID} command.

Quickly check to see if the VMs are active.

nova list
+--------------------------------------+--------------+--------+------------+-------------+------------------------+
| ID                                   | Name         | Status | Task State | Power State | Networks               |
+--------------------------------------+--------------+--------+------------+-------------+------------------------+
| c29cad8c-6289-4358-ac28-9b390f3d4574 | test-aio     | BUILD  | spawning   | NOSTATE     | private-net=10.10.10.3 |
| 179f9f12-7955-41f3-b8ce-2731ae9cbed8 | test-compute | BUILD  | spawning   | NOSTATE     | private-net=10.10.10.4 |
+--------------------------------------+--------------+--------+------------+-------------+------------------------+

In this case the Status column says we’re still in the BUILD state, as the instances are still being scheduled and spawned on their target hosts.
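If you would rather not re-run the command by hand, a simple polling loop (just a sketch that greps the nova list output) will wait until nothing is left in the BUILD state:

while nova list | grep -q BUILD; do sleep 5; done; nova list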

Eventually, we should see:

+--------------------------------------+--------------+--------+------------+-------------+------------------------+
| ID                                   | Name         | Status | Task State | Power State | Networks               |
+--------------------------------------+--------------+--------+------------+-------------+------------------------+
| c29cad8c-6289-4358-ac28-9b390f3d4574 | test-aio     | ACTIVE | -          | Running     | private-net=10.10.10.3 |
| 179f9f12-7955-41f3-b8ce-2731ae9cbed8 | test-compute | ACTIVE | -          | Running     | private-net=10.10.10.4 |
+--------------------------------------+--------------+--------+------------+-------------+------------------------+

Step 6: Launch VMs with a script.

Well, that was certainly an adventure, and we have VMs up and running, but what if we needed to do that repeatedly? We will eventually get to Heat, which can automate all of this with a template, and we will shortly install Horizon, which would at least let us point and click our way through this, but what if we wanted a way to start and stop these instances at will? While the APIs would be the truly programmatic way of leveraging this environment, and the Python-savvy amongst us might use the raw Python SDK, we can instead use a little command-line bash magic to automate this. We’ll go one step further and pass a little “user data” script as well, so our VMs automatically do something for us too! (Note that the heredoc delimiters below are quoted so that the embedded commands and variables are evaluated when build.sh runs, rather than when the file is created.)

cat > build.sh << 'EOD'
#!/bin/bash

# First let's make sure our VMs don't already exist, otherwise, perhaps we just delete them:
if [[ -n "`nova list | grep test-aio`" || -n "`nova list | grep test-compute`" ]]; then
  nova delete test-aio test-compute
fi

# Create a little user data script to pass to our VMs:
# Quote the delimiter so the script is written verbatim and runs on the VM itself
cat > /tmp/user.data.sh << 'EOF'
#!/bin/sh
cat > index.html << EOL
This is a little web server on `hostname`
EOL
while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; cat index.html; } | nc -l -p 80; done &
EOF

# Grab our list of parameters
private_net=`neutron net-list | awk '/ private-net / {print $2}'`
image_id=`glance image-list | awk '/ CirrOS / {print $2}'`
flavor_id=`nova flavor-list | awk '/ m1.tiny / {print $2}'`
keyname='mykey'

nova boot --image ${image_id} --flavor ${flavor_id} --nic net-id=${private_net} \
--user-data /tmp/user.data.sh --availability-zone nova:aio151 --key-name ${keyname} test-aio

nova boot --image ${image_id} --flavor ${flavor_id} --nic net-id=${private_net} \
--user-data /tmp/user.data.sh --availability-zone nova:compute161 --key-name ${keyname} test-compute

sleep 15
echo VNC for test-aio:
nova get-vnc-console test-aio novnc | sed -e 's/10.1.64.*\:/localhost:/' - | awk '/ novnc / {print $4}'
echo VNC for test-compute:
nova get-vnc-console test-compute novnc | sed -e 's/10.1.64.*\:/localhost:/' - | awk '/ novnc / {print $4}'
EOD

This script does everything _except_ assign a floating IP to the VMs… We’ll do that manually, and leave the script’s execution as an exercise for the reader.

Assigning a floating IP

Step 7: Create and Assign floating IPs.

First, let’s figure out what pool we have available to pull addresses from:

nova floating-ip-pool-list

Example output:

+--------+
| name   |
+--------+
| public |
+--------+

Then we’ll “create” two addresses (run the creation command twice). This should give you two addresses in your floating range to work with. These are now allocated to your project (the admin project), and unless you release them, no one else can use these addresses!
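Should you ever need to hand an address back to the pool, it can be released with the neutron client (substitute the id shown by neutron floatingip-list):

neutron floatingip-delete {floatingip-id}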

We can create a floating IP with either the neutron or the nova client (nova just calls neutron). One advantage of the nova client is that it currently lets you pass hostnames and the actual IP addresses of the allocated floating IPs when making associations. The neutron model looks like this (but don’t create your floating IP yet; use the next method instead):

neutron floatingip-create public

Example output:

+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 10.1.65.32                           |
| floating_network_id | ce93be69-38d4-47e0-a56c-a570da665bdf |
| id                  | 1d6d1993-e251-412a-a324-34e3307ecefa |
| port_id             |                                      |
| router_id           |                                      |
| status              | DOWN                                 |
| tenant_id           | 01bb322759894cbe95421cad623f8f3a     |
+---------------------+--------------------------------------+

We’ll grab the floating IP with the same neutron command, but go one step farther and capture the allocated address in a shell variable:

floating_ip=`neutron floatingip-create public | awk '/ floating_ip_address / {print $4}'`
echo ${floating_ip}

All that is returned from the above two lines is the address itself. We can then use the IP address captured in this fashion to associate it with our test-aio VM:

nova floating-ip-associate test-aio ${floating_ip}

Step 8: Validate Floating IP Assignment.

Check if everything has been properly configured:

nova floating-ip-list

Example output:

+--------------------------------------+------------+--------------------------------------+------------+--------+
| Id                                   | IP         | Server Id                            | Fixed IP   | Pool   |
+--------------------------------------+------------+--------------------------------------+------------+--------+
| c4dc6ad6-1b5b-4be5-ba8c-2b8f0aa446d7 | 10.1.65.52 | cfc171ce-538c-4a2c-9594-e1c3db48370d | 10.10.10.3 | public |
| e4d7e50a-e990-49f1-ba46-61825ba6594f | 10.1.65.51 | -                                    | -          | public |
+--------------------------------------+------------+--------------------------------------+------------+--------+

Now that our test-aio VM has a floating IP address, we should be able to both ping it and gain access to it:

ping -c 5 ${floating_ip}

And then we can try ssh (the default user on the CirrOS image is cirros):

ssh cirros@${floating_ip}

If our ssh keys were properly mapped to the VM, we shouldn’t be asked for a password. If that for some reason didn’t work, the CirrOS image (being one of the few cloud images with a password set) allows password login, and the default password is cubswin:)

Exit from the VM.

In addition, if you leveraged the scripted method of deployment above, you’ll have passed a small startup script to the VM, and you can use wget to ask for data from the VM:
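For example (assuming the little web server started by the user-data script is listening on port 80 of the floating IP we just associated):

wget -qO- http://${floating_ip}/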

An alternate form of access is to connect over the VNC service. Since we’re in a lab environment there are a few different approaches: we can run the web-based VNC (noVNC) client from a browser on your laptop, in which case the first set of instructions is useful, or we can use the SSH redirect model of access (covered again at the end of this section):

+-------+----------------------------------------------------------------------------------+
| Type  | Url                                                                              |
+-------+----------------------------------------------------------------------------------+
| novnc | http://aio151:6080/vnc_auto.html?token=8ea3acf6-1b2d-4297-87da-5154b7a56dd6      |
+-------+----------------------------------------------------------------------------------+

You can paste this URL into your local (laptop) web browser, but you need to make one modification: change the hostname or IP address to localhost. This is so that the ssh port redirect we configured when we first logged into the lab gateway can redirect localhost:6080 to port 6080 on your AIO node.

Rather than the “normal” version above, you can use the following (replace the {target VM} name):
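Borrowing the sed substitution from the build script above (substitute test-aio or test-compute for {target VM}):

nova get-vnc-console {target VM} novnc | sed -e 's/10.1.64.*\:/localhost:/' - | awk '/ novnc / {print $4}'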

As a reminder, the port redirect connection would have been something like:
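A minimal sketch, with {lab-gateway} as a hypothetical placeholder for whatever jump host your lab uses:

ssh -L 6080:aio151:6080 centos@{lab-gateway}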

In this lab you made initial use of the basic OpenStack functions, including creating and manipulating Layer 2 and Layer 3 networks, and creating servers. You’ve also started to develop a model for programmatically leveraging these VMs, including automating a task of turning on a simple web server on the VM!

If time permits, review the lab to get a reminder of what you have accomplished.

In the next Lab, you’ll add persistent storage to your OpenStack arsenal.