How to enable OpenStack Octavia, LBaaS V2, with DevStack.

A little intro to Octavia.

Octavia is the service that manages load balancers for OpenStack Load Balancing as a Service (LBaaS). The Neutron LBaaS V2 plugin talks to Octavia through the Octavia driver; the driver talks to the Octavia API server (o-api), which in turn talks to the Octavia controller worker (o-cw).

Neutron LBaaS V2 Plugin ----> Octavia plugin driver -----> o-api ----> o-cw

Other than o-api and o-cw, Octavia has two more components: housekeeping (o-hk) and the health manager (o-hm).

o-api: the Octavia API server. It receives all requests from the Octavia LBaaS driver and passes them to o-cw.
o-cw: the controller worker, and the main workhorse. It creates the load balancer VMs (amphorae) and configures them.
o-hk: housekeeping. It keeps track of spare amphorae.
o-hm: the health manager. It monitors the amphorae through heartbeats, collects their stats, and handles failover by creating a new amphora when heartbeats stop arriving.

How to enable OpenStack Octavia, LBaaS V2, with DevStack.

Since Octavia uses a VM for each load balancer, it needs a good amount of memory and CPU.
The minimum requirements are:
8 GB of memory
2 vCPUs
I have always tried this on Ubuntu 14.04 or newer.

Follow these steps to enable Octavia and LBaaS V2 with DevStack.

1. Add a user [as root].
adduser stack

2. Install sudo, add stack to the sudoers list, and make it passwordless [as root].
apt-get install sudo -y || yum install -y sudo
echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

3. Install git [as root].
apt-get install git -y || yum install -y git

4. Log out of root and log in as stack. Go to stack's home directory.

5. Clone devstack from git.
git clone https://github.com/openstack/devstack.git

6. Change into the devstack directory.
cd devstack

7. Create a new file named local.conf and add the following lines to it.

Note: these are the lines in local.conf that enable LBaaS V2 and Octavia.

enable_plugin neutron-lbaas https://review.openstack.org/openstack/neutron-lbaas 
enable_plugin octavia https://github.com/openstack/octavia.git

ENABLED_SERVICES+=,q-lbaasv2 
ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api
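
For completeness, a minimal local.conf along these lines should work (a sketch; the passwords are placeholders you should change to your own values):

```ini
[[local|localrc]]
# Placeholder passwords -- pick your own.
ADMIN_PASSWORD=secretadmin
DATABASE_PASSWORD=secretdatabase
RABBIT_PASSWORD=secretrabbit
SERVICE_PASSWORD=secretservice

# The two plugins and the extra services from the snippet above.
enable_plugin neutron-lbaas https://review.openstack.org/openstack/neutron-lbaas
enable_plugin octavia https://github.com/openstack/octavia.git

ENABLED_SERVICES+=,q-lbaasv2
ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api
```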

8. Run the stack.
./stack.sh


Basic testing of Octavia through the CLI.

1. Add security group rules to allow ping, SSH, and HTTP.
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default tcp 80 80 0.0.0.0/0

2. Launch a member.
nova boot --image cirros-0.3.3-x86_64-disk --flavor 1 --nic net-id=private_net_id mem1

3. Create the LB. This step should spawn a new VM; nova list should now show the amphora VM alongside mem1. Wait until the LB goes to the ACTIVE state; this can take quite a while.
neutron lbaas-loadbalancer-create --name lb1 private-subnet 
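
The wait in step 3 can be scripted. Below is a hypothetical helper (the function name, retry count, and 10-second interval are my own choices) that polls a status command until it prints ACTIVE:

```shell
# Hypothetical helper: keep running a status command until it prints ACTIVE.
# Usage on the devstack host, with the load balancer created above:
#   wait_active "neutron lbaas-loadbalancer-show lb1 -f value -c provisioning_status"
wait_active() {
  cmd=$1
  tries=${2:-60}                     # give up after this many polls
  while [ "$tries" -gt 0 ]; do
    [ "$($cmd)" = "ACTIVE" ] && return 0
    tries=$((tries - 1))
    sleep "${WAIT_INTERVAL:-10}"     # poll every 10 seconds by default
  done
  return 1
}
```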

4. Create a listener.
neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTP --protocol-port 80 --name test_listener

5. Create a new pool.
neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener test_listener --protocol HTTP --name test_pool

6. Create (add) members.
neutron lbaas-member-create --subnet private-subnet --address [mem1's ip here] --protocol-port 80 test_pool

7. Set up a quick web server on the members.

SSH to the CirrOS member created in step 2, using cirros as the user and cubswin:) as the password, and run the following script to start the web server.
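
The post's script itself isn't shown, so here is a hypothetical stand-in: CirrOS ships busybox, so a tiny web server can be built from its nc applet. The response includes the hostname so that round-robin balancing is visible when you later curl the VIP.

```shell
# Write a minimal busybox-nc web server to webserver.sh (a sketch; the
# original script is not shown). Start it on each member with:
#   sh webserver.sh &
cat > webserver.sh <<'EOF'
#!/bin/sh
# Answer every connection with a one-line page naming this member, so that
# round-robin across members is visible from repeated curls of the VIP.
while true; do
  echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $(hostname)" | sudo nc -l -p 80
done
EOF
chmod +x webserver.sh
```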


8. Curl the VIP.
sudo ip netns exec qrouter-<router_id> curl -v <vip_ip>


FAQ

1. How to SSH to the amphora VM?
[A] ssh -i /etc/octavia/.ssh/octavia_ssh_key -l ubuntu Amphora_IP

2. Where is the Amphora agent log in the Amphora VM?
[A]  ls /var/log/upstart/amphora-agent.log

3. How to curl the amphora?
[A] curl -k --cert /etc/octavia/certs/client.pem https://amphora_ip:9443/0.5/info | python -m json.tool

4. Where is the Octavia conf?
[A] /etc/octavia/octavia.conf

Explore HA, the spare pool, SSL termination, L7 load balancing, and other features. Let me know in the comments if you have any questions.
Happy load balancing.
