A Little Intro to Octavia.
Octavia is the backend service for OpenStack Load Balancing as a Service (LBaaS). The Neutron LBaaS v2 plugin talks to Octavia through the Octavia driver; the driver calls the Octavia API (o-api), which in turn talks to the Octavia controller worker (o-cw).
Neutron LBaaS v2 plugin ---> Octavia plugin driver ---> o-api ---> o-cw
Besides o-api and o-cw, Octavia has two more components: housekeeping (o-hk) and the health manager (o-hm).
o-api: the Octavia API server; it receives all requests from the Octavia LBaaS driver and passes them on to o-cw.
o-cw: the controller worker, the main workhorse; it creates the load balancer VMs (amphorae) and configures them.
o-hk: housekeeping; it keeps track of the spare amphorae.
o-hm: the health manager; it monitors the amphorae through heartbeats, collects their statistics, and handles failover by creating a new amphora when it stops receiving heartbeats.
How to Enable OpenStack Octavia and LBaaS v2 with DevStack.
Since Octavia runs each load balancer as a VM, it needs a good amount of memory and CPU. The minimum required is:
8 GB of memory
2 vCPUs
I have always tested this on Ubuntu 14.04 or newer.
Follow these steps to enable Octavia and LBaaS v2 with DevStack.
1. Add a user [as root].
adduser stack
2. Install sudo, add stack to the sudoers list, and make it passwordless [as root].
apt-get install sudo -y || yum install -y sudo
echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
3. Install git [as root].
apt-get install git -y || yum install -y git
4. Log out of root and log in as stack. Go to stack's home directory.
5. Clone devstack from git.
git clone https://github.com/openstack/devstack.git
6. Change directory.
cd devstack
7. Create a new file named local.conf and copy into it the data from the link.
Note: these are the lines in local.conf that enable LBaaS v2 and Octavia.
enable_plugin neutron-lbaas https://review.openstack.org/openstack/neutron-lbaas
enable_plugin octavia https://github.com/openstack/octavia.git
ENABLED_SERVICES+=,q-lbaasv2
ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api
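For reference, a minimal local.conf might look like the sketch below. The password values are placeholders you should change; the last four lines are the same ones shown above.

```
[[local|localrc]]
# Placeholder credentials -- change these.
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
SERVICE_TOKEN=secret

# Enable LBaaS v2 and Octavia.
enable_plugin neutron-lbaas https://review.openstack.org/openstack/neutron-lbaas
enable_plugin octavia https://github.com/openstack/octavia.git
ENABLED_SERVICES+=,q-lbaasv2
ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api
```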
8. Run the stack.
./stack.sh
Basic testing of Octavia through the CLI.
1. Add security group rules to allow ping, SSH, and HTTP.
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default tcp 80 80 0.0.0.0/0
2. Launch a member.
nova boot --image cirros-0.3.3-x86_64-disk --flavor 1 --nic net-id=private_net_id mem1
3. Create the load balancer. This step boots a new VM; nova list should now show 3 VMs. Wait until the LB goes to the ACTIVE state; it can take quite a while.
neutron lbaas-loadbalancer-create --name lb1 private-subnet
4. Create a listener.
neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTP --protocol-port 80 --name test_listener
5. Create a new pool.
neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener test_listener --protocol HTTP --name test_pool
6. Add members to the pool.
neutron lbaas-member-create --subnet private-subnet --address [mem1's ip here] --protocol-port 80 test_pool
7. Set up a quick web server on the members.
SSH to the CirrOS member created in step 2 (user cirros, password cubswin:)) and start a small web server script.
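The script itself isn't reproduced above, so here is a minimal sketch of one, written for CirrOS's busybox nc (the file name webserver.sh and the greeting text are illustrative):

```shell
# Write a tiny HTTP responder script to webserver.sh.
cat > webserver.sh <<'EOF'
#!/bin/sh
# Respond with this member's IP so round-robin is visible when curling the VIP.
MYIP=$(ifconfig eth0 | grep 'inet addr' | awk -F: '{print $2}' | awk '{print $1}')
while true; do
    echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l -p 80
done
EOF
chmod +x webserver.sh
```

Run it in the background with ./webserver.sh &. If you later add more members, repeat this on each one; curling the VIP should then alternate between the members' responses.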
8. Curl the VIP from the router namespace.
sudo ip netns exec qrouter-<router_id> curl -v <vip_ip>
FAQ
1. How do I SSH into the amphora VM?
[A] ssh -i /etc/octavia/.ssh/octavia_ssh_key -l ubuntu Amphora_IP
2. Where is the amphora agent log in the amphora VM?
[A] /var/log/upstart/amphora-agent.log
3. How do I curl the amphora directly?
[A] curl -k --cert /etc/octavia/certs/client.pem https://amphora_ip:9443/0.5/info | python -m json.tool
4. Where is the Octavia config file?
[A] /etc/octavia/octavia.conf
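As an example of what you might tune there, here are two commonly adjusted options (the option names come from the Octavia configuration reference; defaults vary by release, so verify against your version):

```
[house_keeping]
# Number of spare amphorae o-hk keeps booted and ready.
spare_amphora_pool_size = 1

[controller_worker]
# SINGLE, or ACTIVE_STANDBY for an HA amphora pair.
loadbalancer_topology = SINGLE
```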
Explore HA, the spare pool, SSL termination, L7 load balancing, and the other features. Let me know in the comments if you have any questions.
Happy load balancing!