Adding a table to the OpenStack databases using migration scripts

So I had a task of adding a new table to the neutron database without using neutron's own migration scripts, as we wanted to keep the neutron code pure.

I tried googling for Alembic data migrations but could not find anything useful, so I started reverse engineering the migration scripts of other OpenStack projects.

I used the db code base from the following link.
https://github.com/stackforge/group-based-policy/tree/stable/juno/gbpservice/neutron/db

So I'll just walk through the modifications we need to make to get this working. Please note that I am doing these changes in a Devstack environment.

1. Create folders for the new project.
Let's say we are building a new project called test_db. Create a folder named test_db at the location below:
/opt/stack/test_db/test_db/ (yes, two nested folders, just to keep in sync with the Devstack layout)
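
On a Devstack VM this is a single command (path exactly as above):
mkdir -p /opt/stack/test_db/test_db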

2. Copy the migration folder from the link below into /opt/stack/test_db/test_db/
https://github.com/stackforge/group-based-policy/tree/stable/juno/gbpservice/neutron/db

This basically reuses neutron's infrastructure to create the tables in the neutron DB.
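
After the copy, the tree should look roughly like this (only the files this post touches are listed):

/opt/stack/test_db/test_db/migration/
    cli.py
    models/head.py
    alembic_migrations/env.py
    alembic_migrations/versions/HEAD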

3. Modify the file /opt/stack/test_db/test_db/migration/alembic_migrations/env.py.
Change
from gbpservice.neutron.db.migration.models import head  # noqa
to
from test_db.migration.models import head  # noqa

Then search for GBP/gbp and replace it with our project name, TEST_DB/test_db.
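
To catch any occurrences you miss, something like this helps (plain grep; replace the hits by hand or with sed):
grep -ri gbp /opt/stack/test_db/test_db/migration/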

4. Create the upgrade and downgrade script under /opt/stack/test_db/test_db/migration/alembic_migrations/versions/.

revision = '9d65abe72596'
down_revision = None

from alembic import op
import sqlalchemy as sa


def upgrade():
    op.create_table(
        'test_db_table',
        sa.Column('tenant_id', sa.String(length=100)),
    )


def downgrade():
    op.drop_table('test_db_table')

This creates a table named test_db_table with just one field, tenant_id, and drops that table when we want to downgrade.
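
Later schema changes chain onto this one: a new script in the same versions/ folder points its down_revision at 9d65abe72596, and HEAD moves to the new revision id. A minimal sketch of such a follow-up (the revision id and the name column here are hypothetical):

revision = '1a2b3c4d5e6f'          # hypothetical new revision id
down_revision = '9d65abe72596'     # chains onto the first script

from alembic import op
import sqlalchemy as sa


def upgrade():
    # add a second column to the existing table
    op.add_column('test_db_table', sa.Column('name', sa.String(length=255)))


def downgrade():
    op.drop_column('test_db_table', 'name')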

5. Edit the HEAD file, /opt/stack/test_db/test_db/migration/alembic_migrations/versions/HEAD.
Delete all the content and add 9d65abe72596 (note: the same as the revision value in the script above).
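
In other words:
echo 9d65abe72596 > /opt/stack/test_db/test_db/migration/alembic_migrations/versions/HEAD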

6. Edit the file /opt/stack/test_db/test_db/migration/cli.py.
Change
gbpservice.neutron.db.migration:alembic_migrations
to
test_db.migration:alembic_migrations
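
For context, neutron-style cli.py files pass that string to Alembic as the script location, so after the edit the relevant line should look something like this (a sketch based on how neutron's own cli.py sets it; the surrounding code is unchanged):

config.set_main_option('script_location',
                       'test_db.migration:alembic_migrations')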

7. Comment out the following 2 lines in /opt/stack/test_db/test_db/migration/models/head.py:
from gbpservice.neutron.db.grouppolicy import group_policy_db  # noqa
from gbpservice.neutron.db.grouppolicy import group_policy_mapping_db  # noqa
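
After the edit the file is essentially empty of model imports; if you later define your own SQLAlchemy models, this is where you would import them so Alembic's metadata head sees them (the test_db import below is hypothetical):

# from gbpservice.neutron.db.grouppolicy import group_policy_db  # noqa
# from gbpservice.neutron.db.grouppolicy import group_policy_mapping_db  # noqa
# from test_db import models  # noqa  (hypothetical: your own models module)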

8. Create a file called /usr/local/bin/test-db-manage and add the following content.

#!/usr/bin/python
# PBR Generated from u'console_scripts'
import sys
sys.path.append('/opt/stack/test_db')
from test_db.migration.cli import main

if __name__ == "__main__":
    sys.exit(main())

9. Make the script executable:
sudo chmod +x /usr/local/bin/test-db-manage
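
Hand-writing this wrapper just mimics what pbr would generate if test_db were properly packaged; the packaged equivalent would be a console_scripts entry point in setup.cfg, something like:

[entry_points]
console_scripts =
    test-db-manage = test_db.migration.cli:main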

10. To create the table:
test-db-manage --config-file /etc/neutron/neutron.conf upgrade head

Since we pass neutron.conf as the config file, the migration script picks up neutron's database URL and adds the table to the neutron database.
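
Concretely, it reads the [database] section of neutron.conf; on a Devstack box that section looks something like this (values are illustrative):

[database]
connection = mysql://root:password@127.0.0.1/neutron?charset=utf8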

11. To delete the table:
test-db-manage --config-file /etc/neutron/neutron.conf downgrade base
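
To check which revision the database is at, the neutron-derived cli.py also exposes Alembic's current command:
test-db-manage --config-file /etc/neutron/neutron.conf current

You can also verify the table directly in MySQL, e.g. (adjust credentials to your setup):
mysql -u root -p -e 'SHOW CREATE TABLE neutron.test_db_table;'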

Please do let me know if you have any inputs :). 


