
Enable the stats GUI on HAProxy.

Add the snippet below to haproxy.cfg, underneath the defaults section.

listen  stats
        bind 192.0.2.10:1234
        mode            http
        log             global

        maxconn 10

        timeout client  100s
        timeout server  100s
        timeout connect 100s
        timeout queue   100s

        stats enable
        stats hide-version
        stats refresh 30s
        stats show-node
        stats auth admin:password
        stats uri  /haproxy?stats
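
For context, a haproxy.cfg with this in place would be laid out roughly like this (global and defaults contents elided):

global
        # global settings ...

defaults
        # defaults ...

listen  stats
        # the snippet above goes here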



Make sure you update the IP address on the bind line to your VIP; you can also change the port if you want.

You can also change the URI on the last line of the snippet, for example:

stats uri /haproxy_stats

To make sure you have not made any mistakes in the config file, validate it with the following command.

haproxy -c -f haproxy.cfg
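
If the file parses cleanly, recent HAProxy versions print a short confirmation (exact wording varies by version; the path below assumes the stock /etc/haproxy location):

haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid

Any syntax error is reported with the offending file and line number, so fix those and re-run until the check passes.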

Once everything looks good, restart the haproxy process, and the stats GUI should be available at

192.0.2.10:1234/haproxy?stats
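
For example, on a systemd-based distro (assuming the service is installed under the name haproxy, as in the stock packages):

sudo systemctl restart haproxy

Before opening it in a browser, you can quickly check the page from the command line; curl with the credentials from the stats auth line should return the stats page HTML:

curl -u admin:password 'http://192.0.2.10:1234/haproxy?stats'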
