1. Overview
When we use HAProxy as our load balancer, any disruption to it disrupts all the services behind it. Therefore, it’s vital that we validate our configuration before every deployment.
In this tutorial, we’ll learn how to validate HAProxy configuration files using the haproxy command.
2. HAProxy
HAProxy is open-source software that functions as a reverse proxy and load balancer. It’s widely used in production environments to distribute traffic across multiple servers or services. Besides that, HAProxy is also a popular choice as an API gateway because it operates at layer 7 (L7) and understands HTTP semantics.
The HAProxy binary is available in most package managers, such as APT or YUM. To install it on a Debian-based Linux distribution such as Ubuntu, we can run the apt-get command:
$ apt-get install -y haproxy
Then, we can check that the binary is on the path by running the haproxy --version command:
$ haproxy --version
HA-Proxy version 2.0.29-0ubuntu1.3 2023/02/13 - https://haproxy.org/
Usage : haproxy [-f <cfgfile|cfgdir>]* [ -vdVD ] [ -n <maxconn> ] [ -N <maxpconn> ]
[ -p <pidfile> ] [ -m <max megs> ] [ -C <dir> ] [-- <cfgfile>*]
-v displays version ; -vv shows known build options.
...
If the command prints the version and the usage information, then our installation is successful.
3. HAProxy Configuration File
The HAProxy configuration file is a text file that defines the behavior of the HAProxy service. It consists of several sections that define different aspects of the service, such as global, frontend, and backend.
First, the global section defines settings that apply at the process level. These settings include the maximum number of connections to accept, the host to forward logs to, the Runtime API socket to expose, and many more:
global
    # Limit maximum connections to 1024
    maxconn 1024
    # Exposing the stats socket to allow statistic retrieval
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
Then, the frontend section defines how the HAProxy service handles incoming requests. In this section, we can define settings such as the listening interfaces and ports, the backend to route traffic to, and the proxying mode:
frontend api_gateway
    bind *:80
    default_backend load_balancer
The example above configures a frontend named api_gateway that listens on port 80 of all interfaces. Then, it routes all traffic to the backend named load_balancer.
Finally, the backend section completes the picture by defining the target servers behind it. For example, we can define the load_balancer backend to consist of several backing server instances:
backend load_balancer
    server server1 192.168.12.1:8080
    server server2 192.168.2.38:8080
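Putting these pieces together, a complete configuration is simply one file that lists the sections one after another. Here’s a minimal sketch that combines the snippets above (timeout settings are omitted here; we’ll see why they matter in a later section):
global
    maxconn 1024

frontend api_gateway
    bind *:80
    default_backend load_balancer

backend load_balancer
    server server1 192.168.12.1:8080
    server server2 192.168.2.38:8080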
What we’ve demonstrated here is a tiny subset of all the available sections and configuration keywords. To find the complete list, we can head to the official documentation.
4. Validating HAProxy Configuration Files
Deploying a broken configuration file to a running HAProxy service will cause it to stop functioning. Therefore, we should always validate the configuration files first.
4.1. Validating With the haproxy -c Command
To prevent issues with an invalid configuration, we can validate the file using the haproxy binary before we deploy the changes to the running HAProxy service. Specifically, we run the haproxy command with the -c option.
For demonstration’s sake, let’s create a simple HAProxy configuration that distributes load across a set of backend servers:
$ cat haproxy-loadbalancer.cfg
defaults
    timeout connect 5000
    timeout client 50000
    timeout server 50000
frontend haproxy_as_api_gateway
    bind 127.0.0.1:80
    default_backend load_balancer
backend load_balancer
    server server1 127.0.0.1:8080
    server server2 127.0.0.1:8081
The configuration above specifies a frontend, haproxy_as_api_gateway, that listens for traffic on port 80 of the loopback interface. Then, it forwards the traffic to the load_balancer backend. The backend section of the configuration file defines two backing servers for distributing the incoming load. At the top, we define some default timeout settings that apply to both the frontend and backend configurations.
Let’s validate the configuration file using the haproxy -c command:
$ haproxy -c -f haproxy-loadbalancer.cfg
Configuration file is valid
$ echo $?
0
From the output, we can see that the configuration file is valid. Additionally, the command will return an exit code of 0 for a valid configuration file. This can be useful for scripting purposes because we can check the exit code instead of parsing the output text.
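For instance, here’s a minimal sketch of a pre-deployment check that only reloads the service when validation passes. The configuration path and the systemd unit name below are assumptions and may differ between setups:
$ cat validate-and-reload.sh
#!/bin/bash
# Path to the candidate configuration file (assumed location)
CFG=/etc/haproxy/haproxy.cfg

# Validate first; haproxy -c exits with 0 only if the file is valid
if haproxy -c -f "$CFG"; then
    echo "Configuration is valid, reloading HAProxy"
    systemctl reload haproxy
else
    echo "Configuration is invalid, aborting deployment" >&2
    exit 1
fi
This way, a broken configuration file never reaches the running service.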
4.2. Validating an Invalid Configuration File
Let’s simulate an invalid configuration file by introducing a typo into one of the settings:
$ cat haproxy-loadbalancer.cfg
defaults
    timeout connect 5000
    timeout client 50000
    timeout server 50000
frontend haproxy_as_api_gateway
    binds 127.0.0.1:80
    default_backend load_balancer
backend load_balancer
    server server1 127.0.0.1:8080
    server server2 127.0.0.1:8081
In the frontend section, we’ve changed the bind keyword to binds, which isn’t a valid keyword. Let’s run the validation command again on the configuration file:
$ haproxy -c -f haproxy-loadbalancer.cfg
[ALERT] 121/022305 (128) : parsing [haproxy-loadbalancer.cfg:6] : unknown keyword 'binds' in 'frontend' section
[ALERT] 121/022305 (128) : Error(s) found in configuration file : haproxy-loadbalancer.cfg
[ALERT] 121/022305 (128) : Fatal errors found in configuration.
$ echo $?
1
When given an invalid configuration file, the command detects and reports the issue, pinpointing the exact line and section in its output. Additionally, it returns an exit code of 1 to indicate that the configuration file is faulty.
4.3. Valid Configuration Files With Warnings
In some cases, the command raises warnings even though the configuration file is valid. Neglecting to specify timeouts for the frontend and backend sections is one such example:
$ cat haproxy-loadbalancer.cfg
frontend haproxy_as_api_gateway
    bind 127.0.0.1:80
    default_backend load_balancer
backend load_balancer
    server server1 127.0.0.1:8080
    server server2 127.0.0.1:8081
$ haproxy -c -f haproxy-loadbalancer.cfg
[WARNING] 121/020045 (116) : config : missing timeouts for frontend 'haproxy_as_api_gateway'.
| While not properly invalid, you will certainly encounter various problems
| with such a configuration. To fix this, please ensure that all following
| timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING] 121/020045 (116) : config : missing timeouts for backend 'load_balancer'.
| While not properly invalid, you will certainly encounter various problems
| with such a configuration. To fix this, please ensure that all following
| timeouts are set to a non-zero value: 'client', 'connect', 'server'.
Configuration file is valid
$ echo $?
0
In this example, we omitted the timeout settings from our configuration file. Running the validation command then raises two warnings that describe the pitfalls of not defining these timeouts. But, at the end of the day, our configuration file is still a valid HAProxy configuration file.
The main difference between the warnings and alerts reported by the validation command is that the former won’t immediately break the service upon restart, whereas the latter will. However, the warnings are raised for a reason, and we should always heed them to prevent issues down the road.
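Since the exit code is still 0 in this case, a stricter check needs to look at the command’s output as well. Here’s a sketch of one way to treat warnings as failures by scanning the combined output for [WARNING] lines (the script and file names are just examples):
$ cat strict-validate.sh
#!/bin/bash
# Validate the configuration and also fail on warnings
CFG=haproxy-loadbalancer.cfg

# Capture both stdout and stderr so warnings are included
OUTPUT=$(haproxy -c -f "$CFG" 2>&1)
STATUS=$?
echo "$OUTPUT"

# Fail on a non-zero exit code or any [WARNING] line in the output
if [ "$STATUS" -ne 0 ] || echo "$OUTPUT" | grep -q '\[WARNING\]'; then
    echo "Validation failed or produced warnings" >&2
    exit 1
fi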
5. Conclusion
In this tutorial, we first learned what the HAProxy service is. Then, we briefly looked at the anatomy of the configuration files. Finally, we introduced the haproxy -c command as the way to validate a configuration file.