HAProxy as Load Balancer for Web and Databases

HAProxy can act as a load balancer for all kinds of services (databases or web servers).
Essentially it spreads incoming requests across different backend machines,
and it automatically detects dead machines so that no more requests are sent to them.
This is what people call a reverse proxy.
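
In HAProxy terms that boils down to a frontend that accepts connections and a backend with several health-checked servers. A minimal sketch of that shape (the server names and IPs are the same ones used in the full config below):

frontend web
    bind *:80
    default_backend web_nodes

backend web_nodes
    balance roundrobin
    server web1 100.100.100.1:80 check
    server web2 100.100.100.2:80 check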

There is another kind: the caching reverse proxy,
for example Varnish or Squid.
It usually caches responses directly,
so requests don't even have to hit the backend.
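
The same idea can be sketched with nginx's proxy_cache (not Varnish or Squid, but it shows what a caching reverse proxy does: serve cached responses without touching the backend). The cache path, zone name, and backend address here are just examples:

# goes inside the http {} context
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=webcache:10m max_size=1g;

server {
    listen 80;

    location / {
        proxy_cache webcache;
        proxy_cache_valid 200 10m;
        proxy_pass http://127.0.0.1:8080;
    }
}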

Install

$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:vbernat/haproxy-1.5
$ sudo apt-get update
$ sudo apt-get install haproxy
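
To confirm which version got installed:

$ haproxy -v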

Web Load Balancer: nginx

Configuration

in /etc/haproxy/haproxy.cfg

global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL).
    ssl-default-bind-ciphers A_SECRET_STRING
    ssl-default-bind-options no-sslv3

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend web
    bind *:80
    mode http
    # acl static path_beg /asset
    # use_backend static_nodes if static
    default_backend web_nodes

backend web_nodes
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    # option httpclose
    # option http-server-close
    # appsession session len 32 timeout 12h
    cookie haproxyserverid insert nocache maxidle 1h
    option httpchk HEAD /health/
    server web1 100.100.100.1:80 check cookie web1
    server web2 100.100.100.2:80 check cookie web2
    server web3 100.100.100.3:80 check cookie web3
    server worker1 100.100.100.11:80 check cookie worker1
    server worker2 100.100.100.12:80 check cookie worker2

listen stats *:1936
    stats enable
    stats uri /
    stats hide-version
    stats auth YOUR_USERNAME:YOUR_PASSWORD
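
After editing the config, you can check it for errors and then reload HAProxy:

$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg
$ sudo service haproxy reload

The stats page is then available at http://YOUR_HAPROXY_IP:1936/ with the username and password configured above.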

balance
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-balance

cookie
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-cookie

appsession
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-appsession

option forwardfor
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-option%20forwardfor

ref:
https://serversforhackers.com/haproxy/
https://www.digitalocean.com/community/tutorials/how-to-use-haproxy-as-a-layer-7-load-balancer-for-wordpress-and-nginx-on-ubuntu-14-04

Database Load Balancer: MySQL

Create a User for HAProxy on the Managed Databases

You have to create a haproxy user on every database you want HAProxy to manage,
so that HAProxy can probe the connection.

# 100.100.100.78 is the machine running HAProxy
CREATE USER 'haproxy'@'100.100.100.78';

# check whether the user was created
SELECT user, host, password FROM mysql.user;

Configuration

in /etc/haproxy/haproxy.cfg

global
    log 127.0.0.1 local0 notice
    maxconn 2000
    user haproxy
    group haproxy

defaults
    log     global
    retries 5
    timeout connect  10000
    timeout client  100000
    timeout server  100000

listen mariadb-cluster
    bind 0.0.0.0:3306
    mode tcp
    option mysql-check user haproxy
    balance source
    server svtw-db1 100.100.100.79:3306 check weight 4
    server svtw-db2 100.100.100.80:3306 check weight 4
    server svtw-db3 100.100.100.88:3306 check weight 2

listen webinterface
    bind 0.0.0.0:8080
    mode http
    stats enable
    stats uri /

Then reload or restart HAProxy:

$ sudo service haproxy reload
$ sudo service haproxy restart
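
To verify that traffic really goes through HAProxy, connect to the HAProxy machine on port 3306 and ask which backend you landed on. This assumes an application user (with a password) that exists on all database nodes; it is separate from the passwordless haproxy check user:

$ mysql -h 100.100.100.78 -u YOUR_APP_USER -p -e "SHOW VARIABLES LIKE 'server_id'"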

ref:
https://www.digitalocean.com/community/tutorials/how-to-use-haproxy-to-set-up-http-load-balancing-on-an-ubuntu-vps

Python @property: setter, getter

import redis

# Assumed Redis connection; adjust host / port / db to your environment.
rdb = redis.StrictRedis()


class UserPreference(object):
    DEFAULT_VALUES = {
        'allow_fb_publish': True,
    }

    def __init__(self, user_id):
        """
        Preferences are stored in a Redis hash, e.g.

        pref:2876321
        {
            'allow_fb_publish': 1,
        }
        """
        self.user_id = user_id
        self.key = 'pref:%s' % (self.user_id,)

    @property
    def allow_fb_publish(self):
        # Read the flag from Redis; if it has never been set,
        # write the default back through the setter and return it.
        value = rdb.hget(self.key, 'allow_fb_publish')
        if value is None:
            default = self.DEFAULT_VALUES['allow_fb_publish']
            self.allow_fb_publish = default
            return default

        # Redis stores the flag as '1' / '0', so convert it back to bool.
        return bool(int(value))

    @allow_fb_publish.setter
    def allow_fb_publish(self, new_value):
        if not isinstance(new_value, bool):
            raise TypeError('Must be bool, not %s' % (type(new_value).__name__))

        # Persist the flag as 1 / 0 in the Redis hash.
        rdb.hset(self.key, 'allow_fb_publish', int(new_value))
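
A quick usage sketch (assuming Redis is running and rdb points to it):

pref = UserPreference(2876321)
print(pref.allow_fb_publish)   # True on first access, the default gets written back
pref.allow_fb_publish = False
print(pref.allow_fb_publish)   # False, read back from Redis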

ref:
http://www.programiz.com/python-programming/property
http://openhome.cc/Gossip/Python/Property.html

nginx as Load Balancer

WSGI using uWSGI and nginx on Ubuntu
https://library.linode.com/web-servers/nginx/python-uwsgi/ubuntu-12.04-precise-pangolin

HTTP Load Balancing Module (HTTP Upstream)
http://www.howtocn.org/nginx:nginx%E6%A8%A1%E5%9D%97%E5%8F%82%E8%80%83%E6%89%8B%E5%86%8C%E4%B8%AD%E6%96%87%E7%89%88:standardhttpmodules:httpupstream

nginx Load Balancing Strategies
http://wenku.baidu.com/view/175894c708a1284ac850438a.html

Recommended nginx Module for Session Stickiness
http://www.php-oa.com/2012/03/15/nginx-sticky-upstream-check.html

Cookie-based Load Balancing with nginx sticky
http://www.ttlsa.com/nginx/nginx-modules-nginx-sticky-module/

ref:
http://blog.csdn.net/ydt619/article/details/5954632
http://www.wubin.org.cn/?action=show&id=78

Let svcn-web1 itself act as the nginx load balancer

upstream django_cluster {
    ip_hash;
    # or use least_conn; instead (only one balancing method may be active)
    server 100.100.100.70:8000 weight=3; # svtw-web1
    server 100.100.100.71:8000 weight=4; # svtw-web2
    server 100.100.100.72:8000 weight=4; # svtw-web3
    # ...
}

server {
    listen  80;
    server_name  streetvoice.cn;
    charset  utf-8;
    client_max_body_size  75M;

    location /asset  {
        alias /data/storage/asset;
        access_log off;
    }

    location / {
        real_ip_header      X-Forwarded-For;
        set_real_ip_from    10.0.0.0/8;
        proxy_set_header    Host $http_host;
        proxy_redirect      off;
        proxy_read_timeout  120;
        include             /etc/nginx/uwsgi_params;
        uwsgi_pass          django_cluster;
    }
}
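
After changing the config, test it and reload nginx:

$ sudo nginx -t
$ sudo service nginx reload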

weight defaults to 1
max_fails defaults to 1
fail_timeout defaults to 10s

The defaults are basically good enough.

If a server fails max_fails times within fail_timeout,
it is marked as unavailable for the next fail_timeout period.
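
For example, to set these parameters explicitly on a server line (illustrative values):

server 100.100.100.70:8000 weight=3 max_fails=3 fail_timeout=30s; # svtw-web1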

ref:
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server