0.0.1 upstream
proxy_pass can only point at a single host, whereas upstream defines a group of hosts; proxy_pass then references the group by name.
Example:
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;
    server backup1.example.com:8080 backup;
    server backup2.example.com:8080 backup;
}
server {
    location / {
        proxy_pass http://backend;
    }
}
Note: upstream can only be defined inside the http context.
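For orientation, a minimal nginx.conf skeleton (names illustrative) showing where the upstream block must sit, as a sibling of server inside http:

```nginx
http {
    # upstream lives at http level -- not inside server or location
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }
}
```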
Example 1:
Define an upstream under the http block on node1:
upstream upservers {
    server node2.shellscript.cn:81;
    server node3.shellscript.cn;
}
Edit default.conf and comment out the proxy_cache directives defined earlier; in vim command-line mode this can be done with :%s@proxy_cache@#&@ (which prefixes every match with a #).
Then add proxy_pass to default.conf:
location /forum/ {
    #proxy_cache mycache;
    #proxy_cache_valid 200 1d;
    #proxy_cache_valid 301 302 10m;
    #proxy_cache_valid any 1m;
    #proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
    proxy_pass http://upservers/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
location ~* \.(jpg|png|gif)$ {
    #proxy_cache mycache;
    proxy_pass http://upservers;   # note: no trailing / after upservers here, because this location uses regex matching
    proxy_set_header X-Real-IP $remote_addr;
}
The proxy_pass lines above reference the upstream group just defined.
Restart nginx and access node1 from the client:
[root@server ]# for((i=1;i<=4;i++));do curl node1.shellscript.cn/forum/;done
httpd on node2
http on node3
httpd on node2
http on node3
[root@server ]#
As the output shows, the default algorithm is round-robin.
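To tally the distribution instead of counting lines by eye, the curl loop's output can be piped through sort | uniq -c. In this sketch the captured responses above are fed in via printf so it runs without the lab hosts:

```shell
# count responses per backend; on the lab client, replace printf with:
#   for((i=1;i<=4;i++)); do curl -s node1.shellscript.cn/forum/; done
printf '%s\n' 'httpd on node2' 'http on node3' 'httpd on node2' 'http on node3' \
  | sort | uniq -c
```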
0.0.2 weight (weighted round-robin)
Edit the upstream and add weight=2 to node2:
upstream upservers {
    server node2.shellscript.cn:81 weight=2;
    server node3.shellscript.cn;
}
If weight is not specified, it defaults to 1.
Restart nginx and access node1 from the client:
[root@server conf.d]# for((i=1;i<=6;i++));do curl node1.shellscript.cn/forum/;done
httpd on node2
httpd on node2
http on node3
httpd on node2
httpd on node2
http on node3
[root@server conf.d]#
node2 clearly receives more requests (roughly 2:1), so the weight has taken effect.
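Nginx's weighted round-robin is a "smooth" variant: each turn, every peer's current weight grows by its configured weight; the peer with the largest current weight is picked and then charged the weight total. A minimal sketch for the two peers above (weights 2 and 1) -- the exact interleaving may differ from the capture, but the 2:1 ratio holds:

```shell
# smooth weighted round-robin sketch: node2 (weight 2) vs node3 (weight 1)
w2=2; w3=1; cw2=0; cw3=0; total=$((w2 + w3))
for i in 1 2 3 4 5 6; do
  cw2=$((cw2 + w2)); cw3=$((cw3 + w3))   # grow current weights
  if [ "$cw2" -ge "$cw3" ]; then          # pick the largest...
    echo node2; cw2=$((cw2 - total))      # ...and charge it the total
  else
    echo node3; cw3=$((cw3 - total))
  fi
done
```

Over six turns node2 is chosen four times and node3 twice, matching the configured 2:1 weights.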
0.0.3 The ip_hash algorithm
Modify the upstream as follows:
upstream upservers {
    ip_hash;
    server node2.shellscript.cn:81 weight=2;
    server node3.shellscript.cn;
}
Access from the client:
[root@server conf.d]# for((i=1;i<=6;i++));do curl node1.shellscript.cn/forum/;done
httpd on node2
httpd on node2
httpd on node2
httpd on node2
httpd on node2
httpd on node2
[root@server conf.d]#
As long as node2 is reachable, this client's requests never go to node3; clients with different IPs, however, are hashed to different upservers nodes.
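The idea can be sketched as follows. Note the hash below is a toy stand-in, not nginx's actual function; what it does mirror is that nginx keys on the first three octets of the client IPv4 address, so an entire /24 maps to the same peer:

```shell
# toy ip_hash sketch: key on the first three octets, pick a peer mod 2
# (NOT nginx's real hash function -- for illustration only)
pick_peer() {
  echo "$1" | awk -F. '{
    h = $1 * 65536 + $2 * 256 + $3      # last octet is ignored
    if (h % 2 == 0) print "node2"; else print "node3"
  }'
}
pick_peer 192.168.1.10    # same /24 ...
pick_peer 192.168.1.200   # ... same peer
pick_peer 10.0.8.33       # a different /24 may pick the other peer
```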
Health checks: max_fails and fail_timeout
These specify the maximum number of failed attempts and the timeout window. Modify the upstream as follows:
upstream upservers {
    server node2.shellscript.cn:81 max_fails=2 fail_timeout=1;
    server node3.shellscript.cn max_fails=2 fail_timeout=1;
}
Restart Nginx. Since both weights are now equal, the client again sees plain round-robin.
Now stop the http service on node2 to simulate a failure, then access node1 from the client:
[root@server]# for((i=1;i<=4;i++));do curl node1.shellscript.cn/forum/;done
http on node3
http on node3
http on node3
http on node3
[root@server]#
Restart node2's http service and client access returns to normal:
[root@server]# for((i=1;i<=4;i++));do curl node1.shellscript.cn/forum/;done
http on node3
httpd on node2
http on node3
httpd on node2
[root@server]#
0.0.4 down and backup
Edit the upstream and mark node2 as a backup node:
upstream upservers {
    server node2.shellscript.cn:81 max_fails=2 fail_timeout=1 backup;
    server node3.shellscript.cn max_fails=2 fail_timeout=1;
}
Restart nginx and access from the client:
[root@server conf.d]# for((i=1;i<=4;i++));do curl node1.shellscript.cn/forum/;done
http on node3
http on node3
http on node3
http on node3
[root@server conf.d]#
Now only node3 responds; the backup node is used only after node3 fails, otherwise node3 handles all requests.
down, by contrast, marks a node as unavailable.
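For example, down is handy for taking a node out of rotation for maintenance without deleting its line (a sketch based on the upstream above):

```nginx
upstream upservers {
    server node2.shellscript.cn:81 down;   # temporarily out of rotation
    server node3.shellscript.cn;
}
```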
A more detailed explanation of ip_hash can be found in the official documentation:
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ip_hash