<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-GB">
	<id>https://training-course-material.com/index.php?action=history&amp;feed=atom&amp;title=Load_balancing_with_nginx</id>
	<title>Load balancing with nginx - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://training-course-material.com/index.php?action=history&amp;feed=atom&amp;title=Load_balancing_with_nginx"/>
	<link rel="alternate" type="text/html" href="https://training-course-material.com/index.php?title=Load_balancing_with_nginx&amp;action=history"/>
	<updated>2026-05-13T09:36:09Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.45.1</generator>
	<entry>
		<id>https://training-course-material.com/index.php?title=Load_balancing_with_nginx&amp;diff=23943&amp;oldid=prev</id>
		<title>Cesar Chew at 17:25, 24 November 2014</title>
		<link rel="alternate" type="text/html" href="https://training-course-material.com/index.php?title=Load_balancing_with_nginx&amp;diff=23943&amp;oldid=prev"/>
		<updated>2014-11-24T17:25:05Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{Cat|Nginx}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;slideshow style=&amp;quot;nobleprog&amp;quot; headingmark=&amp;quot;⌘&amp;quot; incmark=&amp;quot;…&amp;quot; scaled=&amp;quot;true&amp;quot; font=&amp;quot;Trebuchet MS&amp;quot; &amp;gt;&lt;br /&gt;
;title: Load balancing with nginx&lt;br /&gt;
;author: Bernard Szlachta (NobleProg Ltd)&lt;br /&gt;
&amp;lt;/slideshow&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== nginx as load balancer ⌘===&lt;br /&gt;
* A common usage pattern: one or more nginx servers serve static content directly and pass dynamic requests on to a cluster of application (e.g. Java) servers&lt;br /&gt;
* Helps provide good uptime and improved performance&lt;br /&gt;
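* For example (hostnames and paths here are illustrative, not from the original slides):&lt;br /&gt;
 upstream app {&lt;br /&gt;
  server app1.example.com:8080;&lt;br /&gt;
  server app2.example.com:8080;&lt;br /&gt;
 }&lt;br /&gt;
 server {&lt;br /&gt;
  # static content served directly from disk&lt;br /&gt;
  location /static/ {&lt;br /&gt;
   root /var/www;&lt;br /&gt;
  }&lt;br /&gt;
  # dynamic requests passed to the backend cluster&lt;br /&gt;
  location / {&lt;br /&gt;
   proxy_pass http://app;&lt;br /&gt;
  }&lt;br /&gt;
 }&lt;br /&gt;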
&lt;br /&gt;
=== Load balancing algorithms ⌘===&lt;br /&gt;
* Round-robin (default)&lt;br /&gt;
:* Connection 1 to host 1, connection 2 to host 2, connection 3 to host 1, and so on&lt;br /&gt;
* ip_hash&lt;br /&gt;
:* Provides consistent mapping between client and upstream server&lt;br /&gt;
* least connections&lt;br /&gt;
:* Sends each connection to the backend with the fewest connections&lt;br /&gt;
:* Use with care&lt;br /&gt;
=== Simple load balancing example ⌘===&lt;br /&gt;
;&lt;br /&gt;
 upstream app {&lt;br /&gt;
  server app1.example.com:8080;&lt;br /&gt;
  server app2.example.com:8080;&lt;br /&gt;
  server app3.example.com:8080;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
 server {&lt;br /&gt;
  location / {&lt;br /&gt;
   proxy_pass http://app;&lt;br /&gt;
  }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
=== Backend health ⌘===&lt;br /&gt;
* nginx can temporarily take a backend out of the pool when connections to it fail:&lt;br /&gt;
&lt;br /&gt;
 upstream app {&lt;br /&gt;
  server app1.example.com:8080 max_fails=3 fail_timeout=30s;&lt;br /&gt;
  server app2.example.com:8080 max_fails=3 fail_timeout=30s;&lt;br /&gt;
  server app3.example.com:8080 max_fails=3 fail_timeout=30s;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
* This stops nginx sending traffic to a backend after connection attempts have failed three times in a row&lt;br /&gt;
* It will be tried again after 30 seconds&lt;br /&gt;
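* A server can also be marked as a backup, used only while the others are unavailable (an illustrative variation, not from the original slides):&lt;br /&gt;
 upstream app {&lt;br /&gt;
  server app1.example.com:8080 max_fails=3 fail_timeout=30s;&lt;br /&gt;
  server app2.example.com:8080 max_fails=3 fail_timeout=30s;&lt;br /&gt;
  # receives traffic only when the servers above are down&lt;br /&gt;
  server app3.example.com:8080 backup;&lt;br /&gt;
 }&lt;br /&gt;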
=== &amp;#039;Sticky&amp;#039; backends ⌘=== &lt;br /&gt;
* Want each client to be served by the same backend?&lt;br /&gt;
* Use ip_hash&lt;br /&gt;
* Only looks at the first three octets of the client&amp;#039;s IPv4 address, so all clients in the same /24 network map to the same backend&lt;br /&gt;
:* Therefore often unsuitable for internal-facing applications, where many clients share a subnet&lt;br /&gt;
&lt;br /&gt;
=== &amp;#039;Sticky&amp;#039; backends example ⌘===&lt;br /&gt;
; &lt;br /&gt;
 upstream app {&lt;br /&gt;
  ip_hash;&lt;br /&gt;
  server app1.example.com:8080 max_fails=3 fail_timeout=30s;&lt;br /&gt;
  server app2.example.com:8080 max_fails=3 fail_timeout=30s;&lt;br /&gt;
  server app3.example.com:8080 max_fails=3 fail_timeout=30s;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
=== Least connections ⌘===&lt;br /&gt;
* Good in theory&lt;br /&gt;
* Can lead to &amp;#039;interesting&amp;#039; problems with servers constantly being added/removed&lt;br /&gt;
:* Server overloaded, starts to fail connections&lt;br /&gt;
:* Removed from pool, number of connections falls&lt;br /&gt;
:* Now healthy again and has no connections, so suddenly flooded with connections&lt;br /&gt;
:* Server overloaded, starts to fail connections...&lt;br /&gt;
&lt;br /&gt;
=== Least connections example ⌘===&lt;br /&gt;
;&lt;br /&gt;
 upstream app {&lt;br /&gt;
  least_conn;&lt;br /&gt;
  server app1.example.com:8080 max_fails=3 fail_timeout=30s;&lt;br /&gt;
  server app2.example.com:8080 max_fails=3 fail_timeout=30s;&lt;br /&gt;
  server app3.example.com:8080 max_fails=3 fail_timeout=30s;&lt;br /&gt;
 }&lt;br /&gt;
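&lt;br /&gt;
* One way to soften this flooding is to cap each server with max_conns (available in open-source nginx since 1.11.5; the cap shown is illustrative):&lt;br /&gt;
 upstream app {&lt;br /&gt;
  least_conn;&lt;br /&gt;
  # connections beyond the cap are refused (requests are queued only in nginx Plus)&lt;br /&gt;
  server app1.example.com:8080 max_conns=100;&lt;br /&gt;
  server app2.example.com:8080 max_conns=100;&lt;br /&gt;
  server app3.example.com:8080 max_conns=100;&lt;br /&gt;
 }&lt;br /&gt;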
&lt;br /&gt;
=== Keepalive connections ⌘===&lt;br /&gt;
* Keeps a &amp;#039;pool&amp;#039; of idle connections to the backends and reuses them&lt;br /&gt;
* Lower overhead per request, as there is no need to create a new TCP connection each time&lt;br /&gt;
* If more connections are required than the pool size, these will be dynamically created as before&lt;br /&gt;
&lt;br /&gt;
=== Keepalive example ⌘===&lt;br /&gt;
;&lt;br /&gt;
 upstream app {&lt;br /&gt;
  ip_hash;&lt;br /&gt;
  keepalive 32;&lt;br /&gt;
  server app1.example.com:8080 max_fails=3 fail_timeout=30s;&lt;br /&gt;
  server app2.example.com:8080 max_fails=3 fail_timeout=30s;&lt;br /&gt;
  server app3.example.com:8080 max_fails=3 fail_timeout=30s;&lt;br /&gt;
 }&lt;br /&gt;
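&lt;br /&gt;
* The pool above is only used if proxied requests are made with HTTP/1.1 and a cleared Connection header (per the nginx keepalive documentation):&lt;br /&gt;
 server {&lt;br /&gt;
  location / {&lt;br /&gt;
   # required for upstream keepalive to take effect&lt;br /&gt;
   proxy_http_version 1.1;&lt;br /&gt;
   proxy_set_header Connection "";&lt;br /&gt;
   proxy_pass http://app;&lt;br /&gt;
  }&lt;br /&gt;
 }&lt;br /&gt;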
&lt;br /&gt;
=== Weights ⌘===&lt;br /&gt;
* Sometimes you may be balancing across backends of varying power&lt;br /&gt;
* Use weights to indicate which servers should receive more or fewer connections&lt;br /&gt;
* Higher weight - more connections&lt;br /&gt;
&lt;br /&gt;
=== Weight example ⌘===&lt;br /&gt;
;&lt;br /&gt;
 upstream backend {&lt;br /&gt;
  ip_hash;&lt;br /&gt;
  server   backend1.example.com weight=2;&lt;br /&gt;
  server   backend2.example.com;&lt;br /&gt;
  server   backend3.example.com;&lt;br /&gt;
  server   backend4.example.com;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
=== Node maintenance ⌘=== &lt;br /&gt;
* A node can be temporarily disabled in the config, for example while upgrades are carried out on it&lt;br /&gt;
;&lt;br /&gt;
 upstream backend {&lt;br /&gt;
  ip_hash;&lt;br /&gt;
  server   backend1.example.com down;&lt;br /&gt;
  server   backend2.example.com;&lt;br /&gt;
  server   backend3.example.com;&lt;br /&gt;
  server   backend4.example.com;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
=== Exercise ⌘=== &lt;br /&gt;
* Download the HTTP daemons:&lt;br /&gt;
&lt;br /&gt;
 [[File:Httpservers.tar.gz]]&lt;br /&gt;
&lt;br /&gt;
* Run each of them in separate windows&lt;br /&gt;
* Create round-robin load balancer setup with backends at 127.0.0.1:8001 and 127.0.0.1:8002&lt;br /&gt;
* Stop the daemon on 8002.  What happens?&lt;br /&gt;
* Change the config so that connections to the 8002 daemon automatically cease when it has failed&lt;/div&gt;</summary>
		<author><name>Cesar Chew</name></author>
	</entry>
</feed>