Reverse Proxy
<slideshow style="nobleprog" headingmark="⌘" incmark="…" scaled="true" font="Trebuchet MS" >
- title
 - Reverse Proxy
- author
 - Bernard Szlachta (NobleProg Ltd)
 
</slideshow>
nginx as a reverse proxy ⌘
- A popular use of nginx
- Minimises concurrent connections to the proxied backend
- Put nginx in front of Tomcat

- nginx serves all static content (images, CSS, JavaScript)
- nginx proxies to Tomcat for dynamic content
 
A simple example ⌘
location /static/ {
  # with "root", the full request URI is appended: /static/logo.png -> /var/www/static/logo.png
  root /var/www;
}
location / {
  proxy_pass http://localhost:8000/;
}
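These location blocks are fragments; in a full configuration they sit inside a server block. A minimal, complete sketch (the listen port and server name are assumptions, not part of the original example):

server {
  listen 80;
  server_name example.com;

  # static files served straight from disk
  location /static/ {
    root /var/www;
  }

  # everything else is proxied to the backend (e.g. Tomcat)
  location / {
    proxy_pass http://localhost:8000/;
  }
}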
Automatic URI rewriting ⌘
- In the previous example the path is passed through unchanged, because both the location and the proxy_pass target are at the root
- If a path is specified in both the location and the proxy_pass target, the matching prefix is rewritten by default

location /foo/ {
  proxy_pass http://localhost:8000/bar/;
}
- The proxied backend sees a request for /foo/something as though it had been made for /bar/something
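Conversely, if the proxy_pass target is given without any URI part, the request path is passed to the backend unchanged; a small sketch (same host and port as above):

location /foo/ {
  # no URI on proxy_pass: the backend receives /foo/... unmodified
  proxy_pass http://localhost:8000;
}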
 
Rewriting Server Responses ⌘
- Sometimes applications will generate 301/302 redirects
- The Location header often includes a host and path calculated from the application's own settings
- When the application runs behind a proxy, that Location value may be wrong for the client
 
Server response problem example ⌘
location /foo/ {
  proxy_pass http://localhost:8000/bar/;
}
$ curl -I http://localhost/foo/
HTTP/1.1 302 Moved Temporarily
Location: http://localhost:8000/baz/
Solution: Proxy Redirect ⌘
location /foo/ {
  proxy_pass http://localhost:8000/bar/;
  # rewrite any Location header that points at the backend so it goes back through the proxy
  proxy_redirect http://localhost:8000/ /;
}
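With this rule in place, the redirect from the problem example should come back rewritten relative to the proxy (illustrative output):

$ curl -I http://localhost/foo/
HTTP/1.1 302 Moved Temporarily
Location: /baz/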
Adding an X-Forwarded-For ⌘
- The proxied backend only sees connections coming from nginx, so it doesn't know who made the original request
- Solution: add an X-Forwarded-For header carrying the client's address
 
location /foo/ {
  proxy_pass http://localhost:8000/bar/;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
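$proxy_add_x_forwarded_for appends the client address to any X-Forwarded-For header already present on the incoming request. It is also common (though not part of the original example) to pass a couple of related headers; a sketch:

location /foo/ {
  proxy_pass http://localhost:8000/bar/;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Real-IP $remote_addr;  # address of the immediate client
  proxy_set_header Host $host;              # preserve the originally requested host
}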
Dealing with legacy applications + cookies ⌘
- Scenario: multiple applications need to be served from a single domain

- Problem: these applications set cookies for the domain (and path) they currently live on, not for the new shared domain
 
nginx to the rescue, again ⌘
location /legacy1/ {
 # rewrite the cookie domain set by the backend to the shared domain
 proxy_cookie_domain legacy1.example.com app.example.com;
 # rewrite the cookie path so the cookie is scoped under /legacy1/
 proxy_cookie_path / /legacy1/;
 proxy_pass http://legacy1.example.com/;
}
location /legacy2/ {
 proxy_cookie_domain legacy2.example.com app.example.com;
 proxy_cookie_path / /legacy2/;
 proxy_pass http://legacy2.example.com/;
}
What did we do? ⌘
- nginx rewrites the cookie domain to app.example.com
- nginx rewrites the cookie path so each cookie is scoped under its application's prefix (/legacy1/ or /legacy2/)
- Each app sees only its own cookies
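For example, a Set-Cookie header from the legacy1 backend might be rewritten like this (the cookie value is illustrative):

From the backend:
 Set-Cookie: JSESSIONID=abc123; Domain=legacy1.example.com; Path=/
As forwarded to the client:
 Set-Cookie: JSESSIONID=abc123; Domain=app.example.com; Path=/legacy1/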
 
Caching ⌘
- nginx can be used to accelerate content from the upstream server
- By caching dynamically generated content, performance can be improved massively
 
Simple caching example ⌘
http {
  proxy_temp_path /var/spool/nginx;
  proxy_cache_path /var/spool/nginx keys_zone=CACHE:100m levels=1:2 inactive=6h max_size=1g;

  server {
    location / {
      proxy_pass http://localhost:8000/;
      proxy_cache CACHE;
    }
  }
}
What we did ⌘
- Configured a 100MB shared-memory zone (keys_zone) holding cache keys and metadata
- Configured an on-disk cache of up to 1GB (max_size)
- Told nginx to cache everything suitable for the / location
 
What nginx does ⌘
- nginx proxies requests to localhost:8000 as in the previous examples
- nginx looks at the response headers to check whether the data is safe to cache, and for how long
- nginx stores a copy of cacheable responses on disk
- The next time a request is made for that URL, the on-disk cache is checked
- If a copy exists and is still fresh, it is served directly without contacting the backend (hot entries stay in the OS page cache, so they are effectively served from RAM)
 
What is fresh? ⌘
- The backend server should send appropriate headers to let nginx (and any other caches along the way) know what is OK to cache, and for how long

- Cache-Control: private
 - not cached at all by nginx (or any other shared cache)

- Cache-Control: public, max-age=900
 - cache for 15 minutes

- Expires: Mon, 01 Jul 2013 01:00:00 GMT
 - cache until the specified date
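If the backend cannot be changed to send these headers, nginx can assign a default freshness itself; a sketch using proxy_cache_valid (the times are illustrative, not from the original slides):

location / {
  proxy_pass http://localhost:8000/;
  proxy_cache CACHE;
  # applies when the response carries no Cache-Control/Expires headers
  proxy_cache_valid 200 302 15m;
  proxy_cache_valid 404 1m;
}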
 
Cache stampedes ⌘
- Cache stampedes are a big problem for sites using caching to scale
- Imagine the following scenario:

- The proxied backend can handle at most 50 requests/second
- nginx is added; traffic is 300 requests/second
- 25% of traffic is for the front page (75 requests/second for a single URL)
- What happens when the cached front page's max-age is reached?
 
Dogpile effect ⌘
- As soon as the cached copy expires, every request for that URL goes to the backend at once
- The site is very slow (or broken) for everyone until a new copy gets into the cache
- Load graphs show waves of peaks
 
nginx solution ⌘
- nginx can be configured to serve stale content whilst updating
 
location / {
  proxy_pass http://localhost:8000/;
  proxy_cache CACHE;
  proxy_cache_use_stale updating;
}
Hiding errors ⌘
- If old content is better than no content, nginx can also be configured to serve stale content when it receives no response, or an error, from the backend
- We also set a read timeout, so a hung backend counts as "no response"
 
location / {
  proxy_pass http://localhost:8000/;
  proxy_cache CACHE;
  proxy_cache_use_stale timeout http_500;
  proxy_read_timeout 15s;
}
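proxy_cache_use_stale accepts several other conditions as well; in practice a configuration often covers the common gateway errors too (a sketch, not from the original slides):

location / {
  proxy_pass http://localhost:8000/;
  proxy_cache CACHE;
  # serve stale content on connection errors, timeouts, 5xx responses,
  # and while a fresh copy is being fetched
  proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
  proxy_read_timeout 15s;
}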
