Fast WordPress/Nginx setup on a cheap VPS

2011-02-28 by Senko Rašić

I’ve got a single Linode 512 VPS hosting a few of my WordPress-powered sites (including this blog), a few Python web apps, a mail server and a few other fairly standard things. These aren’t high-traffic sites, so the server is not under much stress, but it does sometimes get a few more visitors, so I was wondering what easy things I could do to ensure it doesn’t crash miserably if something on it actually gets popular.

Nginx instead of Apache

The first easy thing is to use Nginx instead of Apache as the web server. This may or may not be easy to do – if you need (or prefer) to stay with Apache, Patrick McKenzie has some good advice on how to keep Apache from crashing and burning under load on a small VPS.

On the other hand, if all you’re using Apache for is mod_php (PHP) and mod_rewrite (clean URLs), it’s pretty easy to switch. It got a lot easier recently when the PHP FastCGI Process Manager (php-fpm) was included in the PHP mainline, finally giving PHP proper FastCGI support. Since it’s a fairly new addition, depending on your operating system you may have to compile php-fpm yourself, or it may already be available as a package.

Nginx, being asynchronous (instead of using worker processes or threads, each handling one client at a time), can handle thousands of clients simultaneously with little memory overhead, and is really efficient at serving static files. For PHP requests, it makes a FastCGI request “upstream” to a pool of php-fpm workers. To keep too many workers from killing the server under load, and from opening too many database connections (which is basically what happens with the default Apache setup unless you tweak the worker and client limits), I run only 3 PHP worker processes – which serve all the sites, not 3 per site.
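As a minimal sketch of that wiring (the server name, document root and the php-fpm socket address here are illustrative assumptions, not copied from my setup):

```nginx
# Hand PHP requests to a shared php-fpm pool; everything else is
# served as static files by nginx itself.
upstream php {
    server 127.0.0.1:9000;   # assumed php-fpm listen address
}

server {
    listen 80;
    server_name example.com;            # hypothetical site name
    root /var/www/example.com;          # hypothetical document root

    location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php;               # all sites share the same small pool
    }
}
```

Because every site goes through the same small pool, the total number of PHP processes (and database connections) is bounded no matter how many sites the box hosts.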

Another advantage of Nginx over Apache workers is that it’s not susceptible to the slowloris attack; it can handle a huge number of slow clients connected to it just fine. KeepAlive is also a non-issue here (see Patrick’s blog post about KeepAlive – why it’s useful, and why it can bring your Apache server to its knees).
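Since an idle keep-alive connection costs nginx almost nothing (one connection, not a whole worker), you can leave it on and just bound how long slow clients may dribble their requests in. A sketch, with illustrative values rather than my exact config:

```nginx
http {
    # keep-alive is cheap in nginx, so a generous timeout is fine
    keepalive_timeout 65;

    # bound how long a (possibly malicious) slow client can take
    # to send the request headers and body
    client_header_timeout 10s;
    client_body_timeout   10s;
}
```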

Cache aggressively

Only 3 PHP workers can’t serve that many requests. But since the sites are fairly static, it’s easy to cache them, so I use the excellent WP-Cache plugin, which caches pages into static files and serves those whenever possible.

But why stop there? WP-Cache, although really fast, still needs to run, keeping the PHP workers busy. Nginx comes with support for caching FastCGI responses, meaning it can cache the pages too, and serve them even more quickly. It does have a drawback, though – it’s impractical (if not impossible) to invalidate the cache properly on updates, so the nginx cache will go stale. UPDATE: I have since disabled the FastCGI cache because it was serving blank pages instead of the cached ones to some subset of visitors, so it’s off until I can see what happened. If you do use this caching, make sure you test that it works properly from multiple IPs.

But my sites are fairly static, and I can live with updates not being visible for up to a minute, so I have nginx cache the pages for just one minute. It’s a short interval, but it’s good enough, because after the nginx cache expires the request will probably hit WP-Cache anyway, so the overhead won’t be big.
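If stale pages being shown to logged-in users is a concern, nginx can also be told to skip the cache when it sees WordPress’s login cookie. A sketch (the cookie check and the php-fpm address are my assumptions, not part of my config below):

```nginx
location ~ \.php$ {
    set $skip_cache 0;
    # WordPress sets cookies prefixed "wordpress_logged_in" on login;
    # verify the name on your install
    if ($http_cookie ~* "wordpress_logged_in") {
        set $skip_cache 1;
    }

    fastcgi_cache senkonetcache;
    fastcgi_cache_bypass $skip_cache;   # don't serve these from the cache...
    fastcgi_no_cache     $skip_cache;   # ...and don't store them either

    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;        # assumed php-fpm address
}
```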

Nginx can also be used with memcached to keep the pages in memory instead of on disk, if you need really fast sites; but memory is the scarcest resource on a low-end VPS, and it’s better to be a bit slower than to start hitting swap because you’re trying to cache too many things in RAM.
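For completeness, a sketch of what the memcached approach looks like. It assumes something on the PHP side populates memcached with full pages keyed by URL; the key scheme and addresses are illustrative:

```nginx
# Serve pages straight from memcached, falling back to PHP on a miss.
location / {
    set $memcached_key "$scheme://$host$request_uri";
    memcached_pass 127.0.0.1:11211;     # assumed memcached address
    default_type text/html;
    error_page 404 502 = @php;          # cache miss: fall through to PHP
}

location @php {
    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    fastcgi_pass 127.0.0.1:9000;        # assumed php-fpm address
}
```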

Config files

To set this up, I had to install and configure nginx and php-fpm, and install the WP-Cache plugin (which I use in its default configuration). Here’s the relevant portion of my nginx config file:

# where to cache, how much disk to use
fastcgi_cache_path  /tmp/www-cache levels=1:2
        keys_zone=senkonetcache:8m max_size=30m;

server {
        # cache only 200 and 404 responses;
        # caching 30x breaks WP login form
        fastcgi_cache_valid 200 1m;
        fastcgi_cache_valid 404 1m;

        location / {
                index index.php index.html;
                # clean urls; if there's no such file, rewrite it
                # to point to WP's index.php (nginx gotcha: 'if'
                # belongs to the rewrite module, it's not a
                # general-purpose conditional)
                if (!-e $request_filename) {
                        rewrite ^/(.*)$ /index.php?q=$1 last;
                }
        }

        # pass the *.php requests upstream
        location ~ \.php$ {
                # use the cache, Luke
                fastcgi_cache senkonetcache;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
                # assumes php-fpm listens on its usual TCP socket
                fastcgi_pass 127.0.0.1:9000;
                # if you want to be able to log in to WP, make
                # sure nginx doesn't eat all the cookies
                fastcgi_pass_header Set-Cookie;
        }
}

And my /etc/php5/fpm/pool.d/www.conf file (the default one is well-documented; read up there on the options used here):

listen =
listen.allowed_clients =
pm = dynamic
pm.max_children = 3
pm.start_servers = 1
pm.min_spare_servers = 1
pm.max_spare_servers = 3
pm.max_requests = 100

The numbers

I’ve used several tools (siege, httperf, ab, and a custom tool of my own) to test the performance. The tests were not very scientific, but all the tools more or less agree on the ballpark figures. Here are the numbers given by siege (with options -b -c 500 -t 2m, hitting only one URL):

  • no caching: 12 reqs/s, 32.3% availability, ~2.5 load average
  • only WP-Cache: 166 reqs/s, 97.2% availability, ~1.25 load average
  • only nginx cache (warm, 1m run): 403 reqs/s, 99.7% availability, ~1 load average
  • nginx cache and WP-Cache: 412 reqs/s, 99.7% availability, ~1 load average

No surprises there – caching is a good way to speed up your website. What was surprising is that the server behaved nicely and recovered very quickly even in the no-caching case: even when it does get smashed, it doesn’t break, it just fails the requests it didn’t have time to serve.

Also, although the nginx-only and nginx/WP-Cache numbers are almost the same, they hide the fact that the nginx-only cache behaved quite badly with a cold cache. When a request comes in, nginx either serves it from the cache or asks the upstream – but it doesn’t try to satisfy the other pending requests from that first one. So if a lot of requests for a URL come in while there’s nothing in the cache for it, all of them go upstream. When that happens, having WP-Cache really saves the day.
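Newer nginx versions (1.1.12 and later) added a directive for exactly this thundering-herd problem; if yours has it, something like the following sketch collapses concurrent misses for one URL into a single upstream request:

```nginx
location ~ \.php$ {
    fastcgi_cache senkonetcache;
    # on a cache miss, only one request per URL goes upstream;
    # the rest wait for it to populate the cache
    fastcgi_cache_lock on;
    fastcgi_cache_lock_timeout 5s;

    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;        # assumed php-fpm address
}
```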

In summary

Key ideas to take away:

  • use WP-Cache
  • limit the number of workers (either fpm or Apache, whichever you use)
  • Nginx is really fast
  • Nginx FastCGI cache gives a lot of additional performance boost, if you’re fine with content being a bit stale