We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. — Donald Knuth
As part of updating my WordPress installation for the arrival of https://paulsaunders.org.uk, I decided I’d need to add some caching to cope with the extra visitors. In addition, with it being a Multisite, I figured there’d be some common assets that could be cached nicely across the different sites.
So, as I am wont to do, I went looking for tutorials on the web. Two strategies made themselves known: FastCGI caching in nginx, and application-level caching via the W3TC WordPress plugin.
The FastCGI cache works by adding a few directives to the nginx config. The server, path, CGI script and arguments of incoming requests are hashed together, and the response from the FastCGI script is saved to a file. The next time those same values are requested (as determined by a matching hash), the response is streamed from that file instead of by calling the script again.
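A minimal sketch of what this looks like in the nginx config (the cache path, zone name and socket path here are illustrative, not my actual setup):

```nginx
http {
    # Cache store on disk; keys held in a 10 MB shared memory zone,
    # entries dropped after 60 minutes without a hit.
    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=wpcache:10m inactive=60m;

    # The request values hashed together to identify a cached response.
    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    server {
        location ~ \.php$ {
            fastcgi_pass unix:/run/php/php-fpm.sock;
            include fastcgi_params;

            fastcgi_cache wpcache;
            fastcgi_cache_valid 200 60m;                 # cache successful responses for an hour
            add_header X-Cache $upstream_cache_status;   # reports HIT / MISS / BYPASS
        }
    }
}
```

The `X-Cache` header is a handy way to check from the browser whether a given page actually came out of the cache.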
The W3TC plugin, being geared towards WordPress, performs much more acceleration. As well as caching pages to disk, it can minify CSS and JavaScript (removing unnecessary whitespace, newlines and so on to reduce the amount of data transferred), harness CDNs, and monitor performance using Google PageSpeed, for example.
So, I configured both of these systems, but I wasn’t quite getting the performance I expected from nginx. Pages were taking 4 or 5 seconds to appear.
And then the Donald Knuth quote above came into my head. I realised that, in my effort to get the best performance, I was actually doing more work in the caching than in the actual serving of the page.
So, I started by disabling W3TC, and suddenly the Munin graph for nginx started showing a lot more “HIT”s on the cache. Page speed also improved dramatically and is now down to about a second per page. I suspect I can get this down further over the next few weeks by turning parts of W3TC back on, but for now that’s a nice lesson in over-optimization.