WordPress benchmarks on Nginx, PHP, Apache and FastCGI

This weekend we had a chance to test high-traffic WordPress blog configurations and their performance. A customer of ours landed multiple Digg front-page stories, and we had to tune the server to handle the peak-time traffic.

For best performance we usually deploy Nginx as the front end, tie it to Apache as a back end for PHP processing, add memcached and WP Super Cache, and layer on a number of rewrite rules and other optimizations.
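A minimal sketch of that split looks like the following. This is illustrative only, assuming Apache listens on 127.0.0.1:8080; the hostname, paths and port are placeholders, not our production config:

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/blog;

    # Static assets are served straight from disk by Nginx
    location ~* \.(css|js|gif|jpe?g|png|ico)$ {
        expires 30d;
    }

    # Everything else (PHP/WordPress) is proxied to the Apache back end
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The point of the split is that Nginx's event-driven workers handle the cheap static requests without tying up a heavyweight Apache process for each one.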

The server is powered by a single quad-core Xeon 5430-series processor with 2 GB of RAM and SATA drives, on a SuperMicro server board. It’s a powerful box that handles traffic well when correctly tuned and optimized.

We were running the latest WordPress release with some custom rewrites on the front-end Nginx daemon (the front-end proxy). Nginx served all static content, and all PHP requests were forwarded to Apache 2, compiled from source, with the latest PHP 5.2.6 loaded as a module.

Nginx front end, Apache + PHP, Super Cache, some custom rewrite rules:
Requests: 1000, concurrency level: 30
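The numbers below come from ApacheBench (ab). A run with these parameters would look roughly like this; the hostname is a placeholder and the exact flags we used may have differed:

```shell
# 1000 total requests, 30 concurrent connections
ab -n 1000 -c 30 http://www.example.com/
```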

Server Software:        nginx/0.7.11
Server Hostname:        www.neatorama.com
Server Port:            80

Document Path:          /
Document Length:        300421 bytes

Concurrency Level:      30
Time taken for tests:   0.691438 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      300687000 bytes
HTML transferred:       300421000 bytes
Requests per second:    1446.26 [#/sec] (mean)
Time per request:       20.743 [ms] (mean)
Time per request:       0.691 [ms] (mean, across all concurrent requests)
Transfer rate:          424678.70 [Kbytes/sec] received

Connection Times (ms)
min  mean[+/-sd] median   max
Connect:        0    0   0.3      0       3
Processing:     4   20   6.1     19      41
Waiting:        0   16   6.6     14      28
Total:          4   20   6.2     19      41

Percentage of the requests served within a certain time (ms)
50%     19
66%     24
75%     26
80%     27
90%     27
95%     29
98%     30
99%     33
100%     41 (longest request)

Nginx + FastCGI, running spawn-fcgi from the lighttpd distribution with 30 child processes. All static content was served by Nginx, and PHP was handled by the PHP 5.2.6 php-cgi binary (custom compiled from source, of course).

Server Software: nginx/0.7.11
Server Hostname: www.*****.com
Server Port: 80

Document Path: /
Document Length: 170281 bytes

Concurrency Level: 30
Time taken for tests: 103.429538 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Non-2xx responses: 900
Total transferred: 170518100 bytes
HTML transferred: 170281000 bytes
Requests per second: 9.67 [#/sec] (mean)
Time per request: 3102.886 [ms] (mean)
Time per request: 103.430 [ms] (mean, across all concurrent requests)
Transfer rate: 1609.99 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 1
Processing: 1748 3076 492.0 3035 4844
Waiting: 768 1673 370.9 1659 2948
Total: 1748 3076 492.0 3035 4844

Percentage of the requests served within a certain time (ms)
50% 3035
66% 3248
75% 3405
80% 3496
90% 3732
95% 3943
98% 4192
99% 4398
100% 4844 (longest request)

As you can see, Nginx + Apache + PHP + Super Cache was the clear winner, no questions asked.

Conclusion: we will stick with Nginx as a front-end proxy serving all static content, with PHP processing forwarded to an Apache server running on the same machine (using the worker MPM), running Super Cache, and doing as many rewrites and file checks as possible in Nginx for best performance. Apache is a big resource hog; however, our tests show it still outperforms running PHP scripts via FastCGI.

I would like to note that running PHP via FastCGI used slightly less memory; however, the CPU load shot up, and I was not sure we could handle Digg traffic that easily. Enjoy!


  1. mike503 says:

    Forget about spawn-fcgi. Use php-fpm.

  2. Kai says:

    Thanks for your comment. We will check it out and post feedback after a few tests etc.

  3. victori says:

    60 req/sec using a page cache? … Is hitting the PHP interpreter that slow? Consider using Nginx’s memcached page cache method instead to avoid the PHP interpreter altogether.

    On our site, using a Java+Jetty+Hibernate+Spring+Wicket stack, we easily pull 60 req/sec on heavy dynamic pages and >250 req/sec on simple dynamic pages (the Hibernate second-level SQL cache rocks hard).

    Oh, and we also hit Digg multiple times. I honestly expected a larger spike in traffic than what we got from our few Digg effects.


  4. Anon says:

    Complete requests: 1000
    Failed requests: 883
    (Connect: 0, Length: 883, Exceptions: 0)
    Write errors: 0
    Non-2xx responses: 883

    Complete requests: 1000
    Failed requests: 0
    Write errors: 0
    Non-2xx responses: 900

    Am I reading this correctly? Does this mean out of 1000 requests only 100 or so were served properly?

  5. Sutekin says:

    Your interpretation of the results is totally false.

    There is a bottleneck on the Apache side, and 88.3% of requests returned blank/error pages.

    Btw, let’s do a real interpretation of the results:

    1000 pages at 103430 ms = 103.4 ms per page on average

    117 pages at 16653 ms = 142.3 ms per page on average

    NGX rocks every time.

  6. Bejjan says:

    I managed to crank Apache up to serve ~600 phpinfo pages per second.
    Now that might sound good, but in a benchmark of Nginx with PHP-FPM the results were just amazing: over 11,000 phpinfo pages served per second.

    Apache, get bent!
    Hello NGINX!

  7. Alich Tanti says:

    Right now I use Apache worker + mod_fcgid, which works well as a VPS solution.

Leave a Reply