Friday, October 2, 2009

Apache vs Nginx : Web Server Performance Deathmatch

Deathmatch may be an overstatement, but here are the results from some performance benchmarking.

The Setup:

Server:

  • CentOS 5.1
  • Dual 2.4GHz Xeon CPUs
  • 4GB RAM
  • RAID5 (4 x 15k disks)
  • Server and test client were connected via a consumer grade 10/100 switch

Configurations:

  • Basic static vhost
  • Keepalive turned on with a timeout of 15 seconds
  • GZIP turned on (rough config sketches for both servers below)
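
For reference, the relevant settings look roughly like this in each server's config. These are sketches rather than my exact vhost files, and the hostname and document root are made-up placeholders:

    # Nginx (sketch) - static vhost with keepalive and gzip
    server {
        listen       80;
        server_name  test.example.com;    # hypothetical hostname
        root         /var/www/test;       # hypothetical docroot

        keepalive_timeout  15;            # 15 second keepalive
        gzip               on;            # compress responses
    }

    # Apache (sketch) - keepalive is set in the main server config
    KeepAlive         On
    KeepAliveTimeout  15

    <VirtualHost *:80>
        ServerName    test.example.com
        DocumentRoot  /var/www/test
        # gzip via mod_deflate
        AddOutputFilterByType DEFLATE text/plain text/html text/css
    </VirtualHost>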

I used autobench to perform the tests. Basically it's a Perl script that sits on top of httperf, runs multiple tests in succession, and outputs the results to CSV. Awfully convenient.

All the tests were run against the same robots.txt file. Autobench ran the following command 20 times incrementing the request rate by 10 each time. I started at 10 requests per second and went up to 200.

httperf --timeout=5 --client=0/1 --server=HOST --port=80 --uri=/robots.txt --rate=X --send-buffer=4096 --recv-buffer=16384 --num-conns=5000 --num-calls=10
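
For anyone who wants to reproduce the run, the equivalent autobench invocation looks roughly like this (HOST and the output filename are placeholders):

    autobench --single_host --host1 HOST --port1 80 --uri1 /robots.txt \
        --low_rate 10 --high_rate 200 --rate_step 10 \
        --num_conn 5000 --num_call 10 --timeout 5 \
        --output_fmt csv --file results.csv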

I took two samples and arbitrarily used the second one for the results shown here. At the bottom of this post there is a spreadsheet containing the data from these tests so you can check out all the results.

The Results:

Both web servers performed well in all the tests and had no issues completing the requests. So I will not go over the metrics where they finished very close together, only the ones where their results differed.

There were three httperf metrics where Nginx and Apache differed by more than a small amount: reply rate, network I/O, and response time.

This one really piqued my interest. It seems strange to me that we would see such a result from Apache. In both test runs there was a big difference at the 700 request mark. Statistically, the difference was only in the max reply rate; the average and minimum stayed within a few tenths of a percent throughout the tests. The max for Apache in the first run was 734.7 and in the second 758.7, with standard deviations of 13.9 and 22.9 respectively. I suppose the real question here is whether this is an artifact of my test or how Apache actually behaves. If it is the latter, it seems strange that dealing with 700 requests would be any different than dealing with 800. From 800 requests up to 2000, the larger differences in the results seem more realistic: controlled and gradual.

The network I/O graph I find interesting mostly because I am not sure how to read it. On one hand, it seems Apache is simply using more bandwidth to serve the same number of requests as Nginx, which would be bad. On the other hand, it could just mean that Apache does a better job of consuming the available pipe, which would be good. My hunch is that it is the former.
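
One way to dig into that hunch would be to compare how many bytes each server actually sends per response, headers included. Something like this curl one-liner would show it (HOST is a placeholder; not something I ran as part of these tests):

    # Bytes of headers vs body for a single response from each server
    curl -s -o /dev/null -w 'headers: %{size_header}  body: %{size_download}\n' \
        http://HOST/robots.txt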

The response times are also interesting, since Nginx responds consistently at 0.4 ms. I am not sure why; I don't know the internals of Nginx, but I imagine it comes down to how it handles requests.

While the httperf tests were running I collected sar data. The results show that Nginx uses quite a bit less CPU and produces correspondingly less load.
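
The collection itself is nothing fancy; roughly something like this running on the server during the tests (the intervals and filenames are just examples):

    # Sample CPU utilization every 5 seconds, 600 samples
    sar -u 5 600 > cpu.log &
    # Sample run-queue length / load averages on the same schedule
    sar -q 5 600 > load.log &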

Apache:

CPU: [sar CPU usage graph]

Load: [sar load average graph]

Nginx:

CPU: [sar CPU usage graph]

Load: [sar load average graph]

That's all I got, pretty cool. Nginx competes quite well with Apache, and there doesn't seem to be a good reason not to use it, especially in CPU-constrained situations (i.e. huge traffic, slow machines, etc.).

Here's my spreadsheet with the detailed results for each httperf metric.
