Raspberry Pi Server Cluster Tests

Now that I've built a simple Raspberry Pi server cluster, it would be interesting to see how much traffic it can handle compared to a single Pi. There are many different server benchmarking tools available. I'm going to use siege.

Using Siege to test server response times

Siege is a program that sends large numbers of requests to a web server. The following command sends HTTP requests to a server on my local network:

siege -d10 -c10 -t1m http://192.168.0.10/spec.html

The -c option sets the number of concurrent users - 10 in this case. The -d option sets the maximum delay in seconds between each user's requests; the actual delay is randomized, but never exceeds the value given with -d. The -t option tells siege how long the test should run - 1 minute in this case.

In each of the tests that I conducted, I used a maximum delay of 10 seconds and a total test time of 1 minute. All tests used siege to make requests across my local network, not via the internet.
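Running the whole sweep by hand gets tedious, so it's worth scripting. The snippet below is a rough sketch rather than the exact commands I ran: the target URL, log directory and list of user counts are placeholders, it assumes siege is installed on the test machine, and it relies on siege printing its summary statistics to stderr.

#!/bin/bash
# Sweep the number of concurrent users and save each siege summary.
# TARGET, OUTDIR and the list of user counts are placeholders - adjust them for your own setup.
TARGET="http://192.168.0.10/spec.html"
OUTDIR="siege-results"
mkdir -p "$OUTDIR"

for USERS in 1 5 10 20 50; do
    echo "Testing with $USERS concurrent users..."
    # siege writes per-request lines to stdout and the summary to stderr,
    # so discard stdout and keep the summary in a log file.
    siege -d10 -c"$USERS" -t1m "$TARGET" > /dev/null 2> "$OUTDIR/c$USERS.log"
done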

Testing the load balancer with a single Raspberry Pi

I wanted to see how a single Raspberry Pi handles traffic with and without the load balancer. I suspected that the load balancer would add a small delay, but in fact a single Pi seems to perform better when it's behind the load balancer, presumably because requests are buffered before they reach the Pi. This graph shows the average response times for varying numbers of concurrent users, with and without the load balancer.
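Each curve comes from pointing siege at a different address - the Pi directly for one, and the load balancer in front of it for the other. A minimal sketch of that comparison (both IP addresses are placeholders, and only the average response time is pulled out of siege's summary):

# Run the same one-minute test against a worker Pi and the load balancer.
# 192.168.0.10 (worker Pi) and 192.168.0.1 (load balancer) are placeholders.
for TARGET in http://192.168.0.10/spec.html http://192.168.0.1/spec.html; do
    echo "=== $TARGET ==="
    # siege prints its summary to stderr; keep only the average response time.
    siege -d10 -c10 -t1m "$TARGET" 2>&1 > /dev/null | grep "Response time"
done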

Seeing how performance improves with more nodes

The next test that I conducted was to see how performance changed as more nodes were added to the cluster. I ran tests with varying numbers of concurrent users, but for simplicity I'll just show results for tests with 10 concurrent users.

This graph shows the average maximum and minimum response times for an increasing number of worker nodes:

As you can see, the minimum response time doesn't really improve as more nodes are added, which makes sense: the fastest response is still limited by the speed of a single node, and adding more nodes doesn't change that.

The maximum response time improved dramatically as more nodes were added. Using a cluster has definitely increased my server's capacity.
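The figures for graphs like these come straight from siege's summary, which reports the average response time along with the longest and shortest transaction for each run. Something along these lines pulls them out of saved log files (the file names are placeholders, one log per cluster size):

# Print average, longest and shortest transaction times from each saved log.
# Assumes one siege summary per cluster size, named nodes1.log, nodes2.log, ...
for LOG in nodes*.log; do
    echo "--- $LOG ---"
    grep -E "Response time|Longest transaction|Shortest transaction" "$LOG"
done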

Read more about siege.