VPS Performance Comparison
UPDATE: The scripts used for benchmarking and graphing, and all the raw data, can be found over at my git repository. Be aware that the graphs presented here have a Y-axis that runs from the minimum to the maximum value, not from 0 to the maximum.
My latest side project, was it up? -- a free web monitoring service -- required a number of VPS instances due to its distributed nature. I previously conducted a comparison of Slicehost and Prgmr. This time I needed to purchase several instances, so I went out and did a more thorough and wider comparison of which VPS provider would give me the most bang for my buck. I targeted VPS offerings in the $20 space (plus the cheapest Amazon EC2 option):
Note that the monthly cost of Rackspace's and Amazon's offerings does not include any data transfer. At $0.22/$0.08 per GB out/in, Rackspace customers would pay an additional $15 per month for 50 GB of inbound and 50 GB of outbound transfer. Amazon charges $0.17/$0.10 per GB out/in, which amounts to $13.50 extra per month for 50 GB each way. The following table shows the specifications of the various providers' systems:
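The bandwidth overage figures above come from a simple calculation, sketched below (the function is just a restatement of the quoted per-GB rates, not any provider's pricing API):

```python
def transfer_cost(out_rate, in_rate, gb_each_way=50):
    """Monthly bandwidth cost in dollars for gb_each_way GB sent
    and the same amount received, at the given per-GB rates."""
    return (out_rate + in_rate) * gb_each_way

# Rates as quoted above ($/GB out, $/GB in).
rackspace = transfer_cost(0.22, 0.08)  # about $15
amazon = transfer_cost(0.17, 0.10)     # about $13.50
```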
|Provider|Arch|CPU|VCPUs|
|Prgmr|x86_64|Opteron 2347 HE|1|
|Rackspace|x86_64|Opteron 2347 HE|4|
Linode is the only provider that gives you a choice between a 32-bit and a 64-bit architecture, so I performed all tests on both platforms on Linode. Amazon's smallest VPS is 32-bit, while all other providers use 64-bit domUs.
Prgmr and Amazon give you access to 1 virtual CPU, while Slicehost, Linode, and Rackspace give you 4. More VCPUs are not necessarily better, though. With as many VCPUs as physical CPUs, a single domU can burst to the full system capacity. If the system gets busy, however, the extra VCPUs could lead to increased context switching. 4 VCPUs should therefore give you better ideal performance, while 1 VCPU gives you a more stable performance profile.
Aside from cost, performance was the most important criterion for me when selecting a provider for was it up?. 5 different benchmarks were carried out every 3 hours over a week, leading to 56 runs each. The slowest system used up to 3 hours to complete all 5 benchmarks. Week-long benchmarking was used to account for variance in host load during the day/night and over the week. I speculated that the host systems could be more utilized on weekdays when people in the US were awake (all providers under test were US-based). At the end of this article you'll find a table summarizing the averages and standard deviations of the 5 benchmarks on all providers.
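A minimal sketch of the kind of runner cron could invoke every 3 hours is shown below. The command names and log format are hypothetical stand-ins, not the actual scripts from my repository:

```python
import subprocess
import time

def time_command(cmd):
    """Run a benchmark command and return wall-clock seconds,
    or None if the command exited non-zero."""
    start = time.perf_counter()
    result = subprocess.run(cmd, capture_output=True)
    elapsed = time.perf_counter() - start
    return elapsed if result.returncode == 0 else None

# Hypothetical commands standing in for the real benchmark suites.
BENCHMARKS = {
    "unixbench-single": ["./Run"],
    "sql-bench": ["perl", "run-all-tests"],
}

def run_all(logfile="results.csv"):
    # Scheduled from cron with: 0 */3 * * * python runner.py
    with open(logfile, "a") as log:
        for name, cmd in BENCHMARKS.items():
            log.write("%s,%d,%s\n" % (name, time.time(), time_command(cmd)))
```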
Single-process Unixbench was the first test carried out. I had to patch Unixbench to disable a failing test and the graphical tests. The scores reported are points, where a higher score is better.
To my surprise Amazon, the most expensive offering, was clearly the loser on this workload. All providers had fairly little variance on this test, except for Slicehost, which even dipped below Amazon's scores on some runs. Could this be the first sign of a very full host? Or maybe my neighboring nodes got slashdotted? The clear winner is Linode, on both the 32- and 64-bit architectures.
Parallel Unixbench followed. I had to patch the Unixbench controller script to always use 4 parallel processes; normally Unixbench decides how many processes to spawn based on the number of virtual CPUs the system has.
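The idea behind the controller patch, always spawning a fixed number of copies instead of one per VCPU, can be illustrated like this (a sketch of the concept, not the actual Unixbench patch):

```python
import subprocess
import time

def run_copies(cmd, copies=4):
    """Launch `copies` identical processes at once, wait for all of them,
    and return total wall-clock seconds. The Unixbench controller normally
    picks the number of copies from the VCPU count; here it is pinned."""
    start = time.perf_counter()
    procs = [subprocess.Popen(cmd) for _ in range(copies)]
    for p in procs:
        p.wait()
    return time.perf_counter() - start
```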
Linode continues to impress with the best Unixbench scores for both of its supported architectures. The results are similar to the single-process benchmark: Amazon is slowest overall, with Slicehost being the most unstable environment. It's important to note that the two providers with only one virtual CPU, Prgmr and Amazon, have the lowest scores on both Unixbench tests.
SQL-bench on PostgreSQL is a large single-threaded database benchmark created by the MySQL project. I had to patch the suite to disable one test which did not run on PostgreSQL and to reduce the iteration counts of some tests so the suite would finish in a reasonable time.
As we see from the graph, this benchmark varies greatly over time for most providers. If we consult the summary table, we note that 64-bit Linode has the best average time, with 32-bit Linode in second place. Slicehost's average runtime is the highest by a large margin. Amazon is clearly the most stable host in the SQL-bench tests, with a standard deviation several orders of magnitude lower than the other providers'.
The Django test suite on PostgreSQL was run to give a picture of how the various hosts perform under a web application load. This is a single-threaded benchmark.
The results are very similar to those of the SQL-bench runs against the same database system. Linode 64-bit and 32-bit have the lowest average runtimes, and Amazon again has a surprisingly low standard deviation. As in the SQL-bench runs, Rackspace has the highest standard deviation, while Slicehost has the highest average runtimes, more than 2.5 times slower than Linode 64-bit.
The Django test suite on in-memory SQLite is exactly the same as the previous benchmark, only run against a different database system.
Amazon fared badly on this memory-intensive benchmark, posting the highest average runtime. Overall the runtimes are fairly stable; only Slicehost has a significant standard deviation. As in all the database and Django benchmarks, Linode 64-bit has the lowest average runtime, with 32-bit Linode close behind.
|1 x̄|1 σ|2 x̄|2 σ|3 x̄|3 σ|4 x̄|4 σ|5 x̄|5 σ|
The benchmarks are numbered in the order presented above: (1) single-process Unixbench, (2) parallel Unixbench, (3) SQL-bench, (4) Django on PostgreSQL, (5) Django on SQLite. x̄ is the average and σ the standard deviation.
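Each table entry is the plain sample mean and sample standard deviation of a provider's 56 runs of one benchmark; as a sketch (the runtimes below are illustrative numbers, not actual measurements):

```python
import statistics

def summarize(runtimes):
    """Mean and sample standard deviation of one provider's
    runtimes on one benchmark."""
    return statistics.mean(runtimes), statistics.stdev(runtimes)

# Illustrative runtimes in seconds, not real data from the comparison.
mean, sigma = summarize([102.0, 98.0, 101.0, 99.0])
```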
Summarizing the benchmarks gives us one clear winner: Linode. 32-bit gave the best results on the Unixbench runs, while 64-bit was fastest on the Django and database tests. Since Linode also has the highest included bandwidth, I have a hard time recommending any of the other providers if performance and price are most important to you.
If I had the opportunity to compare these providers again, I would include more multi-threaded benchmarks. I probably won't do a new comparison, though, as this article has taken a lot of time and been over 6 months in the making.
If you're going to buy a VPS I'd appreciate it if you used my referral page for Linode.