CloudHarmony™ provides objective performance analysis to compare cloud providers. Its stated goal is to be the go-to source for independent, unbiased, and objective performance metrics for cloud services. CloudHarmony is not affiliated with, owned by, or funded by any cloud provider. The benchmarks CloudHarmony provides fall into three categories: Performance Benchmarking, Network Benchmarking, and Uptime Monitoring.
CloudHarmony suggests seven questions to ask when considering benchmark-based claims. Answering these questions provides a clearer understanding of the validity and applicability of such claims.
- What is the claim? Typically the bold-faced, attention-grabbing headline, such as "Service Y is 10X faster than Service Z".
- What is the claimed measurement? Usually implied by the headline. For example, the claim "Service Y is 10X faster than Service Z" implies a measurement of overall system performance.
- What is the actual measurement? To answer this question, look at the methodology and benchmark(s) used. This may require some digging, but it can usually be found somewhere in the article body. Once found, do some research to determine what was actually measured. For example, if Geekbench was used, you would discover the actual measurement is processor and memory performance, but not disk or network I/O.
- Is it an apples-to-apples comparison? The validity of a benchmark-based claim ultimately depends on the fairness of the testing methodology. Claims involving comparisons should compare similar things. For example, Ford could compare a Mustang Shelby GT500 (top speed 190 MPH) to a Chevy Aveo (top speed 100 MPH) and claim its cars are nearly twice as fast, but the Aveo is not a comparable vehicle, so the claim would be invalid. A fairer, apples-to-apples comparison would pit the Mustang GT500 against a Chevy Camaro ZL1 (top speed 186 MPH).
- Is the playing field level? Another important question is whether any extraneous factors gave one test subject an unfair advantage over another. Continuing the top-speed analogy, Ford could compare a Mustang running 92-octane fuel on a downhill course to a Camaro running 85-octane fuel on an uphill course. Because extraneous factors (the fuel and the grade of the course) favored the Mustang, the claim would be invalid. To be fair, the top speeds of both vehicles should be measured on the same course, with the same fuel, fuel quantity, driver, and weather conditions.
- Was the data reported accurately? Benchmarking often produces large datasets, and summarizing them concisely and accurately can be challenging. Things to watch for include weak statistical analysis (e.g., reporting only the average), math errors, and sloppy calculations. For example, when a large, highly variable dataset is collected, it is generally best practice to report the median in place of the mean (average) to mitigate the effect of outliers. Standard deviation is also a useful metric to include as an indicator of data consistency.
- Does it matter to you? The final question to ask is: assuming the results are valid, do they actually mean anything to you? For example, buying a vehicle based on a top-speed comparison is not advisable if fuel economy is what really matters to you.
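The point about statistical reporting above is easy to demonstrate: a single outlier drags the mean of a response-time sample far from typical behavior, while the median stays representative and a large standard deviation flags the inconsistency. A short sketch with hypothetical data:

```python
import statistics

# Hypothetical response-time samples in ms; one outlier at 950
samples = [102, 98, 105, 99, 101, 103, 97, 950]

mean = statistics.mean(samples)      # pulled up sharply by the outlier
median = statistics.median(samples)  # robust: still near the typical value
stdev = statistics.stdev(samples)    # large value signals inconsistent data

print(f"mean={mean:.1f}ms median={median:.1f}ms stdev={stdev:.1f}ms")
# mean is roughly double the median here, which is exactly the
# distortion that reporting "average only" would hide
```

Here the mean (206.9 ms) suggests the service is twice as slow as the median (101.5 ms) indicates, which is why reporting only the average can mislead.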
According to a ZDNet article (by John Hazard, March 9, 2011) “IT manager jobs to staff jobs in move to the Cloud”:
The typical IT organization maintains a manager-to-staff ratio of about 11 percent (the number dips to 6 or 7 percent in larger companies), said John Longwell, vice president of research for Computer Economics. The ratio has been volatile for four years, according to Computer Economics' recently released study, IT Management and Administration Staffing Ratios. As businesses adjusted to the recession, they first eliminated staff positions, raising the ratio to its peak of 12 percent in 2009. In 2010, businesses trimmed management roles as well, lowering the ratio to 11 percent, Longwell said. But the long-term trend is toward a higher manager-to-staff ratio, he told me.
“Over the longer term, though, I think we will see a continued evolution of the IT organizations toward having more chiefs and fewer Indians as functions move into the cloud or become more automated.”
For a complete copy of the article see:
In a Computerworld (Australia) article entitled “Is there best practice for a server to system administrator ratio?” from July 9, 2010, the following was reported:
“We have observed that it can be, for example with a physical server, as low as 10 per admin, and for virtual servers as many as 500,” Gartner analyst, Errol Rasit, said. “But it really depends on the type of application. We have seen as an example from a particular customer – from some of our larger customers – that they had their admins managing 15 physical servers and when that moves to virtualisation it moves to something like 75 virtual servers.
To give you a different order of magnitude, in another example one admin was looking after 50 physical servers and then moved to 250 virtual servers. I will say that we have seen maybe 500 or 600 virtual servers being managed by a single admin.”
IDC meanwhile notes that in Australia the ratio for an SMB would vary greatly from that of a hoster, and again from a cloud provider like Amazon or Microsoft. The analyst house’s statistics suggest anywhere from 10,000:1 at a dominant vendor like Google down to the SMB average of 30:1 for physical boxes and 80:1 for virtual machines.
One enterprise IT manager told us the ratio for physical servers was roughly 50:1, another working for a government organisation said 15-20:1, and an IT director at a research and development outfit noted that in a mid-size organisation a system administrator could maintain 10-14 servers per week, or, if their role was merely maintenance (i.e. no projects, no debugging, etc), they could look after 25-35 servers per week. The IT director added that a bigger organisation with larger economies of scale could potentially increase the ratio to 10-14 servers to each admin per day with staff dedicated to just maintenance.
One of the key factors in increasing the ratio, however, is how much automation can be rolled into the maintenance / management of the server farm.
“A lot of what changes the ratio in the physical world is the types of tools being used to automate a lot of the processes; so run book automation and these sorts of things,” Gartner’s Rasit said. “That tends to be the main differentiator. The problem with virtualisation and virtualisation tools is there are a lot of them. It is very, very easy for a lot of customers to try and automate everything, and that doesn’t necessarily always bear fruit for the organisation because they are spending too much time doing that.”
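The staffing impact of these ratios is simple arithmetic. A quick sketch (the function and the 300-server fleet are illustrative; the 15:1 and 75:1 ratios are Gartner's figures quoted above):

```python
import math

def admins_needed(server_count: int, servers_per_admin: int) -> int:
    """Round up: a fractional admin still means a whole headcount."""
    return math.ceil(server_count / servers_per_admin)

# Gartner's example: 15 physical servers per admin vs 75 virtual
fleet = 300
print(admins_needed(fleet, 15))  # physical: 20 admins
print(admins_needed(fleet, 75))  # virtual:   4 admins
```

The same 300-server fleet needs a fifth of the administrators once virtualised at the quoted ratios, which is why the automation tooling Rasit describes is the main differentiator.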
A complete copy of the article can be found:
Powered by PathView Cloud, the Cloud Provider Scorecard rates the performance of leading cloud providers to and from numerous locations throughout North America. The scores (100 being the best) are produced by a proprietary algorithm that combines network performance characteristics, such as capacity, jitter, latency, and packet loss, between the provider and these locations. The cloud provider offering the best performance to each city is indicated by the colored circles on the map. Cloud providers are monitored continuously and the scorecard is updated daily. Cloud providers include AWS, GoGrid, Hosting.com, Rackspace and Salesforce.com.
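The actual PathView algorithm is proprietary, but the general shape of such a composite score is worth sketching. Everything below, including normalization thresholds and weights, is purely hypothetical; it only illustrates how capacity, latency, jitter, and packet loss could be folded into a single 0-100 number:

```python
def network_score(capacity_mbps: float, latency_ms: float,
                  jitter_ms: float, loss_pct: float) -> float:
    """Hypothetical 0-100 composite score; thresholds and weights
    are illustrative, not PathView's actual algorithm."""
    # Normalize each metric to 0..1, where 1 is best
    capacity = min(capacity_mbps / 100.0, 1.0)    # saturates at 100 Mbps
    latency = max(0.0, 1.0 - latency_ms / 200.0)  # 200 ms or worse -> 0
    jitter = max(0.0, 1.0 - jitter_ms / 50.0)     # 50 ms or worse -> 0
    loss = max(0.0, 1.0 - loss_pct / 5.0)         # 5% loss or worse -> 0
    # Weighted blend; weights sum to 1
    score = 100 * (0.3 * capacity + 0.3 * latency
                   + 0.2 * jitter + 0.2 * loss)
    return round(score, 1)

print(network_score(100, 20, 2, 0.1))  # a well-performing provider
print(network_score(10, 150, 40, 3))   # a poorly-performing one
```

A real scoring model would also have to decide how to aggregate over time and across measurement locations, which is where most of the proprietary value presumably lies.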
Posted in cloud computing, content management system, response time, servers, throughput, Xen; tagged Amazon, cloud, cms, drupal, ec2, response time, servers, throughput, virtualization; on January 11, 2009
According to a benchmark test run by John Quinn & Cailin Nelson:
Drupal systems perform very well on Amazon EC2, even with a simple single-machine deployment. The larger hardware types perform significantly better, producing up to 12,500 pages per minute; this could be increased significantly by clustering, as outlined here. The APC op-code cache increases performance by a factor of roughly 4x. Average response times were good in all the tests; the slowest tests yielded average times of 1.5s. Again, response times were significantly better on the better hardware and were reduced further by the use of APC.
Amazon uses Xen-based virtualization technology to implement EC2. The cloud makes provisioning a machine as easy as executing a simple script command. When you are through with the machine, you simply terminate it and pay only for the hours you’ve used. EC2 provides three types of virtual hardware that you can instantiate.
Source: John & Cailin Blog, “lamp performance on the elastic compute cloud: benchmarking drupal on amazon ec2”, January 28, 2008.
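The pay-only-for-hours-used billing model the benchmark describes is simple metered arithmetic. A sketch, using hypothetical hourly rates loosely modeled on EC2's early on-demand pricing (actual prices vary by region and have changed many times since):

```python
# Hypothetical hourly rates in USD, loosely modeled on EC2's
# early on-demand pricing; not current or authoritative.
HOURLY_RATE = {
    "m1.small": 0.10,
    "m1.large": 0.40,
    "m1.xlarge": 0.80,
}

def run_cost(instance_type: str, hours: int) -> float:
    """Pay only for the hours the instance was running,
    then terminate and stop paying."""
    return round(HOURLY_RATE[instance_type] * hours, 2)

print(run_cost("m1.large", 6))    # a 6-hour benchmark run: 2.40
print(run_cost("m1.small", 24))   # a full day on the smallest type: 2.40
```

At rates like these, the entire benchmark campaign described above could plausibly be run for a few dollars, which is the economic appeal of terminate-when-done provisioning.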