
Archive for the ‘response time’ Category


CloudHarmony™ provides objective performance analysis to compare cloud providers. Their intent is to be the go-to source for independent, unbiased, and objective performance metrics for cloud services. CloudHarmony is not affiliated with, owned, or funded by any cloud provider. The benchmarks provided by CloudHarmony fall into three categories: performance benchmarking, network benchmarking, and uptime monitoring.

CloudHarmony states that there are seven questions one might ask when considering benchmark-based claims. Answering these questions will help provide a clearer understanding of the validity and applicability of the claims.

  1. What is the claim? Typically the bold-faced, attention-grabbing headline, such as “Service Y is 10X faster than Service Z.”
  2. What is the claimed measurement? Usually implied by the headline. For example, the claim “Service Y is 10X faster than Service Z” implies a measurement of system performance.
  3. What is the actual measurement? To answer this question, look at the methodology and benchmark(s) used. This may require some digging, but the information can usually be found somewhere in the article body. Once found, do some research to determine what was actually measured. For example, if Geekbench was used, you would discover that the actual measurement is processor and memory performance, not disk or network IO.
  4. Is it an apples-to-apples comparison? The validity of a benchmark-based claim ultimately depends on the fairness of the testing methodology. Claims involving comparisons should compare similar things. For example, Ford could compare a Mustang Shelby GT500 (top speed 190 MPH) to a Chevy Aveo (top speed 100 MPH) and claim their cars are nearly twice as fast, but the Aveo is not a comparable vehicle, so the claim would be invalid. A fairer, apples-to-apples comparison would be a Mustang GT500 and a Chevy Camaro ZL1 (top speed 186 MPH).
  5. Is the playing field level? Another important question to ask is whether any extraneous factors provided an unfair advantage to one test subject over another. For example, continuing the top-speed analogy, Ford could compare a Mustang with 92-octane fuel on a downhill course to a Camaro with 85-octane fuel on an uphill course. Because extraneous factors (fuel and the slope of the course) provided an unfair advantage to the Mustang, the claim would be invalid. To be fair, the top speeds of both vehicles should be measured on the same course, with the same fuel, fuel quantity, driver, and weather conditions.
  6. Was the data reported accurately? Benchmarking often produces large datasets, and summarizing the data concisely and accurately can be challenging. Things to watch out for include weak statistical analysis (e.g., reporting only the average), math errors, and sloppy calculations. For example, when large, highly variable data is collected, it is generally best practice to report the median rather than the mean (average) to mitigate the effect of outliers, and to include the standard deviation as a measure of data consistency (see the sketch after this list).
  7. Does it matter to you? The final question to ask is: assuming the results are valid, do they actually mean anything to you? For example, purchasing a vehicle based on a top-speed comparison is not advisable if fuel economy is what really matters to you.
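
To make question 6 concrete, here is a small Python sketch (using made-up response-time numbers, not CloudHarmony data) showing how a single outlier drags the mean upward while the median stays representative, and how the standard deviation flags the inconsistency:

```python
import statistics

# Hypothetical response times in ms; the 950 ms outlier is deliberate.
samples = [102, 98, 105, 99, 101, 103, 97, 100, 104, 950]

mean = statistics.mean(samples)      # dragged upward by the outlier
median = statistics.median(samples)  # robust to the outlier
stdev = statistics.stdev(samples)    # a large value signals inconsistent data

print(f"mean:   {mean:.1f} ms")    # 185.9 ms -- misleading
print(f"median: {median:.1f} ms")  # 101.5 ms -- representative
print(f"stdev:  {stdev:.1f} ms")   # ~268 ms -- flags the variability
```

Reporting only the mean here would nearly double the apparent response time; the median and standard deviation together tell the real story.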

Read Full Post »

According to benchmark tests performed by Stephen Shankland of CNET News, Google’s Chrome outperforms Firefox, Microsoft Internet Explorer, and Safari on the five subtests of JavaScript performance. The five JavaScript benchmarks used in the study were:

• Richards: OS kernel simulation benchmark, originally written in BCPL by Martin Richards (539 lines).

• DeltaBlue: One-way constraint solver, originally written in Smalltalk by John Maloney and Mario Wolczko (880 lines).

• Crypto: Encryption and decryption benchmark based on code by Tom Wu (1,689 lines).

• RayTrace: Ray tracer benchmark based on code by Adam Burmister (3,418 lines).

• EarleyBoyer: Classic Scheme benchmarks, translated to JavaScript by Florian Loitsch’s Scheme2Js compiler (4,682 lines).

The complete test can be found at http://news.cnet.com/8301-1001_3-10030888-92.html?tag=mncol
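
The article does not reproduce the harness itself, but suites like this one typically time repeated runs of each workload and combine the per-test scores, often with a geometric mean. As a rough illustration only (a Python sketch with stand-in workloads, not the actual JavaScript suite), the scoring logic looks something like this:

```python
import math
import time

def run_benchmark(workload, min_duration=1.0):
    """Repeat a workload until min_duration seconds elapse; return runs/sec."""
    runs = 0
    start = time.perf_counter()
    while (elapsed := time.perf_counter() - start) < min_duration:
        workload()
        runs += 1
    return runs / elapsed

# Stand-in workloads; the real suite runs Richards, DeltaBlue, Crypto, etc.
workloads = {
    "richards":  lambda: sum(i * i for i in range(10_000)),
    "deltablue": lambda: sorted(range(10_000), key=lambda i: -i),
}

scores = {name: run_benchmark(w) for name, w in workloads.items()}
# A geometric mean keeps one fast test from dominating the overall score.
overall = math.exp(sum(math.log(s) for s in scores.values()) / len(scores))
print(scores, f"overall: {overall:.1f}")
```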

Read Full Post »

Powered by PathView Cloud, the Cloud Provider Scorecard rates the performance of leading cloud providers to and from numerous locations throughout North America. The scores (100 being the best) are produced by a proprietary algorithm from network performance characteristics, such as capacity, jitter, latency, and packet loss, measured between the provider and these locations. The cloud provider offering the best performance to each city is indicated by the colored circles on the map. Cloud providers are monitored continuously, and the scorecard is updated daily. Monitored providers include AWS, GoGrid, Hosting.com, Rackspace, and Salesforce.com.
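
The scoring algorithm itself is proprietary, so the weights and thresholds below are purely hypothetical; this Python sketch only illustrates how latency, jitter, packet loss, and capacity might be folded into a single 0-100 score:

```python
def network_score(latency_ms, jitter_ms, loss_pct, capacity_mbps):
    """Hypothetical 0-100 composite score; the real PathView algorithm
    is proprietary and almost certainly weights these differently."""
    # Normalize each metric to 0..1, where 1 is best.
    latency_term  = max(0.0, 1 - latency_ms / 200)   # 0 ms best, 200+ ms worst
    jitter_term   = max(0.0, 1 - jitter_ms / 50)     # 0 ms best, 50+ ms worst
    loss_term     = max(0.0, 1 - loss_pct / 5)       # 0% best, 5%+ worst
    capacity_term = min(1.0, capacity_mbps / 100)    # 100+ Mbps scores full marks

    weights = (0.35, 0.20, 0.30, 0.15)               # invented weighting
    terms = (latency_term, jitter_term, loss_term, capacity_term)
    return round(100 * sum(w * t for w, t in zip(weights, terms)))

# A provider with 40 ms latency, 5 ms jitter, 0.1% loss, 80 Mbps scores 87.
print(network_score(latency_ms=40, jitter_ms=5, loss_pct=0.1, capacity_mbps=80))
```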

Source: http://www.apparentnetworks.com/CPC/Scorecard.aspx

Read Full Post »

This post on www.ilovebonnie.net documents some impressive system performance improvements from the addition of Squid Cache (a caching proxy) and APC Cache (an opcode cache for PHP):
* Apache is able to deliver roughly 700% more requests per second with Squid when serving 1KB and 100KB images.
* Server load is reduced with Squid because the server does not have to spawn as many Apache processes to handle the requests.
* APC Cache took a system that could barely handle 10-20 requests per second to handling 50-60 requests per second, an increase of up to 400%.
* APC kept load times under 5 seconds even with 200 concurrent threads hammering the server.
* Both caches are easy to install and set up, and they let you get considerably more performance out of the same hardware.

The post has an in-depth discussion and a number of supporting charts. The primary point is how simple it can be to improve performance and scalability by adding caching.
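
As a toy illustration of why the caches help (not the post's actual setup), here is a Python sketch where a cache in front of an expensive page-render step sharply raises the sustainable request rate, much as Squid does for whole HTTP responses and APC does for compiled PHP bytecode:

```python
import functools
import time

def render_page(path):
    """Stand-in for an expensive, uncached request (PHP compile + DB work)."""
    time.sleep(0.05)  # pretend each render costs 50 ms
    return f"<html>{path}</html>"

@functools.lru_cache(maxsize=1024)
def render_page_cached(path):
    # Same work, but repeated requests for the same path hit the cache.
    return render_page(path)

def requests_per_second(handler, n=100):
    start = time.perf_counter()
    for _ in range(n):
        handler("/index.php")  # one hot path, as in the benchmark
    return n / (time.perf_counter() - start)

print(f"uncached: {requests_per_second(render_page):.0f} req/s")        # ~20 req/s
print(f"cached:   {requests_per_second(render_page_cached):.0f} req/s")  # far higher
```

The shape of the result mirrors the post: once repeated work is served from a cache, throughput is limited by the cache lookup rather than the expensive backend.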

Source: http://www.ilovebonnie.net/2009/07/14/benchmark-results-show-400-to-700-percent-increase-in-server-capabilities-with-apc-and-squid-cache/

Read Full Post »

According to a benchmark test run by John Quinn and Cailin Nelson:

Drupal systems perform very well on Amazon EC2, even with a simple single-machine deployment. The larger hardware types perform significantly better, producing up to 12,500 pages per minute; this could be increased significantly by clustering, as outlined here. The APC opcode cache increases performance by a factor of roughly 4x. The average response times were good in all the tests; the slowest tests yielded average times of 1.5s. Again, response times were significantly better on the better hardware and were reduced further by the use of APC.

Amazon uses Xen-based virtualization technology to implement EC2. The cloud makes provisioning a machine as easy as executing a simple script command; when you are through with the machine, you simply terminate it and pay only for the hours you have used. EC2 provides three types of virtual hardware that you can instantiate.
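
The original 2008 post would have used Amazon's ec2-* command-line tools; as a modern equivalent, the "simple script command" looks roughly like this with the boto3 Python SDK (the AMI ID below is a placeholder, not a real image):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one instance; ami-xxxxxxxx is a placeholder image ID.
resp = ec2.run_instances(
    ImageId="ami-xxxxxxxx",
    InstanceType="m1.small",   # one of the original EC2 hardware types
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]
print("launched", instance_id)

# ...run the benchmark, then terminate and stop paying for the hours.
ec2.terminate_instances(InstanceIds=[instance_id])
```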

Source: John & Cailin Blog, “lamp performance on the elastic compute cloud: benchmarking drupal on amazon ec2”, January 28, 2008.

Read Full Post »