
Archive for the ‘software’ Category

Kathy Iberle (Hewlett-Packard) and Sue Bartlett (IIS/STEP Technology) have developed a model to determine the ratio of software testers to software developers.   The following comes from the abstract of their paper “Estimating Tester to Developer Ratios (or Not)”.

Test managers often need to make an initial estimate of the number of people that will be required to test a particular product, before the information or the time to do a detailed task breakdown is available. One piece of data that is almost always available is the number of developers that are or will be working on the project in question. Common sense suggests that there is a relationship between the number of testers and the number of developers.  This article presents a model that can be used in describing that relationship. It is a heuristic method for predicting a ratio of testers to developers on future projects. The method uses the model to predict differences from a baseline project. A reader with some projects behind her will be able to come up with a rule-of-thumb model to suit her most common situations, to understand when the model might not give accurate answers and what additional factors might need to be taken into consideration.

In the paper the authors present two case studies: (1) “MergoApp”, an e-commerce website where the tester-developer ratio was 1:4, and (2) “DataApp”, a database application to replace an Excel application, where the actual tester-developer ratio was 4:8. A copy of their model can be found at Kathy Iberle’s web site (http://www.kiberle.com/articles.htm). In addition, slides for the presentation can be found here: Estimate Slides.
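
The authors' actual model is available as a download on that site, but the underlying idea from the abstract is simple: start from the tester:developer ratio of a comparable baseline project, then adjust for the ways the new project differs. Below is a minimal sketch of that idea in Python; the adjustment factors and project attributes are hypothetical illustrations, not values taken from the paper.

```python
# Minimal sketch of a baseline-adjustment estimate for the tester:developer ratio.
# The adjustment factors below are hypothetical illustrations, not the factors
# from Iberle and Bartlett's actual model.

def estimate_tester_ratio(baseline_ratio, adjustments):
    """Scale a baseline tester:developer ratio by multiplicative adjustments.

    baseline_ratio -- testers per developer on a known past project (e.g. 0.25 for 1:4)
    adjustments    -- factors > 1.0 where the new project needs relatively more
                      testing than the baseline, < 1.0 where it needs less
    """
    ratio = baseline_ratio
    for factor in adjustments.values():
        ratio *= factor
    return ratio

# Example: the baseline project ran at 1 tester per 4 developers (0.25).
# The new project is assumed to have more complex test environments but
# better developer unit testing (purely illustrative numbers).
adjustments = {
    "test_environment_complexity": 1.5,
    "developer_unit_testing": 0.8,
}

ratio = estimate_tester_ratio(0.25, adjustments)
print(f"Estimated testers per developer: {ratio:.2f}")  # 0.30, roughly 1:3.3
```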

Read Full Post »


CloudHarmony™ provides objective performance analysis to compare cloud providers. Their intent is to be the go-to source for independent, unbiased, and objective performance metrics for cloud services. CloudHarmony is not affiliated with, owned, or funded by any cloud provider. The benchmarks provided by CloudHarmony fall into three categories: Performance Benchmarking, Network Benchmarking, and Uptime Monitoring.

CloudHarmony states that there are 7 questions one might ask when considering benchmark-based claims. Answering these questions will help to provide a clearer understanding of the validity and applicability of the claims.

  1. What is the claim? Typically the boldface, attention-grabbing headline, like “Service Y is 10X faster than Service Z”
  2. What is the claimed measurement? Usually implied by the headline. For example, the claim “Service Y is 10X faster than Service Z” implies a measurement of system performance
  3. What is the actual measurement? To answer this question, look at the methodology and benchmark(s) used. This may require some digging, but can usually be found somewhere in the article body. Once found, do some research to determine what was actually measured. For example, if Geekbench was used, you would discover the actual measurement is processor and memory performance, but not disk or network IO
  4. Is it an apples-to-apples comparison? The validity of a benchmark-based claim ultimately depends on the fairness of the testing methodology. Claims involving comparisons should compare similar things. For example, Ford could compare a Mustang Shelby GT500 (top speed 190 MPH) to a Chevy Aveo (top speed 100 MPH) and claim their cars are nearly twice as fast, but the Aveo is not a comparable vehicle and therefore the claim would be invalid. A fairer, apples-to-apples comparison would be between a Mustang Shelby GT500 and a Chevy Camaro ZL1 (top speed 186 MPH).
  5. Is the playing field level? Another important question to ask is whether or not there are any extraneous factors that provided an unfair advantage to one test subject over another. For example, using the top speed analogy, Ford could compare a Mustang with 92 octane fuel and a downhill course to a Camaro with 85 octane fuel and an uphill course. Because there are extraneous factors (fuel and angle of the course) which provided an unfair advantage to the Mustang, the claim would be invalid. To be fair, the top speeds of both vehicles should be measured on the same course, with the same fuel, fuel quantity, driver and weather conditions.
  6. Was the data reported accurately? Benchmarking often results in large datasets. Summarizing the data concisely and accurately can be challenging. Things to watch out for include a lack of sound statistical analysis (e.g. reporting only the average), math errors, and sloppy calculations. For example, if a large, highly variable dataset is collected, it is generally best practice to report the median value in place of the mean (average) to mitigate the effect of outliers. Standard deviation is also a useful metric to include to identify data consistency (see the sketch after this list).
  7. Does it matter to you? The final question to ask is, assuming the results are valid, does it actually mean anything to you? For example, purchasing a vehicle based on a top speed comparison is not advisable if fuel economy is what really matters to you.
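
Point 6 is easy to act on in practice. Here is a minimal sketch of summarizing a benchmark run with the median and standard deviation rather than the mean alone; the response-time samples are made up for illustration.

```python
# Minimal sketch: summarize a benchmark run with median and standard
# deviation instead of the mean alone. The sample values are hypothetical.
import statistics

# Response times in milliseconds; note the single outlier at 950 ms.
samples = [102, 98, 105, 99, 101, 950, 97, 103, 100, 104]

mean = statistics.mean(samples)
median = statistics.median(samples)
stdev = statistics.stdev(samples)

print(f"mean:   {mean:.1f} ms")    # skewed upward by the outlier
print(f"median: {median:.1f} ms")  # closer to typical behavior
print(f"stdev:  {stdev:.1f} ms")   # a large value flags inconsistent data
```

With one 950 ms outlier in the set, the mean lands near 186 ms while the median stays near 102 ms, and the large standard deviation immediately flags the data as inconsistent.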

Read Full Post »

According to benchmark tests performed by Stephen Shankland of CNET News, Google’s Chrome outperforms Firefox, Microsoft Internet Explorer, and Safari on five subtests of JavaScript performance. The five JavaScript benchmarks used in the study were:

• Richards: OS kernel simulation benchmark, originally written in BCPL by Martin Richards (539 lines).

• DeltaBlue: One-way constraint solver, originally written in Smalltalk by John Maloney and Mario Wolczko (880 lines).

• Crypto: Encryption and decryption benchmark based on code by Tom Wu (1,689 lines).

• RayTrace: Ray tracer benchmark based on code by Adam Burmister (3,418 lines).

• EarleyBoyer: Classic Scheme benchmarks, translated to JavaScript by Florian Loitsch’s Scheme2Js compiler (4,682 lines).

The complete test results can be found at: http://news.cnet.com/8301-1001_3-10030888-92.html?tag=mncol
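
All of these suites share the same basic mechanics: run a small workload kernel many times, time it, and turn the elapsed time into a score. Below is a minimal sketch of that mechanic in Python; the workload is a trivial stand-in, not one of the kernels listed above.

```python
# Minimal sketch of how a micro-benchmark kernel is typically scored:
# run the workload repeatedly, take the best (lowest) time, and report
# an operations-per-second figure. The workload below is a hypothetical
# stand-in, not one of the JavaScript kernels listed above.
import timeit

def workload():
    # Trivial stand-in kernel: sum the first 10,000 integers.
    return sum(range(10_000))

RUNS = 5            # independent repetitions
ITERATIONS = 1_000  # workload calls per repetition

times = timeit.repeat(workload, repeat=RUNS, number=ITERATIONS)
best = min(times)   # the best run minimizes scheduling noise
ops_per_sec = ITERATIONS / best

print(f"best run: {best:.3f} s, score: {ops_per_sec:,.0f} ops/sec")
```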

Read Full Post »

Cem Kaner, a professor at the Florida Institute of Technology, has done research on the ratio of software testers to software developers. His presentation entitled “Managing the Proportion of Testers to Other Developers” is partially based on a meeting of the Software Test Managers Roundtable (STMR 3) in Fall 2001.

The study found that:
– There were very small ratios (1-to-7 and less) and very large ratios (5-to-1).
– Some of each worked and some of each failed.
– Many remembered successful projects with ratios lower than 1-to-1 more favorably than successful projects with larger ratios.

Read the paper to find out why there is such a range of successful ratios, and why test managers should be happy with relatively low ratios.

See: http://www.kaner.com/pdfs/pnsqc_ratios.pdf and http://www.kaner.com/

Read Full Post »

This post on www.ilovebonnie.net documents some impressive system performance improvements from the addition of Squid Cache (a caching proxy) and APC Cache (an opcode cache for PHP).
* Apache is able to deliver roughly 700% more requests per second with Squid when serving 1KB and 100KB images.
* Server load is reduced using Squid because the server does not have to create a bunch of Apache processes to handle the requests.
* APC Cache took a system that could barely handle 10-20 requests per second to handling 50-60 requests per second, a 400% increase.
* APC allowed the load times to remain under 5 seconds even with 200 concurrent threads slamming on the server.
* These two caches are easy to install and set up, and they allow you to get a lot more performance out of your existing servers.

The post has an in-depth discussion and a number of supporting charts. The primary point is how simple it can be to improve performance and scalability by adding caching.
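
For readers unfamiliar with the pattern, the principle behind both Squid (caching whole HTTP responses) and APC (caching compiled PHP opcodes) is the same: do the expensive work once and serve repeats from memory. Below is a minimal Python sketch of that pattern; the render function is a hypothetical stand-in for an expensive PHP page build.

```python
# Minimal sketch of the caching pattern behind both Squid and APC:
# do the expensive work once, then serve repeat requests from memory.
# render_page() is a hypothetical stand-in for an expensive page build.
import time

_cache: dict[str, str] = {}

def render_page(path: str) -> str:
    """Simulate an expensive page render (database queries, templating, ...)."""
    time.sleep(0.05)  # pretend this takes 50 ms of real work
    return f"<html><body>content for {path}</body></html>"

def handle_request(path: str) -> str:
    """Serve from cache when possible; render and store on a miss."""
    if path not in _cache:
        _cache[path] = render_page(path)  # miss: pay the full cost once
    return _cache[path]                   # hit: near-zero cost

if __name__ == "__main__":
    start = time.perf_counter()
    for _ in range(100):
        handle_request("/index.php")      # 1 miss + 99 hits
    print(f"100 requests in {time.perf_counter() - start:.3f} s")
```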

Source: http://www.ilovebonnie.net/2009/07/14/benchmark-results-show-400-to-700-percent-increase-in-server-capabilities-with-apc-and-squid-cache/

Read Full Post »

According to a benchmark test run by John Quinn & Cailin Nelson:

Drupal systems perform very well on Amazon EC2, even with a simple single-machine deployment. The larger hardware types perform significantly better, producing up to 12,500 pages per minute. This could be increased significantly by clustering as outlined here. The APC op-code cache increases performance by a factor of roughly 4x. The average response times were good in all the tests. The slowest tests yielded average times of 1.5s. Again, response times were significantly better on the better hardware and reduced further by the use of APC.

Amazon uses Xen-based virtualization technology to implement EC2. The cloud makes provisioning a machine as easy as executing a simple script command. When you are through with the machine, you simply terminate it and pay only for the hours that you’ve used. EC2 provides three types of virtual hardware that you can instantiate.
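
To give a sense of how little scripting that provisioning step takes, here is a minimal sketch using today's boto3 SDK rather than the ec2 command-line tools the original 2008 post would have used; the AMI ID and instance type are placeholders, not values from the benchmark.

```python
# Minimal sketch of provisioning and terminating an EC2 instance with boto3.
# The AMI ID and instance type are placeholders, not values from the benchmark.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single instance; you pay only while it is running.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m5.large",          # placeholder hardware type
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# ... run the benchmark against the instance ...

# When you are through with the machine, simply terminate it.
ec2.terminate_instances(InstanceIds=[instance_id])
```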

Source: John & Cailin Blog, “lamp performance on the elastic compute cloud: benchmarking drupal on amazon ec2”, January 28, 2008.

Read Full Post »
