
Archive for November, 2011

In 2008, James W. Flosdorf, Jr. published the study “A Program Evaluation of the ITIL-Based Change Management Program at General Motors Corporation.”

The focus of this study was to determine which commonly implemented ITIL best practices in the Change, Release, and Configuration Management disciplines were, by statistical measure, the best predictors of IT performance excellence. The ITPI researchers condensed their findings into seven sets of related practices comprising 30 individual practices (ITPI, 2007). Of these 30 best practices, five matched the KPIs used in this evaluation of GM’s Change Management program: change success rate, emergency change rate, unauthorized change rate, release impact rate, and release rollback rate. The ITPI study defined these KPIs as follows (a short calculation sketch follows the list):

  • Change Success Rate – percentage of changes that met functional objectives and were completed within the planned time
  • Emergency Change Rate – percentage of changes that are tracked but do not receive the standard review before they are implemented
  • Unauthorized Change Rate – percentage of changes made without being tracked by the standard change/release process
  • Release Impact Rate – percentage of production releases that cause a service outage or incident
  • Release Rollback Rate – percentage of production changes in the last 12 months that were rolled back
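
All five KPIs reduce to simple ratio arithmetic over change and release records. As a rough illustration only (the record layout and field names below are hypothetical assumptions, not drawn from the GM study or the ITPI survey), the calculations might look like this in Python:

    # Sketch of the five KPI calculations; the field names (succeeded,
    # on_time, emergency, authorized, caused_incident, rolled_back) are
    # illustrative, not taken from the GM or ITPI data.

    def pct(part, whole):
        # Express part/whole as a percentage, guarding against empty data.
        return round(100.0 * part / whole, 2) if whole else 0.0

    def change_kpis(changes, releases):
        return {
            "change_success_rate": pct(
                sum(c["succeeded"] and c["on_time"] for c in changes), len(changes)),
            "emergency_change_rate": pct(
                sum(c["emergency"] for c in changes), len(changes)),
            "unauthorized_change_rate": pct(
                sum(not c["authorized"] for c in changes), len(changes)),
            "release_impact_rate": pct(
                sum(r["caused_incident"] for r in releases), len(releases)),
            "release_rollback_rate": pct(
                sum(r["rolled_back"] for r in releases), len(releases)),
        }

Note that a change counts toward the success rate only if it both met its functional objectives and finished within the planned window, mirroring the definition above.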

Part of the study compares GM’s performance to the ITPI study’s ranking of top-, medium-, and low-performing IT organizations for these five best practices. Top performers are defined by ITPI as the IT organizations performing in the top 20th percentile of all survey respondents (ITPI, 2007). GM performed better than the top-performer average in every best practice area except Emergency Change Rate (urgent changes), where GM (at 10.08%) had more urgent changes on average than the top performers (at 7.10%) but fewer than the medium performers (at 12.70%). Put another way, GM exceeded the top performers’ mean emergency change rate by 41.97%, since (10.08 - 7.10) / 7.10 ≈ 0.4197.
Table 11: Comparison of GM ChM Program Performance to ITPI Study KPIs
------------------------------------------------------------------------
                                          ITPI Study Performance Ranking
IT Best Practice KPI        General Motors    Top       Medium      Low
------------------------------------------------------------------------
Change Success Rate             98.03%       96.40%     92.50%    81.30%
Emergency Change Rate           10.08%        7.10%     12.70%    22.90%
Unauthorized Change Rate         0.05%        0.70%      3.20%    11.40%
Release Impact Rate              0.21%        2.90%      5.60%    11.10%
Release Rollback Rate            1.05%        3.30%      3.80%     8.50%
------------------------------------------------------------------------
ITPI = IT Process Institute; KPI = Key Performance Indicator



CloudHarmony™ provides objective performance analysis to compare cloud providers. Their intent is to be the go-to source for independent, unbiased, and objective performance metrics for cloud services. CloudHarmony is not affiliated with, owned, or funded by any cloud provider. The benchmarks provided by CloudHarmony fall into three categories: Performance Benchmarking, Network Benchmarking, and Uptime Monitoring.

CloudHarmony states that there are seven questions one might ask when considering benchmark-based claims. Answering these questions helps provide a clearer understanding of the validity and applicability of the claims.

  1. What is the claim? Typically the bold-face, attention-grabbing headline, such as “Service Y is 10X faster than Service Z.”
  2. What is the claimed measurement? Usually implied by the headline. For example, the claim “Service Y is 10X faster than Service Z” implies a measurement of system performance.
  3. What is the actual measurement? To answer this question, look at the methodology and benchmark(s) used. This may require some digging, but the information can usually be found somewhere in the article body. Once found, do some research to determine what was actually measured. For example, if Geekbench was used, you would discover that the actual measurement is processor and memory performance, but not disk or network IO.
  4. Is it an apples-to-apples comparison? The validity of a benchmark-based claim ultimately depends on the fairness of the testing methodology. Claims involving comparisons should compare similar things. For example, Ford could compare a Mustang Shelby GT500 (top speed 190 MPH) to a Chevy Aveo (top speed 100 MPH) and claim its cars are nearly twice as fast, but the Aveo is not a comparable vehicle, so the claim would be invalid. A fairer, apples-to-apples comparison would be a Mustang GT500 versus a Chevy Camaro ZL1 (top speed 186 MPH).
  5. Is the playing field level? Another important question is whether any extraneous factors give one test subject an unfair advantage over another. For example, continuing the top-speed analogy, Ford could compare a Mustang with 92-octane fuel on a downhill course to a Camaro with 85-octane fuel on an uphill course. Because extraneous factors (the fuel and the grade of the course) give the Mustang an unfair advantage, the claim would be invalid. To be fair, the top speeds of both vehicles should be measured on the same course, with the same fuel, fuel quantity, driver, and weather conditions.
  6. Was the data reported accurately? Benchmarking often produces large datasets, and summarizing them concisely and accurately can be challenging. Things to watch out for include weak statistical analysis (e.g., reporting only the average), math errors, and sloppy calculations. When the data is large and highly variable, it is generally best practice to report the median rather than the mean (average) to mitigate the effect of outliers; standard deviation is also a useful metric for gauging data consistency (see the sketch after this list).
  7. Does it matter to you? The final question to ask is: assuming the results are valid, do they actually mean anything to you? For example, purchasing a vehicle based on a top-speed comparison is not advisable if fuel economy is what really matters to you.
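
To make question 6 concrete, here is a short sketch (an illustration of the statistical point, not CloudHarmony's methodology) showing how a single outlier drags the mean of a benchmark run while the median barely moves, and how standard deviation flags the inconsistency. The timing values are invented for the example:

    import statistics

    # Hypothetical response times in milliseconds from two benchmark runs;
    # run_b contains a single 950 ms outlier.
    run_a = [102, 98, 105, 99, 101, 103, 97, 100]
    run_b = [102, 98, 105, 99, 101, 103, 97, 950]

    for name, run in (("run_a", run_a), ("run_b", run_b)):
        print(f"{name}: mean={statistics.mean(run):7.2f}  "
              f"median={statistics.median(run):6.2f}  "
              f"stdev={statistics.stdev(run):6.1f}")

    # run_a: mean ~100.63, median 100.50, stdev ~2.7
    # run_b: mean ~206.88, median 101.50, stdev ~300.2

Reporting only the mean would make the second run look twice as slow, while the median shows the typical result is nearly unchanged; the large standard deviation is the tip-off to inspect the raw data.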

