Archive for the ‘workload’ Category


CloudHarmony

CloudHarmony™ provides objective performance analysis to compare cloud providers. Their intent is to be the go-to source for independent, unbiased, and objective performance metrics for cloud services. CloudHarmony is not affiliated with, owned, or funded by any cloud provider. The benchmarks provided by CloudHarmony fall into three categories: Performance Benchmarking, Network Benchmarking, and Uptime Monitoring.

CloudHarmony states that there are seven questions one might ask when considering benchmark-based claims. Answering these questions helps provide a clearer understanding of the validity and applicability of the claims.

  1. What is the claim? Typically the boldface, attention-grabbing headline, such as “Service Y is 10X faster than Service Z.”
  2. What is the claimed measurement? Usually implied by the headline. For example, the claim “Service Y is 10X faster than Service Z” implies a measurement of overall system performance.
  3. What is the actual measurement? To answer this question, look at the methodology and benchmark(s) used. This may require some digging, but the information can usually be found somewhere in the article body. Once found, do some research to determine what was actually measured. For example, if Geekbench was used, you would discover the actual measurement is processor and memory performance, but not disk or network IO.
  4. Is it an apples-to-apples comparison? The validity of a benchmark-based claim ultimately depends on the fairness of the testing methodology. Claims involving comparisons should compare similar things. For example, Ford could compare a Mustang Shelby GT500 (top speed 190 MPH) to a Chevy Aveo (top speed 100 MPH) and claim their cars are nearly twice as fast, but the Aveo is not a comparable vehicle, so the claim would be invalid. A fairer, apples-to-apples comparison would be a Mustang GT500 versus a Chevy Camaro ZL1 (top speed 186 MPH).
  5. Is the playing field level? Another important question to ask is whether or not there are any extraneous factors that provided an unfair advantage to one test subject over another. For example, using the top speed analogy, Ford could compare a Mustang with 92 octane fuel and a downhill course to a Camaro with 85 octane fuel and an uphill course. Because there are extraneous factors (fuel and angle of the course) which provided an unfair advantage to the Mustang, the claim would be invalid. To be fair, the top speeds of both vehicles should be measured on the same course, with the same fuel, fuel quantity, driver and weather conditions.
  6. Was the data reported accurately? Benchmarking often produces large datasets, and summarizing the data concisely and accurately can be challenging. Things to watch out for include weak statistical analysis (e.g., reporting only the average), math errors, and sloppy calculations. For example, when large, highly variable datasets are collected, it is generally best practice to report the median in place of the mean (average) to mitigate the effect of outliers. Standard deviation is also a useful metric for characterizing data consistency (see the short sketch after this list).
  7. Does it matter to you? The final question to ask is, assuming the results are valid, does it actually mean anything to you? For example, purchasing a vehicle based on a top speed comparison is not advisable if fuel economy is what really matters to you.
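
To make question 6 concrete, here is a minimal Python sketch, using invented response-time numbers purely for illustration, showing how a single outlier can distort the mean while the median stays representative, and how standard deviation exposes the inconsistency:

```python
import statistics

# Hypothetical response times in milliseconds for two services; the values
# are made up solely to illustrate the reporting pitfalls in question 6.
service_y = [102, 98, 105, 99, 101, 103, 2500]  # one outlier (e.g., a timeout)
service_z = [110, 112, 108, 111, 109, 113, 110]

for name, samples in [("Service Y", service_y), ("Service Z", service_z)]:
    print(f"{name}: mean={statistics.mean(samples):.1f} ms, "
          f"median={statistics.median(samples):.1f} ms, "
          f"stdev={statistics.stdev(samples):.1f} ms")
```

Reporting only the mean would make Service Y look roughly four times slower than Service Z, while the median shows the two are comparable for typical requests; the large standard deviation for Service Y is what flags the inconsistency.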


Oracle 24X7

Venkat S. Devraj, co-founder and CTO of database and application automation software provider Stratavia and author of Oracle 24×7 Tips & Techniques (McGraw-Hill), had the following to say about the number of DBAs necessary to administer an Oracle DB environment:

Every so often, I come across IT Managers bragging that they have a ratio of “50 DB instances to 1 DBA” or “80 DBs to 1 DBA”… Is that supposed to be good? And conversely, is a lower ratio such as “5 to 1” necessarily bad? Compared to what? In response, I get back vague assertions such as “well, the average in the database industry seems to be 20 to 1”.

Venkat recommends a benchmarking approach:

The reality is, a unidimensional *and* subjective ratio, based on so-called industry best practices, never reveals the entire picture. A better method (albeit also subjective) to evaluate and improve DBA effectiveness would be to establish the current productivity level (“PL”) as a baseline, initiate ways to enhance it, and carry out comparisons on an ongoing basis against this baseline. Cross-industry comparisons seldom make sense; however, the PL from other high-performing IT groups in similar companies/industries may serve as a decent benchmark.

Finally, Venkat recommends developing a 2×2 matrix in which an “Environmental Complexity Score” is charted against a “Delivery Maturity Score”. Your PL depends on where you land in the matrix. If you picture the X-Y chart as comprising four quadrants (left top, left bottom, right top, and right bottom), the left top is “Bad”, the left bottom is “Mediocre”, the right top is “Good”, and the right bottom is “Excellent”. A rough sketch of this quadrant lookup follows.
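
As a rough illustration only, and not Mr. Devraj's actual scoring model, the Python sketch below assumes both scores are normalized to a 0–100 scale, puts the Delivery Maturity Score on the X axis and the Environmental Complexity Score on the Y axis, and treats 50 as the midpoint between “low” and “high”; every one of those choices is an assumption made for the example.

```python
def pl_quadrant(delivery_maturity: float, environmental_complexity: float,
                midpoint: float = 50.0) -> str:
    """Map a score pair onto the 2x2 matrix described above.

    Assumptions (not from the original post): scores are on a 0-100 scale,
    Delivery Maturity is the X axis, Environmental Complexity is the Y axis,
    and `midpoint` divides "low" from "high" on both axes.
    """
    right = delivery_maturity >= midpoint        # right half of the chart
    top = environmental_complexity >= midpoint   # top half of the chart
    if not right and top:
        return "Bad"        # left top
    if not right and not top:
        return "Mediocre"   # left bottom
    if right and top:
        return "Good"       # right top
    return "Excellent"      # right bottom


# Example: a highly mature delivery practice running a relatively simple environment.
print(pl_quadrant(delivery_maturity=80, environmental_complexity=30))  # -> Excellent
```

Under these assumptions, improving your PL means moving rightward (raising delivery maturity) or, where possible, downward (reducing environmental complexity), and then re-scoring against the baseline on an ongoing basis.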

For a full description of Mr. Devraj’s approach, see: Selective Deliberations on Databases, Data Center Automation & Cloud Computing

