The following is some of the best benchmarking data I have seen for digital marketing campaigns. According to Dave Chaffey in his article “Display advertising clickthrough rates” (April 2015), we have these digital marketing benchmarks:

  • Overall display ad clickthrough rate (CTR): across all ad formats and placements, the ad CTR was 0.06% in April 2015
  • Rich media CTR: 0.27%
  • CTR trend: an average of roughly 0.1% has held for some time
  • Online ad CTR by ad format (skyscraper, pop-up, leaderboard, banner, etc.), shown in the chart below

[Figure: EMEA ad clickthrough rates by format]

  • Search vs. social vs. display CTRs on desktop, mobile, and tablet, shown in the chart below

[Figure: Search vs. social vs. display ad CTRs]
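
The CTR itself is simple arithmetic: clicks divided by impressions. A minimal sketch checking the overall display benchmark above (the click and impression counts are hypothetical):

```python
def ctr(clicks: int, impressions: int) -> float:
    """Clickthrough rate as a percentage: clicks / impressions * 100."""
    if impressions <= 0:
        raise ValueError("impressions must be positive")
    return clicks / impressions * 100

# Hypothetical campaign: 600 clicks on 1,000,000 impressions
print(f"{ctr(600, 1_000_000):.2f}%")  # 0.06% -- the overall display benchmark
```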

Kathy Iberle (Hewlett-Packard) and Sue Bartlett (IIS/STEP Technology) have developed a model for estimating the ratio of software testers to software developers. The following comes from the abstract of their paper “Estimating Tester to Developer Ratios (or Not)”.

Test managers often need to make an initial estimate of the number of people that will be required to test a particular product, before the information or the time to do a detailed task breakdown is available. One piece of data that is almost always available is the number of developers that are or will be working on the project in question. Common sense suggests that there is a relationship between the number of testers and the number of developers.  This article presents a model that can be used in describing that relationship. It is a heuristic method for predicting a ratio of testers to developers on future projects. The method uses the model to predict differences from a baseline project. A reader with some projects behind her will be able to come up with a rule-of-thumb model to suit her most common situations, to understand when the model might not give accurate answers and what additional factors might need to be taken into consideration.

In the paper the authors present two case studies: (1) “MergoApp”, an e-commerce website where the tester-to-developer ratio was 1:4, and (2) “DataApp”, a database application built to replace an Excel application, where the actual tester-to-developer ratio was 4:8 (1:2). A copy of their model can be found at Kathy Iberle's website (http://www.kiberle.com/articles.htm). In addition, slides for the presentation can be found here: Estimate Slides.
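
The adjustment factors are derived in the paper itself; purely as an illustration of the baseline-plus-adjustment idea (this is a rough sketch, not Iberle and Bartlett's actual model), such a heuristic might look like:

```python
from math import ceil

def estimate_testers(developers: int, baseline_ratio: float,
                     adjustment: float = 1.0) -> int:
    """Estimate tester headcount from developer headcount.

    baseline_ratio: testers per developer on a comparable past project
                    (e.g. 0.25 for the 1:4 MergoApp case study).
    adjustment:     hypothetical multiplier capturing differences from
                    the baseline project (risk, automation, and so on);
                    the paper derives its factors more carefully.
    """
    return ceil(developers * baseline_ratio * adjustment)

# 8 developers against a 1:4 baseline with no adjustment -> 2 testers
print(estimate_testers(8, 0.25))
```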

In 2008, James W. Flosdorf, Jr. published the study “A Program Evaluation of the ITIL-Based Change Management Program at General Motors Corporation”.

The focus of this study was to determine which commonly implemented ITIL best practices in the Change, Release, and Configuration Management disciplines were, by statistical measure, the best predictors of IT performance excellence. The ITPI researchers condensed their findings into seven sets of related practices comprising 30 individual practices (ITPI, 2007). Of these 30 individual best practices, five were found to match the KPIs used in this evaluation of GM's Change Management program: change success rate, emergency change rate, unauthorized change rate, release impact rate, and release rollback rate. The ITPI study defined these KPIs as follows (a computation sketch follows the list):

  • Change Success Rate – percentage of changes that met their functional objectives and were completed within the planned time
  • Emergency Change Rate – percentage of changes that are tracked but do not receive the standard review before they are implemented
  • Unauthorized Change Rate – percentage of changes that are unauthorized, i.e., made without being tracked by the standard change/release process
  • Release Impact Rate – percentage of production releases that cause a service outage or incident
  • Release Rollback Rate – percentage of production changes in the last 12 months that were rolled back
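
Each of these KPIs reduces to a simple percentage over a reporting period. A minimal sketch of the arithmetic (the counts are hypothetical, and the variable names are mine, not ITPI's):

```python
def rate(events: int, total: int) -> float:
    """Percentage of total, guarding against an empty reporting period."""
    return 100 * events / total if total else 0.0

# Hypothetical monthly counts
changes, successful, emergency, unauthorized = 400, 392, 40, 2
releases, impacting, rolled_back = 50, 1, 1

print(f"Change success rate:      {rate(successful, changes):.2f}%")    # 98.00%
print(f"Emergency change rate:    {rate(emergency, changes):.2f}%")     # 10.00%
print(f"Unauthorized change rate: {rate(unauthorized, changes):.2f}%")  # 0.50%
print(f"Release impact rate:      {rate(impacting, releases):.2f}%")    # 2.00%
print(f"Release rollback rate:    {rate(rolled_back, releases):.2f}%")  # 2.00%
```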

Part of the study compares GM's performance to the ITPI study's ranking of top-, medium-, and low-performing IT organizations for these five best practices. Top performers are defined by ITPI as the IT organizations performing in the top 20th percentile of all survey respondents (ITPI, 2007). It was found that GM performed better than the average of the top performers in all of the best practice areas except for Emergency Change Rate (Urgent Changes), where GM (at 10.08%) had more urgent changes on average than the top performers (at 7.10%) but fewer than the medium performers (at 12.70%). Put another way, GM's mean emergency change rate exceeded the top performers' by 41.97% ((10.08 - 7.10) / 7.10 ≈ 0.42).
Table 11: Comparison of GM ChM Program Performance to ITPI Study KPIs
———————————————————————————————————
                                            ITPI Study Performance Ranking
IT Best Practice KPI          General Motors       Top      Medium       Low
Change Success Rate                   98.03%    96.40%      92.50%    81.30%
Emergency Change Rate                 10.08%     7.10%      12.70%    22.90%
Unauthorized Change Rate               0.05%     0.70%       3.20%    11.40%
Release Impact Rate                    0.21%     2.90%       5.60%    11.10%
Release Rollback Rate                  1.05%     3.30%       3.80%     8.50%
———————————————————————————————————
ITPI = IT Process Institute; KPI = Key Performance Indicator
———————————————————————————————————


CloudHarmony™ provides objective performance analysis to compare cloud providers. Their intent is to be the go-to source for independent, unbiased, and objective performance metrics for cloud services; CloudHarmony is not affiliated with, owned, or funded by any cloud provider. The benchmarks provided by CloudHarmony fall into three categories: performance benchmarking, network benchmarking, and uptime monitoring.

CloudHarmony states that there are seven questions one might ask when considering benchmark-based claims. Answering these questions helps provide a clearer understanding of the validity and applicability of the claims.

  1. What is the claim? Typically the bold-face, attention-grabbing headline, like “Service Y is 10X faster than Service Z”
  2. What is the claimed measurement? Usually implied by the headline. For example, the claim “Service Y is 10X faster than Service Z” implies a measurement of system performance
  3. What is the actual measurement? To answer this question, look at the methodology and benchmark(s) used. This may require some digging, but can usually be found somewhere in the article body. Once found, do some research to determine what was actually measured. For example, if Geekbench was used, you would discover the actual measurement is processor and memory performance, but not disk or network IO
  4. Is it an apples-to-apples comparison? The validity of a benchmark-based claim ultimately depends on the fairness of the testing methodology. Claims involving comparisons should compare similar things. For example, Ford could compare a Mustang Shelby GT500 (top speed 190 MPH) to a Chevy Aveo (top speed 100 MPH) and claim their cars are nearly twice as fast, but the Aveo is not a comparable vehicle and therefore the claim would be invalid. A fairer, apples-to-apples comparison would be a Mustang GT500 and a Chevy Camaro ZL1 (top speed 186 MPH).
  5. Is the playing field level? Another important question to ask is whether or not there are any extraneous factors that provided an unfair advantage to one test subject over another. For example, using the top speed analogy, Ford could compare a Mustang with 92 octane fuel and a downhill course to a Camaro with 85 octane fuel and an uphill course. Because there are extraneous factors (fuel and angle of the course) which provided an unfair advantage to the Mustang, the claim would be invalid. To be fair, the top speeds of both vehicles should be measured on the same course, with the same fuel, fuel quantity, driver and weather conditions.
  6. Was the data reported accurately? Benchmarking often results in large datasets, and summarizing the data concisely and accurately can be challenging. Things to watch out for include lack of good statistical analysis (e.g., reporting only an average), math errors, and sloppy calculations. For example, if a large, highly variable dataset is collected, it is generally best practice to report the median value in place of the mean (average) to mitigate the effects of outliers. Standard deviation is also a useful metric for identifying data consistency (see the sketch after this list).
  7. Does it matter to you? The final question to ask is, assuming the results are valid, does it actually mean anything to you? For example, purchasing a vehicle based on a top speed comparison is not advisable if fuel economy is what really matters to you.
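
Point 6 is straightforward to act on in practice: Python's standard statistics module reports mean, median, and standard deviation directly. A minimal sketch with hypothetical latency samples (the numbers are illustrative, not from CloudHarmony):

```python
import statistics

# Hypothetical benchmark samples (request latencies in ms) with one outlier
samples = [102, 98, 101, 99, 103, 100, 97, 480]

print(f"mean:   {statistics.mean(samples):.1f} ms")    # 147.5 -- skewed upward by the outlier
print(f"median: {statistics.median(samples):.1f} ms")  # 100.5 -- robust to the outlier
print(f"stdev:  {statistics.stdev(samples):.1f} ms")   # a large stdev flags inconsistent data
```

Here the mean alone would overstate typical latency by nearly 50%, which is exactly the reporting pitfall CloudHarmony warns about.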

Venkat S. Devraj, co-founder and CTO of database and application automation software provider Stratavia and author of Oracle 24×7 Tips & Techniques (McGraw-Hill), had the following to say about the number of DBAs necessary to administer an Oracle database environment:

Every so often, I come across IT Managers bragging that they have a ratio of “50 DB instances to 1 DBA” or “80 DBs to 1 DBA”… — Is that supposed to be good? And conversely, is a lower ratio such as “5 to 1” necessarily bad? Compared to what? In response, I get back vague assertions such as “well, the average in the database industry seems to be 20 to 1”.

Venkat recommends a benchmarking approach:

The reality is, a unidimensional *and* subjective ratio, based on so-called industry best practices, never reveals the entire picture. A better method (albeit also subjective) to evaluate and improve DBA effectiveness would be to establish the current productivity level (“PL”) as a baseline, initiate ways to enhance it, and carry out comparisons on an ongoing basis against this baseline. Cross-industry comparisons seldom make sense; however, the PL from other high-performing IT groups in similar companies/industries may serve as a decent benchmark.

Finally, Venkat recommends developing a 2×2 matrix in which an “Environmental Complexity Score” is charted against a “Delivery Maturity Score”. Your PL depends on where you land in the matrix. If you picture the X-Y chart as comprising four quadrants (top left, bottom left, top right, and bottom right), the top left is “Bad”, the bottom left is “Mediocre”, the top right is “Good”, and the bottom right is “Excellent”.
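
As a rough sketch of how such a scoring matrix might be coded (the axis assignment and score midpoint are my assumptions; Mr. Devraj's post defines the scores themselves):

```python
def productivity_quadrant(maturity: float, complexity: float,
                          midpoint: float = 50.0) -> str:
    """Map the two scores to Devraj's four quadrant labels.

    Assumes Delivery Maturity on the X axis and Environmental
    Complexity on the Y axis, with a hypothetical midpoint of 50.
    """
    right = maturity >= midpoint   # higher delivery maturity
    top = complexity >= midpoint   # higher environmental complexity
    if right:
        return "Good" if top else "Excellent"
    return "Bad" if top else "Mediocre"

# Hypothetical shop: mature delivery, simple environment -> "Excellent"
print(productivity_quadrant(maturity=75, complexity=30))
```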

For a full description of Mr. Devraj’s approach, see: Selective Deliberations on Databases, Data Center Automation & Cloud Computing

According to the 2009 Rotman-TELUS Joint Study on Canadian IT Security Practices, the financial crisis has had a negative impact on IT security budgets.

The subprime mortgage crisis was prompted by a striking rise in mortgage foreclosures in the United States, with major adverse effects for banks and financial markets around the globe.

Regarding security budgets being affected by the global crisis: 75% of responding organizations reacted by applying budgetary cuts to their security expenditures, while 25% actually increased their security investment. Half of the respondents reported minor adjustments in which 10% or less of their budget was affected (most of them adjusting downward), 20% reported moderate cuts of 10%-25%, and fewer than 10% applied severe cuts of 50% or more.

A detailed analysis of the reactions showed that the average budgetary impact was a 4.6% cut in IT security expenditures in the government sector, a 6.6% cut in the private sector, and a 10.8% cut in the public sector (from Table 17).

The 2009 report has IT security metrics on:

  • Application Security
  • IT Security Budgets
  • IT Governance
  • IT Security Breaches
  • Security Technologies

A full copy of the Joint Study on Canadian IT Security Practices can be found here.

According to the ZDNet article “IT manager jobs to staff jobs in move to the Cloud” (by John Hazard, March 9, 2011):

The typical IT organization usually maintains a manager-to-staff ratio of about 11 percent (that number dips to 6 or 7 percent in larger companies), said John Longwell, vice president of research for Computer Economics. The ratio has been volatile for four years, according to Computer Economics' recently released study, IT Management and Administration Staffing Ratios. As businesses adjusted to the recession, they first eliminated staff positions, raising the ratio to its peak of 12 percent in 2009. In 2010, businesses trimmed management roles as well, lowering the ratio to 11 percent, Longwell said. But the long-term trend is toward a higher manager-to-staff ratio, he told me.

“Over the longer term, though, I think we will see a continued evolution of the IT organizations toward having more chiefs and fewer Indians as functions move into the cloud or become more automated.”

For a complete copy of the article see: http://www.zdnet.com/blog/btl/it-manager-jobs-to-staff-jobs-in-move-to-the-cloud/45808?tag=content;search-results-rivers

In a Computerworld (Australia) article entitled “Is there best practice for a server to system administrator ratio?” from July 9, 2010, the following was reported:

“We have observed that it can be, for example with a physical server, as low as 10 per admin, and for virtual servers as many as 500,” Gartner analyst, Errol Rasit, said. “But it really depends on the type of application. We have seen as an example from a particular customer – from some of our larger customers – that they had their admins managing 15 physical servers and when that moves to virtualisation it moves to something like 75 virtual servers.

To give you a different order of magnitude in another example one admin was looking at 50 physical servers and then moving to 250 virtual servers. I will say that we have seen maybe 500 or 600 virtual servers being managed by a single admin.

IDC, meanwhile, notes that in Australia the ratio for an SMB would vary greatly from a hoster, and again from a cloud provider like Amazon or Microsoft. The analyst house's statistics suggest anywhere from 10,000:1 at a dominant vendor like Google down to the SMB average of 30:1 for physical boxes and 80:1 for virtual machines.
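
Ratios like these translate directly into headcount estimates. A minimal sketch using the IDC SMB averages quoted above (substitute your own observed ratios for the defaults, since, as the article stresses, they vary widely by workload):

```python
from math import ceil

def admins_needed(physical: int, virtual: int,
                  physical_ratio: int = 30, virtual_ratio: int = 80) -> int:
    """Rough sysadmin headcount from server counts and servers-per-admin ratios.

    Defaults are the IDC SMB averages cited in the article.
    """
    return ceil(physical / physical_ratio + virtual / virtual_ratio)

# Hypothetical estate: 60 physical and 400 virtual servers -> 7 admins
print(admins_needed(physical=60, virtual=400))
```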

One enterprise IT manager told us the ratio for physical servers was roughly 50:1, another working for a government organisation said 15-20:1, and an IT director at a research and development outfit noted that in a mid-size organisation a system administrator could maintain 10-14 servers per week, or, if their role was merely maintenance (i.e., no projects, no debugging, etc.), 25-35 servers per week. The IT director added that a bigger organisation with larger economies of scale could potentially increase the ratio to 10-14 servers per admin per day with staff dedicated to just maintenance.

One of the key factors in increasing the ratio, however, is how much automation can be rolled into the maintenance / management of the server farm.

“A lot of what changes the ratio in the physical world is the types of tools being used to automate a lot of the processes; so run book automation and these sorts of things,” Gartner’s Rasit said. “That tends to be the main differentiator. The problem with virtualisation and virtualisation tools is there are a lot of them. It is very, very easy for a lot of customers to try and automate everything and that doesn’t necessarily always bear fruit for the organisation because they are spending too much time doing that.

A complete copy of the article can be found at: http://www.computerworld.com.au/article/352635/there_best_practice_server_system_administrator_ratio_/

Mark McDonald, group vice president and head of research in Gartner Executive Programs, suggests replacing the IT budget / revenue ratio with a metric that has meaning, like IT headcount to free cash flow. That is a metric one CIO is using, and it makes more sense because it can be managed.

He suggests measuring IT headcount because more than 70% of most IT budgets are already contractually committed, effectively removing them from short-term management control. IT headcount is the result of factors the CIO can control, like the level of automation, the skill of their people, the structure of their operations, and the nature of their IT investment budget.

McDonald suggests that free cash flow is a better denominator, as it is more indicative of a company's health. Management can influence free cash flow and manage it to some extent in either strong or weak economies; case in point: look at organizations building cash in the recession. Free cash flow is also something that IT can influence, as IT systems integrate process and information flows, which improves end-to-end process and cash performance.

Free cash flow and IT headcount are harder to measure, but the ratio should produce a clearer signal and inform better management decisions and actions.
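
As a worked example of the metric itself (the firm and its figures are hypothetical, and the per-$1M scaling is my choice of unit; the article proposes only the ratio):

```python
def it_headcount_per_fcf(it_headcount: int, free_cash_flow: float) -> float:
    """McDonald-style ratio: IT employees per $1M of free cash flow."""
    return it_headcount / (free_cash_flow / 1_000_000)

# Hypothetical firm: 120 IT staff against $400M of free cash flow
print(f"{it_headcount_per_fcf(120, 400_000_000):.2f} IT staff per $1M FCF")  # 0.30
```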

See full article: http://blogs.gartner.com/mark_mcdonald/2010/04/06/it-spend-as-a-percent-of-revenue-%E2%80%93-a-dubious-metric-at-best/

In a presentation entitled “Staffing Strategies for the 21st Century” by Katherine Spencer Lee, Executive Director at Robert Half Technology (September 18, 2008), the following IT staffing metrics were presented:

A Robert Half Technology survey asked 1,400 CIOs to compare the actual versus ideal ratio of internal end users to technical support employees at their company:

  • Mean response for Actual was 136:1
  • Mean response for Ideal was 82:1

Technical support center staffs are thus about 40 percent smaller, on average, than optimal (at 136:1 versus the ideal 82:1, staffing sits at roughly 60% of the ideal level).

Mobile vs Static staffing ratios:

  • There is a baseline ratio of around 90 customers per analyst.
  • Technical and mobile user bases warrant a lower user-to-analyst ratio due to higher complexity (1:80-110)
  • Fewer analysts are required for non-technical and static users (1:120-160)

Organizational goals should also help set staffing levels (a headcount sketch follows the list):

  1. Compete at the cutting edge of innovation (25:1 to 50:1)
  2. Compete on full service and overall value (60:1 to 100:1)
  3. Compete on thin cost margins and scalability (125:1 to 200:1)
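
Turning those target ratios into headcount is straightforward division. A minimal sketch (the dictionary keys and the 2,000-user example are hypothetical; the ratio ranges come from the presentation):

```python
from math import ceil

# Users-per-analyst ranges from the Robert Half presentation
GOAL_RATIOS = {
    "innovation": (25, 50),     # compete at the cutting edge of innovation
    "full_service": (60, 100),  # compete on full service and overall value
    "low_cost": (125, 200),     # compete on thin cost margins and scalability
}

def support_staff_range(end_users: int, goal: str) -> tuple[int, int]:
    """Analyst headcount range for a user population and strategic goal."""
    low_ratio, high_ratio = GOAL_RATIOS[goal]
    # A richer ratio (fewer users per analyst) means more staff
    return ceil(end_users / high_ratio), ceil(end_users / low_ratio)

# Hypothetical 2,000-user company competing on full service -> (20, 34)
print(support_staff_range(2000, "full_service"))
```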

A complete copy of the Robert Half presentation can be found here.