
Archive for the ‘hardware’ Category

According to a ZDNet article by John Hazard (March 9, 2011), “IT manager jobs to staff jobs in move to the Cloud”:

The typical IT organization usually maintains a manager-to-staff ratio of about 11 percent (that number dips to 6 or 7 percent in larger companies), said John Longwell, vice president of research for Computer Economics. The ratio has been volatile for four years, according to Computer Economics’ recently released study, IT Management and Administration Staffing Ratios. As businesses adjusted to the recession, they first eliminated staff positions, raising the ratio to its peak of 12 percent in 2009. In 2010, businesses trimmed management roles as well, lowering the ratio to 11 percent, Longwell said. But the long-term trend is toward a higher manager-to-staff ratio, he told me.

“Over the longer term, though, I think we will see a continued evolution of the IT organizations toward having more chiefs and fewer Indians as functions move into the cloud or become more automated.”
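The arithmetic behind the ratio is simple; here is a minimal sketch (my own illustration, with made-up headcounts, not figures from Computer Economics):

```python
# Illustrative only: manager-to-staff ratio as a percentage.
# Headcounts below are invented examples, not Computer Economics data.

def manager_to_staff_ratio(managers: int, staff: int) -> float:
    """Managers as a percentage of staff headcount."""
    return 100.0 * managers / staff

print(manager_to_staff_ratio(11, 100))  # 11.0 -> the "typical" ~11 percent
print(manager_to_staff_ratio(7, 100))   # 7.0  -> the larger-company 6-7 percent
```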

For a complete copy of the article, see: http://www.zdnet.com/blog/btl/it-manager-jobs-to-staff-jobs-in-move-to-the-cloud/45808?tag=content;search-results-rivers


In a Computerworld (Australia) article entitled “Is there best practice for a server to system administrator ratio?” from July 9, 2010, the following was reported:

“We have observed that it can be, for example with a physical server, as low as 10 per admin, and for virtual servers as many as 500,” Gartner analyst, Errol Rasit, said. “But it really depends on the type of application. We have seen as an example from a particular customer – from some of our larger customers – that they had their admins managing 15 physical servers and when that moves to virtualisation it moves to something like 75 virtual servers.

“To give you a different order of magnitude, in another example one admin was looking at 50 physical servers and then moving to 250 virtual servers. I will say that we have seen maybe 500 or 600 virtual servers being managed by a single admin.”

IDC meanwhile notes that in Australia the ratio for an SMB would vary greatly from a hoster and again to a cloud provider like Amazon or Microsoft. The analyst house’s statistics suggest anywhere from 10,000:1 at a dominant vendor like Google down to the SMB average of 30:1 for physical boxes and 80:1 for virtual machines.

One enterprise IT manager told us the ratio for physical servers was roughly 50:1, another working for a government organisation said 15-20:1, and an IT director at a research and development outfit noted that in a mid-size organisation a system administrator could maintain 10-14 servers per week, or, if their role was merely maintenance (i.e. no projects, no debugging, etc.), they could look after 25-35 servers per week. The IT director added that a bigger organisation with larger economies of scale could potentially increase the ratio to 10-14 servers per admin per day with staff dedicated to just maintenance.
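To make the staffing arithmetic concrete, here is a minimal sketch (my own illustration; the servers-per-admin figures are the anecdotal ratios quoted above, used purely as assumptions):

```python
# Rough headcount estimate from a servers-per-admin ratio.
import math

def admins_needed(servers: int, servers_per_admin: int) -> int:
    """Round up: a fractional admin still means a whole hire."""
    return math.ceil(servers / servers_per_admin)

# Hypothetical farm of 300 physical and 1,200 virtual servers:
print(admins_needed(300, 15))    # 20 admins at 15 physical servers each
print(admins_needed(1200, 75))   # 16 admins at 75 virtual servers each
```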

One of the key factors in increasing the ratio, however, is how much automation can be rolled into the maintenance / management of the server farm.

“A lot of what changes the ratio in the physical world is the types of tools being used to automate a lot of the processes; so run book automation and these sorts of things,” Gartner’s Rasit said. “That tends to be the main differentiator. The problem with virtualisation and virtualisation tools is there are a lot of them. It is very, very easy for a lot of customers to try and automate everything and that doesn’t necessarily always bear fruit for the organisation because they are spending too much time doing that.”

A complete copy of the article can be found at: http://www.computerworld.com.au/article/352635/there_best_practice_server_system_administrator_ratio_/


Can you believe this? In 2006, United States Patent 7,020,621 was issued with the following purpose:

A method for determining the total cost incurred per user of information technology (IT) in a distributed computing environment includes obtaining base costs and ongoing costs of an IT system and applying those costs to a series of metrics. The metrics are compared against benchmarks to evaluate and assess where cost efficiencies can be achieved.

In one embodiment, the invention includes a method for determining the cost per user of an information technology system. The method includes obtaining base costs, ongoing direct costs, and ongoing indirect costs. The method further includes gathering information relating to user profiles and organizational characteristics. These costs and information are input into a computer program to determine the cost for each user.
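Stripped of the patent language, the calculation reduces to dividing the summed costs by the user count. Here is a minimal sketch of my reading of the abstract, with invented cost categories and figures:

```python
# Hypothetical cost-per-user calculation in the spirit of the abstract.
# Inputs are invented for illustration, not taken from the patent.

def cost_per_user(base: float, ongoing_direct: float,
                  ongoing_indirect: float, users: int) -> float:
    """Total IT cost spread across the user population."""
    return (base + ongoing_direct + ongoing_indirect) / users

# e.g. $500k base, $200k direct ops, $100k indirect, 400 users:
print(cost_per_user(500_000, 200_000, 100_000, 400))  # 2000.0 dollars/user
```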

Full information can be found at: http://www.google.com/patents?hl=en&lr=&vid=USPAT7020621&id=JU54AAAAEBAJ&oi=fnd&dq=7020621&printsec=abstract#v=onepage&q=&f=false


An “Information Technology Operation Benchmarks Report” was created by Nick Ganesan, CIO/Associate Vice-Chancellor for ITTS at Fayetteville State University. The report contains detailed benchmark data for:

  1. IT Budget Profile
  2. IT Budget per IT User
  3. IT Budget as a Percentage of Institutional Budget
  4. IT Users to IT Staff Ratio
  5. IT Staff to Number of PCs Ratio
  6. Staffing Profile by Service Area
  7. PCs to IT User Ratio
  8. Central IT Support Percentage
  9. Staff Ratio by Service Areas

The benchmark data compares IT services at universities located in the United States.
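Most of the report’s metrics are simple ratios; here is a sketch of how a few of them are computed (all figures invented for illustration, not data from the FSU report):

```python
# Illustrative computation of several of the report's benchmark ratios.
it_budget = 4_000_000              # annual IT budget, dollars (invented)
institutional_budget = 80_000_000  # total institutional budget (invented)
it_users, it_staff, pcs = 6_000, 60, 5_500

print(it_budget / it_users)                    # IT budget per IT user
print(100 * it_budget / institutional_budget)  # IT budget as % of institution
print(it_users / it_staff)                     # IT users per IT staff member
print(pcs / it_staff)                          # PCs per IT staff member
```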

Location of Report: http://www.kfupm.edu.sa/sict/ictc/related%20documents/IT%20Benchmark/Fayetteville%20State%20University%20IT_Benchmarks_ver1.pdf

Local Copy: IT Benchmark for Universities – Fayetteville State University Report


Avishay Traeger from the IBM Haifa Research Lab and Erez Zadok from Stony Brook University are raising awareness of issues relating to proper benchmarking practices of file and storage systems.  They hope that with greater awareness, standards will be raised, and more rigorous and scientific evaluations will be performed and published.

In May 2008 they published a paper in ACM Transactions on Storage entitled “A Nine Year Study of File System and Storage Benchmarking,” in which they surveyed 415 file system and storage benchmarks from 106 papers published in four highly regarded conferences (SOSP, OSDI, USENIX, and FAST) between 1999 and 2007. They found that the most popular benchmarks are flawed, and that many research papers used poor benchmarking practices and did not give a clear indication of the system’s true performance. They have provided a set of guidelines that they hope will improve future performance evaluations; an updated version of the guidelines is available.
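One practice such guidelines stress is repeating runs and reporting variation rather than a single number. A minimal sketch of that idea (mine, not code from Traeger and Zadok; the workload is a stand-in):

```python
# Repeat a benchmark and report dispersion, not just one measurement.
import statistics
import time

def run_benchmark() -> float:
    """Stand-in workload; substitute a real file/storage benchmark here."""
    start = time.perf_counter()
    sum(i * i for i in range(1_000_000))  # dummy CPU work
    return time.perf_counter() - start

samples = [run_benchmark() for _ in range(10)]
mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
print(f"mean {mean:.4f}s, stdev {stdev:.4f}s ({100 * stdev / mean:.1f}% of mean)")
```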

Traeger and Zadok have also set up a mailing list for information on future events, as well as discussions. More information can be found on their File and Storage System Benchmarking Portal: http://fsbench.filesystems.org/.


This post on www.ilovebonnie.net documents some impressive system performance improvements from the addition of Squid (a caching proxy) and APC (an opcode cache for PHP).
* Apache is able to deliver roughly 700% more requests per second with Squid when serving 1KB and 100KB images.
* Server load is reduced using Squid because the server does not have to create a bunch of Apache processes to handle the requests.
* APC took a system that could barely handle 10-20 requests per second to handling 50-60 requests per second, roughly a 400% increase.
* APC allowed load times to remain under 5 seconds even with 200 concurrent threads slamming the server.
* These two caches are easy to install and configure, and they let you get far more performance out of existing hardware.

The post has an in-depth discussion and a number of supporting charts. The primary point is how simple it can be to improve performance and scalability by adding caching.
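As a quick sanity check on the quoted throughput numbers (my arithmetic, using the post’s figures):

```python
# Percent increase implied by the requests-per-second figures quoted above.
def percent_increase(before: float, after: float) -> float:
    return 100.0 * (after - before) / before

print(percent_increase(10, 50))  # 400.0 -> low end of the 10-20 to 50-60 jump
print(percent_increase(20, 60))  # 200.0 -> high end of the same range
```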

Source: http://www.ilovebonnie.net/2009/07/14/benchmark-results-show-400-to-700-percent-increase-in-server-capabilities-with-apc-and-squid-cache/


When investigating the purchase of computer servers it is important to understand the terms “Mean Time Between Failure” (MTBF) and “Mean Time to Repair” (MTTR). Here is a link to an outstanding article by George Spafford that explains the terms and gives good examples of each.

Understanding ‘Mean Time Between Failure’

May 14, 2004 by George Spafford in Datamation
http://itmanagement.earthweb.com/columns/article.php/3354191
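The textbook relationship between the two terms (a standard formula, not something specific to Spafford’s article) is that steady-state availability equals MTBF divided by the sum of MTBF and MTTR:

```python
# Standard availability formula: fraction of time a system is operational.
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

# e.g. a server that fails every 10,000 hours and takes 8 hours to repair:
print(f"{availability(10_000, 8):.5f}")  # 0.99920 -> about 99.92% uptime
```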


According to a benchmark test run by John Quinn and Cailin Nelson:

Drupal systems perform very well on Amazon EC2, even with a simple single-machine deployment. The larger hardware types perform significantly better, producing up to 12,500 pages per minute. This could be increased significantly by clustering as outlined here. The APC op-code cache increases performance by a factor of roughly 4x. The average response times were good in all the tests; the slowest tests yielded average times of 1.5s. Again, response times were significantly better on the better hardware and reduced further by the use of APC.

Amazon uses Xen-based virtualization technology to implement EC2. The cloud makes provisioning a machine as easy as executing a simple script command. When you are through with the machine, you simply terminate it and pay only for the hours you’ve used. EC2 provides three types of virtual hardware that you can instantiate.
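For scale, here is the quoted throughput converted to other units, plus the ~4x APC factor applied to a baseline (my arithmetic; the baseline figure is assumed, not from the benchmark):

```python
# Unit conversion for the quoted throughput, plus the ~4x APC factor.
pages_per_minute = 12_500
print(pages_per_minute / 60)  # ~208 pages/second on the largest instance

baseline_ppm = 3_000          # assumed no-APC baseline, for illustration only
print(baseline_ppm * 4)       # 12000 pages/minute with the ~4x APC speedup
```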

Source: John & Cailin Blog, “lamp performance on the elastic compute cloud: benchmarking drupal on amazon ec2”, January 28, 2008.


ServiceXen, an IT firm located in Atlanta, Georgia, has provided six interactive spreadsheets to assist in IT benchmarking activities. Each spreadsheet is a shared Zoho Sheet. See below:

  1. Data Center Security Audit
  2. New Employee Cost Calculator
  3. Server Buy vs. Lease Calculator
  4. Total Cost of Ownership (TCO) Calculator
  5. Virtualization Fit Tool
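To give a flavor of what such calculators do, here is a toy buy-vs-lease comparison in the spirit of spreadsheet 3 (every figure is invented; this is not ServiceXen’s model):

```python
# Toy buy-vs-lease comparison; all figures are invented for illustration.
purchase_price = 8_000    # server purchase, dollars
annual_maintenance = 500  # support contract, dollars/year
lease_per_month = 250     # lease rate, dollars/month
years = 3

buy_total = purchase_price + annual_maintenance * years
lease_total = lease_per_month * 12 * years
print(buy_total, lease_total)  # 9500 9000 -> leasing edges out buying here
```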


Virtualization Benchmark

Amazon sold storage to external customers for 15 cents/GB/month (estimated).

Bechtel’s internal storage costs were $3.75/GB/month.

WHAT BECHTEL LEARNED: Amazon could sell storage cheaply, Bechtel CIO Geir Ramleth believes, because its servers were more highly utilized.
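The gap between the two prices is a factor of 25 (my arithmetic on the figures above):

```python
# Ratio of Bechtel's internal storage cost to Amazon's estimated price.
amazon = 0.15    # dollars/GB/month, estimated external price
bechtel = 3.75   # dollars/GB/month, internal cost
print(round(bechtel / amazon))  # 25 -> internal cost is 25x Amazon's price
```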

Source: CIO Magazine, “Bechtel’s New Benchmarks,” October 24, 2008.


