Rationalising server estates has been one of the most significant datacentre trends of recent years, driven by the twin needs to make server management more cost effective and the growing constraints on power.

The major IT vendors have sensed the change and both IBM and HP are currently turning themselves into pioneers of the new frontier through internal projects. The approaches of the two companies may differ but their aims are identical – to cut costs, simplify their datacentres, and create a centre of excellence as a marketing tool.

"The primary elements of the cost case are people, software licences, power and space"

Doug Nelson, Systems Consultant, IBM

The reliance of corporates on their computer networks has increased massively in recent years and so has the server count. The army of staff required to maintain these critical systems, together with new requirements such as data sharing with business partners and mobile working for employees, is driving costs and complexity through the roof. Current research by market analyst Quocirca, in a report called Datacentre Asset Planning, highlights the problem.

The report, taken from a survey of over 300 senior IT managers, shows that 28 per cent do not know the exact number of servers they have, 22 per cent say it could take up to a day to find a server that has gone down, and another 20 per cent say it would take longer than that.

With office space at a premium, the need to consolidate as much of the server population as possible is obvious.

“Space and power constraints are beginning to hit datacentres,” Dennis Szubert, who wrote the Quocirca report with Clive Longbottom, says. “Eleven per cent will run out of space this year, while 14 per cent have already hit a power supply limit.”

It is therefore hardly a surprise that consolidation and, more particularly, virtualisation are in the spotlight. “Companies have been trying to consolidate servers for years,” Szubert explains, “but, until virtualisation appeared on industry-standard servers with VMware, you were restricted to using homogeneous workloads on a single physical server. VMware changed all that because each virtual server gets its own copy of the operating system and is isolated from the other virtual machines running on the same hardware. Now, it’s probably better to consolidate different workloads onto the same physical server so that you don’t have simultaneous usage peaks.”
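
To make the point about mixing workloads concrete, here is a minimal Python sketch using invented demand figures rather than anything from the Quocirca report: when workloads peak at different times, the peak of the combined load is lower than the sum of the individual peaks, so one shared host can be sized well below the capacity that three dedicated servers would need.

# Illustrative sketch with made-up hourly CPU demand (in notional cores) for
# three dissimilar workloads that peak at different times of day.
payroll_batch = [1, 1, 1, 1, 1, 1, 1, 1, 6, 6, 1, 1]   # overnight batch spike
web_frontend  = [1, 2, 3, 5, 6, 5, 4, 3, 2, 1, 1, 1]   # office-hours peak
reporting     = [1, 1, 1, 1, 1, 1, 2, 3, 1, 1, 5, 4]   # evening peak
# Sizing each workload for its own peak versus sizing one shared host for the
# peak of the combined load.
sum_of_peaks = max(payroll_batch) + max(web_frontend) + max(reporting)
combined = [a + b + c for a, b, c in zip(payroll_batch, web_frontend, reporting)]
peak_of_sum = max(combined)
print(f"Capacity if each workload is sized for its own peak: {sum_of_peaks} cores")
print(f"Capacity if one shared host is sized for the combined peak: {peak_of_sum} cores")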

It is against this backdrop – familiar to some of the more go-ahead IT departments – that IBM and HP are taking centre stage.

IBM plans to reduce its estate from more than 16,000 servers of various types to just over 100 System z9 and z10 mainframes running Linux. The initiative also forms part of the company’s Project Big Green, which aims to reduce the computer giant’s environmental impact. The project is broken down into four tranches; the first, now in progress, will consolidate 4,000 servers onto 30 mainframes.

Server reduction
In contrast, HP is reducing its server count from 21,000 to 14,000. This may not sound as radical as IBM’s plan but HP is tackling the same problem using its own BladeSystem server blades. HP’s argument is that, in three years, it has reduced its former installation of 35 datacentres to three new pairs, or six new datacentres, with full failover for business continuity and disaster recovery.

The IBM route involves very sophisticated, expensive hardware, with each mainframe costing over £500,000 but hosting an average of around 133 virtualised servers, according to the company. HP is taking an incremental approach to show customers how they can add blades as needs and budgets dictate. The HP server racks will occupy more space and use more power than the mainframes but will be cheaper to buy.
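
Taken at face value, the figures quoted for IBM’s first tranche allow a rough back-of-the-envelope comparison. The short Python sketch below uses only the article’s numbers (30 mainframes at over £500,000 each, roughly 133 virtualised servers per machine) and deliberately ignores software, migration labour and the HP blade side, for which no prices are given.

# Back-of-the-envelope arithmetic using only the figures quoted in the article.
mainframes = 30                      # first tranche: 4,000 servers onto 30 mainframes
cost_per_mainframe = 500_000         # pounds; the article says "over £500,000"
virtual_servers = mainframes * 133   # roughly the 4,000 servers being replaced
hardware_outlay = mainframes * cost_per_mainframe
cost_per_virtual_server = hardware_outlay / virtual_servers
print(f"Hardware outlay: £{hardware_outlay:,}")                                    # £15,000,000
print(f"Hardware cost per consolidated server: £{cost_per_virtual_server:,.0f}")   # about £3,759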

Dundee City Council

Three years ago, Dundee City Council recognised that its IT costs were in danger of spiralling out of control. The growth of IT had resulted in disparate systems, and its software licensing costs were moving to a per-processor basis. Additionally, Ged Bell, the council’s head of IT, discovered that some servers were using only six per cent of their processing capacity.

It was a classic case for consolidation. The decision to consolidate everything into Dundee’s datacentre also made it possible to support the need for resilience and disaster recovery. Consequently, a second datacentre was commissioned and built on a separate site to provide failover and extra capacity.

That decision was preceded by a period of infrastructure evaluation. “We saw there was a chance of getting down to two core platforms. We did a lot of in-house development using Oracle as a platform but the per-processor licensing costs were getting out of hand. We felt that we could be leaders in local government terms by adopting open source [and virtualisation],” Bell says.

The first application to be virtualised was a major Northgate payroll system, moved onto a Linux virtual machine running on an IBM z8 mainframe. This was a two-processor system and provided an immediate cost justification over the four- or six-processor box that Bell felt would otherwise have been required. Over time, at least 10 more of the council’s applications were added.

Consolidation is about more than real estate and cost. The rationalisation of software is an essential phase that often reduces licensing cost and certainly reduces maintenance.

Phil Dodsworth, HP’s director of datacentre solutions, explains: “We had a dreadful problem with ‘shadow IT’ to solve first. Business units all over the world had been doing their own thing buying software and hardware and the first strategy announced by our CIO Randy Mott was to eliminate these. We found that we had 5,000 applications, many of which duplicated functionality. We decided the goal should be to have just 1,500 applications across the whole organisation. I think we’re running about 1,560 now but we’re reducing that number all of the time as the programme rolls out.”

Single entity
Mott reveals an example of the shadow problem. “Two years ago, HP had 17 instances of Siebel across the company, with different versions using different capabilities,” he says. “In May 2007, this was reduced to one version and we are currently rolling out a new set of features and functions across the whole company.”

Another issue to be faced is how the migration to the new infrastructure is handled. The ‘clean-sheet’ approach of HP means that systems can continue running until it is time to switch over.

Dodsworth says: “HP is not moving any infrastructure from the old datacentres into the new, and the new datacentres are going to be equipped with new infrastructure and every energy-saving technology that is available. Where the software is concerned, because we’re moving to a new infrastructure, we’re able to put the new servers on the floor, provision software, and then it’s just a case of migrating to the software at an appropriate time, and you can do that without any service interruption. That’s the advantage of not having to power things down, load them onto a truck and recommission them at the new site.”

The HP programme will see a lot of hardware being retired and some hardware being remanufactured and sold at a discount through the HP Renew scheme. Some will be sold through second-user and brokerage markets, and older kit will be broken down to recover precious metals and elements before being recycled. This is an issue that has to be addressed in the age of the Waste Electrical and Electronic Equipment (WEEE) Directive, which governs computer disposal.

IBM is in a different situation because it is reducing the number of datacentres but not building new ones. Doug Neilson, an IBM systems consultant, explains, “As we have reduced the number of datacentres, we are obviously consolidating into the most appropriate sites in terms of space, power, resilience and security. Sometimes it’s to do with skills; sometimes it’s to do with floor space or whether it is one of the more modern datacentres.”

Migration and consolidation
The consolidation project commenced last August and IBM is now in the process of studying the server infrastructures, bringing the servers together, migrating, consolidating, and building up operational procedures. The project will run for about three years and Neilson says that the payback will come within two years.

He adds: “The primary elements of the cost case are people, software licences, power and space. Think about software licences: many pieces of software are licensed on a per-processor basis and running that software on 4,000 servers means over 4,000 licences. If I can run it on 30 instead, the licence saving could be dramatic. People savings are more variable but we have found that, with distributed servers, the number of people involved grows linearly. So, if I have 2,000 servers and 200 people, with 4,000 servers there are 400 people. With mainframes it doesn’t work that way: capacity can be grown dramatically without increasing the number of people who support it.”
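
Neilson’s two scaling arguments can be sketched in a few lines of Python. The per-licence price below is an invented placeholder (the article gives none) and is there only to show the shape of the saving, while the staffing rule of thumb is taken straight from his 200-people-per-2,000-servers example.

# Sketch of the licence and staffing arguments, using an assumed licence price.
HYPOTHETICAL_LICENCE_COST = 1_000    # pounds per per-processor licence (assumption)
def licence_bill(instances):
    # One per-processor licence per running instance, as Neilson describes.
    return instances * HYPOTHETICAL_LICENCE_COST
def distributed_support_staff(servers):
    # The article's rule of thumb: headcount grows linearly, about 200 people per 2,000 servers.
    return servers * 200 // 2_000
print(f"Licences for 4,000 distributed servers: £{licence_bill(4_000):,}")
print(f"Licences for 30 mainframe instances:    £{licence_bill(30):,}")
print(f"Support staff for 2,000 servers: {distributed_support_staff(2_000)}")
print(f"Support staff for 4,000 servers: {distributed_support_staff(4_000)}")
# On the mainframe, by contrast, capacity can grow without headcount growing with it.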

It will not all be cost savings for IBM because applications have to be moved. In some cases, these will have to be migrated from Windows or Unix onto Linux and, even though it is now a well understood migration path, it will still incur costs.

Return on investment
HP is not so open about its return on investment.
“Before we started the programme our ROI was not good,” Dodsworth says. “[We were spending four or five per cent] of sales revenue [on IT]. The programme does require us to spend and it’s obviously quite a heavy expenditure associated with building the datacentres and implementing all of this new technology but, at the end of the programme, we will have IT that will cost approximately two per cent of our sales revenue. So there’s a huge cost benefit that will be delivered through the change in the infrastructure, the reduction in energy consumption, streamlining our IT workforce by removing contingent work, and moving to a staff-based model.”

Virtualisation – a history

Although highly fashionable at present, virtualisation was first used on mainframes at IBM’s laboratories in 1966. It became commercially available in 1967 and was seen as a way to get the best value from expensive System/360 hardware, first through the CP-40 and later the CP-67 control programs. Virtualisation took advantage of the computers’ multitasking capabilities to offer concurrent time-sharing.

By 1972, the concept’s requirements were better understood and the term hypervisor started to be used to describe the control program (CP-370) that managed the virtual machines on System/370 mainframes. The term was coined because the standard mainframe kernel ran in what IBM called ‘supervisor mode’ but the new system controlled several supervisor-based virtual machines and was therefore a “hyper supervisor”.

The growing computational capabilities of minicomputers in the 1970s and 1980s made virtualisation possible on Unix and VMS operating systems, spreading the technology to a wider market. Digital Equipment and a host of other minicomputer makers of that era all offered virtualisation but it was hardly a mainstream, low-cost infrastructure.

Although virtualisation was theoretically possible on microcomputers by the mid-90s, development stalled because of the popularity of the distributed computing model.

The great breakthrough for virtualisation came in the past 10 years, with VMware appearing in 1999.

"So there’s a huge cost benefit that will be delivered through the change in the infrastructure, the reduction in energy consumption, streamlining our IT workforce by removing contingent work, and moving to a staff-based model.”

One of the principal cost returns will come from reduction in power draw.
Quocirca’s Szubert says: “Power consumption is becoming one of the biggest components in the cost of computing. The predictions are that it is approaching and will exceed the cost of hardware [over the lifetime of the server]. So you’ve got to start factoring it in soon. The airline industry is getting panned for its carbon footprint but the IT industry causes just as much pollution. The IT industry is getting a pretty easy ride but I think that’s going to change with the increasing demands for computing.”

The power issue is becoming critical, with datacentres being moved to areas such as Oregon where hydroelectric power offers cheaper, more environmentally friendly electricity. In other areas, electricity suppliers are capping the amount of power that a company can draw for data processing. Consolidation through virtualisation can cut power usage dramatically, by an average of 50 per cent.
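
As a rough illustration of both points, consider a single hypothetical server. The wattage, tariff, service life and purchase price below are assumptions chosen for the example, not figures from the article or the Quocirca report, but they show how lifetime energy spend can approach the hardware price and what a 50 per cent consolidation saving is worth.

# Hypothetical single-server example; all inputs are assumptions for illustration.
watts_drawn = 400          # assumed average draw including cooling overhead
pence_per_kwh = 10         # assumed electricity tariff
service_life_years = 4     # assumed service life
hardware_price = 1_500     # assumed purchase price in pounds
hours = service_life_years * 365 * 24
lifetime_kwh = watts_drawn / 1_000 * hours
lifetime_energy_cost = lifetime_kwh * pence_per_kwh / 100   # pounds
print(f"Lifetime energy cost: £{lifetime_energy_cost:,.0f} versus hardware at £{hardware_price:,}")
print(f"With a 50 per cent consolidation saving: £{lifetime_energy_cost * 0.5:,.0f}")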

Consolidation has a lot going for it so it is no wonder that 67 per cent of Quocirca’s respondents say they are consolidating systems and a further 17 per cent are considering it.
