Virtualisation – creating logical pools of IT resources not linked to physical devices – can reduce spending on new server and storage hardware, increase application uptime and simplify IT management. But organisations will only get those benefits if they follow some key steps, according to IT managers, analysts and other industry observers.


Some users purchase storage virtualisation for only one purpose without realising the other benefits it can provide them, says Mike Karp, an analyst at Enterprise Management Associates, a research group in the US.

For example, “It does not necessarily pay to purchase file-based virtualisation technology just to do data migration,” he says. Storage virtualisation becomes more worthwhile, he says, when it is also used for purposes such as capacity management, load balancing and ‘information life-cycle management,’ which moves data to less expensive storage devices as the data becomes less valuable.

Users seeking to virtualise their file-based storage should first consider their objective, says Greg Schulz, founder and analyst at The StorageIO Group, an industry analysis and consulting firm. If the goal is simply to reduce the number of NAS appliances in use, consolidating them into fewer, larger devices might be easier and less expensive than virtualisation, he says.

But if the goal is to get a single view of all the space available on all the NAS appliances in different offices, virtualisation might be better, he says.

Careful research also helps IT staff set expectations. According to analyst Andi Mann at Enterprise Management Associates, the outcomes of virtualisation arrive in a sequence: enabling disaster recovery and business continuity; increasing flexibility and agility; improving server utilisation; and reducing downtime. Only after those objectives have been met do administration and management costs fall.


Realising cost savings – or even just keeping the virtual environment stable and secure – requires consistent processes for creating, configuring, maintaining and eventually eliminating virtual servers when they are no longer needed.

Because a virtual machine (VM) doesn’t require the purchase of new hardware, there is often no formal process to approve its creation, says Stefan Paychere, founder and chief technology officer at Dunes Technologies Incorporated, a virtualisation management software vendor. That can result in a ‘sprawl’ of virtual servers that are as hard to manage as their physical counterparts.
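One lightweight countermeasure is to give VM creation the same gate a hardware purchase order provides: refuse requests that lack an approval record and a planned end of life. A minimal Python sketch of the idea (the registry and its fields are hypothetical, not any particular product’s API):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class VMRequest:
    name: str
    owner: str                        # who is accountable for this VM
    approval_ticket: Optional[str]    # change-control reference, if any
    decommission_by: date             # planned end of life, to avoid orphans

@dataclass
class VMRegistry:
    vms: dict = field(default_factory=dict)

    def create(self, req: VMRequest) -> bool:
        # Refuse unapproved requests -- the same brake a hardware
        # purchase order would normally apply.
        if not req.approval_ticket:
            return False
        self.vms[req.name] = req
        return True
```

Every VM that exists is then traceable to an owner and a ticket, which is what makes later clean-up possible.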

Botched changes to a physical ‘host’ server are especially dangerous because they can damage the availability or performance of multiple VMs. Many IT managers also neglect back-up and failover plans, even though virtualisation can make it easier for working servers to take over for failed hardware.

Consistent monitoring of the VMs is also necessary to ensure they’ve been properly configured.
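That monitoring can start as simply as diffing each VM’s reported settings against an approved baseline. A sketch of the idea in Python (the setting names are invented for illustration):

```python
# An approved configuration baseline; the setting names are invented.
BASELINE = {
    "firewall_enabled": True,
    "backup_agent": "installed",
    "vcpu_limit": 4,
}

def config_drift(vm_config):
    """Return each setting where a VM deviates, as (expected, actual)."""
    return {key: (expected, vm_config.get(key))
            for key, expected in BASELINE.items()
            if vm_config.get(key) != expected}
```

An empty result means the VM is compliant; anything else is a concrete list of what to fix.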


Predicting the hardware required for each VM is tricky, says Gordon Haff, an analyst at Illuminata Incorporated, a research and analysis firm in the US.

“People look at something like CPU utilisation and assume they can put more virtual machines on a piece of hardware than they in fact can,” he says.

Some virtualisation management tools can dynamically reallocate and balance VMs among physical machines as application needs change, he says.

Premigration tools from several vendors also help calculate the best ratio of virtual to physical servers.
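The core of such a calculation is Haff’s point in reverse: size the host by its scarcest resource, not by CPU alone. A hedged sketch, with made-up host and VM figures:

```python
# Hypothetical host capacity, with 25% headroom kept free for spikes.
HOST = {"cpu_ghz": 24.0, "ram_gb": 128, "disk_iops": 20000, "net_mbps": 10000}
HEADROOM = 0.75

def vms_per_host(vm_profile):
    """How many identical VMs fit, limited by the tightest resource."""
    return min(int(HOST[res] * HEADROOM / need)
               for res, need in vm_profile.items())

# A sample VM profile: CPU alone would suggest 18 VMs per host
# (24 * 0.75 / 1.0), but disk I/O caps the real answer at 10.
vm = {"cpu_ghz": 1.0, "ram_gb": 8, "disk_iops": 1500, "net_mbps": 200}
```

Taking the minimum across dimensions is exactly what the CPU-only estimate misses.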

Schulz himself recommends testing VM configurations under actual performance conditions, rather than extrapolating from tests with only a few servers or a small amount of data. He also suggests deploying virtualisation first in low-risk areas such as test and development, then for new applications developed and tested in virtual environments, and only then for older applications.


A physical server running multiple virtual machines generates far more network traffic than one running a single application.

However, IT decision-makers often don’t consider the impact of virtualisation on other parts of the IT infrastructure.

Server virtualisation can sharply boost storage needs: an administrator can throw up a new server in seconds and consume 15GB to 20GB of storage without even realising it. Companies can reduce their appetite for storage by ‘tightly restricting’ the creation of VMs, using tools that perform automated discovery, inventory and configuration management of VMs.
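An inventory-driven approach boils down to keeping a ledger of every VM, its storage footprint and its last use, then flagging idle ones for reclamation. A simple sketch (the inventory records are invented):

```python
from datetime import date, timedelta

# A hypothetical inventory, as a discovery tool might report it.
inventory = [
    {"name": "build07", "disk_gb": 20, "last_used": date(2024, 1, 5)},
    {"name": "web01",   "disk_gb": 15, "last_used": date(2024, 6, 1)},
]

def reclaim_candidates(vms, today, idle_days=90):
    """Flag VMs untouched for idle_days; each one ties up 15-20GB."""
    cutoff = today - timedelta(days=idle_days)
    return [vm["name"] for vm in vms if vm["last_used"] < cutoff]
```

Run regularly, a report like this turns ‘sprawl’ from an invisible cost into a reviewable list.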

Handling the increased traffic to and from VMs might require implementing multipathing, path failover and load balancing on the network, so that adequate bandwidth remains available if one network component becomes overloaded or fails.
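Whatever products implement it, the logic of multipathing is straightforward: rotate traffic across healthy paths and drop a path the moment it fails. A toy Python illustration of that logic (not a real driver API):

```python
class MultiPath:
    """Round-robin load balancing with fail-over across network paths."""

    def __init__(self, paths):
        self.healthy = list(paths)

    def fail(self, path):
        # Fail-over: drop a dead path; traffic shifts to the survivors.
        self.healthy.remove(path)

    def next_path(self):
        # Load balancing: rotate through whatever is still up.
        path = self.healthy.pop(0)
        self.healthy.append(path)
        return path
```

Real multipath drivers add health checks and weighting, but the fail-over behaviour is the same: surviving paths silently absorb the load.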

Maintaining proper security, uptime or redundancy might require creating virtual LANs to keep sensitive traffic from unauthorised eyes or on the fastest network links. The network interface cards that connect servers to the network can also be virtualised to give each guest server its own IP address, and the host bus adapters that link servers to storage arrays can be virtualised to present multiple logical ports to the storage fabric. Such beyond-the-server virtualisation helps preserve the unique elements that ensure security and performance for each VM.


Once you have the grand design down, watch out for the implementation details that can increase costs or complexity. One such detail: different software companies have different licensing policies for VMs.

For example, certain Microsoft products let you run multiple instances on Microsoft’s virtualisation software without any additional licensing charge, but take the same application package and move it over to a VMware environment, and the licensing per instance is counted differently. The result can be significant additional licensing costs that were probably not planned for.
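The gap is easy to quantify once you know how each vendor counts. A deliberately simplified comparison, with invented prices and counting rules rather than actual Microsoft or VMware terms:

```python
def per_host_cost(hosts, price_per_host):
    # Licensing that covers unlimited VM instances on each licensed host.
    return hosts * price_per_host

def per_instance_cost(instances, price_per_instance):
    # Licensing counted per running VM instance, wherever it lands.
    return instances * price_per_instance

# Ten VMs consolidated onto two hosts, at illustrative prices:
#   per-host:     2 * 3000 = 6000
#   per-instance: 10 * 700 = 7000  -- same workload, different bill
```

The point is not the invented numbers but the shape of the calculation: the counting rule, not the list price, drives the difference.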

Migrating applications and servers from physical to virtual machines can be more complicated than vendors claim. In some cases, the operating system or application will need specific updates before being converted to the virtualised environment.

Driver and hardware incompatibilities that did not exist in the previous physical environment, but do exist in the new virtualised environment, can also require multiple migration attempts.

Hardware conflicts are a concern for those lacking the resources to ensure compatibility with popular virtualisation platforms such as VMware’s, says Enterprise Management Associates’ Karp.

Another detail to keep in mind: IT managers need new skills in the virtualised world, such as an understanding of the network traffic going in and out of the virtualised environment. With virtualised switches, virtualised network interface cards and virtualised local area networks, you need somebody who understands all of that and can implement it properly.

The same is true of storage, where now you have multiple hosts sharing common volumes instead of each server having its own dedicated storage.

Virtualisation “should not be adding any more complexity, should not be adding any more management work for you,” says The StorageIO Group’s Schulz. “It should not be introducing any new bottlenecks; it should not be introducing any new instability.”


What each VM will require:

■ The number of CPU cycles

■ The amount of disk space

■ The level of disk I/O

■ The amount of memory

■ The network bandwidth
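The checklist maps naturally onto a per-VM record whose per-dimension totals drive host sizing. A sketch (the field values in the usage comment below are invented):

```python
from dataclasses import dataclass

@dataclass
class VMNeeds:
    cpu_ghz: float    # CPU cycles
    disk_gb: int      # disk space
    disk_iops: int    # disk I/O
    ram_gb: int       # memory
    net_mbps: int     # network bandwidth

def total(needs):
    """Sum each dimension across the VMs destined for one host."""
    return VMNeeds(
        sum(n.cpu_ghz for n in needs),
        sum(n.disk_gb for n in needs),
        sum(n.disk_iops for n in needs),
        sum(n.ram_gb for n in needs),
        sum(n.net_mbps for n in needs),
    )

# e.g. total([VMNeeds(1.0, 20, 500, 4, 100), VMNeeds(2.0, 40, 1000, 8, 200)])
```

Comparing each total against the host’s capacity, dimension by dimension, is what keeps any single resource from becoming the hidden bottleneck.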