The IT industry is full of bad habits, none greater than treating new technology areas as faits accomplis - which of course makes things a tad complicated eighteen months down the line, when organisations are just about deciding to adopt, but marketeers have already moved on to the next ‘big thing'. I remember overhearing a PR executive, a good few years ago now, saying that SOA needed a new name, largely so that people could actually do something with it without making the vendors appear in some way backward. It was ever thus, and no doubt it always will be.

Case in point: virtualisation. If the hype were to be believed, most organisations would be done by now - servers would be consolidated into neat arrays of racks and blades, drawing on a similarly neat configuration of fit-for-purpose storage operating as a single, flexible pool. What's not to like? Nothing in principle, but the practice might take a little longer (probably a good thing for VMware, to be fair - if we were all finished, they'd have nothing left to sell).

In reality, most organisations have yet to leave the starting blocks when it comes to virtualisation. Thinking specifically about server virtualisation, for example, while over 50 per cent of larger (250-plus employee) organisations may have adopted it in some shape or form, the majority of deployments remain non-critical workloads or pilots.

So, we need to be careful. Virtualisation in general, and server virtualisation in particular, is not a done deal. We can say with some confidence that it will be accepted as a mainstream technology, as (for a change) adoption is being driven as much by organisational ‘pull' as by industry ‘push' - not only because of the relatively immediate cash savings in hardware, power and licensing that virtualisation can bring, but also because of the operational flexibility and provisioning benefits ("Want a new physical server? No. Want a virtual server? Yes.").

But let's not run away with ourselves. Just because over half of larger organisations have some form of virtualisation in place doesn't mean it is already a mainstream technology. A number of hurdles still need to be overcome, some to do with what happens when organisations start to do more with it, and some with how vendors then respond.

In a nutshell, much of this comes down to good, old-fashioned configuration management. For example, lessons from early adopters suggest that the problem of physical server sprawl (which server virtualisation is reputed to resolve) can very quickly be superseded by virtual server sprawl - and we can imagine similar scenarios for virtual storage, with or without thin provisioning.
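To make the sprawl point a little more concrete, here is a minimal sketch - in Python, with an entirely invented inventory format and an arbitrary idleness threshold, rather than any vendor's actual API - of the kind of check a configuration management process might run to flag candidates:

```python
# Minimal sketch: flagging virtual-server sprawl from an inventory feed.
# The VirtualMachine record and the 30-day threshold are illustrative
# assumptions, not any particular management tool's data model.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class VirtualMachine:
    name: str
    owner: Optional[str]   # a VM nobody owns is a classic sprawl symptom
    last_used: datetime    # e.g. last login, or last measurable load
    in_cmdb: bool          # is it registered in configuration management?

def sprawl_candidates(vms: list[VirtualMachine],
                      idle_after: timedelta = timedelta(days=30)) -> list[str]:
    """Return VMs that nobody owns, nobody tracks, or nobody uses."""
    now = datetime.now()
    return [vm.name for vm in vms
            if vm.owner is None
            or not vm.in_cmdb
            or now - vm.last_used > idle_after]
```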

Thinking about management in the broader sense, there remains a way to go as well: there are management tools for virtual environments, and management tools for physical environments, but the twain don't yet meet. Feedback is welcome on this, but common sense suggests that for an operationally quiet life, it will be necessary to manage physical and virtual machines as a single pool, using the same tools. This is not yet the case, which makes me feel just slightly jumpy - I know, they took away my screwdriver a long time ago, but stories of virtualisation being rolled out without due consideration of security, patching, asset management, licensing, business continuity and so on can't help but have that effect.
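Purely by way of illustration - the classes below are hypothetical, standing in for whatever management agents and hypervisor APIs a real toolset would wrap - a single pool managed with the same tools might look something like this:

```python
# Sketch: treating physical and virtual machines as one pool.
# Both back-ends are stand-ins; a real implementation would call a
# management agent and a hypervisor API respectively.
from typing import Protocol

class Machine(Protocol):
    name: str
    def apply_patch(self, patch_id: str) -> None: ...

class PhysicalServer:
    def __init__(self, name: str):
        self.name = name
    def apply_patch(self, patch_id: str) -> None:
        print(f"[physical] {self.name}: applying {patch_id} via management agent")

class VirtualServer:
    def __init__(self, name: str, host: str):
        self.name, self.host = name, host
    def apply_patch(self, patch_id: str) -> None:
        print(f"[virtual] {self.name} on {self.host}: applying {patch_id} via hypervisor API")

def patch_estate(pool: list[Machine], patch_id: str) -> None:
    """One operation, one toolset, whatever lies underneath."""
    for machine in pool:
        machine.apply_patch(patch_id)

patch_estate([PhysicalServer("db01"), VirtualServer("web01", "host-3")],
             "KB123456")
```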

This isn't meant to be a stake in the ground so I can say, "I told you so," in 18 months' time. Rather, I'm looking forward to when virtualisation moves from what it enables now - simplifying the existing environment - to what it could enable, namely a stepping stone towards more dynamic use of IT resources. In principle, a virtual server, and its associated virtual storage, need only exist for the time required to run a given workload - this could be years or, equally, days or hours.
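To sketch what that might look like in code - with stub functions standing in for real hypervisor and storage-pool calls, which will of course vary by platform:

```python
# Sketch: a virtual server and its storage that exist only for the
# lifetime of a workload. The provision/teardown functions are stubs.
from contextlib import contextmanager
import uuid

def provision_vm(template: str) -> str:
    vm_id = f"vm-{uuid.uuid4().hex[:8]}"
    print(f"provisioned {vm_id} from template {template}")
    return vm_id

def provision_storage(size_gb: int) -> str:
    vol_id = f"vol-{uuid.uuid4().hex[:8]}"
    print(f"allocated {size_gb}GB as {vol_id}")
    return vol_id

def teardown(vm_id: str, vol_id: str) -> None:
    print(f"destroyed {vm_id}, reclaimed {vol_id}")

@contextmanager
def ephemeral_server(template: str, size_gb: int):
    vm_id = provision_vm(template)
    vol_id = provision_storage(size_gb)
    try:
        yield vm_id
    finally:
        # reclaimed however the workload ends - after hours or after years
        teardown(vm_id, vol_id)

# The resources exist only for the duration of the block:
with ephemeral_server("linux-base", size_gb=50) as vm:
    print(f"running workload on {vm}")
```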

Achieving this vision requires first that the platform of virtualisation and management tools is in place, working together seamlessly. What's the missing piece? When I spoke recently to David Greschler, Microsoft's director of virtualisation strategy, he summed it up in a single word: ‘orchestration' - that is, software running above the management layer which can make policy-based decisions about what should be running where.
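Again, purely as a sketch - the hosts, workloads and single consolidation policy below are my own illustrative assumptions, not anyone's actual product - policy-based placement at its simplest boils down to something like:

```python
# Sketch of 'orchestration': a layer above the management tools deciding,
# by policy, what should run where. One simple policy shown: pack each
# workload onto the fullest host that still fits it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Host:
    name: str
    cpu_free: float   # spare CPU, arbitrary units
    ram_free: float   # spare RAM, GB

@dataclass
class Workload:
    name: str
    cpu: float
    ram: float

def place(workload: Workload, hosts: list[Host]) -> Optional[Host]:
    """Pick the candidate host with the least headroom left over,
    consolidating load onto fewer machines."""
    candidates = [h for h in hosts
                  if h.cpu_free >= workload.cpu and h.ram_free >= workload.ram]
    if not candidates:
        return None   # a fuller orchestrator might power up another host
    best = min(candidates, key=lambda h: h.cpu_free + h.ram_free)
    best.cpu_free -= workload.cpu
    best.ram_free -= workload.ram
    return best
```

Swap that packing rule for one based on power draw, licensing cost or data-centre affinity and the structure stays the same - which is precisely what lifts orchestration above plain management.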

Before I get castigated in the comments, let me say yes, indeed, this is precisely what is known in the mainframe world as ‘resource management'. And indeed, there are plenty of good reasons why a mainframe could operate as a server virtualisation platform, just as there are plenty of good reasons why an x86 platform could do the same. The difference now is the ubiquity of virtualisation - which will require orchestration to operate across computer systems, and indeed across data centres, if the cloud vision (currently very much ‘push') is to be believed.

So, virtualisation does indeed hold much potential. Hopefully it will be just like riding a bike, as hardware and operating systems evolve in such a way that it becomes part of the fabric (indeed, perhaps virtualisation will really have succeeded when we stop talking about it). But it is important we don't get panicked into thinking we should already be racing along before we reach the point where it is safe to take the training wheels off.