High-performance computers are changing the face of cutting-edge research at many of the country’s top universities and higher education institutes. Gary Flood looks at the systems aiding the science.

Welcome to the world of HPC (high-performance computing), or to you and me – the supercomputer. These are clusters of processors engineered to offer their users processing, memory and storage power that not even the most powerful commercial computing platforms can match, let alone the niftiest corporate road warrior’s laptop.

And even cost and budget constraints (an even bigger worry for a college IT head than for his public sector equivalent) can’t keep the UK’s tertiary sector’s hands off this kind of leading-edge research capability.

There are some 176 UK universities, higher education institutes and research councils, many of which either have or are looking to install such configurations.

Take the recently installed infrastructure at Cardiff University, part of the School of Psychology. It is currently the UK’s largest academic high-performance computer cluster set up specifically to support the institution’s Brain and Repair Imaging Centre.

Its work centres on overlaying so-called MEG (magnetoencephalography, a technique for capturing neural magnetic fields) images onto MRI (magnetic resonance imaging) scans using parallelism – a technique that delivers an amazing 100 complete brain images in just over a quarter of an hour, 24 times faster than processing them sequentially.
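The Centre’s actual pipeline is not published here, but the pattern it describes is the classic “embarrassingly parallel” batch job: each scan can be processed independently, so the work scales across the cluster’s cores. A minimal Python sketch of that pattern (the function name and its body are hypothetical placeholders, not Cardiff’s code):

```python
from multiprocessing import Pool

def overlay_meg_on_mri(scan_id):
    # Placeholder for the real work: align the MEG magnetic-field
    # image with the MRI scan and return the combined brain image.
    return f"brain-image-{scan_id}"

if __name__ == "__main__":
    scan_ids = range(100)        # the 100 complete brain images quoted above
    with Pool() as pool:         # one worker per available processor core
        images = pool.map(overlay_meg_on_mri, scan_ids)
    print(len(images))           # 100
```

Because each image is independent, the speedup is roughly proportional to the number of workers until I/O or the statistical post-processing becomes the bottleneck.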

“This means processing the images in near real time, which lets our researchers get a much quicker understanding of both how the brain works and what effects injuries are having,” says Spiro Stathakis, the Centre’s IT systems manager.

This is done with a 75-node cluster of IBM e326m dual-processor, dual-core AMD Opteron servers – the equivalent of 300 processor cores – with performance (conservatively) estimated at 530 Gigaflops (a gigaflop is a billion floating point operations per second). The set-up also encompasses 40 Terabytes of storage.
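The arithmetic behind those headline figures can be checked in a few lines (the node, processor and Gigaflop numbers are from the article; the per-core figure is an inference, not a quoted specification):

```python
# Cluster arithmetic for the Cardiff set-up described above.
nodes = 75                       # IBM e326m servers
processors_per_node = 2          # dual-processor
cores_per_processor = 2          # dual-core AMD Opterons

cores = nodes * processors_per_node * cores_per_processor
print(cores)                     # 300 cores in total

cluster_gflops = 530             # the conservative estimate quoted above
per_core_gflops = cluster_gflops / cores
print(round(per_core_gflops, 2)) # roughly 1.77 GFLOPS per core
```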

The system needs it: there is a very high load of statistical analysis going on behind the scenes to make those interesting images appear, says Stathakis. “We next want to write some more code to start doing a lot more exploratory research by creating algorithms for more intense data analysis.”

Another UK higher education body that is enthusiastic about supercomputers is the University of Surrey.

At Surrey, a blade-based cluster built around Intel Core 2 processors and IBM system storage has been set up for experiments and research into multimedia security and “complex, intelligent and adaptive systems” – such as computational modelling of parts of the brain and of how schizophrenics process certain visual features. Standard research areas, sure – but there are also applications for the financial services industry.

Financial impetus

The Department of Computing at Surrey has teamed up with a financial analysis firm called CDO2 on a three-year bid to develop better pricing and risk analysis technology, which will ultimately help banks, hedge funds and investment outfits to trade in a somewhat exotic financial instrument called a collateralised debt obligation, or CDO. This project, sponsored by the former DTI, centres on using supercomputing power to model huge problem spaces and run simulations exploring very complex risk analysis – but it is just one of the things the cluster will be used for.
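Why does CDO pricing need a supercomputer? Such risk analysis is typically driven by Monte Carlo simulation: repeat a randomised scenario millions of times and average the outcomes. A deliberately crude sketch of the idea (all parameters hypothetical; real CDO models also have to capture correlation between defaults, which is where the problem space explodes):

```python
import random

def simulate_portfolio_loss(n_names=100, default_prob=0.02,
                            trials=10_000, seed=42):
    # Crude Monte Carlo: count defaults in a portfolio of independent
    # names and average the fractional loss over many trials.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        defaults = sum(rng.random() < default_prob for _ in range(n_names))
        total += defaults / n_names
    return total / trials          # expected fractional portfolio loss

print(round(simulate_portfolio_loss(), 3))   # close to 0.02
```

Each trial is independent, so the workload parallelises naturally across a cluster – more cores mean more trials, and more trials mean tighter error bars on the risk estimate.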

“Datasets in science are getting bigger,” says Lee Gillam, a PhD and research fellow in the Department. “As a result scientists increasingly look to use computers with more power than ever before.” Surrey was particularly drawn to a blade-based solution to deliver such power, he adds. “The reduced footprint was certainly a factor, as well, to be honest, as the fact that this was backed by IBM.”

Meanwhile, at the University of Westminster, Stephen Winter enthuses about the supercomputer the college’s School of Informatics, where he is Dean, has just had installed. “We want to use this to look at problems around traffic flow,” he told CIO, “as well as do some modelling around climate change and health informatics issues.”

For its part, Westminster has a grid-style 32-node cluster of Sun servers that can be scaled up to three times that size. “This is a mix of proprietary and open source that we think will be an excellent computing platform,” he adds. Initial work is concentrating on making the system as user-friendly as possible, with Winter convinced that “grid is a good way to deliver HPC power in a very transparent way”.

The verdict is clear: UK higher education is embracing a rich variety of techniques to get supercomputer power at an affordable rate, while remaining ecumenical about the underlying architecture or delivery method – cluster, parallel, blade or grid. It seems speed is the key to solving problems that can’t be solved any other way.

Supercomputing facts

According to market watcher IDC, the global market for such machines grew by 24 per cent in 2005 to just over $9bn – the last full year for which the company has data. IDC is in any case predicting a third consecutive year of growth for 2006, with a 9.4 per cent increase.

The same analyst group splits the market between IBM and Hewlett-Packard (at 31 per cent each), followed by Sun (15 per cent) and Dell (8.5 per cent).

The most powerful supercomputers can be found on the Top 500 list, updated twice a year: once in June at the ISC European Supercomputing Conference and again at a US Supercomputing Conference in November. The top spot is currently held by an IBM machine, BlueGene – currently reaching sustained speeds of 360-plus Teraflops (a teraflop is a trillion floating point operations per second) but potentially capable of Petaflop-level (a thousand trillion operations per second) processing power.
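The units climb in factors of a thousand, which makes the gap between a departmental cluster and the Top 500 leader easy to quantify (all figures are the ones quoted in this article):

```python
GIGA, TERA, PETA = 1e9, 1e12, 1e15

bluegene_flops = 360 * TERA        # BlueGene's sustained speed, as quoted
print(bluegene_flops / PETA)       # 0.36 – about a third of a petaflop

cardiff_flops = 530 * GIGA         # the Cardiff cluster, for scale
print(bluegene_flops / cardiff_flops)  # roughly 679 times faster
```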

An interesting new entrant into the world of supercomputing is Microsoft, which is launching its own Compute Cluster offering.

What is a ‘supercomputer’?

The term itself (or rather ‘Super Computing’) was first used by the New York World newspaper in 1929 to refer to large custom-built tabulators IBM had made for Columbia University. Fast-forward to computer genius Seymour Cray and the 1960s, when he and his team regularly built world-beating machines that achieved, for the time, dazzling speeds with up to 16 processors.

These days a supercomputer is commonly defined as being leading edge in terms of processing capacity, particularly speed of calculation, at the time of its introduction.

Such computers often get media interest for things like their chess-playing ability. But their real value lies in their application to cracking heavy-duty scientific and research problems.