Ethical algorithms are still off the radar for most organisations. But a growing number are looking at the potential of cognitive systems, and the prospect of computers making more decisions that affect people's lives is bound to stoke concerns about the rights and wrongs of the process.

The work on ethical algorithms – aimed at providing a sense of 'fair play' in the computing process – is at the proof-of-concept stage, but it already prompts questions about possible business applications.

Jeremy Pitt, leader of the work programme and deputy head of the Intelligent Systems and Networks Group at Imperial College, suggests the big potential could be in creating an environment for new applications. He likens it to Apple's creation of the App Store, which has enabled an army of developers to harness the potential of its iOS operating system, or the potential that some see in the open data movement.

"The way I visualise it is that for many of these applications you have this digital ecosystem of edge nodes, that is you and me, generating information," he says. It would involve a bunch of networking platforms on which people could develop applications. "If you open up the APIs and control the privacy properly, you can unleash a vast spirit of entrepreneurialism."

Pitt says there are a number of dimensions to this. "One is about resource allocation, finding a way an algorithm can allocate scarce resources to individuals fairly, based on what's happened in the past, what's happening now and what we might envisage for the future.
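One way to read this idea of history-aware fair allocation is a rule that always serves whoever has received the least so far. The sketch below is illustrative only – the function name, data shapes and tie-breaking rule are assumptions, not part of Pitt's work.

```python
# A minimal sketch of history-aware fair allocation: each scarce unit
# goes to the claimant with the smallest running total in `history`.
# All names and structures here are hypothetical.

def allocate(claimants, units, history):
    """Allocate `units` indivisible resource units one at a time,
    always to the least-served claimant. Returns this round's
    allocation and updates `history` in place."""
    round_allocation = {c: 0 for c in claimants}
    for _ in range(units):
        # Pick the least-served claimant; ties broken by name for determinism.
        neediest = min(claimants, key=lambda c: (history.get(c, 0), c))
        round_allocation[neediest] += 1
        history[neediest] = history.get(neediest, 0) + 1
    return round_allocation

history = {"alice": 3, "bob": 0, "carol": 1}
print(allocate(["alice", "bob", "carol"], 4, history))
# bob, the least-served claimant, receives most of this round's units
```

Because the rule looks at the cumulative record rather than only the current round, a claimant who lost out in the past is automatically favoured later – one simple sense in which "what's happened in the past" can shape a fair decision now.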

"Another aspect is around alternative dispute resolution, trying to find ways of automating the mediation process." He relates this to a retired judge telling him that a crucial element of a successful legal system is that the loser of a case, despite being unhappy, can appreciate that the process was fair and transparent and has no resentment against the system.

"A third is in what we have called design contractualism, the idea that we make social, moral, legal and ethical judgements, then try to encode it in the software to make sure those judgements are visually perceptive to anyone who has to use our software."

Pitt is already involved in the early stages of one initiative to apply the concepts to the allocation of power supplies, under the Autonomic Power System project led by the University of Strathclyde. This is directed at the fair allocation of energy in a future environment dominated by wind, wave and solar sources, whose output fluctuates with the weather.

"You could think of things like smart grids that have algorithmic decision-making," he says.

"Another application is how to keep people happy in a shared physical environment. For example, open plan offices when people have to work in the same space."

This could be applied in buildings such as hospitals and schools, which are subject to public accountability and a requirement not just to be fair, but to be seen to be so. "Not only could you generate a fair outcome, but be able to explain how you arrived there," Pitt says.

Others acknowledge the potential, but say it has to be approached with considerable caution. Carl Bates, UK head of Deloitte Analytics, says it could be possible to develop an "ethical classifier" to make automatic judgements based on data, but that it could stir up an adverse reaction.

"Careful consideration would need to be given to the application of an ethical classifier across different geographies, cultures, law and politics," he says. "From a business perspective, it's unlikely that consumers would want an organisation to make both the original decision and also judge whether that decision was ethical.  Independence here is important, but the problem is that only the original decision-makers have all the data."

It is possible that using computers to make decisions ethically could be a double-edged sword, according to Paco Hope, principal at software consultancy Cigital.

"Computers are good at problems like distributing scarce resources based on predefined qualifications," he says. "Given good criteria, they can create an arms-length distance between those in charge of a decision (such as hiring, selecting a supplier, product pricing) and some of the criteria used in that process.

"On the other hand, computer algorithms can create distortions. They can become the ultimate hiding place for mischief, bias, and corruption. If an algorithm is so complicated that it can be subtly influenced without detection, then it can silently serve someone's agenda while appearing unbiased and trusted.

"Whether well or ill intentioned, simple computer algorithms create a tyranny of the majority because they always favour the middle of the bell curve. Only the most sophisticated algorithms work well in the tails."

As things stand, these are theoretical positions, but the use of cognitive computing could take off within the decade – and that is the point at which the issue of ethical algorithms will become more pressing.