The implementation of artificial intelligence (AI) across all areas of daily life, from mobile applications to job recruitment, has brought with it significant ethical implications. The impact of AI is so far-reaching that lawmakers, businesses and even governments are scrutinising it to establish boundaries and rules that can safeguard individuals' rights.
Professor Lokke Moerel, Senior of Counsel at Morrison & Foerster and one of Europe's leading AI lawyers, laid out the ethical dilemmas currently affecting AI implementation, and the need to open black boxes to identify biases, at the recent CIO UK Artificial Intelligence Summit.
She opened her talk by introducing two ethical dilemmas to her audience. In the first, data scientists at a pharmaceutical company were presented with a situation where a product was in short supply. The firm's executives asked them to predict how best to distribute the product to limit the shortage and reduce complaints.
The data team reorganised the distribution with a positive outcome and a minimal number of complaints. In theory, the black box algorithm was successful and all stakeholders were satisfied.
However, explained Professor Moerel, a closer look at the reorganised distribution revealed that it had actually deprioritised certain postal codes - those of underprivileged neighbourhoods, with communities less likely to complain as a result of social alienation. What to do?
"'The AI did it' is not an acceptable excuse. Algorithmic accountability implies an obligation to report and justify algorithmic decision-making and to mitigate any negative social impacts or potential harms," Professor Moerel said, referencing an MIT article about algorithmic accountability from 2017.
"The second scenario that the academic described concerned an AI-powered recruiting tool which favoured male candidates over female ones, but not more so than in the past. The underlying cause of this bias was that the algorithm had been trained using historical data, which consisted predominantly of resumes of male candidates."
"If your data is biased - 'one-sided' - the algorithm will be biased," declared the lawyer and academic. "You can't just use all your historical data because all your historical data is likely biased. It's very hard to get clean data."
Digital Revolution, the new Industrial Revolution
In Professor Moerel's view, there are great similarities between the first Industrial Revolution and today's digital one. Although the former brought with it technological progress, it also created child labour, pollution and poor working conditions.
New technologies required numerous trials before they could work as required and be fully regulated.
"The first car had a person walking in front of it with a red flag because it didn't have brakes," she said. "Think about what society did then: roads, airbags, rules to regulate vehicles safety. Artificial intelligence is that first car without brakes. How it looks today is not how it will look in half a year from now and after we start cracking the black box."
Although it might be tempting to adopt a pessimistic view of AI, the scholar stressed that the new technology is only taking its first steps, and that its issues will ultimately be addressed. As was the case in the Industrial Revolution, however, some of the problems might take years to be dealt with adequately.
Above all, Professor Moerel stressed the need for accountability. Despite decisions being processed by machines, there must be human accountability for the AI-driven tools used by organisations.
"If you end up in court and your answer is 'the algorithm did it', it won't be accepted," she said. "It's one of your tools and you have to deal with it, making sure it comes to the right solutions: you must justify your algorithm-making and mitigate the negative effects. That is your task."
According to Professor Moerel, at the core of the AI ethics dilemma lies privacy. From a legal perspective, there is no ownership of data - data is an intangible asset. There are also no intellectual property rights in data.
"The bizarre thing is that all the other fundamental rights - discrimination, freedom of speech, and so on - are folded into the assessment whether you can process the data under data protection laws," she said. "That's why it's all about privacy."
AI feeds on vast amounts of data, which it analyses to make predictions. The professor cited an example: people who suffer from obesity share a particular set of characteristics, which results in predictions. These predictions then lead to actions, such as insurance companies increasing premiums.
This implies a major legal shift in the burden of proof. If someone has those characteristics, they will be predicted to become obese. How can people then prove that they won't be?
"The question is, do I get a chance to prove the algorithm was wrong?" Professor Moerel asked the audience. "There are a number of challenges, including unforeseen applications and discrimination."
Professor Moerel disagrees with calls for new extensive legislation, as the GDPR already provides adequate rules for data processing in the context of AI.
"GDPR requires you to mitigate impact on individuals plus society as a whole. This requires also an ethical assessment. There are many examples where companies were legally compliant, they got the consent, and still made everyone upset."
She added: "Law is what you may or may not do - what you are allowed to do - and ethics is what you should or should not do. It's a different assessment. Other than what people think, ethics are quite stable over time."
If organisations and governments are transparent about the data and AI tools they use, then there is hope for better use of AI. The key point is that black boxes shouldn't produce unfair biases but should instead benefit individuals or society as a whole. In addition, algorithms need to be auditable and people must be accountable for them.
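One simple form such an audit can take - offered here purely as an illustrative sketch, not a method from the talk - is comparing positive-outcome rates between groups, as in the "four-fifths" disparate-impact heuristic from US employment guidelines. All data below is invented:

```python
# Hypothetical fairness audit: compare the rate of positive decisions
# (1 = selected, 0 = rejected) across two groups of candidates.

def selection_rate(outcomes):
    """Fraction of candidates in a group who received a positive decision."""
    return sum(outcomes) / len(outcomes)

group_a = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]  # majority group: 80% selected
group_b = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]  # minority group: 30% selected

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"disparate impact ratio: {ratio:.2f}")

# The 0.8 threshold is the conventional "four-fifths rule" cut-off.
if ratio < 0.8:
    print("flag: potential adverse impact - investigate the model")
```

A check like this says nothing about why a disparity exists, but it gives auditors a concrete, reportable number - which is precisely the kind of accountability the professor argues for.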
"We will overcome all the downsides of AI like we did with the Industrial Revolution but you have to be open about the negatives or the concerns so you can address them," Professor Moerel concluded. "If you don't, you'll get a big backlash."