The European Union has laid out a strategy to promote the growth of Artificial Intelligence technology in Europe. As so often with EU initiatives, the proposals are long on vision but short on specific practical changes. EU-based entities will welcome major government-level investment in and support for AI technology development, but they would have been even more grateful if the EU had clarified, for example, how privacy law interfaces with AI.

In its 2017 mid-term review of the ongoing EU Digital Single Market (DSM) strategy, the European Commission highlighted the importance of ensuring that the EU establishes itself as a leader in AI. Hence the Commission's communication on AI for Europe, which sets out the scope of its European AI initiative and its key objectives and concerns, and gives an indication of what to expect from the Commission in this area between now and 2020 and beyond.

Essentially, the Commission wants to increase technological and industrial capacity within the EU and encourage the use of AI in both the private and public sectors. That's all pretty vague – as is the desire to "anticipate the socio-economic changes that AI will bring" and continue to develop an "appropriate and ethical legal framework" for AI.

Europe already lags behind the US and Asia in AI investment. If it fails to invest adequately, the EU risks becoming primarily a consumer of AI solutions developed elsewhere, as has already proved to be the case for digital goods and services. Given that the whole rationale for the DSM strategy is to redress the dominance of the EU tech market by non-EU companies, it seems right that the Commission wants to evolve with the market rather than play a desperate game of catch-up.

So, the Commission has ambitiously urged the EU to increase its combined private and public sector investment in AI to at least €20 billion by the end of 2020. In support of this, the Commission has pledged to increase its own AI investment by approximately 70%, to around €1.5 billion, over the same period.

The Commission also wants to ensure that AI is available and accessible to all, and it has particularly committed to facilitate access for small and medium-sized enterprises, which may otherwise struggle to acclimatise.

One way in which it seeks to do this is by supporting the development of a "single access point for all users to relevant AI resources in the EU" in the form of an AI-on-demand platform. It's envisaged that the platform will include knowledge, data repositories, computing power, tools and algorithms, and will enable potential users of the technology to assess how and where to integrate AI solutions into their processes, products, and services.

Data and Deep Learning

Without access to vast amounts of data, much of the AI industry would arguably stagnate. The development of certain AI systems, notably those based on machine learning, depends on the ability to identify patterns in available data sets and apply those patterns to new data. Most machine-learning projects therefore start by using large data sets to "train the brain". The Commission has recognised this, stating that "access to data is a key ingredient for a competitive AI landscape".
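For readers less familiar with the mechanics, here is a minimal sketch of what "training" on a data set means in practice. It uses Python's scikit-learn library and synthetic data as an illustration of the general technique; nothing in it comes from the Communication itself.

```python
# Minimal illustration: a model learns patterns from one data set
# and applies them to data it has never seen before.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A large labelled data set stands in for the training data
# the Commission says AI developers need access to.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_new, y_train, y_new = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # patterns are learned from the training set

# The learned patterns are then applied to previously unseen data.
print(f"Accuracy on new data: {model.score(X_new, y_new):.2f}")
```

The point the Commission is making follows directly: the more (and better) data available at the `fit` stage, the more useful the resulting model, which is why access to data is framed as a competitive issue.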

The Commission has urged Member States to ensure that their public policies encourage the wider availability of privately-held data and emphasised that companies should recognise the importance of re-using non-personal data for AI training purposes.

But, of course, separating personal from non-personal data is a major task, and the Commission does not clarify how anonymisation or pseudonymisation techniques ought to work in relation to personal data used for AI development.
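To make the gap concrete, one common pseudonymisation technique is to replace direct identifiers with a keyed hash before a record is used for training. The sketch below is illustrative only (the key name and record fields are invented for the example); whether this kind of transformation is sufficient under the GDPR is exactly the sort of question the Communication leaves open.

```python
# Illustrative pseudonymisation: replace a direct identifier with a
# stable keyed hash so records can still be linked for training
# without exposing the identifier itself.
import hashlib
import hmac

SECRET_KEY = b"held-separately-by-the-controller"  # hypothetical key

def pseudonymise(identifier: str) -> str:
    """Return a stable keyed hash in place of a direct identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "age": 34, "purchases": 12}
training_record = {**record, "email": pseudonymise(record["email"])}
print(training_record)  # age and purchases kept; email no longer directly identifying
```

Note that under the GDPR, pseudonymised data of this kind generally remains personal data, because the key holder can reverse the mapping; truly anonymised data falls outside the regulation altogether. Where the line sits for AI training sets is precisely what remains unclarified.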

Ethical and Legal Framework

As technology advances, so too must the regulation of that technology. Building trust and accountability around AI is one of the Commission's objectives, which it seeks to achieve by increasing the transparency and "explainability" of AI systems.
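At its simplest, "explainability" means being able to say why a model produced a given output. A minimal sketch of that idea, using a scikit-learn linear model on a public sample data set (my choice of example, not anything prescribed by the Commission):

```python
# Illustrative explainability: a linear model's coefficients show how
# much each input feature pushed the prediction, so its decisions can
# be traced back to the data.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# Rank features by the weight the model assigned to them.
coefs = model.named_steps["logisticregression"].coef_[0]
ranked = sorted(zip(data.feature_names, coefs), key=lambda p: abs(p[1]), reverse=True)
for name, weight in ranked[:5]:
    print(f"{name}: {weight:+.2f}")
```

Many modern AI systems are far less transparent than this, which is why the Commission treats explainability as a regulatory objective rather than a given.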

Advances have already been made to this end in the data sphere through the General Data Protection Regulation, which took effect in May 2018, as well as through various other proposals under the DSM (such as the ePrivacy Regulation and the Cybersecurity Act). These have been deemed "essential" by the Commission, which stated that "businesses and citizens alike need to be able to trust the technology they interact with, and have a predictable legal environment".

To address its specific concerns in the AI field, the Commission intends to develop draft AI ethics guidelines by the end of the year in conjunction with "all relevant stakeholders". It is intended that these guidelines will look specifically at issues such as the future of work, fairness, safety, security, social inclusion, and algorithmic transparency, as well as broader topics, including the impact of AI on fundamental rights such as privacy, consumer protection, and non-discrimination.

The Commission has further stated its intention to engage relevant stakeholders in this sphere by setting up a "European AI Alliance" by July 2018, the purpose of which will be to share best practices, encourage private investment, and carry out other activities related to the development of AI.

Conclusion

It's unclear why the EU feels it's OK to wait until 2019 for the next stage of clarification of AI law and policy. Many companies are already well underway with AI and machine-learning development projects. They are already grappling with the legal framework and with how to ensure GDPR compliance and accountability in their target solutions. In the absence of clear rules, they have to settle on their own approach to transparency, bias prevention, and privacy by design. The EU is moving too slowly and risks leaving a legal vacuum as AI technology development races far ahead of the legal framework.