Building a Foundation for Ethical AI: European Commission publishes Guidelines for Trustworthy AI

On 8 April 2019, the European Commission published its Ethics Guidelines for Trustworthy AI. The Guidelines are part of a wider EU plan to enhance the Union's competitiveness in the field of AI, as they provide all stakeholders – companies, academia, civil society, state institutions – with a horizontal foundation for achieving Trustworthy AI. Though non-binding in nature, the Guidelines provide a common baseline, an ethical minimum, for designing, developing or using AI.

Need for rules of ethics – origin of the Guidelines

In 2017, the European Council stressed the need to urgently take steps in relation to emerging trends such as AI and blockchain technology, while at the same time ensuring the protection of data, digital rights and ethical standards.1 As an outcome, the European Commission presented its AI Strategy in its communication of 25 April 2018, COM(2018) 237, in which it set out its aims to (1) boost the EU's technological and industrial capacity and AI uptake across the economy; (2) prepare for socio-economic changes; and (3) ensure an appropriate ethical and legal framework. The Ethics Guidelines are a product of this third aim.

The background to the European Council and Commission's activity is the competitive international landscape. Compared to China and the US, Europe is considerably behind in terms of private investment in AI.2 As the McKinsey Global Institute put it: “Europe has digital start-ups and considerable innovation, but, unlike the United States and China, has been largely unable to translate that consistently into global digital platforms”. Europe therefore has to take rapid measures to raise its competitiveness: integrating and directing its high-level R&D, its innovative deep-tech start-ups and its strong industry to focus more on AI, and attracting greater private investment. To reach that aim, creating an environment with clear ethical standards and legal certainty is crucial.

The task was assigned to the High-Level Expert Group on Artificial Intelligence (AI HLEG), established in June 2018. In addition to the Guidelines, the AI HLEG has already issued a document defining AI, and is expected to deliver Policy and Investment Recommendations and to advise the Commission on legislative evaluation processes, next-generation digital strategy development and interactions with a broader set of stakeholders.3 During consultations with stakeholders and Member States on the first draft, published in December 2018, the Guidelines were “deemed a good starting point”.

Meaning of “trustworthiness”

The Guidelines describe trustworthy AI through three characteristics: it has to be lawful, ethical and robust. As the Guidelines' main purpose is to set the standard of ethics, legal issues have been left aside. Legal matters that cover or are connected to AI – data protection, product liability, consumer law and human rights – are only touched upon in passing.

The Guidelines present rules of ethics derived from fundamental rights, together with conditions for robust AI. They first set out the ethical principles, specified as ethical imperatives that should be adhered to when developing, deploying or using AI systems: respect for human autonomy, prevention of harm, fairness and explicability. Systems – even AI systems – cannot apprehend ethics on their own, yet we expect them to make the best decisions – decisions that can be trusted. To overcome the vagueness of ethical rules and to put them into practice, the Guidelines therefore establish concrete requirements based on the aforementioned ethical imperatives. These requirements – human agency and oversight, technical robustness and safety, privacy and data protection, transparency, non-discrimination, societal and environmental well-being and accountability, among others – are meant to protect common values. They combine expectations towards the system itself with expectations regarding its decisions. At the same time, following the requirements should safeguard the security and reliability of an AI system.

It can be concluded that the scope of the requirements is rather comprehensive. What must be kept in mind – and what the Guidelines themselves point out – is that tensions between different principles and requirements may arise. For example, ensuring privacy and data protection while also providing a robust system may call for deliberation and trade-offs.

The Guidelines have been criticised for not imposing binding restrictions on the development of AI weapons. One example is the statement by Professor Thomas Metzinger of the University of Mainz, one of the Guidelines' drafters, who criticised the document for not prohibiting the development and use of AI for weapons – a position he had already strongly presented in a research paper published by the European Parliamentary Research Service in 2018.

What lies ahead?

The development process of the Guidelines is not over yet. As the Guidelines are intended to be a practical tool for AI stakeholders, they need first-hand feedback from their addressees. A piloting process among volunteering stakeholders will therefore begin in summer 2019. Based on the feedback, the AI HLEG will review and amend the assessment list.

Other organisations

The European Commission is, of course, not the only institution focusing on AI. As investment and development grow, the need for strategies and guidelines becomes acute for states as well as private actors.

One interesting initiative, led by the Université de Montréal and concluded in 2018, is the Montreal Declaration for Responsible Development of Artificial Intelligence (Montréal Declaration). Content-wise, the declaration reflects core values similar to those of the Guidelines for Trustworthy AI: well-being, autonomy, intimacy and privacy, solidarity, democracy, equity, inclusion, caution, responsibility and environmental sustainability. The Montréal Declaration is unique, however, in that its signatories are individual citizens as well as organisations such as private companies, academic institutions and NGOs.

In May 2019, the OECD published its Principles on AI by adopting the Recommendation of the Council on Artificial Intelligence, approved by its member states. The Recommendation again stresses the same core values as a baseline for any AI-related activity. Compared to the Ethics Guidelines for Trustworthy AI, however, the OECD document is much more general and addresses governments specifically, whereas the Guidelines are meant as a tool for everyone designing, developing or deploying AI systems.

Other large organisations have not yet introduced comparably comprehensive guidelines or declarations focusing specifically on AI. This does not mean, however, that AI is being left out of the picture. For example, the Council of Europe's Parliamentary Assembly published recommendations on technological convergence, artificial intelligence and human rights in 2017, and the organisation is focusing on AI in a number of fields. In May 2019, the Committee of Ministers agreed to further examine the feasibility and potential elements of a legal framework for AI.

The United Nations addresses AI through several entities. The International Telecommunication Union has become one of the key UN platforms for exploring the impact of AI. To address issues related to lethal autonomous weapon systems (LAWS), the Group of Governmental Experts (GGE) on emerging technologies in the area of LAWS was established in 2016. The GGE delivered a report in 2017 confirming its focus, and a report in 2018 suggesting possible guiding principles. Furthermore, the United Nations Centre on Artificial Intelligence and Robotics addresses the risks and benefits of AI and robotics from the perspective of crime and security through raising awareness, education, the exchange of information, and harmonisation among stakeholders.

AI, as a major technological development, has clearly made its way onto the agendas of international organisations. Hopefully these trends will help lay sufficient foundations for AI technologies to advance the common goals of humanity and to minimise the collateral risks inherent in the use of new technologies.

Author: Liina Lumiste (née Hirv), NATO CCD COE Law Branch

This publication does not necessarily reflect the policy or the opinion of the NATO Cooperative Cyber Defence Centre of Excellence (the Centre) or NATO. The Centre may not be held responsible for any loss or harm arising from the use of information contained in this publication and is not responsible for the content of the external sources, including external websites referenced in this publication.

  1. European Council meeting (19 October 2017) – Conclusions: http://data.consilium.europa.eu/doc/document/ST-14-2017-INIT/en/pdf
  2. The Communication relies on data from 2016, according to which private investments in AI amounted to 2.4-3.2 billion euros in Europe, 6.5-9.7 billion euros in Asia and 12.1-18.6 billion euros in North America. Data from: 10 imperatives for Europe in the age of AI and automation, McKinsey, 2017.
  3. The tasks of the AI HLEG are described in greater depth on the European Commission website: https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence