EU releases AI ethics guidelines

New guidelines set out seven pointers for achieving trustworthy AI and form part of the EC's AI strategy


The European Commission (EC) has taken the unprecedented step of releasing a set of ethical guidelines for artificial intelligence (AI) development.

While AI presents a myriad of opportunities across all sectors of society and industry, the EC is hoping to take steps to address the legal and ethical questions spawned by the widescale adoption of AI.

The guidelines, which have been broken down into seven pointers for achieving trustworthy AI, are based on the work of the High-Level Expert Group on AI (AI HLEG), comprising 52 independent experts from academia, industry and civil society, who were appointed by the EC in June 2018.

The guidelines form part of the EC's AI strategy, which has been broken down into three pillars: encouraging public and private uptake of AI; ensuring EU member states are prepared for socio-economic changes brought about by AI; and creating an appropriate ethical and legal framework, under which the seven AI ethics guidelines fall.

The seven guidelines form part of a pilot phase initiated by the EC, which will attempt to ensure that the ethical guidelines for AI development can be implemented in practice, with the EC estimating that public and private investments in AI across the EU will be at least €20bn ($22.5bn) annually over the next decade.

Andrus Ansip, vice-president of the EC's Digital Single Market strategy, remarked: "The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies."

"Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust."

The seven guidelines set out as part of the EC's Digital Single Market strategy are:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

Mariya Gabriel, commissioner for digital economy and society, described the launch of the guidelines as an "important step" toward ethical and secure AI in the EU.

"We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society," Gabriel remarked. "We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI."

The next steps of the EC's AI strategy will include the launch of a large-scale pilot with partners this summer, followed by a push for international consensus on human-centric AI, as the EC attempts to strengthen cooperation with what it described as "like-minded partners" such as Japan, Canada and Singapore, as well as through organizations such as the G7 and G20.
