Written by Mihalis Kritikos.


The public service revolution expected from the adoption of artificial intelligence (AI) and machine learning (ML) simultaneously promises positive change and threatens negative societal impacts: one need only mention 'predictive policing' to grasp the potential for both service efficiencies and unintended consequences. AI ethics attempts to unpick these issues and provide a solid ethical framework. However, the snowballing adoption of AI ethics principles and guidelines by national governments, international organisations, research institutions and companies over the last three years raises questions about the actual applicability and effective implementation of these instruments. In response to these concerns, scholars and practitioners are currently seeking ways to translate these principles into practical requirements that can be applied in practice. Some of this work involves translating ethical principles into technical requirements, or into design methodologies such as privacy-by-design, ethics-by-design and ethically aligned design.

Several ethical tools and framework models have been created to visualise ethical concerns and to develop practices for anticipating and addressing the potential negative effects of AI on people. However, many questions arise. Are these technical solutions sufficient to get from AI ethics to specific policy and legislation for governing AI? How can the variety of ethical frameworks be applied consistently in governing data, developing algorithms and actually using AI systems? Who bears this responsibility? And are there (or should there be) mechanisms for enforcement and monitoring in place? What, in fact, is trustworthy and responsible AI, especially with regard to data governance? What is the role of ethical frameworks in ensuring trustworthy and responsible data governance and AI? Are there lessons to be learnt from existing frameworks? How can AI systems best be governed? What are the promises and perils of ethical councils and frameworks for AI governance? What possible frameworks, such as those based on fairness, accountability and transparency, could guide AI governance?

To try to answer some of these questions, STOA launched a study to produce stakeholder-specific recommendations for the responsible implementation of AI systems and technologies, aligning them to already adopted ethical principles. The study, ‘Artificial Intelligence: From ethics to policy’, was carried out by Dr Aimee van Wynsberghe of Delft University of Technology, co-director of the Foundation for Responsible Robotics, at the request of the STOA Panel, following a proposal from Eva Kaili (S&D, Greece), STOA Chair. The study’s central focus is the question of how we can get from AI ethics to specific policy and legislation for governing AI. It builds on the ethics guidelines developed by the European Commission’s High-Level Expert Group on Artificial Intelligence, providing insight into how their principles can be translated into design requirements and concrete recommendations.

The study first provides a brief overview of AI as a technology and the unique features it brings to the discussion of ethics: what AI is, and what is new about it that deserves ethical attention. Particular attention is paid to the role of ‘black boxes’ and to algorithmic fairness. The study then unpacks what ethics is, and how it ought to be understood as a resource in the AI debate, beyond its current use to generate principles.

From an overview of the current literature, the author draws a remarkable range of insights regarding the transparency of AI algorithms, the trade-off between accuracy and fairness, the conceptualisation of AI as a socio-technical system, and the use of Ethical Technology Assessments as a viable mechanism for uncovering ethical issues ab initio. Arguing in favour of viewing AI as an ongoing social experiment that requires appropriate ex ante ethical constraints, assessment of its epistemological constraints and constant monitoring, the author proposes a precautionary approach adapted to the realities and risks of AI.

The study then proposes an extensive range of ethically informed, stakeholder-specific policy options for the responsible implementation of AI/ML products, aligning them with defined values and ethical principles that prioritise human wellbeing in a given context. The entire set of policy options, viewed as ethical constraints, constitutes a meta-ethical technology assessment framework directed towards the public administrations and governmental organisations looking to deploy AI/ML solutions, as well as the private companies creating such solutions for use in the public space.

Among the proposed options, the development of a data hygiene certification scheme, the demonstration of the clear goals of an AI/ML application, and the production of an ‘Accountability Report’ in response to the ethical technology assessment appear the most applicable in the context of the current debate about regulating the ethical aspects of AI. Besides proposing a meta-ethical framework, the author also makes a preliminary identification of the possible concerns surrounding the proposed policy options and their applicability. Particular emphasis is placed on the role of ethicists and the allocation of tasks in completing the ethical technology assessment, the affordability of this process, especially for small and medium-sized enterprises, and the horizontal character of the proposed regulatory process. The study includes useful accounts of the debates regarding the interface between regulation, technology and ethics, as well as a critical engagement with traditional narratives about the role of ethics in the technological innovation process. In the concluding section, the author makes some important remarks about the meaning of ethics in an AI-focused regulatory context, its policy implications and its normative value.

Given the lack of operational experience with AI, and its inherent uncertainties and risks, the study’s proposed framework appears well placed to ensure accountability and transparency when organisations apply ethical frameworks and principles. Its interdisciplinary character, the cross-cutting nature of its insights and its acknowledgement of the role society plays in shaping technology and its regulation could pave the way for AI development that is both operationally efficient and acceptable to society.

Read the full report and accompanying STOA Options Brief to find out more.