Written by Mihalis Kritikos.
Artificial intelligence (AI) is affecting the architecture and implementation of law in several ways. AI systems are being introduced in regulatory and standards-setting bodies and courts in several jurisdictions, to advance the functions of the law and facilitate access to justice. Sound standards and certifications for AI systems need to be created so that judges, lawyers and citizens alike know when to trust and when to mistrust AI. Within this frame, several questions arise: Do we need ‘legal protection by design’? What are the legal and ethical boundaries to AI systems? Are existing legal frameworks adequate to cope with the challenges associated with the deployment of AI?
To respond to these questions, and in view of the recent launch of its new Centre for Artificial Intelligence (C4AI), STOA co-hosted the 2020 edition of the Athens Roundtable on Artificial Intelligence and the Rule of Law, on 16‑17 November 2020, with the United Nations Educational, Scientific and Cultural Organization (UNESCO) and other prominent institutions. Co-founded in 2019 by IEEE SA, the Future Society, and ELONtech, the Roundtable was held virtually, from New York, under the patronage of H.E. the President of the Hellenic Republic Katerina Sakellaropoulou. The mission of the Athens Roundtable is to advance the global dialogue on policy, practice, international cooperation, capacity-building and evidence-based instruments for the trustworthy adoption of AI in government, industry and society, through the prism of legal systems, the practice of law and regulatory compliance. The two-day event, which attracted more than 700 attendees, reviewed progress on the AI governance initiatives of key participating legislative, regulatory and non-regulatory bodies, exchanged views on emerging best practices, discussed the world’s most mature AI standards and certification initiatives, and examined those initiatives in the context of specific real-world AI applications.
The event featured prominent speakers from international regulatory and legislative bodies, industry, academia and civil society. There was consensus that, to protect our democracies, it is imperative to ensure that AI is deployed in ways that do not undermine the rule of law. The speakers agreed that the trustworthy adoption of artificial intelligence is predicated on a thorough examination of the effectiveness of AI systems and on constant review of their legal soundness, especially in high-risk domains. This is critical to ensure that societies capture the upsides of AI while minimising its downsides and risks. During the discussion, the use of algorithmic systems to support, or even fully assume, decision-making in legal questions directly affecting humans emerged as a key issue: ‘black box’ algorithms, possibly developed on the basis of potentially biased data and with no clear chain of accountability, should be considered unacceptable. The representatives of all major international organisations agreed that a strengthened working relationship between the EU, the Organisation for Economic Co-operation and Development (OECD), UNESCO and the Council of Europe is a critical success factor in establishing impactful governance frameworks and protocols that leverage the entire policy toolbox smartly, from ‘self’ to ‘soft’ and ‘hard’ regulation.
In both her opening and closing remarks, STOA Chair Eva Kaili (S&D, Greece) highlighted that Europe should lead these efforts and pave the way for the establishment of a legal framework on human-centric AI, similar to the General Data Protection Regulation (GDPR), and for the development of commonly agreed metrics for ethical AI. In her view, the rule of law will have to be synonymous with preventing governments and big corporations from using AI technologies to gain access to citizens’ sensitive personal data, or from using perception manipulation techniques to that end. The panellists also agreed that enhanced algorithmic scrutiny is necessary, combined with a thorough assessment of the quality of such computer-based decision-supporting systems with regard to their transparency, the provision of a meaningful scheme of accountability, and the minimisation of bias.
The discussion also focused on the various ways AI can be regulated, as well as on how algorithmic decision-making systems can be controlled and audited, including the methodologies needed to analyse automated systems for possible flaws and to identify common ways of calibrating risk. Carl Bildt, former Prime Minister of Sweden, recommended that the EU should cooperate closely with organisations such as UNESCO to specify its ethical principles and should create, along with its transatlantic partners, equivalent systems of trust for all parts of society. Algorithmic bias in legal and judicial environments became a topic of discussion across almost all panels, and most recommendations converged on the need to build AI systems that are as diverse as our societies, given that technology can become a magnifier of social inequalities.
The speakers also emphasised the need to intensify efforts to regulate weaponised AI and to reach an international agreement on definitional issues and the red lines to be drawn when developing and deploying AI applications in critical domains. In several sessions, training and education to enhance algorithmic literacy were advanced as a key requirement for safeguarding citizens’ trust, as well as for allowing users to exercise, in a meaningful way, their right to be forgotten, their right to an explanation when their data are used by AI algorithms, and their right to redress against decisions made by AI systems. Several regulators also highlighted the mismatch between the traditional regulatory approach and the fast pace of technological development in the domain of AI, which points to the urgent need to introduce smart regulatory instruments, including ethical impact assessments.
In her concluding remarks, STOA Chair Eva Kaili underlined that a privacy-by-design and ethics-by-design approach should be followed throughout the entire lifecycle of AI systems, from their initial development to actual implementation, especially in the legal domain. In a period of intense digital interdependence, where AI strategies and ethical principles are increasingly adopted at an organisational level worldwide, multi-stakeholder engagement, such as the Athens Roundtable, is critical to identifying and disseminating widely adopted practices for operationalising trustworthy AI.
The full recording of the meeting is available here.