The European Parliament regularly receives enquiries from citizens about artificial intelligence.

Artificial intelligence (AI) is a technology that combines machine-learning techniques, robotics and automated decision-making systems. It is central to the digital transformation of society and has become a priority for the European Union (EU). On the one hand, AI could have a positive impact on society and the economy, for example in healthcare or in the transport sector. On the other hand, it entails a number of potential risks for EU citizens’ fundamental rights. Against this background, the EU aims to foster and regulate AI, taking both the opportunities and the threats into account.

European Parliament position

In its February 2019 resolution on a comprehensive European industrial policy on AI and robotics, the European Parliament stressed that AI will increase productivity and output: even though some jobs will be replaced, new jobs will also be created. The use of robotics and AI would improve working conditions, as it ‘should also reduce human exposure to harmful and hazardous conditions’. The resolution also highlighted the strategic sectors in which AI could bring added value in the general public interest, such as health, energy and transport.

In October 2020, the European Parliament adopted three resolutions on what AI rules should include with regard to ethics, intellectual property rights and liability, and called for a harmonised approach at EU level. The resolutions stated that AI technologies should be ‘human-centric and human-made’ and should not cause any harm to individuals, society or the environment.

First, in its resolution on the ethical aspects of AI, the European Parliament put forward recommendations on the ethical principles needed for the new legal framework, which must be ‘values-based’ and built on safety, transparency and accountability. It also highlighted the advantages of AI applications, for instance with regard to the internal market, transport, defence and the green transition. However, AI technologies ‘must be tailored to human needs in line with the principle whereby their development, deployment and use should always be at the service of human beings’.

Second, in its resolution on intellectual property rights, the European Parliament called on the Commission to carry out an impact assessment on the protection of intellectual property rights in the context of AI development. It also highlighted the benefits of AI development: for instance, AI could help in the fight against ‘deep fakes’ through the verification of facts and information. However, the European Parliament called for further clarification on the protection of data and for common AI legislation, in order to avoid massive litigation that might affect aspects such as traffic safety. It also specified that AI technologies should not have legal personality and that only humans can own intellectual property rights.

Finally, in its resolution on a civil liability regime for AI, the European Parliament called for the adoption of a horizontal and harmonised legal framework for civil liability claims, which would ‘prevent potential misuses of AI-systems’. The resolution addressed some of the key aspects of this framework, such as the liability of the operator, insurance, and different liability rules for different levels of risk. For instance, it proposed that operators of high-risk AI systems should be strictly liable for any resulting material or immaterial harm.

In June 2020, the European Parliament decided to set up a new Special Committee on Artificial Intelligence in a Digital Age (AIDA) to assess the impact of AI, investigate its challenges, analyse the approach of non-EU countries and define common EU objectives for the medium and long term.

European Commission and AI

The European Commission has set out its policy on artificial intelligence in a series of documents. In a 2018 communication, it presented an EU approach to AI addressing its socio-economic, ethical and legal aspects. In April 2019, based on the idea that trust is a prerequisite for a human-centric approach, the Commission published non-binding guidelines on ethics in AI, which set out seven key requirements that AI systems should meet. In addition, the Commission highlighted in a communication that European values should be the core requirements for trustworthy AI.

Furthermore, in its February 2020 White Paper on Artificial Intelligence: a European approach to excellence and trust, the Commission highlighted the need for a coordinated approach and proposed policy options for a future EU regulatory framework on AI. In parallel, it published a communication highlighting the need to evaluate the intellectual property framework in order to enhance access to and use of data, which is essential for training AI systems. Later in 2020, the Commission held a public consultation on its approach to AI.

Further information

Keep sending your questions to the Citizens’ Enquiries Unit (Ask EP)! We reply in the EU language that you use to write to us.