The Netherlands Institute for Human Rights is publishing a practical overview of fundamental rights protection in the European AI Act. The overview can assist regulators, policymakers and other stakeholders in implementing the legislation. As one of the fundamental rights authorities, the Institute plays an important role in the joint supervision of the protection of fundamental rights in the use of AI. 

Fundamental rights as a common thread throughout the AI Act 

Protecting fundamental rights is one of the main objectives of the new AI Act. The Act seeks to enable innovation and harness the benefits of AI while ensuring a high level of protection for people's safety, health and fundamental rights. Indeed, the protection of fundamental rights is a common thread running through the entire Act and is reflected in many of its obligations. 

This is not without reason. AI is widely used in all aspects of society and can affect virtually all fundamental rights: from the right to non-discrimination and a fair trial to the rights to education, social security and privacy.  

The child benefits scandal has already demonstrated that the use of digital systems can have far-reaching consequences. With the growing use of AI, the risks to fundamental rights will increase. Precisely because AI can affect so many rights and the impact can be so great, protecting fundamental rights is complex. 

Moreover, this fundamental rights dimension is a new addition to the existing product safety supervision framework that will apply to AI. The protection of fundamental rights under the AI Act is therefore a new task for supervisory authorities and requires extra attention. 

Practical overview for regulators, policymakers and other stakeholders 

The overview demonstrates how the AI Act aims to protect fundamental rights in practice. It offers regulators, policymakers and other stakeholders a practical tool for identifying where important safeguards are contained in the law and how they can apply them in their work. This enables them to identify risks to fundamental rights in a timely manner and take appropriate measures. 

The publication is structured as follows. It begins with the basics of fundamental rights, explaining why almost all rights are relevant to AI and why assessments must always be carried out in the right context. It then discusses the provisions that specifically protect fundamental rights, from AI literacy and prohibited AI systems to the obligations for high-risk AI systems and the regulatory sandbox. It also discusses the fundamental rights risks of the specific AI systems that the Act designates as “high risk”. Finally, it provides insight into how supervision is organised and what role the Institute fulfils as one of the fundamental rights authorities for AI. 

This overview enables stakeholders not only to understand what the law prescribes, but also to immediately see where they need to be alert to risks to fundamental rights in their work. 

Effective protection requires more than rules 

According to the Institute, the effectiveness of fundamental rights protection in the AI Act depends on several factors. 

The way in which the EU translates the general obligations of the AI Act into technical standards, guidelines and templates will determine how well fundamental rights are protected. The overview shows the areas in which these instruments are crucial for the protection of fundamental rights. 

Above all, however, success depends on the skills and capabilities of all parties involved: providers, deployers and supervisors must be able to identify fundamental rights risks in a timely and effective manner. The overview identifies the areas in which parties need to be especially attuned to potential fundamental rights risks. 

The AI Act is also currently being renegotiated at European level through the so-called “digital omnibus”. This legislative proposal may have consequences for the protection of fundamental rights in the application of AI. The overview provides a benchmark for monitoring how the proposals deviate from the current level of protection. 

The Institute as an AI fundamental rights authority 

As an AI fundamental rights authority, the Institute contributes to the monitoring of fundamental rights risks and to the conformity assessments of specific AI systems. In the coming period, the Institute will focus on helping stakeholders recognise risks in their sector and on closely following developments surrounding the “digital omnibus”.