Understanding the European Parliament’s Compromise Amendments to the AI Act

The European Union is seeking to regulate artificial intelligence with new legislation, and as it stands, the proposal leans toward strong protections for individuals. The new compromise amendments proposed by the co-rapporteurs seek to ensure that fundamental rights are respected when high-risk AI systems are used.

What does this mean for you and your business? We’ll break it down for you and show you how you can make your voice heard in the debate.

Overview of the European Parliament’s AI Act

You may have heard about the European Parliament’s new AI Act, which is designed to regulate the use of artificial intelligence. AI has the potential to affect many areas of our lives, so it’s important to have a framework in place to ensure it is used responsibly.

The co-rapporteurs who drafted the act have now circulated new compromise amendments, which propose how to carry out fundamental rights impact assessments and other obligations for users of high-risk systems. The amendments are still being negotiated, so it’s too early to say what the final version will look like. But it’s important that we keep a close eye on the proceedings and make our voices heard.

Fundamental Rights Impact Assessments for High-Risk AI Systems

In plain English, the co-rapporteurs’ amendments say that companies using AI that could have a serious impact on people’s fundamental rights must carry out a “fundamental rights impact assessment” (FRIA) before going ahead. This is to make sure they are aware of, and can mitigate, any risks to people’s rights.

You can see why this amendment matters: AI is becoming increasingly widespread and is being used in more and more high-risk applications. For example, if an AI system informs decisions that could lead to someone being wrongly convicted of a crime, the company behind it should have carried out a FRIA to check that the system does not violate anyone’s fundamental rights.

This amendment was heavily debated in the European Parliament, with considerable pushback from some MEPs and businesses over the need for such assessments. The co-rapporteurs nonetheless retained the requirement in their compromise text.

Obligations and Liabilities of User Entities

In their new compromise amendments, the co-rapporteurs state that if a user entity is using a high-risk AI system, it must take into account the impact of its actions on the fundamental rights of data subjects. In addition, it must ensure that individuals have the right to information about such systems and their use.

User entities will also be required to carry out a fundamental rights impact assessment before using a high-risk AI system. This assessment must include an examination of how the system will interact with the fundamental rights of data subjects, as well as how those rights will be protected. The user entity must also seek to mitigate any risks to those rights.

Requirements for Data Governance and Openness

The new amendments also set out rules on data governance and openness for users of high-risk AI systems. These include requirements to disclose the sources and algorithms used by high-risk AI systems, as well as rules on providing access to datasets.

Furthermore, the amendments require that users of high-risk AI systems receive a detailed explanation of how decisions are made and how their data is processed. This explanation should be meaningful, accessible and intelligible, so that users can make informed decisions with regard to their data.

Lastly, the amendments dictate that users must have timely access to their personal data so they can verify that it is being used in accordance with applicable laws and ethical principles. This ensures the proper protection of personal data when employing high-risk AI systems.

Enhancing Transparency and Data Quality

As part of the compromise amendments to the AI Act, the European Parliament plans to enhance transparency and data quality to ensure that users of high-risk systems can understand and monitor any changes made by the AI provider. Specifically, they are proposing that providers of high-risk systems be required to make all relevant information regarding their AI systems available to users in an easily comprehensible format.

This would enable users of high-risk systems to better evaluate any changes made by the AI provider and to understand the impact of such changes on their own fundamental rights. It would also ensure that data is updated regularly, so users have access to current information, and would help guard against algorithmic bias by allowing users to detect potential biases in the provider’s algorithm.

Risks of Biometric Data Processing in High-Risk AI Systems

You should also be aware of the risks of biometric data processing in high-risk AI systems. The new amendments propose that the use of biometric data for training, evaluation and verification with “high-risk” AI systems—such as those used for facial recognition and automated decision-making—must be subject to a risk assessment.

The European Commission has suggested that users must apply “privacy by design” measures, such as pseudonymization or encryption, to protect individuals’ biometric data when processing it with high-risk AI systems. Users would also need to demonstrate how algorithms are validated to ensure they do not have a “disproportionate impact on specific vulnerable groups”.
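The amendments do not prescribe a specific pseudonymization technique, but the general idea can be sketched in code: direct identifiers are replaced with keyed hashes before data is fed into a high-risk system, so records remain linkable without exposing the underlying identity. The following Python sketch is purely illustrative (the function name, key handling, and record fields are assumptions, not anything mandated by the amendments):

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, a keyed hash cannot be reversed by simply
    brute-forcing common identifiers without knowing the secret key.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Illustrative only: in practice the key would live in a secrets manager
# and be rotated according to the organization's data-protection policy.
key = b"example-secret-key"

record = {"name": "Jane Doe", "face_template_id": "bio-12345"}

# Keep only pseudonyms in the dataset used for training or evaluation.
pseudonymized = {
    "name": pseudonymize(record["name"], key),
    "face_template_id": pseudonymize(record["face_template_id"], key),
}
```

Because the same identifier always maps to the same pseudonym under a given key, records can still be grouped or deduplicated for training and evaluation, while re-identification requires access to the secret key.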

Overall, these new amendments aim to ensure a higher level of privacy and security when using biometric data with high-risk AI systems. This is important because it helps ensure that our digital identities remain safe and secure in this digital age.

Making Your Voice Heard

The AI Act is still being amended, and your voice is important in this conversation. The more people who speak up about the importance of data protection and the need for caution when it comes to artificial intelligence, the more likely it is that the final AI Act will reflect these values.

So far, over 5,000 people have voiced their concerns about the AI Act. If you want your voice to be heard, make sure to contact your representatives and let them know where you stand.