Risk Profiling in the EU: Fundamental Rights and the Misalignment of the GDPR and the AI Act
This comment of the Young Meijers Committee’s Privacy and Non-Discrimination subcommittee analyses how EU law regulates profiling, with a specific focus on the recently enacted Regulation (EU) 2024/1689 (the AI Act).
The coexistence of different legislative instruments produces regulatory gaps and inconsistencies. Systems capable of inferring sensitive attributes may escape high-risk classification under the AI Act, while GDPR safeguards often remain unenforceable in practice because of opacity, delegation of responsibility, and weak human oversight. The Law Enforcement Directive adds another asymmetry by permitting broad restrictions on data-subject rights where AI is used for law enforcement purposes.
Accordingly, this comment asks: How could the AI Act be strengthened to address discriminatory profiling risks more effectively, considering lessons learnt from the GDPR and other relevant EU instruments?
Amongst other recommendations, the group propose that system-level oversight under the AI Act should include explicit safeguards ensuring that it complements, and does not dilute, core rights under the GDPR and the Law Enforcement Directive. In particular, the framework should guarantee meaningful human review, robust transparency, effective access to remedies, and judicial oversight, especially where automated profiling produces legal or similarly significant effects on individuals. Building on this foundation, regulatory gaps between the AI Act, the GDPR, the Law Enforcement Directive, and EU equality law should be formally closed.
Published on
3 March 2026