EU issues guidelines on prohibited AI practices
The EU Commission issued, on 4 February, draft guidelines that specifically address practices prohibited under the EU AI Act, such as harmful manipulation, social scoring, and real-time remote biometric identification.
The guidelines, which still need to be formally adopted, aim to ensure uniform application of the AI Act across the European Union, although they are non-binding. The Commission will review them once practical experience has been gained in implementing these prohibitions, and in light of market surveillance authorities’ enforcement practice and decisions by the Court of Justice of the European Union.
EU Member States must designate their Market Surveillance Authorities by 2 August 2025. In most countries, it is not yet clear which regulator will be given this task, although Data Protection Authorities have argued that they are best placed to enforce in this area because of their GDPR remit.
Speaking at PL&B’s conference, Data Opportunities in Ireland, in Dublin on 6 February, Ireland’s Data Protection Commissioner, Dr Des Hogan, said that no decision has yet been made in Ireland as to which body will be appointed as the AI regulator, but he expects the DPC to be consulted on the proposals.
Commenting on AI, Hogan said that societal questions cannot be left to regulators alone, as industry has a role to play as well. “As a regulator we have to apply the law, but individuals will need to have confidence in AI models.”
Last year, Ireland’s DPC sought the European Data Protection Board’s (EDPB’s) Opinion on AI models in response to queries from its fellow DPAs on how to regulate AI issues. Thanks to this harmonised view, companies should now be in a position to see how they can deploy AI models, he said.
The EU AI Act’s provisions on prohibited AI practices and AI literacy took effect on 2 February. The EU Commission also published, on 6 February, guidelines on the definition of an AI system.
See: