The EU takes its first steps towards coordinated AI enforcement

Viewpoints
February 1, 2024
3 minutes

On 24 January 2024, the European Commission (EC) established the European Artificial Intelligence Office (AI Office) to implement the upcoming AI Act. Together with the political agreement on the EU AI Act, this development brings the EU a step closer to coordinated AI enforcement action.

Summary of AI Office roles 

The AI Office has a variety of roles and tasks that are broadly similar to the role played by the European Data Protection Board in implementing and enforcing the GDPR in the EU. The AI Office's roles include the following:

  • Developing guidance and investigating infringements of the AI Act. These tasks include monitoring the implementation of general-purpose AI models and systems and investigating potential infringements, as well as developing guidance for such systems, particularly when unforeseen risks emerge. The AI Office will also coordinate enforcement action across the EU on prohibited and high-risk AI systems (see below for examples of such AI systems).
  • Providing assistance to the EC. The AI Office will assist the EC in applying the AI Act consistently across the EU, as well as in preparing decisions and implementing delegated legislation related to the AI Act. 
  • International cooperation. The AI Office is also tasked with advocating responsible stewardship of AI and promoting the EU's approach to AI regulation worldwide, including by contributing to the implementation of international agreements governing AI.

Practical considerations for organisations

As non-compliance with the AI Act may result in administrative fines of up to €35 million or 7% of a company's total worldwide annual turnover, whichever is higher, organisations whose businesses involve the use of AI should consider several factors to measure their exposure to enforcement risk and strategise accordingly.
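
To put that ceiling in concrete terms, the short sketch below (a hypothetical illustration, not drawn from the Act's text or any official guidance) shows how the "whichever is higher" cap scales with turnover:

```python
def max_ai_act_fine(annual_turnover_eur: float) -> float:
    """Upper bound on the administrative fine for the most serious
    infringements: EUR 35 million or 7% of total worldwide annual
    turnover, whichever is higher."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# Example: for a company with EUR 1 billion in worldwide turnover,
# 7% is EUR 70 million, so the percentage-based cap applies.
print(max_ai_act_fine(1_000_000_000))  # 70000000.0
```

For any company with worldwide annual turnover above €500 million, the percentage-based figure exceeds the fixed €35 million amount, so exposure grows with the size of the business.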

  • Classify risks. The AI Act adopts a risk-based approach to regulating AI systems, calibrated to the harm that may result from an AI system's use as well as its intended purpose. Generally, the use of higher-risk AI systems is more likely to result in enforcement action. These are typically AI systems that form safety components of products subject to specific EU legislation, as well as AI systems deployed in specific use cases such as critical infrastructure, employment, and law enforcement. Certain AI systems (i.e. AI systems that deploy subliminal or manipulative techniques, exploit vulnerabilities, or use biometrics either to deduce sensitive personal data or to identify individuals in real time) will be subject to an outright prohibition, and the potential use of such systems is likely to be monitored particularly closely by the AI Office or individual EU member state regulators.
  • Sector overlay. In addition to the AI Act, there may also be a wide range of sector-specific regulations or guidelines to consider. As with the AI Act, the risk of enforcement generally aligns with the risks presented by the relevant sector. For example, AI systems used in healthcare, financial services, and critical infrastructure will typically be subject to increased scrutiny, given the greater likelihood of harm that may result from malfunctioning AI systems in these sectors.
  • Extraterritoriality. Organisations established outside of the EU may be subject to enforcement action from the AI Office or a relevant EU member state regulator due to the wide extraterritorial reach of the AI Act. Organisations that place AI systems on the EU market, put them into service in the EU, or use output generated by their AI systems in the EU will fall within scope, as will organisations that combine an AI system with their product and place that product on the market or put it into service in the EU under their own name or trademark.

Conclusion

Currently, enforcement action on AI is generally limited to a few cases initiated individually by EU member state data protection regulators. These cases generally involve non-compliance with data protection legislation, such as a fine imposed by the French data protection regulator on an AI-powered facial recognition platform for unlawful processing of personal data. This approach is likely to change over time as the EU moves closer towards coordinated enforcement for non-compliance with AI-specific legislation.

In addition, although the AI Act has been finalised, the regulation of AI is nevertheless unlikely to remain static as the technology evolves and becomes better understood. Beyond understanding their risks under the existing AI regulatory framework, organisations should also remain mindful of, and responsive to, regulatory developments in this area.
