This Week in Data/Cyber/Tech: A Split in UK and EU Enforcement of AI; and CJEU AGs Issue Opinions on GDPR Fines and Automated Decision-Making Rights

Viewpoints
September 20, 2024
5 minutes

There's rarely a quiet week in data protection, and this one was no exception. Below are two developments from the past seven days that caught my eye.

Story #1: Has the ICO approved the use of legitimate interests for AI training?

A common point of discussion with clients and colleagues is the difference between the EU and UK approaches to AI regulation: EU legislators have adopted a strict, rules-based view of the world, while their UK counterparts favour a lighter-touch, principles-based approach. Perhaps unsurprisingly, that split is starting to show in the ways their respective regulators approach some of the core questions of AI legality.

*****

Last week, Meta announced that it would start using the Facebook and Instagram data of UK users to train the company’s generative AI models. The move comes three months after Meta put its plans to use social media posts for AI training purposes on hold, following a request from the UK ICO. That same month, the Irish data protection regulator, the DPC, also asked Meta not to train its LLMs on public Facebook and Instagram data until further notice; at the time of writing, Meta has not indicated that it will resume that training for users in the EU.

Although the ICO said in a statement on Friday that it has “not provided regulatory approval” for Meta’s plans and will “monitor the situation” to ensure that the company “demonstrate[s] ongoing compliance”, it doesn’t seem unreasonable to assume that the ICO has tacitly accepted the legality of these practices. That stance mirrors the ICO’s position on another recent issue, one on which it is taking a different approach from regulators on the other side of the English Channel: whether so-called pay or consent models comply with the GDPR. (The Europeans say no, in most cases, while the English appear to be saying yes.)

If that assumption proves correct, it offers an important early insight into how the ICO intends to think about and regulate artificial intelligence, one with ramifications for companies of all shapes and sizes that want to use publicly available data for AI training purposes. In particular, it looks like the ICO is willing to allow the use of personal data for AI training on the basis of legitimate interests (coupled with the ability to opt out of such use), rather than requiring user consent. Whether the DPC or the European Data Protection Board will insist that such processing requires consent is the six-billion-euro question.

*****

A caveat to the opening paragraph of this post is that, for companies that operate across the UK and EU, the different regulatory standards in those jurisdictions may have limited effect in practice. That’s because if you have to design a governance framework to meet the requirements of the AI Act, in most cases it is easier to apply those practices in the UK as well than to stand up separate compliance programmes.

That said, there will be certain discrepancies that companies may find it hard not to take advantage of, and, much like pay or consent, the ICO’s approach to the lawful use of data for AI training purposes feels like it could be one of those.

Story #2: CJEU AGs issue opinions on GDPR parental liability and the provision of information on AI algorithms

Among several data protection-related opinions issued last week by the advocates general of the Court of Justice of the European Union, the two that caught my eye have implications for businesses involved in the biggest macro trends of the moment: private equity and AI.

The first opinion — on the scope of parental liability and the calculation of GDPR damages — is applicable generally, but will be of particular interest to private equity firms in relation to their portfolio companies. And the second — on the information to be provided to data subjects on automated decision-making — is especially topical, given the interplay between such processing and AI (indeed, in many cases they are one and the same).

  • Case C-383/23

AG Medina concluded that where a GDPR fine is imposed on a controller or processor that is (or forms part of) an undertaking, the total annual turnover of the undertaking as a whole (i.e., at parent level) is used to calculate the maximum fine that may be imposed on the infringing entity.

Helpfully, however, that's not the end of the story. When determining the actual fine to be imposed, the undertaking's turnover is only one of the factors to be considered, on a case-by-case basis. The data protection authority or court will also need to weigh the decision-making power of the parent company, the scope of the infringing conduct and the number of entities within the undertaking that are involved, among other things.
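By way of a worked example (the figures here are hypothetical): under Art. 83(5) GDPR, the ceiling is €20 million or 4% of total worldwide annual turnover, whichever is higher. If an infringing portfolio company turns over €50 million but forms part of an undertaking with group turnover of €5 billion, the Opinion's approach sets the cap at 4% of €5 billion, i.e. €200 million. Calculated on the portfolio company's own turnover, 4% would yield only €2 million, so the cap would instead default to the €20 million floor. That tenfold difference in maximum exposure is why the parental-liability point matters to private equity sponsors.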

  • Case C-203/22

AG Richard de la Tour considered the Art. 15 GDPR right of access through the lens of automated decision-making: that is, the Art. 15(1)(h) right for a data subject to request from the controller (1) information about the existence of automated decision-making, including profiling, (2) “meaningful” information about the logic involved in the decision-making, and (3) information about the significance and envisaged consequences of such processing for the data subject.

The Opinion concludes: “[T]he controller is not required to disclose to the data subject information which, by reason of its technical nature, is so complex that it cannot be understood by persons who do not have technical expertise, which is such as to preclude disclosure of the algorithms used in automated decision-making.”

That said, as companies’ use of AI increases, I expect that data subjects will also increasingly ask to be provided with more information about these technologies, such that a generic description of their logic may not always suffice. That could particularly be the case where the data subject *does* have the technical expertise to understand complex information about algorithmic decision-making.

*****

The obligatory caveat with AG opinions is that they are not binding on the CJEU. That said, the court usually follows them, and so it is generally reasonable to take the view that they’re likely to become good law in due course (i.e., normally several months after the opinion is handed down).
