Understanding the AI Act: AI Literacy Requirements and Compliance Strategies for Organizations

Viewpoints
October 2, 2024

The AI Act, which came into effect on August 1, 2024, imposes obligations across a staggered timeline. One of the earliest is the requirement for certain organisations to implement AI literacy measures, which applies from February 2, 2025, alongside the ban on AI systems that present unacceptable risks under the AI Act. For more information on the AI Act and a timeline for compliance, please see our previous Viewpoint alert here.

Under the AI Act, AI literacy refers to the skills, knowledge and understanding that allow entities and/or individuals to make an informed deployment of AI systems and to gain awareness of the opportunities and risks of AI and the possible harm it can cause. While guidance on the topic is currently limited, more is likely to be forthcoming closer to the compliance deadline, as the AI Board and the relevant authorities in EU member states have been tasked with publishing guidance, such as codes of conduct, under the AI Act.

Key takeaways for organisations 

Entities within scope: The AI literacy requirements apply generally to providers (i.e., an entity that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed, and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge) and deployers (i.e., an entity that uses an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity) of AI systems, regardless of the risks and capabilities of the relevant AI system. They do not apply to other entities under the AI Act, such as importers or distributors. However, such entities are subject to their own subset of obligations under the AI Act, and may be deemed to be providers of AI systems in certain circumstances, such as when they put their name or trademark on, or make a substantial modification to, a high-risk AI system already placed on the market or put into service in the EU.

Subjective standards of AI literacy: The exact requirements and standards of AI literacy will be context-specific. They will depend on, among other factors:

  • The type and risk of the relevant AI system. AI literacy requires providers and deployers to take into account the respective rights and obligations of entities and individuals in the context of the AI Act, and to consider the person or groups of persons on whom the AI system is to be used. Providers of high-risk AI systems (such as AI systems used in educational and vocational training) are therefore likely to be subject to a higher standard of AI literacy than providers of lower-risk AI systems.
  • The size and resources of the organisation. The AI Act requires organisations to ensure, “to their best extent”, a sufficient level of AI literacy among their staff and other relevant persons dealing with the operation and use of their AI systems. This suggests that the size and resources of the organisation will be taken into account when determining what constitutes a compliant level of AI literacy under the AI Act.
  • The relevant employees. The AI Act states that all “relevant actors” across the AI value chain should be provided with appropriate knowledge, and that providers and deployers must take into account the technical knowledge, experience, education and training of their personnel, as well as the context in which the AI systems are to be used, when implementing measures to ensure a sufficient level of AI literacy. The standard of AI literacy will therefore depend on the personnel developing or using the relevant AI system; the AI Act expressly identifies that “persons assigned to implement the instructions for use and human oversight (of high-risk AI systems)” on behalf of a deployer must have, among other things, an appropriate level of AI literacy to fulfil their tasks.

This also means that non-technical personnel remain subject to the AI literacy requirements. Guidance from the Dutch supervisory authority provides further insight: it states that the “level of AI literacy of each employee must be in line with the context in which the AI systems are used and how (groups of) people may be affected by the systems”. It also gives examples in this respect; for instance, employees performing an HR function must understand that an AI system may contain biases or ignore essential information, which may lead to an applicant being selected, or not selected, for the wrong reasons.

Initial enforcement is likely to be limited to private litigation: While the AI Act’s obligations on AI literacy take effect on February 2, 2025, its provisions on penalties for non-compliance will only apply six months later, on August 2, 2025. From that date, the national competent authorities of each EU member state may impose enforcement measures and sanctions to ensure compliance with the AI Act. Before that date, the main form of enforcement is likely to be private litigation. Notably, the new Product Liability Directive (adopted by the European Parliament on March 12, 2024) expressly expands the scope of the EU’s product liability regime to cover AI systems, and it may also influence how litigation is brought for non-compliance with the AI Act.

Commentary

To prepare for the AI Act’s requirements on AI literacy, organisations should begin strategising their compliance approach now, as the requirements of AI literacy are context-specific and there is no one-size-fits-all approach. Organisations should consider:

  • The role they play and the obligations placed on them under the AI Act;
  • The type and risks presented by their AI system(s); and
  • How they may tailor their AI literacy measures to the roles their employees play within the organisation, in particular with regard to the development and use of AI systems.

While public enforcement will only be possible from August 2, 2025, organisations should also monitor any private litigation for non-compliance with the AI Act, as such litigation is likely to be influential or indicative of how the AI Act will be enforced by public authorities from that date. In addition, organisations should pay close attention to the development of the EU’s proposed AI Liability Directive, which would establish separate liability rules for AI systems.

Organisations should also monitor further guidance on AI literacy as and when it is published. Apart from the codes of conduct and similar guidance to be published by the AI Board and the relevant authorities in EU member states, ad hoc guidance, such as the Dutch supervisory authority’s guidance on AI literacy, may also be published; such guidance should be beneficial to organisations, particularly if it provides examples of AI literacy measures and the personnel to whom they may apply.
