On 8 December 2023, the ‘trilogue’ negotiations between the EU’s three main institutions (the European Commission, the European Parliament, and the Council of the European Union) concluded with a provisional agreement on the text of the EU Artificial Intelligence Act. This clears the way for the Act to become law, subject to formal adoption. The Act is expected to become law in 2024 and to become fully applicable from 2026 (although, as discussed below, certain of its requirements will now apply sooner, just six to twelve months after it becomes law).
Background
As a reminder, the Act, once in force, will be the world’s first piece of legislation specifically regulating artificial intelligence across multiple sectors. It sets out harmonised rules throughout the EU that regulate AI using a risk-based approach: an outright ban on AI tools deemed to carry unacceptable risks (such as those used to classify individuals based on their social behaviour or personal characteristics), strict regulation of ‘high-risk’ AI (as might be found, for example, in autonomous vehicles), and less onerous obligations for low- and minimal-risk AI. Much like the GDPR, the Act will have the so-called ‘Brussels effect’, applying to any organisation implementing AI systems in the EU, serving EU-based users (even if the supplier organisation is based outside the EU), or utilising AI outputs within the EU.
What has changed?
Changes to the text of the Act are still being finalised following the provisional agreement. However, the Council’s press release on the conclusion of the trilogue negotiations, together with the European Parliament’s statement and various media reports, suggest several modifications and clarifications to the text originally presented by the European Commission. The most significant of these are detailed below.
Changes that may benefit AI suppliers
- Open-source software. Subject to certain exceptions, free and open-source models with publicly available parameters will, reportedly, be out of scope. This will be welcomed by developers concerned that the Act might create a chilling effect on open collaboration in the AI ecosystem, given the lack of clarity in the Act’s original draft as to whether releasing open-source software would fall within its reach.
- Research and innovation exemptions. In response to criticisms that the Act could stifle European innovation, an exemption from its obligations (subject to certain safeguards and conditions) will be included to cover AI ‘regulatory sandboxes’ established by national governments to develop, train, and test innovative AI systems before they are placed on the market.
- More proportionate financial penalties. Taking inspiration from the GDPR, the Act sets fines for violations at the greater of a predetermined sum or a percentage of annual global turnover (see further below). While this approach remains, smaller tech companies will welcome the provisional agreement’s proposal to impose ‘more proportionate caps’ on fines for SMEs and start-ups.
Revisions that AI suppliers may consider less welcome
- Foundation models and General Purpose AI Systems (GPAIS). A recent proposal by the European Parliament to revise the Act’s scope to encompass the regulation of all foundation models (irrespective of their risk classification or intended use cases) threatened to derail the Act’s progress altogether. In light of strong opposition to this proposal from France, Germany, and Italy, the provisional agreement now stipulates that GPAIS must provide summaries of information about pre-training data (including as regards copyright) and that GPAIS trained using more than 10^25 floating point operations (FLOPs) of compute will be considered to have ‘systemic risk’ and, consequently, be subject to additional model evaluation, testing, mitigation, and reporting requirements (a rough illustration of this threshold appears after this list). Despite this compromise, tech companies may be disappointed by what could be perceived as a last-minute attempt to regulate AI technology itself, rather than specific use cases as had initially been envisaged.
- Fines for non-compliance remain high. Despite the introduction of more proportionate ceilings for financial penalties imposed on SMEs and start-ups (see above), the maximum possible fine under the Act remains significantly higher than under the GDPR: the greater of €35m or 7% of annual global turnover for the most serious breaches (see the illustrative calculation following this list).
- Requirements for a fundamental rights assessment. The previous draft of the Act’s text required a fundamental rights impact assessment to be undertaken prior to the implementation of any AI system classified as high risk (on the basis that it could harm citizens’ safety, health, or other rights). This requirement will be extended to the use of AI in specific sectors, including insurance and banking. This may delay customers operating in those sectors in entering into contracts for the provision of AI-powered systems.
- Earlier deadlines for compliance. The original text provided for a two-year grace period after the Act comes into force for businesses to achieve compliance with its requirements. This timeline has been amended as follows, with certain provisions taking effect earlier (increasing pressure on businesses that have yet to digest the Act’s implications to prepare for its entry into force; the sketch after this list illustrates the resulting deadlines):
- 6 months after the Act comes into force: provisions regarding prohibited uses of AI will apply.
- 12 months after the Act comes into force: the rules on transparency, governance, high-risk AI systems and GPAIS will apply.
- 24 months after the Act comes into force: the remainder of the Act’s provisions will apply.
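By way of rough illustration, the Python sketch below applies the 10^25 FLOPs ‘systemic risk’ threshold mentioned above to a hypothetical model. The ‘6 × parameters × training tokens’ compute estimate is a commonly used engineering heuristic, not something the Act prescribes, and the model figures are invented for illustration.

```python
# Rough illustration of the 10^25 FLOPs 'systemic risk' threshold.
# Assumption: training compute is estimated with the common heuristic
# FLOPs ~= 6 * parameters * training tokens; the Act does not prescribe
# an estimation method, and the model figures below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Estimate cumulative training compute for a dense model."""
    return 6 * parameters * training_tokens

# Hypothetical GPAIS: 70 billion parameters trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")                   # 8.4e+23
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)   # False
```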
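Similarly, the headline penalty ceiling for the most serious breaches can be expressed as a simple formula. The sketch below is illustrative only; it does not model the more proportionate caps foreshadowed for SMEs and start-ups, the details of which are not yet public.

```python
# Headline penalty ceiling for the most serious breaches of the Act:
# the greater of EUR 35m or 7% of annual global turnover.
# Illustrative only; the SME/start-up caps are not modelled here.

def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Upper bound on fines for the most serious breaches."""
    return max(35_000_000.0, 0.07 * annual_global_turnover_eur)

print(f"{max_fine_eur(100_000_000):,.0f}")    # 35,000,000 (the floor applies)
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000 (7% exceeds the floor)
```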
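Finally, the staggered compliance deadlines can be projected from whatever date the Act ultimately enters into force. The entry-into-force date used below is purely hypothetical.

```python
# Projecting the staggered compliance deadlines from a hypothetical
# entry-into-force date (the actual date is not yet known).

import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months, clamping the day."""
    index = d.month - 1 + months
    year, month = d.year + index // 12, index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

entry_into_force = date(2024, 6, 1)  # hypothetical

milestones = [
    (6, "prohibited-use provisions apply"),
    (12, "transparency, governance, high-risk and GPAIS rules apply"),
    (24, "remaining provisions apply"),
]
for months, label in milestones:
    print(f"{add_months(entry_into_force, months)}: {label}")
```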
Other amendments to note
- Definition of AI systems. The Act’s definition of a regulated ‘AI system’ has been aligned with the most recent definition proposed by the OECD, i.e. a ‘machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’. The OECD adds that ‘[d]ifferent AI systems vary in their levels of autonomy and adaptiveness after deployment’. While some have welcomed this alignment, the OECD’s recently updated definition nevertheless remains potentially broad in scope. Further work on the technical detail in the Act’s recitals will likely be needed to clarify the distinction between AI systems subject to its requirements and simpler software systems outside its scope.
- Individuals may report violations. Both legal entities and natural persons will be able to complain to the competent market surveillance authority concerning any alleged non-compliance with the Act.
- Real-time biometric identification (RBI) systems. Subject to prior judicial authorisation, the provisional agreement establishes a series of narrow exceptions enabling the use of RBI in public spaces for law enforcement purposes. This is subject to strict conditions, such as limiting the use of real-time RBI to targeted searches for victims and suspects of specific crimes mentioned in the Act.
- Ban on untargeted scraping of facial images. Scraping facial images from the Internet or from CCTV footage in order to create facial recognition databases will be prohibited.
What’s next?
Technical details are still being ironed out, which is why the final text of the Act is not yet publicly available. This process is likely to take up to 3 months. Once amended, the compromise text will be submitted for endorsement by EU member states, following which it will be formally prepared for adoption and final publication in early 2024. As stated above, the Act’s provisions are expected to take effect between 6 and 24 months after publication.
Businesses looking to deploy or supply AI at scale would be well advised in the meantime to develop a robust, high-level AI strategy and governance procedure and assemble an advisory team to ensure compliance with the Act once it comes into force.