Corporate adoption of artificial intelligence is on an upward trajectory, and AI increasingly plays an integral role within businesses across all sectors of the UK economy. As corporate transactional lawyers, we more frequently find ourselves considering AI issues in a deal context, and the use of AI to support boards of directors is on the rise. As AI adoption grows, so too does the need for regulation. On 29 March 2023 the Government released its white paper, A pro-innovation approach to AI regulation.
The white paper confirmed that the Government will not initially introduce legislation to regulate AI in the UK, but will instead establish a framework, underpinned by five key principles, for existing regulators to implement: (1) safety, security and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress.
Central Government functions will be established to monitor and assess how existing regulators apply the framework and principles. For a more detailed summary of the white paper and the proposed framework and principles, please see our article here.
Reflecting on the white paper from a corporate perspective
One issue in attempting to regulate AI is the difficulty of defining it. Rigid legal definitions tend to be incompatible with subject matter so diverse and ever-evolving, and can quickly become outdated.
There is, of course, existing legislation that attempts to define AI. In the transactional context, for example, we often need to consider the definition of AI that applies for the purposes of the National Security and Investment Act 2021 ("NSI Act"): "technology enabling the programming or training of a device or software to— (i) perceive environments through the use of data; (ii) interpret data using automated processing designed to approximate cognitive abilities; (iii) make recommendations, predictions or decisions; with a view to achieving a specific objective."
Where an entity researching or developing AI for specific purposes is to be acquired, the NSI Act requires a notification to be made to the Government. While this definition may work for the purposes of the NSI Act, a general definition intended to capture all forms of AI for broader applicability is more complicated.
The white paper recognises this difficulty and aims to 'future-proof' its framework by defining AI with reference to two key characteristics: (1) "adaptivity" and (2) "autonomy". This definition is intended to capture any system that is trained, operates by inferring patterns and connections from data, develops the ability to perform new forms of inference and/or can make decisions without the express intent or control of a human. By avoiding a one-dimensional definition of AI, the white paper hopes to cater for new or unanticipated technologies that are autonomous and adaptive.
Another danger of AI regulation is incoherence across existing legislative frameworks. We have mentioned that AI is already regulated under the NSI Act. The Data Protection Act 2018 and the Financial Services and Markets Act 2000 are two further examples of existing legislation dealing with AI. The white paper framework and principles will add a further layer of regulation with which corporations will need to comply. Regulatory inconsistency risks stifling innovation and discouraging businesses from adopting AI solutions. The white paper notes that regulatory coordination and consistency in enforcement will be required to support businesses to invest confidently in AI, and purports to establish a clear and unified approach to regulation to address this.
While there is clearly regulatory uncertainty and risk, corporate adoption of AI solutions is nevertheless trending upwards. A 2022 report commissioned by the Department for Digital, Culture, Media and Sport predicted that the overall adoption rate of AI technology by UK businesses will increase from around 15% today to 34.8% by 2040, with 1.3 million UK businesses utilising AI systems. We are already seeing widespread use of AI systems by UK businesses as part of their corporate strategies. AI tools for data management and analysis, in particular, are now commonly deployed to produce intelligence that assists boards of directors with everything from recognising market trends to recruiting staff and managing risk.
There is also a future in which AI can play a direct role in the decision-making process at board-level. In 2022, Chinese entity NetDragon Websoft Holdings Limited announced that it had appointed 'Tang Yu', an "AI-powered virtual humanoid robot", as CEO of its flagship subsidiary Fujian NetDragon Websoft Co., Limited. While adoption of AI of this nature is less common, it presents an opportunity to revolutionise corporate decision-making and drive strategic growth for UK businesses. The hope is that AI regulation will serve to encourage, rather than suffocate, the innovation required to achieve this.
Ultimately, it remains to be seen how regulators will approach implementing the white paper principles in practice, and whether the framework strikes the desired balance between regulation and innovation.