Legislative Approaches to AI: European Union v. UK
After an initial announcement in early 2021, the UK government recently launched its first national artificial intelligence (AI) strategy. The new strategy signals that the UK may be considering a departure from the legislative approach taken by the European Commission in its “AI package”.
The European Commission has published a proposal for an EU-wide legislative framework for AI (the EU Regulation), which forms part of the Commission’s broader “AI package”. The proposed framework addresses the risks generated by specific uses of AI and focuses on imposing prescribed obligations for high-risk use cases, including obligations to undertake relevant risk assessments, put in place mitigation measures such as human oversight, and provide transparent information to users.
The intention of the EU Regulation is to establish a single set of harmonised rules with extraterritorial application. This means that AI providers who make their systems available in the European Union, or whose systems affect people in the European Union or produce output used in the European Union, will be required to comply regardless of their country of establishment. Non-compliance could expose businesses and providers to General Data Protection Regulation-style fines, with proposed penalties of up to €30 million ($34.8 million) or 6% of total worldwide turnover.
The national AI strategy does not set out a UK legislative framework for AI, but it does signal that the UK’s approach will differ from that taken by the European Commission. Currently, the UK regulates AI through existing cross-sector legislation. In 2018, the UK government endorsed the House of Lords’ view that “comprehensive AI-specific regulation (such as the EU’s) at this stage would be inappropriate” and that “existing sector regulators are best positioned to examine the impact on their sector.”
The national AI strategy outlines four main reasons why a sector-led approach, rather than a comprehensive European-style approach, makes sense:
- The boundaries of AI’s potential harms are blurred.
- AI use cases can be highly context-specific and complex.
- Empowering regulators and industries to work with innovators in their sectors, and to advise on how existing regulations should be interpreted, allows a much faster response to individual harms.
- It can be difficult to distinguish the specific impact of AI from that of other external factors, such as other technological changes taking place at the same time.
In its strategy, the UK government recognises that this sector-specific approach also raises challenges that must be addressed. These include:
- inconsistent or contradictory approaches between sectors,
- overlap between regulatory mandates,
- the possibility that issues fall between the gaps in regulators’ remits,
- the framing of AI regulation narrowly around existing legislation, and
- growing international focus on developing cross-sector AI regulation (which could undermine UK efforts to develop a cohesive national approach).
These challenges raise the question of whether the UK’s current approach remains adequate. A forthcoming white paper from the Office for Artificial Intelligence will address this issue and examine alternative approaches.
In the European Union, the European Parliament and EU member states must adopt the European Commission’s proposal for the EU Regulation to enter into force.
In the United Kingdom, the forthcoming white paper from the Office for Artificial Intelligence is expected to set out the proposed UK position on the governance and regulation of AI, as well as address the challenges of the sectoral approach. It is due to be published in early 2022.
In parallel with the national AI strategy and the EU Regulation, the UK Department for Digital, Culture, Media and Sport is consulting on potential AI-related reforms to the data protection framework. The consultation closes on November 19, 2021.