M. Metin Uzun, PhD candidate, University of Exeter


The emergence of generative AI models, particularly ChatGPT and its successors, signals the start of a new summer in AI technology. Interest in AI among policymakers is increasing globally, and the technology's rapid spread across sectors has quickly drawn regulatory attention. Various regulatory approaches have been developed to reduce the risks associated with AI, including legislation, standards, civil liability, soft law, and industry self-regulation. Regulation is now proliferating almost as rapidly as AI itself. At the same time, AI regulation has become a crucial dimension of the ongoing race to advance AI: governments and organisations are constantly pushing the limits of AI capabilities, which has brought the need for comprehensive regulatory frameworks to the forefront. The differences and fragmentation in AI regulation across jurisdictions therefore reflect the tensions of the “AI race”. This has created a new arena for global regulatory competition, in which governments and organisations must strike a balance between safeguarding against potential harm and fostering innovation. Indeed, the current landscape of AI regulation is characterised by two approaches: “horizontal” and “vertical”.

The European Union (EU) is leading the way in regulating AI with a horizontal, risk-based approach. In 2021, the European Commission proposed the EU AI Act, which aims to regulate AI systems available on the EU market. The regulation categorises systems according to the level of risk they pose, ranging from minimal to unacceptable, with corresponding obligations for each level. The text was further debated and revised during the European Parliament's deliberations in spring 2023: a political agreement was reached on 27 April 2023, and a key committee vote was held on 11 May 2023. The AI Act is expected to enter into force by 2024, potentially becoming a global benchmark much as its elder sibling, the GDPR, has. Like the GDPR, the Act's reach extends widely, encompassing not only users of AI technology within the EU but also providers who place AI systems on the EU market or use them within the EU.


Furthermore, the Digital Markets Act (DMA) and Digital Services Act (DSA) also encompass AI technologies and introduce additional requirements, addressing the potential risks posed by AI-driven systems in digital markets and services. The combined impact of the AI Act, DMA, and DSA reflects the EU's comprehensive approach to regulating AI and ensuring the “responsible”, “human-centric” and “accountable” use and governance of AI-driven technologies. The EU's AI regulation is also expected to prompt changes in products offered in non-EU countries, a phenomenon known as the “Brussels Effect”: the de facto, indirect influence of EU regulation that motivates companies to align their products and practices with EU standards, even beyond the scope of EU jurisdiction. Indeed, this regulatory framework and its global reach underscore the EU's position as a significant driver in shaping AI governance and in fostering a harmonised approach to AI regulation and standards worldwide.

However, the AI Act still needs to overcome certain obstacles before it can have a significant global impact. As part of its rulemaking, the EU is seeking to impose transparency measures on so-called general-purpose AI. “Generative foundation models, like GPT, would have to comply with additional transparency requirements, like disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content and publishing summaries of copyrighted data used for training,” the European Parliament noted on May 11. Nevertheless, OpenAI CEO Sam Altman warned that the company might withdraw its services from the European market in response to the EU AI Act. Speaking to reporters after a speech in London, Altman said he had “many concerns” about the Act, which is currently being finalised by lawmakers: “The current draft of the EU AI Act would be over-regulating, but we have heard it’s going to get pulled back. They are still talking about it.” OpenAI has previously argued that individual countries should not impose restrictive regulations on AI, such as limiting what it can say or setting low thresholds for regulation, as this may hinder its potential. Although the EU AI Act has set the benchmark for global regulation, the tension between OpenAI and the EU is an early and significant warning of the potential dangers of broad regulation in a constantly changing regulatory environment.


Diverse approaches and perspectives mark the global landscape of AI regulation. While the EU leads in comprehensive regulation with the proposed EU AI Act, the UK favours an adaptive, flexible and pro-innovation regulatory approach. Unlike the EU's horizontal, risk-oriented approach to AI, the UK and the US adopt more flexible regulatory policies. The UK, for example, does not intend to adopt new legislation or create a new regulator for AI. Instead, existing regulators, such as the UK Information Commissioner's Office (ICO), will be tasked with promoting, establishing, and overseeing responsible AI in their respective sectors.

Overall, as AI continues to shape our societies, ongoing discussion and collaboration are crucial to developing harmonised and effective regulatory frameworks that ensure responsible AI deployment and protect human rights. The AI Act aims to make a significant global impact through the Brussels Effect. However, a horizontal AI regulation risks being unable to keep pace with a fast-moving, evolving technology landscape. On the other hand, as the EU's regulatory framework spreads its influence far and wide, a more harmonised world of AI governance may yet emerge. It is worth noting that this AI summer could also usher in a new era of information and skill asymmetries that may prove challenging and perplexing for regulators.