The UK government is in the process of drafting new artificial intelligence regulations, following the prime minister’s pledge months earlier to “not rush” the creation of laws governing the rapidly developing technology.
Takeaway Points:
- The UK government has started crafting new legislation to regulate artificial intelligence, months after the prime minister vowed “not to rush” setting up rules for the fast-growing technology.
- Previously, the UK had been reluctant to push for legal interventions in the development and rollout of AI models, for fear that tough regulation might stymie industry growth.
- Sarah Cardell, chief executive of the UK’s Competition and Markets Authority, warned last week that she had “real concerns” that a small number of tech companies creating AI foundation models “may have both the ability and the incentive to shape these markets in their own interest”.
UK Begins Creating AI Rules
According to two sources briefed on the preparations, such legislation would likely place restrictions on the development of large language models, the general-purpose technology that powers AI products such as OpenAI’s ChatGPT.
The sources said it was not yet settled what the law would cover or when it would be published, and stressed that nothing would be introduced imminently. But one of them said it would likely seek to require companies building the most advanced models to share their algorithms with the government and to provide evidence that they had carried out safety testing.
Concerns About Potential AI Threats
The proposals coincide with growing concern among regulators, including the UK competition watchdog, about potential harms from the technology. These include the possibility that general-purpose models could be used to create harmful material, and that the technology could introduce biases affecting particular demographic groups.
“Officials are exploring moving on regulation for the most powerful AI models,” said one of the people briefed on the situation, adding that the Department for Science, Innovation and Technology is “developing its thinking” on what AI legislation would look like.
Another person said the rules would apply to the large language models that sit behind AI products such as ChatGPT, rather than the applications themselves.
Sarah Cardell, the chief executive of the UK’s Competition and Markets Authority, warned last week that she had “real concerns” that a small number of tech companies creating AI foundation models “may have both the ability and the incentive to shape these markets in their own interest”.
The regulator identified an “interconnected web” of more than 90 partnerships and strategic investments involving the same companies: Google, Apple, Microsoft, Meta, Amazon and Nvidia.
Caution on AI Regulation
The UK has been reluctant to push for legal interventions in the development and rollout of AI models, for fear that tough regulation might stymie industry growth. It has instead relied on voluntary agreements with governments and companies, ruling out legislation in the short term.
In November, Viscount Jonathan Camrose, minister for AI, said there would be no UK law on AI “in the short term”. Rishi Sunak, the prime minister, said a month earlier that “the UK’s answer is not to rush to regulate”.
“This is a point of principle; we believe in innovation. How can we write laws that make sense for something that we don’t yet fully understand?” Sunak said in October.
The European Union has adopted a more robust stance, with the European Parliament passing the AI Act last month, which establishes some of the first and strictest regulations for the technology.
AI start-ups have criticised the EU’s regulations as overly stringent, warning they could impede innovation. In response to that stringency, other nations, including the United Arab Emirates and Canada, have moved to lure some of Europe’s most promising tech companies to relocate.
Until now, the UK has left it to existing regulators to clarify how current legislation applies to AI; these watchdogs have been asked to submit papers by the end of this month detailing their plans to regulate AI in their respective fields.
AI and the Online Safety Act
The Online Safety Act was passed in October with the intention of protecting adults and children on the internet. The media regulator Ofcom has released its strategy and is currently investigating how generative AI may fall under its purview.
In a recent consultation response, government officials identified so-called “general purpose” AI models, highly capable systems that can be applied to a wide range of tasks, as possible candidates for further legal and regulatory action.
The strategy of targeting models by scale, using terms such as “general purpose” or “frontier” models, which are frequently applied to the large language models behind applications like ChatGPT or Google’s Gemini, has drawn criticism from tech companies.
“At the moment, the regulators are working on a rather crude rule of thumb that, if future models are of a particular size or surpass a particular size, there should be greater disclosure,” Nick Clegg, president of global affairs at Meta, said last Tuesday.
“I don’t think anyone thinks that over time that is the most rational way of going about things,” he added, “because in the future you will have smaller, fine-tuned models aimed at particular purposes that could arguably be more worthy of greater scrutiny than very large, hulking, great all-purpose models that might be less worrisome.”
“As we’ve previously said, all countries will eventually need to introduce some form of AI legislation, but we will not rush to do so until there is a clear understanding of the risks,” a government spokesperson said.
“That’s because it would ultimately result in measures that would quickly become ineffective and outdated.”