Artificial Intelligence: Regulation

Volume 834: debated on Tuesday 14 November 2023


Asked by

To ask His Majesty’s Government, following the action taken by the United States in respect of regulating artificial intelligence, including the recent signing of an Executive Order, whether they have plans to introduce similar provisions in UK law.

In the AI regulation White Paper we set out our first steps towards establishing a regulatory framework for AI. We are aligned with the United States in taking a proportionate, context-based and evidence-led approach to AI regulation. The White Paper did not commit to new legislation at this stage. However, we have not ruled out legislative action in future as and when there is evidence of substantial risks, where non-statutory measures would be ineffective.

My Lords, I am a little disappointed in the Minister’s response, but we welcome the discussions that took place at Bletchley Park. While the Prime Minister says he will not rush to regulate, as the Minister knows, other jurisdictions—the US and the EU—are moving ahead. Labour in government would act swiftly to implement a number of checks on firms developing this most powerful form of frontier AI. A Bill might not have been in the King’s Speech, but that does not mean that the Government cannot legislate. Will the Minister today commit to doing so?

The Government are by no means anti-legislation; we are simply against legislation that is developed in advance of a full understanding of the technology’s implications, its benefits and indeed its risks. This is a widely shared view. One of the results of the Bletchley summit that the noble Lord mentioned will be a state-of-the-science report, convened by Professor Bengio, to take forward our understanding of this, so that evidence-based legislation can then, as necessary, be put in place. As I say, we feel that we are very closely aligned with the US approach in this area and look forward to working closely with the US and others going forward.

My Lords, the Government have noted that AI’s large language models are trained using copyrighted data and content that is scraped from the internet. This will constitute intellectual property infringement if it is not licensed. What steps are the Government taking to ensure that technology companies seek rights holders’ informed consent?

This is indeed a serious and complex issue, and yesterday I met the Creative Industries Council to discuss it. Officials continue to meet regularly both with creative rights holders and with innovating labs, looking for common ground with the goal of developing a statement of principles and a code of conduct to which all sides can adhere. I am afraid to say that progress is slow on that; there are disagreements that come down to legal interpretations across multiple jurisdictions. Still, we remain convinced that there is a landing zone for all parties, and we are working towards that.

My Lords, I welcome what the Minister has just said, and he clearly understands this technology, its risks and indeed its opportunities, but is he not rather embarrassed by the fact that the Government seem to be placing a rather higher priority on the regulation of pedicabs in London than on AI regulation?

I am pleased to reassure the noble Lord that I am not embarrassed in the slightest. Perhaps I can come back with a quotation from Yann LeCun, one of the three godfathers of AI, who said in an interview the other week that regulating AI now would be like regulating commercial air travel in 1925. We can more or less theoretically grasp what it might do, but we simply do not have the grounding to regulate properly because we lack the evidence. Our path to the safety of AI is to search for the evidence and, based on the evidence, to regulate accordingly.

My Lords, an absence of regulation in an area that holds such enormous repercussions for the whole of society will not spur innovation but may impede it. The US executive order and the EU’s AI Act gave AI innovators and companies in both these substantial markets greater certainty. Will it not be the case that innovators and companies in this country will comply with that regulation because they will want to trade in that market, and we will then be left with external regulation and none of our own? Why are the Government not doing something about this?

I think there are two things. First, we are extremely keen, and have set this out in the White Paper, that the regulation of AI in this country should be highly interoperable with international regulation—I think all regulating countries would agree on that. Secondly, I take some issue with the characterisation of AI in this country as unregulated. We have very large areas of law and regulation to which all AI is subject, including data protection, human rights legislation, competition law, equalities law and many other laws. On top of that, we have the recently created central AI risk function, whose role is to identify risks appearing on the horizon, as well as cross-cutting AI risks, and to take that work forward. Beyond that, we have the most concentrated and advanced thinking on AI safety anywhere in the world to take us forward on the pathway towards safe, trustworthy AI that drives innovation.

My Lords, given the noble Viscount’s emphasis on the gathering of evidence and evidence-based regulation, can we anticipate having a researchers’ access to data measure in the upcoming Data Protection and Digital Information Bill?

I thank the noble Baroness for her question and recognise her concern. In order to be sure that I answer the question properly, I undertake to write to her with a full description of where we are and to meet her to discuss further.

My Lords, I declare my technology interests as in the register. Does my noble friend agree that it is at least worth regulating at this stage to require all those developing and training AI to publish all the data and all the IP they use to train that AI, not least to ensure that all IP obligations are complied with? If this approach were taken, it would go a considerable way towards enabling people to understand, and gain explanations of, how the AI is working.

I am pleased to tell my noble friend that, following a request from the Secretary of State, the safety policies of Amazon, Anthropic, Google DeepMind, Inflection, Meta, Microsoft, OpenAI and others have been published and will go into what we might call a race to the top—a competitive approach to boosting AI safety. As for enshrining those practices in regulation, that is something we continue to look at.

My Lords, further to the question from the noble Lord, Lord Holmes, around data—that the power of the AI is in many ways defined by the quality of the data—does the Minister have any concern that the Prime Minister’s friend, Elon Musk, for example, owns a huge amount of sentiment data through Twitter, a huge amount of transportation data through Tesla, and a huge amount of communication data through owning more than half the satellites orbiting the planet? Does he not see that there might be a need to regulate the ownership of data across different sectors?

Indeed. Of course, one of the many issues with regulating AI is that it falls across so many different jurisdictions. It would be very difficult for any one country, including the US, to have a single piece of legislation that acted on the specific example that the noble Lord mentions. That is why it is so important for us to operate on an international basis, and why we continue not just with the AI safety summit at Bletchley Park but also by working closely with the G7 and G20, bodies of the UN, GPAI and others.

My Lords, there is significant public interest in the companies developing artificial intelligence working together on common safety standards, but in doing so they may run the risk of falling foul of competition law. Will the Minister be talking to the Competition and Markets Authority to make sure that one public good, preventing anti-competitive practices, does not impede another public good, the development of common safety standards?

Yes, indeed. It is a really important point that the development of AI as a set of technologies is going to oblige us to work across regulators in a variety of new ways to which we are not yet used. That is indeed one of the functions of the newly formed central AI risk function within DSIT.

My Lords, can I back up the question from the noble Baroness, Lady Kidron, on access to data by research workers, particularly health data? Without access to that data, we will not be able to develop generative AI for applications such as retinal scans, and many other developments in healthcare.

Yes, indeed. In healthcare in this country, we have what may well be the greatest dataset for healthcare analysis in the world. We want to make use of it for analysis purposes and to improve health outcomes for everybody. We do, of course, have to be extremely careful in how we use it, because health data is as private as data can possibly get.