Committee (9th Day)
Relevant document: 28th Report from the Delegated Powers Committee
Clause 49: “Regulated user-generated content”, “user-generated content”, “news publisher content”
Amendment 125
Moved by
125: Clause 49, page 47, line 22, at end insert—
“(c) machine-generated content is to be regarded as user-generated content of a service if—
(i) the creation or use of the machine-generated content involves interacting with user-generated content,
(ii) it takes the form or identity of a user,
(iii) it provides content that constitutes illegal, primary priority content or priority content, or would constitute it if created in another format, or
(iv) a user has in any way facilitated any element of the generation by way of a command, prompt, or any other instruction, however minimal.”
Member’s explanatory statement
This amendment would add machine-generated content to regulated content in the Bill, give meaning to how it could be regarded as ‘user-generated content’ of the service, and allow virtual and augmented reality material to be treated on an equal basis with material in other formats.
My Lords, I rise to introduce this group. On Tuesday in Committee, I said that having reached day 8 of the Committee we had all found our roles; now, I find myself in a different role. The noble Baroness, Lady Kidron, is taking an extremely well-earned holiday and is unable to be in the House today. She has asked me to introduce this group and specifically to speak to Amendment 125 in her name.
I strongly support all the amendments in the group, particularly those that would result in a review, but will limit my words to Amendment 125. I also thank the other co-signatories, the noble Baroness, Lady Finlay, who is in her place, and my noble friend Lord Sarfraz, who made such a compelling speech at Second Reading on the need for the Bill to consider emerging technologies but who is also, sadly, abroad on government business.
I start with something said by Lord Puttnam, and I paraphrase: that we were forbidden from incorporating the word “digital” throughout the whole process of scrutiny of the Communications Act in 2002. As a number of us observed at the time, he said, it was a terrible mistake not to address or anticipate these issues when it was obvious that we would have to return to it all at some later date. The Online Safety Bill is just such a moment: “Don’t close your eyes and hope”, he said, “but look to the future and make sure that it is represented in the Bill”.
With that in mind, this amendment is very modest. I will be listening carefully, as I am sure the noble Baroness, Lady Kidron, will from a distance, to my noble friend the Minister because if each aspect of this amendment is already covered in the Bill, as I suspect he will want to say, then I would be grateful if he could categorically explain how that is the case at the Dispatch Box, in sufficient detail that a future court of law can clearly understand it. If he cannot state that then I will be asking the House, as I am sure the noble Baroness, Lady Kidron, would, to support the amendment’s inclusion in the Bill.
There are two important supporters of this amendment. If the Committee will forgive me, I want to talk briefly about each of them because of the depth of their understanding of these issues. The first is an enforcement officer whom I shall not name, but I and the noble Baroness, Lady Kidron, want to thank him and his team for the extraordinary work that they do, searching out child sexual abuse in the metaverse. The second, whom I will come to in a little bit, is Dr Geoff Hinton, a pioneer of neural networks, most often referred to as “the godfather of AI”, whom the noble Baroness, Lady Kidron, met last week. Both are firm supporters of this amendment.
The amendment is part of a grouping labelled future-proofing but, sadly, this is not in the future. It is with us now. Child sexual abuse in the metaverse is growing phenomenally. Two months ago, at the behest of the Institution of Engineering and Technology, the noble Baroness, Lady Kidron, hosted a small event at which members of a specialist police unit explained to colleagues from both Houses that what they were finding online was amongst the worst imaginable, but was not adequately caught by existing laws. I should just warn those listening to or reading this—I am looking up at the Public Gallery, where I see a number of young people listening to us—that I am about to briefly recount some really horrific stuff from what we saw and heard.
The quality of AI imagery is now at the point where a realistic AI image of a child can be produced. Users are able to produce or order indecent AI images based on a child known to them. Simply by uploading a picture of a next-door neighbour’s child or a family member, or taking a child’s image from social media and putting that face on existing abuse images, they can create a body for that picture or, increasingly, make it 3D and take it into an abuse room. The type of imagery produced can vary from suggestive or naked to penetrative sex; for the most part, I do not think I should be repeating in this Chamber the scenarios that play out.
VR child avatars can be provided with a variety of bespoke abuse scenarios, which the user can then interact with. Tailor-made VR experiences are being advertised for production on demand. They can be made to meet specific fetishes or to feature a specific profile of a child. The production of these VR abuse images is a commercial venture. Among the many chilling facts we learned was that the Meta Quest 2, which is the best-selling VR headset in the UK, links up to an app that is downloaded on to the user’s mobile phone. Within that app, the user can search for other users to follow and engage with—either through the VR headset or via instant messaging in their mobile app. A brief search through the publicly viewable user profiles on this app shows a huge number of profiles with usernames indicative of a sexual interest in children.
Six weeks after the event, the noble Baroness, Lady Kidron, spoke to the same officer. He said that already the technology was a generation on—in just six weeks. The officer made a terrible and terrifying prediction: he said that in a matter of months this violent imagery, based on and indistinguishable from an actual known child, will evolve to include moving 3D imagery and that at that point, the worlds of VR and AI will meet and herald a whole new phase in offending. I will quote this enforcement officer. He said:
“I hate to think where we will be in six months from now”.
While this group is labelled as future-proofing the Bill, I remind noble Lords that in six months’ time, the provisions of the Bill will not have been implemented. So this is not about the future; it is actually about the now.
Even though what I am describing is abhorrent, to some it may appear to be a victimless crime or a thought crime that might take the place of real crimes, since it could be argued that nobody gets hurt. There are three points to say against that. First, evidence shows that rehearsing child-abuse fantasies online radically accelerates the offender pathway, shortening the time between looking at images and abusing a child. Secondly, the relative anonymity of the online world has enabled and supercharged the spread of such content and risks normalising its production and consumption. Thirdly, the current advances in AI allow perpetrators to create and share thousands of images of a child in a matter of minutes. That leaves the police overwhelmed with the impossible task of distinguishing between AI-created children and the real children who are being abused; under the sheer volume of abuse imagery, real victims can remain undiscovered and therefore unreached. This is a perverse and chilling game of whack-a-mole.
A small band of enforcement officers are crying out for our help because they are concerned that existing law does not reach this material and that blurring the role of machine and master risks undermining their ability to enforce the law. While Sections 62 to 69 and Schedule 13 of the Coroners and Justice Act 2009 go some way towards bringing certain computer-generated images into the scope of the law, much of the sexual offences law fails to reach the online world. As a result, the enforcement community is struggling to deal with the new generation of automated and semi-automated systems that create not only abuse images but abusive scenarios at the touch of a button. As the police officer explained to us, the biggest change required is the provision of specific offences covering virtual abuse in the VR social environment, to protect children in those areas against the psychological impact of virtual abuse.
This amendment makes a small change to the definition of “content”, to make clear that machine-generated content is to be regarded as user-generated content of a service, under the following circumstances: first, if the creation or use of the content involves interacting with user-generated content; secondly, if it takes the form or identity of a user; thirdly, if it provides content that would reach the bar of illegal content, primary priority content or priority content in another format; and finally, if a user has in any way facilitated any element of the generation by way of a command, prompt or any other instruction, however minimal. This would go a long way to support the police in their unenviable task.
When my noble friend the Minister responds, I would ask that he confirms that the scope of the Bill—user-to-user services and search—does not fetter law enforcement. We discussed services of limited functionality being out of scope earlier in Committee, when discussing Amendment 2. For example, would a person or an automated process creating this material at scale, with no user-to-user functionality, be out of scope? The concern must be that existing laws covering child sexual abuse do not address the current state of technology, and this Bill may be drawn too narrowly to catch the abuse that is happening at ever-increasing scale.
Finally, this brings me to Dr Geoff Hinton. After a decade at Google, he retired and has chosen to speak freely about his profound worries concerning the future of AI, joining the chorus of those on the front line who are demanding that we regulate it before it is too late. I am a keen and enthusiastic early adopter of new technology, but we should listen very carefully to his concerns. He says that AI systems can learn and provide a compelling view of the world at such speed and scale that, in the hands of bad actors, they will in the very near future obliterate any version of a common reality. A deluge of fake images, videos and texts will be the data upon which future AI-driven communication will be built, leaving all of us unable to distinguish between fact and fiction. That is a very scary view of the world and we should take his professional concern very seriously, particularly when we focus on this Bill and how we protect our children in this world.
Given the scope of the Bill, we obviously will not be able to address every one of the hopes or fears of AI as it stretches out ahead of us, but it is a huge mistake for the Online Safety Bill to pretend that this future is not already with us. In this amendment and the whole group, we are attempting to put in the Bill the requirements to recognise those future dangers. As Dr Hinton has made clear, it is necessary to treat the fake as if it were real today, because we are no longer certain what is fake and what is real. We do a disservice to our children if we do not recognise that reality today.
I appreciate that I have spoken for far too long on this very small amendment. It closes a loophole which means that if machine-generated material is imitating user-to-user behaviour, takes the form of a user, or would in another context meet the bar of illegal primary priority content or priority content, it should be treated as such under the safety duties of the Bill. That is all it does. This would prevent the police standing by as the horrific rise in the use of abuse rooms—which act as a rehearsal for abusing children—continues. It is much needed and an essential first step down this road. I beg to move.
My Lords, I am very grateful to the noble Baroness, Lady Harding, for the way she introduced this group of amendments. I have added my name to Amendment 125 and have tabled probing Amendments 241 and 301 in an attempt to future-proof the Bill. As the noble Baroness has said, this is not the future but today, tomorrow and forever, going forwards.
I hope that there are no children in the Public Gallery, but from my position I cannot see.
There are some children in the Public Gallery.
Then I shall slightly modify some of the things I was going to say.
When this Bill was conceived, the online world was very different from how it is today. It is hard to imagine how it will look in the future. I am very grateful to the noble Baroness, Lady Berridge, and the Dawes Centre for Future Crime at UCL, for information that they have given to me. I am also grateful to my noble friend Lady Kidron, and the enforcement officers who have shared with us images which are so horrific that I wish that I had never seen them—but you cannot unsee what you have seen. I admire how they have kept going and maintained a moral compass in their work.
The metaverse is already disrupting the online world as we know it. By 2024, it is estimated that there will be 1.7 billion mobile augmented-reality user devices worldwide. More than one-fifth of five to 10 year-olds already have a virtual reality headset of their own, or have asked for similar technology as a gift. AI models are also developing quickly. My Amendment 241 would require Ofcom to be alert to the ways in which emerging technologies allow activities that are illegal in the real world to be carried out online, and to identify where the law is not keeping pace with technological developments.
The metaverse seems to have 10 attributes. It is multiuser and multipurpose, content is user-generated, it is immersive, and spatial interactions occur in virtual reality or in physical environments enhanced by augmented reality. Its digital aspects do not expire when the experience ends, and it is multiplatform and interoperable, as users move between platforms. Avatars are involved, and in the metaverse there is ownership of the avatars or other assets such as virtual property, cryptocurrency et cetera. These attributes allow it to be used to create training scenarios for complex situations, such as surgical training for keyhole surgery, where it can improve accuracy rapidly. On the horizon are brain-computer interfaces, which may be very helpful in rehabilitative adaptation after severe neurological damage.
These developments have great potential. However, dangers arise when virtual and augmented reality devices are linked to such things as wearable haptic suits, which allow the user to feel interactions through physical sensation, and teledildonics, which are electronic devices that simulate sexual interaction.
With the development of deep-fake imagery, it is now possible for an individual to order a VR experience of abusing the image of a child whom they know. The computer-generated images are so realistic that they are almost impossible to distinguish from camera-generated ones. An avatar can sexually assault the avatar of a minor, and such an avatar of the minor can be personalised. Worryingly, there have been growing reports of these assaults and rapes happening. Since the intention of VR is to trick the human nervous system into experiencing perceptual and bodily reactions, while such a virtual assault may not involve physical touching, the psychological, neurological and emotional experience can be similar to that of a physical assault.
This fuels sex addiction and violence addiction, and is altering the offender pathway: once the offender has engaged with VR abuse material, there is no desire to go back to 2D material. Offenders report that they want more: in the case of VR, that would be moving to live abuse, as has been said. The time from the development of abnormal sexual desires to real offending is shortened as the offender seeks ever-increasing and diverse stimulation to achieve the same reward. Through Amendment 125, such content would be regarded as user-generated.
Under Amendment 241, Ofcom could suggest ways in which Parliament may want to update the current law on child pornography to catch such deep-fake imagery, as these problematic behaviours are illegal in the real world but do not appear to be illegal online or in the virtual world.
Difficulties also arise over aspects of terrorism. It is currently a criminal offence to attend a terrorist training ground. Can the Minister confirm that Amendment 136C, which we have debated and which will be moved in a later group, would make attending a virtual training ground illegal? How will Ofcom be placed to identify and close any loopholes?
The Dawes Centre for Future Crime has identified 31 unique crime threats or offences which are risks in the metaverse, particularly relating to child sexual abuse material, child grooming, investment scams, hate crime, harassment and radicalisation.
I hope the Minister can confirm that the Bill already applies to the metaverse, with its definition of user-to-user services and technology-neutral terminology, and that its broad definition of “encountering” includes experiencing content through haptic suits or virtual or augmented reality, via the technology-neutral expression “or other automated tool”. Can the Minister also confirm that the changes made in the other place to Clause 85 require providers of metaverse services to consider the level of risk of the service being used for the commission or facilitation of a priority offence?
The welcome addition to the Bill of a risk assessment duty, however, should be broadened to include offences which are not only priority offences. I ask the Minister: will the list of offences in Schedules 5 to 7 to the Bill be amended to include the option of adding to this list to cover other harmful offences such as sexual offences against adults, impersonation scams, and cyber-physical attacks such as cyber burglary, which can lead to planned burglary, attacks on key infrastructure and assault?
The ability to expand the risk assessment criteria could future-proof the Bill against such offences by keeping the list open, rather than closed as it is at the moment, to other serious offences committed in user-to-user or combined service providers. Such duties should apply across all services, not only those in category 1, because the smaller platforms, which are not covered by empowerment duties, may present a particularly high risk of illegal content and harmful behaviours.
Can the Minister therefore please tell us how content that is illegal in the real world will be reported, and how complaints can be made when it is encountered, if it is not a listed priority offence in the Bill? Will the Government expand the scope to cover not only illegal content, as defined in Clauses 207 and 53, but complex activities and interactions that are possible in the metaverse? How will the list of priority offences be expanded? Will the Government amend the Bill to enable Ofcom to take a risk-based approach to identifying who becomes classified as a category 1 provider?
I could go on to list many other ways in which our current laws will struggle to remain relevant against the emerging technologies. The list’s length shows the need for Ofcom to be able to act and report on such areas—and that Parliament must be alive to the need to stay up to date.
My Lords, I am grateful to the noble Baroness, Lady Finlay of Llandaff, for tempering her remarks. On tempering speeches and things like that, I can inform noble Lords that the current school group have been escorted from the Chamber, and no further school groups will enter for the duration of the debate on this group of amendments.
My Lords, I rise to support Amendment 241, in the name of the noble Baroness, Lady Finlay, as she mentioned. I also spoke on the Private Member’s Bill that the noble Baroness previously brought before your Lordships’ House, which was in a similar vein regarding future-proofing.
The particular issue in Amendment 241 that I wish to address is
“the extent to which new communications and internet technologies allow for behaviours which would be in breach of the law if the equivalent behaviours were committed in the physical world”.
The use of “behaviours” brings into sharp focus the applicability of the Online Safety Bill in the metaverse. Since that Private Member’s Bill, I have learned much about future-proofing from the expert work of the Dawes Centre for Future Crime at UCL. I reached out to the centre as it seemed to me that some conduct and crimes in the physical world would not be criminal if committed in the metaverse.
I will share the example, which seems quite banal, that led me to contact them. The office meeting now takes place in the metaverse. All my colleagues are represented by avatars. My firm has equipped me with the most sophisticated haptic suit. During the meeting, the avatar of one of my colleagues slaps the bum of my avatar. The haptic suit means that I have a physical response to that, to add to the fright and shock. Even without such a suit, I would be shocked and frightened. Physically, I am, of course, working in my own home.
My Lords, I apologise to my noble friend. I ask that we pause the debate to ask this school group to exit the Chamber. We do not think that the subject matter and content will be suitable for that audience. I am very sorry. The House is pausing.
In this moment while we pause, I congratulate the noble Lord, the Government Whip, for being so vigilant: some of us in the Chamber cannot see the whole Gallery. It is appreciated.
I, too, thank my noble friend the Government Whip. I apologise too if I have spoken out of discourtesy in the Committee: I was not sure whose name was on which amendment, so I will continue.
Physically, I am, of course, working in my home. If that behaviour had happened in the office, it would be an offence, an assault: “intentional or reckless application of unlawful force to another person”. It will not be an offence in the metaverse and it is probably not harassment because it is not a course of conduct.
Although the basic definition of user-to-user content covers the metaverse—as does “encountering” which, as has been mentioned in relation to content under Clause 207, is broad enough to cover haptic suits—the restriction to illegal content could be problematic, as the metaverse is a complex of live interactions that mimics real life and its behaviours, including criminal ones. Also, the avatar of an adult could sexually assault the avatar of a child in the metaverse, and with haptic technologies this would not be just a virtual experience. Potentially even more fundamentally than Amendment 125, the Bill is premised on the internet being a solely virtual environment when it comes to content that can harm. But what I am seeking to outline is that conduct can also harm.
I recognise that we cannot catch everything in this Bill at this moment. This research is literally hot off the press; it is only a few weeks old. At the very least, it highlights the need for future-proofing. I am aware that some of the issues I have highlighted about the fundamental difference between conduct and content refer to clauses noble Lords may already have debated; it is just happenstance that the research came out when it did. However, I believe that these points are significant. I would be grateful if the Minister would meet the Dawes Centre urgently to consider whether there are further changes the Government need to make to the Bill to ensure that it covers the harms I have outlined.
My Lords, I have put my name to Amendments 195, 239 and 263. I also strongly support Amendment 125 in the name of my noble friend Lady Kidron.
During this Committee there have been many claims that a group of amendments is the most significant, but I believe that this group is the most significant. This debate comes after the Prime Minister and the Secretary of State for Science, Innovation and Technology met the heads of leading AI research companies in Downing Street. The joint statement said:
“They discussed safety measures … to manage risks”
and called for
“international collaboration on AI safety and regulation”.
Surely this Bill is the obvious place to start responding to those concerns. If we do not future-proof this Bill against the changes in digital technology, which are ever increasing at an ever-faster rate, it will be obsolete even before it is implemented.
My greatest concern is the arrival of AI. The noble Baroness, Lady Harding, has reminded us of the warnings from the godfather of AI, Geoffrey Hinton. If he is not listened to, who on earth should we be listening to? I wholeheartedly support Amendment 125. Machine-generated content is present in so much of what we see on the internet, and its presence is increasing daily. It is the future, and it must be within scope of this Bill. I am appalled by the examples that the noble Baroness, Lady Harding, has brought before us.
In the Communications and Digital Committee inquiry on regulating the internet, we decided that horizon scanning was so important that we called for a digital authority to be created which would look for harms developing in the digital world, assess how serious a threat they posed to users and develop a regulatory response. The Government did not take up these suggestions. Instead, Ofcom has been given the onerous task of enforcing the triple shield which, under this Bill, will protect users to different degrees into the future.
Amendment 195 in the name of the right reverend Prelate the Bishop of Oxford will ensure that Ofcom has knowledge of how well the triple shield is working, which must be essential. Surveys of thousands of users undertaken by companies such as Kantar give an invaluable snapshot of what is concerning users now. These must be fed into research by Ofcom to ensure that future developments across the digital space are monitored, updated and brought to the attention of the Secretary of State and Parliament on a regular basis.
Amendment 195 will reveal trends in harms which might not be picked up by Ofcom under the present regime. It will look at the risk arising for individuals from the operation of Part 3 services. Clause 12 on user empowerment duties has a list of content and characteristics from which users can protect themselves. However, the characteristics for which or content with which users can be abused will change over time and these changes need to be researched, anticipated and implemented.
This Bill has proved in its long years of gestation that it takes time to change legislation, while changes on the internet take just minutes or are already here. The regime set up by these future-proofing amendments will at least go some way to protecting users from these fast-evolving harms. I stress to your Lordships’ Committee that this is very much precautionary work. It should be used to inform the Secretary of State of harms which are coming down the line. I do not think it will give power automatically to expand the scope of harms covered by the regime.
Amendment 239 inserts a new clause for an Ofcom future management of risks review. This will help feed into the Secretary of State’s review regime set out in Clause 159. Clause 159(3)(a) currently looks at ensuring that regulated services are operating using systems and processes which, so far as relevant, are minimising the risk of harms to individuals. The wording appears to mean that the Secretary of State will be viewing all harms to individuals. I would be grateful if the Minister could explain to the Committee the scope of the harms set out in Clause 159(3)(a)(i). Are they meant to cover only the harms of illegality and harms to children, or are they part of a wider examination of the harms regime to see whether it needs to be contracted or expanded? I would welcome an explanation of the scope of the Secretary of State’s review.
The real aim of Amendment 263 is to ensure that the Secretary of State looks at research work carried out by Ofcom. I am not sure how politicians will come to any conclusions in the Clause 159 review unless they are required to look at all the research published by Ofcom on future risk. I would like the Minister to explain what research the Secretary of State would rely on for this review unless this amendment is accepted. I hope Amendment 263 will also encourage the Secretary of State to look at possible harms not only from content, but also from the means of delivering this content.
This aim was the whole point of Amendment 261, which has already been debated. However, it needs to be borne in mind when considering that harms come not just from content, but also from the machine technology which delivers it. Every day we read about new developments and threats posed by a fast-evolving internet. Today it is concerns about ChatGPT and the race for the most sophisticated artificial intelligence. The amendments in this group will provide much-needed reinforcement to ensure that the Online Safety Bill remains a beacon for continuing safety online.
My Lords, I shall speak in favour of Amendments 195, 239 and 263, tabled in the names of my right reverend friend the Bishop of Oxford, the noble Lord, Lord Clement-Jones, and the noble Viscount, Lord Colville of Culross, who I thank for his comments.
My right reverend friend the Bishop of Oxford regrets that he is unable to attend today’s debate. I know he would have liked to be here. My right reverend friend tells me that the Government’s Centre for Data Ethics and Innovation, of which he was a founding member, devoted considerable resource to horizon scanning in its early years, looking for the ways in which AI and tech would develop across the world. The centre’s analysis reflected a single common thread: new technologies are developing faster than we can track them and they bring with them the risk of significant harms.
This Bill has also changed over time. It now sets out two main duties: the illegal content duty and the children duty. These duties have been examined and debated for years, including by the joint scrutiny committee. They are refined and comprehensive. Risk assessments are required to be “suitable and sufficient”, which is traditional language from 20 years of risk-based regulation. It ensures that the duties are fit for purpose and proportionate. The duties must be kept up to date and in line with any service changes. Recent government amendments now helpfully require companies to report to Ofcom and publish summaries of their findings.
However, in respect of harms to adults, in November last year the Government suddenly took a different tack. They introduced two new groups of duties as part of a novel triple shield framework, supplementing the duty to remove illegal harms with a duty to comply with their own terms of service and a duty to provide user empowerment tools. These new duties are quite different in style to the illegal content and children duties. They have not benefited from the prior years of consultation.
As this Committee’s debates have frequently noted, there is no clear requirement on companies to assess in the round how effective their implementation of these new duties is or to keep track of their developments. The Government have changed this Bill’s system for protecting adults online late in the day, but the need for risk assessments, in whatever system the Bill is designed around, has been repeated again and again across Committee days. Even at the close of day eight on Tuesday, the noble Lords, Lord Allan of Hallam and Lord Clement-Jones, referred explicitly to the role of risk assessment in validating the Bill’s systems of press reforms. Surely this persistence across days and groups of debate reflects the systemically pivotal role of risk assessments in what is, after all, meant to be a systems and processes rather than a content-orientated Bill.
But it seems that many people on many sides of this Committee believe that an important gap in risk assessment for harms to adults has been introduced by these late changes to the Bill. My colleague the right reverend Prelate is keen that I thank Carnegie UK for its work across the Bill, including these amendments. It notes:
“Harms to adults which might trickle down to become harms to children are not assessed in the current Bill”.
The forward-looking parts of its regime need to be strengthened to ensure that Parliament and the Secretary of State review new ways in which harms manifesting as technology race along, and to ensure that they then have the right advice for deciding what to do about them. To improve that advice, Ofcom needs to risk assess the future and then to report its findings.
As the Committee can see, Amendment 195 is drawn very narrowly, out of respect for concerns about freedom of expression, even though the Government have still not explained how risk assessment poses any such threat. Ofcom would be able to request information from companies, using its information-gathering powers in Clause 91, to complete its future-proofing risk assessment. That is why, as Carnegie again notes,
“A risk assessment required of OFCOM for the purposes of future proofing alone could fill this gap”
in the Bill’s system,
“without even a theoretical threat to freedom of expression”.
Amendment 239 would require Ofcom to produce a forward-looking report, based on a risk assessment, to inform the Secretary of State’s review of the regime.
Amendment 263 would complete this systemic implementation of risk assessment by ensuring that future reviews of the regime by the Secretary of State include a broad assessment of the harms arising from regulated services, not just regulated content. This amendment would ensure ongoing consideration of risk management, including whether the regime needs expanding or contracting. I urge the Minister to support Amendments 195, 239 and 263.
My Lords, like others, I thank the Whips for intervening to protect children from hearing details that are not appropriate for the young. I have to say that I was quite relieved because I was rather squirming myself. Over the last two days of Committee, I have been exposed to more violent pornographic imagery than any adult, never mind a child, should be exposed to. I think we can recognise that this is certainly a challenging time for us.
I do not want any of the comments I will now make to be seen as minimising understanding of augmented reality, AI, the metaverse and so on, as detailed so vividly by the noble Baronesses, Lady Harding and Lady Finlay, in relation to child safety. However, I have some concerns about this group, in terms of proportionality and unintended outcomes.
Amendment 239, in the names of the right reverend Prelate the Bishop of Oxford, the noble Lord, Lord Clement-Jones, and the noble Viscount, Lord Colville of Culross, sums up some of my concerns about a focus on future-proofing. This amendment would require Ofcom to produce reports about future risks, which sounds like a common-sense demand. But my question is about us overly focusing on risks and never on opportunities. There is a danger that the Bill will end up recommending that we see these new technologies only in a negative way, and that we in fact give more powers to expand the scope of what counts as harmful content, in a way that stifles speech.
Beyond the Bill, I am more generally worried about what seems to be becoming a moral panic about AI. The precautionary principle is being adopted, which could mean stifling innovation at source and preventing the development of great technologies that could be of huge benefit to humanity. The over-focus on the dangers of AI and augmented reality could mean that we ignore the potential large benefits. For example, if we have AI, everyone could have an immediately responsive GP in their pocket—goodness knows that, for those trying to get an appointment, that could be of great use and benefit. It could mean that students have an expert tutor in every subject, just one message away. The noble Baroness, Lady Finlay, spoke about the fantastic medical breakthroughs that augmented reality can bring to handling neurological damage. Last night, I cheered when I saw how someone who has never been able to walk now can, through those kinds of technologies. I thought, “Isn’t this a brilliant thing?” So all I am suggesting is that we have to be careful that we do not see these new technologies only as tools for the most perverted form of activity among a small minority of individuals.
I note, with some irony, that fewer qualms were expressed by noble Lords about the use of AI when it was proposed to scan and detect speech or images in encrypted messages. As I argued at the time, this would be a threat to WhatsApp, Signal and so on. Clauses 110 and 124 have us using AI as a blunt proactive technology of surveillance, despite the high risks of inaccuracy, error and false flags. But there was great enthusiasm for AI then, when it was having an impact on individuals’ freedom of expression—yet, here, all we hear are the negatives. So we need to be balanced.
I am also concerned about Amendment 125, which illustrates the problem of seeing innovation only as a threat to safety and a potential problem. For example, if the Bill considers AI-generated content to be user-generated content, only large technology companies will have the resources—lawyers and engineers—necessary to proceed while avoiding crippling liability.
In practice, UK users risk being blocked out from new technologies if we are not careful about how we regulate here. For example, users in the European Union currently cannot access Google Bard AI assistant because of GDPR regulations. That would be a great loss because Google Bard AI is potentially a great gain. Despite the challenges of the likes of ChatGPT and Bard AI that we keep reading about, with people panicking that this will lead to wide-scale cheating in education and so on, this has huge potential as a beneficial technology, as I said.
I have mentioned that one of the unintended consequences—it would be unintended—of the whole Bill could be that the UK becomes a hostile environment for digital investment and innovation. So start-ups that have been invested in—like DeepMind, a Google-owned and UK-based AI company—could be forced to leave the UK, doing huge damage to the UK’s digital sector. How can the UK be a science and technology superpower if we end up endorsing anti-innovation, anti-progress and anti-business measures by being overly risk averse?
I have the same concerns about Amendment 286, which requires periodic reviews of new technology content environments such as the metaverse and other virtual and augmented reality settings. I worry that it will not be attractive for technology companies to confidently invest in new technologies if there is this constant threat of new regulations and new problems on the horizon.
I have a query that mainly relates to Amendment 125 but that is also more general. If virtual and augmented reality actually involve user-to-user interaction, as in the metaverse, are they not already covered in the Bill? Why do we need to add them in? The noble Baroness, Lady Harding, said that it has got to the point where we are not able to distinguish fake from real, and augmented reality from reality. But she concludes that that means that we should treat fake as real, which seems to me to rather muddy the waters and make it a fait accompli. I personally—
I am sorry to interrupt, but I will make a clarification; the noble Baroness is misinterpreting what I said. I was actually quoting the godfather of AI and his concerns that we are fast approaching a space where it will be impossible—I did not say that it currently is—to distinguish between a real child being abused and a machine learning-generated image of a child being abused. So, first, I was quoting the words of the godfather of AI, rather than my own, and, secondly, he was looking forward—only months, not decades—to a very real and perceived threat.
I personally think that it is a pessimistic view of the future to suggest that humanity cannot rise to the task of distinguishing between deep fakes and real images. Organising all our lives, laws and liberties around the deviant predilections of a minority of sexual offenders, on the basis that none of us will be able to tell the difference in the future when it comes to that kind of activity, is rather dangerous for freedom and innovation.
My Lords, I will speak very briefly. I could disagree with much of what the noble Baroness just said, but I do not need to go there.
What particularly resonates with me today is that, since I first entered your Lordships’ House at the tender age of 28 in 1981, this is the first time I can ever remember us having to rein back what we are discussing because of the presence of young people in the Public Gallery. I reflect on that, because it brings home the gravity of what we are talking about and its prevalence; we cannot run away or hide from it.
I will ask the Minister about the International Regulatory Cooperation for a Global Britain: Government Response to the OECD Review of International Regulatory Cooperation of the UK, published 2 September 2020. He will not thank me for that, because I am sure that he is already familiar and word-perfect with this particular document, which was pulled together by his noble friend, the noble Lord, Lord Callanan. I raise this because, to think that we can in any way, shape or form, with this piece of legislation, stem the tide of what is happening in the online world—which is happening internationally on a global basis and at a global level—by trying to create regulatory and legal borders around our benighted island, is just for the fairies. It is not going to happen.
Can the Minister tell us about the degree to which, at an international level, we are proactively talking to, and learning from, other regulators in different jurisdictions, which are battling exactly the same things that we are? To concentrate the Minister’s mind, I will point out what the noble Lord, Lord Callanan, committed the Government to doing nearly three years ago. First, in relation to international regulatory co-operation, the Government committed to
“developing a whole-of-government IRC strategy, which sets out the policies, tools and respective roles of different departments and regulators in facilitating this; … developing specific tools and guidance to policy makers and regulators on how to conduct IRC; and … establishing networks to convene international policy professionals from across government and regulators to share experience and best practice on IRC”.
I am sure that, between now and when he responds, he will be given a detailed answer by the Bill team, so that he can tell us exactly where the Government, his department and Ofcom are in carrying out the commitments of the noble Lord, Lord Callanan.
My Lords, although I arrived a little late, I will say, very briefly, that I support the amendments wholeheartedly. I support them because I see this as a child protection issue. I believe that people viewing AI-generated abuse imagery will be led to go out and find real children to sexually abuse. I will not take up any more time, but I wholeheartedly agree with everything that has been said, apart from what the noble Baroness, Lady Fox, said. I hope that the Minister will look very seriously at the amendments and take them into consideration.
My Lords, on behalf of my noble friend Lord Clement-Jones, I will speak in support of Amendments 195, 239, 263 and 286, to which he added his name. He wants me to thank the Carnegie Trust and the Institution of Engineering and Technology, which have been very helpful in flagging relevant issues for the debate.
Some of the issues in this group of amendments will range much more widely than simply the content we have before us in the Online Safety Bill. The right reverend Prelate the Bishop of Chelmsford is right to flag the question of a risk assessment. People are flagging to us known risks. Once we have a known risk, it is incumbent on us to challenge the Minister to see whether the Government are thinking about those risks, regardless of whether the answer is something in the Online Safety Bill or that there needs to be amendments to wider criminal law and other pieces of legislation to deal with it.
Some of these issues have been dealt with for a long time. If you go back and look at the Guardian for 9 May 2007, you will see the headline,
“Second Life in virtual child sex scandal”.
That case was reported in Germany about child role-playing in Second Life, which is very similar to the kind of scenarios described by various noble Lords in this debate. If Second Life was the dog that barked but did not bite, we are in quite a different scenario today, not least because of the dramatic expansion in broadband technology, for which we can thank the noble Baroness, Lady Harding, in her previous role. Pretty much everybody in this country now has incredible access, at huge scale, to high-speed broadband, which allows those kinds of real life, metaverse-type environments to be available to far more people than was possible with Second Life, which tended to be confined to a smaller group.
The amendments raise three significant groups of questions: first, on scope, and whether the scope of the Online Safety Bill will stretch to what we need; secondly, on behaviour, including the kinds of new behaviours, which we have heard described, that could arise as these technologies develop; and, finally, on agency, which speaks to some of the questions raised by the noble Baroness, Lady Fox, on AIs, including the novel questions about who is responsible when something happens through the medium of artificial intelligence.
On scope, the key question is whether the definition of “user-to-user”, which is at the heart of the Bill, covers everything that we would like to see covered by the Bill. Like the noble Baroness, Lady Harding, I look forward to the Minister’s response; I am sure that he has very strongly prepared arguments on that. We should take a moment to give credit to the Bill’s drafters for coming up with these definitions for user-to-user behaviours, rather than using phrases such as, “We are regulating social media or specific technology”. It is worth giving credit, because a lot of thought has gone into this, over many years, with organisations such as the Carnegie Trust. Our starting point is a better starting point than many other legislative frameworks which list a set of types of services; we at least have something about user-to-user behaviours that we can work with. Having said that, it is important that we stress-test that definition. That is what we are doing today: we are stress-testing, with the Minister, whether the definition of “user-to-user” will still apply in some of the novel environments.
It certainly seems likely—and I am sure that the Minister will say this—that a lot of metaverse activity would be in scope. But we need detailed responses from the Minister to explain why the kinds of scenario that have been described—if he believes that this is the case; I expect him to say so—would mean that Ofcom would be able to demand things of a metaverse provider under the framework of the user-to-user requirements. Those are things we all want to see, including the risk assessments, the requirement to keep people away from illegal content, and any other measures that Ofcom deems necessary to mitigate the risks on those platforms.
It will certainly be useful for the Minister to clarify one particular area. Again, we are fortunate in the UK that pseudo-images of child sexual abuse are illegal and have been illegal for a long time. That is not the case in every country around the world, and the noble Lord, Lord Russell, is quite right to say that this is an area where we need international co-operation. Having dealt with this on the platforms, I know that some countries have actively chosen not to criminalise pseudo-images; others just have not considered it.
In the UK, we were ahead of the game in saying, “If it looks like a photo of child abuse, we don’t care whether you created it on Photoshop, or whatever—it is illegal”. I hope that the Minister can confirm that avatars in metaverse-type environments would fall under that definition. My understanding is that the legislation refers to photographs and videos. I would interpret an avatar or activity in a metaverse as a photo or video, and I hope that is what the Government’s legal officers are doing.
Again, it is important in the context of this debate and the exchange that we have just had between the noble Baronesses, Lady Harding and Lady Fox, that people out there understand that they do not get away with it. If you are in the UK and you create a child sexual abuse image, you can be taken to court and go to prison. People should not think that, if they do it in the metaverse, it is okay—it is not okay, and it is really important that that message gets out there.
This brings us to the second area of behaviours. Again, some of the behaviours that we see online will be extensions of existing harms, but some will be novel, based on technical capabilities. Some of them we should just call by their common or garden term, which is sexual harassment. I was struck by the comments of the noble Baroness, Lady Berridge, on this. If people go online and start approaching other people in sexual terms, that is sexual harassment. It does not matter whether it is happening in a physical office, on public transport, on traditional social media or in the metaverse—sexual harassment is wrong and, particularly when directed at minors, a really serious offence. Again, I hope that all the platforms recognise that and take steps to prevent sexual harassment on their platforms.
That is quite a lot of the activity that people are concerned about, but others are much more complex and may require updates to legislation. Those are particularly activities such as role-playing online, where people play roles and carry out activities that would be illegal if done in the real world. That is particularly difficult when it is done between consenting adults, who choose to carry out a role-playing activity that replicates an illegal activity were it to take place in the real world. That is hard—and those with long memories may remember a group of cases around Operation Spanner in the 1990s, whereby a group of men was prosecuted for consensual sadomasochistic behaviour. The case went backwards and forwards, but it spoke to something that the noble Baroness, Lady Fox, may be sympathetic to—the point at which the state should intervene on sexual activities that many people find abhorrent but which take place between consenting adults.
In the context of the metaverse, I see those questions coming front and centre again. There are all sorts of things that people could role-play in the metaverse, and we will need to take a decision on whether the current legislation is adequate or needs to be extended to cater for the fact that it now becomes a common activity. Also important is the nature of it. The fact that it is so realistic changes the nature of an activity; you get a gut feeling about it. The role-playing could happen today outside the metaverse, but once you move it in there, something changes. Particularly when children are involved, it becomes something that should be a priority for legislators—and it needs to be informed by what actually happens. A lot of what the amendments seek to do is to make sure that Ofcom collects the information that we need to understand how serious these problems are becoming and whether they are, again, something that is marginal or something that is becoming mainstream and leading to more harm.
The third and final question that I wanted to cover is the hardest one—the one around agency. That brings us to thinking about artificial intelligence. When we try to assign responsibility for inappropriate or illegal behaviour, we are normally looking for a controlling mind. In many cases, that will hold true online as well. I know that the noble Lord, Lord Knight of Weymouth, is looking at bots—and with a classic bot, you have a controlling mind. When the bots were distributing information in the US election on behalf of Russia, that was happening on behalf of individuals in Russia who had created those bots and sent them out there. We still had a controlling mind, in that instance, and a controlling mind can be prosecuted. We have that in many instances, and we can expect platforms to control them and expect to go after the individuals who created the bots in the same way that we would go after things that they do as a first party. There is a lot of experience in the fields of spam and misinformation, where “bashing the bots” is the daily bread and butter of a lot of online platforms. They have to do it just to keep their platforms safe.
We can also foresee a scenario with artificial intelligence whereby it is less obvious that there is a controlling mind or who the controlling mind should be. I can imagine a situation whereby an artificial intelligence has created illegal content, whether that is child sexual abuse material or something else that is in the schedule of illegal content in the Bill, without the user having expected it to happen or the developer having believed or contemplated that it could happen. Let us say that the artificial intelligence goes off and creates something illegal, and that both the user and the developer can show the question that they asked of the artificial intelligence and show how they coded it, showing that neither of them intended for that thing to happen. In the definition of artificial intelligence, it has its own agency in that scenario. The artificial intelligence cannot be fined or sent to prison. There are some things that we can do: we can try to retrain it, or we can kill it. There is always a kill switch; we should never forget that with artificial intelligence. Sam Altman at OpenAI can turn off ChatGPT if it is behaving in an illegal way.
There are some really important questions around that issue. There is the liability for the specific instance of the illegality happening. Who do we hold liable? Even if everyone says that it was not their intention, is there someone that we can hold liable? What should the threshold be at which we can execute that death sentence on the AI? If an AI is being used by millions of people and on a small number of occasions it does something illegal, is that sufficient? At what point do we say that the AI is rogue and that, effectively, it needs to be taken out of operation? Those are much wider questions than we are dealing with immediately in the Bill, but I hope that the Minister can at least point to what the Government are thinking about these kinds of legal questions, as we move from a world of user-to-user engagement to user-to-user-to-machine engagement, when that machine is no longer a creature of the user.
I have had time just to double-check the offences. The problem that exists—and it would be helpful if my noble friend the Minister could confirm this—is that the criminal law is defined in terms of the person. It is not automatic that sexual harassment, particularly if you do not have a haptic suit on, would actually fall within the criminal law, as far as I understand it, which is why I am asking the Minister to clarify. That was the point that I was making. Harassment per se also needs a course of conduct, so a single touch of your avatar of a sexual nature clearly falls outside criminal law. That is the point of clarification that we might need on how the criminal law is framed at the moment.
I am grateful to the noble Baroness. That is very helpful.
That is exactly the same issue with child sexual abuse images—it is about the way in which criminal law is written. Not surprisingly, it is not up to date with evolution of technology.
I am grateful for that intervention as well. That summarises the core questions that we have for the Minister. Of the three areas that we have for him, the first is the question of scope and the extent to which he can assure us that the Bill as drafted will be robust in covering the metaverse and bots, which are the issues that have been raised today. The second is on behaviours, and relates to the two interventions that we have just had. We have been asking whether the criminality of behaviours that are criminal today will stretch to new, similar forms of behaviour taking place in new environments—let us put it that way. The behaviour, the intent and the harm are the same, but the environment is different. We want to understand the extent to which the Government are thinking about that, where that thinking is happening and how confident they are that they can deal with that.
Finally, on the question of agency, how do the Government expect to deal with the fact that we will have machines operating in a user-to-user environment when the connection between the machine and another individual user is qualitatively different from anything that we have seen before? Those are just some small questions for the Minister on this Thursday afternoon.
My Lords, the debate on this group has been a little longer, deeper and more important than I had anticipated. It requires all of us to reflect before Report on some of the implications of the things we have been talking about. It was introduced masterfully by the noble Baroness, Lady Harding, and her comments—and those from the noble Baronesses, Lady Finlay and Lady Berridge—were difficult to listen to at times. I also congratulate the Government Whip on the way he handled the situation so that innocent ears were not subject to some of that difficult listening. But the questions around the implications of virtual reality, augmented reality and haptic technology are really important, and I hope the Minister will agree to meet with the noble Baroness, Lady Berridge, and the people she referenced to reflect on some of that.
The noble Baroness, Lady Fox, raised some of the right questions around the balance of this debate. I am a technology enthusiast, so I will quote shortly from my mobile phone, which I use for good, although a lot of this Bill is about how technology is used for bad. I am generally of the view that we have a responsibility to put some safety rails around this technology. I know that the noble Baroness agrees, in respect of children in particular. As ever, in responding to her, I end up saying “It’s all about balance” in the same way as the Minister ends up saying “It’s all about unintended consequences”.
Amendments 283ZZA and 283ZZB in my name are, as the noble Lord, Lord Allan, anticipated, about who controls autonomous bots. I was really grateful to hear his comments, because I put down the amendments on a bit of a hunch without being that confident that I understood what I was talking about technically. He understands what he is talking about much better than I do in this regard, so it is reassuring that I might be on to something of substance.
I was put on to it by reading a New York Times article about Geoffrey Hinton, now labelled the “Godfather of AI”. The article stated:
“Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyse. This becomes an issue, he said, as individuals and companies allow AI systems not only to generate their own computer code but actually run that code on their own”.
As a result, I went to OpenAI’s ChatGPT and asked whether it could create code. Of course, it replied that it could help me with creating code. I said, “Can you code me a Twitter bot?” It said, “Certainly, I can help you create a basic Twitter bot using Python. Here is an example of a Twitter bot that posts tweets”. Then I got all the instructions on how to do it. The AI will help me go on and create something that can then begin to generate autonomous behaviours and activity. It is readily available to all of us now, and that should cause us some concern.
The Bill certainly needs to clarify—as the amendment tabled by the noble Baroness, Lady Kidron, and introduced so well by the noble Baroness, Lady Harding, goes to—whether or not a bot is a user. If a bot is a user and the Minister can assure us of that, things get a lot easier. But given that it is possible to code a realistic avatar generating its own content and behaviour in the metaverse, the core question I am driving at is: who is responsible for that behaviour? Is it the person who is deemed to be controlling it, as it says in Clause 170(7), which talks about
“a person who may be assumed to control the bot or tool”?
As the noble Lord, Lord Allan, said, that is not always going to be straightforward once the AI itself starts to generate behaviours that are not expected by the person who might be perceived to have controlled it. No one really controls it; the creator does not necessarily control it. I am just offering the simple amendment “or owns it” to allow some legal culpability to be clarified. It might be that the supplier of the virtual environment is culpable. These are questions to which I am seeking answers from the Minister with my amendment, so that we get clarity on how Ofcom is supposed to regulate all of these potential harms in the future.
Some months ago, I went to a Speaker’s Lecture given by Stuart Russell, who delivered the Reith Lectures on AI. He talked about the programming of an AI-powered vacuum cleaner that was asked to clear up as much dirt as possible. What then plays out is that the vacuum cleaner picks a bit of dirt up off the carpet, spews it out and picks it up again, because that is how it maximises the objective it was programmed with. It is very difficult to anticipate the behaviour of AI if you do not get the instructions exactly right, and that is the core of what we are worried about. Again, when I asked ChatGPT to give me some guidance on a speaking note for this question, it was quite helpful in also pointing me towards the embedded danger of bias and inequity. The AI is trained on data; we know a certain amount about the bias of data, but it is difficult to anticipate how that will play out as the AI feeds on and generates its own data.
The equity issues that can then flow are something that we need to be confident this legislation will be able to deal with. As the right reverend Prelate the Bishop of Chelmsford reminded us, when the legal but harmful elements of the Bill were taken out between draft stage and publication, we lost the assessment of future risk that was previously in place, which I think was an unintended consequence of taking those things out. It would be great to see that back, as Amendments 139 and 195 from the right reverend Prelate the Bishop of Oxford suggest. The reporting that the noble Baroness, Lady Finlay, is proposing in her amendments is important in giving us as Parliament a sense of how this is going. My noble friend Lord Stevenson tabled Amendment 286 to pay particular regard to the metaverse, and I support that.
Ultimately, the key test for the Minister is, as others have said, that tech is changing really fast. It is changing the online environment and our relationship with it as humans very quickly indeed; the business models will change really quickly as a result and they, by and large, are likely to drive quite a lot of the platform behaviour. But can the regulator, as things are currently set out in this legislation, react and change quickly enough in response to that highly dynamic environment? Can we anticipate that what is inconceivable at the moment is going to be regulatable by this Bill? If not, we need to make sure that Parliament has opportunities to revisit this. As I have said before, I strongly support post-legislative scrutiny; I personally think a permanent Joint Committee of both Houses around digital regulation, so that we have some sustained body of expertise of parliamentarians in both Houses to keep up with this, would be extremely useful to Parliament.
As a whole, I think these amendments are really helpful to the Minister and to Parliament in pointing us towards where we can strengthen the future-proofing of the Bill. I look forward to the Minister’s response.
My Lords, this has been a grim but important debate to open the Committee’s proceedings today. As my noble friend Lady Harding of Winscombe and others have set out, some of the issues and materials about which we are talking are abhorrent indeed. I join other noble Lords in thanking my noble friend Lord Harlech for his vigilance and consideration for those who are watching our proceedings today, to allow us to talk about them in the way that we must in order to tackle them, but to ensure that we do so sensitively. I thank noble Lords for the way they have done that.
I pay tribute also to those who work in this dark corner of the internet to tackle these harms. I am pleased to reassure noble Lords that the Bill has been designed in a way that responds to emerging and new technologies that may pose a risk of harm. In our previous debates, we have touched on explicitly naming certain technologies and user groups or making aspects of the legislation more specific. However, one key reason why the Government have been resistant to such specificity is to ensure that the legislation remains flexible and future-proofed.
The Bill has been designed to be technology-neutral in order to capture new services that may arise in this rapidly evolving sector. It confers duties on any service that enables users to interact with each other, as well as search services, meaning that any new internet service that enables user interaction will be caught by it.
Amendment 125, tabled by the noble Baroness, Lady Kidron—whose watchful eye I certainly feel on me even as she takes a rare but well-earned break today—seeks to ensure that machine-generated content, virtual reality content and augmented reality content are regulated content under the Bill. I am happy to confirm to her and to my noble friend Lady Harding who moved the amendment on her behalf that the Bill is designed to regulate providers of user-to-user services, regardless of the specific technologies they use to deliver their service, including virtual reality and augmented reality content. This is because any service that allows its users to encounter content generated, uploaded or shared by other users is in scope unless exempt. “Content” is defined very broadly in Clause 207(1) as
“anything communicated by means of an internet service”.
This includes virtual or augmented reality. The Bill’s duties therefore cover all user-generated content present on the service, regardless of the form this content takes, including virtual reality and augmented reality content. To state it plainly: platforms that allow such content—for example, the metaverse—are firmly in scope of the Bill.
The Bill also ensures that machine-generated content on user-to-user services created by automated tools or machine bots will be regulated by the Bill where appropriate. Specifically, Clause 49(4)(b) means that machine-generated content is regulated unless the bot or automated tool producing the content is controlled by the provider of the service. This approach ensures that the Bill covers scenarios such as malicious bots on a social media platform abusing users, or when users share content produced by new tools, such as ChatGPT, while excluding functions such as customer service chatbots which are low risk. Content generated by an artificial intelligence bot and then placed by a user on a regulated service will be regulated by the Bill. Content generated by an AI bot which interacts with user-generated content, such as bots on Twitter, will be regulated by the Bill. A bot that is controlled by the service provider, such as a customer service chatbot, is out of scope; as I have said, that is low risk and regulation would therefore be disproportionate. Search services using AI-powered features will be in scope of the search duties.
The Government recognise the need to act both to unlock the opportunities and to address the potential risks of this technology. Our AI regulation White Paper sets out the principles for the responsible development of AI in the UK. These principles, such as safety and accountability, are at the heart of our approach to ensuring the responsible development and use of artificial intelligence. We are creating a horizon-scanning function and a central risk function which will enable the Government to monitor future risks.
The Bill does not distinguish between the format of content present on a service. Any service that allows its users to encounter content generated, uploaded or shared by other users is in scope unless exempt, regardless of the format of that content. This includes virtual and augmented reality material. Platforms that allow such content, such as the metaverse, are firmly in scope of the Bill and must take the required steps to protect their users from harm. I hope that gives the clarity that my noble friend and others were seeking and reassurance that the intent of Amendment 125 is satisfied.
The Bill will require companies to take proactive steps to tackle all forms of online child sexual abuse, including grooming, live streaming, child sexual abuse material and prohibited images of children. If AI-generated content amounts to a child sexual exploitation or abuse offence in the Bill, it will be subject to the illegal content duties. Regulated providers will need to take steps to remove this content. We will shortly bring forward, and have the opportunity to debate in Committee, a government amendment to address concerns relating to the sending of intimate images. This will cover the non-consensual sharing of manufactured images—more commonly known as deepfakes. The possession and distribution of altered images that appear to be indecent photographs of children is already covered by the indecent images of children offences, which are very serious offences with robust punishment in law.
The noble Baroness, Lady Finlay of Llandaff, asked about an issue touched on in Amendment 85C. Under their illegal content safety duties, companies must put in place safety measures that mitigate and manage the risks identified in their illegal content risk assessment. As part of this, in-scope services such as Meta will be required to assess the level of risk of their service being used for the commission or facilitation of a priority offence. They will then be required to mitigate any such risks. This will ensure that providers implement safety by design measures to mitigate a broad spectrum of factors that enable illegal activity on their platforms. This includes when these platforms facilitate new kinds of user-to-user interactions that may result in offences manifesting themselves in new ways online.
Schedules 5, 6 and 7, which list the priority offences, are not static lists and can be updated. To maintain flexibility and to keep those lists responsive to emerging harms and legislative changes, the Secretary of State has the ability to designate additional offences as priority offences via statutory instrument, subject to parliamentary scrutiny. It should be noted that Schedule 7 already contains several sexual offences, including extreme pornography, so-called revenge pornography and sexual exploitation, while Schedule 6 is focused solely on child sexual abuse and exploitation offences. Fraud and financial offences are also listed in Schedule 7. In this way, these offences are already captured, and mean that all in-scope services must take proactive measures to tackle these types of content. These schedules have been designed to focus on the most serious and prevalent offences, where companies can take effective and meaningful action. They are, therefore, primarily focused on offences that can be committed online, so that platforms are able to take effective steps proactively to identify and tackle such offences. If we were to add offences to these lists that could not be effectively tackled, it would risk spreading companies’ resources too thinly and diluting their efforts to tackle the offences we have listed in the Bill.
The Bill establishes a differentiated approach to ensure that it is proportionate to the risk of harm that different services pose. Category 1 services are subject to additional duties, such as transparency, accountability and free speech duties, as well as duties such as protections for journalistic and democratic content. These duties reflect the influence of the major platforms over our online democratic discourse. The designation of category 1 services is based on how easily, quickly and widely user-generated content is disseminated. This reflects how those category 1 services have the greatest influence over public discourse because of their high reach. Requiring all companies to comply with the full range of category 1 duties would impose a disproportionate regulatory burden on smaller companies, which do not exert the same amount of influence over public discourse. This would divert their resources away from the vital task of tackling illegal content and protecting children.
The noble Baroness, Lady Finlay, also asked about virtual training grounds. Instruction or training for terrorism is illegal under existing terrorism legislation, and terrorism is listed as a priority offence in this Bill. Schedule 5 to the Bill lists the terrorism offences that constitute priority offences. These are drawn from existing terrorism legislation, including the Terrorism Act 2000, the Anti-terrorism, Crime and Security Act 2001 and the Terrorism Act 2006. Section 6 of the 2006 Act covers instruction or training for terrorism and Section 2 of that Act covers dissemination of terrorist publications. Companies in scope of the Online Safety Bill will be required to take proactive steps to prevent users encountering content that amounts to an offence under terrorism legislation.
Amendments 195, 239, 263, 241, 301 and 286 seek to ensure that the Bill is future-proofed to keep pace with emerging technologies, as well as ensuring that Ofcom is able to monitor and identify new threats. The broad scope of the Bill means that it will capture all services that enable user interaction as well as search services, enabling its framework to continue to apply to new services that have not yet been invented. In addition, the Government fully agree that Ofcom must assess future risks and monitor the emergence of new technologies. That is why the Bill already gives Ofcom broad horizon-scanning and robust information-gathering powers, and why it requires Ofcom to carry out extensive risk assessments. These will ensure that it can effectively supervise and regulate new and emerging user-to-user services.
Ofcom is already conducting extensive horizon scanning and I am pleased to confirm that it is planning a range of research into emerging technologies in relation to online harms. The Bill also requires Ofcom to review and update its sectoral risk assessments, risk profiles and codes of practice to ensure that those reflect the risks and harms of new and emerging technology. The amendments before us would therefore duplicate existing duties and powers for Ofcom. In addition, as noble Lords will be aware, the Bill already has built-in review mechanisms to ensure that it works effectively.
My right honourable friends the Prime Minister and the Secretary of State for Science, Innovation and Technology are clear that artificial intelligence is the defining technology of our time, with the potential to bring positive changes, but also that the success of this technology is founded on having the right guardrails in place, so that the public can have the confidence that artificial intelligence is being used in a safe and responsible way. The UK’s approach to AI regulation will need to keep pace with the fast-moving advances in this technology. That is why His Majesty’s Government have deliberately adopted an agile response to unlock opportunities, while mitigating the risks of the technology, as outlined in our AI White Paper. We are engaging extensively with international partners on these issues, which have such profound consequences for all humankind.
Clause 159 requires the Secretary of State to undertake a review into the operation of the regulatory framework between two and five years after the provisions come into effect. This review will consider any new emerging trends or technologies, such as AI, which could have the potential to compromise the efficacy of the Bill in achieving its objectives. I am happy to assure the noble Viscount, Lord Colville of Culross, and the right reverend Prelate the Bishop of Chelmsford that the review will cover all content and activity being regulated by the Bill, including legal content that is harmful to children and content covered by user-empowerment tools. The Secretary of State must consult Ofcom when she carries out this review.
Will the review also cover an understanding of what has been happening in criminal cases where, in some of the examples that have been described, people have tried to take online activity to court? We will at that point understand whether the judges believe that existing offences cover some of these novel forms of activity. I hope the review will extend not just to what Ofcom does as a regulator but also to what the courts are doing with the definitions of criminal activity, and whether those definitions are proving effective in the new online spaces.
I believe it will. Certainly, both government and Parliament will take into account judgments in the court on this Bill and in related areas of law, and will, I am sure, want to respond.
It is not just the judgments of the courts; it is about how the criminal law as a very basic point has been framed. I invite my noble friend the Minister to please meet with the Dawes Centre, because it is about future crime. We could end up with a situation in which more and more violence, particularly against women and girls, is being committed in this space, and although it may be that the Bill has made it regulated, it may not fall within the province of the criminal law. That would be a very difficult situation for our law to end up in. Can my noble friend the Minister please meet with the Dawes Centre to talk about that point?
I am happy to reassure my noble friend that the director of the Dawes Centre for Future Crime sits on the Home Office’s Science Advisory Council, whose work is very usefully fed into the work being done at the Home Office. Colleagues at the Ministry of Justice keep criminal law under constant review, in light of research by such bodies and what we see in the courts and society. I hope that reassures my noble friend that the points she raised, which are covered by organisations such as the Dawes Centre, are very much in the mind of government.
The noble Lord, Lord Allan of Hallam, explained very effectively the nuances of how behaviour translates to the virtual world. He is right that we will need to keep both offences and the framework under review. My noble friend Lady Berridge asked a good and clear question, to which I am afraid I do not have a similarly concise answer. I can reassure her that generated child sexual abuse and exploitation material is certainly illegal, but she asked about sexual harassment via a haptic suit; that would depend on the specific circumstances. I hope she will allow me to respond in writing, at greater length and more helpfully, to the very good question she asked.
Under Clause 56, Ofcom will also be required to undertake periodic reviews into the incidence and severity of content that is harmful to children on the in-scope services, and to recommend to the Secretary of State any appropriate changes to regulations based on its findings. Clause 141 also requires Ofcom to carry out research into users’ experiences of regulated services, which will likely include experiences of services such as the metaverse and other online spaces that allow user interaction. Under Clause 147, Ofcom may also publish reports on other online safety matters.
The questions posed by the noble Lord, Lord Russell of Liverpool, about international engagement are best addressed in a group covering regulatory co-operation, which I hope we will reach later today. I can tell him that we have introduced a new information-sharing gateway for the purpose of sharing information with overseas regulators, to ensure that Ofcom can collaborate effectively with its international counterparts. That builds on existing arrangements for sharing information that underpin Ofcom’s existing regulatory regimes.
The amendments tabled by the noble Lord, Lord Knight of Weymouth, relate to providers’ judgments about when content produced by bots is illegal content, or a fraudulent advertisement, under the Bill. Clause 170 sets out that providers will need to take into account all reasonably available relevant information about content when making a judgment about its illegality. As we discussed in the group about illegal content, providers will need to treat content as illegal when this information gives reasonable grounds for inferring that an offence was committed. Content produced by bots is in scope of providers’ duties under the Bill. This includes the illegal content duties, and the same principles for assessing illegal content will apply to bot-produced content. Rather than drawing inferences about the conduct and intent of the user who generated the content, the Bill specifies that providers should consider the conduct and the intent of the person who can be assumed to have controlled the bot at the point it created the content in question.
The noble Lord’s amendment would set out that providers could make judgments about whether bot-produced content is illegal, either by reference to the conduct or mental state of the person who owns the bot or, alternatively, by reference to the person who controls it. As he set out in his explanatory statement and outlined in his speech, I understand he has brought this forward because he is concerned that providers will sometimes not be able to identify the controller of a bot, and that this will impede providers’ duties to take action against illegal content produced by them. Even when the provider does not know the identity of the person controlling the bot, however, in many cases there will still be evidence from which providers can draw inferences about the conduct and intent of that person, so we are satisfied that the current drafting of the Bill ensures that providers will be able to make a judgment on illegality.
My concern is also whether or not the bot is out of control. Can the Minister clarify that issue?
It depends on what the noble Lord means by “out of control” and what content the bot is producing. If he does not mind, this may be an issue which we should go through in technical detail, having a more free-flowing conversation with examples that we can work through.
Providers will consider contextual evidence such as the circumstances in which the content was created or information about how the bot normally behaves on the site. In many cases, the person who “owns” the bot may be the same person who controls it. In such instances, providers will be required to consider the conduct and mental state of the owner when considering whether the relevant bot has produced illegal content. Where the ownership of the bot is relevant, it will already be captured by this clause, but I am very happy to kick the tyres of that with the noble Lord and any others who wish to join us.
This is a very interesting discussion; the noble Lord, Lord Knight, has hit on something really important. When somebody does an activity that we believe is criminal, we can interrogate them and ask how they came to do it and how they reached the conclusion that they did. The difficulty is that those of us who are not super-techy do not understand how you can interrogate a bot or an AI that appears to be out of control about how it reached the conclusion that it did. It may be drawing from lots of different places, and there may be ownership of lots of different sources of information. I wonder whether that is why we are finding the question of how this will be monitored in future so concerning. I am reassured that the noble Lord, Lord Knight of Weymouth, is nodding; does the Minister concur that this may be a looming problem for us?
I certainly concur that we should discuss the issue in greater detail. I am very happy to do so with the noble Lord, the noble Baroness and others who want to do so, along with officials. If we can bring some worked examples of what “in control” and “out of control” bots may be, that would be helpful.
I hope the points I have set out in relation to the other issues raised in this group and the amendments before us are satisfactory to noble Lords and that they will at this point be content not to press their amendments.
My Lords, I thank all noble Lords who have contributed to a thought-provoking and, I suspect, longer debate than we had anticipated. At Second Reading, I think we were all taken aback when this issue was opened up by my noble friend Lord Sarfraz; once again, we are realising that this requires really careful thought. I thank my noble friend the Minister for his also quite long and thoughtful response to this debate.
I feel that I owe the Committee a small apology. I am very conscious that I talked in quite graphic detail at the beginning when there were still children in the Gallery. I hope that I did not cause any harm, but it shows how serious this is that we have all had to think so carefully about what we have been saying—only in words, without any images. We should not underestimate how much this has demonstrated the importance of our debates.
On the comments of the noble Baroness, Lady Fox, I am a huge enthusiast, like the noble Lord, Lord Knight, for the wonders of the tech world and what it can bring. We are managing the balance in this Bill to make sure that this country can continue to benefit from and lead the opportunities of tech while recognising its real and genuine harms. I suggest that today’s debate has demonstrated the potential harm that the digital world can bring.
I listened carefully—as I am certain the noble Baroness, Lady Kidron, has been doing in the digital world—to my noble friend’s words. I am encouraged by what he has put on the record on Amendment 125, but there are some specific issues that it would be helpful for us to talk about, as he alluded to, after this debate and before Report. Let me highlight a couple of those.
First, I do not really understand the technical difference between a customer service bot and other bots. I am slightly worried that we are specifically carving out one type of bot from the Bill’s scope; I suspect that there might be others in future. We must think through carefully whether we are getting too much into the specifics of the technology and not being general enough in making sure we capture where it could go. That is one example.
Secondly, as my noble friend Lady Berridge would say, I am not sure that we have got to the bottom of whether this Bill, coupled with the existing body of criminal law, will really enable law enforcement officers to progress the cases as they see fit and protect vulnerable women—and men—in the digital world. I very much hope we can extend the conversation there. We perhaps risk getting too close to the technical specifics if we are thinking about whether a haptic suit is in or out of scope of the Bill; I am certain that there will be other technologies that we have not even thought about yet that we will want to make sure that the Bill can capture.
I very much welcome the spirit in which this debate has been held. When I said that I would do this for the noble Baroness, Lady Kidron, I did not realise quite what a huge debate we were opening up, but I thank everyone who has contributed and beg leave to withdraw the amendment.
Amendment 125 withdrawn.
Amendment 125A not moved.
Clause 49 agreed.
Clause 50: “Recognised news publisher”
Amendment 126 not moved.
Amendment 126A
Moved by
126A: Clause 50, page 48, line 31, at end insert “, and
(iii) is not a sanctioned entity (see subsection (3A)).”Member’s explanatory statement
The effect of this amendment, combined with the next amendment in the Minister’s name, is that any entity which is designated for the purposes of sanctions regulations is not a “recognised news publisher” under this Bill, with the result that the Bill’s protections which relate to “news publisher content” don’t apply.
Amendment 126A agreed.
Amendment 127 not moved.
Amendment 127A
Moved by
127A: Clause 50, page 49, line 9, at end insert—
“(3A) A “sanctioned entity” is an entity which—(a) is designated by name under a power contained in regulations under section 1 of the Sanctions and Anti-Money Laundering Act 2018 that authorises the Secretary of State or the Treasury to designate persons for the purposes of the regulations or of any provisions of the regulations, or (b) is a designated person under any provision included in such regulations by virtue of section 13 of that Act (persons named by or under UN Security Council Resolutions).”Member’s explanatory statement
The effect of this amendment, combined with the preceding amendment in the Minister’s name, is that any entity which is designated for the purposes of sanctions regulations is not a “recognised news publisher” under this Bill, with the result that the Bill’s protections which relate to “news publisher content” don’t apply.
Amendment 127A agreed.
Clause 50, as amended, agreed.
Clause 51 agreed.
Clause 52: Restricting users’ access to content
Amendments 127B and 127C
Moved by
127B: Clause 52, page 50, line 23, after second “the” insert “voluntary”
Member’s explanatory statement
This amendment and the next amendment in the Minister’s name ensure that restrictions on a user’s access to content resulting from the user voluntarily activating any feature of a service do not count as restrictions on users’ access for the purposes of Part 3 of the Bill.
127C: Clause 52, page 50, line 25, leave out from “service” to “, or” in line 26 and insert “(for example, features, functionalities or settings included in compliance with the duty set out in section 12(2) or (6) (user empowerment))”
Member’s explanatory statement
This amendment and the previous amendment in the Minister’s name ensure that restrictions on a user’s access to content resulting from the user voluntarily activating any feature of a service do not count as restrictions on users’ access for the purposes of Part 3 of the Bill.
Amendments 127B and 127C agreed.
Clause 52, as amended, agreed.
Clause 53: “Illegal content” etc
Amendments 128 to 130 not moved.
Clause 53 agreed.
Schedule 5 agreed.
Schedule 6: Child sexual exploitation and abuse offences
Amendments 131 to 133 not moved.
Schedule 6 agreed.
House resumed. Committee to begin again not before 2.19 pm.