Online Safety Bill

Volume 831: debated on Thursday 6 July 2023

Report (1st Day) (Continued)

Clause 6: Providers of user-to-user services: duties of care

Amendment 32

Moved by

32: Clause 6, page 5, line 29, at end insert—

“(ba) the duties about assessments related to adult user empowerment set out in section (Assessment duties: user empowerment),”

Member’s explanatory statement

This amendment ensures that the new duties in the new Clause proposed after Clause 11 in my name are imposed on providers of Category 1 services.

My Lords, as noble Lords will be aware, the Government removed the legal but harmful provisions from the Bill in another place, given concerns about freedom of expression. I know that many noble Lords would not have taken that approach, but I am grateful for their recognition of the will of the elected House in this regard as well as for their constructive contributions about ways of strengthening the Bill while continuing to respect that.

I am therefore glad to bring forward a package of amendments tabled in my name relating to adult safety. Among other things, these strengthen our existing approach to user empowerment and terms of service by rebalancing the power over the content adults see and interact with online, moving the choice away from unaccountable technology companies and towards individual users.

First, we are introducing a number of amendments—which I am pleased to say have the support of the Opposition Front Bench—that will create a comprehensive duty on category 1 providers to carry out a full assessment of the incidence of user empowerment content on their services. The amendments will mean that platforms can be held to account by Ofcom and their users when they fail to assess the incidence of this kind of content on their services or when they fail to offer their users an appropriate ability to control whether or not they view it.

Amendments 19 to 21 and 26—I am grateful to noble Lords opposite for putting their names to them—will strengthen the user empowerment content duty. Category 1 providers will now need proactively to ask their registered adult users how they would like the control features to be applied. We believe that these amendments achieve two important aims that your Lordships have been seeking from these duties: first, they ensure that they are more visible for registered adult users; and, secondly, they offer better protection for young adult users.

Amendments 55 and 56, tabled by the noble Lord, Lord Clement-Jones, my noble friend Lord Moylan and the noble Baroness, Lady Fox of Buckley, seek to provide users with a choice over how the tools are applied for each category of content set out in Clause 12(10), (11) and (12). The legislation gives platforms the flexibility to decide what tools they offer in compliance with Clause 12(2). A blanket approach is unlikely to be consistent with the duty on category 1 services to have particular regard to the importance of protecting users’ freedom of expression when putting these features in place. Additionally, the measures that Ofcom will recommend in its code of practice must consider the impact on freedom of expression, and so are unlikely to amount to a blanket approach.

Amendments 58 and 63 would require providers to set and enforce consistent terms of service on how they identify the categories of content to which Clause 12(2) applies; and to apply the features to content only when they have reasonable grounds to infer that it is user empowerment content. I assure noble Lords that the Bill’s freedom of expression duties will prevent providers overapplying the features or adopting an inconsistent or capricious approach. If they do, Ofcom can take enforcement action.

Amendments 59, 64 and 181, tabled by the noble Lord, Lord Clement-Jones, seek to require that the user empowerment and user verification features are provided at no cost. I reassure the noble Lord that the effect of these amendments is already achieved by the drafting of Clause 12. Category 1 providers will be compliant with their duties only if they proactively ask all registered users whether or not they want to use the user empowerment content features, which would not be possible with a paywall. Amendment 181 is similar and applies to user verification. While the Bill does not specify that verification must be free of charge, category 1 providers can meet the duties in the Bill only by offering all adult users the option to verify themselves.

Turning to Amendment 204, tabled by the noble Baroness, Lady Finlay of Llandaff, I share her concern about the impact that self-harm and suicide content can have. However, as I said in Committee, the Bill goes a long way to provide protections for both children and adults from this content. First, it includes the new criminal offence of encouraging or assisting self-harm. This then feeds through into the Bill’s illegal content duties. Companies will be required to take down such content when it is reported to them by users.

Beyond the illegal content duties, there are specific protections in place for children. The Government have tabled amendments designating content that encourages, promotes or provides instructions for it as a category of primary priority content, meaning that services will have to prevent children of all ages encountering it. For adults, the Government listened to concerns and, as mentioned, have strengthened the user empowerment duties to make it easier for adult users to opt in to using them by offering a forced choice. We have made a careful decision, however, to balance these protections with users’ right to freedom of expression and therefore cannot require platforms to treat legal content accessed by adults in a prescribed way. That is why, although I share the noble Baroness’s concerns about the type of content that she mentions, I cannot accept her amendment and hope that she will agree.

The Bill’s existing duties require category 1 platforms to offer users the ability to verify their identity. Clause 12 requires category 1 platforms to offer users the ability to filter out users who have not verified their identity. Amendment 183 from my noble friend Lord Moylan seeks to give Ofcom the discretion to decide when it is and is not proportionate for category 1 services to offer users the ability to verify their identity. We do not believe that these duties will be excessively burdensome, given that they will apply only to category 1 companies, which have the resource and capacity to offer such tools.

Amendment 182 would require platforms to offer users the option to make their verification status visible. The existing duty in Clause 57, in combination with the duty in Clause 12, will already provide significant protections for adults from anonymous abuse. Adult users will now be able to verify their own status and decide to interact only with other verified users, whether or not their status is visible. We do not believe that this amendment would provide additional protections.

The Government carefully considered mandating that all users display their verification status, which may heighten some users’ safety, but it would be detrimental to vulnerable users, who may need to remain anonymous for perfectly justifiable reasons. Further government amendments in my name will expand the types of information that Ofcom can require category 1, 2A and 2B providers to publish in their transparency reports in relation to user empowerment content.

Separately, but also related to transparency, government Amendments 189 and 202 make changes to Clause 67 and Schedule 8. These relate to category 1 providers’ duties to create clear and accessible terms of service and apply them consistently and transparently. Our amendments tighten these parts of the Bill so that all the providers’ terms through which they might indicate that a certain type of content is not allowed on their service are captured by these duties.

I hope that noble Lords will therefore accept the Government amendments in this group and that my anticipatory remarks about their amendments will give them some food for thought as they make their contributions. I beg to move.

My Lords, I speak to Amendments 56, 58, 63 and 183 in my name in this group. I have some complex arguments to make, but time is pressing, so I shall attempt to do so as briefly as possible. I am assisted in that by the fact that my noble friend on the Front Bench very kindly explained that the Government are not going to accept my worthless amendments, without actually waiting to hear what it is I might have said on their behalf.

None the less, I turn briefly to Amendment 183. The Bill has been described, I think justly, as a Twitter-shaped Bill: it does not take proper account of other platforms that operate in different ways. I return to the question of Wikipedia, but also platforms such as Reddit and other community-driven platforms. The requirement for a user-verification tool is of course intended to lead to the possibility that ordinary, unverified users—people like you and me—could have the option to see only that content which comes from those people who are verified.

This is broadly a welcome idea, but when we combine that with the fact that there are community-driven sites such as Wikipedia where there are community contributions and people who contribute to those sites are not always verified—sometimes there are very good reasons why they would want to preserve their anonymity—we end up with the possibility of whole articles having sentences left out and so on. That is not going to happen; the fact is that a site such as Wikipedia cannot operate like that, so it is another one of those existential questions that the Government have not properly grappled with and really must address before we come to Third Reading, because this will not work the way it is.

As for my other amendments, they are supportive of and consistent with the idea of user verification, and they recognise—as my noble friend said—that user verification is intended to be a substitute for the abandoned “legal but harmful” clause. I welcome the abandonment of that clause and recognise that this provision is more consistent with individual freedom and autonomy and the idea that we can make choices of our own, but it is still open to the possibility of abuse by the platforms themselves. The amendments that I have put forward address, first, the question of what should be the default position. My argument is that the default position should be that filtering is not on and that one has to opt into it, because that seems to me the adult proposition, the adult choice.

The danger is that the platforms themselves will either opt you into filtering automatically as the default, so you do not see what might be called the full-fat milk that is available on the internet, or harass you to do so with constant pop-ups, which we already get. If you go on the Nextdoor website, you constantly get the pop-up saying, “You should switch on notifications”. I do not want notifications; I want to look at it when I want to look at it. Yet I am constantly being driven into pressing the button that says, “Switch on notifications”. You could have something similar here—constantly being driven into switching on the filters—because the platforms themselves will be very worried about the possibility that you might see illegal content. We should guard against that.

Secondly, on Amendment 58, if we are going to have user verification—as I say, there is a lot to be said for that approach—it should be applied consistently. If the platform decides to filter out racist abuse and you opt in to filtering out racist abuse or some other sort of specified abuse, it has to filter all racist abuse, not simply racist abuse that comes from people it does not like; or, with gender assignment abuse, it cannot filter out stuff from only one side or the other of the argument. The word “consistently” that is included here is intended to address that, and to require policies that show that, if you opt in to having something filtered out, it would be done on a proper, consistent and systematic basis and not influenced by the platform’s own particular political views.

Finally, we come to Amendment 63 and the question of how this is communicated to users of the internet. This amendment would force the platforms to make these policies about how user verification will operate a part of their terms and conditions in a public and visible way and to ensure that those provisions are applied consistently. It goes a little further than the other amendments—the others could stand on their own—but would also add a little bit more by requiring public and consistent policies that people can see. This works with the grain of what the Government are trying to do; I do not see that the Government can object to any of this. There is nothing wrecking here. It is trying to make everything more workable, more transparent and more obvious.

I hope, given the few minutes or short period of time that will elapse between my sitting down and the Minister returning to the Dispatch Box, that he will have reflected on the negative remarks that he made in his initial speech and will find it possible to accept these amendments now that he has heard the arguments for them.

My Lords, I will not engage with the amendments of the noble Lord, Lord Moylan, since mine are probably the diametric opposite of what he has been saying.

I say, first, on behalf of the noble Baroness, Lady Finlay, that she regrets very much not being able to be here. Amendment 204 in her name is very much a Samaritans amendment. The Samaritans have encouraged her to put it forward and encourage us to support it. It is clear that the Minister has got his retaliation in first and taken the wind out of all our sails right at the beginning. Nevertheless, that does not mean that we cannot come back at the Minister and ask for further and better particulars of what he has to say.

Clearly the Government’s decision to bring in the new offence of encouraging or assisting self-harm is welcome. However—certainly in the view of the Samaritans—this will bring into the remit of the Bill only content that encourages serious self-harm, which must reach the high threshold of amounting to grievous bodily harm. Their view, therefore, is that much harmful content will still be left untouched and available online. This could include information, depictions, instructions and advice on methods of self-harm and suicide. It would also include content that portrays self-harm and suicide as positive or desirable, and graphic descriptions or depictions of self-harm and suicide.

Perhaps the Minister could redouble his efforts to assure us as to how the Bill will take a comprehensive approach to placing duties on all platforms to reduce all dangerous suicide and self-harm content, such as detailed instructions on how people can harm themselves, for adults as well as children. This should also be in respect of smaller sites; it is not just the larger category 1 sites that will need to proactively remove priority illegal content, whatever the level of detail in their risk assessment. I hope I have done my duty by the noble Baroness, Lady Finlay, who very much regrets that she was not able to be here.

My own Amendments 55, 59, 64 and 181 are about changes in social media. The Bill really began its life at the high point of the phase where services were free to the user and paid for by adverts. The noble Lord talked about this being a Twitter Bill. Well, to some extent we are influenced by what Twitter has been doing over the last 12 months: it has begun to charge for user-verification services and some features, and other services are adopting versions of what you might call this premium model. So there is a real concern that Clause 12 might not be as comprehensive as the Minister seems to be asserting. I assume that it is covered by the “proportionate” wording in Clause 12, and therefore it would not be proportionate—to put it the other way round—if they charged for this service. I would very much like the Minister to give the detail of that, so I am not going to cover the rest of the points that I would otherwise have made.

The Minister said that a blanket approach would not be appropriate for user-empowerment control features. The thought that people have had is that a platform might choose to have a big red on/off button that would try to cover all the types of content that could be subject to this kind of user-empowerment tool. I do not think the contents of Clause 12 are as clear as the Minister perhaps considers they could be, but they go with the grain of the new government amendments. I should have said right at the beginning—although many of us regret the deletion of “legal but harmful” from the original draft Bill—that the kind of assessment that is going to be made is a step in the right direction and demonstrates that the Minister was definitely listening in Committee. However, if a blanket approach of this kind is taken, that would not be in the spirit of where these user-empowerment tools are meant to go. I welcome what the Minister had to say, but again I would like the specifics of where he thinks the wording is helpful in making sure that we have a much more granular form of user-empowerment control feature when this eventually comes into operation.

Finally, I return to user verification. This is very much in the footsteps of the Joint Committee. The noble Baroness, Lady Merron, spoke very well in Committee to what was then Amendment 41, which was in the name of the noble Lord, Lord Stevenson. It would require category 1 services to make visible to users whether another user was verified or non-verified.

Amendment 182 to Clause 57 is a rather different animal, but we are again trying to get improvements to what is in the clause at the moment. It tries to focus even more on empowering users by giving them choice. Alongside offering UK users a choice to verify, it will ensure that users are also offered a choice to make that verification visible to others. In a sense, it goes very much with the grain of what the Government have been moving towards with their approach to the use of user-empowerment tools and giving choice at the outset. That is, in a sense, the compromise between default and non-default, as we discussed in Committee. This offers users a different kind of choice, but nevertheless an important choice.

Just as the Bill would not force any UK users to verify, so this amendment would not force any UK users to make their choice to verify visible. All it would do is require that platforms offer them an option. Research suggests that most UK users would choose to verify and to make that visible; I am sure that the Minister is familiar with some of the research. New research published this week by Clean Up the Internet, based on independent opinion polling conducted by Opinium, found that 78% of UK social media users say that it would be helpful to be able to see which social media accounts have been verified to help them avoid scams. Almost as many—77%—say that being able to see which accounts have been verified would help with identifying bullies or trolls. Some 72% say it would help with spotting false or misleading news stories, and 68% say it would help with buying products or services.

Ofcom’s own research into online fraud, published in March this year, found:

“A warning from the platform that content or messages come from an unverified source”

is the single most popular measure platforms could introduce to help users avoid getting drawn into scams. So it would be an extremely popular move for the Minister to accept my amendment, as I am sure he would appreciate.

Is he not outrageous, trying to make appeals to one’s good humour and good sense? But I support him.

I will say only three things about this brief but very useful debate. First, I welcome the toggle-on, toggle-off resolution: that is a good move. It makes sure that people make a choice and that it is made at an appropriate time, when they are using the service. That seems to be the right way forward, so I am glad that that has come through.

Secondly, I still worry that terms of service, even though there are improved transparency measures in these amendments, will eventually need some form of power for Ofcom to set de minimis standards. So much depends on the ability of the terms of service to carry people’s engagement with the social media companies, including the decisions about what to see and not to see, and about whether they want to stay on or keep off. Without some power behind that, I do not think that the transparency will take it. However, we will leave it as it is; it is better than it was before.

Thirdly, user ID is another issue that will come back. I agree entirely with what the noble Lord, Lord Clement-Jones, said: this is at the heart of so much of what is wrong with what we see and perceive as happening on the internet. To reduce scams, to be more aware of trolls and to be aware of misinformation and disinformation, you need some sense of who you are talking to, or who is talking to you. There is a case for having that information verified, whether or not it is done on a limited basis, because we need to protect those who need to have their identities concealed for very good reason—we know all about that. As the noble Lord said, it is popular to think that you would be a safer person on the internet if you were able to identify who you were talking to. I look forward to hearing the Minister’s response.

My Lords, I will speak very briefly to Amendments 55 and 182. We are now at the stage—taking the lead entirely from the Minister and the noble Lords opposite, the noble Lords, Lord Stevenson and Lord Clement-Jones—of accepting these amendments, because we need now to see how this will work in practice. That is why we all think that we will be back here talking about these issues in the not too distant future.

My noble friend the Minister rightly said that, as we debated in Committee, the Government made a choice in taking out “legal but harmful”. Many of us disagree with that, but that is the choice that has been made. So I welcome the changes that have been made by the Government in these amendments to at least allow there to be more empowerment of users, particularly in relation to the most harmful content and, as we debated, in relation to adult users who are more vulnerable.

It is worth reminding the House that we heard very powerful testimony during the previous stage from noble Lords with personal experience of family members who struggle with eating disorders, and how difficult these people would find it to self-regulate the content they were looking at.

In Committee, I proposed an amendment about “toggle on”. Anyone listening to this debate outside who does not know what we are talking about will think we have gone mad, talking about toggle on and toggle off, but I proposed an amendment for toggle on by default. Again, I take the Government’s point, and I know my noble friend has put a lot of work into this, with Ministers and others, in trying to come up with a sensible compromise.

I draw attention to Amendment 55. I wonder if my noble friend the Minister is able to say anything about whether users will be able to have specific empowerment in relation to specific types of content, where they are perhaps more vulnerable if they see it. For example, the needs of a user might be quite different between those relating to self-harm and those relating to eating disorder content or other types of content that we would deem harmful.

On Amendment 182, my noble friend leapt immediately to abusive content coming from unverified users, but, as we have heard, and as I know, having led the House’s inquiry into fraud and digital fraud last year, there will be, and already is, a prevalence of scams. The Bill is cracking down on fraudulent advertisements but, as an anti-fraud measure, being able to see whether an account has been verified would be extremely useful. The view now is that, if this Bill is successful—and we hope it is—in cracking down on fraudulent advertising, then there will be even more reliance on what is called organic reach, which is the use of fake accounts, where verification therefore becomes more important. We have heard from opinion polling that the public want to see which accounts are or are not verified. We have also heard that Amendment 182 is about giving users choice, in making clear whether their accounts are verified; it is not about compelling people to say whether they are verified or not.

As we have heard, this is a direction of travel. I understand that the Government will not want to accept these amendments at this stage, but it is useful to have this debate to see where we are going and what Ofcom will be looking at in relation to these matters. I look forward to hearing what my noble friend the Minister has to say about these amendments.

My Lords, I speak to Amendment 53, on the assessment duties, and Amendment 60, on requiring services to provide a choice screen. It is the first time we have seen these developments. We are in something of a see-saw process over legal but harmful. I agree with my noble friend Lord Clement-Jones when he says he regrets that it is no longer in the Bill, although that may not be a consistent view everywhere. We have been see-sawing backwards and forwards, and now, like the Schrödinger’s cat of legal but harmful, it is both dead and alive at the same time. Amendments that we are dealing with today make it a little more alive than it was previously.

In this latest incarnation, we will insist that category 1 services carry out an assessment of how they will comply with their user-empowerment responsibility. Certainly, this part seems reasonable to me, given that it is limited to category 1 providers, which we assume will have significant resources. Crucially, that will depend on the categorisations—so we are back to our previous debate. If we imagine category 1 being the Meta services and Twitter, et cetera, that is one thing, but if we are going to move others into category 1 who would really struggle to do a user empowerment tool assessment—I have to use the right words; it is not a risk assessment—then it is a different debate. Assuming that we are sticking to those major services, asking them to do an assessment seems reasonable. From working on the inside, I know that even if it were not formalised in the Bill, they would end up having to do it as part of their compliance responsibilities. As part of the Clause 8 illegal content risk assessment, they would inevitably end up doing that.

That is because the categories of content that we are talking about in Clauses 12(10) to (12) are all types of content that might sometimes be illegal and sometimes not illegal. Therefore, if you were doing an illegal content risk assessment, you would have to look at it, and you would end up looking at types of content and putting them into three buckets. The first bucket is that it is likely illegal in the UK, and we know what we have to do there under the terms of the Bill. The second is that it is likely to be against your terms of service, in which case you would deal with it there. The third is that it is neither against your terms of service nor against UK law, and you would make a choice about that.

I want to focus on what happens once you have done the risk assessment and you have to have the choice screen. I particularly want to focus on services where all the content in Clause 12 is already against their terms of service, so there is no gap. The whole point of this discussion about legal but harmful is imagining that there is going to be a mixed economy of services and, in that mixed economy, there will be different standards. Some will wish to allow the content listed in Clause 12—self-harm-type content, eating disorder content and various forms of sub-criminal hate speech. Some will choose to do that—that is going to be their choice—and they will have to provide the user empowerment tools and options. I believe that many category 1 providers will not want to; they will just want to prohibit all that stuff under their terms of service and, in that case, offering a choice is meaningless. That will not make the noble Lord, Lord Moylan, or the noble Baroness, Lady Fox, very happy, but that is the reality.

Most services will just say that they do not want that stuff on their platform. In those cases, I hope that what we are going to say is that, in their terms of service, when a user joins a service, they can say that they have banned all that stuff anyway, so they are not going to give the user a user empowerment tool and, if the user sees that stuff, they should just report it and it will be taken down under the terms of service. Throughout this debate I have said, “No more cookie banners, please”. I hope that we are not going to require people, in order for them to comply with this law, to offer a screen that people then click through. It is completely meaningless and ineffective. For those services that have chosen under their terms of service to restrict all the content in Clause 12, I hope that we will be saying that their version of the user empowerment tool is not to make people click anything but to provide education and information and tell them where they can report the content and have it taken down.

Then there are those who will choose to protect that content and allow it on their service. I agree with the noble Lord, Lord Moylan, that this is, in some sense, Twitter-focused or Twitter-driven legislation, because Twitter tends to be more in the freedom of speech camp and to allow hate speech and some of that stuff. It will be more permissive than Facebook or Instagram in its terms, and it may choose to maintain that content and it will have to offer that screen. That is fine, but we should not be making services do so when they have already prohibited such content.

The noble Lord, Lord Moylan, mentioned services that use community moderators to moderate part of the service and how this would apply there. Reddit is the obvious example, but there are others. If you are going to have user empowerment—and Reddit is more at the freedom of expression end of things—then if there are some subreddits, or spaces within Reddit that allow hate speech or the kind of speech that is in Clause 12, it would be rational to say that user empowerment in the context of Reddit is to be told that you can join these subreddits and you are fine or you can join those subreddits and you are allowing yourself to be exposed to this kind of content. What would not make sense would be for Reddit to do it individual content item by content item. When we are thinking about this, I hope that the implementation would say that, for a service with community-moderated spaces, and subspaces within the larger community, user empowerment means choosing which subspaces you enter, and you would be given information about them. Reddit would say to the moderators of the subreddits, “You need to tell us whether you have any Clause 12-type content”—I shall keep using that language—“and, if you are allowing it, you need to make sure that you are restricted”. But we should not expect Reddit to restrict every individual content item.

Finally, as a general note of caution, noble Lords may have detected that I am not entirely convinced that these will be hugely beneficial tools, perhaps other than for a small subset of Twitter users, for whom they are useful. There is an issue around particular kinds of content on Twitter, and particular Twitter users, including people in prominent positions in public life, for whom these tools make sense. For a lot of other people, they will not be particularly meaningful. I hope that we are going to keep focused on outcomes and not waste effort on things that are not effective.

As I say, many companies, when they are faced with this, will look at it and say, “I have limited engineering time. I could build all these user empowerment tools or I could just ban the Clause 12 stuff in my terms of service”. That would not be a great outcome for freedom of expression; it might be a good outcome for the people who wanted to prohibit legal but harmful in the first place. Companies will make that choice as a hard-headed business decision. It is much more expensive to try to maintain these different regimes and flag all this content and so on. It is simpler to have one set of standards.

I think most services will just adopt the Clause 12 content restrictions into their terms of service and have done with it. I do not think we want to create a perverse situation where we say you must allow some in order to have a tool to block it. I certainly had the experience at Facebook where people were saying to me, “Why does Facebook not have safe search to prevent nudity?” I would say, “Our terms ban nudity”, and then they would say, “But you need safe search”. I would say, “It is banned. You are not supposed to have it. Why would I have a tool to block something that should not be there in the first place?” I hope we are not going to go down that path and create those perverse incentives.

This is crucially about deploying resources, so I hope that, if we are going ahead with the user empowerment tools, we will assess them and be ruthless about deploying resources where they work best. I do not think anyone is going to cry for category 1 companies. They have plenty of resources; they can build stuff. But the pool of engineers is not infinite and, if we are asking them to spend their time on user empowerment tools that very few people use and that are not producing huge safety benefits, frankly, I would rather they take those engineers and put them on something else, such as scanning algorithms which can pick up the priority content for children.

I hope we keep all that in mind as we do this. We are going to build the user empowerment tools. It is a logical response once we had decided to take legal but harmful out, but I think we should approach it with a note of caution that we do not assume it is necessarily going to be a fix everywhere and in the same way on all platforms. For some platforms, it might be quite meaningless; for others, potentially, it is something people will want to use.

My Lords, I am happy to acknowledge and recognise what the Government did when they created user empowerment duties to replace legal but harmful. I think they were trying to counter the dangers of over-paternalism and illiberalism in provisions that obliged providers to protect adult users from content that would allegedly cause them harm.

The new provisions brought into the Bill have a completely different philosophy. They enhance users’ freedom as individuals and allow them to apply voluntary content filters with freedom of choice, on the principle that adults can make decisions for themselves.

In case anyone panics, I am not making a philosophical speech. I am reminding the Government that that is what they said to us—to everybody—“We are getting rid of legal but harmful because we believe in this principle”. I am worried that some of the amendments seem to be trying to backtrack from that different basis of the Bill—and that more liberal philosophy—to go back to the old legal but harmful. I say to the noble Lord, Lord Allan of Hallam, that the cat is distinctly not dead.

The purpose of Amendment 56 is to try to ensure that providers also cannot thwart the purpose of Clause 12 and make it more censorious and paternalistic. I am not convinced that the Government needed to compromise on this as I think Amendment 60 just muddies the waters and fudges the important principle that the Government themselves originally established.

Amendment 56 says that the default must be no filtering at all. Then users have to make an active decision to switch on the filtering. The default is that you should be exposed to a full flow of ideas and, if you do not want that, you have to actively decide not to and say that you want a bowdlerised or sanitised version.

Amendment 56 takes it a bit further, in paragraph (b), and applies different levels of filtering in terms of content of democratic importance and journalistic content. In the Bill itself, the Government accept the exceptional nature of those categories of content, and this just allows users to be able to do the same and say, “No; I might want to filter some things out but bear in mind the exceptional importance of democratic and journalistic content”. I worry that the government amendments signal to users that certain ideas are dangerous and must be hidden. That is my big concern. In other words, they might be legal but they are harmful: that is what I think these amendments try to counter.

One of the things that worries me about the Bill is the danger of echo chambers. I know we are concentrating on harms, but I think echo chambers are harmful. I started today quite early at Blue Orchid at 55 Broadway with a big crowd of sixth formers involved in debating matters. I complimented Keir Starmer on his speech on the importance of oracy and encouraging young people to speak. I stressed to all the year 12 and year 13 young people that the important thing was that they spoke out but also that they listened to contrary opinions and got out of their safe spaces and echo chambers. They were debating very difficult topics such as commercial surrogacy, cancel culture and the risks of contact sports. I am saying all that to them and then I am thinking, “We have now got a piece of legislation that says you can filter out all the stuff you do not want to hear and create your own safe space”. So I just get anxious that we do not inadvertently encourage in the young—I know this is for all adults—that antidemocratic tendency to not want to hear what you do not want to hear, even when it would be good to hear as many opinions as possible.

I also want to press the Minister on the problem of filtering material that targets race, religion, sex, sexual orientation, disability and gender reassignment. I keep trying to raise the problem that it could lead to diverse philosophical views around those subjects also being removed by overzealous filtering. You might think that you know what you are asking to be filtered out. If you say you want to filter out material that is anti-religion, you might not mean that you do not want any debates on religious tolerance. For example, there was that major controversy over the film “The Lady of Heaven”. I know the Minister was interested, as I was, in the dangers of censorship in relation to that. Because you said, “Don’t target me for my religion”, you would not want to find that you could not access that debate.

I think there is a danger that we are handing a lot of power to filterers to make filtering decisions based on their values when we are not clear about what those values are. Look at what has happened with the banks in the last few days: they have closed down people’s bank accounts because they disagree with their customers’ values. Again, we say “Don’t target on race”, but I have been having lots of arguments with people recently who have accused the Government, through their Illegal Migration Bill, of being racist. I think we just need to know that we are not accepting an ideological filtering of what we see.

Amendment 63 is key because it requires providers’ terms of service to include provisions about how content to which Clause 12(2) applies is identified, precisely to try to counter these problems. It imposes a duty on providers to apply those provisions consistently, as the noble Lord, Lord Moylan, explained. The point that providers have to set out how they identify content that is allegedly hostile, for example, to religion, or racially abusive, is important because this is about empowering users. Users need to know whether this will be done by machine learning or by a human. Do they look for red flags and, if so, what are the red flags? How are these things decided? That means that providers have to state clearly and be accountable for their definition of any criteria that could justify them filtering out and disturbing the flow of democratic information. It is all about transparency and accountability in that sense.

Finally, in relation to Amendment 183, I am worried about the notion of filtering out content from unverified users for a range of reasons. It indicates somehow that there is a direct link between being unverified or anonymous and harm or being dodgy, which I think is illegitimate. It has already been explained that there will be a detrimental impact on certain organisations—we have talked about Reddit, but let us also remember Mumsnet. There are quite a lot of organisations with community-centred models, where the structure is that influencers broadcast to their followers and where there are pseudonymous users. Is the requirement to filter out those contributors likely to lead to those models collapsing? I need to be reassured on this because I am not convinced at all. As has been pointed out, there will be a two-tier internet because those who are unable or unwilling to disclose their identity online or to be verified by someone would be, or could be, shut out from public discussions. That is a very dangerous place to have ended up, even though I am sure it is not what the Government intend.

My Lords, I am grateful for the broad, if not universal, support for the amendments that we have brought forward following the points raised in Committee. I apologise for anticipating noble Lords’ arguments, but I am happy to expand on my remarks in light of what they have said.

My noble friend Lord Moylan raised the question of non-verified user duties and crowdsourced platforms. The Government recognise concerns about how the non-verified user duties will work with different functionalities and platforms, and we have engaged extensively on this issue. These duties are applicable only to category 1 platforms, those with the largest reach and influence over public discourse. It is therefore right that such platforms have additional duties to empower their adult users. We anticipate that these features will be used in circumstances where vulnerable adults wish to shield themselves from anonymous abuse. If users decide that these features are restricting their experience on a particular platform, they can simply choose not to use them. In addition, before these duties come into force, Ofcom will be required to consult affected providers regarding the codes of practice, at which point it will consider how these duties might interact with various functionalities.

My noble friend and the noble Lord, Lord Allan of Hallam, raised the potential for being bombarded with pop-ups because of the forced-choice approach that we have taken. These amendments have been carefully drafted to minimise unnecessary prompts or pop-ups. That is why we have specified that the requirement to proactively ask users how they want these tools to be applied is applicable only to registered users. This approach ensures that users will be prompted to make a decision only once, unless they choose to ignore it. After a decision has been made, the provider should save this preference and the user should not be prompted to make the choice again.

The noble Lord, Lord Clement-Jones, talked further about his amendments on the cost of user empowerment tools as a core safety duty in the Bill. Category 1 providers will not be able to put the user empowerment tools in Clause 12 behind a pay wall and still be compliant with their duties. That is because they will need to offer them to users at the first possible opportunity, which they will be unable to do if they are behind a pay wall. The wording of Clause 12(2) makes it clear that providers have a duty to include user empowerment features that an adult user may use or apply.

The Minister may not have the information today, but I would be happy to get it in writing. Can he clarify exactly what will be expected of a service that already prohibits all the Clause 12 bad stuff in its terms of service?

I will happily write to the noble Lord on that.

Clause 12(4) further sets out that all such user empowerment content tools must be made available to all adult users and be easy to access.

The noble Lord, Lord Clement-Jones, on behalf of the noble Baroness, Lady Finlay, talked about people who will seek out suicide, self-harm or eating-disorder content. While the Bill will not prevent adults from seeking out legal content, it will introduce significant protections for adults from some of the most harmful content. The duties relating to category 1 services’ terms of service are expected hugely to improve companies’ own policing of their sites. Where this content is legal and in breach of the company’s terms of service, the Bill will force the company to take it down.

We are going even further by introducing a new user empowerment content-assessment duty. This will mean that, where content relates to eating disorders, for instance, but is not illegal, category 1 providers need fully to assess the incidence of this content on their service. They will need clearly to publish this information in accessible terms of service, so users will be able to find out what they can expect on a particular service. Alternatively, if they choose to allow suicide, self-harm or eating disorder content which falls into the definition set out in Clause 12, they will need proactively to ask users how they would like the user empowerment content features to be applied.

My noble friend Lady Morgan was right to raise the impact on vulnerable people or people with disabilities. While we anticipate that the changes we have made will benefit all adult users, we expect them particularly to benefit those who may otherwise have found it difficult to find and use the user empowerment content features independently—for instance, some users with certain types of disability. That is because the onus will now be on category 1 providers proactively to ask their registered adult users whether they would like these tools to be applied at the first possible opportunity. The requirement also remains to ensure that the tools are easy to access and to set out clearly what tools are on offer and how users can take advantage of them.

On the granularity of choice for different tools, as pressed by the noble Lord, Lord Clement-Jones, the forced-choice user empowerment amendment has been drafted in such a way as to ensure that, should platforms offer users a range of tools to comply with their duties, users will get a choice about each tool that is offered. For instance, if a provider offers users one tool that will reduce the likelihood that they see certain categories of content and another that alerts them to the nature of it, they will get separate choices about whether they want these tools to be applied. This will ensure that users have even more control over their experience online. A blanket on/off choice for all user empowerment features is unlikely to be consistent with the duty on category 1 services to have particular regard to the importance of protecting users’ freedom of expression when putting in place these features, which can be found in Clause 18. Additionally, duties under the Human Rights Act 1998 and the requirement to consult experts on freedom of expression mean that the measures Ofcom will recommend in its codes of practice must consider the impact on freedom of expression and so are unlikely to take a blanket approach.

I hope all that goes some way towards reassuring the noble Baroness, Lady Fox, that freedom of expression is baked into all these amendments. Many of the questions she raises come down to the choice of users. We are not forcing people by having a default on or off. We are encouraging all users to make a decision about what material they see as adults on the internet within the law. If, like her and like me, they want to continue to see that, they should continue to keep their settings broad and know that they will encounter things with which they may disagree or that may offend them.

Amendment 32 agreed.

Amendment 33

Moved by

33: Clause 6, page 5, line 37, leave out “duty about record-keeping set out in section 19(9)” and insert “duties about record-keeping set out in section 19(8A) and (9)”

Member’s explanatory statement

This amendment ensures that the new duties in Clause 19 proposed by amendments in my name to that clause are imposed on providers of Category 1 services.

Amendment 33 agreed.

Clause 10: Children’s risk assessment duties

Amendment 34

Moved by

34: Clause 10, page 9, line 13, after “8” insert “and, in the case of services likely to be accessed by children which are Category 1 services, the duties about assessments set out in section (Assessment duties: user empowerment)”

Member’s explanatory statement

This amendment inserts a signpost to the new duties imposed on providers of Category 1 services by the new Clause proposed after Clause 11 in my name.

My Lords, I will speak to the government amendments now but not anticipate the non-government amendments in this group.

As noble Lords know, protecting children is a key priority for this Bill. We have listened to concerns raised across your Lordships’ House about ensuring that it includes the most robust protections for children, particularly from harmful content such as pornography. We also recognise the strength of feeling about ensuring the effective use of age-assurance measures, by which we mean age verification and age estimation, given the important role they will have in keeping children safe online.

I thank the noble Baroness, Lady Kidron, and my noble friends Lady Harding of Winscombe and Lord Bethell in particular for their continued collaboration over the past few months on these issues. I am very glad to have tabled a significant package of amendments on age assurance. These are designed to ensure that children are prevented from accessing pornography, whether it is published by providers in scope of the Part 5 duties or allowed by user-to-user services that are subject to Part 3 duties. The Bill will be explicit that services will need to use highly effective age verification or age estimation to meet these new duties.

These amendments will also ensure that there is a clear, privacy-preserving and future-proof framework governing the use of age assurance, which will be overseen by Ofcom. Our amendments will, for the first time, explicitly require relevant providers to use age verification or age estimation to protect children from pornography. Publishers of pornographic content, which are regulated in Part 5, will need to use age verification or age estimation to ensure that children are not normally able to encounter content which is regulated provider pornographic content on their service.

Further amendments will ensure that, where such tools are proactive technology, Ofcom may also require their use for Part 5 providers to ensure compliance. Amendments 279 and 280 make further definitional changes to proactive technology to ensure that it can be recommended or required for this purpose. To ensure parity across all regulated pornographic content in the Bill, user-to-user providers which allow pornography under their terms of service will also need to use age verification or age estimation to prevent children encountering pornography where they identify such content on their service. Providers covered by the new duties will also need to ensure that their use of these measures meets a clear, objective and high bar for effectiveness. They will need to be highly effective at correctly determining whether a particular user is a child. This new bar will achieve the intended outcome behind the amendments which we looked at in Committee, seeking to introduce a standard of “beyond reasonable doubt” for age assurance for pornography, while avoiding the risk of legal challenge or inadvertent loopholes.

To ensure that providers are using measures which meet this new bar, the amendments will also require Ofcom to set out, in its guidance for Part 5 providers, examples of age-verification and age-estimation measures which are highly effective in determining whether a particular user is a child. Similarly, in codes of practice for Part 3 providers, Ofcom will need to recommend age-verification or age-estimation measures which can be used to meet the new duty to use highly effective age assurance. This will meet the intent of amendments tabled in Committee seeking to require providers to use measures in a manner approved by Ofcom.

I confirm that the new requirement for Part 3 providers will apply to all categories of primary priority content that is harmful to children, not just pornography. This will mean that providers which allow content promoting or glorifying suicide, self-harm and eating disorders will also be required to use age verification or age estimation to protect children where they identify such content on their service.

Further amendments clarify that a provider can conclude that children cannot access a service—and therefore that the service is not subject to the relevant children’s safety duty—only if it uses age verification or age estimation to ensure that children are not normally able to access the service. This will ensure consistency with the new duties on Part 3 providers to use these measures to prevent children’s access to primary priority content. Amendment 34 inserts a reference to the new user empowerment duties imposed on category 1 providers in the child safety duties.

Amendment 214 will require Part 5 providers to publish a publicly available summary of the age-verification or age-estimation measures that they are using to ensure that children are not normally able to encounter content that is regulated provider pornographic content on their service. This will increase transparency for users on the measures that providers are using to protect children. It also aligns the duties on Part 5 providers with the existing duties on Part 3 providers to include clear information in terms of service on child protection measures or, for search engines, a publicly available statement on such measures.

I thank the noble Baroness, Lady Kidron, for her tireless work relating to Amendment 124, which sets out a list of age-assurance principles. This amendment clearly sets out the important considerations around the use of age-assurance technologies, which Ofcom must have regard to when producing its codes of practice. Amendment 216 sets out the subset of principles which apply to Part 5 guidance. Together, these amendments ensure that providers are deploying age-assurance technologies in an appropriate manner. These principles appear as a full list in Schedule 4. This ensures that the principles can be found together in one place in the Bill. The wider duties set out in the Bill ensure that the same high standards apply to both Part 3 and Part 5 providers. These principles have been carefully drafted to avoid restating existing duties in the Bill. In accordance with good legislative drafting practice, the principles also do not include reference to other legislation which already directly applies to providers. In its relevant guidance and codes, however, Ofcom may include such references as it deems appropriate.

Finally, I highlight the critical importance of ensuring that users’ privacy is protected throughout the age-assurance processes. I make it clear that privacy has been represented in these principles to the furthest degree possible, by referring to the strong safeguards for user privacy already set out in the Bill.

In recognition of these new principles and to avoid duplication, Amendment 127 requires Ofcom to refer to the age-assurance principles, rather than to the proactive technology principles, when recommending age-assurance technologies that are also proactive technology.

We have listened to the points raised by noble Lords about the importance of having clear and robust definitions in the Bill for age assurance, age verification and age estimation. Amendment 277 brings forward those definitions. We have also made it clear that self-declared age, without additional, more robust measures, is not to be regarded as age verification or age estimation for compliance with duties set out in the Bill. Amendment 278 aligns the definition of proactive technology with these new definitions.

The Government are clear that the Bill’s protections must be implemented as quickly as is feasible. This entails a complex programme of work for the Government and Ofcom, as well as robust parliamentary scrutiny of many parts of the regime. All of this will take time to deliver. It is right, however, that we set clear expectations for when the most pressing parts of the regulation—those targeting illegal content and protecting children—should be in place. These amendments create an 18-month statutory deadline from the day the Bill is passed for Ofcom’s implementation of those areas. By this point, Ofcom must submit draft codes of practice to the Secretary of State to be laid in Parliament and publish its final guidance relating to illegal content duties, duties about content harmful to children and duties about pornography content in Part 5. This also includes relevant cross-cutting duties, such as content reporting procedures, which are relevant to illegal content and content harmful to children.

In line with convention, most of the Bill’s substantive provisions will be commenced two months after Royal Assent. These amendments ensure that a set of specific clauses will commence earlier—on the day of Royal Assent—allowing Ofcom to begin vital implementation work sooner than it otherwise would have done. Commencing these clauses early will enable Ofcom to launch its consultation on draft codes of practice for illegal content duties shortly after Royal Assent.

Amendment 271 introduces a new duty on Ofcom to produce and publish a report on in-scope providers’ use of age-assurance technologies, and for this to be done within 18 months of the first date on which both Clauses 11 and 72(2), on pornography duties, are in force. I thank the noble Lord, Lord Allan of Hallam, for the amendment he proposed in Committee, to which this amendment responds. We believe that this amendment will improve transparency in how age-assurance solutions are being deployed by providers, and the effectiveness of those solutions.

Finally, we are also making a number of consequential and technical amendments to the Bill to split Clauses 11 and 25 into two parts. This is to ensure these do not become unwieldy and that the duties are clear for providers and for Ofcom. I beg to move.

Debate on Amendment 34 adjourned.

Consideration on Report adjourned.

House adjourned at 7.12 pm.