Online Safety Bill

Volume 829: debated on Tuesday 9 May 2023

Committee (5th Day)

Relevant document: 28th Report from the Delegated Powers Committee

Debate on Amendment 33B resumed.

My Lords, I will speak to Amendment 155 in my name, and I am grateful for the support of the noble Baroness, Lady Fox of Buckley, and my noble friend Lord Strathcarron. Some of my remarks in Committee last week did not go down terribly well with Members and, in retrospect, I realise that that was because I was the only Member of the Committee that day who did not take the opportunity to congratulate the noble Baroness, Lady Kidron, on her birthday. So at this very late stage—a week later—I make good that deficiency and hope that, in doing so, I will get a more jocular and welcoming hearing than I did last week. I will speak in a similar vein, though on a different topic and part of the Bill.

This amendment relates to Clause 65, which has 12 subsections. I regard the first subsection as relatively uncontroversial; it imposes a duty on all service providers. The effect of this amendment would be to remove all the remaining subsections, which fall particularly on category 1 providers. What Clause 65 does, in brief, is to make it a statutory obligation for category 1 providers to live up to their terms of service. Although it does not seek to specify what the terms of service must be, it does, in some ways, specify how they should be operated once they have been written—I regard that as very odd, and will come back to the reason why.

I say at the outset that I understand the motivation behind this section of the Bill. It addresses the understandable feeling that, if a service provider of any sort says in its terms of service that complaints will be dealt with in a certain way and to a certain timetable, that you will get a response by a certain time, or that it will remove certain material, then it should do what it says it will do in those terms of service. I understand what the clause is trying to do—to oblige service providers to live up to their terms of service—but this is a very dangerous approach.

First of all, while terms of service are a civil contract between the provider and the user, they are not an equal contract, as we all know. They are written for the commercial benefit and advantage of the companies that write them—not just in the internet world; this is generally true—and they are written on a take it or leave it basis. Of course, they cannot be egregiously disadvantageous to the customer or else the customer would not sign up to them; none the less, they are drafted with the commercial and legal advantage of the companies in question. Terms of service can be extreme. Noble Lords may be aware that, if you have a bank account, the terms of service that your bank has, in effect, imposed on you almost certainly include a right for the bank to close your account at any time it wishes and to give no reason for doing so. I regard that as an extreme terms of service provision, but it is common. They are not written as equal contracts between consumers and service providers.

Why, therefore, would we want to set terms of service in statute? That is what this clause does: it makes them enforceable by a regulator under statute. Moreover, why would we want to do it when the providers we are discussing will have, in practice, almost certainly drafted their terms of service under the provisions of a foreign legal system, which we are then asking our regulator to enforce? My objection is not to finding a way of requiring providers to live up to the terms of service they publish—indeed, the normal process for doing so would be through a civil claim; rather, I object to the method of doing so set out in this section of the Bill.

We do not use this method with other terms of service features. For example, we do not have a regulator who enforces terms of service on data protection; we have a law that says what companies must do to protect data, and then we expect them to draft terms of service, and to conduct themselves in other ways, that are compatible with that law. We do not make the terms of service themselves enforceable through statute and regulation, yet that is what this Bill does.

When we look at the terms of service of the big providers on the internet—the sorts of people we have in mind for the scope of the Bill—we find that they give themselves, in their terms of service, vast powers to remove a wide range of material. Much of that would fall—I say this without wanting to be controversial—into the category of “legal but harmful”, which in some ways this clause is reviving through the back door.

Of course, what could be “harmful” is extremely wide, because it will have no statutory bounds: it will be whatever Twitter or Google say they will remove in their terms of service. We have no control over what they say in their terms of service; we do not purport to seek such control in the Bill or in this clause. Twitter policy, for example, is to take down material that offends protected characteristics such as “gender” and “gender identity”. Now, those are not protected characteristics in the UK; the relevant protected characteristics in the Equality Act are “sex” and “gender reassignment”. So this is not enforcing our law; our regulator will be enforcing a foreign law, even though it is not the law we have chosen to adopt here.

YouTube policy during the pandemic prohibited material that contradicted the views of health authorities. Even my right honourable friend David Davis was removed for opposing Covid passes, but that was a legitimate political position to take and contribution to make. There is no obligation on the platforms to protect free speech or to have regard to Article 10 of the European Convention on Human Rights. They are not in any sense bound by the European convention; most of them are not in any sense European. I think very strongly that this whole section is very dangerous.

I posit an extreme case that requires a slight exercise of the imagination. Imagine if a Russian platform were to gain a significant presence in the UK. It is not impossible: nobody would have predicted TikTok emerging from China so quickly not very long ago. Imagine the terms of service said, quite in compliance with Russian law, that it would remove any material that included the words “war” and “Ukraine” together; “special military operation” would be all right, but “war” and “Ukraine” would not. Imagine that it was relatively inefficient at doing this and left such material up. Are we not in a position, as a result of this section of the Bill, of obliging Ofcom to seek to enforce that term of its service contract on a Russian platform? How absurd that would be in an extreme case, but the parallel exists with the American and other platforms.

I very much hope that my noble friend will say what I want to say, which is that, yes, there is an issue and we would like to do something. We understand the motivation here, but this is very much the wrong way of going about it. It is inimical to free speech and it leads to absurd conclusions.

I support Amendment 44. I am pleased that, as part of the new triple shield, the Government have introduced Clause 12 on “User empowerment duties”, which allow users to protect themselves, not just from abusive posts from other users but from whole areas of content. In the Communications and Digital Committee’s inquiry, we had plenty of evidence from organisations representing minorities and people with special characteristics who are unable adequately to protect themselves from the hate they receive online. I am glad that subsections (10) to (12) recognise specific content and users with special characteristics who are targets of abuse and need to be able to protect themselves, but subsection (3) requires that these features should be

“designed to effectively … reduce the likelihood of the user encountering content”

they want to avoid. I am concerned that “effectively” will be interpreted subjectively by platforms in scope and that each will interpret it differently.

At the moment, it will not be possible for Ofcom to assess how thoroughly the platforms have been providing these empowerment tools of protection for users. If the features are to work, there must be an overview of how effective they are being and how well they are working. When the former Secretary of State, Michelle Donelan, was asked about this, she said that there was nothing in this clause to pin an assessment on. It seems to me that the lists in Clause 12 create plenty of criteria on which to hang an assessment.

The new duties in Clause 12 provide for control tools for users against very specific content that is abusive or incites hatred on the basis of race, ethnicity, religion, disability, sex, gender reassignment or sexual orientation. However, this list is not exhaustive. There will inevitably be areas of content for which users have not been given blocking tools, including pornography, violent material and other material that is subject to control in the offline world.

Not only will the present list for such tools need to be assessed for its thoroughness in allowing users to protect themselves from specific harms, but surely the types of harm from which they need to protect themselves will change over time. Ofcom will need regularly to assess where these harms are and make sure that service providers regularly update their content-blocking tools. Without such an assessment, it will be hard for Ofcom and civil society to understand what the upcoming concerns are with the tools.

The amendment would provide a transparency obligation, which would demand that service providers inform users of the risks present on the platform. Surely this is crucial when users are deciding what to protect themselves from.

The assessment should also look for unintended restrictions on freedom of expression created by the new tools. If the tools are overprotective, they could surely create a bubble and limit users’ access to information that they might find useful. For example, the user might want to block material about eating disorders, but the algorithm might interpret that to mean limiting the user’s access to content on healthy lifestyles or nutrition. We are also told that the algorithms do not understand irony and humour. When the filters are used to stop content that is abusive or incites hatred on the basis of users’ particular characteristics, they might also remove artistic, humorous or satirical content.

Repeatedly, we are told that the internet creates echo chambers, where users read only like-minded opinions. These bubbles can create an atmosphere where freedom of expression is severely limited and democracy suffers. A freedom of expression element to the assessment would also, in these circumstances, be critical. We are told that the tech platforms often do not know what their algorithms do and, not surprisingly, they often evolve beyond their original intentions. Assessments on the tools demanded by Clause 12 need to be carefully investigated to ensure that they are keeping up to date with the trends of abuse on the internet but also for the unintended consequences they might create, curbing freedom of expression.

Throughout the Bill, there is a balancing act between freedom of expression and protection from abuse. The user empowerment tools are potentially very powerful, and neither the service providers, the regulators nor the Government know what their effects will be. It is incumbent on the Government to introduce an assessment to check regularly how the user empowerment duties are working; otherwise, how can they be updated, and how can Ofcom discover what content is being unintentionally controlled? I urge the Minister, in the name of common sense, to ensure that these powerful tools unleashed by the Bill will not be misused or become outdated in a fast-changing digital world.

My Lords, I thank the noble Lord, Lord Moylan, for his words—I thought I was experiencing time travel there—and am sympathetic to many of the issues that he has raised, although I think that some of the other amendments in the group tackle those issues in a slightly different way.

I support Amendments 44 and 158 in the name of the right reverend Prelate the Bishop of Oxford. Requiring a post-rollout assessment to ensure that the triple shield acts as we are told it will seems to be a classic part of any regulatory regime that is fit for purpose: it needs to assess whether the system is indeed working. The triple shield is an entirely new concept, and none of the burgeoning regulatory systems around the world is taking this approach, so I hope that both the Government and Ofcom welcome this very targeted and important addition to the Bill.

I will also say a few words about Amendments 154 and 218. It seems to me that, in moving away from legal but harmful—which, as a member of the pre-legislative committee, I supported, subject to conditions that have not been met—not enough time and thought have been given to the implications of that move. I do not understand, and would be grateful to the Minister if he could help me understand, how Ofcom is to determine whether a company has met its own terms and conditions—by any means, not only by means of a risk assessment.

I want to make a point that the noble Baroness, Lady Healy, made the other day—but I want to make it again. Taking legal but harmful out and having no assessment of whether a company has met its general safety duties leaves the child safety duties as an island. They used to be something that was added on to a general system of safety; now they are the first and only port of call. Again, because of the way that legal but harmful fell out of the Bill, I am not sure whether we have totally understood how the child risk assessments sit without a generally cleaned up or risk-assessed digital environment.

Finally, I will speak in support of Amendment 160, which would have Ofcom say what “adequate and appropriate” terms are. To a large degree, that is my approach to the problem that the noble Lord, Lord Moylan, spoke about: let Parliament and the regulator determine what we want to see—as was said on the data protection system, that is how it is—and let us have minimum standards that we can rightly expect, based on UK law, as the noble Lord suggested.

I am not against the triple shield per se, but it radically replaced an entire regime of assessment, enforcement and review. I think that some of the provisions in this group really beg the Government’s attention, in order to make sure that there are no gaping holes in the regime.

My Lords, I will speak to Amendments 44 and 158 in the name of the right reverend Prelate the Bishop of Oxford. I also note my support for the amendments in the name of the noble Lord, Lord Stevenson of Balmacara, to ensure the minimum standard for a platform’s terms of service. My noble friend Lord Moylan has just given an excellent speech on the reasons why these amendments should be considered.

I am aware that the next group of amendments relates to the so-called user empowerment tools, so it seems slightly bizarre to be speaking to Amendment 44, which seeks to ensure that these user empowerment tools actually work as the Government hope they will, and Amendment 158, which seeks to risk assess whether providers’ terms of service duties do what they say and report this to Ofcom. Now that the Government have watered down the clauses that deal with protection for adults, like other noble Lords, I am not necessarily against the Government’s replacement—the triple shield—but I believe that it needs a little tightening up to ensure that it works properly. These amendments seem a reasonable way of doing just that. They would ensure greater protection for adults without impinging on others’ freedom of expression.

The triple shield relies heavily on companies’ enforcement of their terms of service and on other vaguely worded duties, such as the requirement, which the noble Viscount mentioned, that user empowerment tools be “easily accessible” and “effective”—whatever that means. Unlike with other duties in the Bill, such as those on illegal content and children’s duties, there is no mechanism to assess whether these new measures are working; whether the way companies are carrying out these duties is in accordance with the criteria set out; and whether they are indeed infringing freedom of expression. Risk assessments are vital to doing just that, because they are vital to understanding the environment in which services operate. They can reduce bureaucracy by allowing companies to rule out risks which are not relevant to them, and they can increase user safety by revealing new risks, thereby enabling the future-proofing of a regime. Can the Minister give us an answer today as to why risk assessment duties on these two strands of the triple shield—terms of service and user empowerment tools—were removed? If freedom of speech played a part in this, perhaps he could elaborate on why he thinks undertaking a risk assessment is in any way a threat.

Without these amendments, the Bill cannot be said to be a complete risk management regime. Companies will, in effect, be marking their own homework when designing their terms of service and putting their finger in the air when it comes to user empowerment tools. There will be no requirement for them to explain either to Ofcom or indeed to service users the true nature of the harms that occur on their service, nor the rationale behind any decisions they might make in these two fundamental parts of their service.

Since the Government are relying so heavily on their triple shield to ensure protection for adults, to me, not reviewing two of the three strands that make up the triple shield seems like fashioning a three-legged stool with completely uneven legs: a stool that will not stand up to the slightest pressure when used. Therefore, I urge the Minister to look again and consider reinstating these protections in the Bill.

My Lords, this group of amendments looks at the treatment of legal content accessed by adults. The very fact that Parliament feels that legislation has a place in policing access to legal material is itself worrying. This door was opened by the Government in the initial draft Bill, but, as we have already heard, after a widespread civil liberties backlash against the legal but harmful clauses, we are left with Clause 65. As has been mentioned, I am worried that this clause, and some of the amendments, might well bring back legal but harmful for adults by the back door. One of the weasel words here is “harmful”. As I have indicated before, it is difficult to work out from the groupings when to raise which bit, so I am keeping that for your Lordships until later and will just note that I am rather nervous about the weasel word “harmful”.

Like many of us, I cheered at the removal of the legal but harmful provisions, but I have serious reservations about their replacement with further duties via terms of service, which require category 1 services to have systems and processes in place to take down or restrict access to content, and to ban or suspend users, in accordance with their terms of service, as the noble Lord, Lord Moylan, explained. It is one of the reasons I support his amendment. It seems to me to be the state outsourcing the grubby job of censorship to private multinational companies with little regard for UK law.

I put my name to Amendment 155 in the name of the noble Lord, Lord Moylan, because I wanted to probe the Government’s attitude to companies’ terms of service. Platforms have no obligation to align their terms of service with freedom of expression under UK law. It is up to them. I am not trying to impose on them what they do with their service users. If a particular platform wishes to say, “We don’t want these types of views on our platform”, fine, that is its choice. But when major platforms’ terms of service, which are extensive, become the basis on which UK law enforces speech, I get nervous. State regulators are to be given the role of ensuring that all types of lawful speech are suppressed online, because the duty applies to all terms of service, whatever they are, regarding the platforms’ policies on speech suppression, censorship, user suspension, bans and so on. This duty is not restricted to so-called harmful content; it is whatever content the platform wishes to censor.

What is more, Clause 65 asks Ofcom to ensure that individuals who express lawful speech are suspended or banned from platforms if in breach of the platforms’ Ts & Cs, and that means preventing those individuals from expressing themselves more widely, beyond the specific speech in question. That is a huge green light to interfere in UK citizens’ freedom of expression, in my opinion.

I stress that I am not interested in interfering in the terms and conditions of private companies, although your Lordships will see later that I have an amendment demanding that they introduce free-speech clauses. That is because of the way we seem to be enacting the law via the terms of service of private companies. They should of course be free to dictate their own terms of service, and it is reasonable that members of the public should know what they are and expect them to be upheld. But that does not justify the transformation of these private agreements into statutory duties—that is my concern.

So, why are we allowing this Bill to ask companies to enforce censorship policies in the virtual public square that do not exist in UK law? When companies’ terms of service permit the suppression of speech, that is up to them, but when they suppress speech far beyond the limitations of speech in UK law and are forced to do so by a government regulator such as Ofcom, are we not in trouble? It means that corporate terms of service, which are designed to protect platforms’ business interests, are trumping case law on free speech that has evolved over many years.

Those terms of service are also frequently in flux, according to fashion or ownership; one only has to look at the endless arguments, which I have yet to understand, about Twitter’s changing terms of service after the Elon Musk takeover. Is Ofcom’s job to follow Elon Musk’s ever-changing terms of service and enforce them on the British public as if they are law?

The terms and conditions are therefore no longer simply a contract between a company and the user; their being brought under statute means that big tech will be exercising public law functions, with Ofcom as the enforcer, ensuring that lawful speech is suppressed constantly, in line with private companies’ terms of service. This is an utter mess and not in any way adequate to protect free speech. It is a fudge by the Government: they were unpopular on “lawful but harmful”, so they have outsourced it to someone else to do the dirty work.

My Lords, it has been interesting to hear so many noble Lords singing from the same hymn sheet—especially after this weekend. My noble friend Lord McNally opened this group by giving us his wise perspective on the regulation of new technology. Back in 2003, as he mentioned, the internet was not even mentioned in the Communications Act. He explained how regulation struggles to keep up and how quantum leaps come with a potential social cost; all that describes the importance of risk assessment of these novel technologies.

As we have heard from many noble Lords today, on Report in the Commons the Government decided to remove the adult safety duties—the so-called “legal but harmful” aspect of the Bill. I agree with the many noble Lords who have said that this has significantly weakened the protection for adults under the Bill, and I share the scepticism many expressed about the triple shield.

Right across the board, this group of amendments, with one or two exceptions, rightly aims to strengthen the terms of service and user empowerment duties in the Bill in order to provide a greater baseline of protection for adults, without impinging on others’ freedom of speech, and to reintroduce some risk-assessment requirement on companies. The new duties will clearly make the largest and riskiest companies expend more effort on enforcing their terms of service for UK users. However, the Government have not yet presented any modelling on what effect this will have on companies’ terms of service. I have some sympathy with what the noble Lord, Lord Moylan, said: the new duties could mean that terms of service become much longer and lawyered. This might have an adverse effect on freedom of expression, leading to the use of excessive takedown measures rather than looking at other more systemic interventions to control content such as service design. We heard much the same argument from the noble Baroness, Lady Fox. They both made a very good case for some of the amendments I will be speaking to this afternoon.

On the other hand, companies that choose to do nothing will have an easier life under this regime. Faced with stringent application of the duties, companies might make their terms of service shorter, cutting out harms that are hard to deal with because of the risk of being hit with enforcement measures if they do not. Therefore, far from strengthening protections via this component of the triple shield, the Bill risks weakening them, with particular risks for vulnerable adults. As a result, I strongly support Amendments 33B and 43ZA, which my noble friend Lord McNally spoke to last week at the beginning of the debate on this group.

Like the noble Baroness, Lady Kidron, I strongly support Amendments 154, 218 and 160, tabled by the noble Lord, Lord Stevenson, which would require regulated services to maintain “adequate and appropriate” terms of service, including provisions covering the matters listed in Clause 12. Amendment 44, tabled by the right reverend Prelate the Bishop of Oxford and me, inserts a requirement that services to which the user empowerment duties apply

“must make a suitable and sufficient assessment of the extent to which they have carried out the duties in this section including in each assessment material changes from the previous assessment such as new or removed user empowerment features”.

The noble Viscount, Lord Colville, spoke very well to that amendment, as did the noble Baronesses, Lady Fraser and Lady Kidron.

Amendment 158, also tabled by me and the right reverend Prelate, inserts a requirement that services

“must carry out a suitable and sufficient assessment of the extent to which they have carried out the duties under sections 64 and 65 ensuring that assessment reflects any material changes to terms of service”.

That is a very good way of meeting some of the objections that we have heard to Clause 65 today.

These two amendments focus on risk assessment because the new duties do not have an assessment regime to work out whether they work, unlike the illegal content and children’s duties, as we have heard. Risk assessments are vital to understanding the environment in which the services are operating. A risk assessment can reduce bureaucracy by allowing companies to rule out risks which are not relevant to them, and it can increase user safety by revealing new risks and future-proofing a regime.

The Government have not yet provided, in the Commons or in meetings with Ministers, any proper explanation of why risk assessment duties have been removed along with the previous adult safety duties, and they have not explained in detail why undertaking a risk assessment is in any way a threat to free speech. They are currently expecting adults to manage their own risks, without giving them the information they need to do so. Depriving users of basic information about the nature of harms on a service prevents them taking informed decisions as to whether they want to be on it at all.

Without these amendments, the Bill cannot be said to be a complete risk management regime. There will be no requirement to explain to Ofcom or to users of a company’s service the true nature of the harms that occur on its service, nor the rationale behind the decisions made in these two fundamental parts of the service. This is a real weakness in the Bill, and I very much hope that the Minister will listen to the arguments being made this afternoon.

My Lords, I thank noble Lords from all sides of the House for their contributions and for shining a light on the point the noble Lord, Lord Clement-Jones, made near the end of his remarks about the need to equip adults with the tools to protect themselves.

It is helpful to have these amendments, because they give the Minister the opportunity to accept—as I hope he will—a number of the points raised. It seems a long time since the noble Lord, Lord McNally, introduced this group, but clearly it has given us all much time to reflect. I am sure we will see the benefits of that in the response from the Minister. Much of the debate on the Bill has focused on child safety and general practicalities, but this group helpfully allows us to focus on adults and the operation of the Government’s replacement for the legal but harmful section of the Bill. As the noble Baroness, Lady Fraser, rightly said, perhaps some tightening up of the legislation before us would be helpful. These amendments give us that chance.

My noble friend Lord Lipsey has put forward a number of amendments, which helpfully focus on the whole area of adult risk assessments, which were required under the previous iteration of the Bill but have since been drastically watered down. I would be grateful if the Minister could give some explanation as to why we find ourselves in that situation, and perhaps take the opportunity to pick up a number of the points raised in the amendments.

Quite a lot of the debate has focused around the amendments put forward in the name of the right reverend Prelate the Bishop of Oxford. These amendments take a somewhat different approach, because they require service providers to assess the extent to which their user empowerment tools are meeting the obligations laid out in Clause 12. The noble Viscount, Lord Colville, in his helpful remarks, said that it was right to keep up to date with the trends in abuse. This is a point that has come up repeatedly in our discussion: the need to make sure that this legislation is entirely fit for purpose and is able to move with the kind of changes that he referred to.

My noble friend Lord Stevenson has four very helpful amendments in this group, which focus on the minimum standards in platforms’ terms of service. This is an area we began to probe during a debate last week, where the answer seemed to be that, because terms of service are already complicated, we should not add to them. The issue here is really how we get the terms of service in the right place. All these amendments, again, take us there.

I was interested in the comments by the noble Lord, Lord Moylan, about enforceability, but again, on the issue of terms of service, the problem for me is inconsistency. We should seek to bring consistency as well as usefulness and applicability into those terms of service.

We will come on to broader amendments about user empowerment tools in the next group, but there clearly is a gap between what the Government have promised adult users, and what they are likely to end up with when the new regime is fully operational. I hope we will hear from the Minister how that gap may be closed.

I listened with great interest to the noble Baroness, Lady Fox. It is important to say that the issue here is whether algorithms should determine the amount and nature of the material that comes users’ way. The amendments seek to assist users to have that control, not just to be at the mercy of algorithms. It is not about individual pieces of content but about what people can have control over. The amendments are useful in that respect. We know that there is much legal content which carries a risk of harm to adults, particularly vulnerable adults, who are not actually helped by the Bill. We need confidence that filters and other empowerment tools will make a genuine difference.

I hope that the Minister will accept that a number of these amendments are particularly helpful in strengthening the Bill, and that he will find a way to accept that form of strengthening.

I am very grateful to the noble Lords who have spoken on the amendments in this group, both this afternoon and last Tuesday evening. As this is a continuation of that debate, I think my noble friend Lord Moylan is technically correct still to wish the noble Baroness, Lady Kidron, a happy birthday, at least in procedural terms.

We have had a very valuable debate over both days on the Bill’s approach to holding platforms accountable to their users. Amendments 33B, 41A, 43ZA, 138A and 194A in the names of the noble Lords, Lord Lipsey and Lord McNally, and Amendment 154 in the name of the noble Lord, Lord Stevenson of Balmacara, seek to bring back the concept of legal but harmful content and related adult risk assessments. They reintroduce obligations for companies to consider the risk of harm associated with legal content accessed by adults. As noble Lords have noted, the provisions in the Bill to this effect were removed in another place, after careful consideration, to protect freedom of expression online. In particular, the Government listened to concerns that the previous legal but harmful provisions could create incentives for companies to remove legal content from their services.

In place of adult risk assessments, we introduced new duties on category 1 services to enable users themselves to understand how these platforms treat different types of content, as set out in Clauses 64 and 65. In particular, this will allow Ofcom to hold them to account when they do not follow through on their promises regarding content they say that they prohibit or to which they say that they restrict access. Major platforms already prohibit much of the content listed in Clause 12, but these terms of service are often opaque and not consistently enforced. The Bill will address and change that.

I would also like to respond to concerns raised through Amendments 41A and 43ZA, which seek to ensure that the user empowerment categories cover the most harmful categories of content to adults. I reassure noble Lords that the user empowerment list reflects input from a wide range of interested parties about the areas of greatest concern to users. Platforms already have strong commercial incentives to tackle harmful content. The major technology companies already prohibit most types of harmful and abusive content. It is clear that most users do not want to see that sort of content and most advertisers do not want their products advertised alongside it. Clause 12 sets out that providers must offer user empowerment tools with a specified list of content to the extent that it is proportionate to do so. This will be based on the size or capacity of the service as well as the likelihood that adult users will encounter the listed content. Providers will therefore need internally to assess the likelihood that users will encounter the content. If Ofcom disagrees with the assessment that a provider has made, it will have the ability to request information from providers for the purpose of assessing compliance.

Amendments 44 and 158, tabled by the right reverend Prelate the Bishop of Oxford, seek to place new duties on providers of category 1 services to produce an assessment of their compliance with the transparency, accountability, freedom of expression and user empowerment duties as set out in Clauses 12, 64 and 65 and to share their assessments with Ofcom. I am sympathetic to the aim of ensuring that Ofcom can effectively assess companies’ compliance with these duties. But these amendments would enable providers to mark their own homework when it comes to their compliance with the duties in question. The Bill has been designed to ensure that Ofcom has responsibility for assessing compliance and that it can obtain sufficient information from all regulated services to make judgments about compliance with their duties. The noble Baroness, Lady Kidron, asked about this—and I think the noble Lord, Lord Clement-Jones, is about to.

I hope the Minister will forgive me for interrupting, but would it not be much easier for Ofcom to assess compliance if a risk assessment had been carried out?

I will come on to say a bit more about how Ofcom goes about that work.

The Bill will ensure that providers have the information they need to understand whether they are in compliance with their duties under the Bill. Ofcom will set out how providers can comply in codes of practice and guidance that it publishes. That information will help providers to comply, although they can take alternative action if they wish to do so.

The right reverend Prelate’s amendments also seek to provide greater transparency to Ofcom. The Bill’s existing duties already account for this. Indeed, the transparency reporting duties set out in Schedule 8 already enable Ofcom to require category 1, 2A and 2B services to publish annual transparency reports with relevant information, including about the effectiveness of the user empowerment tools, as well as detailed information about any content that platforms prohibit or restrict, and the application of their terms of service.

Amendments 159, 160 and 218, tabled by the noble Lord, Lord Stevenson, seek to require user-to-user services to create and abide by minimum terms of service recommended by Ofcom. The Bill already sets detailed and binding requirements on companies to achieve certain outcomes. Ofcom will set out more detail in codes of practice about the steps providers can take to comply with their safety duties. Platforms’ terms of service will need to provide information to users about how they are protecting users from illegal content, and children from harmful content.

These duties, and Ofcom’s codes of practice, ensure that providers take action to protect users from illegal content and content that is harmful to children. As such, an additional duty to have adequate and appropriate terms of service, as envisaged in the amendments, is not necessary and may undermine the illegal and child safety duties.

I have previously set out why we do not agree with requiring platforms to set terms of service for legal content. In addition, it would be inappropriate to delegate this much power to Ofcom, which would in effect be able to decide what legal content adult users can and cannot see.

Amendment 155, tabled by my noble friend Lord Moylan, seeks to clarify whether and how the Bill makes the terms of service of foreign-run platforms enforceable by Ofcom. Platforms’ duties under Clause 65 apply only to the design, operation and use of the service in the United Kingdom and to UK users, as set out in Clause 65(11). Parts or versions of the service which are used in foreign jurisdictions—

On that, in an earlier reply the Minister explained that platforms already remove harmful content because it is harmful and because advertisers and users do not like it, but could he tell me what definition of “harmful” he thinks he is using? Different companies will presumably have a different interpretation of “harmful”. How will that work? It would mean that UK law will require the removal of legal speech based on a definition of harmful speech designed by who—will it be Silicon Valley executives? This is the problem: UK law is being used to implement the removal of content based on decisions that are not part of UK law but with implications for UK citizens who are doing nothing unlawful.

The noble Baroness’s point gets to the heart of the debate that we have had. I talked earlier about the commercial incentive for companies to take action against harmful content that is legal, which users do not want to see and which advertisers do not want their products advertised alongside, but there is also a commercial incentive to uphold free speech and to ensure that there are platforms on which people can interact in a less popular manner, where advertisers that want to advertise their products legally alongside such content are able to do so. As with anything that involves the market, the majority has a louder voice, but there is room for innovation for companies to provide products that cater to minority tastes within the law.

My Lords, my noble friend has explained clearly how terms of service would normally work, which is that, as I said myself, a business might write its own terms of service to its own advantage but it cannot do so too egregiously or it will lose customers, and businesses may aim themselves at different customers. All this is part of normal commercial life, and that is understood. What my noble friend has not really addressed is the question of why uniquely and specifically in this case, especially given the egregious history of censorship by Silicon Valley, he has chosen to put that into statute rather than leave it as a commercial arrangement, and to make it enforceable by Ofcom. For example, when my right honourable friend David Davis was removed from YouTube for his remarks about Covid passes, it would have been Ofcom’s obligation not to vindicate his right to free speech but to cheer on YouTube and say how well it had done for its terms of service.

Our right honourable friend’s content was reuploaded. This makes the point that the problem at the moment is the opacity of these terms and conditions; what platforms say they do and what they do does not always align. The Bill makes sure that users can hold them to account for the terms of service that they publish, so that people can know what to expect on platforms and have some form of redress when their experience does not match their expectations.

I was coming on to say a bit more about that after making some points about foreign jurisdictions and my noble friend’s Amendment 155. As I say, parts or versions of the service that are used in foreign jurisdictions but not in the UK are not covered by the duties in Clause 65. As such, the Bill does not require a provider to have systems and processes designed to enforce any terms of service not applicable in the UK.

In addition, the duties do not give powers to Ofcom to enforce a provider’s terms of service directly. Ofcom’s role will be focused on ensuring that platforms have systems and processes in place to enforce their own terms of service consistently rather than assessing individual pieces of content.

Requiring providers to set terms of service for specific types of content suggests that the Government view that type of content as harmful or risky. That would encourage providers to prohibit such content, which of course would have a negative impact on freedom of expression, which I am sure is not what my noble friend wants to see. Freedom of expression is essential to a democratic society. Throughout the passage of the Bill, the Government have always committed to ensuring that people can speak freely online. We are not in the business of indirectly telling companies what legal content they can and cannot allow online. Instead, the approach that we have taken will ensure that platforms are transparent and accountable to their users about what they will and will not allow on their services.

Clause 65 recognises that companies, as private entities, have the right to remove content that is legal from their services if they choose to do so. To prevent them doing so, by requiring them to balance this against other priorities, would have perverse consequences for their freedom of action and expression. It is right that people should know what to expect on platforms and that they are able to hold platforms to account when that does not happen. On that basis, I invite the noble Lords who have amendments in this group not to press them.

My Lords, in his opening remarks, the Minister referred to the fact that this debate began last Tuesday. Well, it did, in that I made a 10-minute opening speech and the noble Baroness, Lady Stowell, rather elegantly hopped out of this group of amendments; perhaps she saw what was coming.

How that made me feel is perhaps best summed up by what the noble Earl, Lord Howe, said earlier when he was justifying the business for tomorrow. He said that adjournments were never satisfactory. In that spirit, I wrote to the Leader of the House, expressing the grumbles I made in my opening remarks. He has written back in a very constructive and thoughtful way. I will not delay the Committee any longer, other than to say that I hope the Leader of the House would agree to make his reply available for other Members to read. It says some interesting things about how we manage business. It sounds like a small matter but if what happened on Tuesday had happened in other circumstances in the other place, business would probably have been delayed for at least an hour while the usual suspects picked holes in it. If the usual channels would look at this, we could avoid some car crashes in future.

I am pleased that this group of amendments has elicited such an interesting debate, with fire coming from all sides. In introducing the debate, I said that probably the only real advice I could give the Committee came from my experience of being on the pre-legislative scrutiny committee in 2003. That showed just how little we were prepared for the tsunami of new technology that was about to engulf us. My one pleasure was that we were part of forming Ofcom. I am pleased that the chairman of Ofcom, the noble Lord, Lord Grade, has assiduously sat through our debates. I suspect he is thinking that he had better hire some more lawyers.

We are trying to get this right. I have no doubt that all sides of the House want to get this legislation through in good shape and for it to play an important role. I am sure that the noble Lord, Lord Grade, never imagined that he would become a state regulator in the kind of ominous way in which the noble Baroness, Lady Fox, said it. Ofcom has done a good job and will do so in future.

There is a problem of getting definitions right. When I was at the Ministry of Justice, I once had to entertain a very distinguished American lawyer. As I usually did, I explained that I was not a lawyer. He looked at me and said, “Then I will speak very slowly”. There is a danger, particularly in this part of the Bill, of wandering into a kind of lawyer-fest. It is important that we are precise about what powers we are giving to whom. Just to chill the Minister’s soul, I remember being warned as well about Pepper v Hart. What he says at the Dispatch Box will be used to interpret what Parliament meant when it gave this or that power.

The debate we have had thus far has been fully justified in sending a few warning signals to the Minister that it is perhaps not quite right yet. It needs further work. There is a lot of good will on all sides of the House to get it right. For the moment, I beg leave to withdraw my amendment.

Amendment 33B withdrawn.

Clause 12: User empowerment duties

Amendment 34

Moved by

34: Clause 12, page 12, line 9, leave out “if they wish to increase their control over” and insert “to control”

Member’s explanatory statement

This amendment, and another in the name of Baroness Morgan, would require Category 1 providers to ensure that the default options are the safest for users in regard to suicide, self-harm, eating disorders and the abuse and hate content already determined to be harmful as part of the Government’s “triple shield” approach.

My Lords, it is a great pleasure to speak to this group of amendments. As it is the first time I have spoken at this stage of the Bill’s proceedings, I declare my interest as a trustee and founder of the mental health charity the Loughborough Wellbeing Centre, which is relevant to this group. If it is lawyers’ confession time, then I am also going to confess to being a non-practising solicitor. But I can assure those Members of the House who are not lawyers that they do not need to be lawyers or ex-lawyers to understand the very simple proposition at the heart of this group of amendments.

Amendments 34 and 35 are in my name, along with those of the noble Baroness, Lady Parminter, the right reverend Prelate the Bishop of Gloucester and the noble Lord, Lord Griffiths of Burry Port. I am very grateful to them for their support for these amendments, which are also supported by the Football Association, Kick It Out, Beat, YoungMinds, the Royal College of Psychiatrists, the British Psychological Society, Mind, the Mental Health Network, the NHS Confederation, Rethink Mental Illness and Mental Health UK. I thank particularly the Mental Health Foundation for its support with making the points that we will cover in this group.

As we have already heard, and rightly, it is difficult with a Bill of this complexity to debate just one topic in a particular group. Although I have not spoken, it has been a great privilege to listen to your Lordships on earlier groups. We have already talked this afternoon and previously about the Government’s triple-shield approach and its replacement of the “legal but harmful” provisions that were taken out of the Bill. We have heard that the triple shield consists of the removal of illegal content, the takedown of material in breach of platforms’ own terms of service—we have just been talking about that—and the provision to adults of greater choice over the content that they see online using these platforms. What we are talking about in this group of amendments is that third leg—I had put “limb” but have changed it because of what my noble friend Lady Fraser said—of the triple shield: that user empowerment tools should be on by default.

The change suggested by this proposal would require users on these platforms to flip a switch and choose whether to opt in to some of the most dangerous content available online, rather than receiving it by default. This adopts the Government’s existing approach of giving users choice over what they see but ensures that the default is that they will not be served this kind of material unless they actively choose to see it. The new offence on encouragement to serious self-harm, which the Government have committed to introducing, might form part of the solution here. But we cannot criminalise all the legal content that treads the line between glorification and outright encouragement, and no similar power is proposed to address eating disorder content. I know that others will talk about that, and I pay tribute to the work of Vicky Ford MP in relation to eating disorders; she has been brave enough to share her own experiences of those disorders.

During the Bill’s journey through Parliament, we have heard how vulnerable users often internalise the harmful and hateful content that they see online, which in turn can lead to users deliberately seeking out harmful content in an attempt to normalise self-destructive thoughts and behaviours. We have heard how Molly Russell, for example, viewed tweets which normalised her thoughts on self-harm and suicide; we have also heard how people with eating disorders often get what is called “inspiration” on platforms such as Tumblr, Instagram and TikTok.

We know from various studies that viewing this content has a negative effect on people’s mental well-being. A study carried out by the University of Oxford found that viewing images of self-harm often encouraged individuals to start self-harming, and concluded:

“Young people who self-harm are likely to use the internet in ways that increases their risk”.

Research by the Samaritans provided similar results, with 77% of respondents answering that they sometimes or often self-harmed in the same or similar ways after viewing self-harm imagery.

The Mental Health Foundation polled over 3,300 people and found that 67% of the public agreed or strongly agreed that they do not wish to be exposed to harmful content unless they explicitly choose to see it. I think my noble friend the Minister, perhaps not referring to this research, also said this earlier.

As we have also heard from the noble Baroness, Lady Merron, who is not in her place, even if a user is not searching for harmful content, they can be led to it through the algorithms. This includes pro-suicide, pro-self-harm, pro-anorexia and pro-bulimia content. In other words, it is too easy for users to see harmful content on these platforms, and this needs to change.

The Government chose to change from the legal but harmful provisions to the triple-shield approach. However, the user empowerment tools introduced are neither new nor ground-breaking, because a lot of social media platforms already claim to have filters in place, giving users the ability to hide certain content from their timelines. But many users do not know that they are there, or how to use them properly. As it stands, the Government’s solution will be largely ineffective unless these tools are on by default.

Another point I suspect others will make, which we heard in the briefings before this group, is that vulnerability does not stop at the age of 18, so why would there be a cliff edge where there is protection from known harmful content for those under 18 but not for those over 18? As somebody made clear in the Samaritans briefing, which a number of us attended, people can be sectioned for their own protection after the age of 18. Adults, and particularly the vulnerable, may not be in a position to self-protect, and the trouble with not having the tools on by default is that we are yet again putting the burden to self-protect on the vulnerable and potential victims without taking responsibility as a society for this.

There is of course a wider point here—perhaps not for this debate but I am sure it will come up again—which is that not seeing the content does not mean that it does not exist. We will return to this when we debate content involving violence against women and girls. The noble Baroness, Lady Fox, has already referred to the content set out in subsections (10), (11) and (12) of this clause. Does the fact that it is listed mean we are saying that such harmful content is still OK to circulate on the internet, just because people are not seeing it? I would say this raises broader questions, but it is perhaps not a debate for today.

These two amendments would ensure that platforms’ design involves the safest options being on by default. They are two straightforward, common-sense amendments that, as the noble Viscount, Lord Colville—who is not here now—said, balance the understandable concerns about freedom of speech with safety. They do not stop the publication of this objectionable material, but they offer others, particularly the most vulnerable, a real choice about whether they see it. I would argue that it is our minimum duty to make sure these safety protections are on by default. I beg to move.

My Lords, it is a pleasure to be collaborating with the noble Baroness, Lady Morgan. We seem to have been briefed by the same people, been to the same meetings and drawn the same conclusions. However, there are some things that are worth saying twice and, although I will try to avoid a carbon copy of what the noble Baroness said, I hope the central points will make themselves.

The internet simply must be made to work for its users above all else—that is the thrust of the two amendments that stand in our names. Through education and communication, the internet can be a powerful means of improving our lives, but it must always be a safe platform on which to enjoy a basic right. It cannot be said often enough that to protect users online is to protect them offline. To create a strict division between the virtual and the public realms is to risk ignoring how actions online can have life and death repercussions, and that is at the heart of what these amendments seek to bring to our attention.

I was first made aware of these amendments at a briefing from the Samaritans, where we got to know each other. There I heard the tragic accounts of those whose loved ones had taken their own lives due to exposure to harmful content online. I will not repeat their accounts—this is not the place to do that—but understanding only a modicum of their grief made it obvious to me that the principle of “safest option by default” must underline all our decision-making on this.

I applaud the work already done by Members of this House to ensure the safety of young people online. Yet it is vital, as the noble Baroness has said, that we do not create a drop-off point for future users—one in which turning 18 means sudden exposure to the most harmful content lurking online, as it is always there. Those most at risk of suicide due to exposure to harmful content are aged between their late teens and early 20s. In fact, a 2017 inquiry into the suicides of young people found harmful content accessed online in 26% of the deaths of under 20s and 13% of the deaths of 20 to 24 year-olds. It is vital for us to empower users from their earliest years.

In the Select Committee—I see fellow members sitting here today—we have been looking at digital exclusion and the need for education at all levels for those using the internet. Establishing good habits in the earliest years is the right way to start, but it must continue after that, because young people will be in control of the internet in adulthood only if they have had that education earlier. Adulthood comes with the freedom to choose how one expresses oneself online—of course it does—but this must not come at the cost of their continuing freedom from the most insidious content that puts their mental health at risk. Much mention has been made of the triple shield and I need not go there again. Its origins and perhaps deficiencies have been mentioned already.

The Center for Countering Digital Hate recently conducted an experiment, creating new social media accounts that showed interest in body image and mental health. This study found that TikTok served suicide-related content to new accounts within 2.6 minutes, with eating disorder content being recommended within 8 minutes. At the very least, these disturbing statistics tell us that users should have the option to opt in to such content, and not have to suffer this harm before later opting out. While the option to filter out certain categories of content is essential, it must be toggled on by default if safety is to be our primary concern.

The principle of safest by default creates not only a less harmful environment, but one in which users are in a position to define their own online experience. The space in which we carry out our public life is increasingly located on a small number of social media platforms—those category 1 platforms already mentioned several times—which everyone, from children to pensioners, uses to communicate and share their experiences.

We must then ensure that the protections we benefit from offline continue online: namely, protection from the harm and hate that pose a threat to our physical and mental well-being. When a child steps into school or a parent into their place of work, they must be confident that those with the power to do so have created the safest possible environment for them to carry out their interactions. This basic confidence must be maintained when we log in to Twitter, Instagram, TikTok or any other social media giant.

My Lords, my Amendment 43 tackles Clause 12(1), which expressly says that the duties in Clause 12 are to “empower” users. My concern is to ensure that, first, users are empowered and, secondly, legitimate criticism around the characteristics listed in Clause 12(11) and (12), for example, is not automatically treated as abusive or inciting hatred, as I fear it could be. My Amendment 283ZA specifies that, in judging content that is to be filtered out after a user has chosen to switch on various filters, the providers act reasonably and pause to consider whether they have “reasonable grounds” to believe that the content is of the kind in question—namely, abusive or problematic.

Anything under the title “empower adult users” sounds appealing—how can I oppose that? After all, I am a fan of the “taking back control” form of politics, and here is surely a way for users to be in control. On paper, replacing the “legal but harmful” clause with giving adults the opportunity to engage with controversial content if they wish, through enhanced empowerment tools, sounds positive. In an earlier discussion of the Bill, the noble Baroness, Lady Featherstone, said that we should treat adults as adults, allowing them to confront ideas with the

“better ethics, reason and evidence”—[Official Report, 1/2/23; col. 735.]

that has been the most effective way to deal with ideas from Socrates onwards. I say, “Hear, hear” to that. However, I worry that, rather than users being in control, there is a danger that the filter system might infantilise adult users and disempower them by hard-wiring into the Bill a duty and tendency to hide content from users.

There is a general weakness in the Bill. I have noted that some platforms are based on users moderating their own sites, which I am quite keen on, but that model will be detrimentally affected by the Bill. It would leave individual users in charge only of their own moderation, with no power to decide what is in, for example, Wikipedia or other Wikimedia projects, which are added to, organised and edited by a decentralised community of users. So I will certainly not take the phrase “user empowerment” at face value.

I am slightly concerned about linguistic double-speak, or at least confusion. The whole Bill is being brought forward in a climate in which language is weaponised in a toxic minefield—a climate of, “You can’t say that”. More nerve-rackingly, words and ideas are seen as dangerous and interchangeable with violent acts, in a way that needs to be unpicked before we pass this legislation. Speakers can be cancelled for words deemed to threaten listeners’ safety—but not physical safety; the opinions are said to be unsafe. Opinions are treated as though they cause damage or harm as viscerally as physical aggression. So lawmakers have to recognise the cultural context and realise that the law will be understood and applied in it, not in the abstract.

I am afraid that the language in Clause 12(1) and (2) shows no awareness of this wider backdrop—it is worryingly woolly and vague. The noble Baroness, Lady Morgan, talked about dangerous content, and all the time we have to ask, “Who will interpret what is dangerous? What do we mean by ‘dangerous’ or ‘harmful’?”. Surely a term such as “abusive”, which is used in the legislation, is open to wide interpretation. Dictionary definitions of “abusive” include words such as “rude”, “insulting” and “offensive”, and it is certainly subjective. We have to query what we mean by the terms when some commentators complain that they have been victims of online abuse, but when you check their timelines you notice that, actually, they have been subject just to angry, and sometimes justified, criticism.

I recently saw a whole thread arguing that the Labour Party’s recent attack ads against the Prime Minister were an example of abusive hate speech. I am not making a point about this; I am asking who gets to decide. If this is the threshold for filtering content, there is a danger of institutionalising safe space echo chambers. It can also be a confusing word for users, because if someone applies a user empowerment tool to protect themselves from abuse, the threshold at which the filter operates could be much lower than they intend or envisage but, by definition, the user would not know what had been filtered out in their name, and they have no control over the filtering because they never see the filtered content.

The same is true of the Bill’s use of the term “incites hatred”. The word “hatred” in 2023 is highly contentious in the public arena. Indeed, over the last decade Parliament has wrestled with criminal offences around the incitement of hatred, and safeguards were built into legislation in the past, including free speech clauses in controversial areas such as religion. However, it seems to me that in this Bill the word “hatred” is just free floating. A user who understands “incites hatred” to cover really malicious, nasty content might not realise how much other content could be filtered out by the filtering tool if it operates at a low threshold of understanding what inciting hatred is.

It is also the case that inciting hatred around protected characteristics is fraught as an issue offline, let alone online. There are huge rows about whether accusations of Islamophobia and inciting hatred of Muslims are sometimes used to avoid open debates on extreme Islamist views. For example, will images such as the cartoons in the Charlie Hebdo magazine be seen as inciting hatred by some, and will they get filtered out? Similarly, some say that accusations of anti-Semitism—inciting hatred of Jewish people—are used to quell legitimate criticism of Israeli policy. I could go on.

I am not making a comment on any of those issues, other than to note that those who think that using hatred as a basis for filtering online content is easy need to get out a bit more—and that is before we even get to the gender wars. Regularly, those who assert the immutability of biological sex are accused of whipping up hatred against trans people; Joanna Cherry MP has had a talk cancelled by the Stand Comedy Club for just that. In my opinion, the label “transphobic hate speech” directed at Joanna Cherry MP is totally illegitimate, because she is a crusader for women’s rights and lesbian rights, but it does not matter whether you and I agree or whether we have an argument about it; that is what debate is. We have to ask who from a big tech company will filter out material or decide what is, or is not, hatred. We have to note that these are difficult issues.

It is worth asking the Minister: who do the Government envisage will do the filtering? Do online filterers, let alone algorithms or machine learning, have the qualifications to establish what constitutes abuse or hatred? In other professions, from the College of Policing to overzealous HR departments and senior management teams in universities, we have seen overcaution in censoring and banning material under the auspices of hatred, abuse and that weasel word “harm”. Rather than empowering users, will the Bill not empower a new staff team of filterers trained in their own company’s equality, diversity and inclusion norms to use filtering tools at the lowest common denominator, leading to over-removal policies that err on the side of caution in order to comply with regulations? All that Amendment 43 does is to borrow the language of “discussion or criticism” from the free speech clause in the stirring up hatred offences section of the Public Order Act 1986 to try to lift the threshold at which Clause 12(11) and (12) might kick in. It is not ideal, but there is a lot at stake.

I completely oppose those amendments that promote a default setting. They are clearly advocating a censorious approach to legal speech. I rather liked an analogy that I heard the IEA’s Matthew Lesh use recently when he said, “Imagine if, when you go to a bookshop, you have to ask the shop assistant to let you into the special room that contains harmful books”. Of course, material is still accessible, but creating a barrier to accessing certain speech that is perhaps uncomfortable in terms of religion, race or gender also forces people to identify themselves. If you have to say, “Please can I go into the harmful speech section?”, or go into the harmful section of the bookshop, immediately you label yourself as pro-dangerous or pro-harmful material.

If those advocating these provisions are so certain about the righteousness of knowing that this speech is problematic, it would be more honest to simply outlaw it. What is more, the director of Defend Digital Me, Jen Persson, has raised concern that, by considering all adults to be at risk of harm in that way, the Bill will infantilise us, because it assumes that adults are inherently vulnerable. It is a sort of paternalistic Big Brother that we want to avoid in the Bill.

Finally, it is damaging in a democracy to have a proliferation of things that are unsayable. As the Bill reflects, so much debate takes place online, so it is surely our responsibility as legislators to encourage a diversity of views to circulate, rather than carelessly or inadvertently to narrow the range of what circulates. On previous groups we mentioned Germany’s infamous legislation, brought in in 2017, which is now facing major opposition at home. The Danish free-speech think tank Justitia notes that though

“the German government’s adoption of the NetzDG was a good faith initiative to curb hate online, the law has provided a blueprint for Internet censorship that is being used to target dissent and pluralism.”

I fear that unless we are very careful this section will do the same.

My Lords, it is a pleasure to follow the noble Baroness, Lady Fox. I am afraid that on this issue, as I am sure she would expect, we profoundly disagree. I am delighted to support the amendment of the noble Baroness, Lady Morgan, and those from my noble friend Lord Clement-Jones, which do the same sort of thing and address the critical issue of what is a proportionate response, respecting the fact that the position for adults is different from that for children. What is a proportionate response, recognising that there is a large cadre of vulnerable people who need help to manage the beneficial but also worrying tool which is social media?

I shall cover only the issue on which I have any degree of competence in this complex field, which is the importance of this amendment given the particular nature of eating disorders. I declare an interest as the mother of a young adult who has eating disorders and had them when she was a child. The noble Baroness, Lady Fox, talked about the need to allow adults to use their reason. Let me tell the Committee about people with eating disorders: my daughter can be perfectly reasonable when we talk about the benefits of proportional representation—she can beat me hands down—but I would love it if she could be as reasonable when I try, but fail, to get her to put food in her mouth.

Eating disorders have two features of relevance to this debate, and they are why I support the case for the strongest protection: the default being that people should have to opt in to have access to harmful content. First, eating disorders are intensely controlling. They suck people in, and they are not just about not eating; they control how people exercise; they control who they see; they are a control mechanism over a person’s whole life. I reject the idea that you can get someone who is controlled, day and night, by an eating disorder to make the decision to opt out of accessing social media content, when we know that people with eating disorders gravitate towards it because it provides them with content that sustains their illness. It provides them with communities of other users—the pro-mia and pro-ana sites, which sound incredibly comforting but are actually communities that encourage people, sometimes literally, to starve themselves to death. That controlling nature means that, for me, requiring people to opt in is the best way forward: it is a controlling illness.

Secondly, eating disorders are a very competitive illness. If you have anorexia, you want to be the thinnest. In the old days, that meant that you would cook food that you would not eat, but you would get your sister to eat it and you would feel good because you were thinner. Of course, with social media, you can now access all these websites where you can see people with nasogastric tubes and see people who are doing much “better”. As the noble Baroness, Lady Morgan, said, in that dreadful phrase, they provide “thinspiration”: people look for thinness and compare themselves to other people. It is an insatiable desire, so the idea that they will voluntarily opt out of that is just away with the fairies.

As I say, we need a proportionate response. I appreciate that people with eating disorders may well choose to opt in, but I think that the state in the first place should require that people have to opt into that choice. We have heard about the various mental health organisations that have made that case, but in thinking about this and talking to Rose about it, I think there is another fundamental reason why it is right that the state should take this approach. As the noble Baroness, Lady Morgan, said, eating disorders can start at a young age, but they can also start after the age of 18. If someone in their mid-20s—or mid-30s or mid-40s—is starting to feel a bit uncomfortable about their body image and starting to get some rather odd views about food but does not yet have an eating disorder, that is the time when, if they get support and do not get encouragement, we might be able to stop them getting sucked into these appalling vortexes of eating disorders. If we have this provision that people have to opt in, they might not see that content which, as has been mentioned, is being pushed at them—the right reverend Prelate the Bishop of Oxford gave examples the other week of how these sites feed you stuff immediately as soon as you start going down this route. If people have to opt in, we might just have that chance of stopping them getting an eating disorder.

Yes, people have to be given access to some of this material in a free society, but it is the role of the state to protect the vulnerable, and the particular nature of eating disorders means that, for me, this amendment is vital.

My Lords, it is a privilege to follow the noble Baroness, Lady Parminter, in her very moving and personal speech. I am sorry that I was unable to speak to the previous group of amendments, some of which were in my name, because, due to unavoidable business in my diocese, I was not able to be present when that debate began late last Tuesday. However, it is very good to be able to support this group of amendments, and I hope tangentially to say something also in favour of risk assessment, although I am conscious that other noble Lords have ably made many of the points that I was going to make.

My right reverend friend the Bishop of Gloucester has added her name in support of amendments in this group, and I also associate myself with them—she is not able to be here today. As has been said, we are all aware that reaching the threshold of 18 does not somehow award you with exponentially different discernment capabilities, nor wrap those more vulnerable teenagers in some impermeable cotton wool to protect them from harm.

We are united, I think, in wanting to do all we can to make the online space feel safe and be safe for all. However, there is increasing evidence that people do not believe that it is. The DCMS’s own Public Attitudes to Digital Regulation survey is concerning. The most recent data shows that the proportion of UK adults who do not feel safe and secure online increased from 38% in November/December 2021 to 45% in June/July 2022. If that trend continues, more than half of UK adults will soon not feel safe and secure online.

It is vital that we protect society’s most vulnerable. When people are vulnerable through mental illness or other challenges, they are surely not able to protect themselves from being exposed to damaging online content by making safe choices, as we have just heard. In making this an opt-in system, we would save lives when people are at a point of crisis.

In listening to our debates, I sometimes feel that we have not grasped in our deliberations as a Committee the inequality of arms which exists in an individual faced with the entire internet. We have heard analogies this afternoon of a bookshop, and we might think of a supermarket. We might also think of a debate in the Athenian Agora many years ago, when people debated person to person, with an equality of arms and intellect. There is no such equality of arms when it comes to exposure to the internet and social media. I will categorise five things which break this equality down—they all begin with “A”, if your Lordships like alliteration.

The first is advertising. The whole expertise of the advertising industry, commercially driven through applications, places its weight on the individual. The accumulated skill of how to sell more to more people is focused and channelled through all the social media we are concerned with regulating.

The second is access. Through the mobile phone in the 19 year-old’s pocket, and in mine, social media and app producers have access 24/7, in the most private and intimate moments of our lives, to influence and shape our minds. There is no physical boundary of going to a bookshop; it is present wherever we are.

The third “A” is access to our data. The people who are pushing things at us know more about us than the closest members of our families, because they study every purchase. Every click is interpreted. Every search we make is channelled back into our data and used to pressure the individual and to shape their choices in the offline world as well as the online one.

Fourthly, all this information and skill is then channelled and driven by the power of algorithms. It is multiplied, and multiplied again, in ways that no consumer fully understands or can measure.

Fifthly, we are now on the threshold of much of the content to which we and others are exposed being energised and powered by artificial intelligence, so that the problems we have seen to date are multiplying and will be multiplied hugely in the coming decade.

I believe that people will look back on the first two decades of the 21st century—the time that the noble Lord, Lord McNally, referred to, from 2003, when we did not envisage what was coming, to this Bill in 2023—as a time of complete madness. They will see it as a time when we created such harmful, toxic environments—not only for children and young people but for adults—that it affected the mental health of a generation profoundly. This Bill is an opportunity to draw a line in the sand and to remedy that. The user empowerment tools and adult risk assessments offer us very important tools. We must take this opportunity and fight back against this inequality of arms. I support these amendments.

My Lords, I contribute to this debate on the basis of my interests as laid out in the register: as chief executive of Cerebral Palsy Scotland; my work with the Scottish Government on people with neurological conditions; and as a trustee of the Neurological Alliance of Scotland. It is an honour to follow the right reverend Prelate, whose point about the inequality people experience in the online world is well made. I want to be clear that when I talk about ensuring online protection for people with disabilities, I do not assume that all adults with disabilities are unable to protect themselves. As the right reverend Prelate and the noble Lord, Lord Griffiths of Burry Port, pointed out, survey after survey demonstrates how offline vulnerabilities translate into the online world, and Ofcom’s own evidence suggests that people with physical disabilities, learning disabilities, autism, mental health issues and others can be classed as being especially vulnerable online.

The Government recognise that vulnerable groups are at greater risk online, because in its previous incarnations, this Bill included greater protection for such groups. We spoke in a previous debate about the removal of the “legal but harmful” provisions and the imposition of the triple shield. The question remains from that debate: does the triple shield provide sufficient protection for these vulnerable groups?

As I have said previously this afternoon, user empowerment tools are the third leg of the triple shield, but they put all the onus on users and no responsibility on the platforms to prevent individuals’ exposure to harm. Amendments 36, 37 and 38A, in the name of the noble Lord, Lord Clement-Jones, seek simply to make the default setting for the proposed user empowerment tools “on”. I do not pretend to understand how, technically, this will happen, but it clearly can, because the Bill already requires platforms to make this the default position in order to protect children. The default position in those amendments protects all vulnerable people, and that is why I support them—unlike, I fear, Amendment 34 from my noble friend Lady Morgan, which lists specific categories of vulnerable adults. I would prefer that all vulnerable people be protected from being exposed to harm in the first place.

Nobody’s freedom of expression is affected in any way by this default setting, but the overall impact on vulnerable individuals in the online environment would, I assure your Lordships, be significant. Nobody’s ability to explore the internet or to go into those strange rooms at the back of bookshops that the noble Baroness, Lady Fox, was talking about would be curtailed. The Government have already stated that individuals will have the capacity to seek out these tools and turn them on and off, and that they must be easily accessible. So individuals with capacity will be able to find the settings and set them to explore whatever legal content they choose.

However, is it not our duty to remember those who do not have capacity? What about adults with learning difficulties and people at a point of crisis—the noble Baroness, Lady Parminter, movingly spoke about people with eating disorders—who might not be able to turn to those tools due to their affected mental state, or who may not realise that what they are seeing is intended to manipulate? Protecting those users from encountering such content in the first place surely tips the balance in favour of turning the tools on by default.

I am very sad that the noble Baroness, Lady Campbell of Surbiton, cannot be here, because her contribution to this debate would be powerful. But, from her enormous experience of work with disabled people, this is her top priority for the Bill.

In preparing to speak to these amendments, I looked back to the inquiry in the other place into online abuse and the experience of disabled people that was prompted by Katie Price’s petition after the shocking abuse directed at her disabled son Harvey. In April 2019 the Government responded to that inquiry by saying that they were

“aware of the disproportionate abuse experienced by disabled people online and the damage such abuse can have on people’s lives, career and health”—

and the Government pledged to act.

The internet is a really important place for disabled people, and I urge the Government to ensure that it remains a safe place for all of us and to accept these amendments that would ensure the default settings are set to on.

My Lords, I rise to support the amendments in the name of the noble Baroness, Lady Morgan. I do so somewhat reluctantly, not because I disagree with anything that she said but because I would not necessarily start from here. I want to briefly say three very quick things about that and then move on to Amendments 42 and 45, which are also in this group.

We already have default settings, and we are pretending that this is a zero-sum game. The default settings at the moment are profiling us, filtering us and rewarding us; and, as the right reverend Prelate said in his immensely powerful speech, we are not starting at zero. So I do share the concerns of the noble Baroness, Lady Fox, about who gets to choose—some of us on this side of the debate are saying, “Can we define who gets to choose? Can Parliament choose? Can Ofcom choose? Can we not leave this in the hands of tech companies?” So on that I fully agree. But we do have default settings already, and this is a question of looking at some of the features as well as the content. It is a weakness of the Government’s argument that it keeps coming back to the content rather than the features, which are the main driver of what we see.

The second thing I want to say—this is where I am anxious about the triple shield—is: does not knowing you are being abused mean that you are not abused? I say that as someone who has experienced considerable personal abuse. I have my filter on and I am not on social media, but my children, my colleagues and some of the people I work with around the world do see what is said about me—it is a reputational thing, and for some of them it is a hurtful thing, and that is why I am reluctant in my support. However, I do agree with all the speakers who have said that our duty is to start with those people who are most vulnerable.

I want to mention the words of one of the 5Rights advisers—a 17 year-old girl—who, when invited to identify changes and redesign the internet, said, “Couldn’t we do all the kind things first and gradually get to the horrible ones?” I think that this could be a model for us in this Chamber. So, I do support the noble Baroness.

I want to move briefly to Amendment 42, which would see an arbitrary list of protected characteristics replaced by the Equality Act 2010. This has a lot to do with a previous discussion we had about human rights, and I want to say urgently to the Minister that the offer of the Online Safety Bill is not to downgrade human rights, children’s rights and UK law, but rather to bring forward a smart and comprehensive regime to hold companies accountable for human rights, children’s rights and UK law. We do not want to have a little list of some of our children’s rights or of some of our legislation; we would like our legislation and our rights embedded in the Bill.

I have to speak for Amendment 45. I express my gratitude to the noble Lord, Lord Stevenson, for tabling it. It would require Ofcom, six months after the event, to ask whether children need these user empowerment tools. It is hugely important. I remind the Committee that children have not only rights but an evolving capacity to be out there in the world. As I said earlier, the children’s safety duties have a cliff-edge feel to them. As children go out into the world on the cusp of adulthood, maybe they would like to have some of these user empowerment tools.

My Lords, the noble Baroness, Lady Kidron, said words to the effect that perhaps we should begin by having particular regard for certain vulnerabilities, but we are dealing with primary legislation and this really concerns me. Lists such as the one in Clause 12 are really dangerous. It is not a great way to write law. We could be with this law for a long time.

I took the Communications Act 2003 through for Her Majesty’s Opposition, and we were doing our absolute best to future-proof the legislation. There was no mention of the internet in that piece of legislation. With great respect to the noble Lord, Lord McNally, with whom I sparred in those days, it was not that Act that introduced Ofcom but a separate Act. The internet was not even mentioned until the late Earl of Northesk introduced an amendment with the word “internet” to talk about the investigative powers Act.

The reality is that we already had Facebook, and tremendous damage being done through it to people such as my daughter. Noble Lords will remember that in the early days it was Oxford, Cambridge, Yale and Harvard; that is how it all began. It was an amazing thing, and we could not foresee what would happen but there was a real attempt to future-proof. If you start having lists such as in Clause 12, you cannot just add on or change. Cultural mores change. This list, which looks great in 2023, might look really odd in about 2027. Different groups will have emerged and say, “Well, what about me, what about me?”.

I entirely agree with the noble Baroness, Lady Fox. Who will be the decider of what is right, what is rude or what is abusive? I have real concerns with this. The Government have had several years to get this right. I say that with great respect to my noble friend the Minister, but we will have to think about these issues a little further. The design of the technology around all this is what we should be imposing on the tech companies. I was on the Communications and Digital Committee in 2020 when that was a key plank of our report, following the inquiry that we carried out and prior to the Joint Committee, then looking at this issue of “legal but harmful”, et cetera. I am glad that was dropped because—I know that I should not say this—when I asked a civil servant what was meant by “harmful”, he said, “Well, it might upset people”.

It is a very subjective thing. This is difficult for the Government. We must do all we can to support the Government in trying to find the right solutions, but I am sorry to say that I am a lawyer—a barrister—and I worry. We are trying to make things right but, remember, once it is there in an Act, it is there. People will use that as a tool. In 2002, at New Scotland Yard, I was introduced to an incredible website about 65 ways to become a good paedophile. Where does that fit in Clause 12? I have not quite worked that out. Is it sex? What is it? We have to be really careful. I would prefer having no list and making it more general, relying on the system to allow us to opt in.

I support my noble friend Lady Morgan’s amendment on this, which would make it easier for people to say, “Well, that’s fine”, but would not exclude people. What happens if you do not fit within Clause 12? Do you then just have to suck it up? That is not a very House of Lords expression, but I am sure that noble Lords will relate to it.

We have to go with care. I will say a little more on the next group of amendments, on anonymity. It is really hard, but what the Government are proposing is not quite there yet.

That seemed to be provoked by me saying that we must look after the vulnerable, but I am suggesting that we use UK law and the rights that are already established. Is that not better than having a small list of individual items?

My Lords, I support the noble Baroness, Lady Buscombe, on the built-in obsolescence of any list. It would very soon be out of date.

I support the amendments tabled by the noble Lord, Lord Clement-Jones, and by the noble Baroness, Lady Morgan of Cotes. They effectively seek a similar aim. Like the noble Baroness, Lady Fraser, I tend towards those tabled by the noble Lord, Lord Clement-Jones, because they seem clearer and more inclusive, but I understand that they are trying for the same thing. I also register the support for this aim of my noble friend Lady Campbell of Surbiton, who cannot be here but whom I suspect is listening in. She was very keen that her support for this aim was recorded.

The issue of “on by default” inevitably came up at Second Reading. Then and in subsequent discussions, the Minister reiterated that a “default on” approach to user empowerment tools would negatively impact people’s use of these services. Speaking at your Lordships’ Communications and Digital Committee, on which I sat at the time, Minister Scully went further, saying that the strongest option, of having the settings off in the first instance,

“would be an automatic shield against people’s ability to explore what they want to explore on the internet”.

According to the Government’s own list, this was arguing for the ability to explore content that abuses, targets or incites hatred against people with protected characteristics, including race and disability. I struggle to understand why protecting this right takes precedence over ensuring that groups of people with protected characteristics are, well, protected. That is our responsibility. It is a question of precedence, because switching controls one way is not exactly the same as switching them the other way. It is easy to think so, but the noble Baroness, Lady Parminter, explained very clearly that it is not the same. It is undoubtedly easier for someone in good health and without mental or physical disabilities to switch controls off than it is for those with disabilities or vulnerabilities to switch them on. That is self-evident.

It cannot be right that those most at risk of being targeted online, including some disabled people—not all, as we have heard—and those with other protected characteristics, will have the onus on them to switch on the tools to prevent them seeing and experiencing harm. There is a real risk that those who are meant to benefit from user empowerment tools, those groups at higher risk of online harm, including people with a learning disability, will not be able to access the tools because the duties allow category 1 services to design their own user empowerment tools. This means that we are likely to see as many versions of user empowerment tools as there are category 1 services to which this duty applies.

Given what we know about the nature of addiction and self-harm, which has already been very eloquently explained, it surely cannot be the intention of the Bill that those people who are in crisis and vulnerable to eating disorders or self-harm, for example, will be required to seek and activate a set of tools to turn off the very material that feeds their addiction or encourages their appetite for self-harm.

The approach in the Bill does little to prevent people spiralling down this rabbit hole towards ever more harmful content. Indeed, instead it requires people to know that they are approaching a crisis point, and to have sufficient levels of resilience and rationality to locate the switch and turn on the tools that will protect them. That is not how the irrational or distressed mind works.

So, all the evidence that we have about the existence of harm which arises from mental states, which has been so eloquently set out in introducing the amendments—I refer again to my noble friend Lady Parminter, because that is such powerful evidence—tips the balance in favour, I believe, of setting the tools to be on by default. I very much hope the Minister will listen and heed the arguments we have heard set out by noble Lords across the Committee, and come back with some of his own amendments on Report.

Before the noble Baroness sits down, I wanted to ask for clarification, because I am genuinely confused. When it comes to political rights for adults in terms of their agency, they are rights which we assume everyone is able to exercise. But we recognise that in the adult community—this is offline now; I mean in terms of how we understand political rights—there may well be people who lack capacity or are vulnerable, and we take that into account. But we do not generally organise political rights and access to, for example, voting or free speech around the most vulnerable in society. That is not because we are insensitive or inhumane, or do not understand. The moving testimonies we have heard about people with eating disorders and so on are absolutely spot-on accurate. But are we suggesting that the world online should be organised around vulnerable adults, rather than adults and their political rights?

I do not have all the answers, but I do think we heard a very powerful point from the right reverend Prelate. In doing the same for everybody, we do not ensure equality. We need to have varying approaches, in order that everybody has equality of access. As the Bill stands, it says nothing about vulnerable adults. It simply assumes that all adults have full capacity, and I think what these amendments seek to do is find a way to recognise that simply thinking about children, and then that everybody aged 18 is absolutely able to take care of themselves and, if I may say, “suck it up”, is not the world we live in. We can surely do better than that.

My Lords, I rise briefly to support Amendments 34 and 35, from the noble Baroness, Lady Morgan, and others in this essential group. It is not enough to say the new triple shield will help prevent adults seeing harmful but legal material if they so wish. Having removed “harmful but legal” from the original Bill, there is now a need to ensure that the default options are the safest for users in regard to suicide, self-harm, eating disorders and abuse and hate content.

As the Bill stands, adults can still see the most dangerous content online. Young people over 18 may be especially vulnerable if faced with a torrent of images edited digitally to represent unattainable beauty standards; it can result in poor body image detrimental to mental health, resulting in shame, anxiety and, in some cases, suicide. As other noble Lords have said, anorexia has the highest mortality rate of any mental health problem. We know pro-anorexia sites are rife online. Vulnerable adults should be protected.

These amendments would make a real difference to the Bill. Changing the user empowerment provisions to require category 1 providers to have the safest options as the default for users would be a straightforward way of increasing the protection of most internet users who do not want to have this material bombard them. It would not overburden the tech companies and could do some good. It would not curtail freedom of speech, as tech-savvy users could easily flip a switch if they wished to opt in to some of the most dangerous content, which will still be available online, rather than receiving it by default.

Even with the Government’s best intentions to prevent encouragement of serious self-harm, we know they cannot criminalise all the legal content that treads the line between glorification and outright encouragement, as the noble Baroness, Lady Morgan, said. As the Communications and Digital Select Committee, on which I now serve, said in its 2021 report,

“the Online Safety Bill should require category 1 platforms to give users a comprehensive toolkit of settings, overseen by Ofcom, allowing users to decide what types of content they see and from whom. Platforms should be required to make these tools easy to find and use. The safest settings should always be the default”.

I hope the Government accept these valuable and simple amendments. They are supported by the Mental Health Foundation, to whom I owe thanks for this briefing, together with many other experts in the field of mental health.

My Lords, this is my first contribution to the Bill, and I feel I need to apologise in advance for my lack of knowledge and expertise in this whole field. In her initial remarks, the noble Baroness, Lady Morgan of Cotes, was saying “Don’t worry, because you don’t need to be a lawyer”. Unfortunately, I do not have any expertise in the field of the internet and social media and all of that as well, so I will be very brief in all of my remarks on the Bill. But I feel that I cannot allow the Bill to go past without at least making a few remarks, as equalities spokesperson for the Lib Dems. The issues are of passionate importance to me, and of course to victims of online abuse, and it is those victims for whom I speak today.

In this group, I will address my remarks to Amendments 34 and 35, in which we have discussed content deemed to be harmful—suicide, self-harm, eating disorders and abuse and hate content—under the triple shield approach, although this content discussion has strayed somewhat during the course of the debate.

Much harmful material, as we have heard, initially comes to the user uninvited. I do not pretend to understand how these algorithms work, but my understanding is that if you open one such item, they click into action, feeding more and more of this kind of content to you in your feed. The suicide of young Molly Russell is an example of the devastating damage these algorithms can contribute to. I am glad that the Bill will go further to protect children, but it still leaves adults—some young and vulnerable—without such protection and with the same automatic exposure to harmful content, which algorithms can increase with engagement and which could have overwhelming impacts on their mental health, as my noble friend Lady Parminter so movingly and eloquently described.

So this amendment means a user would have to make an active, conscious choice to be exposed to such content: an opt out rather than an opt in. This has been discussed at length by noble Lords a great deal more versed in the subject than me. But surely the only persons or organisations who would not support this would be the ones who do not have the best interests of the vulnerable users we have been talking about this afternoon at heart. I hope the Minister will confirm in his remarks that the Government do.

My Lords, I had not intended to speak in this debate because I now need to declare an unusual interest, in that Amendment 38A has been widely supported outside this Chamber by my husband, the Member of Parliament for Weston-super-Mare. I am not intending to speak on that amendment but, none the less, I mention it just in case.

I rise to speak because I have been so moved by the speeches, not least the right reverend Prelate’s speech. I would like just to briefly address the “default on” amendments and add my support. Like others, on balance I favour the amendments in the name of the noble Lord, Lord Clement-Jones, but would willingly throw my support behind my noble friend Lady Morgan were that the preferred choice in the Chamber.

I would like to simply add two additional reasons why I ask my noble friend the Minister to really reflect hard on this debate. The first is that children become teenagers, who become young adults, and it is a gradual transition—goodness, do I feel it as the mother of a 16 year-old and a 17 year-old. The idea that on one day all the protections just disappear completely and we require our 18 year-olds to immediately reconfigure their use of all digital tools just does not seem a sensible transition to adulthood to me, whereas the ability to switch off user empowerment tools as you mature as an adult seems a very sensible transition.

Secondly, I respect very much the free speech arguments that the noble Baroness, Lady Fox, made but I do not think this is a debate about the importance of free speech. It is actually about how effective the user empowerment tools are. If they are so hard for non-vulnerable adults to turn off, what hope have vulnerable adults to be able to turn them on? For the triple shield to work and the three-legged stool to be effective, the onus needs to be on the tech companies to make these user empowerment tools really easy to turn on and turn off. Then “default on” is not a restriction on freedom of speech at all; it is simply a means of protecting our most vulnerable.

My Lords, this has been a very thoughtful and thought-provoking debate. I start very much from the point of view expressed by the noble Baroness, Lady Kidron, and this brings the noble Baroness, Lady Buscombe, into agreement—it is not about the content; this is about features. The noble Baroness, Lady Harding, made exactly the same point, as did the noble Baroness, Lady Healy—this is not about restriction on freedom of speech but about a design feature in the Bill which is of crucial importance.

When I was putting together the two amendments that I have tabled, I was very much taken by what Parent Zone said in a recent paper. It described user empowerment tools as “a false hope”, and rightly had a number of concerns about undue reliance on tools. It said:

“There is a real danger of users being overwhelmed and bewildered”.

It goes on to say that

“tools cannot do all the work, because so many other factors are in play—parental styles, media literacy and technological confidence, different levels of vulnerability and, crucially, trust”.

The real question—this is why I thought we should look at it from the other side of things in terms of default—is about how we mandate the use of these user empowerment tools in the Bill for both children and adults. In a sense, my concerns are exactly the opposite of those of the noble Baroness, Lady Fox—for some strange, unaccountable reason.

The noble Baroness, Lady Morgan, the noble Lord, Lord Griffiths, the right reverend Prelate and, notably, my noble friend Lady Parminter have made a brilliant case for their amendment, and it is notable that these amendments are supported by a massive range of organisations. They are all in this area of vulnerable adults: the Mental Health Foundation, Mind, the eating disorder charity Beat, the Royal College of Psychiatrists, the British Psychological Society, Rethink Mental Illness, Mental Health UK, and so on. It is not a coincidence that all these organisations are discussing this “feature”. This is a crucial aspect of the Bill.

Again, I was very much taken by some of the descriptions used by noble Lords during the debate. The right reverend Prelate the Bishop of Oxford said that young people do not suddenly become impervious to content when they reach 18, and he particularly described the pressures as the use of AI only increases. I thought the way the noble Baroness, Lady Harding, described the progression from teenagehood to adulthood was extremely important. There is not some point at which somebody suddenly reaches the age of 18 and attains a full adulthood which enables them to deal with all this content.

Under the Bill as it stands, adult users could still see and be served some of the most dangerous content online. As we have heard, this includes pro-suicide, pro-anorexia and pro-bulimia content. One has only to listen to what my noble friend Lady Parminter had to say to really be affected by the operation, if you like, of social media in those circumstances. This is all about the vulnerable. Of course, we know that anorexia has the highest mortality rate of any mental health problem; the NHS is struggling to provide specialist treatment to those who need it. Meanwhile, suicide and self-harm-related content remains common and is repeatedly implicated in deaths. All Members here who were members of the Joint Committee remember the evidence of Ian Russell about his daughter Molly. I think that affected us all hugely.

We believe that you can now pay your money and take your choice of whichever amendment seems appropriate. Changing the user empowerment provisions to require category 1 providers to have either the safest options as default for users or the terms of my two amendments is surely a straightforward way of protecting the vast majority of internet users who do not want this material served to them.

You could argue that the new offence of encouragement to serious self-harm, which the Government have committed to introducing, might form part of the solution here, but you cannot criminalise all the legal content that treads the line between glorification and outright encouragement. Of course, we know the way the Bill has been changed. No similar power is proposed, for instance, to address eating disorder content.

The noble Baroness, Lady Healy, quoted our own Communications and Digital Committee and its recommendations about a comprehensive toolkit of settings overseen by Ofcom, allowing users to decide what types of content they see and from whom. I am very supportive of Amendment 38A from the noble Lord, Lord Knight, which gives a greater degree of granularity, in a sense, about the kind of user that can communicate with users.

Modesty means that of course I prefer my own amendments and I agree with the noble Baronesses, Lady Fraser, Lady Bull and Lady Harding, and I am very grateful for their support. But we are all heading in the same direction. We are all arguing for a broader “by default” approach. The onus should not be on these vulnerable adults in particular to switch them on, as the noble Baroness, Lady Bull, said. It is all about those vulnerable adults and we must, as my noble friend Lady Burt said, have their best interests at heart, and that is why we have tabled these amendments.

My Lords, this has been one of the most important debates we have had so far in Committee, covering most of the issues in Clause 12—effectively, the replacement of the legal but harmful provisions that were in the draft Bill with the user empowerment tools, introducing the new element of the triple shield, or the three-legged stool as we are now going to describe it thanks to the noble Baroness, Lady Fraser. It is about how we as adults are empowered to protect ourselves from harmful content and, most crucially, the amplification of the harm caused by the systems used on the platforms.

I welcome subsections (4) and (5) of Clause 12, on ease of use and ease of access to the tools. Many platforms already offer these sort of tools. The noble Lord, Lord Clement-Jones, referred to the ParentZone research that has been circulated, which talked about a Facebook tool to prevent autoplay of ads. It took ParentZone’s tech-savvy researcher—not the noble Baroness, Lady Burt—three and a half hours to work out how to turn autoplay off. The research also found that 30% of tools had changed in the last year, so this is an ever-moving target for people to chase after.

The reality is that most of us do not have the time, even if we have the inclination, to deal with all these things. We already have user empowerment tools for unsubscribing from junk emails—and how many of us can be bothered to go through all that all the time? Sometimes I do but sometimes I just have to delete them and move on. We have to manage cookies; sometimes I do and sometimes I do not because I do not have time. That is why we need to look seriously at putting some of these tools on by default, with easily accessible settings to then turn them off if desired.

I therefore support Amendments 34 and 35, tabled by the noble Baroness, Lady Morgan, although I support those from the noble Lord, Lord Clement-Jones, more, which is why I put my name to them before the debate started. What the noble Baroness said about self-harm, suicide and eating disorders is really important. Again, this is less about people never being able to see individual items of content relating to those things and much more about restraining the platforms from bombarding us with similar content, as happened to Molly Russell and others. Here, of course, as many noble Lords have said, we should be mindful of the vulnerability of many young adults and other adults to the same experience that was implicated in Molly’s death.

According to Refuge’s research, which has been circulated, just over one in three UK women have experienced online abuse or harassment on social media, and perpetrators of domestic abuse are increasingly turning to technology as a tool to further their abuse. A briefing sent by the Royal College of Psychiatrists says that, according to NHS England, only 57.5% of 17 to 24 year-olds feel safe using social media in this country. Why not improve their safety as adults by having them opt in to seeing potentially harmful content—this is particularly important to some vulnerable adults with limited capacity to make decisions about internet and social media use—without limiting the freedom of adults to see this content if they want to?

The noble Lord, Lord Clement-Jones, with Amendments 36 and 37, to which I added my name, is essentially going back to some of the debate about safety by design. As the right reverend Prelate set out so powerfully, the platforms are designed to maximise engagement, time spent on their site, data collection and the targeting of advertising. It is about their business model, not our safety. Artificial intelligence has no ethical constraint, and these user empowerment tools allow us to shift the algorithm in our favour, including to make us safer. To toggle them off is to side with the business model regardless of adult safety; to toggle them on is to side with adults having a more pleasant but slightly less engaging experience. Whose side is the Minister on? We look forward to hearing.

Just to clarify, in a way we have reduced this debate to whether the default position should be on or off, although in fact that is only one aspect of this. My concern, and what I maybe spent too long talking about, is what happens if we turn the toggles to “on”. The assumption we keep making is that once they are on, we are safe. The difficulty is that the categories of what is filtered out after turning them on are not necessarily what the user thinks they are. I am simply asking how you get around that; otherwise, we think it is too easy—turn it on or off; press the button. Is it not problematic for us all if, in thinking you are going to stop seeing hate, hate turns out actually to be legitimate and interesting political ideas?

As ever, the noble Baroness is an important voice in bursting our bubble in the Chamber. I continue to respect her for that. It will not be perfect; there is no perfect answer to all this. I am siding with safety and caution rather than a bit of a free-for-all. Sometimes there might be overcaution and aspects of debate where the platforms, the regulator, the media, and discussion and debate in this Chamber would say, “The toggles have got it wrong”, but we just have to make a judgment about which side we are on. That is what I am looking forward to hearing from the Minister.

These amendments are supported on all sides and by a long list of organisations, as listed by the noble Baroness, Lady Morgan, and the noble Lord, Lord Clement-Jones. The Minister has not conceded very much at all so far to this Committee. We have heard compelling speeches, such as those from the noble Baroness, Lady Parminter, that have reinforced my sense that he needs to give in on this when we come to Report.

I will also speak to my Amendment 38A. I pay tribute to John Penrose MP, who was mentioned by the noble Baroness, Lady Harding, and his work in raising concerns about misinformation and in stimulating discussion outside the Chambers among parliamentarians and others. Following discussions with him and others in the other place, I propose that users of social media should have the option to filter out content the provenance of which cannot be authenticated.

As we know, social media platforms are often awash with content that is unverified, misleading or downright false. This can be particularly problematic when it comes to sensitive or controversial topics such as elections, health or public safety. In these instances, it can be difficult for users to know whether the information presented to them is accurate. Many noble Lords will be familiar with the deep-fake photograph of the Pope in a white puffa jacket that recently went viral, or the use of imagery for propaganda purposes following the Russian invasion of Ukraine.

The Content Authenticity Initiative has created an open industry standard for content authenticity and provenance. Right now, tools such as Adobe Photoshop allow users to turn on content credentials to securely attach provenance data to images and any edits then made to those images. That technology has now been adopted by camera manufacturers such as Leica and Nikon, so the technology is there to do some of this to help give us some reassurance.

Amendment 38A would allow users to filter out unverified content and is designed to flag posts or articles that do not come from a reliable source or have not been independently verified by a reputable third party. Users could then choose to ignore or filter out such content, ensuring that they are exposed only to information that has been vetted and verified. This would not only help users to make more informed decisions but help to combat the spread of false information on social media platforms. By giving users the power to filter out unverified content, we can help to ensure that social media platforms are not used to spread harmful disinformation or misinformation.

Amendments 42 and 45, in the name of my noble friend Lord Stevenson, are good and straightforward improvements to the Bill, suggesting that Clause 12(11) should include all the protected characteristics in the Equality Act 2010. I listened closely to the argument of the noble Baroness, Lady Buscombe, about listing the various characteristics, but I think it makes much more sense to do what my noble friend is suggesting and just reference the Equality Act. If we chose separately to amend the Equality Act to change protected characteristics, that change would then flow through into this Bill. That seems to be a much more sensible and pragmatic way to proceed. Similarly, in Amendment 45 my noble friend is suggesting that consideration should be given by the Secretary of State as to whether any of these tools should also be available to child users, and we have heard good arguments in favour of that when it comes to dealing with the cliff-edge effect, as described by the noble Baroness, Lady Kidron.

Lastly—and, perhaps the Committee will forgive me, slightly bizarrely—Amendment 283ZA, in the name of the noble Baroness, Lady Fox, to Clause 170(7) in Part 11 invites us to consider, among other things, whether a bot can act reasonably. Given that self-coding bots are now technically possible, I would be fascinated to hear the Minister’s view on whether an autonomous bot can have reason. I asked ChatGPT this question. For the help of the Committee, it replied:

“As an artificial intelligence language model, I do not have consciousness or emotions, and therefore, I do not have a capacity for ‘reason’ in the way that humans understand it. However, I am programmed to use advanced algorithms that allow me to process and analyze large amounts of data, recognize patterns, and provide reasoned responses based on the information available to me. This allows me to simulate a form of reasoning, but it is important to note that it is not the same as human reasoning, as I do not have subjective experiences or personal biases. Ultimately, my abilities are limited to the algorithms and data that have been programmed into my system, and I cannot generate my own subjective experiences or judgments.”

That is the view of the algorithm as to whether or not bots can have reason. I look forward to the Minister’s response.

My Lords, the Government recognise the objectives of the amendments in this group: to strengthen protections for adults online. I hope noble Lords will agree that the Bill will indeed significantly improve the safety of all adult users, particularly those who are more vulnerable.

The user empowerment content features will not be the only measures in the Bill that will protect adults. They will act as a final layer of protection, coming after the duties on illegal content and the requirement on category 1 providers to uphold their terms of service. However, as the Clause 12 duties apply to legal content, we need to tread carefully and not inadvertently restrict free expression.

Amendments 34 and 35 in the name of my noble friend Lady Morgan of Cotes and Amendments 36 and 37 in the name of the noble Lord, Lord Clement-Jones, seek to require category 1 services to have their user empowerment content features in operation by default for adult users. The Government share concerns about users who experience disproportionate levels of abuse online or those who are more susceptible to suicide, self-harm or eating disorder content, but these amendments encroach on users’ rights in two ways.

First, the amendments intend to make the decision on behalf of users about whether to have these features turned on. That is aimed especially at those who might not otherwise choose to use those features. The Government do not consider it appropriate to take that choice away from adults, who must be allowed to decide for themselves what legal content they see online. That debate was distilled in the exchange just now between the noble Lord, Lord Knight, and the noble Baroness, Lady Fox, when the noble Lord said he would err on the side of caution, even overcaution, while he characterised the other side as a free-for-all. I might say that it was erring on the side of freedom. That is the debate that we are having, and should have, when looking at these parts of the Bill.

Secondly, the amendments would amount to a government requirement to limit adults’ access to legal content. That presents real concerns about freedom of expression, which the Government cannot accept.

Does the Minister therefore think that the Government condone the current system, where we are inundated algorithmically with material that we do not want? Are the Government condoning that behaviour, in the way that he is saying they would condone a safety measure?

We will come to talk about algorithms and their risks later on. There is an important balance to strike here that we have debated, rightly, in this group. I remind noble Lords that there are a range of measures that providers can put in place—

Because of the importance of that point in relation to what the Minister is about to say, we should be clear about this point: is he ruling out the ability to prioritise the needs and requirements of those who are effectively unable to take the decisions themselves in favour of a broader consideration of freedom of expression? It would be helpful for the future of this debate to be clear on that point.

We will come in a moment to the provisions that are in the Bill to make sure that decisions can be taken by adults, including vulnerable adults, easily and clearly. If the noble Lord will allow, I will cover that point.

I was in the middle of reminding noble Lords that there are a range of measures that providers can put in place under these duties, some of which might have an impact on a user’s experience if they were required to be switched on by default. That may include, for example, restricting a user’s news feed to content from connected users, adding to the echo chamber and silos of social media, which I know many noble Lords would join me in decrying. We think it is right that that decision is for individual users to make.

The Bill sets out that the user empowerment content tools must be offered to all adult users and must be easy to access—to go to the point raised just now, as well as by my noble friend Lady Harding and the noble Baroness, Lady Burt, and, as noble Lords were right to remind us, pushed by the noble Baroness, Lady Campbell of Surbiton, with whom I am pleased to say I have been able to have discussions separately from this Committee.

Providers will also be required to have clear and accessible terms of service about what tools are offered on their service and how users might take advantage of them. Ofcom will be able to require category 1 services to report on user empowerment tools in use through transparency reports. Ofcom is also bound by the Communications Act 2003 and the public sector equality duty, so it will need to take into account the ways that people with certain characteristics, including people with disabilities, may be affected when performing its duties, such as writing the codes of practice for the user empowerment duties.

I think the Minister is trying to answer the point raised by my noble friend about vulnerable adults. I am interested in the extent to which he is relying on the Equality Act duty on Ofcom then to impact the behaviour of the platforms that it is regulating in respect of how they are protecting vulnerable adults. My understanding is that the Equality Act duty will apply not to the platforms but only to Ofcom in the way that it regulates them. I am unclear how that is going to provide the protection that we want.

That is right. Platforms are not in the public sector, so the public sector equality duty does not apply to them. However, that duty applies to Ofcom, taking into account the ways in which people with certain characteristics can be affected through the codes of practice and the user empowerment duties that it is enforcing. So it suffuses the thinking there, but the duty is on Ofcom as a public sector body.

We will come later to Clause 12(11), which lists some characteristics that are similar in approach to the protected characteristics in the Equality Act 2010. I will come to that again shortly in response to points made by noble Lords.

I want to say a bit about the idea of there being a cliff edge at the age of 18. This was raised by a number of noble Lords, including the noble Lord, Lord Griffiths, my noble friends Lady Morgan and Lady Harding and the noble Baroness, Lady Kidron. The Bill’s protections recognise that, in law, people become adults when they turn 18—but it is not right to say that there are no protections for young adults. As noble Lords know, the Bill will provide a triple shield of protection, of which the user empowerment duties are the final element.

The Bill already protects young adults from illegal content and content that is prohibited in terms and conditions. As we discussed in the last group, platforms have strong commercial incentives to prohibit content that the majority of their users do not want to see. Our terms of service duties will make sure that they are transparent about and accountable for how they treat this type of content.

In law, there is nothing.

I am engaging with the point that there is no cliff edge. There are protections for people once they turn 18. People’s tastes and risk appetites may change over time, but there are protections in the Bill for people of all ages.

Surely, this is precisely the point that the noble Baroness, Lady Kidron, was making. As soon as you reach 18, there is no graduation at all. There is no accounting for vulnerable adults.

There is not this cliff edge which noble Lords have feared—that there are protections for children and then, at 18, a free-for-all. There are protections for adult users—young adults, older adults, adults of any age—through the means which I have just set out: namely, the triple shield and the illegal content provisions. I may have confused the noble Lord in my attempt to address the point. The protections are there.

There is an element of circularity to what the Minister is saying. This is precisely why we are arguing for the default option. It allows this vulnerability to be taken account of.

Perhaps it would help if the Minister wanted to just set out the difference for us. Clearly, this Committee has spent some time debating the protection for children, which has a higher bar than protection for adults. It is not possible to argue that there will be no difference at the age of 18, however effective the first two elements of the triple shield are. Maybe the Minister needs to think about coming at it from the point of view of a child becoming an adult, and talk us through what the difference will be.

Once somebody becomes an adult in law at the age of 18, they are protected through the triple shield in the Bill. The user empowerment duties are one element of this, along with the illegal content duties and the protection against content prohibited in terms and conditions and the redress through Ofcom.

The legislation delivers protection for adults in a way that preserves their choice. That is important. At the age of 18, you can choose to go into a bookshop and to encounter this content online if you want. It is not right for the Government to make decisions on behalf of adults about the legal content that they see. The Bill does not set a definition of a vulnerable adult because this would risk treating particular adults differently, or unfairly restricting their access to legal content or their ability to express themselves. There is no established basis on which to do that in relation to vulnerability.

Finally, we remain committed to introducing a new criminal offence to capture communications that intentionally encourage or assist serious self-harm, including eating disorders. This will provide another layer of protection on top of the regulatory framework for both adults and children.

I understand all of that—I think—but that is not the regime being applied to children. It is really clear that children have a safer, better experience. The difference between those experiences suddenly happening on an 18th birthday is what we are concerned about.

Before the Minister stands up—a new phrase—can he confirm that it is perfectly valid to have a choice to lift the user empowerment tool, just as it is to impose it? Choice would still be there if our amendments were accepted.

It would be, but we fear the chilling effect of having the choice imposed on people. As the noble Baroness, Lady Fox, rightly put it, one does not know what one has not encountered until one has engaged with the idea. At the age of 18, people are given the choice to decide what they encounter online. They are given the tools to ensure that they do not encounter it if they do not wish to do so. As the noble Lord has heard me say many times, the strongest protections in the Bill are for children. We have been very clear that the Bill has extra protections for people under the age of 18, and it preserves choice and freedom of expression online for adult users—young and old adults.

My noble friend Lady Buscombe asked about the list in Clause 12(11). We will keep it under constant review and may consider updating it should compelling evidence emerge. As the list covers content that is legal and designed for adults, it is right that it should be updated by primary legislation after a period of parliamentary scrutiny.

Amendments 42 and 38A, tabled by the noble Lords, Lord Stevenson of Balmacara and Lord Knight of Weymouth, respectively, seek to change the scope of user empowerment content features. Amendment 38A seeks to expand the user empowerment content features to include the restriction of content the provenance of which cannot be authenticated. Amendment 42 would apply features to content that is abusive on the basis of characteristics protected under the Equality Act 2010.

The user empowerment content list reflects areas where there is the greatest need for users to be offered choice about reducing their exposure to types of content. While I am sympathetic to the intention behind the amendments, I fear they risk unintended consequences for users’ rights online. The Government’s approach recognises the importance of having clear, enforceable and technically feasible duties that do not infringe users’ rights to free expression. These amendments risk undermining this. For instance, Amendment 38A would require the authentication of the provenance of every piece of content present on a service. This could have severe implications for freedom of expression, given its all-encompassing scope. Faced with such a requirement, companies might choose not to host any such content at all.

I will try to help the Minister. If the amendment has been poorly drafted, I apologise. It does not seek to require a platform to check the provenance of every piece of content, but content that is certified as having good provenance would have priority for me to be able to see it. In the Bill, I can see or not see verified users. In the same way, I could choose to see or not see verified content.

Thank you. I may be reading the noble Lord’s Amendment 38A excessively critically. I will look at it again. To try to reassure the noble Lord, the Bill already ensures that all services take steps to remove illegal manufactured or manipulated content when they become aware of it. Harmful and illegal misinformation and disinformation is covered in that way.

Amendment 42 would require providers to try to establish on a large scale what is a genuinely held belief that is more than an opinion. In response, I fear that providers would excessively apply the user empowerment features to manage that burden.

A number of noble Lords referred to the discrepancy between the list—

Several times in the Bill—but this is a clear example—the drafters have chosen to impose a different sequence of words from that which exists in statute. The obvious one here is the Equality Act, which we have touched on before. The noble Baroness, Lady Buscombe, made a number of serious points about that. Why have the Government chosen to list, separately and distinctively, the characteristics which we have also heard, through a different route, the regulator will be required to uphold in respect of the statute, while the companies will be looking to the text of the Bill, when enacted? Is that not just going to cause chaos?

The discrepancy comes from the point we touched on earlier. Ofcom, as a public body, is subject to the public sector equality duty and therefore the list set out in the Equality Act 2010. The list at Clause 12(11) relates to content which is abusive, and is therefore for providers to look at. While the Equality Act has established an understanding of characteristics which should be given special protection in law, it is not necessarily desirable to transpose those across. They too are susceptible to the point made by my noble friend Lady Buscombe about lists set out in statute. If I remember rightly, the Equality Act was part of a wash-up at the end of that Parliament, and whether Parliament debated that Bill as thoroughly as it is debating this one is a moot point.

The noble Lord made that point before, and I was going to pick him up on it. It really is not right to classify our legislation by whether it came through in a short or long period. We are spending an awfully long time on this but that is not going to make it any better. I was involved in the Equality Act, and I have the scars on my back to prove it. It is jolly good legislation and has stood the test of time. I do not think the point is answered properly by simply saying that this is a better way of doing it. The Minister said that Clause 12(11) was about abuse targets, but Clause 12(12) is about “hatred against people” and Clause 12(13) is a series of explanatory points. These provisions are all grist to the lawyers. They are not trying to clarify the way we operate this legislation, in my view, to the best benefit of those affected by it.

The content which we have added to Clause 12 is a targeted approach. It reflects input from a wide range of interested parties, with whom we have discussed this, on the areas of content that users are most concerned about. The other protected characteristics that do not appear are, for instance, somebody’s marriage or civil partnership status or whether they are pregnant. We have focused on the areas where there is the greatest need for users to be offered the choice about reducing their exposure to types of content because of the abuse they may get from it. This recognises the importance of clear, enforceable and technically feasible duties. As I said a moment ago in relation to the point made by my noble friend Lady Buscombe, we will keep it under review but it is right that these provisions be debated at length—greater length than I think the Equality Bill was, but that was long before my time in your Lordships’ House, so I defer to the noble Lord’s experience and I am grateful that we are debating them thoroughly today.

I will move now, if I may, to discuss Amendments 43 and 283ZA, tabled by the noble Baroness, Lady Fox of Buckley. Amendment 43 aims to ensure that the user empowerment content features do not capture legitimate debate and discussion, specifically relating to the characteristics set out in subsections (11) and (12). Similarly, her Amendment 283ZA aims to ensure that category 1 services apply the features to content only when they have reasonable grounds to infer that it is user empowerment content.

With regard to both amendments, I can reassure the noble Baroness that upholding users’ rights to free expression is an integral principle of the Bill and it has been accounted for in drafting these duties. We have taken steps to ensure that legitimate online discussion or criticism will not be affected, and that companies make an appropriate judgment on the nature of the content in question. We have done this by setting high thresholds for inclusion in the content categories and through further clarification in the Bill’s Explanatory Notes, which I know she has consulted as well. However, the definition here deliberately sets a high threshold. By targeting only abuse and incitement to hatred, it will avoid capturing content which is merely challenging or robust discussion on controversial topics. Further clarity on definitions will be provided by Ofcom through regulatory guidance, on which it will be required to consult. That will sit alongside Ofcom’s code of practice, which will set out the steps companies can take to fulfil their duties.

I appreciate the Minister’s comments but, as I have tried to indicate, incitement to hatred and abuse, despite people thinking they know what those words mean, is causing huge difficulty legally and in institutions throughout the land. Ofcom will have its work cut out, but it was entirely for that reason that I tabled this amendment. There needs to be an even higher threshold, and this needs to be carefully thought through.

But as I think the noble Baroness understands from that reference, this is a definition already in statute, and with which Parliament and the courts are already engaged.

The Bill’s overarching freedom of expression duties also apply to Clause 12. Subsections (4) to (7) of Clause 18 stipulate that category 1 service providers are required to assess the impact on free expression from their safety policies, including the user empowerment features. This is in addition to the duties in Clause 18(2), which requires all user-to-user services to have particular regard to the importance of protecting freedom of expression when complying with their duties. The noble Baroness’s Amendment 283ZA would require category 1 providers to make judgments on user empowerment content to a similar standard required for illegal content. That would be disproportionate. Clause 170 already specifies how providers must make judgments about whether content is of a particular kind, and therefore in scope of the user empowerment duties. This includes making their judgment based on “all relevant information”. As such, the Bill already ensures that the user empowerment content features will be applied in a proportionate way that will not undermine free speech or hinder legitimate debate online.

Amendment 45, tabled by the noble Lord, Lord Stevenson of Balmacara, would require the Secretary of State to lay a Statement before Parliament outlining whether any of the user empowerment duties should be applied to children. I recognise the significant interest that noble Lords have in applying the Clause 12 duties to children. The Bill already places comprehensive requirements on Part 3 services which children are likely to access. This includes undertaking regular risk assessments of such services, protecting children from harmful content and activity, and putting in place age-appropriate protections. If there is a risk that children will encounter harm, such as self-harm content or through unknown or unverified users contacting them, service providers will need to put in place age-appropriate safety measures. Applying the user empowerment duties for child users runs counter to the Bill’s child safety objectives and may weaken the protections for children—for instance, by giving children an option to see content which is harmful to them or to engage with unknown, unverified users. While we recognise the concerns in this area, for the reasons I have set out, the Government do not agree with the need for this amendment.

I will resist the challenge of the noble Lord, Lord Knight, to talk about bots because I look forward to returning to that in discussing the amendments on future-proofing. With that, I invite noble Lords—

I noted the points made about the way information is pushed and, in particular, the speech of the right reverend Prelate. Nothing in the Government’s response has really dealt with that concern. Can the Minister say a few words about not the content but the way in which users are enveloped? On the idea that companies always act because they have a commercial imperative not to expose users to harmful material, actually, they have a commercial imperative to spread material and engage users. It is well recorded that a lot of that is in fact harmful material. Can the Minister speak a little more about the features rather than the content?

We will discuss this when it comes to the definition of content in the Bill, which covers features. I was struck by the speech by the right reverend Prelate about the difference between what people encounter online, and the analogy used by the noble Baroness, Lady Fox, about a bookshop. Social media is of a different scale and has different features which make that analogy not a clean or easy one. We will debate in other groups the accumulated threat of features such as algorithms, if the noble Baroness, Lady Kidron, will allow me to go into greater detail then, but I certainly take the points made by both the right reverend Prelate and the noble Baroness, Lady Fox, in their contributions.

My Lords, I thank my noble friend very much indeed, and thank all noble Lords who have taken part. As the noble Lord, Lord Knight, said, this has been an important debate—they are all important, of course—but I think this has really got to the heart of parts of the Bill, parts of why it has been proposed in the first place, and some choices the Government made in their drafting and the changes they have made to the Bill. The right reverend Prelate reminded us, as Bishops always do, of the bigger picture, and he was quite right to do so. There is no equality of arms, as he put it, between most of us as internet users and these enormous companies that are changing, and have changed, our society. My noble friend was right—and I was going to pick up on it too—that the bookshop example given by the noble Baroness, Lady Fox, is, I am afraid, totally misguided. I love bookshops; the point is that I can choose to walk into one or not. If I do not walk into a bookshop, I do not see the books promoting some of the content we have discussed today. If they spill out on to the street where I trip over them, I cannot ignore them. This would be even harder if I were a vulnerable person, as we are going to discuss.

Noble Lords said that this is not a debate about content or freedom of expression, but that it is about features; I think that is right. However, it is a debate about choice, as the noble Lord, Lord Clement-Jones, said. I am grateful to each of those noble Lords who supported my amendments; we have had a good debate on both sets of amendments, which are similar. But as the noble Lord, Lord Griffiths, said, some of the content we are discussing, particularly in subsection (10), relating to suicide, pro-self-harm and pro-anorexia content, has literal life or death repercussions. To those noble Lords, and those outside this House, who seem to think we should not worry and should allow a total free-for-all, I say that we are doing so, in that the Government, in choosing not to adopt such amendments, are making an active choice. I am afraid the Government are condoning the serving up of insidious, deliberately harmful and deliberately dangerous content to our society, to younger people and vulnerable adults. The Minister and the Government would be better off if they said, “That is the choice that we have made”. I find it a really troubling choice because, as many noble Lords will know, I was involved in this Bill a number of years ago—there has been a certain turnover of Culture Secretaries in the last couple of years, and I was one of them. I find the Government’s choice troubling, but it has been made. As the noble Lord, Lord Knight, said, we are treating children differently from how we are treating adults. As drafted, there is a cliff edge at the age of 18. As a society, we should say that there are vulnerabilities among adults, as we do in many walks of life; and exactly as the noble Baroness, Lady Parminter, so powerfully said, there are times when we as a House, as a Parliament, as a society and as a state, should say we want to protect people. There is an offer here in both sets of amendments—I am not precious about which ones we choose—to have that protection.

I will of course withdraw the amendment today, because that is the convention of the House, but I ask my noble friend to reflect on the strength of feeling expressed by the House on this today; I think the Whip on the Bench will report as well. I am certain we will return to this on Report, probably with a unified set of amendments. In the debate on algorithms, to which we will return, the Government will have to explain, in words of one syllable, to those outside this House who worry about the vulnerable they work with or look after, the choice that the Government have made in not offering protections when they could have done, in relation to these enormously powerful platforms and the insidious content they serve up repeatedly.

Amendment 34 withdrawn.

Amendments 35 to 37 not moved.

I advise the Committee that if Amendment 38 is agreed to, I shall not be able to call Amendment 38A by reason of pre-emption.

Amendment 38

Moved by

38: Clause 12, page 12, line 24, leave out subsection (6)

Member’s explanatory statement

This amendment, along with the other amendment to Clause 12 in the name of Lord Moylan, removes requirements on sites to display, on demand, only the parts of a conversation (or in the case of collaboratively-edited content, only the parts of a paragraph, sentence or article) that were written by “verified” users, and to prevent other users from amending (e.g. improving), or otherwise interacting with, such contributions.

My Lords, I am going to endeavour to be relatively brief. I rise to move Amendment 38 and to speak to Amendments 39, 139 and 140 in this group, which are in my name. All are supported by my noble friend Lord Vaizey of Didcot, to whom I am grateful.

Amendments 38 and 39 relate to Clause 12. They remove subsections (6) and (7) from the Bill; that is, the duty to filter out non-verified users. Noble Lords will understand that this is different from the debate we have just had, which was about content. This is about users and verification of the users, rather than the harm or otherwise of the content. I am sure I did not need to say that, but perhaps it helps to clarify my own thinking to do so. Amendments 139 and 140 are essentially consequential but make it clear that my amendments do not prohibit category 1 services from offering this facility. They make it a choice, not a duty.

I want to make one point only in relation to these amendments. It has been well said elsewhere that this is a Twitter-shaped Bill, but it is trying to apply itself to a much broader part of the internet than Twitter, or things like it. In particular, community-led services like Wikipedia, to which I have made reference before, operate on a totally different basis. The Bill seeks to create a facility whereby members of the public like you and me can, first, say that we want the provider to offer a facility for verifying those who might use their service and, secondly, say that we want to see material only from those verified accounts. However, the contributors to Wikipedia are not verified, because Wikipedia has no system to verify them, and therefore it would be impossible for Wikipedia, as a category 1 service, to comply with this condition on its current model, which is a non-commercial, non-profit one, as noble Lords know from previous comments. It would not be able to operate this clause; it would have to require every contributing editor to Wikipedia to be verified, which would be extremely onerous, or it would have to make verification optional, which would be difficult and would lead to the bizarre conclusion that you could open an article on Wikipedia and find that some of its words or sentences were blocked, and you could not read them because those amendments to the article had been made by someone who had not been verified. Of course, putting a system in place to allow that absurd outcome would itself be an impossible burden on Wikipedia.

My complaint—as always, in a sense—about the Bill is that it misfires. Every time you touch it, it misfires in some way because it has not been properly thought through. It is perhaps trying to do too much across too broad a front, when it is clear that the concern of the Committee is much narrower than trying to bowdlerise Wikipedia articles. That is not the objective of anybody here, but it is what the Bill is tending to do.

I will conclude by saying—I invite my noble friend to comment on this if he wishes; I think he will have to comment on it at some stage—that in reply to an earlier Committee debate, I heard him say somewhat tentatively that he did not think that Wikipedia would qualify as a category 1 service. I am not an advocate for Wikipedia; I am just a user. But we need to know what the Government’s view is on the question of Wikipedia and services like it. Wikipedia is the only community-led service, I think, of such a scale that it would potentially qualify as category 1 because of its size and reach.

If the Minister’s view is that Wikipedia would not qualify as a category 1 service—in which case, my amendments are irrelevant because it would not be caught by this clause—then he needs to say so. More than that, he needs to say on what basis it would not qualify as a category 1 service. Would it be on the face of the Bill? If not, would it be in the directions given by the Secretary of State to the regulator? Would it be a question of the regulator deciding whether it was a category 1 service? Obviously, if you are trying to run an operation such as Wikipedia with a future, you need to know which of those things it is. Do you have legal security against being determined as a category 1 provider or is it merely at the whim—that is not the right word; the decision—of the regulator in circumstances that may legitimately change? The regulator may have a good or bad reason for changing that determination later. You cannot run a business not knowing these things.

I put it to noble Lords that this clause needs very careful thinking through. If it is to apply to community-led services such as Wikipedia, it is an absurdity. If it is not to apply to them because what I think I heard my noble friend say pertains and they are not, in his view, a category 1 service, why are they not a category 1 service? What security do they have in knowing either way? I beg to move.

My Lords, I will speak to Amendment 106 in my name and the names of my noble and learned friend Lord Garnier and the noble Lord, Lord Moore of Etchingham. This is one of five amendments focused on the need to address the issue of activist-motivated online bullying and harassment and thereby better safeguard the mental health and general well-being of potential victims.

Schedule 4, which defines Ofcom’s objectives in setting out codes of practice for regulated user-to-user services, should be extended to require the regulator to consider the protection of individuals from communications offences committed by anonymous users. The Government clearly recognise that there is a threat of abuse from anonymous accounts and have taken steps in the Bill to address that, but we are concerned that their approach is insufficient and may be counterproductive.

I will explain. The Government’s approach is to require large social media platforms to make provision for users to have their identity verified, and to have the option of turning off the ability to see content shared by accounts whose owners have not done this. However, all this would mean is that people could not see abuse being levelled at them. It would not stop the abuse happening. Crucially, it would not stop other people seeing it, or the damage to his or her reputation or business that the victim may suffer as a result. If I am a victim of online bullying and harassment, I do not want to see it, but I do not want it to be happening at all. The only means I have of stopping it is to report it to the platform and then hope that it takes the right action. Worse still, if I have turned off the ability to see content posted by unverified—that is, anonymous—accounts, I will not be able to complain to the platform as I will not have seen it. It is only when my business goes bust or I am shunned in the street that I realise that something is wrong.

The approach of the Bill seems to be that, for the innocent victim—who may, for example, breed livestock for consumption—it is up to that breeder to be proactive to correct harm already done by someone who does not approve of eating meat. This is making a nonsense of the law. This is not how we make laws in this country—until now, it seems. Practically speaking, the worst that is likely to happen to the abuser is that the platform might ban their account. However, if their victims have had no opportunity to read the abuse or report it, even that fairly low-impact sanction could not be levelled against them. In short, the Bill’s current approach, I am sorry to say, would increase the sense of impunity, not lessen it.

One could argue that, if a potential abuser believes that their victim will not read their abuse, they will not bother issuing it. Unfortunately, this misunderstands the psyche of the online troll. Many of them are content to howl into the void, satisfied that other people who have not turned on the option to filter out content from unverified accounts will still be able to read it. The troll’s objective of harming the victim may be partially fulfilled as a result.

There is also the question of how much uptake there will be of the option to verify one’s identity, and numerous questions about the factors that this will depend on. Will it be attractive? Will there be a cost? How quick and efficient will the process be? Will platforms have the capacity to implement it at scale? Will it have to be done separately for every platform?

If uptake of verification is low, most people simply will not use the option to filter out content from unverified accounts, even if it means that they remain more susceptible to abuse, since they would be cutting themselves off from most other users. Clearly, that is not an option for anyone using social media for any promotional purpose. Even those who use it for purely social reasons will find that they have friends who do not want to be verified. Fundamentally, people use social media because other people use it. Carving oneself off from most of them defeats the purpose of the exercise.

It is not clear what specific measures the Bill could take to address the issue. Conceivably, it could simply ban online platforms from maintaining user accounts whose owners have not had their identities verified. However, this would be truly draconian and most likely lead to major platforms exiting the UK market, as the noble Baroness, Lady Fox, has rightly argued in respect of other possible measures. It would also be unenforceable, since users could simply turn on a VPN, pretend to be from some other country where the rules do not apply and register an account as though they were in that country.

There are numerous underlying issues that the Bill recognises as problems but does not attempt to prescribe solutions for. Its general approach is to delegate responsibility to Ofcom to frame its codes of practice for operators to follow in order to effectively tackle these problems. Specifically, it sets out a list of objectives that Ofcom, in drawing up its codes of practice, will be expected to meet. The protection of users from abuse, specifically by unverified or anonymous users, would seem to be an ideal candidate for inclusion in this list of objectives. If required to do so, Ofcom could study the issue closely and develop more effective solutions over time.

I was pleased to see, in last week’s Telegraph, an article that gave an all too common example: a chef running a pub in Cornwall has suffered what amounts to vicious abuse online from a vegan who obviously does not approve of the menu, and who is damaging the business’s reputation and putting the chef’s livelihood at risk. This is just one tiny example, if I can put it that way, of the many thousands that are happening all the time. Some 584 readers left comments, and just about everyone wrote in support of the need to do something to support that chef and tackle this vicious abuse.

I return to a point I made in a previous debate: livelihoods, which we are deeply concerned about, are at stake here. I am talking not about big business but about individuals and small and family businesses that are suffering—beyond abuse—loss of livelihood, financial harm and/or reputational damage, and the knock-on effects of that.

House resumed. Committee to begin again not before 7.41 pm.