Deepfakes: General Election

Volume 838: debated on Wednesday 8 May 2024


Asked by

To ask His Majesty’s Government what steps they are taking to ensure political deepfakes on social media are not used to undermine the outcome of the general election.

My Lords, we are working to ensure we are ready to respond to the full range of threats to our democratic processes, including through the Defending Democracy Taskforce. It is already an election offence to make false statements of fact about the personal character or conduct of a candidate before or during an election. Additionally, under the Online Safety Act, where illegal political deepfakes are shared on social media, they must be removed.

My Lords, Google’s Kent Walker has talked of the “very serious” threat posed by AI-generated deepfakes and disinformation. The Prime Minister, the Leader of the Opposition and the Mayor of London have all been the subject of deepfakes, so it is not surprising that the Home Secretary has identified a critical window for collective action to preserve the integrity of the forthcoming election. Obviously, monitoring online content is important, but that will not prevent malign individuals or hostile foreign states trying to interfere in the forthcoming elections at home and abroad. Will the Minister finally take up our proposals to use the Data Protection Bill to fill the deepfake gap left by the Online Safety Act so that we can all have confidence in the outcome of the general election?

I start by saying that I very much share the view of the importance of protecting the forthcoming general election—and indeed every election—from online deepfakes, whether generated by AI or any other means. I think it is worth reminding the House that a range of existing criminal offences, such as the foreign interference offence, the false communications offence and offences under the Representation of the People Act, already address the use of deepfakes to malignly influence elections. While these offences will go some way towards deterrence, it is also important to remind the House of the crucial non-legislative measures that we can take, are taking and will continue to take until the completion of the election.

My Lords, would my noble friend not agree that there is an issue regarding the distortion of what politicians say, both through video and through the written word? Would he give me some indication of what the position is regarding Hansard and the coverage of what is said in this House and in the other place? Are we sufficiently protected if that written record is distorted or abused by others in the media?

Indeed—and let me first thank my noble friend for bringing up this important matter. That sounds to me like something that would be likely to fall under the false communications offence in the Online Safety Act—Section 179—although I would not be able to say for sure. The tests that it would need to meet are that the information would have to be knowingly false and cause non-trivial physical or psychological harm to those affected, but that would seem to be the relevant offence.

My Lords, does not the Question from the noble Baroness, Lady Jones, highlight that we must hold to account with legal liability not only those who create this kind of deepfake content and facilitate its spread, but those who enable the production of deepfakes with software, such as by having standards and risk-based regulation for generative AI systems, which the Government in their White Paper have resolutely refused to do?

The Government set out in their White Paper response that off-the-shelf AI software that can in part be used to create these kinds of deepfakes is not, in and of itself, something on which we are considering placing any ban. However, there is a range of software, a sort of middle layer in AI production, that can greatly facilitate the production of deepfakes of all kinds—not just political but other kinds of criminal deepfakes—and there the Government are actively considering moving against such purpose-built criminal tools.

My Lords, given the use of deepfakes and malign disinformation facilitated by data theft, has the noble Viscount taken note of what the Biden Administration decided to do last week? The President signed into law a measure enabling a ban on TikTok and the Chinese company that owns it, because of America’s experience in the mid-term elections in 2022 and the elections in Taiwan earlier this year. Does the Minister not worry that, unless we take similar powers in the United Kingdom, the same thing will happen here?

Well, some of the enforcement measures under the Online Safety Act do allow for very significant moves against social media platforms that misuse their scale and presence to malign ends in this way, but of course the noble Lord is absolutely right and we will continue to look closely at the moves by the Biden Administration to see what we can learn from them for our approach.

My Lords, I pay tribute to Andy Street for the way he responded to the circumstances in what was an incredibly close race. He must have been hugely disappointed. Sadly, another candidate in that race has since made false accusations of racism against a Labour volunteer, posting the volunteer’s name, picture and social media account, with the result that the volunteer subsequently received death threats in both calls and emails. Will the Minister join all noble Lords in condemning this kind of behaviour and confirm that, in his view, attacking party volunteers falls fully within the range of threats to the democratic process?

First, let me absolutely endorse the noble Lord’s sentiment: this is a deplorable way to behave that should not be tolerated. From hearing the noble Lord speak of the actions, my assumption is that they would fall foul of the false communications offence under Section 179 of the Online Safety Act. As I say, these actions are absolutely unacceptable.

My Lords, noble Lords will be aware of the threat of AI-generated deepfake election messages flooding the internet during an election campaign. At the moment, only registered users have to put a digital imprint giving the provenance of the content on unpaid election material. Does the Minister think that a requirement to put a digital imprint on all unpaid election material should be introduced to counter fake election messages?

The noble Viscount is right to point to the digital imprint regime as one of the tools at our disposal for limiting the use of deepfakes. I think we would hesitate to have a blanket law that all materials of any kind would be required to have a digital imprint on them—but, needless to say, we will take away the idea and consider it further.

My Lords, if, at the very height of the forthcoming general election, deepfakes were to emerge, what would be the role of Ofcom, in particular regarding the taking down of material that is manifestly false? Does Ofcom have the resources necessary to do this?

In the regrettable scenario mentioned by the noble Lord, such actions would generally fall to the Joint Election Security and Preparedness Unit and the election cell that will have been set up for the duration of the election to conduct rapid operational rebuttals and other responses to such content. We would not necessarily look to Ofcom until after the event because of the speed at which things would have to move.

My Lords, it is not just technology that can undermine the outcome of general elections; the Government are facilitating it, too. Jacob Rees-Mogg, former Business Secretary, famously said that voter ID rules were an attempt to “gerrymander” the electoral system. Does the Minister have any empirical evidence to show that the introduction of the voter ID system has reduced alleged fraud or encouraged more people to vote?

It is a very interesting question, but I am afraid I have no information on that as it is not DSIT’s area at all. I will be very happy to find out and write to the noble Lord if that would help.