
Ethics and Artificial Intelligence

Volume 634: debated on Wednesday 17 January 2018

I beg to move,

That this House has considered ethics and artificial intelligence.

It is a pleasure to serve under your chairmanship, Dame Cheryl. I welcome the Minister to her new role, following the reshuffle last week. She leaves what was also a wonderful role in Government—I can say that from personal experience—but I am sure that she will find the challenges of this portfolio interesting and engaging. No doubt she is already getting stuck in.

I would like to start with the story of Tay. Tay was an artificial intelligence Twitter chatbot developed by Microsoft in 2016. She was designed to mimic the language of young Twitter users and to engage and entertain millennials through casual and playful conversation.

“The more you chat with Tay the smarter she gets”,

the company boasted. In reality, Tay was soon corrupted by the Twitter community. Tay began to unleash a torrent of sexist profanity. One user asked,

“Do you support genocide?”,

to which Tay gaily replied, “I do indeed.” Another asked,

“is Ricky Gervais an atheist?”

The reply was,

“ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism”.

Those are some of the tamer tweets. Less than 24 hours after her launch, Microsoft closed her account. Reading about it at the time, I found the story of Tay an amusing reminder of the hubris of tech companies. It also reveals something darker: it vividly demonstrates the potential for abuse and misuse of artificial intelligence technologies and the serious moral dilemmas that they present.

I say at the outset that I believe artificial intelligence can be a force for good, if harnessed correctly. It has the potential to change lives, to empower and to drive innovation. In healthcare, the use of AI is already revolutionising the way health professionals diagnose and treat disease. In transport, the rise of autonomous vehicles could drastically reduce the number of road deaths and provide incredible new opportunities for millions of disabled people. In our everyday lives, new AI technologies are streamlining menial tasks, giving us more time in the day for meaningful work, for leisure or for our family and friends. We are on the cusp of something quite extraordinary and we should not aim deliberately to suppress the growth of new AI, but there are pressing moral questions to be answered before we jump head first into AI excitement. It is vital that we address those urgent ethical challenges presented by new technology.

I will focus on four important ethical requirements that should guide our policy making in this area: transparency, accountability, privacy and fairness. I stress that the story of Tay is not an anomaly; it is one example of a growing number of deeply disturbing instances that offer a window into the many and varied ethical challenges posed by advances in AI. How should we react when we hear that an algorithm used by a Florida county court to predict the likelihood of criminals reoffending, and therefore to influence sentencing decisions, was almost twice as likely to wrongly flag black defendants as future criminals?

I congratulate the hon. Lady on this debate; it is a fascinating area and I am grateful to be able to speak. On her last point, I understand that in parts of the United States where that technology is used, there are instances where the judges go one step further and rely on those decisions as reasons to do things. The decision is made on incorrect information in the first instance, and then judges say that because a machine has made that decision, it must be even better than manual intervention.

The hon. Gentleman is quite right to raise that concern, because that goes to the heart of the issue, particularly when risk data is presented as incontrovertible fact and is relied on for the decision. It is absolutely essential that those decisions can be interrogated and understood, and that any bias is identified. That is why ethics must be at the heart of this whole issue, even before systems are developed in the first place.

In addition to the likely reoffending data, there is a female sex robot designed with a “frigid” setting, which is programmed to resist sexual advances. We have heard about a beauty contest judged by robots that did not like the contestants with darker skin. A report by PwC suggests that up to three in 10 jobs in this country could be automated by the early 2030s. We have read about children watching a video on YouTube of Peppa Pig being tortured at the dentist, which had been suggested by the website’s autoplay algorithm. In every one of those cases, we have a right to be concerned. AI systems are making decisions that we find shocking and unethical. Many of us will feel a lack of trust and a loss of control.

On machine learning, a report last year by the Royal Society highlighted a range of concerns among members of the public. Some were worried about the potential for direct harm, from accidents in autonomous vehicles to the misdiagnosis of disease in healthcare. Others were more concerned about potential job losses or the perceived loss of humanity that could result from wider use of machine learning. The importance of public engagement and dialogue was acknowledged by the Minister’s Department in its 2016 report. I would welcome an update from her on the kind of public engagement work she thinks is important with regard to AI.

I will turn to the related considerations of transparency and accountability. When we talk about transparency in the context of AI, what we really mean is that we want to understand how AI systems think and to understand their decision-making processes. We want to avoid situations of “black-boxing”, where we cannot understand, access or explain the decisions that technology makes. In practice, that transparency means several things: it might involve creating logging mechanisms that give us a step-by-step account of the processes involved in the decision making; or it could mean providing greater visibility of data access. I would be interested to hear the Minister’s thoughts on the relative merits of those practices. Either way, transparency is particularly important for those instances when we want to challenge decisions made by AI systems. Transparency informs accountability. If we can see how decisions are made, it is easier for us to understand what has happened and who is responsible when things go wrong.
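The logging mechanism described above can be illustrated with a minimal sketch. This is a hypothetical example, not any real system: the rules, thresholds and field names are invented purely to show how recording each step of an automated decision makes the outcome interrogable afterwards.

```python
# Hypothetical decision-logging sketch: every rule an automated system
# applies is recorded, so a challenged decision can be traced back to
# the exact step and inputs that produced it.

decision_log = []

def log_step(rule, inputs, outcome):
    """Record one step of the decision for later audit."""
    decision_log.append({"rule": rule, "inputs": inputs, "outcome": outcome})

def assess_loan(applicant):
    # Each rule logs what it examined and what it concluded.
    if applicant["income"] < 20000:
        log_step("minimum income", {"income": applicant["income"]}, "reject")
        return "reject"
    log_step("minimum income", {"income": applicant["income"]}, "pass")
    if applicant["defaults"] > 2:
        log_step("default history", {"defaults": applicant["defaults"]}, "reject")
        return "reject"
    log_step("default history", {"defaults": applicant["defaults"]}, "pass")
    return "approve"

result = assess_loan({"income": 25000, "defaults": 3})
print(result)                    # reject
print(decision_log[-1]["rule"])  # default history
```

With such a log, the applicant can see that the rejection turned on default history rather than income, which is precisely the kind of step-by-step account that makes a decision challengeable.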

Increasingly, major companies such as Deutsche Bank and Citigroup are turning to machine learning algorithms to streamline and refine their recruitment processes. Let us suppose that we suspect that an algorithm is biased towards candidates of a particular race and gender. If the decision-making process of the algorithm is opaque, it is hard to even work out whether employment law is being broken—an issue I know will be close to the Minister’s heart. Transparency is crucial when it comes to the accountability of new AI. We must ensure that when things go wrong, people can be held accountable, rather than shrugging and responding that the computer says “don’t know”.

I will try not to intervene too much, but the point about transparency in the process and the decision making relates to the data that is used as an input. It is often the case in these instances that machine learning is simply about correlations and patterns in a wide scheme of data. If that data is not right in the first instance, subjective and inaccurate decisions are created.

I entirely concur; one of the long-standing rules of computer programming is “garbage in, garbage out”. That holds true here. Again, that is why transparency about what goes in is so important. I hope that the Minister will tell us what regulations are being considered to ensure that AI systems are designed in a way that is transparent, so that somebody can be held accountable, and how AI bias can be counteracted.

Increased transparency is crucial, but it is also vital that we put safeguards in place to make sure that that does not come at the cost of people’s privacy or security. Many AI systems have access to large datasets, which may contain confidential personal information or even information that is a matter of national security. Take, for example, an algorithm that is used to analyse medical records: we would not want that data to be accessible arbitrarily by third parties. The Government must be mindful of privacy considerations when tackling transparency, and they must look at ways of strengthening capacity for informed consent when it comes to the use of people’s personal details in AI systems.

We must ensure that AI systems are fair and free from bias. Returning to recruitment, algorithms are trained using historical data to develop a template of characteristics to target. The problem is that historical data itself often reveals pre-existing biases. Just a quarter of FTSE 350 directors are women, and fewer than one in 10 are from an ethnic minority; the majority of leaders are white men. It is therefore easy to see how companies’ use of hiring algorithms trained on past data about the characteristics of their leaders might reinforce existing gender and race imbalances.

The software company Sage has developed a code of practice for ethical AI. Its first principle stresses the need for AI to reflect the diversity of the users it serves. Importantly, that means ensuring that teams responsible for building AI are diverse. We all know that the computer science industry is heavily male dominated, so the people who develop AI systems are mainly men. It is not hard to see how that might have an impact on the fairness of new technology. Members may remember that Apple launched a health app that enabled people to do everything from tracking their inhaler use to tracking how much molybdenum they were getting from their soy beans, but did not allow someone to track their menstrual cycle.

We also need to be clear about who stands to benefit from new AI technology and to think about distributional effects. We want to avoid a situation where power and wealth lie exclusively in the hands of those with access to and understanding of these new technologies.

I congratulate the hon. Lady on securing the debate. It is reassuring that Liberal Democrat and Conservative Members are present to debate this important issue, albeit slightly disappointing that ours are the only parties represented. Will she join me in welcoming the centre for data ethics and innovation, which was announced in the Budget at the end of last year? Does she agree that it is important that whatever measures we take are UK-wide, so that statistics, ethics and the way we use data are standardised—to a very high standard—across the United Kingdom?

The hon. Gentleman, who is a fellow representative from Scotland, pre-empts the next section of my speech.

We need to develop good standards across the whole United Kingdom, but this issue in many ways transcends national boundaries. We must develop international consensus about how to deal with it, and I hope the UK takes a leading role in that. Parliament has started to look at the issue in recent years: the Select Committee on Science and Technology has produced a couple of reports about it, and the new House of Lords Select Committee on Artificial Intelligence is already doing great work and collecting interesting evidence. The Government have perhaps been slow to engage properly with ethical questions, but I have strong hopes that that will change now that the Minister is in post.

I very much welcome the announcement in the Budget of a new centre for data ethics and innovation. That is a good start, albeit long overdue. I found that announcement while reading the Red Book during the Budget debate—it was on page 45—and I even welcomed it in my speech. I am not sure anyone else had noticed it. I would welcome a clear update from the Minister on the expected timeline for that centre to be up and running. Where does she expect it to be based? What about the recruitment of its chair and key members of staff? How does she see it playing a role in advising policy making and engaging with relevant stakeholders?

I am concerned that the major Government-commissioned report, “Growing the artificial intelligence industry in the UK”, which was published in October, entirely omitted ethical questions. It specifically said:

“Resolving ethical and societal questions is beyond the scope and the expertise of this industry-focused review, and could not in any case be resolved in our short time-frame.”

I say very strongly that ethical questions should not be an afterthought. They should not be an add-on or a “nice to have”. Ethical discourse should be properly embedded in policy thinking. It should be a fundamental part of growing the AI industry, and it must therefore be a key job of the centre for data ethics and innovation. The Government have an important role to play, but I hope that the centre will work closely with industry too, because the way that industry tackles this issue is vital.

Regulation is important, and there are probably some gaps in it that we need to fill and get right, but this issue cannot be solved by regulation alone. I am interested in the Minister’s thoughts about that. Every doctor who enters the medical profession must swear the Hippocratic oath. Perhaps a similar code or oath of professional ethics could be developed for people working in AI—let me float the idea that it could be called the Lovelace oath in memory of the mother of modern computing—to ensure that they recognise their responsibility to embed ethics in every decision they take. That needs to become part and parcel of the way industry works.

Before I conclude, let me touch briefly on an issue that is outside the Minister’s brief but is nevertheless important. I am deeply concerned about the potential for lethal autonomous weapons—weapons that can seek and attack targets without human intervention—to cause absolute devastation. The ability for an algorithm to decide who to kill, and the morality of that, should worry us all. I very much hope that the Minister will work closely with her colleagues in the Ministry of Defence. The UK needs to lead discussions with other countries to get international consensus on the production and regulation of such weapons—ideally a consensus that they should be stopped—and to ensure that ethics are considered throughout.

We want the UK to continue to be a world leader in artificial intelligence, but it is vital that we also lead the discussion and set international standards about its ethics, in conjunction with other countries. Technology does not respect international borders; this is a global issue. We should not underestimate the astonishing potential of AI—leading academics are already calling this the fourth industrial revolution—but we must not shirk from addressing the difficult questions. What we are doing is a step in the right direction, but it is not enough. We need to go further, faster. After all, technology is advancing at a speed we have not seen before. We cannot afford to sit back and watch. Ethics must be embedded in the way AI develops, and the United Kingdom should lead the way.

It is a pleasure to serve under your chairmanship, Dame Cheryl. I congratulate the hon. Member for East Dunbartonshire (Jo Swinson) on securing this important debate and on her fascinating and well-argued speech. As she kindly pointed out, I am new to the position of Minister for digital and creative industries. She will know from her ministerial experience that there is a great deal to absorb in any new brief, and I thank her for this opportunity to get involved and absorbed in the ethical considerations of artificial intelligence so early in my new role.

We understand the disruptive potential of transformative technologies, and we stand ready for the adoption of AI, which is going on around us and is important to the future of our industrial strategy. In their review of AI and the industrial strategy, Dame Wendy Hall and Jérôme Pesenti identified a range of opportunities for the UK to build and grow its AI capacity. The forthcoming AI sector deal will take forward their key recommendations about skills and data, and a wider AI grand challenge will keep the UK at the forefront of AI technology and the wider data revolution. Those ambitions will be underpinned by a new Government office for AI. We are building the capacity to address the issues that accompany these technological advancements: issues of trust, ethics and governance; effective take-up by business and consumers; and the transition of skills and labour requirements.

Regarding trust, AI already delivers a wide range of benefits, from healthcare to logistics, biodiversity and business, but we are fully aware that AI brings new challenges, as the hon. Lady mentioned, in privacy, accountability and transparency as well as the important issue of bias, on which she shared a number of concerning examples with the House.

The uses of data in AI and machine learning are developing in valuable but potentially unsettling ways, because of the pace of adoption, as the hon. Lady outlined. We have different concerns and tolerances about trust and fairness depending on the application of AI, varying, for instance, between retail, finance and medicine. We will need to consider specific answers to those challenges in the different sectors if we are to foster the necessary level of trust. Confidence and trust are essential to driving adoption and innovation.

We must ensure that these new technologies work for the benefit of everyone: citizens, businesses and wider society. We are therefore integrating strong privacy protections and accountability into how automated decisions affect users. A strong, effective regulatory regime is therefore vital. In the UK we already benefit from the Information Commissioner’s Office, a well-respected independent body tasked with protecting personal data. Important decisions on everything from autonomous cars to medical diagnosis and decisions on finance and sentencing—and indeed applications to defence—cannot be delegated solely to algorithms. Human judgment and oversight remain essential.

I completely accept the principle that strong regulation is required for data, and it is important that organisations such as the ICO lead that—even if I have some concerns about some of what has come out on the general data protection regulation in recent months. Is it not the responsibility of all of us here, the ICO, Ministers and wider civic society to start discussing privacy more over the long term? We have probably got to have a cultural discussion about privacy, because we have ownership of data, but to accrue the benefits that come from some automation and artificial intelligence we must also be willing to give over some elements of that data for the wider good.

My hon. Friend touches on some important considerations. There has been a debate in healthcare on how much should be private and how much should be anonymised and shared for the general good, as he outlines. I agree that that discussion needs to involve citizens, business, policy makers and technology specialists.

We will introduce a digital charter, which will underpin the policies and actions needed to drive innovation and growth while making the UK the safest and fairest place to be online. A key pillar of the charter will be the centre for data ethics and innovation, which will look ahead to advise Government and regulators on the best means of stewarding ethical, safe and innovative uses of AI and all data, not just personal data. It will be for the chair of the centre to decide how they should engage with their stakeholders and build a wider discussion, as my hon. Friend suggested is necessary. We expect that they will want to engage with academia, industry, civil society and indeed the wider public to build the future frameworks in which AI technology can thrive and innovate safely.

We may find the solutions to many AI challenges in particular sectors by making sure that, with the right tools, application of the existing rules can keep up, rather than requiring completely new rules just for AI. We all need to identify and understand the ethical and governance challenges posed by the use of such new data sources and decision-making processes, now and in the future. We must then determine how best to identify appropriate rules, establish new norms and evolve policy and regulations.

When it comes to AI take-up and adoption, we need senior decision makers in business and the public sector first to understand and then discuss the opportunities and implications of AI. We want to see high-skill, well-paid jobs created, but we also want the benefits of AI, as a group of new general-purpose technologies, to be felt across the whole economy and by citizens in their private lives. The Government are therefore working closely with industry towards that end. As I said earlier, we will establish a new AI council to act as a leadership body and, in partnership with Government, champion adoption across the whole economy. Further support will come from Tech Nation as it establishes a national network of hubs to support such growth.

A highly skilled and diverse workforce is critical to growing AI in the UK. We therefore support the tech talent charter initiative to gain commitment to greater workforce diversity. The hon. Lady explained well in her speech why diversity in the tech workforce is important to the ethical considerations we are debating. As we expand our base of world-class AI experts by investing in 200 new AI PhDs and AI fellowships through the Alan Turing Institute, we will still need to attract the best and brightest people from around the world, so we have doubled the number of exceptional talent visas to 2,000. I will take the point about the need for diversity when it comes to reviewing such applications. All of that will ensure that UK businesses have a workforce ready to shape the coming opportunities.

With regard to transition, we will see strong adaptation in our labour markets, where our aim should be lifelong learning opportunities to help people adapt to the changing pace of technology, which will bring new jobs and productivity gains. We must hope that those will increase employment. We know that some jobs may be displaced, and often for good reasons: dangerous, repetitive or tedious parts of work can now be carried out more quickly, accurately and safely by machines. None the less, human judgment and creativity will still be required to design and manage them.

On employment, may I impress on the Minister that in that disruption, the Government should be there to help some of those workers pushed out of employment to retrain and find a new place and role in the economy, keeping up with the pace of technology as it develops?

I heartily agree with my hon. Friend. He will be pleased to know that the Department for Business, Energy and Industrial Strategy—my former Department—is working closely with Matthew Taylor to consult on all of his recommendations. The Secretary of State has taken personal responsibility for improving the quality of work. Work should be good and rewarding.

A study from last year suggests that digital technologies including AI can create a net total of 80,000 new jobs annually for a country such as the UK. We want people to be able to capitalise on those opportunities, as my hon. Friend suggested. We already have a resilient and diverse labour market, which has adapted well to automation, creating more, higher-paying jobs at low risk of automation. However, as the workplace continues to change, people must be equipped to adapt to it easily. Many roles, rather than being directly replaced, will evolve to incorporate new technologies.

The Minister has mentioned the centre for data ethics. Can she update us on when it is likely to be up and running, what the timetable is for recruiting the chair and so on? It would be helpful to know when we can expect that.

We want to proceed at pace, because it is an important part of our programme of dealing with the ethics of this issue. We plan to consult on the plans for a permanent centre in the next few months, and I will welcome the hon. Lady’s input.

Undeniably, substantial changes lie ahead. A national retraining scheme will therefore help people reskill and take advantage of the changes and opportunities in the workplace. We also have plans to upskill 8,000 computer science teachers and work with industry to set up a new national centre for computing education, with a brief to encourage more girls to take advantage of the new technologies in their learning.

Substantial changes lie ahead and, as we push these new technologies, we will also strive to keep people and businesses sufficiently skilled, adaptable and assured. The measures are in place, and I have taken heart from the hon. Lady’s speech about the importance of these ethical considerations. I assure her that they will be uppermost in our minds as we develop policy.

Question put and agreed to.

Sitting suspended.