Social and political impact of Artificial Intelligence (AI) and new technologies.
Setting the scene
I’d like to start by taking you back to last year’s showdown between DeepMind’s AlphaGo program and Lee Sedol, 18-time world champion of the board game Go. That match ended in a historic 4-1 victory for the computer, a feat which many artificial intelligence experts had thought was 10 years away.
During the second game, AlphaGo played a move – move 37 – which left everyone watching, from the audience to the Go grandmasters, scratching their heads. No-one could understand the thinking behind it. Yet as the game unfolded it became clear that the computer had devised a winning strategy that had never been seen before in the game’s 2,500-year history.
This year, DeepMind published a compendium of 50 games that AlphaGo has played against itself. One World Champion described the games as “like nothing I’ve ever seen before – they’re how I imagine games from far in the future.” These games have provided insights that are now reshaping the way that Go is played.
What I think is incredible about this story is not just its historic importance in the unfolding story of machine intelligence, but the fact that move 37 can plausibly be described as an act of creative thought. In that move, our own human abilities were exceeded. But it didn’t destroy the confidence of those who enjoy playing Go – rather it turbo-charged global interest in the game, opening up new worlds of gameplay that were previously hidden from us.
For these reasons, move 37 encapsulates not only the fear that AI will come to supplant qualities that we previously thought were immune from automation, but also the hope that it will ultimately enable us to solve some of humanity’s trickiest problems.
The promise of AI
Let’s take a step back. In an important sense, AI is nothing new. We have been augmenting human intelligence with technology since the invention of writing. Clay tablets, parchment scrolls and paper books revolutionised our ability to record and recall information, and to disseminate new ideas.
Similarly, the hype around AI is nothing new. Since Alan Turing wrote the first academic paper on AI in 1950, the promise of thinking machines has captivated us. In fact we have been through repeated cycles of excitement and breathless predictions of an imminent breakthrough, followed by disillusionment – what some have termed ‘AI winters’.
We are currently living through an ‘AI spring’, and this particular one does appear to be different, because two things have changed. First, we are witnessing an explosion in the quantity of data in existence. US internet users alone now use 2.7 million gigabytes of data every single minute. And with everything from industrial machinery to cars to central heating generating data, those numbers will increase exponentially.
This flood of data has become so intrinsic to our economy that it has been reasonably described as “the new oil”. And nowhere more so than in the business of AI. Data is the raw material that machine learning systems need in order to refine their algorithms. The more of it we produce, the smarter those systems can become.
Secondly, the data revolution has been accompanied by a similar ramping up of computing power. The latest iPhone, for example, can process up to 600 billion operations per second, making it many millions of times more powerful than the computer which guided the Apollo 11 spacecraft to the moon. These mind-bogglingly fast machines are capable of sifting through reams of data – including unstructured, real-time data – spotting patterns that are far beyond the capabilities of the human mind.
Both of these phenomena are, of course, the natural consequence of the exponential growth of digital technology described by Gordon Moore, the co-founder of Intel, in 1965. And after 5 decades during which computing power has doubled roughly every 18 months, we are now entering an extraordinary period where each round of innovation ushers in hugely accelerated advances.
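A quick back-of-the-envelope sketch shows what that compounding implies – assuming the popular 18-month doubling rule of thumb and treating the five decades as exactly 50 years:

```python
# Rough illustration of compounding under Moore's law.
# Assumes the popular "doubling every 18 months" rule of thumb,
# not Moore's original formulation.

years = 50
doubling_period_years = 1.5

doublings = years / doubling_period_years  # roughly 33 doublings
growth_factor = 2 ** doublings             # roughly a ten-billion-fold increase

print(f"{doublings:.1f} doublings -> ~{growth_factor:.2e}x increase")
```

Around thirty-three doublings in five decades: a ten-billion-fold increase in computing power, which is why each new round of innovation now arrives on top of an already enormous base.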
It is this growth which has made things like driverless cars a reality, where only ten years ago they were science fiction. And while it is early days for the use of AI in ad tech, your businesses too are likely to benefit in the near future from algorithms that can get deeper, real-time insights into what individual consumers want, allowing you to tailor creative content on the fly and create highly personalised advertising.
Politics, tech and AI
Why, you might ask, am I getting involved in this debate? I don’t claim to be the most technologically savvy of politicians, but I have seen first hand what technophobia can do in the hands of policy makers.
In government, the Home Office under Theresa May persistently lobbied for powers to require companies to gather data on law-abiding British citizens. When I challenged their assumptions and asked them to justify the intrusion into people’s private lives, their eyes glazed over. When I objected to the idea of banning end-to-end encryption, I was accused in the right-wing tabloid press of being on the side of the terrorists. When I wanted to set up a review into digital surveillance in the wake of the Edward Snowden leaks, my Conservative coalition colleagues strained every sinew to block it.
There is a culture in Whitehall, among some politicians and officials alike, that views concern about free speech and privacy on the internet as a self-indulgent eccentricity or worse.
Now I’m not saying that I or my team in Government had all the right answers, but that experience convinced me that those who see technology as a threat to be controlled, limited or neutered are generally wrong. With a bit of patience and creativity we can design smarter policies that aim to get the best from technology without sacrificing important social values.
So I think there is an urgent need for politicians to begin to grapple with the unfolding revolution in AI, biotech, robotics and other cutting edge technologies, before we find ourselves in the same position again.
Part of the problem is the air of unreality surrounding the way the big tech companies themselves talk about AI. I worry that for most people, if they have heard the term AI at all, it’s unlikely to have positive connotations. They’re more likely to have heard the warnings about ‘robots taking your job’, or the campaign to get the UN to ban ‘killer robots’.
Is the future really so bleak?
Universal Basic Income
Consider the fashionable support in Silicon Valley for a Universal Basic Income, or UBI – the idea that the state could provide a cash income to every citizen, regardless of whether they are in work. Elon Musk, for example, thinks this is going to be “necessary”, as “there will be fewer and fewer jobs that a robot cannot do better”.
This sounds grim, but Musk, like others, thinks there’s a bright side to the disruption. Automation will make everything incredibly cheap, so the cost of supporting the population will be manageable. People will be freed from the shackles of work to “do other things and more complex things, more interesting things”.
This vision is strikingly similar to the one laid out by Karl Marx: under communism, he predicted, “society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic.”
History has taught us to be sceptical of claims that our economic system will collapse under the weight of its own contradictions and usher in a new utopia. True, capitalism, left unchecked, has a tendency to minimise the value of labour and maximise profit. But the elevation of this observation into an unstoppable force of history that will, inexorably, lead to the replacement of all human labour with machines, resulting in turn in the collapse of wage-earning consumer demand, thus requiring a collectivised wage from the state instead – this strikes me as an article of faith that can and should be punctured.
The impact of automation on jobs
While not everyone agrees with the UBI prescription, there are plenty of voices who agree with the diagnosis. Predictions of job losses are widespread. A report from Oxford University in 2013 suggested that 47% of jobs could be rendered obsolete within 20 years. The Bank of England has warned that up to 15 million British jobs could be at risk, notably those occupations involving administrative, clerical and production tasks.
But a good many economists say that what’s happening on the ground doesn’t bear much relation to the predictions. Automation is already a major feature in the workplace, yet we are experiencing unusually high levels of employment in many developed economies, especially those with the highest levels of digitalisation. There are more people in work in the UK than ever before. If automation were leading to a net replacement of humans with machines, we would see output per worker rise, and it hasn’t. In fact, productivity has stalled since the crisis of 2008, after averaging around 2% per year for the previous decade.
Smartphones are everywhere, and while they’re certainly keeping us entertained, there is no sign they’re making us more productive.
Anxiety about technology-driven unemployment is, of course, a recurring theme throughout history. Aristotle speculated about machines putting people out of work. Elizabeth I refused a patent on a knitting machine because she feared it would deprive her “poor subjects” of employment, “thus making them beggars”. The Luddites smashed the machines that were putting professional weavers out of work. In 1964, a committee of public intellectuals presented a report to President Lyndon Johnson confidently predicting that the ‘cybernation revolution’, brought about by “the combination of the computer and the automated self-regulating machine”, was about to make millions of Americans jobless.
What is clear is that these predictions of permanent, systemic joblessness have always turned out to be wrong. Far from impoverishing workers, previous rounds of technological innovation have forced down prices, stimulated demand, and pushed up wages. New types of jobs have been created that were previously unthinkable. Just imagine what people would have said 50 years ago if they read a list of the job titles held by people in this room.
The consequences of disruption are often counter-intuitive. Consider the example of automated cash machines. ATMs were introduced in the US in the 1970s, and by 2010 there were around 400,000 of them. You would expect that human bank tellers would have become all but extinct over that period. Yet in actual fact their numbers increased slightly, from around 500,000 in 1980 to 550,000 in 2010.
How is that possible? For two reasons. First, because the ATM reduced the costs of running a bank branch, it made it cheaper to open new branches. So overall, the number of tellers went up as new branches were established. Secondly, because the introduction of the new technology changed the nature of the job itself. Instead of counting out banknotes, tellers were freed up to do other valuable tasks in the bank that required a human touch. They became the face of ‘relationship banking’, spending more time face to face with customers.
There is an important truth here: certain human qualities are very hard to replace with machines – good judgement, common sense, the flexibility to deal with the unexpected, personal contact. As the MIT economist David Autor puts it, we tend to overlook the fact that tasks which cannot be substituted by automation are generally complemented by it.
Of course, none of this means that technology couldn’t in theory result in mass unemployment in the future. But if we take history as our guide we should be very sceptical of the doom-sayers. In 1900, 41 percent of the US workforce was employed in agriculture; by 2000, that share had fallen to just 2 percent. If technology were a net destroyer of jobs, there would be a lot of people today sitting in the fields with nothing to do.
Different this time?
The question which we should ask is “is it going to be different this time?”.
There is some reason to think it might be. AI is a general purpose technology that could impact almost every area of our lives, and very quickly. Where the steam engine took more than 100 years for its disruptive effects to be felt across all applicable industries, AI is happening fast. There will be much less time to adapt.
And unlike previous inventions, AI isn’t primarily about replacing manual labour – its real impact will be felt in jobs in data-rich service industries which we thought would never be done by computers because they require cognitive skills. Yet computers are already carrying out complex tasks from medical diagnosis to the analysis of legal documents, to music composition.
There are also some troubling, wider trends within the labour market.
Inequality has been rising across the developed world since the 1970s. Wealth is concentrated in fewer hands. The share of GDP allocated to labour has fallen while the proportion going to capital has risen. Technological change has favoured those who have skills that are relevant to new, ICT-enabled technologies, while disadvantaging those whose skills are relatively easy to automate. As a result, the market is increasingly polarised between highly skilled posts on the one hand, and jobs requiring few formal skills on the other, with very little in between.
So a more realistic, but nonetheless worrying, scenario is that AI exacerbates these trends rather than eliminating employment altogether. Tech continues to benefit capital at the expense of labour. Work becomes more polarised between the skilled and de-skilled. Wage growth continues to flatline. We all experience a more turbulent politics which feeds off inequality and a sense of grievance. Sure, new jobs are created, but more slowly than we would like. Things become cheaper, but consumer confidence also takes a hit. In other words, we may be in for a new era which avoids mass unemployment but is characterised by continued insecurity, inequality and volatility.
The public policy response
In politics, one of the simplest divides is between optimists and pessimists, between those who embrace change and those who shun it. Liberalism is an inherently optimistic creed. Liberals have tended to embrace change as an inevitable feature of modern society, concentrating not on preserving the status quo but ensuring that no one group or set of vested interests dominates, and that change benefits society at large.
It won’t have escaped anyone in this room that the backlash – or ‘techlash’ – against the big US internet companies is in full swing. While smaller companies still enjoy some immunity, the big five are in danger of being seen in the same light as bankers and politicians.
Some of that anger – for example on tax – is both predictable and legitimate. The big firms have much more to do to prove that they are good global citizens. In other areas, they’re being unfairly caricatured, often by a print media that has an ulterior motive to discredit social media because of its success in attracting online advertising revenues which otherwise might have been spent on newspapers.
This stand-off risks leaving all of us worse off. As a society, wide-eyed enthusiasm for the novelties of tech is giving way to an angry, curdled cynicism. That is bad news for an industry which can only thrive when it can inspire people about the positive promise of change, and bad for society if we shun that potential.
So this is where I stand: I am unambiguously optimistic about the potential for technology to enhance, not destroy, the values and principles we hold dear. I reject the doomsayers and cynics who appeal to atavistic fears of change. But I also believe we need to think harder about exactly how we use technology, and how we regulate it, for wider social good.
Because we are not helpless – we are forgetting that we have a say in this story. Humanity is not destined for a life of servitude under robot overlords. We should resist technological determinism – the view that technology imposes inescapable choices upon us. Technology must be deployed in the service of society, not the other way round. We can’t and shouldn’t stop innovation, but we can decide how we want innovation to promote our collective wellbeing.
We forget all too easily that in politics, every action produces a reaction. Technological change creates its own political chain reaction. As the economist Jonathan Portes observed in a recent paper, “[the] broader economic forces pushing against the interests of workers will not, on their own, determine the course of history. The Luddites were doomed to fail; but their successors – trade unionists who sought to improve working conditions and Chartists who demanded the vote so that they could restructure the economy and the state – mostly succeeded. The test will be whether our political and social institutions are up to the challenge.”
So, for example, while it may be true that technology will naturally tend to concentrate wealth in the hands of a few global companies, there is no reason why we should not tax that wealth more fairly and redistribute it for the wider public good. There is nothing revolutionary about that statement – it’s what we do all the time; it’s the very basis of a redistributive tax system.
What’s really needed – and what’s missing – is the capacity for leaders in both the tech and political world to work together. Politicians all too often believe that they can grab a few easy headlines by condemning the big tech companies, and too many folk in Silicon Valley believe all Governments are the enemy. This combination – of ignorance and fear amongst some politicians, and arrogance and naivety amongst some parts of the tech community – means the two talk past each other.
To their credit, there are some industry leaders like Mark Zuckerberg and Bill Gates who have started to grapple with the wider, complex debates around tech and society. The newly-minted ‘Partnership on AI’ – an alliance of the big firms designed to advance understanding of AI and its use for public good – is hugely welcome. But, a bit like the enlightened industrialists of the 19th century who pioneered factory safety and workers’ rights, those at the top of the tree don’t always find it easy to practise what they preach, and they cannot effect wider change on their own.
Because leadership also has to come from governments and civil society. The debate, for example, about a Universal Basic Income should be raging in the corridors of Westminster and Whitehall. (Not that it’s possible, of course, to have a rational debate about anything these days as so much energy is consumed by the great vortex of Brexit.)
If I were still an MP, I would be saying loudly and clearly that I believe work remains a source of dignity, status and self-esteem for men and women across society. In many ways, how and where we work is at the core of our social existence. A Utopia in which humans are ‘freed’ from work is only possible if you believe that work is inherently bad. I do not. Of course working conditions – from wages to the hours worked – can and should be improved for millions of our fellow citizens. But to leap from that to aspiring to abolish work altogether is not a leap I would ever lightly advocate.
The reality is that UBI is a cypher for other debates. People are decanting their fears about inequality, wealth, power, and insecurity into a discussion about how we manage in the future, when they are really entitled to know the answers to those questions today. These are all challenges that we should be dealing with now because they already affect us. Instead of dreaming of a future of worklessness, we should be redoubling our efforts to improve the world of work today.
The truth is we don’t know a lot about the future. But we do know that the next 20 years will bring an acceleration of skills-biased technological change. So let’s plan on that basis, and start to think through what social policies we need.
It is not an original observation, but a sustained and major investment in education and training is self-evidently essential. I don’t yet see enough of a commitment to do this from government. As we start to see the productivity dividend from AI, we should capture some of that additional value through the tax system, and use the money to cushion the transition for people whose jobs are at the highest risk of disruption. Instead of a Universal Basic Income, we should explore how we can fund expanded universal basic services – including lifelong learning and training – from the additional revenues derived from technological innovation.
This approach to enhancing the employability of people throughout their working life is something which is already practised in countries like Denmark. The Danish ‘flexicurity’ system protects workers rather than jobs, allowing employers to hire and fire easily but guaranteeing generous unemployment benefits for those who find themselves out of work, with free retraining programmes. We should think of emulating that in this country.
AI raises a lot of issues, from ethical questions around diversity and algorithmic bias to legal questions about how we should regulate self-learning technologies and who has liability when things go wrong.
Today, I have only touched on my own reactions to the ongoing debate about the impact of technology on the world of work.
But there are 3 further areas where I intend to speak out and where the small think tank I have established, Open Reason, plans to conduct further research.
First, global governance. The question of whether any technology will be used for good or ill depends in many ways on the regulatory and legal context in which it is allowed to operate. In a developed democracy like our own, I have some confidence that those in charge of the development of AI, both in government and in the private sector, can be held to account, and are ultimately answerable to parliament and the courts. The Government-commissioned review by Dame Wendy Hall, the announcement at the Budget of a Centre for Data Ethics and Innovation, the publication of an industrial strategy covering AI earlier this week, with the promise of a Government Office for AI and an ‘AI Council’ – these are all steps in the right direction, although it’s not yet clear where the money is coming from for the latter.
In authoritarian capitalist systems like China, however, I do not have the same confidence that the development of AI will be governed by the same high standards of accountability and transparency.
We know that much of the research into AI is going on in secret in closed, autocratic regimes, and we know why. As Vladimir Putin recently said, “whoever becomes the leader in this sphere will become the ruler of the world.” Russia’s skill at manipulating news to subvert the democratic process is already well-known, but you can bet that Putin has his eye on the potential for AI to revolutionise everything from cyber warfare to the suppression of his own political opponents.
What this means is we are effectively in an arms race. Our scruples about designing fairness into AI systems must look quaint to some of these countries. For this reason, we should be working with other democracies through the EU, UN and OECD to develop global agreements on transparency, standards and ethics.
Secondly, the question of who controls our data needs to be resolved, because it is at the heart of the relationship of trust which will need to exist between the individual and the entities responsible for the machines which analyse that data.
As a liberal, I believe that people are sovereign, and that the individual should be at the heart of the internet, not treated as an afterthought. As a civil libertarian, I am uncomfortable with my data being collected without my consent, and even less comfortable at the thought of intelligent machines poring over it and drawing conclusions about my private life on behalf of the government or big business.
The EU’s controversial General Data Protection Regulation attempts to answer some of these questions, of course. I know that the provisions in the GDPR about active consent will have far-reaching effects on many of your businesses, as will the forthcoming ePrivacy regulation. I think it is a matter of real regret that the new regulation has been drawn up in a way that doesn’t enjoy the confidence of many of the businesses that will be affected by it. And an even bigger shame that future opportunities to amend and improve the law will be lost to the UK as we will have been relegated to mute rule-takers, deprived of our seat at the EU negotiating table.
However, for better or worse, it is going to be the new reality from May next year, and I know that this industry has the ingenuity to find ways to adapt.
When it comes to AI, it seems inevitable that the rules will need to be updated. It is one thing to manage the risk of bias or error in the settings and algorithms deployed by machines to process personal data; and another to establish the circumstances in which we are comfortable allowing intelligent machines to make choices that affect our lives based on a process of reasoning which is inevitably going to be opaque to human observers. There are huge issues of fairness, transparency and accountability here which point to the need for constantly evolving regulation.
One recent practical idea which might provide some security against the misuse of personal data by intelligent algorithms is an approach operated by DeepMind in the hospitals where it handles patient information. ‘Verifiable Data Audit’ uses blockchain technology to create a permanent ledger of every interaction with every line of data, including the purpose for which it has been interrogated. As DeepMind’s co-founder Mustafa Suleyman describes it, this is “to allow our partner hospitals to provably check in real-time that we’re only using patient data for approved purposes”. If something like this became the standard for AI, it could shift the balance of power decisively in favour of the individual while building the trust needed to convince people that the technology is being used responsibly.
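For the technically minded, the core mechanism can be sketched very simply. The following is a minimal, hypothetical illustration of an append-only, hash-chained access log – the pattern underlying a verifiable data audit – and not DeepMind’s actual implementation; all class and field names are invented for the example:

```python
import hashlib
import json

class AuditLedger:
    """Append-only log where each entry commits to the previous one,
    so any retrospective tampering breaks the hash chain."""

    def __init__(self):
        self.entries = []

    def record(self, data_id, purpose):
        # Each entry includes the hash of the previous entry.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"data_id": data_id, "purpose": purpose, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self):
        # Recompute every hash; any edit to an earlier entry is detected.
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: entry[k] for k in ("data_id", "purpose", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev"] != prev_hash or recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

ledger = AuditLedger()
ledger.record("patient-123", "approved: clinical alert")
ledger.record("patient-456", "approved: model evaluation")
print(ledger.verify())
```

The point of the design is that an auditor holding only the latest hash can detect any after-the-fact alteration of the record of who accessed what, and why.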
Finally, I believe it is time for a new deal between government and the tech industry. If the ideological divide continues to widen, there are severe risks to both sides. We need to find ways for the two worlds to meet and develop a shared set of values and interests.
The most important risk is that by continuing to squabble, we lose the opportunity to work together to solve some of humanity’s biggest problems: the ecological crisis, the erosion of democracy and rise of authoritarian populism, the undermining of ‘truth’, and the persistence of inequality. AI could be a potent tool for addressing all of these issues.
What might this look like? First, the industry should stop scaring people with catastrophic and speculative visions of the future which are based on conjecture rather than hard fact, and politicians should stop scoring cheap points by denouncing the tech industry without grappling with the underlying issues.
Instead, we need a positive vision of a future enabled by technology, including AI, and a partnership between politicians and the tech industry to direct our new capabilities for public good. No-one to date has really managed to capture the public imagination with a vision of how AI could improve society. That needs to change.
The recent suggestion from the Tony Blair Institute that there should be a Secretary of State for Digital and Technology around the Cabinet table is the kind of institutional reform that is urgently needed. As a minimum, we need to establish a high-powered standing forum bringing both sides together with the authority to take decisions about the use of technology for the public good.
In conclusion, we are not helpless. The future will be determined by us human beings, not bloodless algorithms. The unfolding technological revolution – with AI at its core – presents many challenges to society, but many opportunities too.
We need to act fast to develop new forms of governance, accountability and regulation – both at national and international level – to exploit the best of what this technology has to offer, and shun the worst.
There are deep-seated problems in society – of insecurity, a lack of trust, and inequality – which were not invented by technology but could, in some circumstances, be exacerbated by it. It is in our power to ensure that this does not happen. As technological innovation moves at a breakneck pace, we must ensure that the benefits of increased wealth and productivity are properly redistributed across society and that as many people as possible can benefit from the new skills needed to operate successfully in a modern economy.
But this can only be achieved if the mutual suspicion, recrimination and ignorance which too often characterises the relationship between the world of politics and the world of technology is overcome. That is the urgent task before us.