Article by Professor George Feiger
Executive Dean, Aston Business School

Jan 18, 2018

Before joining Aston Business School in June 2013, Professor Feiger was Chief Executive of a $3.4 billion wealth management company in the United States. His private-sector roles have included Head of Strategic Planning at Bank of America’s world banking division, Senior Partner at McKinsey and Co., Global Head of Investment Banking for SBC Warburg, and Global Head of Onshore Private Banking at UBS.

His academic credentials include appointments as Associate Professor of Finance at Stanford University’s Graduate School of Business and Lecturer in Economics at Harvard. He has served on the Advisory Board of the Berkeley Centre for Law, Business and Economics. As a student, he gained an undergraduate degree from Monash in Australia, a PhD from Harvard and a Fulbright Fellowship.

It is commonplace to observe that “robots” – the machines, the software, the algorithms, Artificial Intelligence in its broadest sense – are displacing workers and taking up tasks at an accelerating rate. Robots assemble cars, algorithms steer harvesters by GPS, and computers beat humans at chess and Go and diagnose diseases. There is no comparable consensus on what this implies for the future of work and society.

A useful caricature is to contrast the “Luddite” view, that mass unemployment will be created, with the “optimistic” view that all past technological revolutions have increased employment and welfare and that this one will too. Expressed this way, the Luddites seem obvious losers. Too few people mention that the industrialisation of the textile industry in the UK in the 18th Century destroyed the textile industries of countries like India and indeed caused mass unemployment. Or that the industrialisation of farming has moved hundreds of millions of people off the land into cities where, up to now, they have found work as cheap industrial labourers but at the expense of the more costly industrial labourers in the advanced economies.

However we may rehash history, we need to deal with what appears different this time. The difference derives from Artificial Intelligence. Past innovation amplified the power of human labour; this time, for a growing number of activities, the human element, the brain itself, is also supplied by the machine. This will inevitably concentrate human work, and the income it generates, in progressively fewer hands. Some examples:

Robots now assemble cars and iPhones. More cars are produced each year, yet employment in the auto industry shrinks. New, high-paid jobs have of course been created, in software writing, robot maintenance and robot design. But these are far fewer, and far more highly paid, than the assembly-line jobs of the past.

A doctor can’t remember all the symptoms of all the diseases, or the many sequential tests needed to winkle out a correct diagnosis. A computer can. For many purposes, a nurse practitioner paired with a computer will be more effective (correct more often), more accessible and cheaper than a GP. Of course many illnesses are related to behaviour, and personal counselling will remain essential, but we will need fewer GPs and more nurses (and the AI software will improve its diagnostic abilities all by itself). In surgery, robots and software will make surgeons more productive and reduce errors, which means we will require fewer surgeons and fewer remedial processes and facilities.

That is, at both the low-paid and the high-paid ends of the spectrum, AI is improving, and will continue to improve, quality while reducing cost (that is, human work). It also increases the return to human capital while lowering it for other forms of capital. Robots and software become cheaper all the time; the best designers and innovators become more expensive. Money moves from savers to the creative class. You get the point. What does this all augur for the future?

Up to now, our society has distributed income and wealth on the principle that you get what you earn (one might call it the “eat what you kill” principle). University graduates get paid more than people without a degree because they are more productive, people with a job generally live better than those on unemployment benefits and so on. One might look on this as a moral principle but it reflects an economic truth – that if people don’t work there will be nothing to live on. The development of AI permits the separation of output from human labour, for the first time. In a physical sense, a smaller and smaller proportion of the population can produce most of what everyone might need. With our current distribution system, this wonderful outcome in fact creates problems for more and more people, the ones who aren’t in the creative class.

Again, you get the point. What might be done? The quickest step is one we took quite a while ago, when industrial productivity had risen markedly: spread the income around by creating “artificial demand” for labour. What I mean is that we went from a six-day working week to a five-day working week by creating the two-day weekend. People take this for granted, but it only came about widely in the 1950s. A move to a three-day weekend would have a remarkable effect in “income redistribution” without taxation. France tried this with the compulsory 35-hour working week, but with old technology this just decreases the amount of output and thus the output per person. We would have to phase this in more cleverly, but we did it once, so it can be done again.
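The arithmetic behind such work-sharing can be sketched with a short calculation. The figures below are hypothetical illustrations, not numbers from the article: they simply show that if the total pool of paid labour hours stays fixed, shortening the working week spreads those hours across more workers.

```python
# Illustrative work-sharing arithmetic (all figures hypothetical).
# Assumption: total weekly labour demand is fixed, so fewer hours
# per worker means the same work is shared among more workers.

def workers_needed(total_weekly_hours: float, hours_per_worker: float) -> float:
    """Workers required to supply a fixed pool of weekly labour hours."""
    return total_weekly_hours / hours_per_worker

TOTAL_HOURS = 1_000_000  # hypothetical weekly labour demand

five_day = workers_needed(TOTAL_HOURS, 40)  # 5-day week, 8-hour days
four_day = workers_needed(TOTAL_HOURS, 32)  # 4-day week, same daily hours

print(f"5-day week: {five_day:,.0f} workers")   # 25,000
print(f"4-day week: {four_day:,.0f} workers")   # 31,250
print(f"Extra jobs: {four_day - five_day:,.0f} (+{four_day / five_day - 1:.0%})")
```

On these assumed numbers, cutting the week from 40 to 32 hours raises employment by 25 per cent with no change in total output; the French experience cited above shows the caveat, namely that with unchanged technology, total output may fall instead.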

Longer term, of course, such moves will not be enough and we will need to contemplate very different economic arrangements.