
In the News

Will artificial intelligence bring a new phase of rapid productivity growth?

Jim Riley

31st January 2019

Nobel Laureate Bob Solow pronounced thirty years ago that “you can see the computer age everywhere but in the productivity statistics”.

At the start of the 1980s, the world entered the digital age. Fax machines transformed communications. The introduction of personal computers made high-powered computing available to all.

But it took time to work out how to make best use of these major changes in technology. In the 1980s, output per worker in the US grew by only 1.4 per cent a year. Between 1995 and 2005, growth accelerated to 2.1 per cent a year.

We are on the cusp of another acceleration in productivity growth, driven this time by artificial intelligence (AI).

Even the mention of AI strikes fear into many hearts. Surely this will cause massive job losses? That is one way to boost productivity, but hardly desirable.

In fact, to date most of the applications of AI in companies have not replaced workers. Rather, they have supplemented what employees do, enabling them to be more productive.

Two recent pieces in the Harvard Business Review provide firm evidence for this. Satya Ramaswamy found that the most common use of AI and data analytics was in back-office functions, particularly IT and finance and accounting, where the processes were already at least partly automated.

Thomas H. Davenport and Rajeev Ronanki came to the same conclusion in a detailed survey of 152 AI projects. AI was used, for example, to extract information from emails in order to update customer contact details or record changes to orders, and to read contracts.

Developments within the techniques of AI itself suggest that practical applications of the concept are about to spread much more widely.

There was a surge of research interest in AI in the 1980s and 1990s. It did not lead to very much. Essentially, in this phase of development, people tried to get machines to think like humans. If you wanted a translation, for example, your algorithm had to try to learn spelling, the correct use of grammar and so on. But this proved too hard.

The real breakthrough came during the 2000s, when researchers realised that algorithms were much better than humans at one particular task: matching patterns.

To develop a good translator, you give the machine some documents in English, say, and the same ones translated into French. The algorithm learns how to match the patterns. It does not know any grammar. It does not even know it is “reading” English and French. So at one level, it is stupid, not intelligent. But it is exceptionally good at matching up the patterns.
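To make that concrete, here is a minimal sketch of learning from paired, labelled examples. It uses Python and scikit-learn, and swaps full translation for a much simpler stand-in task, telling English sentences from French ones; the library, the sentences and the labels are illustrative assumptions rather than anything from the research described above.

```python
# Illustrative sketch only: learning from labelled pairs as pure pattern
# matching, using scikit-learn (an assumed choice of library).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A toy training set: sentences paired with language labels.
sentences = [
    "the cat sat on the mat",
    "where is the railway station",
    "le chat est assis sur le tapis",
    "ou est la gare",
]
labels = ["en", "en", "fr", "fr"]

# Represent each sentence as counts of short character sequences (pure
# pattern data), then fit a simple probabilistic classifier to the pairs.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),
    MultinomialNB(),
)
model.fit(sentences, labels)

# The trained model matches patterns in sentences it has never seen.
print(model.predict(["the dog sat on the chair", "le chien est sur la chaise"]))
```

The classifier has no notion of grammar and no idea that it is looking at English or French; it simply learns which patterns go with which label, which is exactly the sense in which such systems are stupid at one level yet exceptionally good at matching patterns.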

In the jargon, this is “supervised machine learning”. A brand-new study in the MIT Technology Review shows that purely scientific advances in this field are slowing down markedly.

In other words, in the space of a single decade, this has become a mature analytical technology, one that can be used with confidence in practical applications, in the knowledge that it is unlikely to be made obsolete by new developments.

Productivity looks set to boom in the 2020s.

Jim Riley

Jim co-founded tutor2u alongside his twin brother Geoff! Jim is a well-known Business writer and presenter as well as being one of the UK's leading educational technology entrepreneurs.
