To understand how advances in artificial intelligence are likely to change the workplace — and the work of managers — you need to know where AI delivers the most value.
To appreciate how useful this framing can be, let’s review the rise of computer technology through the same lens. Moore’s law, the long-held view that the number of transistors on an integrated circuit doubles approximately every two years, dominated information technology until just a few years ago. What did the semiconductor revolution reduce the cost of? In a word: arithmetic.
This answer may seem surprising since computers have become so widespread. We use them to communicate, play games and music, design buildings, and even produce art. But deep down, computers are souped-up calculators. That they appear to do more is testament to the power of arithmetic. The link between computers and arithmetic was clear in the early days, when computers were primarily used for censuses and various military applications. Before semiconductors, “computers” were humans who were employed to do arithmetic problems. Digital computers made arithmetic inexpensive, which eventually resulted in thousands of new applications for everything from data storage to word processing to photography.
AI presents a similar opportunity: to make something that has been comparatively expensive abundant and cheap. The task that AI makes abundant and inexpensive is prediction — in other words, the ability to take information you have and generate information you didn’t previously have. In this article, we will demonstrate how improvement in AI is linked to advances in prediction. We will explore how AI can help us solve problems that were not previously thought of as prediction problems, how the value of some human skills will rise while others fall, and what the implications are for managers. Our speculations are informed by how past technological change has affected the cost of particular tasks, which allows us to anticipate how AI may affect what workers and managers do.
Machine Learning and Prediction

The recent advances in AI come under the rubric of what’s known as “machine learning,” which involves programming computers to learn from example data or past experience. Consider, for example, what it takes to identify objects in a basket of groceries. If we could describe how an apple looks, then we could program a computer to recognize apples based on their color and shape. However, there are other objects that are apple-like in both color and shape. We could continue encoding our knowledge of apples in finer detail, but in the real world, the amount of complexity increases exponentially.
Environments with a high degree of complexity are where machine learning is most useful. In one type of training, the machine is shown millions of pictures, each labeled with the names of the objects it contains — only some of which are apples. As a result, the machine notices correlations: apples, for example, are often red. Using correlates such as color, shape, texture, and, most important, context, the machine draws on information from past images of apples to predict whether an unidentified new image it’s viewing contains an apple.
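The idea of predicting a label from correlates in past examples can be sketched in a few lines of code. The following is a minimal illustration, not a real image-recognition system: the feature values (redness, roundness, smoothness) are invented for the example, and a simple nearest-neighbor rule stands in for the far more elaborate models used in practice.

```python
import math

# Labeled training examples: (redness, roundness, smoothness) -> object name.
# All feature values here are made up purely for illustration.
training = [
    ((0.90, 0.80, 0.70), "apple"),
    ((0.80, 0.90, 0.80), "apple"),
    ((0.90, 0.90, 0.20), "tomato"),  # red and round, but a different texture
    ((0.20, 0.30, 0.50), "banana"),
]

def predict(features):
    """Predict a label for an unseen item from its nearest labeled example."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(training, key=lambda example: distance(example[0], features))
    return label
```

A new item that is red, round, and smooth — `predict((0.85, 0.85, 0.75))` — is matched to the apples it has seen before, while one with a tomato-like texture is not. The machine never receives a definition of “apple”; it only generalizes from labeled examples, which is the essence of the training described above.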
When we talk about prediction, we usually mean anticipating what will happen in the future. For example, machine learning can be used to predict whether a bank customer will default on a loan. But we can also apply it to the present by, for instance, using symptoms to develop a medical diagnosis (in effect, predicting the presence of a disease). Using data this way is not new. The mathematical ideas behind machine learning are decades old. Many of the algorithms are even older. So what has changed?
Recent advances in computational speed, data storage, data retrieval, sensors, and algorithms have combined to dramatically reduce the cost of machine learning-based prediction. The results can be seen in image recognition and language translation, which have gone from clunky to nearly perfect. In short, prediction has become dramatically cheaper.