Artificial intelligence: AI could change everything, but probably not too quickly
Okay, who said that? No one, unless we’re ready to start calling large language models people. What I actually did was ask ChatGPT to describe the economic effects of artificial intelligence; it went on at some length, so I’ve quoted only an excerpt.
I think many of us who have played with the large language models now being discussed under the rubric of AI (although there are almost metaphysical debates about whether we should call it intelligence) have been shocked at how human they can now sound. And you can bet that they, or their descendants, will eventually take over a significant number of tasks currently performed by humans.
Like previous leaps in technology, this will make the economy more productive, but it will also probably hurt some workers whose skills are devalued. Although the term “Luddites” is often used to describe people who are simply prejudiced against new technology, the original Luddites were skilled artisans who suffered real economic harm from the introduction of power looms and knitting frames.
But this time around, how big will those effects be? And how quickly will they arrive? On the first question, the answer is that nobody really knows: predictions about the economic impact of technology are notoriously unreliable. On the second, history suggests that the big economic effects of AI will take longer to materialize than many people currently seem to expect.
Consider the implications of previous advances in computing. Gordon Moore, a co-founder of Intel, which introduced the microprocessor in 1971, died last month. He was famous for his prediction that the number of transistors on a computer chip would double every two years, a prediction that proved stunningly accurate for half a century. The implications of Moore’s Law are all around us, most obviously in the powerful computers, also known as smartphones, that almost everyone carries around these days.
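(A rough back-of-the-envelope illustration, my own arithmetic rather than a figure from Moore or Intel: doubling every two years for fifty years means about 25 doublings, a factor of 2^25, or roughly a 33-million-fold increase in the number of transistors on a chip.)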
For a long time, however, the economic payoff from this impressive increase in computing power was surprisingly elusive. Why did a huge, sustained surge in computing power take so long to pay off for the economy? In 1990, the economic historian Paul David published one of my favorite economics papers of all time, “The Dynamo and the Computer.” It drew a parallel between the effects of information technology and those of an earlier technological revolution, the electrification of industry.
As David noted, electric motors became widely available in the 1890s. But having the technology is not enough. You also need to figure out what to do with it.
To take full advantage of electrification, manufacturers had to rethink the design of their factories. Pre-electric factories were multistory buildings with cramped working spaces, because that was what it took to make efficient use of a steam engine in the basement, which drove the machines through a system of shafts, gears and pulleys.
It took a while to realize that giving each machine its own electric motor made possible sprawling one-story factories with wide aisles that allowed easy movement of materials, not to mention assembly lines. As a result, the big productivity gains from electrification didn’t materialize until after World War I.
Of course, as David predicted, the economic payoff from information technology finally showed up in the 1990s, when filing cabinets and secretaries taking dictation gave way to cubicle farms. (What, you thought technological progress was always glamorous?) The lag in the payoff even turned out to be roughly as long as the lag for electrification.
But this story still presents some mysteries. First, why was the first IT-driven productivity boom (there may be another one coming, if the chatbot enthusiasts are right) so short-lived? It basically lasted only about a decade.
And even while it lasted, productivity growth during the IT boom was no faster than during the generation-long boom after World War II, which was notable for the fact that it didn’t seem to be driven by any radical new technologies.
In 1969, the celebrated management consultant Peter Drucker published The Age of Discontinuity, which correctly predicted major changes in the structure of the economy. But the book’s title also implied, correctly, I think, that the preceding period of extraordinary economic growth had actually been an age of continuity, an era during which the basic contours of the economy did not change much, even as America became much richer.
Or, to put it another way, the great boom from the 1940s until around the 1970s seems to have been largely based on the exploitation of technologies, such as the internal combustion engine, that had been around for decades. That should make us even more skeptical of attempts to use the latest technological developments to predict economic growth.
None of this means that AI won’t have huge economic impacts. But history suggests those impacts won’t come quickly. ChatGPT and whatever follows are probably an economic story for the 2030s, not for the next few years.
This does not mean that we should ignore the consequences of a possible boom caused by artificial intelligence. Large language models in their current form should not affect economic forecasts for the next year, and probably should not have much of an impact on economic forecasts for the next decade. But the long-term outlook for economic growth looks better now than it did before computers began to imitate humans so well.
And long-term economic forecasts matter, even if they are always wrong, because they underlie long-term budget projections, which in turn help guide current policy in a number of areas. Not to make too much of this, but anyone who predicts a radical acceleration of economic growth thanks to AI, which would lead to a significant increase in tax revenue, while at the same time predicting a future fiscal crisis unless we drastically cut spending on Medicare and Social Security, isn’t making much sense.