Is innovation exploding or expiring?

Published by: Techonomy, April 30, 2014; Forum Agenda, September 1, 2014; and Harvard Business Review Chinese, December 23, 2014

The stories we hear about the economy influence the way we behave, which in turn influences the economy. As Nobel laureate Robert Shiller puts it, sometimes these stories “inspire us to go out and spend, start businesses, build new factories and office buildings.” Other times, “they put fear in our hearts and impel us to sit tight, save our resources, curtail spending and reduce risk.”

So it may be with today’s competing narratives on innovation: one portrays a civilization that has run out of big ideas; the other suggests we are on the brink of a new industrial revolution.

“We wanted flying cars, instead we got 140 characters.” That summary of the first narrative is the subtitle of VC firm Founders Fund’s manifesto. It asks: “Have we reached the end of the line, a sort of technological end of history?” And it suggests that we have run out of big ideas: “The future that people in the 1960s hoped to see is still the future we’re waiting for today. Instead of Captain Kirk and the USS Enterprise, we got the Priceline Negotiator and a cheap flight to Cabo.”

Some economists who concur point to productivity statistics: growth in hourly output per worker in the U.S. was strong after WWII but slowed in the 1970s. It rebounded between 1996 and 2004, but has since fallen back to pre-1990s levels. In a Chicago Fed study of 21 advanced countries, all but one experienced lower productivity growth between 2000 and 2010 than in the previous decade.

In view of this data, Northwestern University economist Robert Gordon concludes that the world is returning to a regime in which growth is not intensive (driven by technological progress) but extensive (driven at best by adding more or better labor, capital, and resources). Most innovations of the past decade, he argues, “did not fundamentally change labor productivity or the standard of living in the way that electric light, motor cars, or indoor plumbing changed it.” They merely represent gradual improvements to existing capabilities: “the iPod replaced the CD Walkman; the smartphone replaced the garden-variety ‘dumb’ cell phone … and iPads provided further competition with traditional personal computers.”

Several arguments support such “stagnation hypotheses.” One refers to the “burden of knowledge”: as ideas accumulate, it takes thinkers ever longer to reach the frontier of their scientific or technical specialty. Pierre Azoulay and Benjamin Jones find that the average U.S. R&D worker contributed almost seven times more to total factor productivity in 1950 than in 2000.

Another argument concerns capital allocation. Harvard Business School professor Clayton Christensen distinguishes between empowering innovations, which lead to new products, and efficiency innovations, which make existing ones cheaper. Societies progress when capital liberated by efficiency innovation flows into empowering innovation. Today, however, “capital liberated by efficiency innovation gets reinvested into still more efficiency innovation,” Christensen says. He suggests that this is driven by the “Doctrine of New Finance,” which leads corporations to ration capital even in times when capital is abundant. In other words, corporate leaders fall victim to a cognitive bias.

A more radical twist of this argument points to vested interests. Economist Tyler Cowen at George Mason University notes that many of the big innovations of recent decades either have primarily private benefits or produce benefits of questionable value. Nobel laureate Joseph Stiglitz at Columbia points out that, just before the collapse of Lehman Brothers, the financial sector prided itself on its innovativeness. Yet, upon closer inspection, it became clear that “most of this innovation involved devising better ways of … exploiting market power.”

Is this enough to proclaim the end of innovation?

To put things into perspective, the “we have run out of big ideas” story is not new. In 1987, Robert Solow famously quipped, “You can see the computer age everywhere but in the productivity statistics.” A decade later, productivity growth picked up, fueled by technologies that radically lowered the cost and increased the speed of transportation and communication. In hindsight, Solow’s assessment might have been driven more by his views of the past than his vision of the future.

Robert Gordon’s analysis may one day be found to be similarly flawed. In a response to Gordon’s article, Kevin Kelly, founding executive editor of Wired magazine, predicts that when students in 2095 are asked to write about why Gordon was wrong in 2012, they will say things like, “He missed the impact of the real inventions of this revolution: big data, ubiquitous mobile, quantified self, cheap AI, and personal work robots – he was looking backward instead of forward.”

Kelly’s response implies that humanity is not running out of ideas; on the contrary, he sees it beginning another industrial revolution. Erik Brynjolfsson and Andrew McAfee formulated this vision eloquently in The Second Machine Age. The first machine age began with the steam engine, which substituted for and augmented our physical power; the second age, the authors explain, will be driven by artificial intelligence and ubiquitous connectivity, with machines substituting for and augmenting our cognitive power.

Brynjolfsson and McAfee optimistically expect that artificial intelligence will do ever more, from trivial tasks such as recognizing our friends’ faces to more substantive ones like driving cars. In addition, they argue, technology is making rapid communication, information acquisition, and knowledge-sharing more democratic and egalitarian.

Who will benefit from these technology trends? Commentators such as Tyler Cowen, Jeffrey Sachs, Laura Tyson, and Martin Wolf suggest that as productivity in manufacturing rises and more routine brainwork is computerized, middle-income jobs could diminish. Already, Cowen points out, the inflation-adjusted median male salary in the U.S. is lower today than it was in 1969. One recent study by Carl Frey and Michael Osborne suggests that 47 percent of U.S. jobs are at risk of computerization.

Cheap and rapid communication is a double-edged sword, too. It allows millions to enter labor markets, for instance through platforms like Flitto, which crowdsources translation by doling it out in bits and pieces to millions of users. At the same time, some contend, it yields “digital sweatshop” wages for those workers. And when it comes to more demanding tasks, digital communication may have the opposite effect on employment: in education, for instance, students might prefer to listen virtually to lectures from the best minds in the field rather than attend lectures at their local institution.

So, to borrow from William Gibson, is the future already here, but just not very evenly distributed?

Not necessarily. Individual workers may historically have suffered from technological progress, but new productive uses were found for the workforce as a whole. According to a study by the McKinsey Global Institute, every 10-year rolling period since 1929 except one has recorded increases in both U.S. productivity and employment. Also, while technology is associated with rising inequality within countries, as a structural driver of globalization it has reduced inequality across them.

Since 2009, the world has been waiting for some story to bring back hope and confidence – and to restart economic growth. At the peak of the crisis, the first tale of innovation ruled the headlines. Today, as the world slowly pulls itself out of the worst economic crisis in decades, the second tale is back, reinvigorating confidence in our innovative and entrepreneurial capacity.

This is good news. Yet with concerns over inequality and employment looming large, it cannot stop here. The question we must ask is: How can innovation generate more and better value for all – for the organizations we lead, the people we serve, and the societies to which we belong?