
Chart of the Day, 28 Jan 2015: Oil, Cornucopians, Peakists and Jeremy Grantham

The stunning collapse in oil and metal prices since last summer (see yesterday’s post) has brought the cornucopians and abundantites crawling out of the woodwork. Witness this passage from an (otherwise very good) article in The Economist of 17th January titled “Let there be light”:

An increase in supply, a surprising resilience in production in troubled places such as Iraq and Libya, and the determination of Saudi Arabia and its Gulf allies not to sacrifice market share in the face of falling demand have led to a spectacular plunge in the oil price, which has fallen by half from its 2014 high. This has dealt a final blow to the notion of “peak oil”. There is no shortage of hydrocarbons in the Earth’s crust, and no sign that mankind is about to reach “peak technology” for extracting them.

Frankly, this is just sloppy thinking from The Economist: the second sentence, which talks of a “final blow” to the notion of peak oil, doesn’t follow on from the first.

In short, the paragraph muddles the short term and the long term. Why is a fall in oil prices barely six months old a “final blow” to the notion of peak oil? And while fracking shows we are far from “peak technology”, it says nothing about price. Can tight oil keep coming to market for years to come at current prices? I think not. For a longer treatment of oil supply versus oil demand, see my more detailed post titled “Has Shale Killed Peak Oil”.

One of the most vocal advocates of the ‘peakist’ or ‘depletist’ hypothesis is Jeremy Grantham, who has used The Quarterly Letter of GMO as a platform for his views. The chart below is taken from The Third Quarter 2014 letter (click for larger image):

[Chart: U.S. Average Hourly Manufacturing Earnings versus Oil Price per Barrel]

Grantham points out that in 1940 one hour’s work for an American engaged in manufacturing could buy 20% of a barrel of oil. At the twin peaks of oil abundance–1972 and 1999–the same wage could buy over a barrel of oil. But those days, he argues, are long gone. According to Grantham, this has implications not only for oil markets but also for the energy underpinnings of global economic and productivity growth.
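Grantham’s affordability measure is simply a ratio of wages to prices. A minimal sketch (the wage and price figures below are illustrative stand-ins chosen to match his ratios, not exact historical data):

```python
def barrels_per_hour(wage_per_hour, oil_price_per_barrel):
    """Barrels of oil one hour of manufacturing wages can buy."""
    return wage_per_hour / oil_price_per_barrel

# Illustrative, hypothetical figures only:
ratio_1940 = barrels_per_hour(0.66, 3.30)   # ~0.2 of a barrel
ratio_1972 = barrels_per_hour(4.00, 3.50)   # just over a barrel
```

When the ratio falls, each hour of work buys less energy, which is Grantham’s point about the post-1999 era.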

Yesterday, I also argued that the rapid slowing of the Chinese economy, rather than a breakthrough in one particular extraction technology, was the likely culprit behind the havoc in commodity markets. As evidence, I noted how iron ore and copper prices had collapsed along with the oil price, despite the fact that you can’t frack for copper and iron ore.

The critical question now is what will happen to supply in the face of sluggish demand. Tight oil production differs dramatically from traditional oil production because of its accelerated depletion schedule. Fracked fields deplete quickly, so to maintain production you must continually invest. If you don’t, aggregate production falls fast–that is, within a year or two. So we won’t witness a decade-long excess-capacity work-out as in previous oil price busts: supply should adjust to demand at breakneck speed this time around.
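The difference is easy to see with a simple exponential decline curve. The decline rates below are hypothetical round numbers for illustration, not field data:

```python
def production(initial_rate, annual_decline, years):
    """Output of a single well after a given number of years,
    assuming simple exponential decline."""
    return initial_rate * (1 - annual_decline) ** years

# A stylised tight-oil well declining 60% a year versus a
# conventional well declining 5% a year (hypothetical numbers):
tight = [production(1000, 0.60, t) for t in range(4)]
conventional = [production(1000, 0.05, t) for t in range(4)]
# After two years the tight-oil well is below a fifth of its initial
# rate, while the conventional well is still above 90%.
```

Without constant drilling of new wells, aggregate tight-oil output therefore collapses on a timescale of a year or two, which is why supply can adjust to demand so quickly this time.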

Consequently, while we are not at “peak technology” for oil extraction, we possibly are at “peak cheap technology”. If so, forget all talk of “final blows” to peak oil.

Links for the Week Ending 6 April 2014

  • The second instalment of The Intergovernmental Panel on Climate Change’s (IPCC) Fifth Assessment Report (AR5), titled “Impacts, Adaptation and Vulnerability”, was released in Tokyo on the 31st March and can be found here. The “Summary for Policymakers” can be downloaded here. On page 19 of the Summary, the IPCC states that “the incomplete estimates of global annual economic losses for additional temperature increases of around 2 degrees Celsius are between 0.2 and 2.0% of income (± one standard deviation around the mean)”, with the risk of higher rather than lower losses. The report then goes on to say: “Losses accelerate with greater warming, but few quantitative estimates have been completed for additional warming around 3 degrees Celsius or above”. Given that it looks almost impossible that we will constrain warming to 2 degrees Celsius on the current CO2 emission path and with the installed fossil fuel energy infrastructure base, we really are heading into unknown territory of climate risk.
  • A key area of economic loss from climate change relates to drought. To date, most models have focussed on precipitation as the principal driver of drought. A new paper by Cook et al in the journal Climate Dynamics titled “Global Warming and Drought in the 21st Century” gives greater emphasis to the role of evaporation (more technically, potential evapotranspiration, or PET) in drought. Through better modelling of PET, the paper sees 43% of the global land area experiencing significant dryness by the end of the 21st century, up from 23% for models that looked principally at precipitation alone. A non-technical summary of the paper can be found here.
  • Meanwhile, the general public has lapsed back into apathy around the whole climate change question, partially due to the hiatus period in temperature rise we are currently experiencing. However, evidence is slowly mounting that we could be about to pop out of the hiatus on the back of a strong El Nino event (periods of high global temperature are linked to El Ninos). Weather Underground has been doing a good job of tracking this developing story, with another guest post from Dr. Michael Ventrice (here) explaining the major changes in the Pacific Ocean that have taken place over the last two months and which are setting us up for an El Nino event later in the spring or summer.
  • Changing subject, The Economist magazine ran a special report last week on robotics titled “Immigrants from the Future”. In some ways, I came away less impressed, rather than more, by the capabilities of the existing generation of robots.
  • I often blog on happiness issues (most recently here). This may seem strange for a blog whose stated focus is on such global risks as resource depletion and climate change, but I don’t see the contradiction. For me, much of our striving to extract and burn as much fossil fuel as possible comes through the pursuit of goals that don’t necessarily make us any happier. A new book by Zachary Karabell titled “The Leading Indicators” adds a new dimension to this argument. Karabell argues that over the last century or so we have created a series of statistics that are more than pure measurements of economic success. In short, they are ideology laden rather than ideology free. Political parties set out their manifestos based on a mishmash of economic achievements and goals based on GDP, unemployment, inflation, the trade balance, interest rates, the strength of their national currency and so on. But these numbers encapsulate only part of well-being, yet such statistics totally dominate political discourse because that is how we have been taught to keep score in a modern capitalist economy. As we career towards extremely dangerous climate change, I think it is time we recognised these economic indicators for what they frequently have become: false gods. Karabell has an article in The Atlantic setting out the book’s main ideas here, and there is a good review in The Week here.
  • Rising inequality has been one of the major economic developments of the past 40 years. I am a great fan of the World Bank economist Branko Milanovic, who wrote a wonderful book called “The Haves and the Have-Nots: A Brief and Idiosyncratic History of Global Inequality”, in which he pulls together many strands of the inequality literature within a global context. I blogged on this once here. A nice complement to this book is the new web site titled Chartbook of Economic Inequality, which has been put together by two academic economists, Anthony Atkinson and Salvatore Morelli. If you like infographics, you will love this site.

Hiding from the Computers Part 4: Time to Get Skeptical on Lump of Labour Skeptics?

A decade or so ago, any suggestion that technology could be a major driver of inequality, let alone unemployment, would generally have been met with contempt by your average economist. Anyone questioning the beneficence of technology would have been accused of succumbing to the ‘lump of labour’ fallacy. Simplistically, the lump of labour fallacy refers to the belief that there exists a fixed pot of labour; accordingly, if a computer eats some of the pot, there is less left for everyone else.

As an example, here is Paul Krugman skewering the lump of labour fallacy back in 2003:

As the derisive name suggests (the lump of labor fallacy), it’s an idea economists view with contempt, yet the fallacy makes a comeback whenever the economy is sluggish.

And for decades, The Economist magazine, a generally intelligent supporter of free markets and free trade—by which I mean it has enough intellectual curiosity to explore counterarguments—has frothed at the mouth like a rabid Tea Party activist whenever the idea of a fixed supply of labour is raised.

In a very funny parody, Tom Walker of New York’s Monthly Review (who must have too much time on his hands) stitched together sentences containing the words “lump of labour” from more than a decade’s worth of The Economist to create this article here. It perfectly captures the magazine’s angry but condescending tone whenever the issue is raised.

Walker was at the time writing in defence of shorter working hours in the face of rising unemployment, a somewhat tangential topic to where we are going with this post (although I will touch on it in my next), but the point I think he was trying to make was simple: if you believe there is any limit on the amount of work available, The Economist thinks you are an idiot.

But then a strange thing happened. Some mainstream economists started to venture the opinion that technology was making the labour market act weird. They hadn’t quite got to the stage of taking the lump of labour seriously, but they were now prepared to admit that technological progress was a two-edged sword—at least for some workers.

As with most intellectual shifts, the new thinking has been rather messy, occasionally self-contradictory and, of course, hotly disputed. But out of it, we can find some seminal academic papers that deepen our understanding of what is going on. I will highlight one to begin with: the 2006 paper “The Polarization of the U.S. Labor Market” by David Autor, Lawrence Katz and Melissa Kearney.

Autor et al had noticed that the U.S. labour market was changing shape under the influence of the awkward acronym SBTC (aka skills-biased technical change). Within the skills-based distribution of jobs, employment was growing at the top end and at the bottom end, but it was hollowing out in the middle. Furthermore, back at the top end, wages were not only rising but also diverging; in other words, inequality among the wealthy, so beautifully spoofed here by The Onion, was actually true. In contrast, no such divergence in fortunes was seen at the bottom.

Accordingly, the new trend the paper tracked was not one that was ‘lifting-all-boats’ technology-led economic growth. Rather, the boats in the Autor et al paper are being thrown all over the place—with some capsizing.

Now, typical of its type, the paper contains an empirical bit looking at the labour-market data, and then a model that tries to make sense of what is going on. In the model, jobs sit in a grid of attributes: work is cognitive or manual, but it is also routine or non-routine. Of course, such a model removes the grey scale of real life, but that is not important for our basic understanding.

Autor and his co-authors then go on, somewhat confusingly, to simplify the workforce even further into three categories rather than four: 1) abstract, which means cognitive, non-routine jobs; 2) routine, which covers both routine cognitive and routine manual jobs; and finally 3) manual, which means manual non-routine jobs. An earlier paper, on which Autor was again a co-author, has a useful table showing what is going on in the grid:

[Table: the task grid underlying Autor et al.’s job polarisation framework]

Moving further into their model, workers are divided into those with tertiary education and those educated only up to high school. The latter can’t perform abstract tasks but can switch between routine and manual tasks. Finally, technology is seen as having two impacts: it mostly substitutes for routine tasks, but it mostly complements abstract tasks—that is, it makes the abstract workers more productive.
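The model’s moving parts can be set down as a small lookup table. A minimal sketch (the names and category labels are mine, not the paper’s notation):

```python
# Autor et al.'s four-cell task grid collapsed into three job
# categories, with technology's dominant effect on each (a sketch,
# not the paper's own code or notation).
TASK_CATEGORY = {
    ("cognitive", "non-routine"): "abstract",
    ("cognitive", "routine"):     "routine",
    ("manual",    "routine"):     "routine",
    ("manual",    "non-routine"): "manual",
}

TECH_EFFECT = {
    "abstract": "complement",   # computers raise these workers' productivity
    "routine":  "substitute",   # computers perform these tasks directly
    "manual":   "neutral",      # little direct interaction with computers
}

def classify(skill_type, routineness):
    """Map a (skill, routineness) pair to its job category and to
    technology's dominant effect on that category."""
    category = TASK_CATEGORY[(skill_type, routineness)]
    return category, TECH_EFFECT[category]
```

Both `classify("cognitive", "routine")` and `classify("manual", "routine")` land in the hollowed-out middle, which is the polarisation story in miniature.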

The word complement is key, since the complementarity between technology and routine cognitive and manual labour has been the driver of the explosive growth in living standards since the industrial revolution. Ned Ludd was wrong to smash up two stocking frames in 1779 because his labour, with the course of time, could always be paired with new technology that required a routine cognitive or routine manual human complement.

As Gregory Clark’s book “A Farewell to Alms” demonstrates, this was a win-win situation for the working class. As a result, workers were more ‘pulled’ out of the countryside and into the cities by attractive relative wages than they were pushed out by evil property owners enclosing their land. And while Marx and Engels may dispute the cause of the migration, they applauded the result; from the Communist Manifesto:

The bourgeoisie has subjected the country to the rule of the towns. It has created enormous cities, has greatly increased the urban population as compared with the rural, and has thus rescued a considerable part of the population from the idiocy of rural life.

At this point, I must stress that Autor has taken every opportunity to distance himself from lump of labour advocates since coming out with the polarisation thesis. From a 2010 paper (here):

This ‘lump of labor fallacy’—positing that there is a fixed amount of labor to be done so that increased labor productivity reduces employment—is intuitively appealing and demonstrably false. Technological improvements create new goods and services, shifting workers from older to new activities. Higher productivity raises incomes, increasing demand for labor throughout the economy. Hence, in the long run technological progress affects the composition of jobs not the number of jobs …

… It is not fallacious, however, to posit that technological advance creates winners and losers.

And in an op-ed piece in the 24 August 2013 New York Times:

Computerization has therefore fostered a polarization of employment, with job growth concentrated in both the highest- and lowest-paid occupations, while jobs in the middle have declined. Surprisingly, overall employment rates have largely been unaffected in states and cities undergoing this rapid polarization. Rather, as employment in routine jobs has ebbed, employment has risen both in high-wage managerial, professional and technical occupations and in low-wage, in-person service occupations.

Hmmm. It is true that jobs have risen in the high-end cognitive occupations, but the rise in low-wage manual occupations has been minimal, and far too small to absorb the displaced middle. We can see these numbers in a 2013 note from the Federal Reserve Bank of New York by Albanesi et al. Indeed, the job polarisation highlighted by Autor is affecting the aggregate labour market beyond just relative wages. First, both cognitive routine and manual routine jobs have been in structural decline:

[Chart: breakdown of U.S. employment shares by job category, from Albanesi et al.]

And the routine jobs are the first to go in recessions and the last to come back:

[Chart: changes in unemployment by job category across recessions, from Albanesi et al.]

And from my previous posts, this chart of labour participation shows the net effect of all these moving parts. In a modern state like the U.S., the unemployed hide where they can, seeking refuge, for example, in disability claims, so the labour participation rate goes down.

[Chart: FRED, civilian labor force participation rate]

Autor’s faith that technology cannot reduce the total number of jobs is thus stymied by the data—at least over the last decade. In his original 2006 article, Autor’s model predicted that the increased productivity of non-routine cognitive jobs would lead to income effects (greater wealth for the cognitive elite) that would in turn create higher demand for non-routine manual jobs.

Nonetheless, this is an empirical observation from old data, not a truth that comes out of the model. What Autor admits is that most cognitive routine and manual routine workers can’t price their labour at a sufficiently low rate to compete with computers. Accordingly, they have to find refuge in work that is not in direct competition with computers (or technology broadly defined).

Looking at the Albanesi charts, however, the size of the non-routine manual job category is far smaller than the routine cognitive and manual job categories. So we have a huge problem of absorption. Still, to a classical economist, a price exists at which the market will clear. But the dirty little secret of the lump of labour skeptics is that the market may clear at a price that doesn’t provide a liveable wage (as per the Boxer and Napoleon example in Part 2).

Further, under the Autor model, income effects associated with the ever-prospering non-routine cognitive elite could compensate for the cut-throat competition within the non-routine manual sector. As the geeks’ wages rise, the opportunity cost of doing their own washing rather than cranking out computer code grows ever steeper. In the language of economists, we have income elasticity of demand effects (a richer cognitive elite) for non-routine manual labour coupled with price elasticity of demand effects (lower manual wages making non-routine manual workers more attractive to hire).
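With constant elasticities, the two effects simply add. A stylised sketch (the elasticity values and growth figures are my own illustrative assumptions, not estimates from Autor’s work):

```python
def demand_change(elite_income_growth, manual_wage_change,
                  income_elasticity=0.8, price_elasticity=-0.5):
    """Approximate proportional change in demand for non-routine
    manual services, combining an income effect (richer elite) and a
    price effect (cheaper manual labour). All inputs are fractions."""
    income_effect = income_elasticity * elite_income_growth
    price_effect = price_elasticity * manual_wage_change
    return income_effect + price_effect

# Elite incomes up 10%, manual wages down 10% (hypothetical):
change = demand_change(0.10, -0.10)   # ~0.13, i.e. roughly 13% more demand
```

The open question in the text is whether demand boosts of this kind can ever be large enough to absorb the displaced routine workers at a liveable wage.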

I would call this the ‘Downton Abbey economy’: a return to an Edwardian-style wealthy elite employing an army of non-cognitives. But how many workers would a modern-day Lord Grantham need to employ to run Downton Abbey in the requisite style? I would guess fewer than a fifth of the Edwardian staff, given that technology has eliminated most of the routine jobs that all the scullery maids and manservants used to perform. Yet Autor remains resolutely upbeat. In the New York Times op-ed piece he says this:

The outlook for workers who haven’t finished college is uncertain, but not devoid of hope. There will be job opportunities in middle-skill jobs, but not in the traditional blue-collar production and white-collar office jobs of the past. Rather, we expect to see growing employment among the ranks of the “new artisans”: licensed practical nurses and medical assistants; teachers, tutors and learning guides at all educational levels; kitchen designers, construction supervisors and skilled tradespeople of every variety; expert repair and support technicians; and the many people who offer personal training and assistance, like physical therapists, personal trainers, coaches and guides. These workers will adeptly combine technical skills with interpersonal interaction, flexibility and adaptability to offer services that are uniquely human.

Nonetheless, in other statements by Autor, one can sense some ‘wobble’. In an MIT Technology Review article called “How Technology Is Destroying Jobs” by David Rotman, Autor comes out with this:

“There was a great sag in employment beginning in 2000. Something did change,” he says. “But no one knows the cause.”

So something is going on with total employment, even if he doesn’t admit it is a lump of labour problem for non-cognitive workers in the face of advancing technology.

In the same article, Lawrence Katz, Autor’s co-author on the original 2006 polarisation paper, goes even further. While reiterating that the historical record shows no decrease in jobs over an extended period following technological change, Katz confesses that this time could possibly be different:

Katz doesn’t dismiss the notion that there is something different about today’s digital technologies—something that could affect an even broader range of work. The question, he says, is whether economic history will serve as a useful guide. Will the job disruptions caused by technology be temporary as the workforce adapts, or will we see a science-fiction scenario in which automated processes and robots with superhuman skills take over a broad swath of human tasks? Though Katz expects the historical pattern to hold, it is “genuinely a question,” he says. “If technology disrupts enough, who knows what will happen?”

So lump of labour advocates are no longer ignored even if they still don’t get much respect. And if automated processes and robots do “take over a broad swath of human tasks” and radically downsize the job market, what is to be done? I will work through the implications of a lump of labour victory in my next two posts.