Category Archives: Technology

Links for the Week Ending 9 March 2014

Apologies for the late posting of this week’s links. It has been a crazy week.

  • For those of a non-business background, any reference to The Economist magazine with respect to climate change may appear strange. Who cares what The Economist writes on the subject? Quite a few people, I would argue. Few, if any, senior business executives will read posts on Real Climate or Skeptical Science, let alone academic articles on the subject. For English speakers, most climate change commentary will come out of the pages (much of which will, of course, be online these days) of The Wall Street Journal, The Financial Times, other serious non-financial dailies like The New York Times in the U.S. and The Telegraph in the U.K., a motley collection of weeklies like Forbes, and, of course, The Economist. And The Economist is rather special in terms of its reach into boardrooms across the globe (and, for that matter, cabinet offices). For example, Playboy Magazine once asked Bill Gates what he reads. The answer: “The Economist, every page”. A year ago, The Economist wrote an extended article on the global warming ‘hiatus’ that, I thought, gave too much weight to a few studies suggesting that climate sensitivity was far lower than previously thought (here, free registration). This week, however, the magazine made amends by publishing an excellent piece titled “Who pressed the pause button?” on the so-called ‘hiatus’ in temperature rise. It ended with this statement: “Most of the circumstances that have put the planet’s temperature rise on “pause” look temporary. Like the Terminator, global warming will be back.”
  • Talking of ‘The Terminator’, The Guardian carries an interview with the Crown Prince of techno-optimists and Google geek in chief Ray Kurzweil. God help us if anyone actually believes this stuff.
  • Up the road from me in Oxford is the NGO Climate Outreach and Information Network (COIN). Its founder George Marshall has an interesting blog that looks at the narratives surrounding climate change. In a post called “How the Climate Change Messengers Became Blamed for the Floods” he deconstructs the media’s reaction to the recent U.K. floods. It’s somewhat depressing stuff.
  • One of the sharpest observers of the shale hype has been the petroleum geologist Art Berman. He has a site called The Petroleum Truth Report, but, frustratingly, doesn’t keep it current. Fortunately, he has just given a new interview with Oilprice.com updating us on his recent thinking. The interview is full of gems such as this: “Oil companies have to make a big deal about shale plays because that is all that is left in the world. Let’s face it: these are truly awful reservoir rocks and that is why we waited until all more attractive opportunities were exhausted before developing them. It is completely unreasonable to expect better performance from bad reservoirs than from better reservoirs.” I highly recommend you read the whole thing.
  • The economist Noah Smith writes a lively blog called Noahpinion. In this post he makes some keen observations on the ‘jobs and robots’ debate, while in this article in The Week he compares America’s decline with the collapse of the Ming Dynasty.

Links for the Week Ending 2 March 2014

  • Martin Wolf has been revisiting the robots and jobs topic over the past few weeks in a couple of articles in The Financial Times here and here (free access after registration). This is a theme I have been addressing a lot recently in a series of posts starting here. Wolf finishes his last article with the observation that technology does not always have to shape institutions; it should be the other way around: “A form of techno-feudalism is unnecessary. Above all, technology itself does not dictate the outcomes. Economic and political institutions do. If the ones we have do not give the results we want, we must change them.” I agree, but this will not be easy.
  • I have also just discovered a fascinating blog, called RobotEnomics (sic), that pulls together articles on the new robot economy. For example, check out this post on the economic implications of driverless cars.
  • California has experienced significant rainfall over the last few days. The latest Drought Monitor (released weekly) doesn’t capture this rainfall, so we should see some slight improvement when the next update comes out. Critically, though, California’s water bank—its high mountain snowpack—is still running at around 20% of average. You can see the end-month figures as measured by the Department of Water Resources here and an article giving background to the snowpack here. Mother Jones has some nice graphics on the crops being hurt by the drought here, while The Atlantic has a very interesting (and very long) article on the history and future of California’s massive water engineering projects here.
  • Here I go again: linking to the March 1998 Campbell and Laherrere article titled “The End of Cheap Oil” in Scientific American. The authors ended the article with this sentence: “The world is not running out of oil—at least not yet. What our society does face, and soon, is the end of the abundant and cheap oil on which all industrial nations depend.” Average price of Brent crude in 1998: $13.2 per barrel, equivalent nowadays to around $19 after adjusting for inflation (a short sketch of that adjustment follows this list). Brent now: $109 per barrel. But isn’t fracking going to give us an endless supply of cheap oil? Here is an article in Bloomberg titled “Dream of Oil Independence Slams Against Shale Costs”. In other words, Campbell and Laherrere continue to be proved right and the energy cornucopians continue to be proved very wrong.
  • For technological optimists the dream is for a transformational technology that can permanently alter the energy supply equation. Fusion has always been one such hope, but forever decades away from commercial development. The New Yorker has just published a superb article called “A Star in a Bottle” on the International Thermonuclear Experimental Reactor (ITER) being built in France. The audacity and scope of the project are extraordinary. Yet my takeaway from the article is that fusion provides little hope of providing a timely saviour with respect to either climate change or fossil fuel depletion.
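
As an aside, the inflation adjustment in the Campbell and Laherrere item above is easy to reproduce. Here is a minimal sketch; the CPI figures are approximate US annual averages that I am assuming for illustration, not numbers taken from the article:

    # Rough reproduction of the Brent price inflation adjustment above.
    # CPI values are approximate US CPI-U annual averages (my assumption).
    cpi_1998 = 163.0    # approximate average for 1998
    cpi_2014 = 234.0    # approximate level in early 2014
    brent_1998 = 13.2   # average Brent price in 1998, $/barrel

    adjusted = brent_1998 * (cpi_2014 / cpi_1998)
    print(f"${adjusted:.0f} per barrel")  # roughly $19, versus $109 today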

Links for the Week Ending 23 February 2014

  • The so-called hiatus period of flat-lining global mean temperatures has certainly been a godsend for the climate skeptic lobby. A lot of this recent change in temperature trend is due to the ENSO cycle: El Nino years are generally hot years, and we haven’t recently had many strong El Ninos. You can see this effect in the NASA chart here. So the next time we get a strong El Nino year, expect to see a new global mean temperature record. When will we see the next one? This intriguing guest post on Jeff Masters’ Wunderblog suggests we may be due for a big El Nino in 2014. If true, expect to see the ‘hiatus period’ disappear from the climate skeptic lexicon.
  • By coincidence, I saw the NASA chart above reproduced in a blog post on Econbrowser by Menzie Chinn. Hamilton and Chinn, who co-author the blog, are two of the most respected economists in the world. Hamilton wrote one of the standard time series texts that a generation of econometricians grew up on. Chinn’s post is titled “Economic Implications of Anthropogenic Climate Change and Extreme Weather”. He takes aim at those who think we can easily adapt to climate change, pointing out that not only will the trend change but so will volatility. All of this will cost a lot of money.
  • The global media has picked up on the Californian drought to a certain extent. If you want to track it yourself, visit the U.S. Drought Monitor page here. There has been far less coverage, however, of the Brazilian drought; here is a rare piece of coverage by National Public Radio. And what is happening in Brazil is already having an effect on food prices, as witnessed by the skyrocketing price of coffee; see here.
  • I have frequently commented that despite rising resource constraints and a productivity slowdown, global GDP growth has motored on at around 3% per annum regardless. This is mostly because China has acted as a growth locomotive for everyone else, offsetting anaemic growth in the U.S., Europe and Japan. So if China’s growth collapses, this will likely mean that global growth takes a step-change downward (the other BRICs and MINTs have their own problems). Having seen Japan’s experience first hand (one day growth, the next day no growth), I have been a huge skeptic of China’s economic model. But to date, the sky has not fallen down. The BBC’s economics correspondent Robert Peston has just produced a short documentary called “How China Fooled the World” that sets out the pessimist’s case and can be found on iPlayer. If you have a problem accessing BBC content, try the YouTube link here.
  • Most web-based technology favours scale: it facilitates ‘winner takes all’ economics. Think Google and Facebook. Yet it also reduces the cost of information and, potentially, small production runs. This, in turn, favours the so-called ‘long tail’. This strange dance between the centrifugal and centripetal forces of information technology is a source of both fragility and resilience as we face resource and climate change challenges. For a slightly different riff on the same theme see this article by the economist Robert Frank in The New York Times.

Energy Return on Investment (EROI): State of Play

In my last post, I referred to the work of Charles Hall on Energy Return on Investment (EROI) and biophysical economics. Following an exchange of e-mails, Professor Hall directed me to some of his more recent work, including a January 2014 paper titled “EROI of Different Fuels and the Implications for Society”, published in Energy Policy (free access). The paper looks at the critical EROI question: “How many units of energy do you extract for each unit of energy you invest?”

The paper is a veritable chartfest of all things EROI, but I will whet your appetite with just three. First up is an EROI comparison between different fossil fuel and biomass energy sources (click for larger image).

[Image: Mean EROI]

The bad news here is that coal remains the king of EROI since you get around 40 times as much energy out for each unit of energy you put in. Hardly good for CO2 emissions trajectories and climate change.
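
To make the arithmetic concrete, here is a minimal sketch of what an EROI of 40 means in net energy terms. The function is my own illustration; the underlying relation is the standard net-energy identity, net fraction = (EROI − 1) / EROI:

    # Net energy arithmetic implied by an EROI figure.
    def net_energy_fraction(eroi):
        # Fraction of gross energy output left for society after
        # repaying the energy invested in extraction.
        return (eroi - 1.0) / eroi

    print(f"{net_energy_fraction(40):.1%}")   # coal at ~40:1 -> ~97.5% net
    print(f"{net_energy_fraction(1.5):.1%}")  # a near-break-even source -> ~33% net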

Next up is the decline in global oil and gas EROIs (click for larger image):

[Image: Global Oil and Gas EROIs]

The decline is unsurprising since we are trying to exploit ever more geologically marginal sources of oil and gas in ever more unconventional forms.

Finally, a chart showing fossil fuels up against renewables (click for larger image):

[Image: EROIs of Different Energy Sources]

I was genuinely surprised at this one because both wind and photovoltaic (PV) came in higher than I expected. Hall flags all the major problems with wind and PV (need for base load and so on) and also points to disputes over PV EROI methodology. Nonetheless, I have heard arguments in the past that PV is almost break-even in EROI terms; this does not appear to be the case.

There is a lot more in the paper, including numerous interesting references. When I get time, I will come back to the EROI of renewables as it seems such an important topic.

Links for the Week Ending 26 January

  • The current oil narrative in the U.S. is one of bountiful supply but structurally reduced demand. Yet Mark Lewis, in The Financial Times, disputes the latter story (here, free registration at the FT). He argues that the last five years have seen a cyclical, not structural, shift in demand. But now that the economy is picking up speed, demand for oil is kicking up a notch. Given the astronomical capital expenditures needed to bring new supply to market, however, the only mechanism able to maintain equilibrium will be the rationing effect of higher prices.
  • Again in The Financial Times are some fascinating statistics showing that 26% of young adults aged between 20 and 34 now live with their parents in the U.K., up from around 21% in 1996. A prime mover behind this trend is the 13% decline in real median incomes for this age group in the decade from 2001/02 to 2011/12. All part of the new normal.
  • In my former job running a hedge fund, I learned one great skill that is rarely developed in the general populace: the ability to believe both the buy and the sell case for any individual position. So does this mean that I was unable to trade, like a deer caught in the headlights? Not really, because sometimes (but not often) you rate the rationale behind one side of a trade as a little superior to the other—and that’s when you place your bet. This approach can be extended to most things in life. So in the case of my recent series on technology and unemployment (starting here), I looked at a series of papers that suggested we have a serious problem with technology. Given that bias, my inclination is to find intelligent people who say we don’t have a problem. One such person is the progressive economist Lawrence Mishel, who in a blog post last week argues that technology is not the job killer; rather, low wages, inequality and unemployment are caused by other, non-technological factors. My understanding of this topic is highly fluid, with argument and counter-argument going onto my weighing scales. I tilt towards worrying, but am still very receptive to opposing views.
  • Of course, in the case of climate change, the scales are dramatically weighted to one side—i.e., bad climate change outcomes. Marginally encouraging is the fact that corporations are slowly comprehending climate change risk. As evidence, climate change has elbowed its way back onto the agenda at Davos. The Guardian is one of the few publications to pick up on this trend and has been tracking the various seminars, panel discussions and presentations there (here, here and here). And The New York Times has an informative article by Coral Davenport on how big business is getting more concerned over global temperature rise.
  • The Guardian also has a very interesting article on the impact of ENSO cycles (El Nino and La Nina cycles) on global mean temperature. What is new to me is the claim, which originates from a note in the academic journal Nature Climate Change (full article is behind a paywall), that the ENSO cycle itself will change as the planet warms, leading to more extreme El Ninos and thus more volatility in temperature variation. Yet again we learn of another source of climate risk.

Blade Runner with Food Stamps and Benefits Street

I realise that many of my readers may not have the stamina for my longer posts, so I wish to repackage some of the ideas raised within the extended pieces as mini-posts all to themselves. First up are Walter Russell Mead‘s alternative visions for a world in which computers replace work. One of these is the faintly dystopian Blade Runner with food stamps, explained within a lovely essay in The American Interest titled “Jobs Jobs Jobs”. Russell Mead sets out his stall thus:

There are really two choices before us as we think about the future of jobs in an age of information. Either most human beings are about to become economically obsolete, or the information economy can find a use for their talent and hard work. Much depends on which of these two pictures turns out to be the best description of the future.

If we believe in the first alternative, we are going to start planning for the mother of all welfare states. There will be a period of transition, but something like 80 percent or more of the population is going to become superfluous to the economy. There will be no jobs where the work of this group could command a living wage; the state must somehow make provision for them or wait for them to fall into poverty and risk the social explosion that will probably follow.

It’s likely that an information age welfare state would consist of two components: straight out welfare and “social inclusion” payments for some, subsidized make-work jobs (like Postal Service employment in an age of email) for others. The money to fund these programs will have to come from corporate profits and from the incomes of those who still manage to surf on the waves of digital change.  That suggests rising tax burdens and a constant class struggle between the economically connected citizens who want to keep what they have earned and the clients of the welfare and make-work state.

He then goes on to describe a world that is not a million miles away from what we see today in the U.K., at least in the extraordinary bifurcation of the country into greater London and the rest.

If the information economy works like this, the whole country would start looking more like California and New York City: unbridgeable class divides, huge inequality, fountains of innovation, and tiny islands of great wealth and privilege surrounded by proles on the dole. Inside the glittering bubble, the digirati and their courtiers would live lives of intense purpose and excitement. Outside the bubble, meaning would be the good in scarcest supply. To have a life where your work means something and your hands help steer the world would be the exclusive privilege of a tiny handful of enlightened, intelligent, and energetic people.

However, “the mother of all welfare states”, as he describes it, is a working class (or shall we call it non-working class) dystopia:

This is Blade Runner softened by food stamps, but as in the public housing projects and other warehouses where we store “surplus” people today, the most acute form of poverty and deprivation will not be the lack of food, clothing or even shelter. It will be a lack of social connection, of independence founded on achievement, on the human dignity that comes from doing work. Bellies will be full, but lives will be empty, and with that emptiness will come ills of every kind: addiction, brutality, ugly and stunted sexual and emotional lives for many, neglect of the young and the old.

The images tap into voter angst on both sides of the Atlantic, through which those precariously in work, but suffering from declining living standards and a risk-filled future, may be inclined to kick out at the welfare-supported underclass rather than the elites. In Britain, the Channel 4 documentary “Benefits Street“, which tracks the lives of welfare claimants on a Birmingham residential road, has caused a firestorm across the political spectrum by playing on the same theme. Many see the lives depicted on “Benefits Street” as ’empty’ and ‘stunted’, yet still lives of deliberate choice. Nonetheless, the right-of-centre Spectator has a slightly different take on this underclass (here):

The biggest scandal of Benefits Street, which Channel 4 is unlikely to reveal, is that White Dee is behaving rationally in deciding not to work. This is not something ministers like to divulge, but Policy in Practice, a welfare and employment consultancy, has run the figures for The Spectator. Dee is a single mother with two young children.  Were she to earn, say, £90 a week as a cleaner, then the system would reduce her benefits by £70 — an effective tax rate of 78 per cent on that £90 she’s earned. She’d thus be slaving away all week for £20 — far less than the minimum wage.

It doesn’t get too much better higher up the scale. If she landed a £23,000-a-year job, her effective tax rate would still be 74 per cent – so she’d end up just £5,975 a year better-off than if she’d spent the year sitting on the sofa watching daytime TV and chatting to her pals on the street. If she then worked extra hours, or earned a pay rise, she’d keep a pitiful 9p in every extra pound paid. This is nothing to do with indolence. Which of us would work at a 91 per cent tax rate?
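
The withdrawal arithmetic in that quote is straightforward to verify. Here is a minimal sketch using only The Spectator’s own numbers (the function name is mine, purely for illustration):

    # Effective tax rate from benefit withdrawal, per The Spectator's example.
    def effective_tax_rate(gross_earned, benefits_withdrawn, tax_paid=0.0):
        return (benefits_withdrawn + tax_paid) / gross_earned

    rate = effective_tax_rate(90, 70)              # earn £90, lose £70 in benefits
    print(f"{rate:.0%}")                           # ~78%
    print(f"kept: £{90 * (1 - rate):.0f} a week")  # ~£20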

The mother of all welfare states, or Blade Runner with food stamps, is also referred to by Russell Mead as the ‘blue model’. The American colour coding for political allegiance is alien to most Europeans: red is the natural colour of socialists and blue of conservatives in the U.K. and on the continent. In the following 15-minute video, however, Russell Mead attaches ‘blue’ to Democrats and ‘red’ to Republicans as he critiques the ‘blue model’.

This analysis of the problem is correct in one respect: if technology gradually replaces ever more workers, then the state may face a choice between letting them fall into poverty and providing them with a living stipend. Yet Russell Mead doesn’t quite leave it there. He attaches a moral imperative to work, much as many critics of the unemployed in Benefits Street have done as well.

Nonetheless, I think even the left needs to think carefully about Russell Mead’s three-pronged solution: 1) make hiring easier and cheaper, 2) promote the service economy and entrepreneurship and 3) feed the state to the people.

The last concept is the most interesting and perhaps the most open to misinterpretation. Feeding the state to the people doesn’t have to mean a Tea Party call for small government (which frequently is a smokescreen for the promotion of financial and corporate interests). Rather, the state could provide the financial, physical and legal capital to establish community-centred markets that help distribute community-centred goods and services.

This may not be as efficient as the production of goods and services by the global economy, but we seem to be suffering from diminishing welfare returns to production efficiency anyway. If so-called efficiency leads to a vibrant elite but a dysfunctional mass—albeit one with plenty of toys—then this is unlikely to produce a happy and healthy society. Nor for that matter will it produce a stable climate. Time for some fresh thinking.

Hiding from the Computers Part 6: What Is to Be Done?

In my last post, I argued that technology-driven productivity gains had to go somewhere, and the data suggested that the recipients were cognitive workers undertaking non-routine tasks (and thus safe from computers) and the holders of technology-based capital—generally the same set of people.

At this stage, it is worth stressing that the workplace is a little more complex than a simple grid of non-routine cognitive workers, routine cognitive workers, routine manual workers and non-routine manual workers as in the simplified models that underpin the job polarisation economic literature (click for larger image).

[Image: Job Polarization (Autor)]

As the chart above shows (from a 2003 paper by Autor, Levy and Murnane), we are really talking about tasks as opposed to jobs. Every type of job will contain elements of the non-routine cognitive and the routine manual, even the most specialist, such as the non-routine cognitive profession of brain surgery, or the most routine, such as production-line work in a meat-packing factory.

What technology is doing is disassembling and then reassembling categories of jobs through extracting and then automating the routine elements. The semi-attended customer activated terminals (SACATs) that I covered in my third post in the series are not replacing any individual employee; what they are doing is taking a group of, say, 10 employees and stripping out a subset of their routine tasks until perhaps eight employees’ worth of non-routine cognitive and manual tasks are left.

Articles such as Farhad Manjoo’s “Humans 1, Robots 0: Cashiers Trump Self-Checkout Machines at the Grocery Store” in The Wall Street Journal completely miss the point. This is not a straight fight of human against machine over all aspects of a cashier’s work. It is a tussle between human and machine over individual tasks. And the very fact that self-checkouts exist and are proliferating means that the machines are winning individual task battles. Indeed, that is why U.S. Bureau of Labor Statistics data show declining cashier numbers, as I referenced in Part 3. So Manjoo’s article should really have been titled “Humans 5, Robots 1: Self-Checkout Machines Start Winning Battles Against Cashiers at the Grocery Store”.

Once we realise that we are really talking about aggregations of tasks when we are looking at the job market, it is easier to understand the chart below with which I began the series (click for larger image). Take ‘Office and Administrative Support’, for example. The workers who have some defence against technology are those who have concentrated on the non-routine cognitive aspects. But their jobs will also change as technology keeps looking for ways to reformulate as routine processes the bits of their work that currently appear non-routine. When enough of such tasks can be aggregated at the right technological price, a job will be eliminated and the remaining workers will have a different task set than that which existed before.

[Image: Probability of Computerisation]

Faced with such a threat to the job market, I ended my last post with two possible policy prescriptions: 1) educate and train routine manual workers to become non-routine cognitive workers or 2) redistribute income from the cognitive workers to the non-routine manual workers. Actually, there are a couple more alternatives: 3) do nothing or 4) carve out more of human life that is not encompassed by the market economy.

So what will happen if governments do nothing? Continue reading

Hiding from the Computers Part 5: Follow the Money

In my last post, I explained how the academics behind the job polarisation literature (declining middle class) have given us a framework for understanding the emergence of very clear winners and losers in the modern workplace. Yet most of these scholars have refused to extend their analysis to justify any fear of technology-led mass unemployment.

According to these economists, the disappearing middle class—due to the death of white collar routine cognitive work carried out by office employees and blue collar routine manual work performed by factory employees—will reappear in non-routine cognitive or non-routine manual jobs. In so doing, these academics have generally wasted few opportunities to bash lump-of-labour advocates; that is, those people who believe that there exists a fixed pool of jobs that computers are draining away.

Nonetheless, there are cracks in the facade. For example, back in 2003 Paul Krugman (who has acted as a commentator on the job polarisation literature rather than an originator) was rock solid behind the economics profession’s consensus position, as can be seen here. But by December 2012 we see a significant U-turn in a piece called “Rise of the Robots” in The New York Times.

However, I would say that the consensus, while shaky, is still in place. Moreover, for a high-voltage polemic against the lump of labour theory, I recommend you read “Are Robots Taking Our Jobs, or Making Them?” by Ben Miller and Robert Atkinson of the Information Technology and Innovation Foundation. Like all good polemics, the essay assembles all the evidence that supports their thesis of ‘don’t worry, be happy’ and omits any evidence that contradicts it.

Nonetheless, it is a good, comprehensive exposition of the consensus position of the economics profession that has dominated thinking for decades. Further, we can actually take their analysis, but subvert it somewhat to fit the facts of what is actually happening in the job market, and from there think about solutions.

Miller and Atkinson sum up their position thus:

Both history and scholarly analysis have clearly and consistently refuted the notion that increased productivity leads in the moderate to long term to higher unemployment. This is because rising productivity increases overall wealth, and in a competitive economy that increased wealth gets reallocated to create additional demand that requires new workers.

This is a bold statement that I would agree used to be true, but may no longer be valid. But before we look at any data, let’s focus on the mechanism that they claim supports their assertion. The next sentence is key: Continue reading

Hiding from the Computers Part 4: Time to Get Skeptical on Lump of Labour Skeptics?

A decade or so ago, any suggestion that technology could be a major driver of inequality, let alone unemployment, would generally have been met with contempt by your average economist. Anyone questioning the beneficence of technology would have been accused of succumbing to the ‘lump of labour‘ fallacy. Simplistically, the lump of labour fallacy refers to the belief that there exists a fixed pot of labour; accordingly, if a computer eats some of the pot, there is less left for everyone else.

As an example, here is Paul Krugman skewering the lump of labour fallacy back in 2003:

As the derisive name suggests (the lump of labor fallacy), it’s an idea economists view with contempt, yet the fallacy makes a comeback whenever the economy is sluggish.

And for decades, The Economist magazine—a generally intelligent supporter of free markets and free trade, by which I mean it has enough intellectual curiosity to explore counterarguments—has frothed at the mouth like a rabid Tea Party activist whenever the idea of a fixed supply of labour is raised.

In a very funny parody, Tom Walker of New York’s Monthly Review (who must have too much time on his hands) stitched together sentences containing the words “lump of labour” from more than a decade’s worth of The Economist to create this article here. It perfectly captures the magazine’s angry but condescending tone whenever the issue is raised.

Walker was at the time writing in defence of shorter working hours in the face of rising unemployment, a somewhat tangential topic to where we are going with this post (although I will touch on it in my next), but the point I think he was trying to make was simple: if you believe there is any limit on the amount of work available, The Economist thinks you are an idiot.

But then a strange thing happened. Some mainstream economists started to venture the opinion that technology was making the labour market act weird. They hadn’t quite got to the stage of taking the lump of labour seriously, but they were now prepared to admit that technological progress was a two-edged sword—at least for some workers.

As with most intellectual shifts, the new thinking has been rather messy, occasionally self-contradictory and, of course, hotly disputed. But out of it we can find some seminal academic papers that deepen our understanding of what is going on. I will highlight one to begin with: the 2006 paper “The Polarization of the U.S. Labor Market” by David Autor, Lawrence Katz and Melissa Kearney.

Autor et al. had noticed that the U.S. labour market was changing shape under the influence of the awkward acronym SBTC (aka skill-biased technical change). Within the skills-based distribution of jobs, employment was growing at the top end and growing at the bottom end, but hollowing out in the middle. Furthermore, back at the top end, wages were not only rising but also diverging; in other words, inequality among the wealthy, so beautifully spoofed here by The Onion, was actually true. In contrast, no such divergence in fortunes was seen at the bottom.

Accordingly, the new trend the paper tracked was not one of ‘lifting-all-boats’ technology-led economic growth. Rather, the boats in the Autor et al. paper are being thrown all over the place—with some capsizing.

Now, typical of its type, the paper contains an empirical section looking at the labour-market data, and then a model which tries to make sense of what is going on. In the model, jobs sit in a grid of attributes: work is cognitive or manual, but it is also routine or non-routine. Of course, such a model removes the grey scale of real life, but that is not important for our basic understanding.

Autor and his co-authors then go on, somewhat confusingly, to simplify the workforce even further into three categories rather than four: 1) abstract, meaning cognitive, non-routine jobs; 2) routine, covering both routine cognitive and routine manual jobs; and finally 3) manual, meaning non-routine manual jobs. An earlier paper, also co-authored by Autor, has a useful table showing what is going on in the grid:

[Image: Job Polarization (Autor)]

Moving further into their model, workers are divided into those with tertiary education and those educated only up to high school. The latter can’t perform abstract tasks but can switch between routine and manual tasks. Finally, technology is seen as having two impacts: it mostly substitutes for routine tasks, but mostly complements abstract tasks—that is, it makes the abstract workers more productive.
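
As a toy rendering of that asymmetry (a minimal sketch of my own, not the Autor et al. model itself), cheaper computing caps the wage of routine labour while multiplying the output of abstract labour:

    # Toy sketch of substitution vs complementarity (illustrative only).
    def routine_wage_ceiling(computer_cost_per_task):
        # Routine labour competes head-on with machines, so its wage
        # cannot rise above the machine's cost of doing the same task.
        return computer_cost_per_task

    def abstract_output(hours, computing_power):
        # Abstract labour is complemented: more machine power makes
        # each hour of non-routine cognitive work more productive.
        return hours * (1.0 + computing_power)

    for power, cost in [(1, 10.0), (4, 2.5), (16, 0.6)]:
        print(f"computing x{power}: routine wage ceiling ${routine_wage_ceiling(cost):.2f}, "
              f"abstract output per 10 hours = {abstract_output(10, power):.0f}")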

The word complement is key, since the complementarity between technology and routine cognitive and manual labour has been the driver of the explosive growth in living standards since the industrial revolution. Ned Ludd was wrong to smash up two stocking frames in 1779 because his labour, with the course of time, could always be paired with new technology that required a routine cognitive or routine manual human complement.

As Gregory Clark’s book “A Farewell to Alms” demonstrates, this was a win-win situation for the working class. As a result, workers were more ‘pulled’ out of the countryside and into the cities by attractive relative wages than they were pushed out by evil property owners enclosing their land. And while Marx and Engels may have disputed the cause of the migration, they applauded the result; from the Communist Manifesto:

The bourgeoisie has subjected the country to the rule of the towns. It has created enormous cities, has greatly increased the urban population as compared with the rural, and has thus rescued a considerable part of the population from the idiocy of rural life.

At this point, I must stress that Autor has taken every opportunity to distance himself from lump of labour advocates since coming out with the polarisation thesis. From a 2010 paper (here):

This ‘lump of labor fallacy’—positing that there is a fixed amount of labor to be done so that increased labor productivity reduces employment—is intuitively appealing and demonstrably false. Technological improvements create new goods and services, shifting workers from older to new activities. Higher productivity raises incomes, increasing demand for labor throughout the economy. Hence, in the long run technological progress affects the composition of jobs not the number of jobs…

…It is not fallacious, however, to posit that technological advance creates winners and losers.

And in an op-ed piece in the 24 August 2013 New York Times:

Computerization has therefore fostered a polarization of employment, with job growth concentrated in both the highest- and lowest-paid occupations, while jobs in the middle have declined. Surprisingly, overall employment rates have largely been unaffected in states and cities undergoing this rapid polarization. Rather, as employment in routine jobs has ebbed, employment has risen both in high-wage managerial, professional and technical occupations and in low-wage, in-person service occupations.

Hmmm. It is true that jobs have risen in the high-end cognitive occupations, but the rise in low-wage manual occupations has been minimal, and far too small to absorb the displaced middle. We can see these numbers in a 2013 note from the Federal Reserve Bank of New York by Albanesi et al. Indeed, the job polarisation highlighted by Autor is impacting the aggregate labour market beyond just relative wages. First, both routine cognitive and routine manual jobs have been in structural decline:

[Image: Breakdown of Employment Shares]

And the routine jobs are the first to go in recessions and the last to come back:

[Image: Changes in Unemployment]

And from my previous posts, this chart of labour participation shows the net effect of all these moving parts. In a modern state like the U.S., the unemployed hide where they can, seeking refuge, for example, in disability claims, so the labour participation rate goes down.

[Image: FRED, Civilian Labor Force Participation Rate]

Autor’s faith in the inability of technology to decrease jobs is thus stymied by the data—at least over the last decade. The model in Autor’s original 2006 article predicted that the increased productivity of non-routine cognitive jobs would lead to income effects (greater wealth for the cognitive elite) that would in turn create higher demand for non-routine manual jobs.

Nonetheless, this is an empirical observation from old data, not a truth that comes out of the model. What Autor admits is that most routine cognitive and routine manual workers can’t price their labour at a sufficiently low rate to compete with computers. Accordingly, they have to find refuge in work that is not in direct competition with computers (or technology broadly defined).

Looking at the Albanesi charts, however, the non-routine manual job category is far smaller than the routine cognitive and routine manual ones. So we have a huge problem of absorption. Nonetheless, to a classical economist, a price exists at which the market will clear. But the dirty little secret of the lump of labour skeptics is that the market may clear at a price that doesn’t provide a liveable wage (as per the Boxer and Napoleon example in Part 2).

Further, under the Autor model, income effects associated with the ever-prospering non-routine cognitive elite could compensate for the cut-throat competition within the non-routine manual sector. As the geeks’ wages rise, the opportunity cost of doing their own washing rather than cranking out computer code grows ever steeper. In the language of economists, we have income elasticity of demand effects (a richer cognitive elite) for non-routine manual labour coupled with price elasticity of demand effects (lower manual labour wages making non-routine manual workers more attractive to hire).
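
A minimal sketch of those two channels (my own illustration with made-up elasticities, not a calibrated model):

    # Toy constant-elasticity demand for non-routine manual services.
    def manual_demand(elite_income, manual_wage,
                      income_elasticity=1.0, price_elasticity=0.5):
        return (elite_income ** income_elasticity) * (manual_wage ** -price_elasticity)

    base = manual_demand(100, 10)
    print(manual_demand(150, 10) / base)  # richer elite -> demand up (income effect)
    print(manual_demand(100, 7) / base)   # cheaper manual labour -> demand up (price effect)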

I would call this the ‘Downton Abbey economy’: a return to an Edwardian-style wealthy elite employing an army of non-cognitives. But how many workers would a modern-day Lord Grantham need to employ to run Downton Abbey in the requisite style? I would guess fewer than a fifth of the original staff, given that technology has eliminated most of the routine jobs that all the scullery maids and manservants used to perform. Yet Autor remains resolutely upbeat. In the New York Times op-ed piece he says this:

The outlook for workers who haven’t finished college is uncertain, but not devoid of hope. There will be job opportunities in middle-skill jobs, but not in the traditional blue-collar production and white-collar office jobs of the past. Rather, we expect to see growing employment among the ranks of the “new artisans”: licensed practical nurses and medical assistants; teachers, tutors and learning guides at all educational levels; kitchen designers, construction supervisors and skilled tradespeople of every variety; expert repair and support technicians; and the many people who offer personal training and assistance, like physical therapists, personal trainers, coaches and guides. These workers will adeptly combine technical skills with interpersonal interaction, flexibility and adaptability to offer services that are uniquely human.

Nonetheless, in other statements by Autor, one can sense some ‘wobble’. In an MIT Technology Review article called “How Technology Is Destroying Jobs” by David Rotman, Autor comes out with this:

“There was a great sag in employment beginning in 2000. Something did change,” he says. “But no one knows the cause.”

So something is going on with total employment, even if he doesn’t admit it is a lump of labour problem for non-cognitive workers in the face of advancing technology.

In the same article, Lawrence Katz, Autor’s co-author on the original 2006 polarisation paper, goes even further. While reiterating that the historical record shows no instance of a decrease in jobs over an extended period following technological change, Katz confesses that this time could possibly be different:

Katz doesn’t dismiss the notion that there is something different about today’s digital technologies—something that could affect an even broader range of work. The question, he says, is whether economic history will serve as a useful guide. Will the job disruptions caused by technology be temporary as the workforce adapts, or will we see a science-fiction scenario in which automated processes and robots with superhuman skills take over a broad swath of human tasks? Though Katz expects the historical pattern to hold, it is “genuinely a question,” he says. “If technology disrupts enough, who knows what will happen?”

So lump of labour advocates are no longer ignored even if they still don’t get much respect. And if automated processes and robots do “take over a broad swath of human tasks” and radically downsize the job market, what is to be done? I will work through the implications of a lump of labour victory in my next two posts.

Hiding from the Computers Part 3: “Unexpected Item in Bagging Area”

Time for some micro. If we go back to the Frey and Osborne job destruction chart (see my post here for a more detailed explanation), we can see that the Sales and Related (the red) category is one of the first to get crushed (click for larger image).

[Image: Probability of Computerisation]

But internet shopping, along with Semi-Attended Customer Activated Terminals (SACATs) telling us that there is an “unexpected item in the bagging area”, has been around for well over a decade. To get a sense of what has been going on in the US and UK, you need a comparable data set, and the International Labour Organisation (ILO) is the place to find it (here).

Frustratingly, the time series only go up to 2008. Nonetheless, the data line up almost perfectly with the Frey and Osborne chart (click for larger image):

[Image: US Employment by Occupation]

The three worst-performing employment categories were agriculture, manufacturing, and wholesale/retail, all showing declines against a 5.5% rise in US employment as a whole for the period. Incidentally, wholesale/retail is the largest employer in the US: 20.6 million of the 145.4 million Americans in work in 2008, or 14.2% of the employed workforce. And the UK? Ditto (click for larger image): retail/wholesale shrank by 1%, only beaten by the incredible 14.0% slump in manufacturing employment.

[Image: UK Employment, ILO Data]

For more granular and up-to-date data, we have to go to national statistical offices. Here is how employment has evolved in the US retail sector over the last decade according to the United States Department of Labor’s Bureau of Labor Statistics (see here, click for larger image):

[Image: US Employment in Retail Sector]

Total employment in the Sales and Related sector grew 3.7% from 2002 to 2012 to reach 13.8 million, with a noticeable downward blip over the Great Recession. Employment for cashiers, however, declined by 1.8% to 3.3 million, with no bounce back after the Great Recession. Why the difference? Well, my guess is that it is because of these:

[Image: Self-Checkout Machines]

Where I live in the UK, they have become ubiquitous, although certain national-brand supermarkets don’t use them. Nonetheless, Semi-Attended Customer Activated Terminals (SACATs) are still relatively basic in functionality and can only replace a part of what traditional cashiers do. So let’s break down the job of a cashier into parts, and then think about which bits existing SACATs are replacing and which parts the new generation of SACATs will be able to replicate. The cashier job description from the UK National Careers Service:

[Image: Checkout Operator Profile]

To this list I would add the vetting of purchases of alcohol to see that the purchasers comply with age restrictions.

If we look at this list, we can see that old-style SACATs can only really do bullet points 2 and 7, and partially 6. Moreover, the customer is roped into performing bullet points 1 and 5. It’s actually a quite limited repertoire, and this stems from the bottleneck variables in the Frey and Osborne paper, in particular finger and manual dexterity (plus visual perception, which they don’t include).

[Image: O*NET Variables, January 2014]

The SACAT has no manual dexterity. It co-opts the customer to pick up irregular-sized groceries and present them to the machine with the bar code in the correct position. Incidentally, bar-code technology has, in itself, eliminated the finger dexterity required to input prices into a cash register. However, a 360-degree checkout called Rapid Scan, currently being trialled by the supermarket operator Asda in the UK (here), removes the need for the customer (or cashier) to jiggle each good and is three times faster than traditional hand-scanning approaches.

[Image: 360-Degree Checkout]

Such visual perception issues don’t show up in the Frey and Osborne chart of technology bottlenecks above, perhaps because they think the technology is already there. However, while the visual perception of the Google self-drive car may be better than a human’s, it certainly isn’t at a price point ready for mass deployment.

As an example, the old-style SACATs can’t recognise fruit and vegetables. From a Wall Street Journal article generally downbeat on the automated checkout:

What’s so cognitively demanding about supermarket checkout? I spoke to several former checkout people, and they all pointed to the same skill: Identifying fruits and vegetables. Some supermarket produce is tagged with small stickers carrying product-lookup codes, but a lot of stuff isn’t. It’s the human checker’s job to tell the difference between green leaf lettuce and green bell peppers, and then to remember the proper code.

“It took me about three or four weeks to get to the point where I wouldn’t have to look up most items that came by,” said Sam Orme, a 30-year-old grad student who worked as a checker when he was a teenager.

Another one-time checker, Ken Haskell, explained that even after months of doing the job, he would often get stumped. “Every once in a while I’d get a papaya or a mango and I’d have to reach for the book,” he said.

But from Datalogic we have just seen the market rollout of checkouts with Visual Pattern Recognition (ViPR) that can identify items without a barcode.

[Image: Datalogic Visual Pattern Recognition]

Intriguingly, this checkout incorporates machine learning (ML). It has a hit rate of around 98%, with the 2% of unidentified objects requiring recourse to a human operator. Once an object is identified, however, the image and the object go back into the database, improving the identification performance of all such machines in future.
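
Purely as an illustration of that feedback loop, here is a sketch of the general recognise-or-escalate pattern (my own pseudo-implementation, not Datalogic’s actual system; all names are hypothetical):

    from typing import Callable, Dict, Tuple

    # Signature -> product map; in a real deployment this would be a central
    # database shared by, and improving, every terminal in the fleet.
    product_database: Dict[str, str] = {}

    def ask_human_operator(signature: str) -> str:
        # Stand-in for the ~2% fallback: a staff member identifies the item.
        return input(f"Please identify item {signature}: ")

    def checkout_item(signature: str,
                      classify: Callable[[str], Tuple[str, float]]) -> str:
        product, confidence = classify(signature)
        if confidence >= 0.98:
            return product  # the ~98% of items recognised automatically
        # Escalate the remainder to a human, then feed the answer back into
        # the shared database so every machine improves in future.
        product = ask_human_operator(signature)
        product_database[signature] = product
        return product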

But what about those annoying fruit and vegetables? Well, Datalogic has a whole host of patents pending that deal with this particular problem. Here is one that incorporates X-ray fluorescence into the Visual Pattern Recognition software:

Unlike a visual recognition system, a system using X-ray fluorescence utilizes the chemical makeup of the item or produce being tested for recognition. The problems in visual systems associated with attempting to interpret variations in color, texture and size are minimised … because the X-ray fluorescence system analyzes a fixed chemical composition. For example, kiwi has distinctly different chemical makeup from that of a potato, and so the X-ray identification process that utilizes the chemical makeup for identifying a product can be quite precise. Further, effects of ambient light and other visually sensitive environmental factors become less of a concern and less intrusive into the object recognition process. As an additional advantage, the FIG. 12 system can make it possible to recognize and characterize a paper bag with produce in it, in which case there is not a need to take the produce or other items to be recognized out of the bag to be identified.

So not only can we now identify the difference between a kiwi and a potato, we can do it without taking the items out of their paper bag! Human operator: beat that.

Another area in which humans outcompete SACATs in the visual perception sphere is in asking young-looking purchasers of alcohol to produce ID. Yet here again facial recognition software is entering the mainstream. From Bloomberg:

In the U.K., Tesco Plc (TSCO) is installing face-scanning technology at its gasoline stations to determine customers’ ages and gender so tailored advertisements can be delivered to them on screens at checkouts.

OK, let’s think about money. A pod of old-style SACATs (4 terminals) costs around $60,000, but, as with all such technology, I expect the cost to decline rapidly. I have no idea what the depreciation schedule is for a SACAT, but in line with similar equipment I would guess it is 4 years. Taking straight-line depreciation, you would be expensing a pod at $15,000 per annum. On top of this, we have to take into account the weighted average cost of capital (WACC) used to purchase the original equipment. For Walmart, it is 6% (here), so that’s another $3,600 per annum. Let’s round the whole package up to $20,000 (there will be some maintenance costs in there as well).

So how many cashiers can you buy for $20,000? From the US Bureau of Labor Statistics again:

[Image: Cashier Wages]

But $20,000 per annum (roughly £12,000) is the direct wage. Even for a notorious employee-benefit miser like Walmart, the all-in cost is going to be higher (you at least have to provide staff toilets). I will lowball it at $5,000 on top of the base wage. So the cost of running a SACAT pod will be 80% of the cost of one cashier. However, a pod runs 24/7; a cashier doesn’t. The cashier also takes days off and holidays. With an average American working 260 days per year, the cashier is available just under 75% of the year. (OK, the cashier may work more days to feed the kids, but if he or she does, even Walmart has to pay more.)

We can now create an efficiency equation to find the point at which the SACAT pod is in equilibrium with human cashiers (that is, the point at which the status quo ratio of SACATs to human cashiers persists), remembering that a pod has four checkout points (so we are equating hours of SACAT lane availability with hours of human availability in terms of dollar expense). Using the numbers above, an hour of human cashier availability costs around 21 times as much as an hour of SACAT lane availability; for the status quo to persist, then, the human cashier must be 21 times as efficient as the SACAT, or, put another way, the SACAT need only deliver 5% of the efficiency of a human. Here I mean ‘efficiency’ in the broadest possible sense—not just speed of processing a bunch of groceries. The number incorporates the ability to identify a banana, help put irregular-sized objects into an annoying plastic bag whose top doesn’t open, deal with special-offer vouchers of increasing complexity, spot dodgy shoplifters, ask whether the bearded teenager trying to buy a beer is really over 18, get a plastic security tag off a bottle of vodka, and be asked to work extra shifts on a busy Friday evening but not come in on a Monday afternoon.
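
For those who want to check the arithmetic, here is a minimal sketch of the calculation (the 8-hour shift length is my assumption; the other numbers are from the paragraphs above):

    # Back-of-envelope version of the SACAT-versus-cashier efficiency equation.
    pod_cost_per_year = 20_000.0      # depreciation + WACC + maintenance, 4 lanes
    cashier_cost_per_year = 25_000.0  # $20,000 wage plus ~$5,000 of on-costs

    pod_lane_hours = 4 * 24 * 365     # four lanes available 24/7
    cashier_hours = 260 * 8           # 260 working days; 8-hour shift assumed

    cost_ratio = ((cashier_cost_per_year / cashier_hours)
                  / (pod_cost_per_year / pod_lane_hours))
    print(f"{cost_ratio:.0f}x")     # ~21: a cashier-hour costs ~21x a SACAT lane-hour
    print(f"{1 / cost_ratio:.0%}")  # ~5%: the efficiency a SACAT lane needs to break even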

Perhaps this is why SACATs have stalled the growth in cashier numbers but not really reversed it. Nonetheless, we are only at 5%! And Datalogic and its competitors are hell-bent on taking the SACAT into the other 95% of functionality currently occupied by humans.

In my next post, I will leave SACATs briefly and jump back to the macro picture to see how economists have been struggling to explain developments in the labour market in the face of technological change over the last decade or so. After that, it will be back to SACATs to ask who benefits from them and how their installation, and that of other technologies like them, can dramatically change the shape of an economy. And, finally, I will ask: what is to be done?