Thursday, March 30, 2017

How Between-Firm Inequality Drives Economic and Social Inequality

"The real engine fueling rising income inequality is `firm inequality': In an increasingly winner-take-all or at least winner-take-most economy, the best-educated and most-skilled employees cluster inside the most successful companies, their incomes rising dramatically compared with those of outsiders. This corporate segregation is accelerated by the relentless outsourcing and automation of noncore activities and by growing investment in technology."

So argues Nicholas Bloom in his essay, "Corporations in the Age of Inequality," which appears as a cover story in the March 2017 issue of the Harvard Business Review. The basic idea won't be new to long-time readers of this blog. For example, I discussed some earlier work by Bloom together with co-authors Jae Song, David J. Price, and Fatih Guvenen in "Earnings Inequality Between Companies" (July 6, 2015) and a study by the OECD on this subject in "Productivity Growth and the Diffusion Problem" (July 14, 2015). But this essay is a compact and very readable overview of the work.

As a starting point, instead of looking at the distribution of income across people, look at the distribution of average salaries across companies. As this figure shows, average salaries at companies in the 99th percentile have grown substantially, while average salaries at companies in the 25th and 50th percentiles haven't grown much.

[Figure from Bloom's essay: growth in average company salaries by percentile]

This pattern is also apparent within industries: that is, if you just look at manufacturing firms, or just look at services firms, the labor productivity of the top firms is pulling away from the labor productivity of other firms. Here's a figure from Bloom's essay, drawing on the OECD study mentioned earlier.




Bloom summarizes what is happening this way: 
"In other words, the increasing inequality we’ve seen for individuals is mirrored by increasing inequality between firms. But the wage gap is not increasing as much inside firms, our research shows. This may tend to make inequality less visible, because people do not see it rising in their own workplace. This means that the rising gap in pay between firms accounts for the large majority of the increase in income inequality in the United States. It also accounts for at least a substantial part in other countries, as research conducted in the UK, Germany, and Sweden demonstrates. ... I believe that much of the rise of between-firm inequality, and therefore inequality in general, can be attributed to three factors: the rise of outsourcing, the adoption of IT, and the cumulative effects of winner-take-most competition."
In other words, a number of leading companies are now focusing on a very specific core competence. For other services, they hire outside contractors (either locally or long-distance). Bloom writes: 
"Employees inside winning companies enjoy rising incomes and interesting cognitive challenges. Workers outside this charmed circle experience something quite different. For example, contract janitors no longer receive the benefits or pay premium tied to a job at a big company. Their wages have been squeezed as their employers routinely bid to retain outsourcing contracts, a process ensuring that labor costs remain low or go ever lower. Their earnings have also come under pressure as the pool of less-skilled job seekers has expanded, due to automation, trade, and the Great Recession. In the process, work has begun to mirror neighborhoods — sharply segregated along economic and educational lines." 
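The between-firm/within-firm distinction in Bloom's summary is, mechanically, a variance decomposition: total wage inequality equals inequality between firm averages plus inequality around each firm's own average. Here is a minimal sketch in Python; the firm names and wage figures are invented purely for illustration:

```python
# Decompose wage dispersion into between-firm and within-firm parts.
# Toy data -- firm labels and wages are invented for illustration.
from statistics import mean

wages = {
    "AlphaTech": [95, 120, 150, 200],   # high-wage firm
    "BetaServ":  [30, 35, 40, 45],      # low-wage (outsourced-services) firm
    "GammaCo":   [50, 60, 70, 80],
}

all_wages = [w for ws in wages.values() for w in ws]
n = len(all_wages)
grand_mean = mean(all_wages)

# Total inequality: population variance of all wages.
total_var = sum((w - grand_mean) ** 2 for w in all_wages) / n

# Between-firm part: variance of firm means, weighted by firm size.
between = sum(len(ws) * (mean(ws) - grand_mean) ** 2
              for ws in wages.values()) / n

# Within-firm part: dispersion around each firm's own mean.
within = sum(sum((w - mean(ws)) ** 2 for w in ws)
             for ws in wages.values()) / n

# The decomposition is an identity: total = between + within.
assert abs(total_var - (between + within)) < 1e-9
print(f"total {total_var:.1f} = between {between:.1f} + within {within:.1f}")
```

In this toy data most of the dispersion comes from the gaps between firm averages rather than from pay gaps inside firms, which is the pattern Bloom and his co-authors report in the actual US earnings data.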
The rise of between-firm inequality raises some social questions that go beyond the general issue of inequality between individuals. For example, it suggests that in the past, successful firms were more likely to have played a role in redistributing income, in the sense that all the employees of a successful firm tended to share, at least to some extent, in the firm's success. In the past, successful firms could offer a kind of career ladder, where a combination of experience and training helped some of their entry-level workers move up to middle-class jobs. In the past, successful companies offered a kind of integration across high-wage and low-wage jobs, because people in all kinds of jobs were more likely to have a common employer.

In contrast, a rise in between-firm inequality suggests that the US and other leading economies are becoming more economically segregated, in the sense that those with high pay and those with lower pay are becoming less likely to have the same employer. It means that the classic "American dream" success story, of someone being hired in the mailroom or as a secretary or janitor and then getting promoted up the company ladder, is less likely to occur. Nowadays, those jobs in the mailroom, the secretarial pool, or the janitorial staff are more likely to involve working for an outside contractor. In that sense, some of the rungs at the bottom of the ladder of success have been sawed off. 

Wednesday, March 29, 2017

“In Order That They Might Rest Their Arguments on Facts”

There is considerable concern among economists and social scientists that the Trump administration will take a hatchet to the government statistical programs. I find it difficult to argue for the value of government statistics, because their value seems so obvious to me that it is hard to imagine someone who both disagrees with me on this point and is potentially persuadable. However, Nicholas Eberstadt, Ryan Nunn, Diane Whitmore Schanzenbach, and Michael R. Strain have taken on the job in a March 2017 working paper, "“In Order That They Might Rest Their Arguments on Facts”: The Vital Role of Government-Collected Data." The authors are from both the Hamilton Project at the Democratic-leaning Brookings Institution and the Republican-leaning American Enterprise Institute. 

They point out that total federal spending on statistics is about 0.18% of the federal budget--and just to be clear, that's not 18%, but rather a little less than one-fifth of 1 percent. As the authors point out in detail, with examples, the potential benefits of this information are considerable. The federal budget is about $3.6 trillion, and of course the federal regulatory apparatus imposes additional costs. Information helps to direct government spending, taxes, and regulations, and it helps citizens to hold their government accountable. In addition, businesses and households often build on government statistics when making their own plans and decisions, thus allowing the economy to function more smoothly than if this information were only available, in partial chunks, from private providers.
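The budget arithmetic is worth making concrete. A back-of-envelope calculation using the two figures in the paragraph above (0.18% of a roughly $3.6 trillion budget):

```python
# Back-of-envelope: what 0.18% of the federal budget amounts to in dollars.
federal_budget = 3.6e12     # ~$3.6 trillion total federal budget
stats_share = 0.0018        # 0.18% spent on statistical programs

stats_spending = federal_budget * stats_share
print(f"${stats_spending / 1e9:.1f} billion")   # ~$6.5 billion
```

Roughly $6.5 billion per year, spread across all of the federal statistical agencies listed below.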

But these arguments for the value of government statistics are old and well-known; indeed, they date back to the legislation involved in the first Census, back in 1790. Article I, Section 2 of the just-adopted US Constitution called for an enumeration of people to determine the number of members each state would have in the House of Representatives: "The actual Enumeration shall be made within three Years after the first Meeting of the Congress of the United States, and within every subsequent Term of ten Years, in such Manner as they shall by Law direct." But when the bill to enact the first Census came up in 1790, James Madison (then a member of the House of Representatives) argued that there was a great opportunity here to do more than just counting heads, and that it would be useful to gather more information. Our records of Congressional debates from that time do not quote exactly verbatim, but instead are paraphrased. The fuller comments attributed to Madison are below, but here are some highlights of what he had to say on January 25 and then on February 2, 1790:
"This kind of information, he observed, all Legislatures had wished for; but this kind of information had never been obtained in any country. ... If the plan was pursued in taking every future census, it would give them an opportunity of marking the progress of the society, and distinguishing the growth of every interest. This would furnish ground for many useful calculations, and at the same time answer the purpose of a check on the officers who were employed to make the enumeration ... And I am very sensible, Mr. Speaker, that there will be more difficulty attendant on the taking the census, in the way required by the constitution, and which we are obliged to perform, than there will be in the additional trouble of making all the distinctions contemplated in the bill. ... I take it, sir, that in order to accommodate our laws to the real situation of our constituents, we ought to be acquainted with that situation. It may be impossible to ascertain it as far as I wish, but we may ascertain it so far as to be extremely useful ... If gentlemen have any doubts with respect to its utility, I cannot satisfy them in a better manner, than by referring them to the debates which took place upon the bills intended, collaterally, to benefit the agricultural, commercial, and manufacturing parts of the community. Did they not wish then to know the relative proportion of each, and the exact number of every division, in order that they might rest their arguments on facts, instead of assertions and conjectures?"
The modern arguments for government statistics are pretty much all there. From my perspective, I would only add that it's very useful to have statistics that are publicly available, and where the methods are openly discussed. You might not like, say, exactly how the poverty line is defined, or how the government statistics draw the line between those counted as unemployed and those counted as "out of the labor force," but at least the method is clear and the same method is used over time. The process is isolated from politics in various ways. For example, the many tasks involved in producing government statistics are typically divided up so that no one person can tell what the results will be until the end. Groups of outside experts are called in on a regular basis to evaluate and critique. Politicians have limited input to the actual data process. For example, "In the case of the BEA [Bureau of Economic Analysis], political appointees have very limited access to the data until after journalists receive them—one hour before the estimates are made public—further ensuring the impartiality of the process ..."

I'm all for managing every little sliver of the budget as effectively as we can. But to me, government statistical programs are part of what makes citizenship possible.

Maybe the best way to get an idea of the information available through the government is just to list some of the agencies involved, and to think about how government policies, business choices, and personal decisions would potentially be affected if substantially less information was readily and publicly available in these areas: 
"A substantial portion of our official statistics is produced by the 13 agencies that have statistical work as their principal mission. Excluding funding for the decennial census, approximately 38 percent of overall funding for Federal statistical activities provides resources for these 13 agencies. The principal statistical agencies are the: Bureau of Economic Analysis; Bureau of Justice Statistics; Bureau of Labor Statistics; Bureau of Transportation Statistics; Census Bureau; Economic Research Service; Energy Information Administration; National Agricultural Statistics Service; National Center for Education Statistics; National Center for Health Statistics; National Center for Science and Engineering Statistics; Office of Research, Evaluation and Statistics (SSA); and Statistics of Income (IRS). ... The remaining 62 percent of total resources allocated to statistical work in the U.S. Government is carried out by about 115 programs in the Executive Branch that conduct statistical activities in conjunction with another program mission, such as providing services (for example, medical care benefits for the elderly and the poor) or enforcing regulations (for example, with respect to the environment, transportation, or occupational safety). ... Additionally, there are other Federal agencies whose statistical activities are excluded because they are not part of the Executive Branch. These agencies include the Congressional Budget Office, which develops and applies projection models for the budgetary impact of current and proposed Federal programs; the Federal Reserve Board, which compiles the widely used Flow of Funds report and other monetary statistical series and periodically conducts the Survey of Consumer Finances; and the U.S. Government Accountability Office, which develops statistical data in evaluations of government programs." 
Afterword: Here is the fuller version of the comments attributed to Madison. On January 25, 1790, the House of Representatives began to consider the actual bill that would implement the census called for by the just-adopted Constitution. Here is the paraphrase of Madison's comments that day: 
"Mr. Madison observed, that they had now an opportunity of obtaining the most useful information for those who should hereafter be called upon to legislate for their country, if this bill was extended so as to embrace some other objects besides the bare enumeration of the inhabitants; it would enable them to adapt the public measures to the particular circumstances of the community. In order to know the various interests of the United States, it was necessary that the description of the several classes into which the community is divided should be accurately known. On this knowledge the Legislature might proceed to make a proper provision for the agricultural, commercial, and manufacturing, interests, but without it they could never make their provisions in due proportion. This kind of information, he observed, all Legislatures had wished for; but this kind of information had never been obtained in any country. He wished, therefore, to avail himself of the present opportunity of accomplishing so valuable a purpose. If the plan was pursued in taking every future census, it would give them an opportunity of marking the progress of the society, and distinguishing the growth of every interest. This would furnish ground for many useful calculations, and at the same time answer the purpose of a check on the officers who were employed to make the enumeration; forasmuch as the aggregate number is divided into parts, any imposition might be discovered with proportionable ease." 
And here is the fuller paraphrase of Madison's comments on February 2, 1790:
"And I am very sensible, Mr. Speaker, that there will be more difficulty attendant on the taking the census, in the way required by the constitution, and which we are obliged to perform, than there will be in the additional trouble of making all the distinctions contemplated in the bill. The classes of people most troublesome to enumerate, in this schedule, are happily those resident in large towns; the greatest number of artisans live in populous cities, and compact settlements, where distinctions are made with great ease.
"I take it, sir, that in order to accommodate our laws to the real situation of our constituents, we ought to be acquainted with that situation. It may be impossible to ascertain it as far as I wish, but we may ascertain it so far as to be extremely useful, when we come to pass laws, affecting any particular description of people. If gentlemen have any doubts with respect to its utility, I cannot satisfy them in a better manner, than by referring them to the debates which took place upon the bills intended, collaterally, to benefit the agricultural, commercial, and manufacturing parts of the community. Did they not wish then to know the relative proportion of each, and the exact number of every division, in order that they might rest their arguments on facts, instead of assertions and conjectures? Will any gentleman pretend to doubt, but our regulations would have been better accommodated to the real state of the society than they are? If our decisions had been influenced by actual returns, would they not have been varied, according as the one side or the other was more or less numerous? We should have given less encouragement in some instances, and more in others; but in every instance, we should have proceeded with more light and satisfaction."

Tuesday, March 28, 2017

Global Investment Spending: A Piece of the Puzzle

One of the distressing issues for the US economy has been the slow pace of investment, which is probably part of what made the US economic recovery sluggish in the last few years, and threatens to be part of a "secular stagnation" outcome of lower productivity growth in the years to come. In a globalizing economy, it's possible that one reason for low investment in the US (and other high-income economies) is just that the investment opportunities look better in emerging market economies. At least, this is an obvious interpretation of a figure from Thomas Klitgaard and Harry Wheeler in "The Need for Very Low Interest Rates in an Era of Subdued Investment Spending" (March 22, 2017), which appeared on Liberty Street Economics, a blog run by the Federal Reserve Bank of New York.

[Figure from Liberty Street Economics: investment as a share of GDP, advanced vs. emerging economies]

The figure points out that investment as a share of GDP was about the same in emerging markets and advanced economies in 2000, but since then, the investment/GDP ratio has risen in emerging markets and fallen in high-income countries. I'm sure this isn't all of the reason for stagnating US investment, but it's likely to be a piece of the puzzle.

Monday, March 27, 2017

Interview with Ricardo Reis: Macro and Reservism

The EconReporter website (an independent Hong Kong journalism project) has been publishing a series of interviews with prominent economists. In particular, a two-part interview with Ricardo Reis caught my eye. In the first part "The Performance of Macroeconomics is Not that Bad!" (posted February 9, 2017), Reis offers a limited and qualified defense of macroeconomics in the aftermath of the financial crisis; in the second part, "How to Use Interest on Reserve for Inflation Targeting" (posted February 11, 2017), Reis describes his current thinking about "reservism," or the practice of conducting monetary policy by having the central bank pay an interest rate on bank reserves. A few tidbits from the interviews:

Comparing forecasting in macroeconomics and epidemiology
"Economics is a field that has at best 200 years of systematic study. More likely we have less than 100 years of actual systematic and sustained study. The budget for research in the US, or even in the UK, that goes to economics is maybe just 1/1000 of the pounds or dollars that are invested in infectious diseases.
"Given the incredibly small amount of resources that we have invested in economics, given the fact that economics has been studied for only 200 years whereas in medicine we have been studying infectious diseases in a systematic way for 2000-plus years, I think it is quite remarkable that economics models do such a good forecasting job when compared to infectious disease models.
"We do not think that medicine is in a crisis whenever a new virus appears, even if that virus turns out to be quite deadly. Again, they have a budget that is a thousand times bigger than ours. They have hundreds more people working on it. They have been at it ten times more years than we have in economics. If you think about it in this perspective, I actually think the performance of economics, compared to other fields of knowledge, is actually not bad at all."

The failures of macroeconomics during the Great Recession were about forecasting, not understanding
"When we look back at the financial crisis, the main question is why economists did not predict what happened in 2007 to Bear Stearns and Lehman Brothers. Economists felt that they had an apparatus that allowed them to understand what was going on. So it was a failure of forecasting but not a failure of understanding.
"It didn’t take very long for economists to understand that their models of bank runs, asymmetric information models, and the importance of the financial sector could be used to understand what was going on. It is not the case that back in 2007 economists felt, `Wow! Something happened and we don’t even know how to think about it!'
"No, we had all sorts of ways to think about it. We may not have forecast it, but we had some tools. What you have seen is that, as we had the tools, there were attempts at using them, at understanding in what ways those tools were lacking, and at improving those tools and understanding them better. But there was not a feeling that we did not even have the tools to understand what was going on."
How central banks have shifted their basic monetary policy tools to paying interest on reserves
"In the last six years, the world of central banking, the way central banks operate, the way they set monetary policy, has changed radically. Even most people I admire don’t quite understand it. The main radical change is that we went from a system in which central banks do so-called open market operations, where they bought a few million bonds here and there, and in doing so affected the interest rate. Back in the day, central banks were using a fairly tiny balance sheet. Now we instead have a system in which central banks’ balance sheets are very large.
"Why are the balance sheets very large now? Because on the liability side, there is a huge amount of reserves, i.e. the deposits of banks at the central bank. That means that nowadays the way central banks actually control inflation is not through some federal funds market, nor some interbank market in Europe, but rather by actually choosing an interest rate on reserves. It is not like the target for the federal funds rate; it is an actual interest rate paid by the central bank. ... Reserves at the central banks used to be an asset that was essentially zero on the balance sheet. ... Now it is one of the largest financial assets in the US. So, we have this new asset which is fundamental to the financial market and to monetary policy, and it has fundamentally changed what the central bank balance sheet does.
"A lot of my research in the last year has been focused on understanding what it means and what it implies for the control of inflation, for the risk of central bank insolvency, and so on. That’s what I call Reservism: trying to understand what role this new asset called reserves has in the economy and in central bank policy.
"Once you understand that these reserves at the central banks are very large, the next thing one can do is try to understand to what extent they could be different. Reserves right now are overnight deposits at the central bank by banks, and they are paid a given interest rate; but once you start thinking about what they are, you realize that they could be different. They could, instead of promising an interest rate, promise a different payment. They could be, instead of overnight, a 30-day deposit. They could be lots of different things. That is what led to some of the more recent research."
What about a monetary policy of paying an inflation-adjusted interest rate on bank reserves? 
"The intuition is as follows: reserves are a very special asset with one particular property – reserves are the unit of account in the economy. One dollar of reserves defines what the dollar is. ... [R]eserves are the unit of account of the economy. One unit of reserves is always worth one dollar.
"Now imagine that instead of promising to pay them the nominal interest rate, you promise that the interest rate, i.e. the remuneration of the reserves, is indexed to the price level. So, de facto, the reserves essentially pay a real payment in the same way that inflation-indexed government bonds do. There is no barrier to doing this. After all, just as the government issues inflation-indexed bonds, so can the central bank. ... 
"On the other hand, there is a real interest rate pinned down in the economy that has to do with investment opportunities and how impatient people are. If the central bank promises a real payment, under the no-arbitrage condition this pins down the real value of the reserves today, as the real payment tomorrow divided by the real value today is equal to the real return. ... So, if we have pinned down the real value, what else have we pinned down? We have pinned down the price level. This is because the real value of one dollar of reserves is precisely given by the price level. ...
"Let me make it clear that this is an academic work, in the sense that I am not saying that the central banks should do it tomorrow. ...
"The [inflation-adjusted interest] payment on reserves rule, on the other hand, is not what we call a feedback rule. It doesn’t say how you should adjust the interest rate to what inflation is at some point. ... [I]n the Taylor Rule [for monetary policy] one needs to track not just current inflation but also certain things like the natural rate of interest or the natural rate of unemployment to know how to adjust the nominal interest rate. Under the payment-on-reserves rule, what you need to track is not any of these natural and unobserved factors but, instead, an observable variable, i.e. the current real interest rate in the economy."
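Reis's chain of reasoning can be written out compactly. This is my own sketch of the logic, with invented notation, not the formal model from his papers:

```latex
% Sketch of the price-level determination argument (my own notation).
% Reserves are the unit of account, so one dollar of reserves has real value
% equal to the inverse of the price level P_t:
\[
  v_t = \frac{1}{P_t}
\]
% Suppose the central bank promises an indexed payment worth x_{t+1} in
% real terms per dollar of reserves. No-arbitrage against the economy's
% real interest rate r_t requires the real value today to equal the
% discounted real payment:
\[
  v_t = \frac{x_{t+1}}{1 + r_t}
\]
% Combining the two conditions pins down the price level:
\[
  \frac{1}{P_t} = \frac{x_{t+1}}{1 + r_t}
  \quad\Longrightarrow\quad
  P_t = \frac{1 + r_t}{x_{t+1}}
\]
% With x_{t+1} chosen by the central bank and r_t observable in markets,
% the price level is determined without any feedback rule.
```

This is why, as Reis says, the rule requires tracking only an observable real interest rate rather than unobserved "natural" rates.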
More discussion from Reis follows. I don't know if having the central bank pay an inflation-adjusted return on bank reserves is a good idea, but I certainly agree with Reis's premise that thinking about different ways to conduct monetary policy through interest on bank reserves is really just getting underway.

Saturday, March 25, 2017

Spring 2017 Brookings Papers on Economic Activity

My guess is that a reasonable proportion of those who read this blog are already familiar with the Brookings Papers on Economic Activity, but for others: it's a venerable journal published twice a year, usually with 5-6 papers on high-profile topics. The papers are academic in tone and approach, but typically a lot more readable than what would appear in a technical journal of economics. Versions of all the papers for Spring 2017 are now available, although they are not quite finalized and typeset as yet. The six papers in the issue, with brief descriptions lifted from the Brookings website, are:

"Mortality and morbidity in the 21st century," by Anne Case and Sir Angus Deaton
In "Mortality and morbidity in the 21st century," Princeton Professors Anne Case and Angus Deaton, a Nobel Prize winner, follow up their groundbreaking 2015 research that documented a dramatic increase in middle-aged white mortality. In their new paper, the authors find that “deaths of despair” (deaths by drugs, alcohol, and suicide) in midlife rose most dramatically for white non-Hispanic Americans with a high school degree or less—a pattern that diverges sharply from overall midlife mortality rates in other rich countries. When combined with a slowdown in progress against mortality from heart disease and cancer—the two largest killers in middle age—the increase in “deaths of despair” since the late 1990s has resulted in midlife mortality rates for white non-Hispanic Americans with a high school degree or less overtaking overall midlife mortality rates of minority groups.
"Along the watchtower: The rise and fall of U.S. low-skilled immigration," by Gordon H. Hanson, Chen Liu, and Craig McIntosh
In "Along the watchtower: The rise and fall of U.S. low-skilled immigration," Gordon Hanson, Chen Liu, and Craig McIntosh of the University of California San Diego project that immigration to the U.S. of young, low-skilled workers from Latin America will continue to slow until it reverses in 2050—even without changes to U.S. immigration and border policy—thanks to weak labor-supply growth in Mexico and other Latin American countries. Furthermore, the population of Latin American-born residents already in the U.S. over age 40 will grow by 82 percent over the next 15 years, presenting a bigger challenge for U.S. policymakers than how to stop or slow low-skilled immigration. “The current U.S. debate about immigration policy has a backward-looking feel to it. The challenge isn’t how to stop large-scale labor inflows, which has largely been achieved, but how to manage a large, settled population of undocumented immigrants. Massive investments in building border barriers or expanding the U.S. Border Patrol are not going to address this challenge," the authors argue.
"The disappointing recovery of output after 2009," by John Fernald, Robert Hall, James Stock, and Mark W. Watson
In "The disappointing recovery of output after 2009," the Federal Reserve Bank of San Francisco's John Fernald, Stanford's Robert Hall, Harvard's James Stock, and Princeton's Mark Watson find that the unexpectedly slow growth since 2009 in output—the economy’s measure of growth—is unlikely to improve because it has been caused by structural, non-cyclical factors and not just the financial crisis and subsequent recession. The authors also find evidence that weak government spending at all levels delayed the recovery. They attribute some of the unusually slow growth early in the recovery to cuts in federal spending from the sequester, the end of the fiscal stimulus from the American Recovery and Reinvestment Act (ARRA), and changes in state and local spending due to the recession’s causing home prices to collapse, which in turn impacted property tax receipts.
"Monetary policy in a low interest rate world," by Michael T. Kiley and John M. Roberts
In “Monetary policy in a low interest-rate world,” the Federal Reserve Board’s Michael T. Kiley and John M. Roberts find that rates could hit zero as much as 40 percent of the time—twice as often as predicted in work by others—according to standard economic models of the type used at the Federal Reserve and other central banks. The constraint on monetary policy imposed by frequent episodes of interest rates at zero could make it harder for the Fed to achieve its 2 percent inflation objective and full employment, and the analysis suggests that a monetary policy that tolerates inflation in good times near 3 percent may be necessary to bring inflation to 2 percent on average. As a result, there are a number of steps the Federal Reserve and other central banks can take to help better achieve full employment and price stability in this low interest-rate environment.
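The "40 percent of the time" figure is the kind of number that comes out of stochastic simulations of a policy rule hitting the zero bound. Here is a deliberately toy illustration of the mechanism, not the Kiley-Roberts model: the rule-implied nominal rate fluctuates around a low neutral rate, and the zero bound truncates it from below. All parameter values are invented for illustration.

```python
# Toy Monte Carlo: how often does the zero lower bound bind when the
# neutral rate is low? Illustrative sketch, NOT the Kiley-Roberts model.
import random

random.seed(0)
neutral_rate = 1.0     # low neutral nominal rate, in percent (assumption)
rho, sigma = 0.8, 1.5  # AR(1) shock persistence and size (assumptions)

periods, at_zero = 100_000, 0
shock = 0.0
for _ in range(periods):
    shock = rho * shock + random.gauss(0.0, sigma)  # persistent shock
    desired_rate = neutral_rate + shock             # rule-implied rate
    actual_rate = max(0.0, desired_rate)            # zero bound truncates
    if actual_rate == 0.0:
        at_zero += 1

print(f"ZLB binds in {100 * at_zero / periods:.1f}% of periods")
```

The qualitative point survives any reasonable parameterization: the lower the neutral rate is relative to the size of the shocks, the larger the share of periods in which the zero bound binds, which is why a low-rate world constrains monetary policy so much more often.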

"Safety, liquidity, and the natural rate of interest," by Marco Del Negro, Domenico Giannone, Marc P. Giannoni, and Andrea Tambalotti
In “Safety, liquidity, and the natural rate of interest,” Marco Del Negro, Domenico Giannone, Marc P. Giannoni, and Andrea Tambalotti of the Federal Reserve Bank of New York argue that the secular decline in the natural rate of interest (the real rate of return that prevails when the economy is at its potential) in the U.S. is primarily due to the strong demand for safe and liquid assets, and especially U.S. Treasury securities, provoked in part by foreign and domestic crises over the past 20 years. The analysis suggests the natural rate could rebound in the near future. The authors also note that the “decline in interest rates poses important challenges for monetary policy, but it also matters for fiscal policy and for our understanding of the nature of business cycles.”
"Is Europe an optimal political area?" by Alberto Alesina, Guido Tabellini, and Francesco Trebbi
In “Is Europe an optimal political area?” Harvard University’s Alberto Alesina, Bocconi University’s Guido Tabellini, and University of British Columbia’s Francesco Trebbi analyze cultural indicators across 15 EU countries and Norway from 1980 to 2009 to determine if the so-called European political project was “too ambitious.” The authors find cultural differences among Europeans are increasing and nationalism is on the rise despite several decades of economic and political integration. The authors believe the EU is at a crossroads: It must choose between the benefit of economies of scale for environmental protection, immigration, terrorism, foreign policy, and promoting research and innovation versus the cost of rising nationalism. While a majority of Europeans seems to favor more EU-level decision-making, they seem dissatisfied with how those policies are being implemented and they disagree along national lines.

Friday, March 24, 2017

Global Productivity Growth: Diminishing Convergence

"Productivity is a gift for rising living standards, perhaps the greatest gift. It is not, however, one that always keeps on giving ..." So said Andrew G. Haldane, the chief economist at the Bank of England, in his talk on "Productivity Puzzles" delivered at the London School of Economics on March 20, 2017. Some of the talk focuses on UK experience in particular: here, I want to focus on Haldane's broader perspective on global productivity growth, and on his intriguing argument that from a global perspective, most of the productivity slowdown can be attributed to a failure of innovation to diffuse across countries as rapidly as in the past.

Here's a figure showing the pattern of productivity growth worldwide since the 1950s, and then a figure showing the same productivity data divided into advanced and emerging economies.


Haldane summarizes overall patterns in the productivity figures in this way (footnotes omitted):
First, the slowdown of productivity growth has clearly been a global phenomenon, not a UK-specific one. From 1950 to 1970, median global productivity growth averaged 1.9% per year. Since 1980, it has averaged 0.3% per year. Whatever is driving the productivity puzzle, it has global rather than local roots.
Second, this global productivity slowdown is clearly not a recent phenomenon. It appears to have started in many advanced countries in the 1970s. Certainly, the productivity puzzle is not something which has emerged since the global financial crisis, though it seems the crisis has amplified pre-existing trends. Explanations for the productivity puzzle based on crisis-related scarring are likely to be, at best, partial.
Third, the productivity slowdown has been experienced by both advanced and emerging economies. The slowdown in median productivity growth after the 1970s among both advanced and emerging market economies is around 1¾ percentage points (Chart 8). Indeed, looking at country-specific trends, it is striking just how generalised the productivity slowdown has been ...
Haldane then turns to the economic arguments about convergence, which suggest that countries which are lagging behind in productivity and per capita GDP should have a natural opportunity to grow more quickly, by taking advantage of flows of technology and expertise from the countries on the technology frontier (for some additional discussion of convergence, see my post on "Will Convergence Occur?" November 25, 2015). As he says:
"Growth theory would predict that, over time, technological diffusion should lead to catch-up between frontier and non-frontier countries. And the greater the distance to the frontier, the faster these rates of catch-up are likely to be. So what explains the 1¾ percentage point slowdown in global productivity growth since the 1970s – slower innovation at the frontier or slower diffusion to the periphery? If the frontier country is taken to be the United States, then slowing innovation can only account for a small fraction of the global slowing, not least because the US only has about a 20% weight in world GDP. In other words, the lion’s share of the slowing in global productivity is the result of slower diffusion of innovation from frontier to non-frontier countries.
"To illustrate that, Chart 10 plots the distribution of levels of productivity across countries over a set of sample periods, where productivity is measured relative to a frontier country (the United States) indexed to one. Comparing the distributions in the 1950s and 1970s, there is a clear rightward shift. Cross-country productivity convergence or catch-up was underway, as the Classical growth model would suggest. In recent decades, however, that pattern has changed. Comparing the 1970s with the 1990s, there is a small leftward shift in the probability mass. And in the period since the global financial crisis, there has been a further leftward shift in the distribution and a widening of its range. Today, non-frontier countries are about as far from the technological frontier as they were in the 1950s."
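Haldane's decomposition argument can be made concrete with a stylized calculation. A minimal sketch, where the weights and slowdown figures are illustrative assumptions rather than his actual data:

```python
# Back-of-the-envelope decomposition of the global productivity
# slowdown into a frontier (US) contribution and a diffusion
# (rest-of-world) contribution. All numbers are stylized.

def global_slowdown(us_weight, us_slowdown, rest_slowdown):
    """Global slowdown as a weighted average of the frontier and
    non-frontier slowdowns (in percentage points)."""
    return us_weight * us_slowdown + (1 - us_weight) * rest_slowdown

# Suppose the global slowdown is 1.75 pp and the US has roughly a
# 20% weight in world GDP. Even if US productivity growth slowed by
# a full percentage point, the frontier contributes only
# 0.20 * 1.0 = 0.2 pp of the total:
us_contribution = 0.20 * 1.0
rest_contribution = 1.75 - us_contribution

print(us_contribution)    # 0.2 pp from the frontier
print(rest_contribution)  # 1.55 pp attributable to slower diffusion
```

The point of the arithmetic: with a 20 percent weight, slower innovation at the frontier simply cannot account for most of a 1¾ percentage point global slowdown, so the residual must come from slower diffusion to non-frontier countries.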


Haldane also offers a graph showing productivity levels relative to the US: again, emerging market economies show convergence toward the US level of productivity from the 1950s up through the 1970s, but then show divergence through much of the 1980s and 1990s--with no particular convergence or divergence since about 2000.


As Haldane puts it:
"One of the key determinants of international technology transfer has been found to be cross-border flows of goods and services, people and money and capital. While they have waxed and waned historically, all of these have tended to rise rapidly since the middle of the 20th century. Other things equal, that would have been expected to increase the speed of diffusion of innovation across countries over that period. In practice, the opposite appears to have occurred.

"Taken at face value, these patterns are both striking and puzzling. Not only do they sit oddly with Classical growth theory. They are also at odds with the evidence of history, which has been that rates of technological diffusion have been rising rather than falling over time, and with secular trends in international flows of factors of production. At the very time we would have expected it to be firing on all cylinders, the technological diffusion engine globally has been misfiring. This adds to the productivity puzzle."
I'll only add that any view of the US productivity slowdown is likely to be incomplete if it doesn't take into account that it's a long-term issue, with a global dimension, and that a decline in the diffusion of productivity seems to be involved.

Thursday, March 23, 2017

Interview with Jonathan A. Parker

Aaron Steelman has a broad-ranging "Interview" with Jonathan A. Parker in the most recent issue of Econ Focus from the Federal Reserve Bank of Richmond (Third/Fourth Quarter 2016, pp. 22-26). Here are a few tidbits that caught my eye:

Increased volatility for high-income households
"[I]n work with Annette Vissing-Jorgensen we have looked at how the labor income of high-income households has changed significantly. What we zoomed in on is that high-income households used to live a relatively quiet life in the sense that the top 1 percent would earn a relatively stable income, more stable than the average income. When the average income dropped by 1 percent, the incomes of the top 1 percent would drop by about only six-tenths of a percent. In the early 1980s that switched, so that in a recession if aggregate income dropped by 1 percent, the incomes of the top 1 percent dropped more like 2.5 percent — quadrupling the previous cyclicality. So now they're much more exposed to aggregate fluctuations than the typical income. We also show that decade by decade, as the top income share increased, so did its exposure to the business cycle in the 1980s, 1990s, and 2000s. And as you go further and further up the income distribution, that top share — not just the top 1 percent, but the top 10th of a percent, and the top 100th of a percent — there's also been a bigger increase in inequality and a bigger increase in the exposure to the business cycle. ... 
"First, starting around the end of the 1980s, we see the adoption of incentive-based pay for CEOs and other highly placed managers. Incentive compensation over this time rises, and it happens to be that the incentive compensation is not based on relative performance, which would therefore difference out what goes on in the macroeconomy, but instead is based on absolute performance. And in the U.S. case, that could partly be due to simply what counts legally as incentive-based compensation and so is not subject to corporate profits tax. Pay in the form of stock options, for example, counts as incentive-based compensation. Pure salary does not and so is taxed as corporate profits above $1 million.
"The other possibility is that ... new information and communication technologies allow the best managers to manage more people, to run bigger companies, and therefore to earn more; the best investment managers to manage more money and to make more for themselves; the best entertainers and performers to reach more people and therefore earn a larger share of the spending on entertainment goods. High earners have become small businesses. ... We do know that increased cyclicality in income among high earners can't come simply from the financial sector. That sector just isn't quantitatively big enough, and you see the increase in earnings share and in cyclicality across industries and occupations. It's not the case that just the top hedge fund managers have become the high earners and they're very cyclical; Oprah is also."
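The cyclicality numbers Parker describes can be read as an elasticity (a "beta") of top-1% incomes with respect to aggregate income. A small sketch with the two figures quoted above, purely for illustration:

```python
# Stylized illustration of the cyclicality of top-1% incomes,
# using the two elasticities Parker cites. Numbers are from the
# interview; everything else here is an illustrative construction.

def income_change(aggregate_change_pct, beta):
    """Percent change in top-1% income for a given percent change
    in aggregate income, under a constant elasticity beta."""
    return beta * aggregate_change_pct

# Before the early 1980s: beta of about 0.6, so a 1% drop in
# aggregate income meant roughly a 0.6% drop in top-1% incomes.
pre_1980 = income_change(-1.0, 0.6)

# After the early 1980s: beta of about 2.5, so the same 1%
# aggregate drop now means roughly a 2.5% drop.
post_1980 = income_change(-1.0, 2.5)

print(pre_1980, post_1980)        # -0.6 -2.5
print(post_1980 / pre_1980)       # roughly a quadrupling of cyclicality
```

The ratio 2.5/0.6 is a bit above four, which is the sense in which the previous cyclicality "quadrupled."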
Why don't households smooth consumption?
"I use Nielsen Consumer Panel data to design and run my own survey on households to measure the effect of what was then the second of these large randomized experiments run by the U.S. government, the economic stimulus program of 2008. The key feature of that program was that the timing of the distribution of payments was determined by the last two digits of the Social Security number of the taxpayer, numbers that are essentially randomly assigned. So the government effectively ran a $100 billion natural experiment in 2008, distributing money randomly across time to people, and this policy provides a way to measure quite cleanly how people respond to infusions of liquidity. ...
"The first thing I found out is that illiquidity is still a tremendous predictor of who spends more when a predictable payment arrives. But it's not only liquidity. People with low income have a very high propensity to spend, and not just people who have low income today, as would be associated with the standard buffer-stock model. You can imagine a situation where you've had a bad income shock, you happen to have low liquidity, and you spend a lot. But illiquidity one or even two years prior to the payment is just as strongly associated with a propensity to spend out of liquidity, as illiquidity at the time of the payment. This same set of people who have persistently high propensities to consume are also the people who characterize themselves as the type of people who spend for today rather than save for tomorrow when I asked them specifically about their type, not their situation. They are also the people who report that they have not sat down and made financial plans. ... Low liquidity, or low financial wealth, is a very persistent state across households, suggesting the propensity to spend is not purely situational. A lot of it is closer to an individual-specific permanent effect than something transient due to temporary income shocks. ... 
"So the question is how many people are influenced by constraints in practice. Is their marginal propensity to consume noticeably influenced by the fact that they might be constrained next month or in six months? I would say that's quantitatively important for roughly half of the population. ... I don't think there's a lot of transition between the people who would consistently hit these constraints or be concerned about them and the people for whom they're not that relevant."
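The identification idea behind the 2008 stimulus study is that payment timing was effectively random (determined by the last two digits of Social Security numbers), so comparing spending of households that have already received the payment against those still waiting recovers the spending response. A minimal simulation of that logic, with entirely made-up parameter values:

```python
import random

random.seed(0)

# Simulated natural experiment: random payment timing identifies
# the marginal propensity to consume (MPC). The rebate size, MPC,
# and spending distribution below are illustrative assumptions.

n = 10_000
rebate = 900          # stimulus payment size (illustrative)
true_mpc = 0.25       # assumed propensity to spend out of the rebate

households = []
for _ in range(n):
    got_paid = random.random() < 0.5     # random timing ~ random SSN digits
    baseline = random.gauss(2000, 300)   # monthly spending plus noise
    spending = baseline + (true_mpc * rebate if got_paid else 0)
    households.append((got_paid, spending))

paid = [s for g, s in households if g]
unpaid = [s for g, s in households if not g]

# Because timing is random, the difference in mean spending between
# early and late recipients, scaled by the rebate, estimates the MPC.
diff = sum(paid) / len(paid) - sum(unpaid) / len(unpaid)
est_mpc = diff / rebate
print(est_mpc)  # close to the assumed 0.25
```

Randomized timing is what makes the comparison clean: early and late recipients differ only in whether the money has arrived yet, so no modeling of household heterogeneity is needed to recover the average response.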
Tradeoffs in the coming revisions in the Consumer Expenditure Survey

"The BLS [Bureau of Labor Statistics] is revising the CE Survey now. It's called the Gemini Project, and I have been involved a little with advising how to revamp it. Surveys in general have been experiencing problems with participation and reporting. The CE is suffering from these problems, and so the Gemini Project is trying to address them. The CE has the huge benefit of being a nationally representative survey done by the Census Bureau; almost all of the alternative datasets that we're using from administrative sources that are not strictly survey datasets are less representative. So reducing the CE's problems with participation and reporting could potentially have a very large payoff. Of course, the cost of the change is that the CE Survey as it stands now is a very long panel dataset that has had the same format throughout the whole time. So we're going to break that and no longer be adding new time periods to an intertemporally comparable dataset. But I think that's probably a cost worth paying at this point.
"What the BLS is planning is to change dramatically the way the CE Survey is conducted. They're going to gather data in quite different ways than they have in the past, including some spending categories that will almost have so-called administrative sources. What I have been pushing for is maintaining some panel dimension in the new version of the CE Survey. If you don't have a panel dimension, then for lots of macro-type questions, you can track people only at the group level. And since groups are usually affected differently by other things going on in the world, you lose a lot of ability to identify stuff that might be interesting — tracking someone who had a specific policy exposure in one period and seeing how they're doing a month or a year later. If the BLS eliminates the panel dimension, researchers couldn't do anything like I did with my tax rebates work, nor any other work that looks at treatments that are happening at the individual level. But I'm hoping that the new, state-of-the-art version of the CE Survey will last another 35 years and be just as good."