Friday, October 9, 2015

The Eurozone Crisis: Crystalizing the Narrative

The economy of the eurozone makes the US economy look like a picture of robust health by comparison. Richard Baldwin and Francesco Giavazzi have edited The Eurozone Crisis: A Consensus View of the Causes and a Few Possible Solutions, a book from the Centre for Economic Policy Research in London. The book includes a useful introduction from the editors, followed by 14 mostly short and all quite readable essays.

As a starting point for non-European readers, consider that the unemployment rate across the 19 eurozone countries is still in double digits, at 11%.

While the US economy has experienced disappointingly sluggish growth since the end of the Great Recession in 2009, the eurozone economy experienced a follow-up recession through pretty much all of 2011 and 2012, and since then has experienced growth that is sluggish even by US standards.

What went wrong in the eurozone? Here's the capsule summary from the introduction by Baldwin and Giavazzi:

The core reality behind virtually every crisis is the rapid unwinding of economic imbalances. ... In the case of the EZ [eurozone] crisis, the imbalances were extremely unoriginal. They were the standard culprits that have been responsible for economic crises since time immemorial – namely, too much public and private debt borrowed from abroad. Too much, that is to say, in relation to the productive investment financed through the borrowing. 
From the euro’s launch and up until the crisis, there were big capital flows from EZ core nations like Germany, France, and the Netherlands to EZ periphery nations like Ireland, Portugal, Spain and Greece. A major slice of these were invested in non-traded sectors – housing and government services/consumption. This meant assets were not being created to help pay off the borrowing. It also tended to drive up wages and costs in a way that harmed the competitiveness of the receivers’ export earnings, thus encouraging further worsening of their current accounts. 
When the EZ crisis began – triggered ultimately by the Global Crisis – cross-border capital inflows stopped. This ‘sudden stop’ in investment financing raised concerns about the viability of banks and, in the case of Greece, even governments themselves. The close links between EZ banks and national governments provided the multiplier that made the crisis systemic. 
Importantly, the EZ crisis should not be thought of as a sovereign debt crisis. The nations that ended up with bailouts were not those with the highest debt-to-GDP ratios. Belgium and Italy sailed into the crisis with public debts of about 100% of GDP and yet did not end up with IMF programmes, while Ireland and Spain, with ratios of just 40% (admittedly kept artificially low by large tax revenues associated with the real estate bubble), needed bailouts. The key was foreign borrowing. Many of the nations that ran current account deficits – and thus were relying on foreign lending – suffered; none of those running current account surpluses were hit.
In working through their detailed explanation, here are a few of the points that jump out at me. When the euro first came into widespread use in the early 2000s, interest rates fell throughout the eurozone and all the eurozone countries were able to borrow at the same rate; that is, investors were treating all governments borrowing in euros as having the same level of risk--Germany the same as Greece. Here's a figure showing the falling costs of government borrowing and the convergence of interest rates across countries.

The crucial patterns of borrowing that emerged were not about lending from outside Europe to inside Europe, but instead about lending between the countries of the eurozone, a pattern which strongly suggests that the common currency was at a level generating ongoing trade surpluses and capital outflows from some countries, with corresponding trade deficits and capital inflows for other countries. Baldwin and Giavazzi write:
To interpret the individual current accounts, we must depart from an essential fact: The Eurozone’s current account as a whole was in balance before the crisis and remained close to balance throughout. Thus there was very little net lending from the rest of the world to EZ countries. Unlike in the US and UK, the global savings glut was not the main source of foreign borrowing – it was lending and borrowing among members of the Eurozone. For example, Germany’s large current account surpluses and the crisis countries’ deficits mean that German investors were, on net, lending to the crisis-hit nations – Greece, Ireland, Portugal and Spain (GIPS).

Sitting here in 2015, it seems implausible that policymakers around Europe weren't watching these emerging imbalances with care and attention, and planning ahead for what actions could be taken with regard to government debt, private debt, banking reform, central bank lender-of-last-resort policy, and other issues. But of course, it's not uncommon for governments to ignore potential risks, and only make changes after a catastrophe has occurred. Baldwin and Giavazzi note wryly:
It is, ex post, surprising that the building fragilities went unnoticed. In a sense, this was the counterpart of US authorities not realising the toxicity of the rising pile of subprime housing loans. Till 2007, the Eurozone was widely judged as somewhere between a good thing and a great thing.
And what has happened in the eurozone really is an economic catastrophe. Baldwin and Giavazzi conclude:
The consequences were and still are dreadful. Europe’s lingering economic malaise is not just a slow recovery. Mainstream forecasts predict that hundreds of millions of Europeans will miss out on the opportunities that past generations took for granted. The crisis-burden falls hardest on Europe’s youth whose lifetime earning-profiles have already suffered. Money, however, is not the main issue. This is no longer just an economic crisis. The economic hardship has fuelled populism and political extremism. In a setting that is more unstable than any time since the 1930s, nationalistic, anti-European rhetoric is becoming mainstream. Political parties argue for breaking up the Eurozone and the EU. It is not inconceivable that far-right or far-left populist parties could soon hold or share power in several EU nations. Many influential observers recognise the bind in which Europe finds itself. A broad gamut of useful solutions have been suggested. Yet existing rules, institutions and political bargains prevent effective action. Policymakers seem to have painted themselves into a corner.
For those looking for additional background on eurozone issues, here are links to a few previous posts on the subject:

Thursday, October 8, 2015

Update on US Unions

For those looking for an update on the modern role of American unions, the Council of Economic Advisers offers a useful starting point with its October 2015 "Issue Brief: Worker Voice in a Time of Rising Inequality."

The report begins with some graphs about US union membership that, while their shape is familiar to me, have not lost their power to shock. For example, US union membership peaked back in the 1950s and 1960s, and has been on a downward path as a share of total US employment since then. (The two different colored lines show that two different sources of data are being used.)

The share of US private sector workers belonging to unions is even lower, while public sector unionization rates are much higher.
What effects do unions have in modern labor markets and job conditions? The report notes: "Unionized workers still command a sizable wage premium of up to 25 percent relative to similar nonunionized workers, but that premium has fallen slightly over the past couple of decades. ... After controlling for observable differences between union and nonunion workers, research finds that workers who are represented by a union are about 30 percent more likely to be covered by health insurance at their job than similar nonunion workers. In addition, union workers are about 25 percent more likely to have retiree health benefits than similar nonunion workers ..."

The reason for this wage premium is a subject of some controversy. One possibility is that unionized workers at firms with large and ongoing profits can negotiate in a way that gives the workers a larger share of those profits--a model that fits fairly well with the auto and steel unions of a half-century ago. Another possibility is that union workers get paid more because they have higher productivity. This could happen in various ways: less worker turnover, better training, and the like. It could also happen because even among workers who look the same based on observable characteristics in the data, the more motivated and productive workers are more likely to join a union. The possibility barely mentioned in this report is that employers react to a union by investing more heavily in capital equipment and in outsourcing to non-union firms, thereby increasing the pay of union workers but reducing the number of union jobs over time.

Although this report has lots of useful information and embeds links to a number of relevant academic studies, reports from the Council of Economic Advisers are inevitably both economic and political documents. The main place the influence of politics shows up here is in the very brief (four sentences, one paragraph) discussion of "Union Impact on Firm Performance." You will be relieved, as I was, to read that unions always have a positive effect on firms. There was apparently no need to mention the possibility that unions might create inflexibilities: for example, in the size of the compensation bill, in a burden of retiree benefits, in job assignments, in limits on incentive pay, or in adapting new organizational or technological approaches. From the discussion in this report, unionized workforces bring nothing but benefits to firms, and so it would seem that every US employer should be lining up to unionize its workforce. I do think one can make a strong case that unions have bolstered productivity and benefited firms at certain companies and in certain countries at various times. But the possibility that unions at other times and places have had negative effects on firms is not discussed here.

But on a number of other subjects, the report gives a more balanced overview. For example, on the question of whether unions improve job safety, the report points out that while one might be predisposed to believe this claim, in fact unionized workplaces seem to suffer more injuries. There are a number of possible reasons for this pattern. Perhaps workplaces with a lot of injuries are more likely to end up with unions. Perhaps unionized workplaces are more likely to keep a true record of workplace injuries. Perhaps the main gain from unions is not a smaller number of workplace injuries, but that the injuries which occur are milder and less harmful. Work on sorting out these issues is still underway.

On the question of whether unions lead to improved skills and training, the report cites strong evidence from a number of other countries that this connection exists. But for the US, the report notes: "Other work examining unionized workers in the United States and Canada also finds that unionized workers tend to develop skills that are relatively job-specific. However, some work suggests little to no difference in training between union and nonunion firms, while other research suggests that employers in unionized workplaces offer less training than those in nonunionized workplaces."

Finally, there is the issue of  how much the decline in unionization has contributed to the rise in inequality. The graph shows the basic observation suggesting a connection here: the bottom 90% of the income distribution had a higher share of total income when unionization rates were at their highest, and the share of income going to that group has declined as unionization rates have fallen.

Of course, this correlation doesn't prove causation. It's easy to imagine that certain factors in the economy--say, the rise in information and communication technology together with an increase in global trade--might account for both the fall in unionization and rising inequality of the income distribution.  But more detailed studies that try to take these other factors into account do suggest that the decline in unionization is part of the story, too: "A recent paper finds that declines in unionization explain one-fifth of the increase in wage inequality between 1973 and 2007 for women and one-third of the increase for men. For men, the effect is comparable to the effect of the increasing stratification of wages by education (for women, the effect of deunionization is about half that of education)."

However, given the changes in who is a union member over time, it's not clear that a rise in unionization right now would have a substantial effect on inequality. Today's union members are less likely to be low-wage workers; indeed, union workers are now more likely to be college-educated than the average US worker.
The report notes:
"Union membership has also become more representative of the population, with the share of members who are female or college-educated rising quickly. Studies have shown that union wage effects are largest for workers with low levels of observed skills and that unionization can reduce wage inequality among workers partially by increasing wages at the bottom of the distribution and by reducing pay dispersion within unionized firms and industries. Since both the union wage premium and the coverage of low-skilled workers, who receive the highest wage premium, have fallen, unionization’s ability to reduce inequality has very likely been limited in recent years."
During the last few decades, I've seen occasional articles about how US unions are just about to make a comeback. Maybe they will, although after decades of ongoing decline I'm skeptical. But the report also points to other organizations that can offer a voice for workers. For example, Germany is well-known for its "works councils":
Works councils, groups of workers that represent all employees in discussions with their employer but are not part of a formal trade union, are a common form of worker voice outside of trade unions in Germany and, under the authority of the German Works Constitution Act of 1952, can be set up in any private workplace with at least five employees. Works councils ensure that workplace decisions, such as those about pay, hiring, and hours, involve workers – they have both participation rights (where works councils must be informed and consulted about certain issues) and co-determination rights (where the works council must be involved in the decision). Works councils are separate from trade unions: trade unions exist to protect their members, while works councils exist to integrate workers with management into the decision making process.
In the US, there are some organizations forming that represent independent contractors and freelancers.
A similar organization for workers not covered by the National Labor Relations Act, and that operates closely with the organized labor community, is the New York Taxi Workers’ Alliance. Formed in 1998, the New York Taxi Workers’ Alliance helps advocate for taxi drivers, who are primarily independent contractors rather than employees. The organization expanded nationally in 2011 as the National Taxi Workers’ Alliance, and became the first charter for non-traditional workers since the farm workers in the 1960s, and the first one ever of independent contractors; they are recognized by the AFL-CIO as an affiliate organization. The group advocates for its members in much the same way as a traditional union, but their right to collectively bargain is not protected under the National Labor Relations Act due to their non-employee status.
I don't know if these kinds of non-union worker-voice organizations have a real future in the US context. But I do think that worker voice offers potentially valuable input, and that many workers want a voice that can speak on their behalf to management. If workers don't have workplace-related organizations to provide such a voice, they will inevitably turn their voices to politicians who promise legislation to address their concerns.

Wednesday, October 7, 2015

Rethinking Parameters of the US Welfare State

Here's a three-question true-or-false quiz about your beliefs on the welfare state:

  1. True or false: "The current American welfare state is unusually small." 
  2. True or false: "The United States has always been a welfare state laggard." 
  3. True or false: "The welfare state undermines productivity and economic growth."

My guess is that a lot of US economists would agree with at least two of these statements. Irwin Garfinkel and Timothy Smeeding challenge all three in their essay, "Welfare State Myths and Measurement," which appears in Capitalism and Society (volume 10, issue 1). They write: "Very reasonable changes in measurement reveal that all three beliefs are untrue."

While one can certainly quarrel in various ways with the "reasonable changes" they propose, the mental exercise involved in doing so is a nice chance to take a big-picture look at the US "welfare state" in a variety of contexts. When Garfinkel and Smeeding refer to the "welfare state," they are not talking about the extent of income redistribution. Instead, they are talking more broadly about the extent to which the government takes on the provision of a range of social benefits including retirement income, health care, and education spending.

So what is the argument that the US welfare state is not "unusually small"? Their answer comes in three parts. First, the usual pattern in the world economy is that countries with higher per capita income devote a higher percentage of GDP to social welfare spending. Here's a graph with 162 countries. The high-income countries, like the US, are the solid dots. From this image, their similarities look a lot larger than their differences.

Garfinkel and Smeeding write: "Clearly, the richer the country, the greater the share of their income that citizens devote to welfare state transfers. (Three countries—Hong Kong, Singapore, and the United Arab Emirates—are outliers: very rich with relatively small welfare states. These exceptional nations have not been included in previous research on welfare states in rich nations and we leave it to future scholars to explain their exceptionalism.) The same pattern holds within the United States and Europe. The higher the income of states or countries, the greater the share of income that they devote to welfare state transfers (Chernick, 1998). Most important, in the international context of all nations, the size of the US welfare state does not stick out."

One might object that the figure is visually misleading, because the horizontal and vertical axes are expressed in log form, which can make differences that are large in the real world look smaller on a figure (at least for those not used to reading log graphs). For example, on the vertical axis, 3 is the natural log level for social spending of 20% of GDP, while 3.4 is the natural log level for 30% of GDP.
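Since the axes are in natural logs, it's easy to check how tick marks map back to levels, and how much a small-looking log gap understates the level difference. A quick sketch (values rounded):

```python
import math

# The vertical axis is in natural logs, so tick spacing understates
# level differences: moving from log 3.0 to log 3.4 corresponds to
# social spending rising from roughly 20% to 30% of GDP.
for share in (20, 30):
    print(share, round(math.log(share), 2))

# A 0.4 log-point gap is a factor of e^0.4, i.e. roughly a 50% larger
# spending share -- large in the real world, small on the graph.
print(round(math.exp(0.4), 2))
```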

Garfinkel and Smeeding make two other points about the size of the US welfare state. They point out that while many other countries rely heavily on government funding for retirement and health spending, the US relies more heavily on an employer-provided system. They essentially argue that it doesn't make sense to treat a system where the government taxes and spends as a "welfare state," but a system where government gives large tax incentives and regulates heavily as "not a welfare state." Here's their argument (citations omitted):

 "Should including tax expenditures or employer-provided benefits in the measurement of welfare states be controversial? Again, we argue no. Most economists treat tax expenditures as economically equivalent to explicit budget expenditures and would therefore agree that, at a minimum, the tax-subsidized portion of employer-provided health insurance (between one-fifth and one-quarter of the total) should be included as welfare state expenditures. Although a case can be made for counting only the tax-subsidized portion on the grounds that state funding differs from funding stimulated and regulated by the state, some economists and political scientists—whose practice and rationale we follow—argue for including the entire amount of employer expenditures. These benefits are publicly subsidized and regulated. Moreover, employer-provided health insurance involves socialization of the risk of ill health and redistribution from the healthy to the sick. While this occurs at the firm rather than the national level, failing to include these benefits underestimates the share of the population with insurance and mischaracterizes the US welfare state by obscuring and minimizing how much it spends on subsidized health insurance. Most important, including the full expenditures for employer-provided health insurance comes much closer than including only the tax-subsidized portion to measuring the real social cost of the American system of health insurance, that is to say the peculiar American welfare state version of health insurance—a staggering 17 percent of GDP!"
Finally, Garfinkel and Smeeding point out that saying the US has a small welfare state is based on statements about the size of welfare relative to GDP, but that if one looks just at the absolute size of the US welfare state on a per person basis, it is enormous.
"The United States, as one of the richest nations, could be spending more in absolute terms and less as a percentage of income than other rich nations. ... Australia, for example, spent a slightly larger proportion of its GDP on SWE [social welfare expenditures] in 2001 than the US ... but its [per capita] GDP then was only a bit above 60% of US GDP. Consequently, US per capita social welfare expenditures are much higher than Australia’s. ... Real per capita social welfare spending in the United States is larger than that in almost all other countries! Even if employer-provided benefits and tax expenditures are excluded, the United States is still the third biggest spender on a per capita basis."
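The per capita arithmetic behind the Australia comparison is easy to verify with round numbers. These figures are illustrative inventions of mine, not Garfinkel and Smeeding's actual data:

```python
# Illustrative round numbers (not the paper's data): per capita social
# welfare spending = (spending share of GDP) x (GDP per capita).
us = {"swe_share": 0.19, "gdp_per_capita": 40_000}
# Australia: slightly larger share of GDP, but GDP per capita ~60% of US.
australia = {"swe_share": 0.20, "gdp_per_capita": 0.60 * 40_000}

def swe_per_capita(country):
    return country["swe_share"] * country["gdp_per_capita"]

# The smaller-share country spends more per person because its income
# base is so much larger.
print(round(swe_per_capita(us), 2))         # 7600.0
print(round(swe_per_capita(australia), 2))  # 4800.0
```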
The second question in the mini-quiz above is whether the US has always been a laggard in welfare state spending. Garfinkel and Smeeding point out that the US was historically a global leader in one kind of social spending: "throughout most of the 19th and 20th centuries the United States was a leader in the provision of mass public education." However, they also point out that in recent decades, other high-income countries have caught up (as I've also noted here, here, and here, for example). Here's their summary (again, citations omitted):
The wide American lead in secondary education persisted past mid-century, at least until 1970, but by century’s end was much reduced. Most of the other countries had caught up fully or nearly, and Canada and Ireland were notably ahead. During the last quarter of the twentieth century, the United States also fell increasingly behind in early childhood education and in higher education. ... [T]he United States is now fast becoming a laggard in postsecondary degree completion. For the current 55–64 age cohort, the United States was the leader in post-secondary degrees of all kinds. But subsequent cohorts showed little if any gains in post secondary educational attainment, while several nations not only overtook, but now lead the United States in post-secondary educational attainment and others are rapidly catching up.
The third question in the mini-quiz was about whether the welfare state undermines productivity and growth. Garfinkel and Smeeding point out that in the big picture, all the high-income and high-productivity nations of the world have large welfare states; indeed, one can argue that growth rates for many high-income nations were higher in the mid- and late 20th century, when the welfare state was comparatively larger, than back in the 19th century when the welfare state was smaller. Indeed, improved levels of education and health are widely recognized as important components of improved productivity. As they write: "Furthermore, by reducing economic insecurity, social insurance and safety nets make people more willing to take economic risks."

One can make any number of arguments for improving the design of the various aspects of the welfare state, or to point to certain countries where aspects of the welfare state became overbearing or dysfunctional. But from a big-picture viewpoint, it's hard to make the case that a large welfare state has been a drag on growth. Garfinkel and Smeeding write:
"Of course, many other factors besides social welfare spending have changed in the past 150 years. But, as we have seen, welfare state spending is now very large relative to the total production of goods and services in all advanced industrialized nations. If such spending had large adverse effects, it is doubtful that growth rates would have been so large in the last 30 years. The crude historical relationship suggests, at a minimum, no great ill effects and, more likely, a positive effect. The burden of proof clearly lies on the side of those who claim that welfare state programs are strangling productivity and growth. If they are right, they need to explain not only why all rich nations have large welfare states, but more importantly why growth rates have grown in most rich nations as their welfare states have grown larger."

Monday, October 5, 2015

Mass Shootings: Trends and Categories

The vile and ghastly news of the mass shooting last week at Umpqua Community College in Roseburg, Oregon, sent me to the July 2015 report "Mass Murder with Firearms: Incidents and Victims, 1999-2013," written by William J. Krouse and Daniel J. Richardson for the Congressional Research Service.

The most-used definition of a "mass shooting" is that four or more people are killed in a single incident. Here's the pattern from 1999-2013. 

Mass shootings can be divided into three categories: mass public shootings, "familicide" shootings that involve a group of family members, and felony mass shootings that would include gang executions, botched robberies and hold-ups, and the like. The chart above includes all three of these. The breakdown into the three categories is below. Mass public shootings account for about half as many incidents as each of the other two categories, but include almost as many victims killed and more victims injured.

What if we just look at the pattern for mass public shootings over time, leaving out the familicides and other felony mass shootings? Here's the pattern. If you squint a bit at 2007, 2009 and 2012, you can sort of imagine an upward trend here, but given that there's a lot of annual fluctuation, it's not clear that the trend is a meaningful one over this time frame. 
However, if one takes a longer time-frame going back several decades, it does appear that mass public shootings have risen. Krouse and Richardson write:

With data provided by criminologist Grant Duwe, CRS [the Congressional Research Service] also compiled a 44-year (1970-2013) dataset of firearms-related mass murders that could arguably be characterized as “mass public shootings.” These data show that there were on average:
• one (1.1) incident per year during the 1970s (5.5 victims murdered, 2.0 wounded per incident),
• nearly three (2.7) incidents per year during the 1980s (6.1 victims murdered, 5.3 wounded per incident),
• four (4.0) incidents per year during the 1990s (5.6 victims murdered, 5.5 wounded per incident),
• four (4.1) incidents per year during the 2000s (6.4 victims murdered, 4.0 wounded per incident), and
• four (4.5) incidents per year from 2010 through 2013 (7.4 victims murdered, 6.3 wounded per incident).
These decade-long averages suggest that the prevalence, if not the deadliness, of “mass public shootings” increased in the 1970s and 1980s, and continued to increase, but not as steeply, during the 1990s, 2000s, and first four years of the 2010s.
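The decade-long averages above are straightforward to compute from incident-level records. Here is a sketch of that calculation using made-up incident records of the form (year, victims killed), not the CRS/Duwe data themselves:

```python
from collections import defaultdict

# Made-up incident records (year, victims_killed) -- illustrative only,
# not the CRS/Duwe dataset.
incidents = [
    (1971, 5), (1976, 6),                       # 1970s: 2 incidents
    (1982, 6), (1985, 7), (1988, 5),            # 1980s: 3 incidents
    (1991, 6), (1994, 5), (1997, 6), (1999, 5), # 1990s: 4 incidents
]

# Group incidents by decade.
by_decade = defaultdict(list)
for year, killed in incidents:
    by_decade[(year // 10) * 10].append(killed)

# Report incidents per year and average victims killed per incident.
for decade in sorted(by_decade):
    killed = by_decade[decade]
    print(decade,
          len(killed) / 10,                    # incidents per year
          round(sum(killed) / len(killed), 1)) # victims per incident
```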
As another way of illustrating this longer-term upward pattern of mass public shootings, consider only mass public shootings in which 10 or more people were killed. Up through 2013, there had been 13 such episodes in modern US history: seven occurring from 2007-2013, and six during the preceding four decades, from 1966 to 2006.

It feels almost mandatory for me to tack on some policy recommendation at the end of this post, but I'll resist. Some of the newly dead in Oregon are not even in their graves yet. There have been 4-5 mass public shootings each year for the last quarter-century. The US population is 320 million. In an empirical social-science sense, it's probably impossible to prove that any particular policy would reduce this total from 4-5 per year back to the 1-3 mass public shootings per year that happened in the 1970s and 1980s. In such a situation, policy proposals (whether the proposal is to react in certain ways or not to react in certain ways) will inevitably be based on a mixture of grief, outrage, preconceived beliefs, and hope, not on evidence.

Friday, October 2, 2015

An Interview with Amy Finkelstein: Health Insurance, Adverse Selection, and More

Douglas Clement has an "Interview with Amy Finkelstein" in the September 2015 issue of The Region, which is published by the Minneapolis Federal Reserve. Finkelstein has done a lot of her most prominent work looking at issues of insurance and risk: especially health insurance, but also long-term care insurance, annuities, and others. She's a theory-tester: that is, an empirical researcher who works with a keen awareness of what the previous accepted underlying theories might seem to imply. Back in 2012, Finkelstein was awarded the very prestigious John Bates Clark medal, given annually to an "American economist under the age of forty who is judged to have made the most significant contribution to economic thought and knowledge." In the Fall 2012 issue of the Journal of Economic Perspectives (where I labor in the fields as Managing Editor), Jonathan Levin and James Poterba offered an overview of Finkelstein's earlier career.

For example, standard models of the economics of insurance suggest that people who know that they are more likely to receive the insurance payout (more likely to get sick, for example) are more likely to seek out generous insurance policies. Sellers of insurance need to beware this "adverse selection" dynamic, as it is called, or they can end up pricing their insurance as if it were for the average person, and end up with much higher payouts than expected. But does the evidence support the theory? Finkelstein points out that in a number of studies, those who get the insurance often do not end up receiving greater payouts. A possible reason is that some people are pretty safe risks in part because they are quite risk-averse, so they are more likely to purchase insurance and less likely to use it. Here are some comments from Finkelstein:

Suppose you have people—in health insurance we often refer to them as the “worried well”—who are healthy, so a low-risk type for an insurer, but also risk averse: They’re worried that if something happens, they want coverage. ... As a result, people who are low risk, but risk averse, will also demand insurance, just as high-risk people will. And it’s not obvious whether, on net, those with insurance will be higher risk than those without. ... We looked at long-term care insurance—which covers nursing homes—and rates of nursing home use. We found that individuals with long-term care insurance were not more likely to go into a nursing home than those without it, as standard adverse selection theory would predict. In fact, they often looked less likely to go into a nursing home. These results held even after controlling for what the insurance company likely knew about the individual, and priced insurance on. ... [O]ur data gave us a way to detect private information: people’s self-reported beliefs about their chance of going into a nursing home. And we showed that people who think they have a higher chance of going into a nursing home are both more likely to buy long-term care insurance and more likely to go into a nursing home. ... That certainly sounds like the standard adverse selection models! ... Then we found some examples in the data that we broadly interpreted as proxies for preferences such as risk aversion, and we found that individuals who report being more likely to, for example, get flu shots, or more likely to wear seatbelts, were both more likely to buy long-term care insurance and less likely to subsequently go into a nursing home.
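The "worried well" mechanism can be seen in a toy calculation. All numbers here are invented for illustration: willingness to pay for coverage rises with both risk and risk aversion, and when risk aversion runs opposite to risk, the insured pool can end up lower-risk than the uninsured, reversing the standard adverse-selection prediction:

```python
# Toy illustration (all numbers invented): willingness to pay for
# insurance = claim risk + a risk-aversion premium. When risk aversion
# is negatively correlated with risk, the insured pool can be *lower*
# risk than the uninsured -- "advantageous" rather than adverse selection.
people = [
    {"risk": 0.40, "aversion": 0.00},  # high risk, not risk averse
    {"risk": 0.30, "aversion": 0.02},
    {"risk": 0.10, "aversion": 0.35},  # "worried well"
    {"risk": 0.05, "aversion": 0.40},  # "worried well"
]
premium = 0.38  # price of coverage, as an expected-loss equivalent

insured = [p for p in people if p["risk"] + p["aversion"] >= premium]
uninsured = [p for p in people if p["risk"] + p["aversion"] < premium]

def avg_risk(group):
    return sum(p["risk"] for p in group) / len(group)

# Average claim risk among buyers vs. non-buyers: here the buyers are
# the safer group, despite buying more insurance.
print(round(avg_risk(insured), 2), round(avg_risk(uninsured), 2))
```

With everyone's risk aversion set to zero, only the high-risk types would clear the premium and the textbook adverse-selection result reappears.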
In another prominent line of work, Finkelstein and several co-authors looked at the question of geographic variation in health care costs--that is, the well-known fact that health care utilization and spending per person are much higher in some urban areas and states than in others. They asked the question: What happens if a person relocates from a high-utilization, high-cost area to a low-cost, low-utilization area? If one believes that health care decisions are determined by a mixture of patient expectations and what local health care providers think of as "best practice," one might expect the health care usage of those who relocate to gradually trend toward the patterns of their new geographic location. But that's not what happens. Finkelstein explains:
We ... look at people who moved geographically across areas with different patterns of health care utilization (i.e., high-utilization versus low-utilization areas) and whether their health care utilization changed. Originally, we were very focused on this issue of habit formation, which would suggest a very specific conceptual model and econometric specification. ... So you would expect, in a model with habit formation, that maybe initially there wouldn’t be much change in your health care utilization. But over time—whether it’s because doctors would be urging you to do less or the people around you were like, “Why go to the doctor when you have a minor pain?”—you would gradually change your behavior toward the new norm. But that’s just not what we see at all. We have about 11 years of data on Medicare beneficiaries and about 500,000 of them who move across geographic areas. When they do, we see a clear, on-impact change: When you move from a high-spending to a low-spending place, or vice versa, you jump about 50 percent of the way to the spending patterns of the new place. But then your behavior doesn’t change any further. ... We estimate that about half of the geographic variation in health care utilization reflects something “fixed” about the patient that stays with them when they move, such as their health or their preferences for medical care. And about half of the geographic variation in health care utilization reflects something about the place, such as the beliefs and styles of the doctors there, or the availability of various medical technologies. This gives you a very different perspective on how to think about the geographic variation in health care spending than the prior conventional wisdom that most of the geographic variation in the health care system was due to the supply side—that is, something about the place rather than the patient.
In the last few years, some of Finkelstein's most prominent research has been an analysis of data generated by an experiment in the state of Oregon. Back in 2008, the state of Oregon wanted to expand Medicaid coverage to low-income people who wouldn't have otherwise been eligible for Medicaid. The state realized that it didn't have enough money to offer the expanded health insurance to everyone, so it held a lottery. From an academic research point of view, this decision was a dream come true, because it becomes possible to compare health and life outcomes for two very similar groups--one randomly chosen to receive additional health insurance and one not. Finkelstein and a team of co-authors were on the job. Finkelstein describes some of their findings:
For health care use, we found across the board that Medicaid increases health care use: Hospitalizations, doctor visits, prescription drugs and emergency room use all increased. On the one hand, this is economics 101. Demand curves slope down: When you make something less expensive, people buy more of it. And what health insurance does, by design, is lower the price of health care for the patient. ... On the other hand, there were ways in which these results were surprising. For Medicaid, in particular, there’s been a lot of conjecture that while in general, health insurance would increase use of health care, that because Medicaid reimbursement rates to providers are so low, providers wouldn’t want to treat Medicaid patients. ... Our findings reject this view. We find compelling evidence from a randomized evaluation that relative to being uninsured, Medicaid does increase use of health care. Another result that some found surprising was on use of the emergency room. There had been claims in policy circles that covering the uninsured with Medicaid might get them out of the emergency room … The hope that ER use would go down comes from the belief that doctor visits are substitutes for the ER, so when the doctor also becomes free, you go to the doctor instead of the emergency room. Maybe this is the case (or maybe it isn’t), but on net, our results show any substitution for the doctor that may exist is just not outweighed by the direct effect of making the emergency room free. On net, Medicaid increases use of the emergency room, at least in the first one to two years of coverage we are able to look at.
A variety of other findings have emerged from this research, which is ongoing. In the Oregon data, the additional health insurance reduced financial risk for households, and perhaps not coincidentally, also led to improvements in mental health status (measured both by self-reported mental health and by the proportion diagnosed with depression). In terms of measures of physical health, Finkelstein reports, "we did not detect statistically significant effects on the physical health measures we studied: blood sugar, cholesterol and blood pressure."

The expansion of Medicaid in Oregon clearly brought at least some benefits to the previously uninsured. But was the cost to the state worth the benefits to the individuals? Finkelstein and a couple of co-authors tried to model what the insurance was worth to those receiving it. They found:
[O]ur central estimate is that the value of Medicaid to a recipient is about 20 to 40 cents per dollar of government expenditures. ... The other key finding is that the nominally “uninsured” are not really completely uninsured. We find that, on average, the uninsured pay only about 20 cents on the dollar for their medical care. This has two important implications. First, it’s a huge force working directly to lower the value of Medicaid to recipients; they already have substantial implicit insurance. ... Second and, crucially, the fact that the uninsured have a large amount of implicit insurance is also a force saying that a lot of spending on Medicaid is not going directly to the recipients; it’s going to a set of people who, for want of a better term, we refer to as “external parties.” They’re whoever was paying for that other 80 cents on the dollar.

For those who would like some additional doses of Finkelstein, I've posted a couple of times as results from the Oregon study were published, and you can check them out at "Effects of Health Insurance: Randomized Evidence from Oregon" (August 31, 2012) and "Why the Uninsured Don't Have More Emergency Room Visits" (January 6, 2014). Finkelstein has also published several articles in the Journal of Economic Perspectives, one on the subject of "Long-Term Care Insurance in the United States" (November 22, 2011) and another, with Liran Einav in the Winter 2011 issue, on "Selection in Insurance Markets: Theory and Empirics in Pictures."

Thursday, October 1, 2015

Causes of Wealth Inequality: Dynastic, Valuation, or Income?

There are at least three reasons why inequality of wealth could remain high or rise over time: 1) dynastic reasons, in which inherited wealth looms larger over time; 2) valuation issues, as when the price of existing assets like stocks or real estate soars for a time; and 3) a surge of inequality at the very top of the income distribution, which generates a corresponding inequality in wealth. Richard Arnott, William Bernstein, and Lillian Wu "agree that inequality of wealth has intensified in the recent past." However, they challenge the importance of the dynastic explanation and emphasize the latter two causes in their essay, "The Myth of Dynastic Wealth: The Rich Get Poorer," which appears in the Fall 2015 issue of the Cato Journal.

A substantial chunk of their essay is a review and critique of the arguments in Thomas Piketty's 2013 book Capital in the Twenty-First Century. I assume that even readers of this blog, who are perhaps more predisposed than normal humans to find such a discussion of interest, have mostly had enough of that. For those who want more, some useful starting points are my posts on "Piketty and Wealth Inequality" (February 23, 2015) and on "Digging into Capital and Labor Income Shares" (March 20, 2015).

Here, I want to focus instead on the empirical evidence Arnott, Bernstein, and Wu offer about dynastic wealth in the United States. They focus on evidence from the Forbes 400 list of the wealthiest Americans, which has been published since 1982. They look both at how many famous fortunes of the earlier part of the 20th century survived to be on this list, and at the evolution of who is on this list over time. They write:

Take, as a counterexample, the Vanderbilt family. When the family converged for a reunion at Vanderbilt University in 1973, not one millionaire could be found among the 120 heirs and heiresses in attendance. So much for the descendants of Cornelius Vanderbilt, the richest man in the world less than a century before. ... The wealthiest man in the world in 1918 was John David Rockefeller, with an estimated net worth of $1.35 billion. This was a whopping 2 percent of the U.S. GDP of $70 billion at that time, nearly two million times our per capita GDP, at a time when the nation was the most prosperous in the world. An equivalent share of U.S. GDP today would translate into a fortune of over $300 billion. ... The Rockefellers ... scored 13 seats on the 1982 Forbes debut list, with collective wealth of $7 billion in inflation-adjusted 2014 dollars. As of 2014, only one Rockefeller (David Rockefeller, who turned 100 in June 2015) remains, with a net worth of about $3 billion. If dynastic wealth accumulation were a valid phenomenon, we would expect little change in the composition of the Forbes roster from year to year. Instead, we find huge turnover in the names on the list: only 34 names on the inaugural 1982 list remain on the 2014 list ... 
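The magnitudes in that passage hang together, which is easy to check with a back-of-the-envelope calculation. The 1918 population of roughly 103 million and the 2014 GDP of roughly $17.4 trillion are my own rough assumptions, not figures from the essay:

```python
# Back-of-the-envelope check of the Rockefeller figures quoted above.
# The 1918 population and 2014 GDP are my own rough assumptions.
rockefeller_1918 = 1.35e9   # estimated net worth, dollars (from the quote)
us_gdp_1918 = 70e9          # U.S. GDP in 1918, dollars (from the quote)
us_pop_1918 = 103e6         # assumed 1918 U.S. population
us_gdp_2014 = 17.4e12       # assumed 2014 U.S. GDP, dollars

share_of_gdp = rockefeller_1918 / us_gdp_1918                    # about 2%
per_capita_gdp_1918 = us_gdp_1918 / us_pop_1918                  # about $680
multiple_of_per_capita = rockefeller_1918 / per_capita_gdp_1918  # ~2 million
equivalent_today = share_of_gdp * us_gdp_2014                    # over $300 billion

print(f"Share of 1918 GDP: {share_of_gdp:.1%}")
print(f"Multiple of per capita GDP: {multiple_of_per_capita / 1e6:.1f} million")
print(f"Equivalent share of 2014 GDP: ${equivalent_today / 1e9:.0f} billion")
```

Under these assumptions the fortune works out to about 1.9 percent of 1918 GDP, roughly two million times per capita GDP, and an equivalent of well over $300 billion today, matching the quoted claims.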

Arnott, Bernstein, and Wu offer a number of ways in which dynastic wealth is eroded from one generation to the next: 1) low returns (including when the rich "fall prey to knaves"); 2) investment expenses paid to "bank trust companies, `wealth management' experts, estate attorneys, and the like"; 3) income, capital gains, and estate taxes; 4) charitable giving; 5) when fortunes are divided among heirs; and 6) spending, as when some heirs do a lot of it. Their overall finding based on the patterns in their data is that among the hyper-wealthy, the common pattern is for real net worth to be cut in half every 14 years or so, and for it to decline by about 70% from one generation to the next.
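Those two figures are mutually consistent: a 14-year half-life implies real wealth shrinking a bit under 5 percent per year, and compounding that rate over roughly a quarter-century generation produces about a 70 percent loss. A quick sketch (the 24-year generation length is my assumption for illustration, not a figure from the essay):

```python
# Consistency check on the two decay figures cited by Arnott, Bernstein,
# and Wu: real net worth halved every ~14 years, and a ~70% decline per
# generation. The generation length is my own illustrative assumption.
half_life_years = 14
annual_factor = 0.5 ** (1 / half_life_years)   # fraction of wealth retained each year

generation_years = 24                          # assumed length of a generation
retained = annual_factor ** generation_years   # fraction left after one generation

print(f"Implied real change in wealth: {annual_factor - 1:.1%} per year")
print(f"Decline over one generation: {1 - retained:.0%}")
```

The implied real return is about -4.8 percent per year, and after 24 years about 70 percent of the fortune is gone.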

If the inequality of wealth is not a dynastic phenomenon and dynastic wealth in fact tends to fade with time, then why has inequality of wealth remained high in recent decades? Arnott, Bernstein, and Wu suggest two alternatives.

One is the huge run-up in asset values in recent decades, including the stock market. However, the authors make an important and intriguing point about these valuations. From a long-run viewpoint, gains from stock market investment need to be connected to the profits earned by companies. In the last few decades, a major change in the US stock market is that dividends paid by firms have dropped. In the past, those who owned stock looked less wealthy right now, but because of owning stock they could often expect to receive a hefty stream of dividend payments in the future. Now, those who own stock look more wealthy right now (after the run-up in stock prices), but they appear likely to receive a lower stream of dividend payments in the future. Thus, more of the future profit performance of a company is showing up in the current price of the stock, and less as a payment of dividends in the future. This is a more complex phenomenon than a simple rise in wealth inequality.

The other change that they point to is the enormous payments being received by corporate executives, often through stock options. The authors are writing in a publication of the libertarian Cato Institute. Thus, it is no surprise when they write: "We have no qualms about paying entrepreneurial rewards (i.e., vast compensation) to executives who create substantial wealth for their shareholders or who facilitate path-breaking innovations and entrepreneurial growth." But then they go on to add:
But an abundance of research shows little correlation between executive compensation and shareholder wealth creation (let alone societal wealth creation). Nine-figure compensation packages are so routine they only draw notice when the recipients simultaneously run their companies into the ground, as was the case with Enron, Global Crossing, Lehman Brothers, Tyco, and myriad others. It’s difficult for an entrepreneur to become a billionaire, in share wealth, while running a failing business. How can even mediocre corporate executives take so much of the pie? Bertrand and Mullainathan (2001) cleverly disentangled skill from luck by examining situations in which earnings changes could be reasonably ascribed to luck (say, a fortuitous change in commodity prices or exchange rates). They found that, on average, CEOs were rewarded just as much for “lucky” earnings as for “skillful” earnings. The authors postulate what they term the “skimming” hypothesis: “When the firm is doing well, shareholders are less likely to notice a large pay package.” A governance linkage is also evident: The smaller the board, the more insiders on it, and the longer tenured the CEO, the more flagrant “pay for luck” becomes, while the presence of large shareholders on the board serves to inhibit skimming. Perhaps shareholders should be more attentive to governance?

Wednesday, September 30, 2015

Exchange Rates Moving

Major exchange rates for countries around the world are in the midst of movement that is large by historical standards. The International Monetary Fund offers some background in its October 2015 World Economic Outlook report, specifically in Chapter 3: "Exchange Rates and Trade Flows: Disconnected?"  The main focus of the chapter is on how the movements in exchange rates might affect trade balances, but at least to me, equally interesting is how the movement may affect the global financial picture.

As a starting point, here's a figure showing recent movements in exchange rates for the United States, Japan, the euro area, Brazil, China, and India. In each panel of the figure, the horizontal axis runs from 0 to 36 months. The shaded areas show how much exchange rates typically moved over a 36-month period, using data from January 1980 through June 2015. The darkest shading marks the 25th/75th percentile band: historically, exchange rate movements fell within this range half the time. The lighter shading marks the 10th/90th percentile band, which contains 80 percent of historical movements. The blue lines show the actual movement of exchange rates using different but recent starting dates for each country (as shown in the panels). In every case the exchange rate has moved more than the 25th/75th band, and in most cases it is outside the 10th/90th band, too.

As the figure shows, currencies are getting stronger in the US, China, and India, but getting weaker in Japan, the euro area, and Brazil. The IMF describes the patterns this way:
Recent exchange rate movements have been unusually large. The U.S. dollar has appreciated by more than 10 percent in real effective terms since mid-2014. The euro has depreciated by more than 10 percent since early 2014 and the yen by more than 30 percent since mid-2012 ...  Such movements, although not unprecedented, are well outside these currencies’ normal fluctuation ranges. Even for emerging market and developing economies, whose currencies typically fluctuate more than those of advanced economies, the recent movements have been unusually large.
The report focuses on how movements of exchange rates have historically affected prices of imports and exports (which depends on the extent to which importers and exporters "pass through" the changes in exchange rates as they buy and sell), and in turn what that change in import and export prices means for the trade balance.
The results imply that, on average, a 10 percent real effective currency depreciation increases import prices by 6.1 percent and reduces export prices in foreign currency by 5.5 percent ... The estimation results are broadly in line with existing studies for major economies. ... The results suggest that a 10 percent real effective depreciation in an economy’s currency is associated with a rise in real net exports of, on average, 1.5 percent of GDP, with substantial cross-country variation around this average ...
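Taking these average estimates at face value and scaling them linearly to other depreciation sizes (a simplification I'm imposing for illustration; the IMF's estimates come from a richer econometric model), the implied effects look like this:

```python
# Linear scaling of the IMF chapter's average estimates: a 10% real
# effective depreciation raises import prices by 6.1%, lowers export
# prices in foreign currency by 5.5%, and raises real net exports by
# 1.5% of GDP. Scaling linearly to other sizes is my simplification.
def depreciation_effects(depreciation_pct):
    scale = depreciation_pct / 10.0
    return {
        "import_price_change_pct": 6.1 * scale,
        "export_price_change_pct": -5.5 * scale,
        "net_exports_change_pct_gdp": 1.5 * scale,
    }

# A 10% depreciation, roughly the euro's move since early 2014
print(depreciation_effects(10))
# A 30% depreciation, roughly the yen's move since mid-2012
print(depreciation_effects(30))
```

On this crude linear reading, a yen-sized 30 percent depreciation would be associated with import prices rising about 18 percent and real net exports rising about 4.5 percent of GDP, though the chapter stresses substantial cross-country variation around these averages.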
The estimates of how movements in exchange rates affect trade seem sensible and mainstream to me, but I confess that I am more intrigued and concerned about how changes in exchange rates can affect the global financial picture. In the past, countries often ran into extreme financial difficulties when they had borrowed extensively in a currency not their own--often in US dollars--and then when the exchange rate moved sharply, they were unable to repay. In the last few years, the governments of most emerging market economies have tried to make sure this would not happen, by keeping their borrowing relatively low and by building up reserves of US dollars to be drawn down if needed.

However, there is some reason for concern that a large share of companies in emerging markets have been taking on a great deal more debt, and because a substantial share of that debt is measured in foreign currency, these firms are increasingly exposed to a risk of shifting exchange rates. A different IMF report, the October 2015 Global Financial Stability Report, looks at this issue in Chapter 3: "Corporate Leverage in Emerging Markets--A Concern?" For a sample of the argument, the report notes:
Corporate debt in emerging market economies has risen significantly during the past decade. The corporate debt of nonfinancial firms across major emerging market economies increased from about $4 trillion in 2004 to well over $18 trillion in 2014 ... The average emerging market corporate debt-to-GDP ratio has also grown by 26 percentage points in the same period, but with notable heterogeneity across countries. ...  Leverage has risen relatively more in vulnerable sectors and has tended to be accompanied by worsening firm-level characteristics. For example, higher leverage has been associated with, on average, rising foreign exchange exposures. Moreover, leverage has grown most in the cyclical construction sector, but also in the oil and gas subsector. Funds have largely been used to invest, but there are indications that the quality of investment has declined recently. These findings point to increased vulnerability to changes in global financial conditions and associated capital flow reversals—a point reinforced by the fact that during the 2013 “taper tantrum,” more leveraged firms saw their corporate spreads rise more sharply ...
The relatively benign outcome from shifts in exchange rates is that they tweak prices of exports and imports up and down. The deeper concern arises if the movements in exchange rates lead to substantial debt defaults, or to "sudden stop" movements where large flows of international financial capital that had been heading into a country sharply reverse direction. In the last few decades, this mixture of debt problems and sudden shifts in international capital flows has been the starting point for national-level financial crises in east Asia, Russia, Latin America, and elsewhere.