Persnickety followups on inequality and demand

Teaser: The following graphs are from “Inequality and Household Finance During the Consumer Age”, by Barry Cynamon and Steven Fazzari. “Demand rates” are expressed as fractions of “spendable income”. There’s more on this paper at the end of the post.


I am always outclassed by my correspondents and commenters. The previous post on inequality and demand was no exception. I want to follow up on a few scattered bits of that conversation. I apologize as always for all the great writing, in comment threads and in e-mails, that I fail to respond to.

First, I want to make a methodological point. Several commenters (e.g. beowulf, JKH, Mark Sadowski) point to discrepancies between measures of income and saving used by various empirical studies and those used in national accounting statements (e.g. NIPA). In particular, there is the question of whether unspent capital gains (realized or unrealized) should count as saving. NIPA accounts quite properly do not treat capital gains as income.

But what is proper in one context is not proper in another. NIPA accounts attempt to characterize the production, consumption, and investment of real resources in the consolidated aggregate economy. A capital gain represents a revaluation of an existing resource, not a new resource, and so properly should not be treated as income.

However, when we are studying distribution, what we are after is the relative capacity of different groups to command use of production. We must divide the economy into subgroups of some sort, and analyze interrelated dynamics of those subgroups’ accounts. Capital gains don’t represent changes in aggregate production, but they do shift the relative ability of different groups to appropriate that output when they wish to. When studying the aggregate economy, capital gains should be ignored or netted out. But when studying distribution — “who owns what” — capital gains, as well as the dissaving or borrowing that funds those gains, are a critical part of the story. Research on how the distribution of household income affects consumption absolutely should include capital gains. There are details we can argue over. Unencumbered, realized capital gains almost certainly should qualify as income. Gains from favorable appraisal of an illiquid asset probably should not. Unrealized gains from liquid or hypothecable assets (stocks, real estate) are shades of gray. There is nothing unusual about shades of gray. All spheres of accounting require estimation and judgment calls. Business accountants learn very quickly that simple ideas like “revenue” and “earnings” are impossible to pin down in a universally satisfactory way.

Similar issues arise in interpreting the excellent work of J.W. Mason and Arjun Jayadev on household debt dynamics, to which Mark Sadowski points in the comments. Mason and Jayadev decompose the evolution of the United States’ household sector’s debt-to-income ratio, breaking down changes into combinations of new borrowing and interest obligations (which increase debt-to-income) plus inflation and income growth (which decrease debt-to-income). It’s a wonderful, fascinating paper. If you’ve not done so already, I strongly recommend that you give it a read. It’s accessible; much of the tale is told in graphs. (A summary is available at Rortybomb, but you want to study especially Figure 7 of the original paper.)

One of Mason and Jayadev’s most interesting discoveries is that the period since the 1980s has been an “Era of Adverse Debt Dynamics”, a time during which household debt-to-income increased because of reductions in inflation, low income growth and a high effective interest rate on outstanding debt. [1] For most of the period from 1980 to 2000, the aggregate US household sector was not taking on new debt. Household sector debt-to-income deteriorated despite net paydown of imputed principal, because of adverse debt dynamics.
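
To make the “adverse debt dynamics” mechanics concrete, here is a minimal sketch in Python of the standard law of motion that a decomposition like Mason and Jayadev’s is built on. This is my own toy, not their code, and every parameter value is invented purely for illustration:

    # Toy illustration of debt-to-income dynamics (not Mason and Jayadev's code).
    # d = debt-to-income ratio, i = effective nominal interest rate on outstanding
    # debt, g = real income growth, pi = inflation, b = new borrowing / income.

    def debt_ratio_step(d, i, g, pi, b):
        """One period of the debt-to-income law of motion."""
        return d * (1 + i) / ((1 + g) * (1 + pi)) + b

    # "Adverse debt dynamics": even with zero new borrowing (b = 0), the ratio
    # climbs whenever the effective interest rate exceeds nominal income growth.
    d = 0.65  # illustrative starting household debt-to-income
    for year in range(20):
        d = debt_ratio_step(d, i=0.08, g=0.01, pi=0.03, b=0.0)
    print(round(d, 2))  # ~1.38: the ratio deteriorates despite no net new borrowing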

So, can a theory that claims borrowing by lower-income households supported demand over the same period possibly be right? Yes, it can. At any given time, some groups of people are repaying debt and others are taking it on. If, say, high income boomers are paying off their mortgages faster than low-income renters are borrowing to consume, new borrowing in aggregate will be negative even while new borrowing by poorer groups supports demand. (Remember how back at the turn of the millennium we were marveling over the “democratization of credit“?)

As always, when studying distributional questions, aggregate data is of limited use. That’s not to say that there is no information in aggregate data. It’s definitely more comfortable to tell the borrowed demand story about the 2000s, when, Mason and Jayadev show us, aggregated households were dramatically expanding their borrowing. But what we really want is research that disentangles the behavior of wealthy and nonwealthy households.

A working paper by Barry Cynamon and Steven Fazzari does just that. It tells a story quite similar to my take in the previous post, but backs that up with disaggregated data. The three graphs at the top of this post summarize the evidence, but of course you should read the whole thing. There is stuff to quibble over. But the paper is an excellent start.

I’ll end all this with an excerpt from the Cynamon and Fazzari paper that I think is very right. It addresses the question of why individuals would undermine their own solvency in ways that sustain aggregate spending:

It is difficult for standard models, most notably the life cycle model, to account for the long decline in the saving rate starting in the early 1980s. A multitude of economists propose explanations including wealth effects, permanent income hypothesis (high expected income) effect, and demographics, but along with many researchers we find those explanations unsatisfying. We argue that the decline in the saving rate can best be understood by recognizing the important role of uncertainty in household decision making and the powerful influence of the reference groups to which those household decision makers turn for guidance. We propose that households develop an identity over time that helps them make consumption decisions by informing them about the consumption bundle that is normal. We define the consumption norm as the standard of consumption an individual considers normal based on his or her identity (Cynamon and Fazzari, 2008, 2012a). The household decision makers weigh two questions most heavily in making consumption and financial decisions. First, they ask “Is this something a person like me would own (durable good), consume (nondurable good), or hold (asset)?” Second, they ask “If I attempt to purchase this good or asset right now, do I have the means necessary to complete the transaction?” Increasing access to credit impacts consumption decisions by increasing the rate of positive responses to the second question directly, and also by increasing the rate of positive responses to the first question indirectly as greater access to credit among households in one’s reference group raises the consumption norm of the group. Rising income inequality also tends to exert upward pressure on consumption norms as each person is more likely to see aspects of costlier lifestyles displayed by others with more money.

People will put up with almost anything to live the sort of life their coworkers and friends, parents and children, consider “normal”. Over the last 40 years, for very many Americans, normal has grown increasingly unaffordable. And that created fantastic opportunities in finance.


[1] Mark Sadowski suggests that the stubbornly high effective interest rates reported by Mason and Jayadev are inconsistent with a claim that the secular decline in interest rates was used to goose demand. But that’s not quite right — it amounts to a confusion of average and marginal rates. At any given moment, most household sector debt was contracted some time ago. The effective interest rate faced by the household sector is an average of rates on all debt outstanding, and so lags headline “spot” interest rates. But incentives to borrow are shaped by interest rates currently available rather than rates on debt already contracted. Falling interest rates can increase individual households’ willingness to borrow much more quickly than they alter the aggregated sector’s effective rate.

In my first encounter with the Mason and Jayadev paper, I thought I saw in the stubbornly high effective rates evidence of a rotation from more-creditworthy to less creditworthy borrowers. (See comments here.) I’ve looked into that a bit more, and the evidence is not compelling: the stubbornly slow decline in household sector interest rates pretty closely mirrors the slow decline in the effective interest rate of the credit-risk-free Federal government. There’s a hint of spread expansion in the late 1990s, but nothing to persuade a skeptic.
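
For readers who prefer the average-versus-marginal point in miniature, here is a toy Python sketch (all figures invented) of how the household sector’s effective interest rate, a weighted average over old vintages of debt, lags the spot rate at which new loans are made:

    # Toy illustration of average vs. marginal rates; all figures are invented.
    spot_rates = [0.10, 0.09, 0.08, 0.07, 0.06, 0.05]  # falling headline rates
    vintages = []  # (amount outstanding, contract rate) for each cohort of debt

    for spot in spot_rates:
        vintages.append((100.0, spot))  # new borrowing contracted at the spot rate
        total = sum(amount for amount, _ in vintages)
        effective = sum(amount * rate for amount, rate in vintages) / total
        print(f"spot {spot:.2f}   effective {effective:.3f}")

    # The effective (average) rate drifts down much more slowly than the spot
    # (marginal) rate, even though every new loan is made at the lower rate.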

Inequality and demand

I’d rather interfluidity take a break from haranguing Paul Krugman. But I think that the relationship between distribution and demand is a very big deal. I’ve just gotta weigh in.

Here’s Krugman:

Joe [Stiglitz] offers a version of the “underconsumption” hypothesis, basically that the rich spend too little of their income. This hypothesis has a long history — but it also has well-known theoretical and empirical problems.

It’s true that at any given point in time the rich have much higher savings rates than the poor. Since Milton Friedman, however, we’ve known that this fact is to an important degree a sort of statistical illusion. Consumer spending tends to reflect expected income over an extended period. If you take a sample of people with high incomes, you will disproportionally include people who are having an especially good year, and will therefore be saving a lot; correspondingly, a sample of people with low incomes will include many having a particularly bad year, and hence living off savings. So the cross-sectional evidence on saving doesn’t tell you that a sustained higher concentration of incomes at the top will lead to higher savings; it really tells you nothing at all about what will happen.

So you turn to the data. We all know that personal saving dropped as inequality rose; but maybe the rich were in effect having corporations save on their behalf. So look at overall private saving as a share of GDP:

The trend before the crisis was down, not up — and that surge with the crisis clearly wasn’t driven by a surge in inequality.

So am I saying that you can have full employment based on purchases of yachts, luxury cars, and the services of personal trainers and celebrity chefs? Well, yes.

Let’s start with the obvious. The claim that income inequality unconditionally leads to underconsumption is untrue. In the US we’ve seen inequality accelerate since the 1980s, and until 2007 we had robust demand, decent growth, and as Krugman points out, no evidence of oversaving in aggregate. Au contraire, even.

And Krugman is correct to point out that simple cross-sectional studies of saving behavior are insufficient to resolve the question.

But that’s why we have social scientists! Unsurprisingly, more sophisticated reviews have been done. See, for example, “Why do the rich save so much?” by Christopher Carroll (ht rsj, Eric Schoenberg) and “Do the Rich Save More?”, by Karen Dynan, Jonathan Skinner, and Stephen Zeldes. These studies agree that the rich do in fact save more, and that they do so in ways that cannot be explained by any version of the permanent income hypothesis. Further, these studies probably understate the differences in savings behavior, because the “rich” they study tend to be members of the top quintile, rather than the top 1% that now accounts for a steeply increasing share of national income.

So how do we reconcile the high savings rates of the rich with the US experience of both rising inequality and strong demand over the “Great Moderation”? If, ceteris paribus, increasing inequality imposes a drag on demand, but demand remained strong, ceteris must not have been paribus.

I would pair Krugman’s chart with the following graph, which shows household borrowing as a fraction of GDP:


includes both consumer and mortgage debt, see “credit market instruments” in Table L.100 of the Fed’s Flow of Funds release

Household borrowing represents, in a very direct sense, a redistribution of purchasing power from savers to borrowers. [1] So if we worry that oversaving by the rich may lead to an insufficiency of purchases, household borrowing is a natural place to look for a remedy. Sure enough, we find that beginning in the early 1980s, household borrowing began a secular rise that continued until the financial crisis.

And this arrangement worked! Over the whole “Great Moderation“, inequality expanded while the economy grew and demand remained strong.

Rather than arguing over the (clearly false) claim that income inequality is always inconsistent with adequate demand, let’s consider the conditions under which inequality is compatible with adequate demand. Are those conditions sustainable? Are they desirable?

Suppose that the mechanism that reconciles inequality and adequate demand is household borrowing. Is that sustainable? After all, poorer households would have to borrow new purchasing power in every period in order to support demand for as long as inequality remains high. That’s jarring.

But quantities matter. Continual borrowing might be sustainable, depending on the amount of new borrowing required, the interest rate on the debt, and the growth rate of borrowers’ incomes. If the interest rate is lower than the growth rate of income to poorer households, then there is room for new borrowing every period while holding debt-to-income ratios constant. Even without much income growth, sufficiently low (and especially negative) interest rates can enable continual new borrowing at constant leverage.
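
A minimal sketch of that arithmetic, with invented numbers (this is a toy, not a claim about actual magnitudes): set next period’s debt-to-income ratio equal to this period’s, and solve for the new borrowing that keeps leverage constant.

    # Toy: how much new borrowing (as a share of income) holds debt-to-income
    # constant at d, given interest rate r and nominal income growth g?
    # From d = d * (1 + r) / (1 + g) + b, solve for b.

    def steady_state_borrowing(d, r, g):
        return d * (1 - (1 + r) / (1 + g))

    d = 1.0  # illustrative debt-to-income ratio for poorer households
    print(steady_state_borrowing(d, r=0.01, g=0.04))  # ~ +0.029: room to borrow each period
    print(steady_state_borrowing(d, r=0.06, g=0.04))  # ~ -0.019: paydown required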

If the drag on demand imposed by inequality is sufficiently modest, it can be papered over indefinitely by borrowing without much difficulty. But as the drag grows large and the quantity of new borrowing required increases, sustaining demand will become difficult for institutional reasons. Economically, there’s nothing wrong with letting real interest rates fall to very sharply negative values, if that’s what would be required to create demand. But that would require central banks to tolerate very high rates of inflation, or schemes to invalidate physical cash. It would piss off savers.

I think the behavior of real interest rates is the empirical fingerprint of the effect of inequality on demand:

Obviously, one can invent any number of explanations for the slow and steady decline in real rates that began with but has outlived the “Great Moderation”. My explanation is that growing inequality required ever greater inducement of ever less solvent households to borrow in order to sustain adequate demand, and central banks delivered. Other stories I’ve encountered don’t strike me as very plausible. Markets would have to be pretty inefficient, or bad news would have had to come in very small drips, if technology or demography is at the root of the decline.

It’s worth noting that these graphs almost certainly understate the decline in interest rates, at least through 2008. Concomitant with the reduction in headline yields, “financial engineering” brought credit spreads down, eventually beneath levels sufficient to cover the cost of defaults. This also helped support demand. John Hempton famously wrote that “banks intermediate the current account deficit.” We very explicitly ask banks to intermediate the deficit in demand, exhorting them to lend lend lend for macroeconomic reasons that are indifferent to microeconomic evaluations of solvency. We can have a banking system that performs the information work of credit analysis and lends appropriately, or we can have a banking system that overcomes deficiencies in demand. We cannot have both when great volumes of lending are continually required for structural reasons.

Paul Krugman argues that “you can have full employment based on purchases of yachts, luxury cars, and the services of personal trainers and celebrity chefs”. What about that? In theory it could happen, but there’s no evidence that it does happen in the real world. As we’ve seen, high income earners do save more than low income earners, and that is not merely an artifact of consumption smoothing.

If the rich did consume in quantities proportionate with their share of income, we would expect the yacht and celebrity chef sectors to become increasingly important components of the national economy. They have not. I’ve squinted pretty hard at the shares of value-added in BEA’s GDP-by-industry accounts, and can’t find any hint of it. I suppose personal trainers and celebrity chefs would fall under “Arts, entertainment, recreation, accommodation, and food services”, a top-level category whose share of GDP did increase by 0.5% between 1990 and 2007. But even attributing all of that expansion to the indulgences of the rich, more than 90% of the increase in the top one percent’s consumption that proportional consumption would predict remains unaccounted for. The share of the “water transportation” sector has not increased. If the rich do consume in proportion to their income, they pretty much consume the same stuff as the rest of us. Which would bring a whole new meaning to the phrase “fat cats”. Categories of output that have notably increased in share of value-added include “professional and business services” and “finance, insurance, real estate, rental, and leasing”. Hmm.

Casual empiricists often point to places like New York City as evidence that rich-people-spending can drive economic demand. Rich Wall-Streeters certainly bluster and whine enough about how their spending supports the local economy. New York is unusually unequal, and it hasn’t especially suffered from an absence of demand. QED, right? Unfortunately, this argument misses something else that’s pretty obvious about New York. It runs a current-account surplus. It is a huge exporter of services to the country and the world. Does New York’s robust aggregate demand come from the personal-training and fancy-restaurant needs of its wealthy upper crust, or from the fact that the rest of the world pays New Yorkers for a lot of the financial services and media they consume? China is very unequal, and rich Chinese have a well-known taste for luxury. But no one imagines that local plutocrats could replace all the world’s Wal-Mart customers and support full employment in the Middle Kingdom. Why is that story any more plausible for New York?

While it’s certainly true that rich people could drive demand by spending money on increasingly marginal goods, the fact of the matter is that they don’t. To explain observed behavior, you need a model where, in Christopher Carroll’s words, “wealth enters consumers’ utility functions directly” such that its “marginal utility decreases more slowly than that of consumption (and hence will be a luxury good relative to consumption)”. It’s not so hard to believe that people like to have money, even much more money than they ever plan to spend on their own consumption or care to pass on to their children. You can explain a preference for wealth in terms of status competition, or in terms of the power over others that wealth confers. I’ve argued that we desire wealth for its insurance value, which is inexhaustible in a world subject to systemic shocks. These motives are not mutually exclusive, and all of them are plausible. Why pick your poison when you can swallow the whole medicine cabinet?

If the world Krugman describes doesn’t exist, we could try to manufacture it. We could tax savings until the marginal utility of extra consumption comes to exceed the after-tax marginal utility of an extra dollar saved. This would shift the behavior of the very wealthy towards demand-supporting expenditures without our having to rely upon a borrowing channel. But politically, enacting such a tax might be as or more difficult than permitting sharply negative interest rates. (A tax on saving rather than income or consumption would be much the same as a negative interest rate.) Moreover, the scheme wouldn’t work if the value wealthy people derive from saving comes from supporting their place in a ranking against other savers (as it would under status-based theories, or in my wealth-as-insurance-against-systemic-risk story). As long as one’s competitors are taxed the same, rescaling the units doesn’t change the game.

So. Inequality is not unconditionally inconsistent with robust demand. But under current institutional arrangements, sustaining demand in the face of inequality requires ongoing borrowing by poorer households. As inequality increases and solvency declines, interest rates must fall or lending standards must be relaxed to engender the requisite borrowing. Eventually this leads to interest rates that are outright negative or else loan defaults and financial turbulence. If we insist on high lending standards and put a floor beneath real interest rates at minus 2%, then growing inequality will indeed result in demand shortfall and stagnation.

Of course, we needn’t hold institutional arrangements constant. If we had any sense at all, we’d relieve our harried bankers (the poor dears!) of contradictory imperatives to both support overall demand and extend credit wisely. We’d regulate aggregate demand by modulating the scale of outright transfers, and let bankers make their contribution on the supply side, by discriminating between good investment projects and bad when making credit allocation decisions.


[1] Readers might object, reasonably, that since the banking system can create purchasing power ex nihilo, it’s misleading to include the clause “from savers”. But if we posit a regulatory apparatus that prevents the economy from “overheating”, that sets a cap on effective demand in obeisance to some inflation or nominal income target, then the purchasing power made available to borrowers is indirectly transferred from savers. The banking system would not have created new purchasing power for borrowers had savers not saved.

Update History:

  • 24-Jan-2012, 1:00 a.m. PST: Fixed a tense: “would have had to come”
  • 24-Jan-2012, 11:20 p.m. PST: God I need an editor. Not substantive changes (or at least none intended), but I’m trying to clean up a bit of the word salad: “in the world we actually live in in the real world“; “spending drives supports the local economy”; “It is a huge exporter of services to the rest of the country and the world.”;”pays New Yorkers of a wide variety of income levels for much a lot of the financial services, entertainment, and literature and media they consume?”; “Wal-Mart customers to sustain and support full employment”; “which is inexhaustible in a world subject to systemic shocks that people must compete to evade.” Also: Added link to NYO piece around “piss off savers”.

A confederacy of dorks

It is, to be sure, only a baby step towards world peace.

But it is a step! Market monetarists will lie with post-Keynesians, the parted waters will turn brackish, as we affirm, in unison: Paul Krugman and I are both inarticulate dorks. Further, it is agreed, that David Beckworth, Peter Dorman, Tim Duy, Scott Fullwiler, Izabella Kaminska, Josh Hendrickson, Merijn Knibbe, Ashwin Parameswaran, Cullen Roche, Nick Rowe, Scott Sumner, and Stephen Williamson are all dorks, albeit of a more articulate variety. I say the most articulate dorks of all are interfluidity‘s commenters.

To mark the great convergence, there will be feastings and huzzahs from all. Or at least from everyone but Paul Krugman and myself, since during feastings, it is the most inarticulate of the dorks who tend to find themselves on a spit. Wouldn’t you all prefer to eat plastic apples?

For those not tired of spectacle, let us continue our pathetic grope towards clarity. Paul Krugman asks two questions:

My questions involve whether interest on excess reserves changes any of the fundamentals of monetary policy and its relationship to the budget. That is, does IOER change the fact that the Federal Reserve has great power over aggregate demand except when market interest rates are near zero, and the related fact that when we’re not in a liquidity trap there is an important distinction between debt-financed and money-financed deficits?

My answer to both questions is no.

My answers are “no” and “depends how you define ‘liquidity trap'”. But brevity is the soul of wit, and I’ve a reputation for witlessness to maintain. So let me elaborate.

First, let’s make a distinction that sometimes gets lost. Paying interest on reserves (excess or otherwise) and operating under a “floor system” are not the same thing. A floor system does require that interest be paid on reserves, but that is not sufficient. A floor system also requires an abundance of reserves (relative to private sector demand), such that the central bank would be unable to achieve its target interest rate without paying interest at or above the target. David Beckworth is right to emphasize the question of whether interest is paid on reserves at a lower or higher rate than the central bank’s target. (Beckworth phrases things in terms of the short-term T-bill rate, but the T-bill rate is capped by the expected path of the target rate, as long as the central bank’s control of interest rates remains credible and reserves are abundant.)

When a central bank effectively targets an interbank rate, but the rate of interest paid on reserves is less than the target rate, the following statements are all true: 1) base money must be “scarce” relative to private sector demand for transactional or regulatory purposes, so people accept an opportunity cost to hold it; 2) there is a direct link between the quantity of base money outstanding and short-term interest rates, so the two cannot be managed independently; and 3) the opportunity cost borne by the public is mirrored by a seigniorage gain to the fisc — money is different from debt in the sense that it is cheaper for the sovereign to issue.

When the IOR rate is equal to or above the target rate, all of that breaks: base money may be abundant relative to private demand, the link between the quantity of base money and interest rates disappears, and “printing money” is at least as costly to the fisc as issuing short-term debt.

When the IOR rate is below the target rate, we are in a “channel” or “corridor” system (of which traditional monetary policy is a special case, with IOR pinned at zero). When the IOR rate is at or above the target rate, we are in a “floor” system, under which the distinction between “printing money” and “issuing debt” largely vanishes.
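
If it helps to see the taxonomy spelled out mechanically, here is a toy classification in Python. The function and its inputs are mine, invented for illustration, not anything drawn from the monetary-operations literature:

    # Toy taxonomy of operating systems, per the discussion above.
    def operating_system(ior, target, reserves, reserve_demand_at_target):
        """A floor needs two things: IOR at or above the target rate, and
        reserves abundant relative to what the private sector would demand
        at that target. Otherwise we are in a corridor/traditional system."""
        if ior >= target and reserves > reserve_demand_at_target:
            return "floor"
        return "corridor"

    # Pre-2008 style: scarce reserves, IOR pinned at zero, target well above it.
    print(operating_system(ior=0.0, target=0.0525, reserves=10, reserve_demand_at_target=10))
    # Post-2008 style: abundant reserves, IOR at the target.
    print(operating_system(ior=0.0025, target=0.0025, reserves=1600, reserve_demand_at_target=20))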

Krugman and I can enjoy an ecstatic “kumbaya” on both of his questions (no visuals please!), if he is willing to define as a liquidity trap any circumstance in which the central bank pays interest on reserves at a level greater than or equal to its target interest rate.

I agree full stop that “the Federal Reserve has great power over aggregate demand except when market interest rates are near zero”, even in a floor system. But, as Nick Rowe correctly points out, the source of this power is the Fed’s ability to affect demand for, rather than the supply of, money. And not just for money! When the Fed sets interest rates, it alters demand for money and government debt as a unified aggregate. What keeps the Fed special under a floor system is an institutional difference. The Fed issues the debt it calls “reserves” at rates fixed by fiat, while Treasury rates float at auction. The Fed leads, then Treasury rates follow by arbitrage. The Fed is powerful by virtue of how it prices its debt, not because it is uniquely the supplier of base money.

None of this means (qua Nick Rowe) that “monetarism” is refuted. “Market monetarists”, for example, argue that level/path targets are better than rate targets; that targeting nominal income would be better than targeting the price level; and that “monetary policy” operates via a variety of channels, including expectations about future macro policy. I think they are correct on all counts. They may need to rethink stories that place the quantity of base money (as distinct from debt) at the indispensable heart of macro policy, but revising those stories would make the rest of their perspective stronger, not weaker. (I owe Scott Sumner more detailed comments, but those will have to wait.)

Nor does monetary management at the floor invalidate “mainstream Keynesianism”. The consolidated government/central-bank manages the quantity, maturity, and yield of the paper it emits, as well as patterns of spending and taxation. Even under a floor system, it is coherent to argue, for example, that macro policy should be confined to rejiggering yield, except at the zero bound when it might be necessary to expand the quantity of liabilities. We give yield management the name “monetary policy” and quantity management the name “fiscal policy”.

Some post-Keynesians take the inverse view, that macro policy should prefer managing quantity to paying yield. They suggest operating under a floor system with IOR set at zero. That is equally coherent.

I really meant it when, in the initial post of this series, I said that there’s no grand ideological point here.

But it matters very much that we get the mechanics right. We’ll make consequential mistakes if we fail to revise intuitions that were formed when T-bills paid 10% and the monetary base paid zilch. Quantitative easing might still be inflationary via an expectations channel, by virtue of the intent it communicates. But the policy’s mechanical effect on the velocity of (base_money + govt_debt) is almost certainly contractionary when the Fed replaces short-term debt with higher-yielding reserves. (I don’t think we know what QE does when longer-term debt is purchased, other than complicate the work of pensions managers.) Izabella Kaminska’s deep point that “monetary policy” helped remedy a shortage of safe assets during the crisis makes no sense unless you get that money is now yieldy government debt rather than a hot potato to be shed. That condition is not inherently related to the number “zero”.

Perhaps I have sufficiently demonstrated that I am the least articulate dork of all, so let’s leave it there. Most of the posts cited in the first paragraph are far better than anything I’d ever write, so do read those. If you are a dork, that is.



Update: See also this conversation between Ashwin Parameswaran and Frances Coppola, which took place just before my series of posts began! Parameswaran tweeted on January 7:

In a world of interest-bearing money, money = govt bonds & The “liquidity trap” is a permanent condition, not a temporary affliction.

Thanks to Mike Sankowski, who mentioned this in a comment that I failed to follow up on.

Update II: More dorkishness! John Carney, Stephen Ewald, Scott Fullwiler, Robert P. Murphy, Negative Outlook, Nick Rowe, Michael Sankowski, Joshua Wojnilower

Update History:

  • 16-Jan-2012, 3:10 a.m. PST: Added bold update pointing to related, prior Parameswaran / Coppola conversation.
  • 16-Jan-2012, 3:45 a.m. PST: Added “the” before “velocity”.
  • 16-Jan-2012, 10:50 a.m. PST: Added second bold update, with more related links. Will keep adding names there without tracking that in the update history. Also added a link to Bryan Caplan’s great piece on velocity, where I use the term “velocity”.

Yet more on the floor with Paul Krugman

So, if you have been following this debate, you are a dork. To recap the dorkiness: I suggested that, from now on, the distinction between base money and short-term government debt will cease to matter in the US, because I think the Fed will operate under a “floor” system, under which the Fed no longer sets interest rates by altering the quantity of base money, but instead floods the world with base money while paying interest on reserves at the target rate. Paul Krugman objected, but I think he was misunderstanding me, so I tried to clarify. He’s responded again. Now I think that the points of miscommunication are very clear and remediable.

Krugman:

What Waldman is now saying is that in the future the Fed will manage monetary policy by varying the interest rate it pays on reserves rather than the size of the conventionally measured monetary base. That’s possible, although I don’t quite see why. But in his original post he argued that under such a regime “Cash and (short-term) government debt will continue to be near-perfect substitutes”.

Well, no — not if by “cash” you mean, or at least include, currency — which is the great bulk of the monetary base in normal times.

So, here’s one confusion. I agree with Krugman that zero-interest currency is inherently very different from interest-bearing paper, including both T-bills and interest-paying bank reserves. However, under a regime where cash can be redeemed at will for interest-bearing paper, that inherent difference disappears, and they trade as near-perfect substitutes.

Let’s try a more edible example. Plastic apples are inherently very different from organic apples. Only one of the two is yummy. But suppose there was an omnipotent orchard that, upon invocation of the phrase “apple-cadapplea”, converted plastic apples to fleshy ones and fleshy apples to plastic apples. Then choke-hazard-y, untasty, but easy-to-carry(!) plastic apples would suddenly trade as perfect substitutes for real apples. The two would still be inherently different. During periods when people travel a lot, they’ll drive up the quantity of plastic fruit as a fraction of the total, um, “apple base”. At everybody’s favorite snack time, the apple base will be nearly all flesh. But as an economic matter, at all times, they will trade as perfect substitutes. Because with a mere invocation of “apple-cadapplea” they are perfect substitutes, despite the fact that one is inherently tasty and the other a choke hazard.

Emitting plastic apples would then be equivalent to emitting real ones, and vice versa. Similarly, when cash is instantaneously interconvertible to interest-bearing debt at par, emitting cash is equivalent to emitting debt.

More Krugman, considering a “platinum coin” example:

what happens if and when the economy recovers, and market interest rates rise off the floor?

There are several possibilities:

  1. The Treasury redeems the coin, which it does by borrowing a trillion dollars.
  2. The coin stays at the Fed, but the Fed sterilizes any impact on the economy, either by (a) selling off assets or (b) raising the interest rate it pays on bank reserves
  3. The Fed simply expands the monetary base to match the value of the coin, an expansion that mainly ends up in the form of currency, without taking offsetting measures to sterilize the effect.

What Waldman is saying is that he believes that the actual outcome would be 2(b). And I think he’s implying that there’s really no difference between 2(b) and 3.

So, Waldman definitely is saying that he believes the actual outcome would be 2(b), and he agrees with Krugman’s analysis of what that implies. That expanding the base affects the Federal budget is part of how money and government debt are equivalent under a floor system.

But Waldman definitely does not at all believe that 2(b) and (3) are equivalent when the interest rate is positive. He’s not sure where he implied that, but he must have done, and is grateful for the opportunity to disimply it. An expansion of the currency unopposed either by offsetting asset sales or paying interest on reserves would have the simple effect of preventing the Fed from maintaining its target rate. That would mean the Fed could not use interest rate policy to manage inflation or NGDP.

But that is precisely why Krugman is a bit unhelpful when he concludes, “Short-term debt and currency are still not at all the same thing, and this is what matters.” It does not matter, once the Fed’s reaction function is taken into account. The Fed will do what it needs to do to retain control of its core macroeconomic lever. Its ability to pay interest on reserves means it has the power to offset a hypothetical issue of currency by the Treasury, regardless of its size. Krugman is right to argue that, above the zero bound, an “unsterilized” currency issue would be different from debt, that it would put downward pressure on interest rates and upward pressure on inflation. But that is precisely why it is inconceivable that the Fed would ever allow such a currency issue to go unsterilized! In a world where it is certain that the Fed will either pay IOR or sell assets in response, we can consider issuance of currency by the Treasury fully equivalent to issuing debt.

Update: I should clarify, in Krugman’s 2(b) above, a central bank operating under a floor system needn’t actually raise the interest rate it pays on reserves to “sterilize” the new currency issue. It need only continue to pay its target rate on reserves, including the reserves generated from deposit of the new currency at the Fed. The total quantity of interest the Fed pays must rise (unless, unlikely, the private sector wants to hold all the new currency). But that is because of an expansion of the principal on which interest will be paid, rather than an increase in the rate itself.
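
To put rough (and entirely illustrative) numbers on that: the sterilization cost shows up as more dollars of interest at an unchanged rate.

    # Toy arithmetic for sterilizing a new currency issue at the floor.
    # All figures are illustrative, not actual Fed or Treasury data.
    ior = 0.0025                       # interest on reserves, left unchanged
    reserves_before = 1.6e12           # pre-existing reserve balances
    new_currency = 1.0e12              # hypothetical Treasury currency issue
    redeposited = 0.95 * new_currency  # share that flows back to the Fed as reserves

    interest_before = ior * reserves_before
    interest_after = ior * (reserves_before + redeposited)
    print(interest_before / 1e9, interest_after / 1e9)  # ~4.0 vs ~6.4 ($bn per year)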

Update History:

  • 16-Jan-2012, 5:00 a.m. PST: Added bold update clarifying that interest-paid must increase, but not the interest rate, to sterilize a new currency issue. Changed an “it’s” to “its” and “Krugman’s” to “Krugman” because, grammar.
  • 16-Jan-2012, 8:55 a.m. PST: Modified bold update to properly refer to “2(b)” rather than 3(b). Many thanks to commenter wh10!

Do we ever rise from the floor?

Paul Krugman has responded to my argument that the distinction between money and short-term debt has been permanently blurred. As far as I can tell, our disagreement is not about economics per se but about how we expect the Fed to behave going forward. Krugman suggests my view is based on a “slip of the tongue”, a confusion about what constitutes the monetary base. It is not, but if it seemed that way, I need to write more clearly. So I’ll try.

Let’s agree on a few basic points. By definition, the “monetary base” is the sum of physical currency in circulation and reserves at the Fed. The Fed has the power to set the size of the monetary base, but cannot directly control the split between currency and reserves, which is determined by those who hold base money. The Fed stands ready to interconvert currency and reserves on demand. Historically, as Krugman points out, the monetary base has been held predominantly in the form of physical currency.

However, since 2008, several things have changed:

  1. The Fed has dramatically expanded the size of the monetary base;
  2. The percentage of the monetary base held as reserves (rather than currency) has gone from a very small fraction to a majority;
  3. The Fed has started to pay interest on the share of the monetary base held as reserves.

Krugman’s view, I think, is that we are in a period of “depression economics” that will someday end, and then we will return to the status quo ante. The economy will perform well enough that the central bank will want to “tap the brakes” and raise interest rates. The Fed will then shrink the monetary base to more historically ordinary levels and cease paying interest on reserves.

I’m less sure about the “someday end” thing. The collapse of the “full employment” interest rate below zero strikes me as a secular rather than cyclical development, although good policy or some great reset could change that. Regardless, if and when the Fed does want to raise interest rates, I think that it will not do so by returning to its old ways. A permanent institutional change has occurred, which renders past experience of the scale and composition of the monetary base unreliable.

To understand the change that has occurred, I recommend “Divorcing money from monetary policy” by Keister, Martin, and McAndrews. It’s a quick read, and quite excellent. Broadly speaking, it describes three “systems” that central banks can use to manage interest rates. Under the traditional system and the “channel” system, an interest-rate targeting central bank is highly constrained in its choice of monetary base. There is a unique quantity of money that, given private sector demand for currency and reserves, is consistent with its target interest rate. However, there is an alternative approach, the so-called “floor” system, which allows a central bank to manage the size of the monetary base independently of its interest rate policy.

Under the floor system, a central bank sets the monetary base to be much larger than would be consistent with its target interest rate given private-sector demand, but prevents the interbank interest rate from being bid down below its target by paying interest to reserve holders at the target rate. The target rate becomes the “floor”: it never pays to lend base money to third parties at a lower rate, since you’d make more by just holding reserves (converting currency into reserves as necessary). The US Federal Reserve is currently operating under something very close to a floor system. The scale of the monetary base is sufficiently large that the Federal Funds rate would be stuck near zero if the Fed were not paying interest on reserves. In fact, the effective Federal Funds rate is usually between 10 and 20 basis points. With a “perfect” floor, the rate would never fall below 25 bps. But because of institutional quirks (the Fed discriminates: it fails to pay interest to nonbank holders of reserves), the rate falls just a bit below the “floor”.
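
A toy sketch of why trades print below the 25 bps floor. The volumes and rates here are invented, just to show the mechanism:

    # Toy: a volume-weighted effective fed funds rate when some lenders
    # (e.g. nonbanks earning no interest on their Fed balances) will lend
    # below the 25 bps paid to banks. All figures are invented.
    ior = 0.0025
    trades = [
        (40, 0.0024),  # banks arbitrage their lending back up toward IOR
        (60, 0.0012),  # nonbank lenders, earning nothing on balances, accept less
    ]
    effective = sum(v * r for v, r in trades) / sum(v for v, _ in trades)
    print(round(effective * 10000, 1), "bps")  # ~16.8 bps, below the 25 bps "floor"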

If “the crisis ends” (whatever that means) and the Fed reverts to its traditional approach to targeting interest rates, Krugman will be right and I will be wrong: the monetary base will revert to something very different from short-term debt. However, I’m willing to bet that the floor system will be with us indefinitely. If so, base money and short-term government debt will continue to be near-perfect substitutes, even after interest rates rise.

Again, there’s no substantive dispute over the economics here. Krugman writes:

It’s true that the Fed could sterilize the impact of a rise in the monetary base by raising the interest rate it pays on reserves, thereby keeping that base from turning into currency. But that’s just another form of borrowing; it doesn’t change the result that under non-liquidity trap conditions, printing money and issuing debt are not, in fact, the same thing.

If the Fed adopts the floor system permanently, then the Fed will always “sterilize” the impact of a perpetual excess of base money by paying its target interest rate on reserves. As Krugman says, this prevents reserves from being equivalent to currency and amounts to a form of government borrowing. So, we agree: under the floor system, there is little difference between base money and short-term debt, at any targeted interest rate! Printing money and issuing debt are distinct only when there is an opportunity cost to holding base money rather than debt. If Krugman wants to define the existence of such a cost as “non-liquidity trap conditions”, fine. But, if that’s the definition, I expect we’ll be in liquidity trap conditions for a very long time! By Krugman’s definition, a floor system is an eternal liquidity trap.

Am I absolutely certain that the Fed will choose a floor system indefinitely? No. That is a conjecture about future Fed behavior. But, as I’ve said, I’d be willing to bet on it.

After all, the Fed need do nothing at all to adopt a floor system. It has already stumbled into it, so inertia alone makes its continuation likely. It would take active work to “unwind” the Fed’s large balance sheet and return to a traditional quantity-based approach to interest rate targeting.

Further, a floor system is very attractive to central bankers. It maximizes policy flexibility (and policymakers’ power) because it allows the central bank to conduct whatever quantitative or “qualitative” easing operations it deems useful without abandoning its interest rate target. Suppose, sometime in the future, there is a disruptive run on the commercial paper market, as happened in 2008. The Fed might wish to support that market, as it did during the financial crisis, even while targeting an interbank interest rate above zero. Under the floor system, the Fed retains the flexibility to do that, without having to offset its support with asset sales and regardless of the size of its balance sheet. Under the traditional or channel system, the Fed would have to stabilize the overall size of the monetary base even while purchasing lots of new assets. This might be operationally difficult, and may be impossible if the scale of support required is large.

The Fed could go back to the traditional approach and keep a switch to the floor system in its back pocket should a need arise. But why plan for a confidence-scarring regime shift when inertia already puts you where you want to be? Why go to the trouble of unwinding the existing surfeit of base money, which might be disruptive, when doing so solves no pressing problem?

From a central bankers’ perspective, there is little downside to a floor system. Grumps (like me!) might object to the very flexibility that renders the floor system attractive. But I don’t think the anti-bail-out left or hard-money right will succeed in rolling back operational flexibility that the Federal Reserve has already won and routinized. Every powerful interest associated with status quo finance prefers the Fed operate under the floor system. Paying interest on reserves at the Federal Funds rate eliminates the “tax” on banks and bank depositors associated with uncompensated reserves, and increases the Fed’s ability to continue to do “special favors” for financial institutions (in the name of widows and orphans and “stability” of course).

Perhaps my read of the politics (and faith in inertia) will prove wrong. But the economics are simple, not at all based on a slip of the tongue and quite difficult to dispute. If the Fed sticks to the floor, base money and government debt will continue to be near perfect substitutes and theories of monetary policy that focus on demand for base money as distinct from short-term debt will be difficult to sustain. The Fed will still have an institutional “edge” over the Treasury in setting interest rates, because the Fed sets the interest rate on reserves by fiat, while short-term Treasury debt is priced at auction. When reserves are abundant, T-bill rates are effectively capped by the rate paid on reserves. Which means that, in our brave new future (which is now), reserves will likely remain a more attractive asset (for banks) than short-term Treasuries, so issuing base money (whether reserves or currency convertible on-demand to reserves by banks) will be less inflationary than issuing lower interest, less-transactionally-convenient debt.

There’s no such thing as base money anymore

Tim Duy has a great review of why platinum coin seigniorage was a bridge too far for Treasury and the Fed. I think he’s pretty much spot on.

However, with Greg Ip (whose objection Duy cites), I’d take issue with the following:

Ultimately, I don’t believe deficit spending should be directly monetized as I believe that Paul Krugman is correct — at some point in the future, the US economy will hopefully exit the zero bound, and at that point cash and government debt will not longer be perfect substitutes.

Note that there are two distinct claims here, both of which are questionable. Consistent with the “Great Moderation” trend, the so-called “natural rate” of interest may be negative for the indefinite future, unless we do something to alter the underlying causes of that condition. We may be at the zero bound, perhaps with interludes of positiveness during “booms”, for a long time to come.

But maybe not. Maybe we’ll see the light and enact a basic income scheme or negative income tax brackets. Maybe we’ll restore the dark, and engineer new ways of providing fraudulently loose credit. Either sort of change could bring “full employment” interest rates back above zero. Let’s suppose that will happen someday.

What I am fairly sure won’t happen, even if interest rates are positive, is that “cash and government debt will no[] longer be perfect substitutes.” Cash and (short-term) government debt will continue to be near-perfect substitutes because, I expect, the Fed will continue to pay interest on reserves very close to the Federal Funds rate. (I’d be willing to make a Bryan-Caplan-style bet on that.) This represents a huge change from past practice — prior to 2008, the rate of interest paid on reserves was precisely zero, and the spread between the Federal Funds rate and zero was usually several hundred basis points. I believe that the Fed has moved permanently to a “floor” system (ht Aaron Krowne), under which there will always be substantial excess reserves in the banking system, on which interest will always be paid (while the Federal Funds target rate is positive).

If Ip and I are right, Paul Krugman is wrong to say

It’s true that printing money isn’t at all inflationary under current conditions — that is, with the economy depressed and interest rates up against the zero lower bound. But eventually these conditions will end.

Printing money will always be exactly as inflationary as issuing short-term debt, because short-term government debt and reserves at the Fed will always be near-perfect substitutes. In the relevant sense, we will always be at the zero lower bound. Yes, there will remain an opportunity cost to holding literally printed money — bank notes, platinum coins, whatever — but holders of currency have the right to convert into Fed reserves at will (albeit with the unnecessary intermediation of the quasiprivate banking system), and will only bear that cost when the transactional convenience of dirty paper offsets it. In this brave new world, there is no Fed-created “hot potato”, no commodity the quantity of which is determined by the Fed that private holders seek to shed in order to escape an opportunity cost. It is incoherent to speak, as the market monetarists often do, of “demand for base money” as distinct from “demand for short-term government debt”. What used to be “monetary policy” is necessarily a joint venture of the central bank and the treasury. Both agencies, now and for the indefinite future, emit interchangeable obligations that are in every relevant sense money. [1]

I’ve no grand ideological point to make here. But I think a lot of debate and commentary on monetary issues hasn’t caught up with the fact that we have permanently entered a brave new world in which there is no opportunity cost to holding money rather than safe short-term debt, whether we are at the zero bound or not.


[1] Yes, there are small frictions associated with converting T-bills to reserves or cash for use as a medium of exchange. I think they are too small to matter. But suppose I’m wrong. Then nonusability as means of payment would mean a greater opportunity cost for T-bill holders than for reserve holders. That is, printing money outright would be less inflationary than issuing short-term debt! And for now, when Fed reserves pay higher interest rates than short-term Treasury bills, people concerned about inflation should doubly prefer “money printing” to short-term debt issuance! Quantitative easing is currently disinflationary in terms of any mechanical effect via the velocity of near-money, when the Fed purchases short-term debt (although it may be inflationary via some expectations channel, because of the intent that’s communicated). The mechanical effect of QE is less clear when the Fed purchases longer maturity debt; it would depend on how market participants trade off the yield premium and interest rate risk, as well as on what long-term debt clienteles — pension funds etc. — choose to substitute for the scarcer assets. But it is not at all obvious that “printing money” to purchase even long maturity assets is inflationary when the Fed pays a competitive interest rate on reserves.
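
A toy version of the interest-flow arithmetic behind that claim, with yields and sizes invented for illustration:

    # Toy: swapping low-yield T-bills for higher-yield reserves raises the
    # income the private sector earns on its safe assets. Figures invented.
    ior = 0.0025          # interest the Fed pays on reserves
    tbill_yield = 0.0005  # yield on the short-term bills purchased
    qe_purchase = 500e9   # size of a hypothetical short-term QE purchase

    income_as_bills = tbill_yield * qe_purchase   # what holders earned before the swap
    income_as_reserves = ior * qe_purchase        # what they earn afterward
    print((income_as_reserves - income_as_bills) / 1e9)  # ~1.0 ($bn/yr more): no hot potato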


Thanks to Kid Dynamite for helping me think through some of these issues in correspondence (though he doesn’t necessarily agree with me on any of it!)

Rebranding the “trillion-dollar coin”

So, hopefully you know about the whole #MintTheCoin thing. If you need to get up to speed, Ryan Cooper has a roundup of recent commentary, and the indefatigable Joe Wiesenthal has fanned a white-hot social-media flame over the idea. For a longer-term history, see Joe Firestone, and note that all of this began with remarkable blog commenter beowulf. See also Josh Barro, Paul Krugman, Dylan Matthews, Michael Sankowski, Randy Wray among many, many others. Also, there’s a White House petition.

Basically, an obscure bit of law gives the Secretary of the Treasury carte blanche to create US currency of any denomination, as long as the money is made of platinum. So, if Congress won’t raise the debt ceiling, the Treasury could strike a one-trillion-dollar platinum coin, deposit the currency in its account at the Fed, and use the funds to pay the people’s bills for a while.

Kevin Drum and John Carney argue (not persuasively) that courts might find this illegal or even unconstitutional, despite clear textual authorization. For an executive that claims the 2001 “authorization to use military force” permits it to covertly assassinate anyone anywhere and no one has standing to sue, making the case for platinum coins should be easy-peasy. Plus (like assassination, I suppose), money really can’t be undone. What’s the remedy if a court invalidates coinage after the fact? The US government would no doubt be asked to make holders of the invalidated currency whole, creating ipso facto a form of government obligation not constrained by the debt ceiling.

I think Heidi Moore and Adam Ozimek are more honest in their objection. The problem with having the US Mint produce a single, one-trillion-dollar platinum coin so Timothy Geithner can deposit it at the Federal Reserve is that it seems plain ridiculous. Yes, much of the commentariat believes that the debt ceiling itself is ridiculous, but two colliding ridiculousses don’t make a serious. We are all accustomed to sighing in a world-weary way over what a banana republic the US has become. But, individually and in our roles as institutional investors and foreign sovereigns, we don’t actually act as if the United States is a rinky-dink bad joke with nukes. As a polity, we’d probably prefer that the US-as-banana-republic meme remain more a status marker for intellectuals than a driver of financial market behavior. Probably.

The economics of “coin seigniorage” are not, in fact, rinky-dink. Having a trillion dollar coin at the Fed and a trillion dollars in reserves for the government to spend is substantively indistinguishable from having a trillion dollars in US Treasury bills at the Fed and the same level of deposits with the Federal Reserve. The benefit of the plan (depending on your politics) is that it circumvents an institutional quirk, the debt ceiling. The cost of the plan is that it would inflame US politics, and there is a slim chance that it would make Paul Krugman’s “confidence fairies” suddenly become real. But note that both of these costs are matters of perception. Perception depends not only on what you do, but also on how you do it.

The Treasury won’t and shouldn’t mint a single, one-trillion-dollar platinum coin and deposit it with the Federal Reserve. That’s fun to talk about but dumb to do. It just sounds too crazy. But the Treasury might still plan for coin seigniorage. The Treasury Secretary would announce that he is obliged by law to make certain payments, but that the debt ceiling prevents him from borrowing to meet those obligations. Although current institutional practice makes the Federal Reserve the nation’s primary issuer of currency, Congress in its foresight gave this power to the US Treasury as well. Following a review of the matter, the Secretary would tell us, Treasury lawyers have determined that once the capacity to make expenditures by conventional means has been exhausted, issuing currency will be the only way Treasury can reconcile its legal obligation simultaneously to make payments and respect the debt ceiling. Therefore, Treasury will reluctantly issue currency in large denominations (as it has in the past) in order to pay its bills. In practice, that would mean million-, not trillion-, dollar coins, which would be produced on an “as-needed” basis to meet the government’s expenses until borrowing authority has been restored. On the same day, the Federal Reserve would announce that it is aware of the exigencies facing the Treasury, and that, in order to fulfill its legal mandate to promote stable prices, it will “sterilize” any issue of currency by the Treasury, selling assets from its own balance sheet one-for-one. The Chairman of the Federal Reserve would hold a press conference and reassure the public that he foresees no difficulty whatsoever in preventing inflation, that the Federal Reserve has the capacity to “hoover up” nearly three trillion dollars of currency and reserves at will.

That would be it. There would be no farcical march by the Secretary to the central bank. The coins would actually circulate (collectors’ items for billionaires!), but most of them would find their way back to the Fed via the private banking system. The net effect of the operation would be equivalent to borrowing by the Treasury: instead of paying interest directly to creditors, Treasury would forgo revenue that it otherwise would have received from the Fed, revenue the Fed would have earned on the assets it would sell to the public to sterilize the new currency. The whole thing would be a big nothingburger, except to the people who had hoped to use debt-ceiling chicken as leverage to achieve political goals.
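
For concreteness, here's a back-of-the-envelope sketch in Python (the yield and the amounts are invented, purely for illustration) of the claim that sterilized coinage is fiscally equivalent to ordinary borrowing:

    # Two hypothetical ways for Treasury to fund $100B of payments, assuming
    # (arbitrarily) a 2% yield on the relevant securities. Illustrative only.
    SPENDING = 100e9   # payments Treasury must make
    YIELD = 0.02       # assumed interest rate on Treasury securities

    # Option A: ordinary borrowing. Treasury sells bills to the public and
    # pays interest on them.
    interest_paid_to_public = SPENDING * YIELD

    # Option B: coin seigniorage with sterilization. Treasury coins and spends;
    # the Fed sells $100B of securities from its portfolio to mop up the new
    # money, earns that much less interest, and so remits that much less
    # profit back to Treasury.
    forgone_fed_remittances = SPENDING * YIELD

    print(f"Ordinary borrowing:     ${interest_paid_to_public / 1e9:.1f}B/year in coupons paid")
    print(f"Sterilized seigniorage: ${forgone_fed_remittances / 1e9:.1f}B/year in remittances forgone")
    # Both print $2.0B: the same net cost to the Treasury, under different labels.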


Some legal background: here’s the law, the relevant bit of which—subsection (k)—was originally added in 1996 and then slightly modified in 2000; here is the appropriations committee report from 1996 (see p. 35); and here is the legislative discussion of the 2000 modification.

Huge thanks to @d_embee and @akammer for digging up this stuff.

Why vote?

I’m a great fan of Kindred Winecoff, especially when I disagree with him, which is often. Today Winecoff joins forces with Phil Arena in expressing disdain for the notion that there might be any virtue or utility to voting other than whatever consumption value voters enjoy for pre-rational, subjective reasons. There are lots of interesting arguments in the two pieces, but the core case is simple:

  1. The probability that any voter will cast the “decisive vote” is negligible, effectively zero;
  2. Even if a voter does cast the “decisive vote”, the net social gain associated with that act is roughly zero because different people have stakes in opposing outcomes. Once you subtract the costs to people on the losing side from the gains to winners, you find that there is little net benefit to either side prevailing over the other.

The first point is a commonplace among economists, who frequently puzzle over why people bother to vote, given that it is a significant hassle with no apparent upside. The second point is a bit more conjectural — there is no universally defensible way of netting gains and losses across people, so economists try to pretend that they don’t have to, resorting whenever possible to fictions like “Pareto improvement”. But the point is nevertheless well-taken. In terms of subjective well-being, whoever wins, at the resolution of a close election a lot of people will be heartbroken and bitter while another lot of people will be moderately elated, and the world will continue to turn on its axis. Over a longer horizon, elections may have big consequences for net welfare: perhaps one guy would trigger nuclear armageddon, while the other guy would not. But in evaluating the consequences of casting a vote, the conjectural net benefit of voting for the right guy has to be discounted for the uncertainty at the time of the election surrounding who is the right guy. After all, if armageddon is at stake, what if you actually do cast the “decisive vote”, but you choose poorly? It must be very unclear who one should vote for if victory by one of the candidates would yield widely shared net benefit (rather than partisan spoils), yet the contest is close enough for your vote to matter.
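
To put a rough number on the first point, here's a toy calculation (my own stylized model, not Winecoff's or Arena's): treat every other voter as an independent coin flip and ask how often they split exactly evenly, so that your ballot decides the race.

    from math import lgamma, log, exp

    def prob_decisive(n, p=0.5):
        """Probability that n other voters split exactly evenly (n even), when
        each votes for candidate A independently with probability p. Computed
        in log space so that large n doesn't overflow."""
        half = n // 2
        log_prob = (lgamma(n + 1) - 2 * lgamma(half + 1)
                    + half * log(p) + (n - half) * log(1 - p))
        return exp(log_prob)

    for n in (100, 10_000, 1_000_000):
        print(f"{n:>9,} other voters, dead heat: P(your vote decides) ~ {prob_decisive(n):.1e}")
    print(f"1,000,000 other voters, 51/49 lean: P(your vote decides) ~ {prob_decisive(1_000_000, 0.51):.1e}")
    # Even in a perfect dead heat the probability falls like 1/sqrt(n); with any
    # lean away from 50/50 it collapses to something astronomically small.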

All of these arguments are right but wrongheaded. We don’t vote for the same reason we buy toothpaste, satisfying some personal want when the benefit outweighs the cost of doing so. Nor, as Winecoff and Arena effectively argue, can we claim that our choice to vote for one side and against another is altruistic, unless we have a very paternalistic certitude in our own evaluation of which side is best for everyone. Nevertheless, voting is rational behavior and it can, under some circumstances, be a moral virtue.

Let’s tackle rationality first. Suppose you have been born into a certain clan, which constitutes roughly half of the population of the hinterland. Everyone else belongs to the other clan, which competes with your clan for status and wealth. Every four years, the hinterland elects an Esteemed Megalomaniac, who necessarily belongs to one of the two clans. If the E.M. is from your clan, you can look forward to a quadrennium in which all of your material and erotic desires will be fulfilled by members of the other clan under the iron fist of Dear Leader. Of course, if a member of the other clan becomes Dear Leader, you may find yourself licking furiously in rather unappetizing places. It is fair to say that even the most narrow-minded Homo economicus has a stake in the outcome of this election.

Still, isn’t it irrational for any individual, of either clan, to vote? Let’s stipulate that the population of the hinterland is many millions and that polling stations are at the top of large mountains. The cost of voting is fatigue and often injury, while the likelihood of your casting “the decisive vote” is pretty much zero. So you should just stay home, right? It would be irrational for you to vote.

The situation described is simply a Prisoners’ Dilemma. If everyone in your clan is what we’ll call “narrowly rational”, and so abstains from voting, the predictable outcome will be bad. But it is not rational, for individuals within a group that will foreseeably face a Prisoners’ Dilemma, to shrug and say “that sucks” and wait for everything to go to hell. Instead, people work to find means of reshaping their confederates’ behavior to prevent narrowly rational but collectively destructive choices. Unless one can plausibly take oneself as some kind of ubermensch apart, reshaping your confederates’ behavior probably implies allowing your own behavior to be reshaped as well, even though it would be narrowly in your interest to remain immune. In our example, this implies that rational individuals would craft inducements for others in their clan to vote, and would subject themselves to those same inducements. These inducements might range from intellectual exhortations to norms enforced by social sanctions to threats of physical violence for failing to vote. If we suppose that in the hinterland, as in our own society, physical violence is ruled out, rational individuals would work to establish pro-voting norms and intellectual scaffolding that helps reinforce those norms, which might include claims that are almost-surely false in a statistical sense, like “Your vote counts!”
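
The structure is easy to see with a few invented numbers (mine, obviously, not a measurement of anything):

    # Toy payoffs for a clan member: B if her clan holds power this quadrennium,
    # minus a cost c if she personally climbs the mountain to vote. Illustrative.
    B = 1000.0        # value of your clan winning
    c = 10.0          # personal cost of voting
    p_pivotal = 1e-6  # chance your single vote decides the outcome

    # The "narrowly rational" calculation: your own vote barely moves anything.
    print(f"Expected private gain from voting: {B * p_pivotal:.3f}  vs. cost of voting: {c}")

    # The collective calculation: compare outcomes if the whole clan acts that way.
    print(f"Everyone in the clan votes:    each member expects roughly {0.5 * B - c:.0f}")
    print("Everyone in the clan abstains: the other clan shows up, wins, and each member gets 0")
    # Abstaining dominates individually, yet universal abstention is the worst
    # collective outcome: the familiar Prisoners' Dilemma shape.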

A smarty-pants might come along and point out the weak foundations of the pro-voting ideology, declaring that he is only being rational and his compatriots are clearly mistaken. But it is our smarty-pants who is being irrational. Suppose he makes the “decisive argument” (which one is much more likely to make than to cast the decisive vote, since the influence of well crafted words need not be proportionate to 1/n). By telling “the truth” to his kinsmen, he is very directly reducing his own utility, not to mention the cost he bears if his preferences include within-group altruism. In order to be rational, we must profess to others and behave as though we ourselves believe things which are from a very reductive perspective false, even when those behaviors are costly. That is to say, in order to behave rationally, our relationship to claims like “your vote counts!” must be empirically indistinguishable from belief, whether or not we understand the sense in which the claim is false.

Of course, it would be perfectly rational for a smarty-pants to make his wrongheaded but compelling argument about the irrationality of voting to members of the other clan. But it would be irrational for members of either group to take such arguments seriously, by whomever they are made and despite the sense in which they are true.

So, when elections have strong intergroup distributional consequences, not only is voting rational, but misleading others about the importance of each vote is also rational, as is allowing oneself to be misled (unless you are sure you are an ubermensch apart, and the conditions of your immunity don’t imply that others will also be immune).

But is voting virtuous? I think we need to subdivide that question into at least two different perspectives on virtue, a within-group perspective and a detached, universal perspective. Within the clans of our hinterland, voting would almost certainly be understood as a virtue, a sacred obligation even, and to not vote would be to violate a taboo and be shunned or shamed, if physical violence is ruled out. Perhaps by definition, the social norms that most profoundly affect behavior are those endowed with moral significance, and a clan that did not define voting as a moral obligation would be at a severe competitive disadvantage. Further, at a gut level, people seem to have an easy time perceiving actions that are helpful to people within their own social tribe as virtuous, especially when it counters harmful (to us) actions of other tribes. From the perspective of almost everyone in our hypothetical hinterland, voting would be a virtue, for themselves and members of their own clan.

However, observing from outside the hinterland and from a less partisan point-of-view, voting does not seem especially virtuous. Whoever wins, half the population will be treated abhorrently. Since getting to voting booths involves climbing steep rock faces, as external observers we’d probably say that the whole process is harmful, and that it’d be better if the Hinterlonians found some less miserable means of basically flipping a coin to decide who rules, or better yet if they’d reform their society so that half its members weren’t quadrennially enslaved by a coin-flip. Even from outside, we’d probably recognize not voting as a sort of sin in its anthropological context, just as we’d condemn shirking by a baseball player even when we don’t care which team wins. But we’d consider the whole exercise distasteful. It’d be like the moral obligation of a slave to claim responsibility for an action by her child, so that the whipping comes to her. We’d simultaneously recognize the virtue and wish for its disappearance.

But let’s leave the hinterland, and consider a polity in which there is a general interest as well as distributional interests. After an election, the losing clan might be disadvantaged relative to the winning clan, sure, but the skew of outcomes is much smaller than in the hinterland, and “good leadership” — whatever that means — can improve everyone’s circumstances so much (or bad leadership can harm everyone so dramatically) that often members of a clan would be better off accepting relative disadvantage and helping a leader from the other clan win. Now there are two potential virtues of voting, the uncomfortable within-clan virtue of the hinterland, but also, potentially, a general virtue.

Let’s consider some circumstances that would make voting a general virtue. Suppose that citizens can in fact perceive the relative quality of candidates, but imperfectly. In economist-speak, each citizen receives an independent estimate, or “signal”, of candidate quality. Any individual estimate may be badly distorted, as idiosyncratic experiences lead people to over- or underestimates of candidate quality, but those sorts of distortions affect all candidates similarly. Individuals cannot reliably perceive how accurate or distorted their own signals are. Some individuals mistakenly believe that candidate A is better than candidate B, and would vote for A. But since candidate B is in fact superior, distortions that create a preference for A would be rarer than those that leave B’s lead in place. In this kind of world, voting is an unconflicted general virtue. There is a candidate whose victory would make the polity as a whole better off, despite whatever distributional skew she might impose. If only a few people vote, however, there is a significant possibility that voters with a mistaken ranking of quality will be overrepresented, and the low quality candidate will be chosen. The probability of error shrinks to zero only as the number of voters becomes very large. The expected quality of the election victor is monotonically increasing in the number of voters. Every vote improves the expected welfare of the polity, however marginally, and so every vote does count.
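
A quick simulation makes the point (the 53% figure and everything else here is invented for illustration): give each voter an independent, slightly-better-than-chance read on which candidate is superior and watch the error rate of majority rule fall as the electorate grows.

    import random

    def worse_candidate_win_rate(n_voters, p_correct=0.53, trials=10_000, seed=1):
        """Fraction of simulated elections the inferior candidate wins, when each
        voter independently votes for the better candidate with probability
        p_correct. Ties count as half a loss."""
        rng = random.Random(seed)
        losses = 0.0
        for _ in range(trials):
            correct = sum(rng.random() < p_correct for _ in range(n_voters))
            if 2 * correct < n_voters:
                losses += 1.0
            elif 2 * correct == n_voters:
                losses += 0.5
        return losses / trials

    for n in (11, 101, 1001):
        print(f"{n:>5} voters: worse candidate wins ~{worse_candidate_win_rate(n):.0%} of elections")
    # With these made-up numbers the error rate falls from roughly 40% with a
    # handful of voters to a few percent with a thousand; every additional voter
    # nudges the expected quality of the winner upward.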

Even in worlds where voter participation is a clear public good, the Prisoners’ Dilemma described above still obtains. In very narrow terms, it’s unlikely that the personal benefit associated with a tiny improvement in expected general welfare exceeds the hassle of schlepping to the polls to cast a vote. Yet the cost of low voter participation, in aggregate and to each individual, can be very high, if it allows a terrible candidate to get elected. So, what do rational, forward-looking agents do? They don’t fatalistically intone about free-rider problems and not vote. As in the hinterland, they establish institutions intended to reshape individual behavior towards the collective rationality from which they will individually benefit. A polity might make voting compulsory, and some do. Short of that, it might establish strong social norms in favor of voting, try to enshrine a moral obligation to vote, and promote ideologies that attach higher values to voting than would be implied by individual effects on outcomes. As before, in this kind of world, it is those who make smarty-pants arguments about how voting is irrational who are behaving irrationally. Rationality is not a suicide pact.

In both sorts of worlds I’ve described, we’d expect voting to be considered a virtue within competing clans or parties, as we pretty clearly observe in reality. We’d only expect voting to be considered a general virtue, one in which you exhorted others to vote regardless of their affiliations, in a world where people believed in a general interest to which citizens of every group have imperfect access. I think it’s interesting, and depressing, to observe growing cynicism about universal voting in the United States. Political operatives have always sought advantage from differential participation, but it was once the unconsidered opinion of patriotic Americans that everyone who could should vote. Maybe I’m just a grumpy old man, but now it seems that even “civically active” do-gooders focus on getting-out-the-vote on one side and openly hope for low participation on the other. To me, this suggests a polity that increasingly perceives distributional advantage as overwhelming any potential for widely-shared improvement. That can become a self-fulfilling prophecy.

Winecoff dislikes Pascal’s Wager, so let’s use an idea from finance, optionality, instead. Suppose that there is no general welfare correlated to election outcomes, and apparent signals thereof are just noise. Then, if people falsely believe in “national leadership” and vote based on a combination of that and more partisan interests, we’d have, on average, the same distributional contest we’d have if people didn’t falsely believe. At worst we’d have a differently skewed distributional contest as one side manipulates perceptions of general interest more adroitly than the other. But suppose that there is a general interest meaningfully correlated to election outcomes, in addition to distributional concerns. Then “idealism” about the national interest, manifest as citizens working to perceive the relationship between electoral outcomes and the general welfare, voting according to those perceptions, and encouraging others to do the same, could lead to significant improvements for all. There’s little downside and a lot of upside to the elementary-school-civics take on elections. With this kind of gamma and so low a price (polling stations are not stuck atop mountains!), even hedge fund managers and political scientists ought to be long electoral idealism.


Note: I’m overseas and I don’t live in a swing state. I won’t be voting on Tuesday, by absentee ballot or otherwise. I deserve your disapproval, although not so very much of it. Social norms are contingent and supple. (Pace Winecoff and Arena, whether one lives in a swing state should condition norms about voting. Why is left as an exercise to the reader. Hint: Consider the phrase “marginal change in expected welfare” — whether applied to members of an in-group or the polity as a whole — and the fact that cumulative distribution functions are typically S-shaped.)

Forcing frequent failures

I’m sympathetic to the view that financial regulation ought to strive not to prevent failures but to ensure that failures are frequent and tolerable. Rather than make that case, I’ll refer you to the oeuvre of the remarkable Ashwin Parameswaran, or macroresilience. Really, take a day and read every post. Learn why “micro-fragility leads to macro-resilience”.

Note that “micro-fragility” means that stuff really breaks. It’s not enough for the legal system to “permit” infrequent, hypothetical failures. Economic behavior is conditioned by people’s experience and expectations of actual events, not by notional legal regimes. As a matter of law, no bank has ever been “too big to fail” in the United States. In practice, risk-intolerant creditors have observed that some banks are not permitted to fail and invest accordingly. This behavior renders the political cost of tolerating creditor losses ever greater and helps these banks expand, which contributes to expectations of future bailouts, which further entices risk-intolerant creditors. [1] In order to change this dynamic, even big banks must actually fail. And they must fail with some frequency. Chalk it up to agency problems (“you’ll be gone, i’ll be gone“) or to human fallibility (“recency bias”), but market participants discount crises of the distant past or the indeterminate future. That might be an error, but as Minsky points out, the mistake becomes compulsory as more and more people make it. Cautious finance cannot survive competition with go-go finance over long “periods of tranquility”.

So we need a regime where banks of every stripe actually fail, even during periods when the economy is humming. If we want financial stability, we have to force frequent failures. An oft-cited analogy is the practice of setting occasional forest fires rather than trying to suppress burns. Over the short term, suppressing fires seems attractive. But this “stability” allows tinder to build on the forest floor at the same time as it engenders a fire-intolerant mix of wildlife, creating a situation where the slightest spark would be catastrophic. Stability breeds instability. (See e.g. Parameswaran here and here. Also, David Merkel.) We must deliberately set financial forest fires to prevent accumulations of leverage and interconnectedness that, if unchecked, will eventually provoke either catastrophic crisis or socially costly transfers to creditors and financial insiders.

Squirrels don’t lobby Congress, when the ranger decides to burn down the bit of the forest where their acorns are buried. Banks and their creditors are unlikely to take “controlled burns” of their institutions so stoically. If we are going to periodically burn down banks, we need some sort of fair procedure for deciding who gets burned, when, and how badly. Let’s think about how we might do that.

First, let’s think about what it means for a financial institution, or any business really, to “fail”. Businesses can fail when they are perfectly solvent. They can survive for long periods of time even when they are desperately insolvent. Insolvency is philosophy, illiquidity is fact. Usually we say a business “fails” when it has scheduled obligations that it cannot meet — a creditor must be paid, the firm can’t come up with the money. The consequence of business failure is that creditors — the people to whom obligations were not timely met — become equityholders, often on terms that prior equityholders consider disadvantageous. The business may then be liquidated, so that involuntary equityholders can recover their investments quickly, or it may continue under new ownership, depending on its value as a going concern.

Forcing failure by rendering banks illiquid is not a good idea, for lots of different reasons. A better alternative is to jump straight to the consequence of illiquidity. We’ll say a bank has “failed” when some fraction of its debt is converted to equity on terms that affected creditors and incumbent equityholders would not have voluntarily arranged. [2] “Forced failure” will mean provoking unwelcome debt-to-equity conversions by regulatory fiat.

Failure isn’t supposed to be fun. Forced conversions to equity should be unpleasant both to creditors and incumbent equity. Upon failure, equityholders should experience unwelcome dilution, while creditors should find themselves shorn of predictable payments and bearing equity risk they do not want. Converted equity should not take the form of public shares, but restricted-sale instruments that are intentionally costly to hedge. Over the long-term, ex post as they say, there will be winners and losers from the conversions: If the “failed” bank was “hold-to-maturity” healthy, patient creditors will have received a transfer from equity holders via the dilutative conversion. If the bank turns out to have skeletons in its balance sheet, then converted creditors will lose, bearing a portion of losses that would have been borne entirely by incumbent equityholders. In either case, unconverted creditors (including depositors and public guarantors) will gain from a reduction of risk, as the debt-to-equity conversion improves the capital position of the “failed” bank. And in either case, both creditors and shareholders will be unhappy in the short-term.
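
To make the mechanics concrete, here's a toy balance sheet (all numbers invented, and the conversion is assumed, just for illustration, to happen at pre-conversion book value per share):

    # A stylized forced "failure": convert a slice of debt to newly issued equity.
    assets    = 100.0                    # book value of the bank's assets
    debt      = 92.0                     # owed to creditors
    equity    = assets - debt            # 8.0: a thin capital cushion
    converted = 10.0                     # face value of debt converted by fiat

    debt_after   = debt - converted      # 82.0
    equity_after = assets - debt_after   # 18.0
    print(f"Leverage before: {assets / equity:.1f}x   after: {assets / equity_after:.1f}x")

    # Dilution: incumbents held 100 shares; converted creditors get new shares
    # priced at the old book value per share (an assumption, not a rule).
    old_shares = 100.0
    new_shares = converted / (equity / old_shares)          # 125 new shares
    print(f"Incumbents' stake after conversion: {old_shares / (old_shares + new_shares):.0%}")
    # The cushion protecting unconverted creditors (and public guarantors) more
    # than doubles, converted creditors hold restricted equity they never wanted,
    # and incumbent shareholders are heavily diluted; unpleasant all around,
    # by design.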

One might think of these “forced failures” as what Garrett Jones has called speed bankruptcies. (See also Zingales, or me.) There are devils in details and lots of variations, but as Jones points out, “speed bankruptcy” needn’t be disruptive for people other than affected creditors and shareholders. Managed forest fires do suck for the squirrels, but we’d never be willing to adopt the policy if it weren’t reasonably safe for bystanders. Related ideas would be to frequently force “CoCos” (contingent convertible debt) to trigger, or to inject public capital on terms that dilute existing equity.

But if we are going to “force” failures — if these failures are going to be regulatory events rather than outcomes provoked by market counterparties — how do we decide who must fail, and when? There is, um, some scope for preferential treatment and abuse if it becomes a matter of regulatory discretion whose balance sheets get painfully rearranged.

A frequent-forced-failure regime would have to be relative, rule-based, and stochastic. By “relative”, I mean that banks would get graded on a curve, and the “worst” banks would be at high risk of forced failure. That is very different from the present regime, whereunder there is little penalty for being an unusually risky bank as long as your balance sheet seems “strong” in an absolute sense. During good times, behaving like Bear Stearns just makes a bank seem unusually profitable. Given agency costs, recency bias, and the vast uncertainty surrounding outcomes for all banks should a crisis hit, penalizing banks only when they are in direct peril of regulatory insolvency is inadequate. We want to create incentives for firms to compete with one another for prudence as well as for profitability. Even during booms, creditors should have incentives to discriminate between cautious stewards of capital and firms capturing short-term upside by risking delayed catastrophe. The risk of forced conversions to illiquid equity would create those incentives for bank creditors.

Forced failures should obviously be rule-based. The current, discretionary system of bank regulation and enforcement is counterproductive and unjust. Smaller, less connected banks find themselves subject to punitive “prompt corrective action” when they get into trouble, while more dangerous “systemically important” banks get showered with loan guarantees, cheap public capital, and sneaky interventions to help them recover at the public’s expense. That’s absurd. Regulators should determine, in placid times and under public scrutiny, the attributes that render banks systemically dangerous and publish a formula that combines those attributes into rankable quantities. The probability that a bank would face a forced restructuring would increase with the estimated hazard of the bank, relative to its peers.

And “probability” is the right word. Whether a bank is forced to fail should be stochastic, not certain. Combining public sources of randomness, regulators should periodically “roll the dice” to determine whether a given bank should be forced to fail. Poorly ranked banks would have a relatively high probability of failure, very good banks would have a low (but still nonzero) probability of forced debt-to-equity conversion. The dice should be rolled often enough so that forced failures are normal events. For an average bank in any given year, the probability of a forced restructuring should be low. But in aggregate, forced restructurings should happen all the time, even (perhaps especially) to very large and famous banks. They should become routine occurrences that bank investors, whether creditors or shareholders, will have to price and prepare for.

Stochastic failures are desirable for a variety of reasons. If failures were not stochastic, if we simply chose the worst-ranked banks for restructuring, then we’d create perverse incentives for iffy banks to game the criteria, because very small changes in one’s score would lead to very large changes in outcomes among tightly clustered banks. If restructuring is stochastic and the probability of restructuring is dependent upon a bank’s distance from the center rather than its relationship with its neighbor, there is little benefit to becoming slightly better than the next guy. It only makes sense to play for substantive change. Also, stochastic failure limits the ability of regulators to tailor criteria in order to favor some banks and disfavor others. (It doesn’t by a long shot eliminate regulators’ ability to play favorites, but it means that in order to fully immunize a favored future employer bank, a corrupt regulator would have to dramatically skew the ranking formula, whereas with deterministic failure, a regulator could reliably exempt or condemn a bank with a series of small tweaks.) It might make sense for the scale of debt/equity conversions to be stochastic as well, so that most forced failures would be manageable, but investors would still have to prepare for occasional, very disruptive reorganizations.
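
Here is a minimal sketch of how such a lottery might be run (every name, weight, and number below is invented; this is a cartoon of the idea, not a proposal for the actual formula):

    import random
    from statistics import mean, pstdev

    # Hypothetical published hazard formula: a weighted sum of attributes the
    # regulator deems systemically dangerous. Weights and scores are invented.
    WEIGHTS = {"leverage": 0.5, "short_term_funding": 0.3, "interconnectedness": 0.2}
    banks = {
        "Alpha Bank":    {"leverage": 1.8, "short_term_funding": 1.2, "interconnectedness": 2.0},
        "Beta Trust":    {"leverage": 0.4, "short_term_funding": 0.9, "interconnectedness": 0.3},
        "Gamma Corp":    {"leverage": 1.1, "short_term_funding": 1.6, "interconnectedness": 0.8},
        "Delta Savings": {"leverage": 0.2, "short_term_funding": 0.1, "interconnectedness": 0.2},
    }

    scores = {name: sum(WEIGHTS[k] * v for k, v in attrs.items()) for name, attrs in banks.items()}
    mu, sigma = mean(scores.values()), pstdev(scores.values())

    BASE, SLOPE = 0.02, 0.05    # every bank faces some risk; riskier banks face more
    def failure_probability(score):
        """Grade on a curve: probability rises with distance above the peer mean,
        but never falls below a floor, even for the 'best' bank."""
        return min(0.5, max(BASE, BASE + SLOPE * (score - mu) / sigma))

    rng = random.Random(2012)   # stand-in for a public, verifiable source of randomness
    for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        p = failure_probability(score)
        forced = rng.random() < p
        # The scale of conversion is itself random, so most events are mild.
        fraction = rng.uniform(0.02, 0.15) if forced else 0.0
        print(f"{name:13s} hazard={score:4.2f}  p(forced failure)={p:.2f}  converted={fraction:.1%}")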

Banking regulation is hard, but in a way it is easier than forest management. As Parameswaran emphasizes, when a forest has been stabilized for too long, it becomes impossible to revert to the a priori smart strategy of managed burns. Too much tinder will have accumulated to control the flames; to permit any fire at all would be to risk absolute catastrophe. It is clear that regulators believe (or corruptly pretend to believe) that this is now the case with our long overstabilized financial system. Lehman, the story goes, was an attempt at a managed burn and it almost blew up the world. Therefore, we must not tolerate any sparks at all in the vicinity of “systemically important financial institutions”. No more Lehmans! [3]

However, unlike physical fire, with bank “failures” there are infinite gradations between quiescence and conflagration. A forced-frequent-failure regime could be phased in slowly, on a well-telegraphed schedule. Both the probability of forced failure and the expected fraction of liabilities converted could rise slowly from their status quo values of zero. Risk-intolerant creditors would, over time, abandon financing dangerous banks at low yields, but they would not flee all at once, and early “learning experiences” would provoke only modest, socially tolerable, losses. Over time, the cost of big-bank finance would rise. Of course, the banking community will cry catastrophe, and make its usual threat, “Nice macroeconomy you got there, ‘shame if something were to happen to the availability of credit.” As always, when bankers make this threat, the correct response is, “Good riddance, not a shame at all, we have tools to expand demand that don’t rely on mechanisms so unstable and combustible as bank credit.” We will never have a decent society until we develop macroeconomic alternatives to loose bank credit. Bankers will simply continue to entangle their own looting with credit provision, and blackmail us into accepting both.

There are a lot of details that would need to be hammered out, if we are to force frequent failures. Should debt/equity conversions strictly follow banks’ debt seniority hierarchy, or should more senior debt also get “bailed in” to haircuts? (Senior creditors would obviously take smaller haircuts than those experienced by junior lenders.) As a matter of policy, do we wish to encourage the over-the-counter derivatives business by exempting derivative counterparties from forced failures, or do we prefer that OTC counterparties monitor bank creditworthiness? (If so, “in the money” contracts with force-failed banks might be partially paid out in illiquid equity.) If risk of forced conversion is relative, banks may try (even more than they already do) to “herd”, to be indistinguishable from their peers so their managers cannot be blamed if anything goes wrong. Herding is already a huge problem in banking — “If everybody does it, nobody gets in trouble” ought to be the motto of the Financial Services Roundtable. (See also Keynes, and Tanta, on “sound bankers”.) Any decent regulatory regime would impose congestion taxes on bank exposures to ensure diversification of the aggregate banking sector portfolio.

These are all policy choices we can make, not barriers to imposing policy. We can, in fact, create a more loosely coupled financial system where risk-intolerant actors are driven to explicitly state-backed instruments and creditors of large private banks genuinely bear risk of losses. The hard part is choosing to do so, when so many of those who rail against “bailouts” and “too big to fail” are protected by, and profit handsomely from, those very things.


Acknowledgments:

This post was provoked by recent correspondence/conversation with Cassandra, The Epicurean Dealmaker, Dan Davies, Pascal-Emmanuel Gobry, Francis O’Sullivan, Ben Walsh and of course Ashwin Parameswaran. And whomever I’ve forgotten. Unforgivably. The good stuff is almost certainly lifted from my correspondents. The bad stuff is my own contribution.


Notes:

[1] Note that “too big to fail” has nothing to do with how Jamie Dimon talks to his cronies in the boardroom. It is a Nash equilibrium outcome in a game played between creditors, bank managers and shareholders, and government regulators. Legal exhortations that try to compel regulators to pursue a poor strategy, given the behavior of creditors and bankers, are not credible. If “the Constitution is not a suicide pact”, then neither was FDICIA with its “prompt corrective action”. Nor will Dodd-Frank be, despite its admirable resolution authority.

[2] Note that “creditors” here might include the state, which is the “creditor from a risk perspective” with respect to liabilities to insured depositors and other politically protected stakeholders.

[3] Some argue that Dodd-Frank’s “living wills” and resolution authority give regulators tools to safely play with fire “next time”, and so they will be more willing to do so. I’m very skeptical of claims they did not have sufficient tools last time around, and don’t believe their incentives have changed enough to alter their behavior next time. Perhaps you, dear reader, are less cynical.

Update History:

  • 20-Oct-2012, 11:35 p.m. EEST: struck “that” from “easier that than forest management”; struck “probabilistic” from “should be probabilistic stochastic, not certain”; struck “asset” from “aggregate banking sector asset portfolio”

Rational astrologies

Suppose that you are transported back in time several hundred years. You start a new life in an old century, and memories of the future grow vague and dreamlike. You know you are from the future, but the details are chased away like morning mist by a scalding sun. You marry, have children. You get on with things.

Suddenly, your wife becomes ill. She may die. You consult the very best physicians. They discuss imbalances of her humors, and where and how she should be bled. You were never a doctor or a scientist. The men you consult seem knowledgeable and sincere. But all of a sudden you get a flash of memory from your forgotten future. The medicine of this era is really bad. Almost none of what they think they know is true. Some of their treatments do some good, but others are actively harmful. On average, outcomes are neither better nor worse with than without treatment.

You know your insight from the future is trustworthy. Do you let the doctors treat your wife, even pay them handsomely to do so?

Of course you do. With no special scientific or medical talent, you have no means of finding and evaluating an alternative treatment. You do have the option of turning the doctors away, letting nature take its course. From a narrowly rationalistic perspective, you understand that nontreatment would be “optimal”: Your wife’s chances would be just as good without treatment, and you would save a lot of coin. That doesn’t matter. You pay the most respected doctors you can find a great deal of money to do whatever they can do. And it is perfectly rational that you should do so. Let’s understand why.

You know that your wife’s expected medical outcome is unchanged by the treatment. But your “payoff” is not solely a function of that outcome. Whether your wife lives or dies, your future welfare turns crucially on how your actions are viewed by other people. First and foremost, you must consider the perceptions of your wife herself. Your life will be a living hell if your beloved dies and you think she had the slightest doubt that you did everything possible to help her. Your wife has no mad insights from the future. She is a creature of her time and its conventions. She will know your devotion if you hire the best doctors of the city to attend her day and night. She may be less sure of your love if you do nothing, or if you listen to the neighborhood madwoman who counsels feeding moldy bread to the ill.

Moreover, it is not only your wife’s regard to which you must attend. Your children and friends, patrons and colleagues, are observing your behavior. If you call in the respected doctors, you will have done everything you could have done. If you do nothing, or take a flyer on a madwoman, you will not have. Your behavior will have been indefensible. Perhaps you are a freethinker, an intellectual, a nonconformist. That makes for lively dinner conversation. But you are human, and when things get serious, you depend upon the regard of others, both to earn your keep and to shape and sustain your own sense of self. If your wife dies and it is the world’s judgment that you permitted her to die, rationalizations of your actions will ring hollow. You will be miserable in your own skin, and your position in your community will be compromised.

The mainstream medicine of several centuries ago was what I think of as a rational astrology. A rational astrology is a set of beliefs that one rationally behaves as if were true, regardless of whether they are in fact. Rational astrologies need not be entirely fake or false. Like bullshit, the essential characteristic of a rational astrology is the indifference to truth or falsehood of the factors that compel one’s behavior. Some rational astrologies may turn out to be largely true, and that happy coincidence can be a great blessing. But they are still rational astrologies to the degree that the factors that persuade us to behave as though the beliefs are true are not closely related to the fact of their truth. The beliefs that undergird modern medicine may represent well-founded characterizations of reality. But, now as centuries ago, most of us act as if those beliefs are true regardless of our own judgments, especially when giving advice or making decisions for other people. Medicine remains a rational astrology. We hope that our truth-seeking institutions — universities and hospitals, the scientific method and peer review — have created convergence between the beliefs we behave as if are true and those that actually are true. But we behave as if they are true regardless.

There is nothing very exotic about all this. It is obvious there can be advantage in deferring to convention and authority. But rational astrologies are a bit more interesting, and a bit more insidious, than wearing a tie to get ahead in your career. When an aspiring banker puts on a suit, he may compromise his personal fashion sense, but his intellect and integrity are intact. He knows that he is conforming to a fairly arbitrary convention, because that is what is socially required of him. Rational astrologies refer to conventional beliefs adherence to which confers important benefits. In order to gain the benefits, an individual must persuade himself that the favored beliefs are in fact true, or else pretend to believe and know himself to be a cynical prevaricator. Either choice is problematic. If one embraces an orthodoxy as true regardless of the evidence, one contributes to what may be a misguided and destructive consensus. If one pretends whenever obeisance is socially required, it becomes hard to view oneself as a person of integrity, or else one must adopt a very sophisticated and contextualized notion of integrity. The vast majority of us, I think, avoid the cognitive dissonance and gin up a sincere deference to the conventional beliefs that it is in our interest to hold. When confronted with opposing evidence, we may toy with alternative viewpoints. But we stick with the consensus until the consensus shifts. And, after all, who could blame us?

After all, who could blame us? That is what drives rational astrologies, the fixative that seals them into place. In financial terms, behavior in accordance with conventional wisdom comes bundled with extremely valuable put options that are not available when we deviate. If, after an independent evaluation of the evidence, I make a medical decision considered “quack” and it doesn’t work out, I will bear the full cost of the tragedy. The world will blame me. I will blame myself, if I am an ordinarily sensitive human. If I do what authorities suggest, even if the expected outcome is in fact worse than with the “quack” treatment, then it will not be all my fault if things go bad. I will not be blamed by others, or put in jail for negligent homicide. The consolation of peers will help me to console myself that I did all that could and should have been done. If you understand how to value options, then you understand that the value of hewing to convention is increasing in uncertainty. If I am certain that the “quack” treatment will work, I will lose nothing by showing the imposing men in white coats an upraised middle finger. But even if I am quite sure the average outcome under the quack treatment is better than with the conventional treatment, if there is sufficient downside uncertainty surrounding the outcomes, the benefit of convention will come to exceed the cost.

If you knew with perfect certainty that a conventional cancer treatment had a 10% likelihood of success and a crazy unconventional “quack” treatment had a 10.1% likelihood, which one would you choose for a loved one? I’d like to think I’m good enough and courageous enough to choose door number two. But I like to think a lot of things. In the real world, of course, we never know with perfect certainty that conventional beliefs are wrong, and we can always console ourselves that we are imperfect judges and perhaps it is the best strategy to defer to social consensus. In any given case, that may be true or it may not be. But it is certainly convenient. It allows us to collect a lot of extremely valuable put options, and compels us to believe and behave in very conventional ways.
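
Here's the 10% vs. 10.1% choice with blame folded into the payoff (the numbers are invented; "payoff" here just means how the decision feels to the person making it):

    # A success is worth 100 either way. A failure costs -20 if you went with the
    # respected doctors ("you did everything you could") but -80 if you chose the
    # quack and it didn't work out ("it's your fault"). Numbers are invented.
    p_conventional, p_quack = 0.10, 0.101   # the quack treatment really is (slightly) better

    medical_only_conv  = p_conventional * 100
    medical_only_quack = p_quack * 100
    with_blame_conv  = p_conventional * 100 + (1 - p_conventional) * (-20)
    with_blame_quack = p_quack * 100 + (1 - p_quack) * (-80)

    print(f"Medical odds alone:  conventional {medical_only_conv:.1f}  vs  quack {medical_only_quack:.1f}")
    print(f"With blame included: conventional {with_blame_conv:.1f}  vs  quack {with_blame_quack:.1f}")
    # 10.0 vs 10.1 flips to -8.0 vs -61.8: the put option of having done the
    # defensible thing swamps a small edge in expected outcomes, and the gap
    # widens as the downside (the blame differential) grows.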

I see rational astrologies everywhere. I think they are the stuff that social reality is made of, bones of arbitrary belief that masquerade as truth and shape every aspect of our lives and institutions.

We crave rational astrologies very desperately, so much that we habitually and quite explicitly embed them into our laws. Regulations often provide “safe harbors”, practices that may or may not actually live up to the spirit and intent of the legislative requirement, but which if adhered to immunize the regulated parties from sanction. People grow quickly indifferent to the actual purpose of these laws but very attentive to the prerequisites for safe harbor. Rational astrologies are conventional beliefs adherence to which elicits provision of safe harbor by the people around us, socially if not legally.

The inspiration for this post was a wonderful conversation (many moons ago now) between Bryan Caplan and Adam Ozimek on the value of “sheepskin”, a college degree. Caplan is a proponent of the “signalling model” of higher education, which suggests that rather than “educating” students in a traditional sense, college provides already able students with a means of signalling to employers preexisting valuable characteristics like diligence and conformity. Ozimek is sympathetic to the traditional “human capital” story, that we gain valuable skills through education and achievement of a college degree reflects that accomplishment. Both of them are trying to explain the wage premium that college graduates enjoy.

I’m pretty agnostic to this debate — I think people really do learn stuff in college, but I think attaining a degree also reflects and signals all kinds of preexisting characteristics about the sort of people who do it. I’d add the “social capital” story to the mix, that college students make connections, with peers, faculty, and institutions, that increase their likelihood of being placed in high-wage positions. (And, I’d argue, actual graduation is an important consummation of membership in “the club”, so post-college social and institutional connections are weaker for those who don’t collect their sheepskin.)

But even if none of those stories were true, “rational astrology” would be sufficient to explain a large college wage premium.

Suppose that it is merely conventional to believe that college graduates are better job candidates than non-graduates, and that graduates of high-prestige colleges are better than graduates of low prestige colleges. Suppose that in fact, the distribution of degrees is wholly orthogonal to the ability of job candidates to succeed, but that outcomes are uncertain and there is no sure predictor of employee success.

Consider the situation of a hiring decisionmaker at a large firm. She reads through a lot of resumés and interviews candidates. She develops hunches about who is and isn’t good. Our decisionmaker has real ability: the people she thinks are good are, on average, substantially better than the people she thinks are not so good. But there is huge uncertainty surrounding hiring outcomes. Often even people in her “good” group don’t work out, and each failed hire is an emotionally and financially costly event for the firm. How will our hiring agent behave? If she is rational, whenever possible, she will choose people from her “good” pile who also went to prestigious colleges. She will be entirely indifferent to the actual untruth of the claim that Harvard grads are “good”. She will choose the Harvard grad whenever possible, because if a Harvard graduate doesn’t work out, she will be partially immunized from blame for the failure. If she had chosen a person who, according to her judgment, was an equally promising or even better candidate but who had no college degree, and that candidate didn’t work out, her choice would be difficult to defend and her own employment might be called into question. Thus, whenever possible, a hiring decisionmaker rationally chooses the Harvard man over similar or even slightly more promising candidates without the credential, and would rationally do so even if she understands that a Harvard degree contains no information whatsoever about the quality of the candidate, but that it is conventional to pretend that it does. People hiring for more prestigious and lucrative positions attract larger pools of applicants, and have greater ability to find Harvard grads not very much less promising than other applicants, and so rationally hire them. People hiring for less remunerative positions attract fewer prestige candidates of acceptable quality, and so must do without the valuable protection a candidate’s nice degree might confer. Candidates with prestige degrees end up disproportionately holding higher paid jobs, for reasons that have nothing to do with what the degree says about them, and everything to do with what the degree offers to the person who hires them.
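
A small simulation of that story (all parameters invented): the degree carries literally zero information about quality, the manager's read of candidates is noisy, and she will give up a little perceived quality to get the defensible hire.

    import random

    rng = random.Random(7)

    def candidate():
        """True quality, plus a credential statistically unrelated to it."""
        return {"quality": rng.gauss(0, 1), "harvard": rng.random() < 0.2}

    def hire(pool, blame_discount=1.0):
        """Rank candidates by a noisy read of quality, but accept up to
        blame_discount of perceived quality to get the credentialed, defensible hire."""
        perceived = sorted(((c["quality"] + rng.gauss(0, 1), c) for c in pool),
                           key=lambda pc: -pc[0])
        best_score, best = perceived[0]
        for score, c in perceived:
            if c["harvard"] and score >= best_score - blame_discount:
                return c
        return best

    # Prestigious jobs draw big applicant pools; modest jobs draw small ones.
    for label, pool_size in [("high-wage job", 50), ("low-wage job ", 5)]:
        hires = [hire([candidate() for _ in range(pool_size)]) for _ in range(2000)]
        share = sum(h["harvard"] for h in hires) / len(hires)
        print(f"{label}: {share:.0%} of hires hold the credential (base rate: 20%)")
    # The credential is pure noise with respect to quality, yet it ends up
    # over-represented in the better-paid jobs, and more so the bigger the pool,
    # because it protects the person doing the hiring rather than saying anything
    # about the person hired.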

This is not rocket science. It is a commonplace to point out that “no one ever got fired for going with [ Harvard / Microsoft / IBM / Goldman Sachs ]”. An obvious corollary of that is that it would be very valuable to become the thing that no one ever got fired for buying. One way of becoming the safe choice is by being really, really good, sure. But I don’t think it’s overly cynical to suggest that actual quality is not always well correlated with being the unimpeachable hire, and that once, somehow, an organization gains that cachet, a lot of hiring occurs that is somewhat insulated from the actual merit of the choice. [ Harvard / Microsoft / IBM / Goldman Sachs ] credentials may be informative of quality, or they may not, but they are very valuable regardless, once it becomes conventional to treat them as if they signify quality.

Rational astrologies are very difficult to dislodge. People who have relied upon them in the past have a stake in their persisting. More importantly, present and future decisionmakers require safe harbors and conventional choices, and unless it is clear what new convention is to be coordinated around, the old convention remains the obvious focal point. Very visible anomalies are insufficient to undo a rational astrology. There needs to be a clear alternative that is immune to whatever called the old beliefs into question. The major US ratings agencies are a fantastic example. They could not have performed more poorly during last decade’s credit bubble. But regulators and asset managers require some conventional measure of quality around which to build safe harbors. Lacking a clearly superior alternative, we prefer to collectively ignore indisputable evidence of inadequacy and corruption, and have doubled down on the convention that ratings are informative markers of quality. Asset managers still find safety in purchasing AAA debt rather than unrated securities on which they’ve done their own due diligence. We invent and sustain astrologies because we require them, not because they are true.

The process by which rational astrologies are chosen is the process by which the world is ruled. The United States is the world’s financial power because it is conventional to pretend that the US dollar is a safe asset, and so long as it is conventional it is true and so the convention is very difficult to dislodge. Economics as a discipline has not performed very well from the perspective of commonsensical outside observers like the Queen of England. But the conventions of economic analysis are the rational astrology of technocratic government, and decisions that can’t be couched and justified according to those conventions cannot be safely taken by policy makers. Policy is largely a side effect of the risk-averse behavior of political careerists, who rationally parade their adherence to this moment’s conventions as enthusiastically as noblemen deferred to pronouncements of a court astrologer in an earlier time. We can only hope that our era’s conventions engender better policy as a side-effect than attention to the movement of the stars. (As far as I am concerned, the jury is still out.) But it is not individuals’ independent judgment of the wisdom of these conventions that guides collective behavior. Our behavior, and often our sincere beliefs, are largely formed in reaction to the terrifying accountability that comes with making consequential choices unconventionally. Our rational astrologies are at the core of who we are, as individuals and as societies.