
Terminal demographics

About a week ago, I argued that the Great Inflation of the 1970s was largely a demographic phenomenon. That claim has provoked a lot of debate and rebuttal, in the comment sections of several posts here, and elsewhere in the blogosphere. See Kevin Erdmann, Edward Lambert [1, 2, 3], Marcus Nunes [1, 2], Steve Roth [1, 2], Mike Sax, Karl Smith [1, 2, 3], Evan Soltas, and Scott Sumner [1, 2], as well as a related post by Tyler Cowen. I love the first post by Karl Smith. My title would have been, “Arthur Burns, Genius.”

These will be my last words on the subject for a while, though of course they needn’t be yours. To summarize my view, I dispute the idea that the United States’ Great Inflation in the 1970s resulted from errors of monetary policy, errors that wise central bankers could have avoided at modest cost. During the 1970s, the simultaneous entry of baby boomers and women into the workforce meant the economy had to absorb workers at more than double the typical rate to avoid high levels of unemployment. This influx was effectively exogenous — it was not like a voluntary migration, provoked by the existence of opportunities. Absorption of these workers required a fall in real wages and some covert redistribution to new workers, which the Great Inflation enabled.

I don’t dispute that monetary contraction could have prevented the inflation of the 1970s. But under the demographic circumstances, the cost of monetary contraction in terms of unemployment and social stability would have been unacceptably high. As a practical matter, monetary policy was impotent, and would have been even if Paul Volcker had sat in Arthur Burns’ chair a decade earlier. I am perfectly fine with Evan Soltas’ diplomatic rephrasing of my position, that perhaps inflation remained a monetary phenomenon, but that the 1970s generated a “worse trade-off [for policymakers that] was not a monetary phenomenon”. I don’t claim that monetary policy was “optimal” during the period. Policy is never optimal, and in an infinite space of counterfactuals, I don’t doubt that there were better paths. But I do think it is foolish to believe that the policy decisions of the early 1980s would have had the same success if attempted during the 1970s. Monetary contraction was tried, twice, and abandoned, twice, in the late 60s and early 70s. There was and is little reason to believe that just holding firm would have successfully disinflated at tolerable cost in terms of employment and social peace. I don’t claim that demographics was the only factor that rendered disinflation difficult. With Arthur Burns (ht Mark Sadowski) and Karl Smith, I think union power may have played a role. It also mattered, again with Smith, that “unemployment was poisonous to the social fabric and the social fabric was already strained, most notably by race relations”. Monetary contraction succeeded — third time’s the charm! — when the demographic onslaught was subsiding, when Reagan cowed the unions, when the country was at relative peace. It might not have been practical otherwise.

I’ve had a wonderful nemesis and helper the last few days in commenter Mark Sadowski, who challenged me to provide evidence for a demographic effect on inflation in international data. Mostly I made a fool of myself (twice actually, and not unusually). Looking at the graphs — after Sadowski helped me get them right! — I see support for a relationship between labor force demographics and inflation in the United States, Japan, Canada, and Finland. Italy is a strong counterexample — it disinflated in the middle of its labor boom. The rest you can squint and tell stories about. (I now have 14 graphs.) Italy notwithstanding, the claim “it’s hard to disinflate when labor force growth is strong” looks more general than “inflation correlates with labor force growth”. Decide for yourself.

Sadowski is not much impressed by my demographic view of the Great Inflation. But he paid me the huge compliment of devoting time and his considerable expertise to testing my speculations. He writes

I took your set of eight nations plus the four from my original set of counterexamples that you excluded (West Germany, Ireland, Luxembourg and the Netherlands), combined civilian labor force data from the OECD with CPI from AMECO, and computed 5-year compounded average civilian labor force growth rates and CPI inflation rates. The time periods ran from 1960-65 through 2007-12.

Then I regressed the average CPI inflation rates upon the average labor force growth rates. Five of the twelve were statistically significant, and all at the 1% level. The average civilian labor force growth rate and average CPI inflation rate were positively correlated in the U.S. and Japan, and negatively correlated in Spain, the Netherlands and Luxembourg.

Next I conducted Granger causality tests using the Toda and Yamamoto method on the level data over 1960-2012 for the U.S., Japan, Spain, the Netherlands and Luxembourg.

The U.S. data is cointegrated, so although the majority of lag length criteria suggested using only one lag, since Granger causality in both directions was rejected at a length of one, I went to two lags based on the other criteria. The results are that CPI Granger causes civilian labor force at the 10% significance level but civilian labor force does not Granger cause CPI.

In Japan’s case civilian labor force Granger causes CPI at the 1% significance level but CPI does not Granger cause civilian labor force.

Granger causality was rejected in both directions for the other three countries.

In short, out of the 12 countries I looked at, only five have a significant correlation between average civilian labor force growth and average CPI inflation, and only two of five have a positive correlation. Of the five, only the two with positive correlation demonstrate Granger causality. But in the US case the direction of causality is in the opposite direction to that which you predict. Only Japan seems to support the kind of story you are trying to tell.

and follows up

I added your set of seven new nations (Canada, Finland, Greece, Italy, New Zealand, Switzerland and Turkey) plus seven additional nations (Belgium, Denmark, Iceland, Korea, Norway, Poland and Portugal) to the set of 12 that I commented on last time. I did the same analysis as I did last time for this new set of 14, that is I combined civilian labor force data from the OECD with CPI from AMECO, and computed 5-year compounded average civilian labor force growth rates and CPI inflation rates. The time periods ran from 1960-65 through 2007-12 with the exception of Korea which started with 1967-72. I regressed the average CPI inflation rates upon the average labor force growth rates. Ten of the fourteen were statistically significant, and all at the 1% level with the exception of Poland which was at the 10% significance level. The average civilian labor force growth rate and average CPI inflation rate were positively correlated in Canada, Denmark, Finland, Greece, Iceland, Italy, Korea, New Zealand and Norway and negatively correlated in Poland.

Next I conducted Granger causality tests using the Toda and Yamamoto method on the level data over 1960-2012 (except for Korea which was over 1967-2012) for the ten countries which had statistically significant correlations.

In Finland, Poland and Korea civilian labor force Granger causes CPI at the 5% significance level but CPI does not Granger cause civilian labor force. In Greece CPI Granger causes civilian labor force at the 1% significance level but civilian labor force does not Granger cause CPI. In Iceland CPI Granger causes civilian labor force at the 10% significance level but civilian labor force does not Granger cause CPI.

So out of the 26 countries I have looked at, fifteen have a significant correlation between average civilian labor force growth and average CPI inflation with eleven of the fifteen having a positive correlation. Of the eleven with positive correlation six demonstrate Granger causality with three showing one way causality from civilian labor force to CPI and three showing one way causality from CPI to civilian labor force. Of the four with negative correlation one demonstrates Granger causality from civilian labor force to CPI.

Only three countries (Japan, Korea and Finland) out of the 26 support the kind of story you are trying to tell.

Mostly I am very grateful to Sadowski for his work.

Alas, I am not at all dissuaded from my view. At the margin I’m even a bit encouraged. The direction of Granger causality is not very meaningful here. (Granger causality, in the econometric cliché, is not causality at all but a statement about the arrangement of correlations in time. Expectations matter and near-future labor force growth is easy to predict, so it is no problem for my story if CPI changes precede labor force changes.) I see some support for my thesis in the significant and usually positive correlations Sadowski observes in many countries. However, much as I am grateful, I don’t take this work as strong evidence either way. Sadowski overflatters my graphical analysis technique by translating it directly to an empirical model. Collapsing growth into overlapping 5-year trailing windows smooths out graphs that would otherwise just look like choppy tall-grass noise. But it creates a lot of autocorrelation unless the data is chunked into nonoverlapping periods. (Sadowski may well have done that! It’s not clear from the write-ups.) More substantively, to generate a good empirical model we’d have to think hard about other influences and controls that should be included. One wouldn’t model inflation as always and everywhere a univariate function of domestic labor force growth.
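
For readers who want to poke at this themselves, here is a minimal sketch of the kind of exercise Sadowski describes, under my own assumptions rather than his exact setup: a hypothetical per-country CSV of annual civilian labor force and CPI levels, compounded growth computed over nonoverlapping 5-year blocks (to sidestep the overlapping-window autocorrelation I just worried about), a univariate regression of average inflation on average labor force growth, and a plain bivariate Granger test rather than the Toda-Yamamoto augmentation he used.

```python
# A minimal sketch (my assumptions, not Sadowski's exact setup): hypothetical
# annual data with columns year, clf (civilian labor force), cpi (price index).
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import grangercausalitytests

df = pd.read_csv("country_annual.csv").set_index("year").sort_index()

# Compounded average annual growth over NONoverlapping 5-year blocks,
# to avoid the autocorrelation that overlapping trailing windows create.
blocks = df[["clf", "cpi"]].iloc[::5]
growth = (blocks / blocks.shift(1)) ** (1 / 5) - 1
growth = growth.dropna().rename(columns={"clf": "lf_growth", "cpi": "inflation"})

# Univariate regression of average inflation on average labor force growth.
ols = sm.OLS(growth["inflation"], sm.add_constant(growth["lf_growth"])).fit()
print(ols.summary())

# A plain bivariate Granger test on annual growth rates -- NOT the
# Toda-Yamamoto augmentation Sadowski used, just a rough check of whether
# labor force growth (second column) helps predict inflation (first column).
annual = df[["cpi", "clf"]].pct_change().dropna()
annual.columns = ["inflation", "lf_growth"]
grangercausalitytests(annual, maxlag=2)
```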

Maybe my view has been definitively refuted and I’m just full of derp! You’ll have to judge for yourself. In any case, I thank Sadowski for the work and food for thought, and for help and correction as I made a fool of myself.

I want to address some smart critiques by Evan Soltas:

Consider a standard Cobb-Douglas production function: Y = zK^αL^β. Consider a large and sustained shock to L, as Waldman shows. Consider, also, that the level of K has some rigidities, such that the level of K is not always optimal given the level of L, but that in a single shock to L, K will eventually approach the optimal level of K over some lag period. With this background, the marginal productivity of labor should drop and remain low over that lag period.

Now assume that real wages tend towards the marginal productivity of labor with a lag. What we should expect to see is that real wages should drop in the 1970s. Why? Surplus labor reduces the bargaining power of workers relative to that of employers. We don’t see that; real wages begin to fall after monetary policy tightens in the 1980s. My Cobb-Douglas model is also a bit limiting, but if anything, we might expect to see downward pressures on the labor share of income β. We don’t see that either — as compared to later periods, the 1970s appears to be a time of slightly stronger labor-share performance.

Under Soltas’ nice description of what I’ll call the “first order” effects of a demographic firehose, we should indeed expect real wages to fall relative to an ordinary population growth counterfactual. Did they? Yes, I think so. Let’s graph a few series.

The blue line is one of the series that Soltas graphed, CPI-adjusted hourly wages of nonsupervisory employees. They fell during the course of the 1970s in absolute terms. The black line is the broadest measure of hourly compensation I could compute, CPI-adjusted employee compensation divided by hours worked. It is essentially flat over the course of the decade, breaking a strong prior uptrend. The red line is CPI-adjusted compensation per employee. It fell in absolute terms over the decade.
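
For concreteness, here is roughly how one might pull comparable series. This is a sketch under my assumptions: AHETPI and CPIAUCSL are the usual FRED mnemonics for nonsupervisory hourly earnings and the CPI, and COMPNFB (nonfarm business compensation per hour) serves as a ready-made stand-in for my homemade broad compensation measure, not the exact series graphed above.

```python
# Sketch only: real wage series roughly comparable to those described above.
# Assumed FRED mnemonics: AHETPI (avg hourly earnings, production and
# nonsupervisory employees), CPIAUCSL (CPI), COMPNFB (nonfarm business
# compensation per hour, an index -- a stand-in, not my exact construction).
import pandas as pd
from pandas_datareader import data as pdr

raw = pdr.DataReader(["AHETPI", "CPIAUCSL", "COMPNFB"], "fred",
                     start="1964-01-01", end="1990-01-01")
q = raw.resample("QS").mean()  # put the monthly series on a quarterly basis

idx = pd.DataFrame({
    "real_nonsupervisory_wage": q["AHETPI"] / q["CPIAUCSL"],   # the "blue line" analogue
    "real_comp_per_hour": q["COMPNFB"] / q["CPIAUCSL"],        # broad compensation analogue
}).dropna()

idx = 100 * idx / idx.loc["1970-01-01"]      # index both series to 1970Q1 = 100
print(idx.loc["1970":"1981"].iloc[::4])      # one observation per year through the decade
```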

Soltas suggests that compensation did not fall based on a graph of CPI-adjusted average manufacturing sector hourly wage, which rose over the 1970s. But manufacturing was simply an unrepresentative sector. (Those unions again?)

Overall, I think it’s fair to say that real wages did fall. They certainly fell relative to the prior trend, and probably in absolute terms. Still, looking at that black line, you might say they fell a bit less or more slowly than you might expect. More on that below.

Soltas also points to a strong labor share during the 1970s as disconfirmative, but that’s hard to interpret. Under the “first-order” demographic firehose story, we expect real wages per unit of labor to fall, but the number of units paid to increase. Which effect would dominate would depend on details of the production function. (It’s not clear that labor share was strong in the 1970s. Labor share seems to have declined over the decade from unusually high levels in the late 1960s.)
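
To make that arithmetic concrete, here is a toy Cobb-Douglas calculation with parameters I am simply making up (α = 0.3, β = 0.7, K held fixed): a sudden 25% jump in L lowers the marginal product of labor, which is the competitive real wage, even while total labor compensation βY rises, and the labor share itself stays pinned at β.

```python
# Toy illustration, parameters assumed (alpha=0.3, beta=0.7, z=1, K fixed):
# a 25% jump in L lowers the marginal product of labor (the competitive real
# wage) while total labor compensation beta*Y still rises, and the labor
# SHARE stays pinned at beta. First-order effects say little about the share.
alpha, beta, z, K = 0.3, 0.7, 1.0, 100.0

def output(L):
    return z * K**alpha * L**beta

def mpl(L):
    # marginal product of labor: dY/dL = beta * z * K^alpha * L^(beta - 1)
    return beta * z * K**alpha * L**(beta - 1)

for L in (100.0, 125.0):           # before / after a 25% labor force surge
    Y = output(L)
    print(f"L={L:5.0f}  Y={Y:6.1f}  real wage (MPL)={mpl(L):.3f}  "
          f"total labor comp={beta * Y:6.1f}  labor share={beta * Y / Y:.2f}")
```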

Let’s continue with the rest of Soltas’ critique:

Let’s simplify. Waldman’s thesis, stated uncharitably, is that a large increase in the labor supply acted as an inflationary pressure on the economy of the 1970s. I don’t see how that works. Show me the model. Less uncharitably, it forced a worse trade-off on the Fed, specifically forcing higher unemployment or higher inflation. I don’t see how that works, either. Unemployment might have been higher, but that would have put a deflationary pressure on the central bank, all else equal, given the exogenous surge in the labor supply.

Think about it this way: Whatever the Fed of the 1970s did in monetary policy, it faced an unstoppable surge in the labor supply. That should be a tremendous headwind against wage increases and broader inflation. You can argue that the capital stock wasn’t ready for the higher labor supply, and I would grant that point, but that should translate into a downward pressure on wages, not an upward pressure on inflation. It’s not an adverse supply shock.

Now we come to the “second order” story.

The piece of the model that Soltas isn’t seeing is rigidity. First, there is that most conventional rigidity, nominal wage stickiness. If real wages must decline for the labor market to clear, but nominal wages are sticky downward, then the only way to avoid unemployment is to tolerate inflation.

But in addition to nominal rigidities, there are real rigidities. All those kids in the 1970s graduating high school or college and entering the labor force had expectations about the kinds of lives they should be able to live when they got a job. They would not have been satisfied with increasing dollar wages, if those dollars could not support starting lives and families of their own, independent of Mom and Dad. Middle class labor force entrants then expected to be able to support a home, a car, even start a family on a single income.

Rigidities aren’t forever. Those expectations have evaporated over the past 40 years. That is the famous Two Income Trap, under which the necessities of “ordinary life” as most Americans define it now require two, rather than just one, median earner. But Rome wasn’t destroyed in a day. Baby boomers entered the labor force with real expectations. They were not mere price takers. But the declining marginal productivity in Soltas’ Cobb-Douglas production function meant boomers could not earn sufficient real wages to meet those expectations in a hypothetical perfect market. They faced precisely a supply shock [1], arising from each boomer’s own diminished capacity to supply relative to prior cohorts, for reasons entirely beyond their control. They would not be very happy about it. A tacit promise would have been broken.

To avoid social turmoil, the political system had to find ways of not disappointing the budding boomers’ expectations too abruptly. That implied some redistribution, from older workers and capital holders to new workers. Among other things, inflation can be a means of engineering covert redistributions. This is the part, I think, that puzzles Scott Sumner. High inflation reduces the real purchasing power of people living off of interest, and of people already employed who are slow to negotiate raises or lack the bargaining power to do so. That foregone purchasing power liberates supply, which becomes available to subsidize the real wages of the newly employed. Real GDP did not collapse during the 1970s, but the inflation created losers. The share of production that losers lost went to someone. I think that, among other mischief, it helped subsidize the employment of new workers at real wages below, but not too far below, what the generation prior had enjoyed.

We’ve seen that Arthur Burns was a genius, but I don’t think that any of this was conscious strategy. Like most real-world policymakers, Burns felt his way towards the least painful solution, then used his considerable intellect to justify what he found himself doing. When Burns tightened money, unemployment rose sharply and Greg Brady got mad that the jobs he was refusing wouldn’t pay enough to cover wheels and a pad across town from his annoying sisters. He started to smoke dope, get into politics, frighten his parents and neighbors. (We won’t even talk about J.J and the Panthers.) When money was loose and inflation roared, times were bad for everyone, which helped ease the sting. Greg’s job offers still paid less in real terms than he’d have hoped, but the money illusion made the numbers sound OK, and he could stretch the salary to move away, albeit into a smaller place than he had hoped. Carol and Mike breathed a sigh of relief, as did their congressman, Richard Nixon, and Arthur Burns.

If you’ve read all of this, there is something terribly wrong with you. I thank you nonetheless. As I said at the start, this will be my last episode of “That ’70s Show”, at least for a while. (Thanks @PlanMaestro!) I want to move on to other things. Do feel free to have the last word, in the comments or elsewhere.


[1] More abstractly, fix the median new worker’s market clearing real wage as our numeraire. Prior to a population boom, the median new worker supplies her labor for a particular bundle of goods and services. But in the population boom, because of the diminishing marginal productivity of labor (holding K constant), the median new worker cannot produce enough in real terms to purchase the same bundle. The worker’s demand curve has not changed at all: she remains willing to trade one unit of salary for one unit of consumption bundle, just as her predecessor did. But that trade is no longer available to her, because she herself is unable to supply real production in sufficient quantity to purchase those goods, despite expending her fullest effort. This is a supply shock, from the worker’s perspective, not a demand shock. The population boom is a shock to her ability to supply, which would be reflected by inflation of the cost of goods relative to the numeraire of her labor.

Update History:

  • 11-Sept-2013, 1:55 p.m. PDT: “Congressman” → “congressman”; removed stray “less” from “her fullest less effort”; also, a while back, corrected (again) my persistent misspelling of Sadowski’s name, “Sandowski” → “Sadowski”

Demographics and inflation: international graphs

Update: Sometimes using a “data source of convenience” leads one badly astray.

Mark Sadowski points out that the price level in the Penn World Tables represents an attempt at a global price level, rather than a country-specific domestic index, based upon the price of US GDP. Thus changes in the real exchange rate between a country and the US show up as changes in the price level, but don’t always represent “inflation” or “deflation” within the local economy. Combining these flawed price levels with what we already knew to be a flawed proxy for labor force, I think the graphs below are too noisy to be meaningful. This renders much of my discussion, which was based on the flawed graphs, essentially bullshit.

At Sadowski’s suggestion, I’ve regenerated graphs where I could based entirely on data from the OECD’s Main Economic Indicators database, as hosted by FRED. (Sadowski suggests AMECO data for CPI, but I’ve not had a chance to look at that yet. I’ve used the OECD CPI series.) The OECD database (at least as hosted by FRED) is not complete. There were surprising omissions. In particular, the lack of UK and Austrian Civilian Labor Force series prevented me from reproducing those graphs from the original post.

I’ve generated graphs for a larger set of countries than I did originally. (I basically generated graphs for all countries for which both OECD data series were reasonably complete.) In an Appendix to this post, I’ll first place the six graphs from the original piece I could reproduce, and then all the rest. I’ll defer any discussion to a later post. I’m going to strike through the original text, and add cautionary watermarks to the flawed graphs, in order to emphasize the essentially bullshitty nature of the original discussion.

The two previous posts on demographics and the Great Inflation remain very much not bullshit. They are…

Not a monetary phenomenon
Agreeing in different languages

See also a whole lot of excellent discussion of this topic by others; links are here.

Skip to the corrected graphs in the Appendix.


Mark Sadowski (in comments here and here) and Scott Sumner challenge me to support my thesis that inflation is related to demographics with international data. Doing that right would be hard — inflation correlates across borders, as did World War II and its effect on postwar fertility. Even if international data appeared to support my hypothesis perfectly, one could argue that the whole thing is just a correlated coincidence. I’m not going to make a project of teasing out causality econometrically. But the least I can do is show you some graphs.

Graphs in economics are always Rorschach Tests. They are invitations to confirmation bias. But I’m going to go out on a limb and say there’s some evidence for my thesis in these pictures. You, of course, will decide for yourself.

You will be looking at data from Penn World Tables version 8.0, which I am supposed to cite as

Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2013), “The Next Generation of the Penn World Table” available for download at www.ggdc.net/pwt

This was a data source of convenience. It’s designed for international comparisons, and includes population and price-level data beginning in 1950. I have graphed 5-year trailing growth rates of the price level (i.e. 5-year compounded average inflation rates) against 20-year lagged population growth rates, also trailing 5-year averages. Lagged population growth is an imperfect proxy for what I really want, which is contemporaneous labor force growth. It misses entirely the effect of women’s entry into the workforce, which I think is an important part of the story, and ignores variations in the rate of labor-force exit. Nevertheless, it is the best I could (easily) do.

There’s nothing special about the 5-year lookback window or the 20-year lag. I haven’t data-mined them. In my initial post I chose a 10-year window, and it seemed to work. This time I tried a 5-year window for the hell of it. It’s still fine. Note that growth rates are reported in gross terms, that is, “1.1” refers to a 10% growth rate, not 110%.
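
If you’d like to build these series yourself, here is a minimal sketch of the construction, assuming a hypothetical annual table per country with a price level column (“pl”) and a population column (“pop”):

```python
# Sketch of the series construction described above (assumed columns:
# year, pl = price level, pop = population; annual data, PWT-style).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("pwt_country.csv").set_index("year").sort_index()

def gross_trailing_growth(series, window=5):
    # 5-year trailing compounded average growth, in GROSS terms
    # (1.10 means 10% per year, matching the graphs' axes).
    return (series / series.shift(window)) ** (1.0 / window)

inflation = gross_trailing_growth(df["pl"])               # price level growth
pop_growth = gross_trailing_growth(df["pop"]).shift(20)   # then lag 20 years

pd.DataFrame({"5yr inflation": inflation,
              "lagged 5yr population growth": pop_growth}).plot()
plt.show()
```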

Without further ado, here is my base case, the United States:

Unsurprisingly, as this is the economy from which I drew the thesis, it matches up pretty well.

Let’s try Australia next:

Again, to me, the smoothed inflation rate and the population growth rate follow one another during the disinflation rather strikingly. The inflation changes seem to lead the population growth rate changes a bit, which is somewhat surprising. But the population growth is 20-year lagged. Inflation in 1970 isn’t likely to be causing changes in the 1950s birthrate. Here, as in many of the graphs, I’m going to write off this lead of price changes as due to some combination of an ill-chosen lag, missing effects of workforce composition changes, and forward-looking policymakers anticipating near-future entrants to the labor force. Reasonable or confirmation bias? You be the judge!

Next let’s look at Sweden.

Sweden is one of four countries I looked at that Sadowski cited as a counterexample. But once you look at time series rather than individual data points, I think it’s okay. Again, the price level leads a bit, but otherwise the two series track one another fairly well.

The UK is the first country that contains a really interesting anomaly:

Note the downspike in inflation in the mid-1980s, entirely unmatched by any fall in population growth. I think this anomaly is very easy to explain, and very informative. Concomitant with the Volcker disinflation in the US, the Bank of England allowed interest rates to climb very high. Sure enough, tight money killed inflation! But inflation seems to be very zombie-like when the labor force is rapidly growing! You can kill it, but unless you painfully hold it down, it comes right back. That was the US experience in the late 1960s and 1970s: each attempt to tighten money (reflected in high interest rates) provoked a sharp bout of unemployment, until the Fed cried uncle. Was the problem insufficient grit and determination on the part of the Arthur Burns Fed? Or did Volcker get lucky, in that his tightening cycle happened to coincide with the peak growth rate of the US labor force? Maybe a bit of both? (I know we love to fight, but most hypotheses are not mutually exclusive!)

Anyway, the UK experience was very clear. There was a lot of grit and determination in the early 1980s, it was temporarily effective, but it didn’t take. The inflation zombie rose again. It was slain for good when, perhaps coincidentally, the lagged population growth rate collapsed.

How about Japan:

I don’t know what to say about Japan. In a general sense, the two series track. Population growth collapsed and the economy went into deflation. But the two series certainly don’t wiggle together in any obvious way. Totally a Rorschach Test.

Next let’s look at Austria.

Again, you can call this one either way. On the one hand, Austria’s disinflation corresponded very well to a collapse of putative workforce growth. But the initial inflation did not. I could wave my hands about commodity prices and the Great Inflation being a global phenomenon, so that explains the early start. But you know, confirmation bias. You decide, I’ll call it a draw.

France graphs like the surprisingly sexy love child of the UK and Austria:

Like Austria, France suffered from embarrassing premature inflation. It did the job, though! Babies were conceived and born in the 20-year-lagged time machine, and eventually there was a nice boom of workforce entrants, timed to match the Volcker disinflation in the 1980s. As in the UK, there was a satisfying downspike and then a zombie-like return of price level growth. Inflation then fell roughly — very roughly! — coincident with the decline in population growth. Like Austria, the nadirs of the two series match very well, but any co-meandering of the rest is less clear. There is something interesting in the tail of the France graph. Post-2000, inflation rose sharply, in a manner that doesn’t seem at all proportionate to population growth. We’ll see there was a similar spike in Spain. (Also in Austria, but that did track population growth.) One explanation for these spikes would be post-Euro capital flows: Price level growth is often attributed to foreign lending, and both France and Spain were current account deficit nations. (Do foreign capital flows fall under the definition of a “monetary phenomenon”? You tell me!)

So, Spain. Let’s do Spain:

Of the countries I examined, Spain was the one that least fit my story. Spain disinflated before population growth collapsed, and reinflated just when it did. So Spain is a legitimate counterexample. I have my stories: There may have been unusual political will towards disinflation in high-population-growth 1990s Spain, so that the country could qualify for Euro entry. Once Spain entered the Euro, it was a prime destination for speculative lending. The timing of these events was completely decoupled from Spain’s population dynamics.

But, confirmation bias is a real danger. I have to concede that, on its face, Spain’s inflation dynamics look nothing like what I would expect from my demographic hypothesis.

Overall, I think my claim that labor force expansion imparts an inflationary bias is supported by these graphs. On the Great Inflation and its unwind, I score four clear “wins”, three iffy cases, and one clear loss. These were the only countries I examined. (I’m not cherry-picking from a larger group.)

Throughout the graphs, there are some recurring themes. Inflation downspikes in the 1980s are nearly universal, but they “take” as persistent disinflation only when ratified by falling labor force growth. The relationship between disinflation and slowing labor force growth seems more precise than the relationship between labor force growth and the initial inflation. Current account deficit countries (including the UK and Australia and Eurozone countries, but not the US) tended to experience inflation unrelated to population growth in the period following Y2K.

I think demographics has had a very great deal to do with inflation dynamics. My cards are on the table. Obviously this is imperfect evidence. So what do you think?


p.s. The messy spreadsheet from which these graphs were generated is here. R&R my ass — I screw this kind of thing up all the time.

Appendix: Please see the update at the top of this post. The spreadsheet from which these new graphs were generated is here. These are graphs of 5-yr trailing growth rates, annualized. They are presented as gross rates, so that 1.2 means a 20% growth rate in more conventional terms.

Graphs from the original post, redone (corrected!):

[UK omitted, missing data]

[Austria omitted, missing data]

Other countries:

Update History:

  • 8-Sept-2013, 8:30 a.m. PDT: Corrected name (twice), “Sandowski” → “Sadowski”
  • 9-Sept-2013, 1:40 a.m. PDT: Retraction via bold update at beginning of piece + strike-through of original, recomputed graphs with more appropriate data in the Appendix.
  • 9-Sept-2013, 4:15 a.m. PDT: Added “BAD” watermarks to the original graphs.
  • 11-Sept-2013, 10:00 p.m. PDT: “to” → “too”

Agreeing in different languages

Scott Sumner replies to my claim that the Great Inflation of the 1970s wasn’t a monetary phenomenon by saying, yes, in fact it was.

But reading his post, I don’t see any substantive inconsistency between his views and mine at all. He argues that the Fed overstimulated, because if it had been trying only to prevent unemployment, it would simply have stabilized nominal wage growth. Instead it tolerated — or caused, depending on how you tell the story — wage inflation in excess of its long-term growth rate. My view is that stable nominal wages and full employment (under the Fed’s more robust 1970s definition of full employment) were simply inconsistent, so stabilizing nominal wages would not have been an effective strategy. Decent employment of a labor force growing faster than the economy could productively employ it was only possible via a combination of falling real wages and a cross-subsidy from creditors, that is, by high inflation.

I think Sumner says basically what I’m saying, when he describes stories of the Great Inflation he considers reasonable:

There are some theories that help us to understand why the Fed blew it in the 1966-81 period:

  1. Assumption of stable Phillips Curve.

  2. Mis-estimation of the natural rate of U, which was rising.

  3. Confusion between nominal and real interest rates.

Waldman’s theory deserves to be added to that list. It’s not the whole story, but it’s a significant piece of the story.

In the language that Sumner (like many economists) uses, the “natural rate of unemployment” was rising. In that language, my claim is simply an explanation of why this rate was rising: Growth of the workforce was temporarily outstripping the economy’s capacity to employ marginal workers at expected levels of productivity. It sounds like Sumner considers this to be at least a reasonable conjecture.

The Great Moderation consensus was that unemployment should be reduced to this “natural” or “non-accelerating inflation rate of unemployment“, but no lower. But that reflects a value judgment about the relative pain of inflation versus unemployment, a judgment that central bankers of the 1970s simply did not share. To say that policymakers erred or even misestimated, you’d have to claim they did not understand that their employment-focused policies might bring inflation. I think it’s pretty clear that they did understand that. They simply made a choice that became taboo during a later period. They accepted a risk of accelerating inflation in pursuit of full employment.

So was the Great Inflation a “monetary phenomenon”? It really depends on how you assign causality. Suppose that I am right, that largely for demographic reasons, the “natural rate” of unemployment was higher than the socially acceptable rate of unemployment. Central bankers deliberately tolerated a risk — unfortunately realized — that inflation would become a significant problem. Does that mean the inflation was a monetary phenomenon? In a sense, yes, in a sense, no. The inflation could have been avoided with different monetary policy at cost of accepting a painfully high “natural” rate of unemployment. In that sense, it was monetary.

But the deeper cause, the factor that created the conditions under which the central bank was faced with so terrible a choice, was a real mismatch between the growth rate of the work force and the speed with which organizations and machines could be arranged to make all that labor productive.

Let’s try an analogy. Consider the hair loss of a cancer patient. Doctors make a choice, weighing the harms of chemotherapy against the risks of nontreatment. When doctors choose to apply chemotherapy, they “cause” the loss of hair and other toxic side effects of chemotherapy. The choice to treat or not to treat may sometimes be a close call, so doctors and patients never really know whether the putative reduction of cancer risks was worth the certain pain of chemo.

Nevertheless, we don’t often describe all that hair loss and pain as “iatrogenic illness“, even though strictly speaking it is. To do so, we recognize, would be to place blame where it doesn’t belong, on the people making very difficult choices in response to circumstances they did not create. We rage about iatrogenic illness when people are hurt because doctors fail to wash their hands or follow checklists. But during chemo, we acknowledge the cancer as the true cause of the bad situation, and don’t blame the doctors.

Similarly, if the real-economic situation was as I contend in the 1970s, it strikes me as churlish to refer to the inflation as a “monetary phenomenon”. Yes, different monetary choices might have led to less inflation. But they would have risked much higher levels of unemployment. No, we never will know how that counterfactual would have worked out. But I consider the absorption into the labor force of the baby boom, of both sexes of the baby boom, to be a remarkable achievement. Considering the social and political circumstances in the late 1960s and early 1970s, I don’t think it’s at all obvious that we’d have been better off choosing a different balance of risks.

Sumner points to the assumption of a stable Phillips Curve as a potential explanation of the Great Inflation. There might have been some economists who believed in a stable Phillips Curve, but I think that is mostly a straw man. It’s hard to find examples of influential people actually making this mistake. The Phillips Curve certainly did shift going into the 1970s, but I’d argue it did so precisely due to the demographic / real-economic problem the United States faced during that era rather than due to the Lucas Critique explanation that naïve reliance on the relationship undermined it. I think there was no naïve reliance at all, just difficult choices made by policymakers fully cognizant of the uncertainties they faced. I perceive a lot more overconfidence and dogmatism in the post-1980s reaction to the Great Inflation than there was in the choices that preceded it.

Ultimately, hubris is the issue. I am writing in 2013 about choices made in 1973 because I think a mythology has developed around the 1970s experience that is very harmful. Whether he agrees or disagrees with anything I’ve said here, Scott Sumner is much more ally than adversary. He as much as anyone has challenged the new orthodoxy symbolized by “divine coincidence”, a property woven into some New Keynesian models to sanctify the claim that there are no trade-offs in macroeconomic policymaking. Stabilizing inflation is always enough in these models, because stabilization of output necessarily follows. That is a terrible error. Arthur Burns was pushed around by Richard Nixon and knowingly made difficult trade-offs. Jean-Claude Trichet was pushed around by no one, and stabilized inflation “impeccably“. In doing so, Trichet made errors that have already been far more costly than the American experience of the 1970s, mistakes whose costs have yet to be fully tallied.

The real economy always gets a vote. The political economy always gets a vote. This isn’t some kind of econopartisan divide. There are post-Keynesians who claim that the combination of good fiscal policy and a job guarantee can always deliver price stability and full employment. But no policy rule can guarantee those things. If real output collapses, or if the population grows in ways that cannot be technologically matched to production of the goods and services it wishes to consume, meaningful full employment will imply inflation or else more direct transfers. First-best policy would be to prevent real- and political-economic cul de sacs. Unfortunately, we’ve already failed to prevent some political-economic disasters: the institutional configuration of Europe, rent extraction and socioeconomic segregation in the United States. Because they were not prevented, we face real tradeoffs, between those who might be harmed by inflation and those at greatest risk of unemployment, and along all kinds of other dimensions. Polities will have to make these tradeoffs, or find creative means of rearranging themselves to circumvent the fighting. Absurdly abstract cautionary tales flogged as “science” by representatives of a class of people whose interests are enmeshed in the tradeoffs are much worse than suspect.

The previous post, like nearly all my posts, generated a comment thread with thinking and writing much better than my own. (Read it!) I continue to believe that demographics plus real-economic rigidity created conditions that rendered the 1970s inflation better than many alternatives. I don’t claim that always and everywhere population booms must coincide with productivity collapses — sometimes population booms coincide (usually not coincidentally!) with opportunities for expanded production. I don’t claim that all monetary expansions derive from attempts to employ a burgeoning population. (Sometimes they are about exchange rates, for example!) History is not an ergodic process. If the evidence for my conjecture seems unrigorous, I’d ask you to compare it to the widely accepted view that all we needed was Paul Volcker a decade earlier and things would have been just fine. Where is there evidence for that?


Update: Scott Sumner responds.

Update History:

  • 7-Sept-2013, 12:15 p.m. PDT: Added bold update, link to Scott Sumner’s response.
  • 7-Sept-2013, 12:40 p.m. PDT: Inserted the word “monetary”, “Yes, different monetary choices…”

Not a monetary phenomenon

Nor was it a fiscal phenomenon, my (post-)Keynesian comrades. Let’s not be glib.

I’m talking about the inflation of the 1970s. Sorry, Milton, I know you got a lot of mileage out of the line, but the great inflation was not at root a monetary phenomenon. Let’s take a look at a graph:

The crucial economic fact of the 1970s is an incredible rush into the labor force. The baby boom came of age at the same time as shifting norms about women and work dramatically increased the proportion of the population that expected jobs.

The “malaise” of the 1970s was not a problem with GDP growth. NGDP growth was off the charts (more on that below). But real GDP growth was strong as well, clocking in at 38%, compared to only 35% in the 1980s, 39% in the 1990s, and an abysmal 16% in the 2000s.

What was stagnant in the 1970s was productivity, which puts hours worked beneath GDP in the denominator. Boomers’ headlong rush into the labor force created a strong arithmetic headwind for productivity stats. Here’s a graph of RGDP divided by the number of workers in the labor force. The malaise shows up pretty clearly:

The root cause of the high-misery-index 1970s was demographics, plain and simple. The deep capital stock of the economy — including fixed capital, organizational capital, and what Arnold Kling describes as “patterns of sustainable specialization and trade” — was simply unprepared for the firehose of new workers. The nation faced a simple choice: employ them, and accept a lower rate of production per worker, or insist on continued productivity growth and tolerate high unemployment. Wisely, I think, we prioritized employment. But there was a bottleneck on the supply-side of the economy. Employed people expect to enjoy increased consumption for their labors, and so put pressure on demand in real terms. The result was high inflation, and would have been under any scenario that absorbed the men, and the women, of the baby boom in so short a period of time. Ultimately, the 1970s were a success story, albeit an uncomfortable success story. Going Volcker in 1973 would not have worked, except with intolerable rates of unemployment and undesirable discouragement of labor force entry. By the early 1980s, the goat was mostly through the snake, so a quick reset of expectations was effective.

Fiscal policy could not have solved the problem, unless you posit that new workers would have been more productively employed by government than the private sector was capable of employing them. Contemporary market monetarism could not have solved the problem. Given the huge demographic shift, stabilizing NGDP growth at an arbitrary level would have been a prescription for depression. Market monetarists sometimes hint that NGDP per capita would be a more appropriate growth path target than simple NGDP. Consider: Both supply and demand tend to correlate with work. Workers make stuff, and they also expect to consume more than nonworkers. One might argue, then, that NGDP per member of the labor force would be a good level for a market monetarist to target. Let’s take a look at that:

It seems to me that the Fed did a pretty good job of matching NGDP to workers in the 1970s. If anything, they were a bit too tight, but permitted some catch-up growth in the 1980s to offset that.
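
If you want to check those last two ratios yourself, here is a rough sketch. The FRED mnemonics are my assumption of the obvious candidates: GDP for nominal GDP, GDPC1 for real GDP, CLF16OV for the civilian labor force.

```python
# Sketch: RGDP and NGDP per member of the labor force, the two ratios
# discussed above. Assumed FRED mnemonics: GDP (nominal, quarterly),
# GDPC1 (real, quarterly), CLF16OV (civilian labor force, monthly).
import pandas as pd
from pandas_datareader import data as pdr

raw = pdr.DataReader(["GDP", "GDPC1", "CLF16OV"], "fred",
                     start="1955-01-01", end="1995-01-01")
q = raw.resample("QS").mean().dropna()   # average the monthly labor force to quarters

rgdp_per_worker = q["GDPC1"] / q["CLF16OV"]   # the "malaise" productivity proxy
ngdp_per_worker = q["GDP"] / q["CLF16OV"]     # the candidate level-target path

# Year-over-year growth of NGDP per labor force member through the 1970s.
print(ngdp_per_worker.pct_change(4).loc["1965":"1985"].round(3))
```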

Since the 1970s, macroeconomics as a profession has behaved like some Freud-obsessed neurotic, constantly spinning yarns about how the trauma of the 1970s means this and that, “Keynes was wrong”, “NAIRU”, independent (ha!) central banks. A New Keynesian synthesis made of output gaps and inflation and no people at all, just a representative household reveling in its microfoundations. Self-serving tall tales of the Great Moderation, all of them.

It was the people wut done it, by being born and wanting jobs. Even the ones without penises.

Oh, and give poor Arthur Burns a break. You couldn’t have done any better.

Banks and macroeconomic models

There has been a recrudescence of blogospheric argument on the nature of commercial banks, whether they are best considered “financial intermediaries” not unlike mutual funds or insurance companies, or whether they are something different, in particular, whether their ability to issue liabilities that are near-perfect substitutes for base money renders them special in macroeconomically important ways. See e.g. Cullen Roche, Winterspeak, Ramanan, and Paul Krugman.

If banks are mere intermediaries between savers and borrowers, it may be reasonable to abstract them out of macroeconomic models and simply focus on the preferences of borrowers and savers and the price mechanism (interest rates) that ultimately reconciles those preferences, perhaps with “frictions”. If banks are special, if they have institutional characteristics that affect the macroeconomy in ways not captured by the stylized preferences of borrowers and savers, then it may be important to model the dynamics of the banking system explicitly.

Paul Krugman says banks are not special, most recently citing James Tobin’s famous paper on Commercial Banks As Creators of Money:

In particular, the discussion on pp. 412-413 of why the mechanics of lending don’t matter — yes, commercial banks, unlike other financial intermediaries, can make a loan simply by crediting the borrower with new deposits, but there’s no guarantee that the funds stay there — refutes, in one fell swoop, a lot of the nonsense one hears about how said mechanics of bank lending change everything about the role banks play in the economy.

I want to unpack this just a bit. First, please don’t misunderstand the argument. Tobin’s, and by extension Krugman’s, point is not the facile argument sometimes made, that loans don’t meaningfully create deposits because a bank needs to fund the loan when the deposit created by a loan is spent or transferred. That is true of an individual bank, but not of the banking system as a whole, the object to which Tobin correctly devotes his attention. It is an entirely uncontroversial fact that when the banking system net-increases its lending it creates new deposits, regardless of whether an individual lender’s balance sheet expands permanently or ephemerally.

Tobin’s argument was that this mechanical capacity of the banking system to “create new money” by net-lending ultimately doesn’t matter very much, because the non-bank private sector has a preferred portfolio of assets, of which bank deposits are a single component, and the net-lending of the banking system is constrained and ultimately determined by the non-bank sector’s desires. If the banking system somehow ramped up its lending in ways that create more bank deposits than the non-bank sector wished to hold, the nonbank sector would pay down bank loans until its preferred portfolio was restored. This is a perfectly coherent view, a view fully cognizant of the mechanics of bank lending and deposit creation, under which there is nothing fundamentally important about banks. Everything that matters is captured by the portfolio preference of nonbanks, so a macro modeler might reasonably ignore the details of the banking system and simply model the portfolio choice of the nonbank sector.

However, that a view is coherent doesn’t mean that it is accurate. The weakness of the Tobin/Krugman view is that common Achilles Heel of macroeconomic models, aggregation. In order for banks not to matter, it must be reasonable to model the nonbank private sector as if it were a unified actor with preferences independent of the behavior of the banking system. It’s easy to offer plausible accounts under which this would not be the case.

Suppose, for example, that banks lend primarily to cash-starved agents, and that cash-starved agents spend primarily to cash-rich agents. (I am including bank deposits in my definition of “cash” here.) Should the banking system “exogenously” increase lending, the effect would be first a transfer of cash and an increase in debt to the cash-poor, and then a transfer of cash to the cash-rich as borrowers spent their loans. Suppose that the cash-rich then find themselves holding more bank deposits than they prefer to hold. Mechanically, they have absolutely no ability to redeem the deposits for other assets. The only way that deposits in aggregate are reduced is when loans are repaid to the banking system. But the cash-rich have very few loans to repay! Unless they pay off the loans of the cash-poor, taking losses to uphold the collective preferences of a putative nonbank private sector, bank deposits are as inescapable to the cash-rich as base money is to the private sector as a whole.
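
A toy piece of bookkeeping (my stylization, nobody’s model) makes the asymmetry concrete: the consolidated banking system creates deposits when it net-lends, and in aggregate those deposits are extinguished only when loans are repaid (or, as the first footnote below elaborates, when nonbanks purchase assets off bank balance sheets). No amount of trading among the cash-rich makes them go away.

```python
# Toy consolidated-banking-system bookkeeping (a stylized sketch, not a model
# of any actual banking system). Deposits are created by net lending and, in
# aggregate, extinguished only when loans are repaid (or, per the first
# footnote below, when nonbanks buy assets off bank balance sheets).
class BankingSystem:
    def __init__(self):
        self.loans = 0.0      # assets: claims on cash-poor borrowers
        self.deposits = 0.0   # liabilities: held somewhere in the nonbank sector

    def lend(self, amount):
        self.loans += amount
        self.deposits += amount    # the loan is credited as a brand-new deposit

    def repay(self, amount):
        amount = min(amount, self.loans)
        self.loans -= amount
        self.deposits -= amount    # repayment destroys deposits

banks = BankingSystem()
banks.lend(100.0)        # cash-poor agents borrow
# The borrowers spend; deposits migrate from cash-poor to cash-rich accounts,
# but the aggregate quantity of deposits is unchanged by any such transfer.
print(banks.deposits)    # 100.0 -- the cash-rich cannot "redeem" these among themselves
banks.repay(100.0)       # only repayment by the (indebted, cash-poor) borrowers shrinks it
print(banks.deposits)    # 0.0
```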

If the real world looks anything like this, then commercial banks do indeed have something quite analogous to a central bank’s printing press. Net-expansions of the banking system’s balance sheet provoke an inescapable injection of deposits into the aggregate portfolio of the cash-rich. The price of bank deposits, like base money, is pegged to unity. If deposit balances come to exceed the desired allocation in portfolios of the cash-rich, the imbalance cannot be resolved by falling prices. Instead, a “hot potato” effect must take hold: prices of other assets might be bid higher until deposits are restored to their desired small share of the aggregate portfolio. Credit expansion would lead to asset price inflation (much more than to ordinary price inflation, as the consumption plans of the cash-rich need not change in real terms, so there is no impetus to bid up the prices of goods, services, or labor). As a stylized fact about the world, bank-credit-expansion-leads-to-asset-price inflation seems pretty solid. [1, 2]

This is just one account, among potentially infinitely many, under which the Tobin/Krugman “banks don’t matter” view would be insufficient, despite its theoretical coherence. It would be nice, from a tractability perspective, if the nonbank private sector could be modeled as a single agent with portfolio preferences independent of the behavior of the banking system. But I think the weight of the evidence suggests that the world is more complicated than that. I think we will need to account explicitly for the behavior of the banking system in order to capture important features of the macroeconomy.


[1] The account I’ve provided is a simplification. Implicitly, I’m presuming a banking system that holds no assets other than base money and loans to customers. In the real world, bank balance sheets may hold all sorts of assets. For the persnickety, let’s generalize the tale. Deposits may be redeemed not just by repaying loans, but by purchasing any sort of asset off the consolidated banking system’s balance sheet. Repaying a loan is a special case of an asset purchase: a borrower buys up her own obligation to a bank. But deposit holders who are not borrowers can shed deposits in the real world by, for example, purchasing Treasury securities held by banks. Does this meaningfully change the story? Given a modest credit expansion, maybe. Given a large credit expansion, definitely not. Let’s go through it.

When we claim that “deposit balances come to exceed the desired allocation in portfolios of the cash-rich”, we are implicitly conjecturing some menu of other assets that become underrepresented. If all of those other assets are held on the banking system’s balance sheet and available for sale, then cash-rich agents can restore their preferred portfolio simply by redeeming excess deposits for the assets they desire until their desired allocation is restored.

However, unless the assets the cash-rich desire are illiquid loans to the cash-poor, there will exist a scale of credit expansion beyond which some or all of the desired assets become unavailable for purchase from bank balance sheets. A creative solution to this problem is for banks to try to transform illiquid loans to the cash poor into substitutes for the assets the cash-rich desire. That’s one explanation (in addition to regulatory arbitrage) for the securitization boom of the early 2000s. But as we’ve seen, the strategy has its limits. Unless we posit perfect alchemy, there will be some scale of credit expansion beyond which cash-rich agents cannot restore balance to their portfolios by purchasing assets from the banking system.

Realistically, the scale of credit expansion beyond which portfolio balance cannot be restored by direct redemption of deposits need not be very large at all. For example, cash-rich agents typically desire to hold much of their wealth in corporate equities, which commercial banks hold in small quantities if at all. Following an injection of deposits provoked by lending and spending, cash-rich agents will be unable to restore their desired allocation of stocks except by bidding up share prices.

A common, but foolish, dodge of these problems is to pretend that the only assets in the world (besides loans) are base money, bank deposits, and Treasury securities, and then to presume that the banking system will always carry a sufficient inventory of Treasury securities and base money to restore balance through deposit redemption. This is stupid for the obvious reason that private-sector portfolio allocations contain stuff other than Treasuries, base money, and bank deposits, and for the less obvious reason that accommodating unlimited redemption implies that the banking system may borrow in unlimited quantities from the state.

For the super-persnickety, bank deposits can also be redeemed by purchasing services, rather than assets, from the banking system. That is, cash-rich agents could get rid of excess deposits by doing stuff that caused them to increase their bank-fee expenses. It is unlikely they’d find this a very appealing way to restore portfolio balance, however.

[2] Readers might complain that I am misrepresenting Tobin a bit here, in that I’ve attributed to him the view that an overabundance of deposits will be remedied via loan repayment or asset purchase, while Tobin explicitly allows for a price-adjustment mechanism as well:

Given the wealth and asset preferences of the community, demand for bank deposits can increase only if yields of other assets fall [and therefore prices of other assets rise]. The fall in these yields is bound to restrict profitable lending and investment opportunities available to the banks themselves. Eventually the marginal returns on lending and investing, account taken of the risks and administrative costs involved, will not exceed the marginal cost to the bank of attracting and holding additional deposits.

Tobin draws a sharp contradistinction between deposits and base money:

Once created, printing press money cannot be extinguished, except by reversal of the budget policies that led to its birth. The community cannot get rid of its currency supply; the economy must adjust until it is willingly absorbed. The “hot potato” analogy truly applies. For bank created money, however, there is a mechanism of extinction as well as creation, contraction as well as expansion. If bank deposits are excessive relative to public preferences, they will tend to decline; otherwise banks will lose income. The burden of adaptation is not placed entirely on the rest of the economy.

Tobin wants to conclude that bank deposits differ from the obligations of other private financial intermediaries in degree rather than in kind. But, on the facts he accurately and perspicaciously observes, he might as easily have argued that bank deposits differ from base money in degree rather than in kind. After all, the economy adjusts to the issuance of no other private-sector asset by shifting prices and yields across the full spectrum of financial assets. That sort of adjustment implies a “hot potato” effect sometimes. It seems arbitrary for Tobin to claim that for the “‘hot potato’ analogy” to truly apply, bidding up of other assets must be the only conceivable means of private-sector adjustment to its issuance. Since bank deposits behave sometimes or partially like “hot potatoes”, since they uniquely share the quality of being pegged to a price of unity with a government guarantee, they arguably bear as much resemblance to base money as they do to “ordinary” financial assets. Tobin makes much of the fact that there is a limit to profitable issuance of deposits, that eventually yields on lending and deposits must converge, but there are limits to profitable issuance of base money as well: the state’s capacity for seignorage is not inexhaustible. The state must contract the supply of its money and obligations when private sector demand for them falters, or risk hyperinflation and political collapse. Banks and states have surprisingly similar financial structures, and modern banking systems inevitably include states at their core.

I think a lot of assumptions that foreshadow current disagreements surrounding banking are embedded in Tobin’s phrasing: “Given the wealth and asset preferences of the community…[t]he fall in…yields is bound to restrict profitable lending and investment opportunities available to the banks themselves.” Tobin presumes a unified financial community and rational profit-seeking banks. People (like me) who think a detailed understanding of the institution of banking should be at the heart of macro modeling contest both of those assumptions. We think that there is no homogenous “community”, but segmented populations whose socially problematic interactions are often mediated via financial institutions. We think that banks do not always or even usually behave in ways that can be characterized by a well-behaved infinite-horizon profit maximization problem. Instead, for a variety of reasons ranging from agency problems and political influence to faddishness and mere error, we view bank behavior as special and complex. So we find it convenient to model banks specially, or to examine expansions and contractions of credit as if they were exogenous, rather than presume that other aspects of our models neatly determine what banks will do.

Update History:

  • 7-Apr-2015, 10:25 p.m. PDT: “only of if yields of other assets fall”

Secret snooping keeps us vulnerable

This is an obvious point.

Part of NSA’s mission, a very noble part, has always been to play digital defense. They call this “information assurance”, and describe it as “the formidable challenge of preventing foreign adversaries from gaining access to sensitive or classified national security information.” In practice, their role is much broader than that. I run NSA software — on purpose! Thank you, National Security Agency, for SELinux. I’m not worried about foreign adversaries, in particular. I just don’t want my server hacked. NSA helps evaluate and debug encryption standards that find their way into civilian use. With all the talk about 21st century cyberwarfare, about dams being made to malfunction or cars hacked to spin out of control, you’d think the best way to keep the homeland safe from terrorists and foreign adversaries would be an exceptionally secure domestic infrastructure.

However, NSA faces a conflict of mission. The organization’s more famous, swashbuckling “signals intelligence” is about maintaining a digital offense. It relies on adversaries using vulnerable systems. NSA discovers (or purchases) uncorrected “exploits” in order to break into the systems on which it hopes to spy. Normally, a good-guy “white hat” hacker who discovers a vulnerability would quietly inform the provider of the exposed system so that the weakness can be eliminated as quickly and safely as possible. Eventually, if the issue is not resolved, she might inform the broad public, so people know they are at risk. Vulnerabilities that are discovered but not widely disclosed are the most dangerous, and the most valuable, to NSA for intelligence-gathering purposes, but also to cyberterrorists and foreign adversaries. There are tradeoffs between the strategic advantages that come from offensive capability and the weaknesses that maintaining that capability necessarily introduces into domestic infrastructure. If the mission is really about protecting America from foreign threats (rather than enjoying the power of domestic surveillance), it is not at all obvious that we wouldn’t be better off nearly always hardening systems rather than holding exploits in reserve. Other countries undoubtedly tap the same backbones we do (albeit at different geographical locations and with the help of different suborned firms). Undoubtedly, passwords that nuclear-power-plant employees sloppily reuse occasionally slip unencrypted through those pipes.

Of course there is a trade-off. If security agencies did work aggressively to harden civilian infrastructure as soon as they discover vulnerabilities, the spooks would not have been able, for example, to stall Iran’s nuclear program with Stuxnet. But the same flaws that we exploited might also have been known to terrorists or foreign adversaries, who could have caused catastrophic industrial accidents in the US or elsewhere while that window was left open. Rather than applauding our clever cyberwarriors, perhaps we ought to be appalled at them for having left us dangerously exposed so that the Iranians would be too. When a cyberattack does come, via some vulnerability NSA might have patched, will we know enough to blame our cyberwarriors, or will we just shovel more money in their direction?

Before we let spy agencies make these tradeoffs for us (tradeoffs between security and security, for those who prioritize security über alles), we might want to think about institutional bias. Would it be rude to point out, given recent events, that NSA’s PowerPoint-blared enthusiasm for awesome, eyes-of-the-President offensive capabilities may have eclipsed the unglamorous but critical work of running a good defense? And no, going all North-Korea with personnel is not a solution. I’m very grateful that what’s leaked has leaked, but if reports about what Snowden got are accurate, the absence of ordinary precaution is shocking. There is no irreducible danger from sysadmins that would excuse such a failure. Root access to some machines does not imply pwning the organization. I am speculating, but both Snowden’s claims of expansive access and Keith Alexander’s assessment of “irreversible damage” suggest NSA prioritized analyst convenience over data compartmentalization and surveillance of use. That’s great for helping analysts get stuff into the President’s daily briefing while avoiding blowback for, uhm, questionable trawling. It should be incredibly embarrassing to an organization whose mission is securing data.

Perhaps my speculations are misguided. The point remains. At an organizational level and at a national level, there are tradeoffs between offensive capacity (surreptitious surveillance, sabotage) and defensive security. Maintaining a killer offense requires tolerating serious weaknesses in our defense. The burgeoning, sprawling surveillance state has its own incentives that render it ill-suited to make judgments about how much vulnerability is acceptable in pursuit of an impressive offense. That shouldn’t be its call.

Sometimes the best defense is a great defense. Even if it is a lot less awesome.


Note: The trade-offs described here apply especially to covert, surreptitious means of accessing computer systems. If “we” (however constituted) decide that we want systems that are both secure and susceptible to government surveillance, we can make use of “key escrow” or similar schemes. There would be significant technical challenges to getting these right, but at least the systems could be openly designed and vetted, and could include software-enforced auditing to document use and deter abuse. Systems designed to allow third-party access will always be weaker than well-designed systems without that “feature”, but they can be made a lot more secure than systems whose flaws are intentionally uncorrected in order to enable access. It would be important to avoid implementation monocultures and centralized, single-point-of-failure key repositories. A public review process could see to that. It would not be necessary to ban alternative systems, if we wish to maintain status quo capabilities.

I’m not arguing any of this would be a good idea. But if we decide that we want data-mining or widespread surveillance, we can implement them in ways that are overt and publicly auditable rather than clandestine, insecure, and unaccountable. The status quo, a peculiar combination of lying a lot and demanding the public’s trust, is simply unsupportable.
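To make the shape of such a scheme slightly more concrete, here is a minimal, purely illustrative sketch in Python. It is not a proposal, and it is certainly not a secure design. The names (split_key, EscrowRegistry, and so on) are hypothetical, and a real system would rely on vetted threshold cryptography (Shamir-style secret sharing, say) and hardware protections rather than the toy XOR splitting used here. The point is only to illustrate two structural features mentioned above: no single repository ever holds the escrowed key, and every reconstruction leaves an audit record.

    # Illustrative sketch only: a hypothetical escrow arrangement in which a key
    # is split among several independent custodians (all must cooperate to
    # reconstruct it) and every reconstruction is logged for later audit.
    import os
    import secrets
    from datetime import datetime, timezone

    def split_key(key: bytes, n_custodians: int) -> list[bytes]:
        """Split `key` into XOR shares; all shares are required to rebuild it."""
        shares = [secrets.token_bytes(len(key)) for _ in range(n_custodians - 1)]
        last = key
        for s in shares:
            last = bytes(a ^ b for a, b in zip(last, s))
        return shares + [last]

    def reconstruct_key(shares: list[bytes]) -> bytes:
        """XOR all shares back together to recover the escrowed key."""
        key = shares[0]
        for s in shares[1:]:
            key = bytes(a ^ b for a, b in zip(key, s))
        return key

    class EscrowRegistry:
        """Records an audit entry for every reconstruction of an escrowed key."""
        def __init__(self):
            self.audit_log = []

        def authorize_reconstruction(self, requester: str, reason: str,
                                     shares: list[bytes]) -> bytes:
            # A real system would enforce legal process and multi-party approval
            # here; this sketch only documents the event.
            self.audit_log.append({
                "when": datetime.now(timezone.utc).isoformat(),
                "requester": requester,
                "reason": reason,
                "custodians_involved": len(shares),
            })
            return reconstruct_key(shares)

    if __name__ == "__main__":
        session_key = os.urandom(32)  # key protecting some hypothetical communication
        shares = split_key(session_key, n_custodians=3)
        registry = EscrowRegistry()
        recovered = registry.authorize_reconstruction(
            requester="court-ordered investigator (hypothetical)",
            reason="warrant (hypothetical)",
            shares=shares,
        )
        assert recovered == session_key
        print("recovered key matches:", recovered == session_key)
        print("audit log:", registry.audit_log)

Because every share is required, compromising any one custodian reveals nothing by itself; a real design would more likely use a k-of-n threshold so that a single unavailable custodian could not block authorized access, which is exactly the sort of tradeoff an open review process would have to settle.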

Regulation, legitimation, neutralization

A thing that bankers and spies have in common is protestations about how elaborately they are monitored. Every financial product or arrangement requires elaborate legal vetting, is touched by innumerable regulations, must run a gamut of refusals and reworkings in the name of “compliance”. In the current scandal over government surveillance, we hear repeated assurances that the programs are legal, that they are reviewed, that despite potentially vast loopholes in the documents thus far leaked, our security services have procedures in place to ensure that there is no abuse.

A cynic might dismiss these protestations as mere cant, but that’s a mistake. I think the insiders who offer us these assurances are perfectly, almost desperately, sincere. From their perspective, I suspect regulatory precaution seems absurdly overdone, even Kafka-esque, interfering with the good work they are trying to do. And they are trying to do good work! Most bankers are nice people working hard jobs. Most people who work for the ominous-sounding “surveillance state” genuinely strive to contribute to the security of the country without dishonoring its ideals. Large organizations are peopled by, well, people, most of whom are not so different than you and me. When organizations misbehave, it is important to understand that they do so in spite of being filled for the most part by people of good will. We should try to understand how they manage this.

One way that they manage this is by virtue of the regulation that exists ostensibly to control the misbehavior. Regulation sprouts in broad thickets, often in response to idiosyncratic events and concerns and constituencies. Pushback by the regulated trims those thickets, but not universally or uniformly. Organizations assent to and even embrace regulation that doesn’t challenge their core imperatives. They aggressively resist those that do. We end up with organizations that are, in fact, extensively and intrusively regulated, but have blind spots, weaknesses and loopholes that are not at all random.

“Core imperatives” are objectives that enable organizations to survive and thrive, and importantly to defend themselves from various sorts of challenge. Core objectives may be related to formal missions, but they are distinct, and may sometimes be in conflict. There are tensions between banks’ formal role as high-information allocators of credit to the real economy and the scale that renders failure politically intolerable. Where those tradeoffs exist, successful banks pursue the core imperative, not the formal mission. The organizational form that maximizes the quality of credit allocation would be one that keeps the organization and its stakeholders forever at risk. That form does not thrive in competition with others who gain funding advantages by insulating themselves from those risks. Like it or not, maximalist acquisition of information is a core imperative of organizations in the intelligence community. For reasons good and obvious but also for some ugly reasons, access to information is crucial to defending and expanding the intelligence ecosystem.

We should not be surprised, although we should certainly be angry, that “the toughest financial regulation in a half century” was a thousand pages long and failed to address the core problem of immunities and funding advantages that derive from scale and interconnectedness. We should not be surprised, though we should certainly be angry, that as technology has rendered surreptitious, ubiquitous surveillance easier, weak spots and loopholes have appeared that make it colorable, although still shameful, for the President of the United States to come on television and call this stuff “lawful”.

But it’s important to note that, among both bankers and spies, we do not end up with an absence of regulation. Instead we end up with a festival of regulation undermined by a few strategic lacunae. And that festival of regulation is a critical part of the problem we’ve circuitously set out to address. How do organizations persuade the well-meaning humans that comprise them to work hard and feel proud of doing stuff that is ultimately not so good? Why is it that people in banking or intelligence so often effuse, with complete sincerity and no small measure of frustration, about how regulated they are, about how much care they take to comply with the regulations they face, in letter and in spirit? Because they do!

Regulation and compliance serves a straightforward human function. It substitutes for and absolves participants of the duty they would feel, as human beings, to exercise independent judgment about the nature of the work they are doing. There is nothing odd or conspiratorial about saying this. Organizations wouldn’t function if every organizational action were subject to idiosyncratic review and veto by each participant. When people claim that Edward Snowden had no right to do what he did, they do not mean to say everyone in any organization must always “just follow orders”. Their argument is based on the notion that Snowden’s organization was well-regulated, and an ethical participant in an organization ought to let the regulations of the organization stand in for individual moral views. As we see each time one of our shameful politicians or one of our shameful bankers goes on television, lawfulness and regulation are used to legitimate organizational behavior externally. But they are used to legitimate behavior internally as well, and so may enable groups of people to do what perhaps they ought not do.

The moral choice faced in practice by members of a large organization isn’t whether some activity they participate in is ethical, but whether it is so unethical that they are compelled to substitute their own judgment for the organization’s controls and resist in some fashion. The perceived quality of the controls plays a role in those decisions. When effective participation is significantly voluntary — that is, when talented people might choose to quit, or slack off, or in extremis enlist external allies to address their concerns — controls that appear to be high quality are an important organizational asset. Elaborate regulation and burdensome compliance can serve as “accountability theater” even when they are less than effective. In large organizations that do prima facie icky things, whether those things are ultimately justifiable or not, you’d expect to find a mix of very visible controls and strong economic incentives. Which, I think, you typically do. Obviously, in military and intelligence organizations, controls and incentives are supplemented by a sense of serving a larger interest that may sometimes override more ordinary qualms. But feelings of patriotism don’t eliminate participants’ need for legitimating regulation.

The most dangerous organizations are those whose participants are subject to internally credible controls that are nevertheless ineffective at constraining organizational behavior, whether because the controls are inadequate or because some groups within the organization are able to circumvent them. Unfortunately, that’s exactly the combination that serves organizations best in amoral, functional terms. We should be careful of starting with a kitchen sink of potential regulations, then letting organizations choose their battles, as happened with banking reform. Obviously, we should be careful of letting regulated entities drive and manage the regulatory process from the start, as happened with the security state. When insiders tell us that prima facie bad things are justified by the regulations or controls under which they are produced, we should understand those accounts to be both sincere and usually mistaken. We should remember that mistargeted regulation may be worse than useless. It may provide, to use a term I learned from Bill Black, a means of neutralization that perversely enables the very misbehavior it ostensibly exists to prevent. I think that is very obviously what has happened in the US and elsewhere with state surveillance.

Tradeoffs

The stupidest framing of the controversy over ubiquitous surveillance is that it reflects a trade-off between “security” and “privacy”. We are putting in jeopardy values much, much more important than “privacy”.

The value we are trading away, under the surveillance programs as presently constituted, is the quality of governance. This is not a debate about privacy. It is a debate about corruption.

Just after the PRISM scandal broke, Tyler Cowen offered a wonderful, wonderful tweet:

I’d heard about this for years, from “nuts,” and always assumed it was true.

There is a model of social knowledge embedded in this tweet. It implies a set of things that one believes to be true, a set of things one can admit to believing without being a “nut”, and an inconsistency between the two. Why the divergence? Oughtn’t it be true that people of integrity should simply own up to what they believe? Can a “marketplace of ideas” function without that?

It’s obvious, of course, why this divergence occurs. Will Wilkinson points to an economy of esteem, but there is also an economy of influence. There are ideas and modes of thought that are taboo in the economy of influence, assertions that discredit the asserter. Those of us who seek to matter as “thinkers” are implicitly aware of these taboos, and we navigate them mostly by avoiding or acceding to them. You can transgress a little, self-consciously and playfully, as Cowen did in his tweet. If you transgress too much, too earnestly, you are written off as a nut or worse. Conversely, there are ideas that are blessed in the economy of influence. These are markers of “seriousness”, as in Paul Krugman’s perceptive, derisive epithet “Very Serious People”. This describes “thinkers” whose positions inevitably align like iron filings to the pull of social influence, indifferent to evidence that might impinge upon their views. Most of us, with varying degrees of consciousness, are pulled this way and that, forging compromises between what we might assert in some impossible reality where we observed social facts “objectively” and the positions that our allegiances, ambitions, and taboos push us towards. Individually, there is plenty of eccentricity, plenty of noise. People go “off the reservation” all the time. But public intellectualizing is a collective enterprise. What matters is not what some asshole says, but the conventional wisdom we coalesce to. When the noise gets averaged out, the bias imposed by the economy of influence is hard to overcome. And the economy of influence pulls, always, in directions chosen by incumbent holders of wealth and power, by people with capacity to offer rewards and to mete out punishment.

I want to introduce a word into the discourse surrounding NSA surveillance that has been insufficiently discussed. That word is blackmail. I will out and say this. I think our President’s “evolutions” on questions of civil liberties and surveillance are largely the result of blackmail. I think it is not coincidental that support for the security state is highly correlated with seniority and influence, in both of our increasingly irrelevant political parties. The apparatus we are constructing, have constructed, creates incredible scope for digging up dirt on people and their spouses, their children, their parents. It doesn’t take much to manage the shape of the economy of influence. There are, how shall we say, network effects. You don’t have to blackmail the whole Congress. Powerful people are, almost by definition, people very attuned to economies of influence. They quickly detect the trends and emerging conventions among other powerful people and conform to them. A consensus that emerges at the top is quickly magnified and disseminated. Other voices don’t disappear, there is plenty of shouting in the blogs. But a correlation emerges between a certain set of views and “seriousness”, “respectability”. The mainstream position is defined. Eventually it’s reflected by the polls, so it’s what the American people wanted all along, we are just responding to the demands of the public, whine the politicians.

Blackmail is and has always been a consequential component of our political system. This ought not to be controversial. Blackmail — like its sister B-word, “bribery” — has largely gone mainstream and been institutionalized. “Opposition research” is a profession that is openly practiced and is considered respectable. Opposition researchers, like lobbyists, will tell perfectly accurate stories about the useful role served by their profession. The public deserves to know the truth about the people in whom it will invest the public trust. Legislators require information and expertise that only industry participants can provide. True, true! But these are, obviously, incomplete accounts of the roles that these professionals play. Lobbyists don’t simply inject neutral, objective information into the legislative process. And opposition research is used in ways other than to immediately inform the public. For both bribery and blackmail, there is a spectrum of vulgarity. A guy gives you a suitcase of hundred-dollar bills that you hide in your freezer in exchange for a legislative favor. That’s vulgar, and illegal. But the same gentleman hints in conversation that, should you ever choose to “leave public service”, his firm would be excited to hire someone with your connections and expertise — expertise which, it needn’t be said, ought naturally be reflected in legislative choices! — and that is tasteful, normal, legal. Those jobs are worth a lot more than a suitcase full of C-notes. Similarly, it is vulgar and unnecessarily risky to show up in a Congressional office with a dossier of compromising pictures, or the dossier documenting one’s participation in a fraud. You just have to make it known that you know.

I’m going to excerpt a bit from a great, underdiscussed piece by Beverly Gage:

[J. Edgar] Hoover exercised powerful forms of control over potential critics. If the FBI learned a particularly juicy tidbit about a congressman, for instance, agents might show up at his office to let him know that his secrets—scandalous as they might be—were safe with the bureau. This had the predictable effect: Throughout the postwar years, Washington swirled with rumors that the FBI had a detailed file on every federal politician. There was some truth to the accusation. The FBI compiled background information on members of Congress, with an eye to both past scandals and to political ideology. But the files were probably not as extensive or all-encompassing as people believed them to be. The point was that it didn’t matter: The belief alone was enough to keep most politicians in line, and to keep them voting yes on FBI appropriations.

Today, James Bamford quotes a former senior CIA official, describing current spymaster Keith Alexander:

We jokingly referred to him as Emperor Alexander — with good cause, because whatever Keith wants, Keith gets… We would sit back literally in awe of what he was able to get from Congress, from the White House, and at the expense of everybody else.

Bribery and blackmail go together, of course. The carrot and the stick. It’s not just that bad things will happen if you don’t toe the line. If you do the right thing, who knows? You might be the next Dianne Feinstein. Or John Boehner. Or Barack Obama. Note that, despite my excesses in this regard as a writer, I did not place do-the-right-thing in italics or scare quotes. There is a third element in this recipe for influence: persuasion. People don’t like to view themselves as venal, corrupt, weak. Even the sort of person who ends up “senior in politics” has limits to how crass a view of themselves they will tolerate. Bribery and blackmail are omnipresent in the background, but in the foreground are spirited conversations, arguments over policy, arguments in which I suspect decisionmakers frequently start with the hardest possible line against the position they will eventually accept so that they can reassure themselves: they have been persuaded, it was not just the pressure. I accuse Barack Obama of having been effectively bribed and blackmailed on these issues, but if he ever were to respond, I suspect he would deny that fervently and with perfect, absolute sincerity. He was persuaded. He knows more now than he did then.

We humans are such malleable things. This is not, ultimately, a story about evil individuals. The last thing I want to do with my time is get into an argument over the character of our President. I could care less. The problem we face here is social, institutional. Bribery, blackmail, influence peddling, flattery — these have always been and always will be part of any political landscape. Our challenge is to minimize the degree to which they corrupt the political process. “Make better humans” is not a strategy that is likely to succeed. “Find better leaders” is just slightly less naive. Institutional problems require institutional solutions. We did manage to reduce the malign influence of the J. Edgar Hoover security state, by placing institutional checks on what law enforcement and intelligence agencies could do, and by placing those agencies under more public and intrusive supervision. I think that much of our task today is devising a sufficient surveillance architecture for our surveillance architecture.

But as we are talking about all this, let’s remember what we are talking about. We are not talking about a tradeoff between “security” and “privacy”. That framing is a distraction. Our current path is to pay for (alleged) security by acquiescence to increasingly corrupt and corruptible governance. We ought to ask ourselves whether a very secure, very corrupt state is better than the alternatives, whether security for corruption is a tradeoff we are willing to make.


P.S. It’s worth pausing in this context to note with sadness the death of Michael Hastings yesterday in a car crash. Hastings was a person clearly trying to address corrupt power by placing it under aggressive public surveillance. It’s worth considering the lessons of Cowen’s quip about “nuts” before we profess to be certain of very much.

Update History:

  • 20-Jun-2013, 6:15 a.m. PDT: “professionals plays” → “professionals play”
  • 21-Jun-2013, 4:55 a.m. PDT: converted parens to em dashes in bit beginning “expertise which…”; added hyphen into “self-consciously”; “to which they will eventually be persuaded” → “they will eventually accept”; “reassure themselves. They have…” → “reassure themselves: they have…”

‘Tis of thee

I want to comment on this widely discussed bit by Josh Marshall:

Let me put my cards on the table. At the end of the day, for all its faults, the US military is the armed force of a political community I identify with and a government I support. I’m not a bystander to it. I’m implicated in what it does and I feel I have a responsibility and a right to a say, albeit just a minuscule one, in what it does. I think a military force requires a substantial amount of secrecy to operate in any reasonable way. So when someone on the inside breaks those rules, I need to see a really, really good reason. And even then I’m not sure that means you get off scott free. It may just mean you did the right thing.

So do I see someone [Manning] who takes an oath and puts on the uniform and then betrays that oath for no really good reason as a hero? No.

The Snowden case is less clear to me. At least to date, the revelations seem more surgical. And the public definitely has an interest in knowing just how we’re using surveillance technology and how we’re balancing risks versus privacy. The best critique of my whole position that I can think of is that I think debating the way we balance privacy and security is a good thing and I’m saying I’m against what is arguably the best way to trigger one of those debates.

But it’s more than that. Snowden is doing more than triggering a debate. I think it’s clear he’s trying to upend, damage — choose your verb — the US intelligence apparatus and policies he opposes. The fact that what he’s doing is against the law speaks for itself. I don’t think anyone doubts that narrow point. But he’s not just opening the thing up for debate. He’s taking it upon himself to make certain things no longer possible, or much harder to do. To me that’s a betrayal. I think it’s easy to exaggerate how much damage these disclosures cause. But I don’t buy that there are no consequences. And it goes to the point I was making in an earlier post. Who gets to decide? The totality of the officeholders who’ve been elected democratically – for better or worse – to make these decisions? Or Edward Snowden, some young guy I’ve never heard of before who espouses a political philosophy I don’t agree with and is now seeking refuge abroad for breaking the law?

I, like Josh Marshall, identify very strongly with the political community called the United States of America. But for precisely that reason, my reaction is almost precisely the opposite of Marshall’s.

As a human being, it is very important to me to be good. But I am not meaningfully a human being as an individual. No one is, no matter how libertarian one’s philosophy or how sterilely individualistic one’s economic models. I do not identify with the political community called the United States like I might identify with a football team, hoping for victory against rivals simply because it is mine, my team. I identify with the United States of America as a vast and complex social and moral agent of which I have the privilege to be a part. I am elevated by my affiliation with, incorporation within, that community when it is a good community. I am diminished and pained and made nauseous when it is an evil community. Of course it is both, always, in various degrees, in my own shifting perceptions and those of others and in whatever unknowable objective reality might ever be ascribed to such a thing. But there are preponderances. A decade ago, my view was that the United States was, on the whole, in this imperfect realm of man, a good community. “The arc of the moral universe is long, but it bends towards justice,” said Dr. King. I believed that about the United States, and I believed that about the United States’ role in the larger world. Now I hear various smug, self-righteous, powerful figures intone those words and I want to puke. In my tiny, flawed view — but I don’t think I am alone in this — the preponderances have shifted. As a community, our flaws are eclipsing our virtues. Men and women are imperfect, human communities are imperfect, but there are differences of degree and they matter. We are all born sinners, avaricious machines, selfish genes, but some work and strive and, to various degrees, perhaps even succeed at being better than they might be, and we should notice that, honor it. And when we are small and mean — perhaps without bad intentions but life is hard and the world is complex and we are buffeted by so many different forces, so many “incentives” — when we notice that we are small and mean, we should dishonor that, and work to change it.

In the 1980s, Ronald Reagan referred to the Soviet Union as an evil empire, and he was right to do so. That never meant that Russian people, individually, were bad people. That did not and would not justify anyone’s blowing up a Moscow apartment building, or attacking their soldiers, or any other prima facie awful thing. Violence in the service of good is not an impossible thing, perhaps, but it is a rare and very delicate thing at best. Violence ought never be justified in broad, sloppy brush strokes. Nothing would have justified a terrorist act against the Soviet Union, but it was a community whose moral character, with respect to both the lived experience inside it and its influence externally, was malign, and it was a matter of serious concern to people within and without that it should change. The community that was the Soviet Union is still evolving, and the jury is still out, as it will ever be, because human affairs are never permanent. For the sake of the people inside that community and for the sake of all the rest of us, we ought to wish them the best.

It pains me very much to say so, but the United States today is not a benign community. We have, over the last decade, undermined nearly all of the reasons that I, perhaps as a fool, thought distinguished us as virtuous, in our own particular way, despite our many flaws. A decade ago, I trusted our institutions, our government, our think tanks and universities and fourth estate, our processes and our evaluations of our own competences, and supported what turned out to be a disastrous war in Iraq. Even though I observed what at the time were pretty obvious housing and credit bubbles, I believed our system was self-correcting, that those who fueled those errors would eventually be held accountable, economically and sometimes criminally, that we would suffer institutions to fall and titans to be shamed in order to preserve the integrity of our economy. I remember how ashamed I was when, in 2005, my then-girlfriend (now wife) came to visit the United States, and we were driving around with people on NPR debating whether torture was OK. Today my country holds people it has exonerated of wrongdoing in a tropical prison for indefinite terms because it cannot overcome the bureaucratic and political obstacles to letting them live, somewhere, the lives that they, like each of us, have been blessed with. Today my country sends remote-control airplanes into a country we are not at war with and kills people it cannot identify in a program it assures us is “surgical”. When called to account for harm to noncombatants, it classifies “males of military age” as militants to keep the statistics flattering.

I am, and I will always be, a member of the political community called the United States of America. That is why these things pain me. The fact that I identify with this community does not mean I identify with the state, its government, its military institutions, even its civil society as presently constituted. I certainly want to identify with those things. I certainly used to identify with those things, ostentatiously and very proudly. In the community we all wish we belonged to, we would honor the state and the constellation of prominent institutions that surround it, banks and universities and newsmedia, as constituting a system of political and economic and necessarily moral governance that functions reasonably well, whose actors police and correct themselves when, inevitably, things go askew. I don’t think a reasonable observer can claim this describes the institutions of the United States right now. Laws are made of words. There is no rule of law in a society where Presidents argue over what “is” is, or claim “domestic” e-mails aren’t “targeted” to avoid discussing whether or not they are read. There is no rule of law when our leaders offer no language that meshes with commonsense reality, only phrases carefully parsed not to be caught out as outright lies while revealing as little truth as possible. There is no rule of law when members of incumbent power centers, in government or banking or the military, are almost never held to account for crimes that in ordinary life would be grave, while those they dislike are jailed naked and sleepless for the sin of betraying their secrets without any hint or allegation of malice.

Even in a good community, there is a role for secrets. I have taken some heat for defending, in principle, a role for opacity in banking, and would certainly defend a sphere of secrecy in diplomacy and governance. And it is always true that some cockroaches will thrive in the shadows. But though we may sometimes choose to blind ourselves, we ought not do so blindly. I might entrust my funds to a banker in promise of a sure return, but only if I have reason to believe, in the context of the web of institutions to which she belongs, she can be trusted to do reasonable things rather than steal the advance. I needn’t mislead myself that she is infallible. But I should know that in the unlikely event of a failure, it will be a virtuous failure, which in practice implies an accountable failure. Secrecy may be necessary, but it is intolerable without accountability, accountability in fact not in form. Our core institutions and the humans within them no longer hold themselves accountable for their large crimes, though they occasionally offer scapegoats from their ranks for small ones. They have evolved ingenious contrivances with elaborate rituals of accountability whose lack of substance is most invisible to the people enmeshed in them. This will kill us, is killing us, slowly and by degrees and not before it kills many other people. It is because I identify with my political community, because I do not exist except as a part of that community, that I am desperate to change this. I cannot be a good person, and I cannot be happy, when this is my polity.

The comfortable, “legitimate”, forms of accountability are failing, have failed. Whistleblowing is accountability by other means, and we need that, and ought to celebrate it. Our problem is not that it is done too frequently or too lightly. I might prefer Edward Snowden hadn’t gone to China(ish), but the health and virtue of my community is not a contest, not a rivalry with that or any other country. Contra Marshall, Snowden has “upended” nothing. Nothing he did prevents us from doing as much or as little surveillance as we, collectively, choose to do, and we may yet choose to do quite a lot of it. What Snowden has done is force us to own up, to stop pretending we are not doing what we all know we are doing, to stop pretending we do not know what is being done to us. It is on us, as a political community, to decide what we want to do and, most importantly, how, on what terms, we want to do it. The people to whom we should listen the least are Dianne Feinstein and Barack Obama, John Boehner and Lindsey Graham, James Clapper. The tragedy is they probably have as hard a time telling when they are lying as we do, they are so lost in it all. This isn’t their decision. This is our country.

Update History:

  • 18-Jun-2013, 3:25 a.m. PDT: Reworked sentence about noticing ourselves being “small and mean” a bit (no change in meaning, but removed a duplicate “but” and made it slightly less awkward, I hope, although nearly every sentence of this piece is awkward.) Italicized rituals in “rituals of accountability”.

Apology in advance

I am in general a terribly constipated blogger. Actually, I don’t know whether I qualify as a “blogger” at all, given the infrequency of my expulsions. As I’ve developed a readership, I’ve come to consider publishing on this site a big deal, something not to be taken lightly. I abandon as many posts as I publish, and it’s rare that I publish a post on the same day I begin it. Sometimes it’s not even the same week. Though I love that people disagree, I feel terrible if I publish stuff that, after the fact, I myself decide is shoddy or mistaken. I feel terrible when I think I’ve misrepresented or failed to properly attribute other people, and usually make a time-consuming effort (which never turns out to be enough) to track down and link antecedents. More generally, though it may be cliché to say so, I know myself to be a total fraud, and self-censor a great deal in hope that you won’t notice.

My thoughts over the last week have not really been on helicopter drops or monetary policy or anything narrowly economic, but on a whole range of repressed concerns brought to the fore by the NSA scandals. If I try to express these with the restraint and care I’ve come to impose on my posts, I will just never express them. So I’m going to give myself license to be a lot more stream of consciousness, a lot more careless, over the next few days and just vomit into your RSS feed. I want to apologize in advance for the green chunks.