
Welfare economics: housekeeping and links

A correspondent asks that I give the welfare series a table of contents. So here’s that…

Welfare economics:
  1. Introduction
  2. The perils of Potential Pareto
  3. Inequality, production, and technology
  4. Welfare theorems, distribution priority, and market clearing
  5. Normative is performative, not positive

I think I should also note the “prequel” of the series, the post whose comments inspired the exercise:

Much more interesting than any of that, I’ll add a box below with links to related commentary that has come my way. And of course, there have been two excellent comment threads.

Welfare economics: normative is performative, not positive (part 5 and conclusion of a series)

This is the fifth (and final) part of a series. See parts 1, 2, 3, and 4.

For those who have read along thus far, I am grateful. We’ve traveled a long road, but in the end we haven’t traveled very far.

We have understood, first, the conceit of traditional welfare economics: that with just a sprinkle of one widely popular bit of ethical philosophy — liberalism! — we could let positive economics (an empirical science, at least in aspiration) serve as the basis for normative views about how society should be arranged. But we ran into a problem. “Scientificoliberal” economics can decide between alternatives when everybody would agree that one possibility would be preferable to (or at least not inferior to) another. But it lacks any obvious way of making interpersonal comparisons, so it cannot choose among possibilities that would leave some parties “better off” (in a circumstance they would prefer), but others worse off. Since it is rare that nontrivial economic and social choices are universally preferable, this inability to trade off costs and benefits between people seems to render any usefully prescriptive economics impossible.

We next saw a valiant attempt by Nicholas Kaldor, John Hicks, and Harold Hotelling to rescue “scientificoliberal” economics with a compensation principle. We can rank alternatives by whether they could make everybody better off, if they were combined with a compensating redistribution (regardless of whether the compensating redistribution actually occurs). At a philosophical level, the validity of the Kaldor-Hicks-Hotelling proposal requires us to sneak a new assumption into “scientificoliberal” economics — that distributive arrangements adjudicated by the political system are optimal, so that any distributive deviation from actual compensation represents a welfare improvement relative to the “potential” improvement which might have occurred via compensation. This assumption is far less plausible than the liberal assumption that what a person would prefer is a marker of what would improve her welfare. But we have seen that, even if we accept the new assumption, the Kaldor-Hicks-Hotelling “potential Pareto” principle cannot coherently order alternatives. It can literally tell us that we should do one thing, and we’d all be better off, and then that we should undo that very thing, because we would all be better off.
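The compensation test itself is easy to make concrete. Here is a toy sketch — mine, not anything from the formal literature — of the potential-Pareto test in a two-person, two-good economy: a brute-force grid search over redistributions of one state’s total endowment, asking whether some redistribution would leave both parties at least as well off as under the other state. All function names and numbers are hypothetical illustrations.

```python
from itertools import product

def utility_pairs(total, u1, u2, steps=50):
    """Enumerate feasible utility pairs for a 2-good, 2-person economy
    by gridding over ways to split the total endowment (X, Y)."""
    X, Y = total
    pairs = []
    for i, j in product(range(steps + 1), repeat=2):
        x1, y1 = X * i / steps, Y * j / steps
        pairs.append((u1(x1, y1), u2(X - x1, Y - y1)))
    return pairs

def kaldor_hicks_prefers(total_b, state_a_utils, u1, u2, steps=50):
    """Potential-Pareto test: does SOME redistribution of state B's
    endowment leave both parties at least as well off as in state A
    (and not exactly at A's utilities)? Compensation need only be
    possible, not actually paid -- that is the whole trick."""
    ua1, ua2 = state_a_utils
    return any(v1 >= ua1 and v2 >= ua2 and (v1, v2) != (ua1, ua2)
               for v1, v2 in utility_pairs(total_b, u1, u2, steps))
```

With suitably chosen (less convenient) preferences and endowments, this same test can pass in both directions at once — the “reversals” the series describes, where the criterion blesses a change and then blesses its undoing.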

In the third installment, we saw that these disarming “reversals” were not some bizarre corner case, but are invoked by the most basic economic decisions. To what goods should the resources of an economy be devoted? What fraction should go to luxuries, and what fraction to necessities? Should goods be organized as “public goods” or “club goods” (e.g. shared swimming pools), or as private goods (unshared, personal swimming pools)? These alternatives are unrankable according to the Kaldor-Hicks-Hotelling criterion. The resource allocation decision that will “maximize the size of the pie” depends entirely on what distribution the pie will eventually have. It is impossible to separate the role of the economist as an objective efficiency maximizer from the role of the politician as an arbiter of interpersonal values. The efficiency decision is inextricably bound up with the distributional decision.

Most recently, we’ve seen that the “welfare theorems” — often cited as the deep science behind claims that markets are welfare optimizing — don’t help us out of our conundrum. The welfare theorems tell us that, under certain ideal circumstances, markets will find a Pareto optimal outcome, some circumstance under which no one can be made better off without making someone worse off. But they cannot help us with the question of which Pareto optimal outcome should be found, and no plausible notions of welfare are indifferent between all Pareto optimal outcomes. The welfare theorems let us reduce the problem of choosing a desirable Pareto optimal outcome to the problem of choosing a money distribution — once we have the money distribution, markets will lead us to make optimal production and allocation decisions consistent with that distribution. But we find ourselves with no means of selecting the appropriate money distribution (and no scientific case at all that markets themselves optimize the distribution). We are back exactly where we began, wondering how to decide who gets what.

In private correspondence, Peter Dorman suggests

Perhaps the deepest sin is not the urge to have a normative theory as such, but the commitment to having a single theory that does both positive and normative lifting. Economists want to be able to say that this model, which I can calibrate to explain or predict observed behavior, demonstrates what policies should be enacted. If these functions were allowed to be pursued separately, each in its own best way, I think we would have a much better economics.

We’ve seen that positive economics (even with that added sprinkle of liberalism) cannot serve as the basis for a normative economics. But if we toss positive economics out entirely, it’s not clear how economists might have anything at all to say about normative questions. Should we just leave those to the “prophet and the social reformer”, as Hicks disdainfully put it, or is there some other way of leveraging economists’ (putative) expertise in positive questions into some useful perspective on the normative? I think that there is.

The key, I think, is to relax the methodological presumption of one-way causality from positive observations to normative conclusions. The tradition of “scientific” welfare economics is based on aggregating presumptively stable individual preferences into a social welfare ordering whose maximization could be described as an optimization of welfare. Scitovsky and then Arrow showed that this cannot be done without introducing some quite destructive paradoxes, or letting the preferences of a dictator dominate. It is, however, more than possible — trivial, even — to define social welfare functions that map socioeconomic observables into coherent orderings. We simply have to give up the conceit that our social welfare function arises automatically or mechanically from individual preferences characterized by ordinal utility functions. At a social level, via politics, we have to define social welfare. There is nothing “economic science” can offer to absolve us of that task.
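To see how trivial it is to define a coherent ordering once we permit ourselves a cardinal form, here is a sketch of one conventional family of social welfare functions — the isoelastic, Atkinson-style form. The form and its inequality-aversion parameter ε are my additions for illustration; ε is exactly the kind of value a polity, not a science, must choose.

```python
import math

def atkinson_swf(incomes, epsilon=1.0):
    """Equal-weighted sum of isoelastic (CRRA) individual utilities.
    epsilon encodes how strongly society dislikes inequality; it is a
    political choice, not a deduction from individual preferences.
    epsilon = 0 is the plain utilitarian sum; epsilon = 1 is log."""
    if epsilon == 1.0:
        return sum(math.log(y) for y in incomes)
    return sum(y ** (1 - epsilon) / (1 - epsilon) for y in incomes)
```

Any two income distributions are comparable under such a function — the ordering is complete, no Scitovsky reversals — but nothing in ordinal positive economics dictates the choice of ε.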

But then what’s left for economic science to offer? Quite a bit, I think, if it would let itself out of the methodological hole it’s dug itself into. As Dorman points out, economists so entranced themselves with the notion that their positive economics carries with it a normative theory like the free prize in a box of Cracker Jacks that they have neglected the task of creating a useful toolset for normative economics as a fully formed field of its own.

A “scientific” normative economics would steal the Kaldor-Hicks-Hotelling trick of defining a division of labor between political institutions and value-neutral economics. But politicians would not uselessly (as a technical matter) and implausibly (let’s face it) be tasked with “optimal” distributional decisions. Political institutions are not well-suited to making ad hoc determinations of who gets what. We need something systematic for that. What political institutions are well suited to doing, or at least better suited than plausible contenders, is to make broad-brush determinations of social value, to describe the shape of the society that we wish to inhabit. How much do we, as a society, value equality against the mix of good (incentives to produce and innovate) and bad (incentives to cheating and corruption, intense competitive stress) that come with outcome dispersion? How much do we value public goods whose relationship to individual well-being is indirect against the direct costs to individuals required to pay for those goods?

A rich normative economics would stand in dialogue with the political system, taking vague ideas about social value and giving them form as social welfare functions, exploring the ramifications of different value systems reified as mathematics, letting political factions contest and revise welfare functions as those ramifications stray from, or reveal inconsistencies within, the values they intend to express. A rich normative economics would be anthropological in part. It would try to characterize, as social welfare functions, the “revealed preferences” of other polities and of our own polity. Whatever it is we say about ourselves, or they say about themselves, what does it seem like polities are actually optimizing? As we analyze others, we will develop a repertoire of formally described social types, which may help us understand the behavior of other societies and will surely add to the menu we have to choose from in framing our own social choices. As we analyze ourselves, we will expose fault lines between our “ideals” (preferences we claim to hold that may not be reflected in our behavior) and how we actually are. We can then make decisions about whether and how to remedy those.

The role of the economist would be that of an explorer and engineer, not an arbiter of social values. Assuming (perhaps heroically) a good grasp of the positive economics surrounding a set of proposals, an economist can determine — for a given social welfare function — which proposal maximizes well-being, taking into account effects on production, distribution, and any other inputs affected by the proposal and included in the function. Under which of several competing social welfare functions policies should be evaluated would become a hotly contested political question, outside the economist’s remit (at least in her role as scientist rather than citizen). Policies would be explored under multiple social welfare functions, each reflecting the interests and values of different groups of partisans, and political institutions would have to adjudicate the conflicting results. But different social welfare functions can be mapped pretty clearly to conflicting human values. We will learn something about ourselves, perhaps have to fess up something about ourselves, by virtue of the social welfare functions whose champions we adopt. And perhaps seeing so clearly the values implied by different choices will help political systems make choices that better reflect our stated values, our ideals.

Coherent social welfare functions would necessarily incorporate cardinal, not ordinal, individual welfare functions. Those cardinal functions could not be fully determined by the results of strictly ordinal positive economics, though they might be defined consistently with those results. Their forms and cardinalities would structure how we make tradeoffs between individuals along dimensions of consumption and risk.
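Why cardinal functions cannot simply be read off ordinal results is worth a concrete look. Ordinal preferences survive any monotone rescaling of a utility function, but sums of utilities do not. A toy demonstration, with numbers invented purely for illustration:

```python
def swf_sum(utils):
    """Utilitarian sum over individual utility numbers."""
    return sum(utils)

# Two candidate outcomes; utilities for (person 1, person 2).
outcome_a = (1.0, 2.0)
outcome_b = (2.0, 0.5)

# Under the raw cardinalization, A is ranked above B...
rank_before = swf_sum(outcome_a) > swf_sum(outcome_b)   # 3.0 vs 2.5

# ...but cubing person 1's utility -- a monotone, hence ordinally
# equivalent, recalibration that changes no individual choices --
# flips the social ranking.
recal_a = (outcome_a[0] ** 3, outcome_a[1])             # (1.0, 2.0)
recal_b = (outcome_b[0] ** 3, outcome_b[1])             # (8.0, 0.5)
rank_after = swf_sum(recal_a) > swf_sum(recal_b)        # 3.0 vs 8.5
```

The choice among ordinally equivalent cardinalizations is precisely the interpersonal-weighting decision that positive economics cannot make for us.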

What if they get those tradeoffs “wrong”? What if, for example, we weight individual utilities equally, but one of us is the famous “utility monster“, whose subjective experience of joy and grief is so great and wide that, in God’s accounting, the rest of our trivial pleasures and pains would hardly register? How dare we arrogate to ourselves the power to measure and weigh one individual’s happiness against some other?

In any context outside of economics it would be unsurprising that the word “normative” conjures other words, words like “obligation” or “social expectation”. Contra the simplistic assumption of exogenous and stable preferences, the societies we inhabit quite obviously shape and condition both the preferences that we subjectively experience and the preferences it is legitimate to express in our behavior. Ultimately, it doesn’t matter whether “utility monsters” exist, and it doesn’t matter that the intensities of our subjective experiences are unobservable and incommensurable. Social theories do not merely describe human beings. Tacitly or explicitly, as they become widely held, they organize our perceptions and shape our behavior. They become descriptively accurate when we are able, and can be made willing, to perform them. And only then.

So the positive and the normative must always be in dialogue. A normative social theory, whether expressed as a social welfare function or written in a holy scripture, lives always in tension with the chaotic, path-dependent predilections of the humans whose behavior it is intended to order. On the one hand, we are not constrained (qua traditional welfare economics) by the positive. Our normative theories can change how people behave, along with the summaries of behavior that economists refer to as “preferences”. But if we try to impose a normative theory too out of line with the historically shaped preferences and incentives of those it would govern, our norms will fail to take. Our project of structuring a “good” society (under the values we choose, however arbitrarily) will fail. The humans may try to perform our theory or they may explicitly rebel, but they won’t manage it. Performativity gives us some latitude, but positive facts about human behavior — susceptibility to incentives, requirements that behavior be socially reinforced, etc. — impose constraints. Over a short time horizon, we may be unable to optimize a social welfare function that reflects our ideals, because we are incapable or unwilling to behave in the ways that it would require. Intertemporal utility functions are a big deal in positive economics. The analog in normative economics should be dynamic social welfare functions that converge over time to the values we wish would govern us, while making near-term concessions to the status quo and our willingness and capacity to perform our ideals. (The rate and manner of convergence would themselves be functions of contestable values constrained by practicalities.)
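One crude way to formalize such a dynamic social welfare function — purely my own sketch, with an arbitrary linear convergence schedule standing in for the contestable choice of rate and manner — is to blend a status-quo function with an ideal one under a time-varying weight:

```python
def dynamic_swf(swf_ideal, swf_status_quo, horizon):
    """Return a time-indexed social welfare function that starts at the
    status-quo function and converges linearly to the ideal by `horizon`.
    The linear schedule is arbitrary: the convergence path is itself a
    contestable value choice constrained by practicalities."""
    def swf_at(t, welfares):
        lam = min(max(t / horizon, 0.0), 1.0)   # weight on the ideal
        return (1 - lam) * swf_status_quo(welfares) + lam * swf_ideal(welfares)
    return swf_at
```

For instance, a polity might concede a utilitarian sum today while committing to a Rawlsian-flavored minimum over a generation; the blended function makes the near-term concession explicit rather than hidden.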

This performativity stuff sounds very postmodern and abstract, but it shouldn’t. It impinges on lots of live controversies. For example, a few years ago there was the kerfuffle surrounding whether the rich and poor consume such different baskets of goods that we should impute different inflation rates to them. Researchers Christian Broda and John Romalis argued that the inflation rate of the rich was higher than that of the poor, and so growth in real income inequality was overstated. I thought that dumb, since the rich always have the option of substituting the cheaper goods bought by the poor into their consumption basket. Scott Winship pointed out the (to him) dispositive fact that, empirically, they seem not to substitute. In fact, if you read the paper, the researchers estimate different utility functions for different income groups, treating rich and poor as though they were effectively distinct species. If we construct a social welfare function in which individual welfares were represented by the distinct utility functions estimated by Broda and Romalis, if in the traditional manner we let their (arguable) characterization of the positive determine the normative, we might find their argument unassailable. The goods the poor buy might simply not enter into the utility functions of the rich, so the option to substitute would be worthless. If we took this social welfare function seriously, we might be compelled, for example, to have the poor make transfers to the rich if the price of caviar rises too steeply.
Alternatively, if we let the normative impose an obligation to perform, and if we want our social welfare function to reflect the value that “all men are created equal”, we might reject the notion of embedding different individual welfare functions for rich and poor into our social welfare function and insist on a common (nonhomothetic) function, in which case the option to substitute hot dogs for caviar would necessarily reflect a valuable benefit to the wealthy. But, we’d have to be careful. If our imposed ideal of a universal individual welfare function is not a theory our rich could actually perform — if it turns out that the rich would in fact die before substituting hot dogs for caviar — then our idealism might prove counterproductive with respect to other ideals, like the one that people shouldn’t starve. Positive economics serves as a poor basis for normative economics. But neither can positive questions be entirely ignored. [Please see update.]
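The mechanics underlying the dispute are simple enough to sketch. Group-specific inflation is just an expenditure-share-weighted average of price changes; with hypothetical numbers of my own (not Broda and Romalis’s estimates):

```python
def group_inflation(expenditure_shares, price_relatives):
    """Laspeyres-style inflation for a group: expenditure-share-weighted
    average of each good's price change (price relative minus one)."""
    return sum(expenditure_shares[g] * (price_relatives[g] - 1.0)
               for g in expenditure_shares)

# Hypothetical numbers, purely for illustration.
relatives = {"caviar": 1.20, "hot_dogs": 1.02}   # caviar +20%, hot dogs +2%
rich_basket = {"caviar": 0.6, "hot_dogs": 0.4}   # rich spend heavily on caviar
poor_basket = {"caviar": 0.0, "hot_dogs": 1.0}   # poor buy only hot dogs
```

Whether the rich’s measured 12.8% inflation here is a real welfare loss, or is offset by their unexercised option to substitute hot dogs, is exactly the positive-versus-performative question in the text.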

I’ve given an example where a normative egalitarianism might override claims derived from positive investigations. That’s comfortable for me, and perhaps many of my readers. But there are, less comfortably, situations where it might be best for egalitarian ideals to be tempered by facts on the ground. Or not. There are no clean or true answers to these questions. What a normative economics can and should do is pose them clearly, reify different sets of values and compromises into social welfare functions, and let the polity decide. (Of course as individuals and citizens, we are free to advocate as well as merely explore. But not under the banner of a “value neutral science”.)

This series on welfare economics was provoked by a discussion of the supply and demand diagrams that lie at the heart of every Introductory Economics course, diagrams in which areas of “surplus” are interpreted as welfare-relevant quantities. I want to end there too. Throughout this series, using settled economics, we developed the tools by which to understand that those diagrams are, um, problematic. Surplus is incommensurable between people and so is meaningless when derived from market, rather than individual, supply and demand curves. Potential compensation of “losers” by “winners” is not a reasonable criterion by which to judge market allocations superior to other allocations: It does not form an ordering of outcomes. Claims that ill-formed surplus somehow represents a resource whose maximization enables redistribution ex post are backwards: Under the welfare theorems, redistribution must take place prior to market allocation to avoid Pareto inferior outcomes. As I said last time, the Introductory Economics treatment is a plain parade of fallacies.

You might think, then, that I’d advocate abandoning those diagrams entirely. I don’t. All I want is a set of caveats added. The diagrams are redeemable if we assume that all individuals have similar wealth, that they share a similar indirect utility with respect to wealth even as their detailed consumption preferences differ, and that the value of the goods being transacted is small relative to the size of market participants’ overall budgets. Under these assumptions (and only under these assumptions), if we interpret indirect utilities as summable welfare functions, consumer and producer surplus become (approximately) commensurable across individuals, and the usual Econ 101 catechism holds. Students should learn that the economics they are taught is a special case — the economics of a middle class society. They should understand that an equitable distribution is prerequisite to the version of capitalism they are learning, and that the conclusions and intuitions they develop become dangerously unreliable as the dispersion of wealth and income increases.
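Under those caveats, the familiar triangles compute as usual. A sketch with linear inverse demand and supply curves (the notation and numbers are mine, for illustration):

```python
def surplus_linear(d_intercept, d_slope, s_intercept, s_slope):
    """Equilibrium and surplus for linear inverse demand P = a - b*Q and
    linear inverse supply P = c + d*Q. The surplus triangles are
    welfare-relevant only under the middle-class-society caveats:
    comparable wealth and goods that are small budget shares."""
    q_star = (d_intercept - s_intercept) / (d_slope + s_slope)
    p_star = d_intercept - d_slope * q_star
    consumer_surplus = 0.5 * q_star * (d_intercept - p_star)
    producer_surplus = 0.5 * q_star * (p_star - s_intercept)
    return q_star, p_star, consumer_surplus, producer_surplus
```

For P = 10 − Q against P = 2 + Q, the market clears at Q = 4, P = 6, with consumer and producer surplus of 8 apiece; the point of the caveats is that adding those two 8s together is only meaningful when the transactors’ dollars are roughly commensurable.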

Why not just throw the whole thing away? Writing on economics education, Brad DeLong recently, wonderfully, wrote, “modern neoclassical economics is in fine shape as long as it is understood as the ideological and substantive legitimating doctrine of the political theory of possessive individualism.” An ideological and substantive legitimating doctrine is precisely what the standard Introductory Economics course is. The reason “Econ 101” is such a mainstay of political discussions, and such a lightning rod for controversy, is that it offers a compelling, intuitive, and apparently logical worldview that stays with students, sometimes altering viewpoints and behavior for a lifetime. For a normative theory to be effective, people must be able to internalize it and live it. Simplicity and coherence are critical, not for parsimony, but for performativity. “Econ 101” is a proven winner at that. If students understand that they are learning the “physics” of an egalitarian market economy, the theory is intellectually defensible and, from my value-specific perspective, normatively useful. If it is taught without that caveat (and others, see DeLong’s piece), the theory is not defensible intellectually or morally.

It would be nice if students were also taught they were learning a performative normative theory, a thing that is true in part because they make it true by virtue of how they behave after having been taught it. But perhaps that would be too much to ask.


Update: Scott Winship writes to let me know that some doubt has been cast on the Broda/Romalis differential inflation research; it may be mistaken on its own terms. But the controversy is still a nice example of the different conclusions one draws when normative inferences are based solely on positive claims drawn from past behavior versus when normative ideas are imposed and expected to condition behavior.

Update History:

  • 8-Jul-2014, 10:45 a.m. PDT: Inserted “if we interpret indirect utilities as summable welfare functions,”; changed “Potential compensation of ‘winners’ by ‘losers’” to “Potential compensation of ‘losers’ by ‘winners’”.
  • 8-Jul-2014, 11:40 a.m. PDT: Added bold update re report by Scott Winship that there may be problems with Broda / Romalis research program on its own terms.
  • 8-Jul-2014, 3:25 p.m. PDT: “The tradition of ‘scientific’ welfare economics is based on aggregating…”; “It would try to characterize, as social welfare functions…”; “that converge over time to the values we wish would govern us”; “If we too took this social welfare function seriously” — Thanks Christian Peel!
  • 11-Jul-2014, 10:45 a.m. PDT: “a useful toolset for a normative economics as a fully formed field of its own.”

Welfare economics: welfare theorems, distribution priority, and market clearing (part 4 of a series)

This is the fourth part of a series. See parts 1, 2, 3, and 5. Comments are open on this post.

What good are markets anyway? Why should we rely upon them to make economic decisions about what gets produced and who gets what, rather than, say, voting or having an expert committee study the matter and decide? Is there a value-neutral, “scientific” (really “scientifico-liberal“) case for using markets rather than other mechanisms? Informally, we can have lots of arguments. One can argue that most successful economies rely upon market allocation, albeit to greater and lesser degrees and with a lot of institutional diversity. But that has not always been the case, and those institutional differences often swamp the commonalities in success stories. How alike are the experiences of Sweden, the United States, Japan, current upstarts like China? Is the dominant correlate of “welfare” really the extensiveness of market allocation, or is it the character of other institutions that matters, with markets playing only a supporting role? Maybe the successes are accidental, and attributing good outcomes to this or that institution is letting oneself be “fooled by randomness“. History might or might not make a strong case for market economies, but nothing that could qualify as “settled science”.

But there is an important theoretical case for the usefulness of markets, “scientific” in the sense that the only subjective value it enshrines is the liberal presumption that what a person would prefer is ipso facto welfare-improving. This scientific case for markets is summarized by the so-called “welfare theorems“. As the name suggests, the welfare theorems are formalized mathematical results based on stripped-down and unrealistic models of market economies. The ways that real economies fail to adhere to the assumptions of the theorems are referred to as “market failures”. For example, in the real world, consumers don’t always have full information; markets are incomplete and imperfectly competitive; and economic choice is entangled with “externalities” (indirect effects on people other than the choosers). It is conventional and common to frame political disagreements around putative market failures, and there’s nothing wrong with that. But for our purposes, let’s set market failures aside and consider the ideal case. Let’s suppose that the preconditions of the welfare theorems do hold. Exactly what would that imply for the role of markets in economic decisionmaking?

We’ll want to consider two distinct problems of economic decisionmaking, Pareto-efficiency and distribution. Are there actions that can be taken which would make everyone better off, or at least make some people better off and nobody worse off? If so, our outcome is not Pareto efficient. Some unambiguous improvement from the status quo remains unexploited. But when one person’s gain (in the sense of experiencing a circumstance she would prefer over the status quo) can only be achieved by accepting another person’s loss, who should win out? That is the problem of distribution. The economic calculation problem must concern itself with both of those dimensions.

We have already seen that there can be no value-neutral answer to the distribution problem under the assumptions of positive economics + liberalism. If we must weigh two mutually exclusive outcomes, one of which would be preferred by one person, while the other would be preferred by a second person, we have no means of making interpersonal comparisons and deciding what would be best. We will have to invoke some new assumption or authority to choose between alternatives. One choice is to avoid all choices, and impose as axiom that all Pareto efficient distributions are equally desirable. If this is how we resolve the problem, then there is no need for markets at all. Dictatorship, where one person directs all of an economy’s resources for her own benefit, is very simple to arrange, and, under the assumptions of the welfare theorems, will usually lead to a Pareto optimal outcome. (In the odd cases where it might not, a “generalized dictatorship” in which there is a strict hierarchy of decision makers would achieve optimality.) The economic calculation problem could be solved by holding a lottery and letting the winner allocate the productive resources of the economy and enjoy all of its fruits. Most of us would judge dictatorship unacceptable, whether imposed directly or arrived at indirectly as a market outcome under maximal inequality. Sure, we have no “scientific” basis to prefer any Pareto-efficient outcome over any other, including dictatorship. But we also have no basis to claim all Pareto-efficient distributions are equivalent.

Importantly, we have no basis even to claim that all Pareto-efficient outcomes are superior to all Pareto-inefficient outcomes. For example, in Figure 1, Point A is Pareto-efficient and rankably superior to Pareto-inefficient Point B. Both Kaldor and Hicks prefer A over B. But we cannot say whether Point A is superior or inferior to Point C, even though Point A is Pareto-efficient and Point C is not. Kaldor prefers Point A but Hicks prefers Point C, its Pareto-inefficiency notwithstanding. The two outcomes cannot be ranked.

[Figure 1]

We are simply at an impasse. There is nothing in the welfare theorems, no tool in welfare economics generally, by which to weigh distributional questions. In the next (and final) installment of our series, we will try to think more deeply about how “economic science” might helpfully address the question without arrogating to itself the role of Solomon. But for now, we will accept the approach that we have already seen Nicholas Kaldor and John Hicks endorse: Assume a can opener. We will assume that there exist political institutions that adjudicate distributional tradeoffs. In parliaments and sausage factories, the socially appropriate distribution will be determined. The role of the economist is to be an engineer, Keynes’ humble dentist, to instruct on how to achieve the selected distribution in the most efficient, welfare-maximizing way possible. In this task, we shall see that the welfare theorems can be helpful.

[Figure 2]

Figure 2 is a re-presentation of the two-person economy we explored in the previous post. Kaldor and Hicks have identical preferences, under a production function where different distributions will lead to deployment of different technologies. In the previous post, we explored two technologies, discrete points on the production possibilities frontier, and we will continue to do so here. However, we’ve added a light gray halo to represent the continuous envelope of all possible technologies. (The welfare theorems presume that such a continuum exists. The halo represents the full production possibilities frontier from Figure 1 of the previous post. The yellow and light blue curves represent specific points along the production frontier.) Only two technologies will concern us because only two distributions will concern us. There is the status quo distribution, which is represented by the orange ray. But the socially desired distribution is represented by the green ray. Our task, as dentist-economists, is to bring the economy to the green point, the unique Pareto-optimal outcome consistent with the socially desired distribution.

If economic calculation were easy, we could just make it so. Acting as benevolent central planners, we would select the appropriate technology, produce the set of goods implied by our technology choice, and distribute those goods to Kaldor and Hicks in Pareto-efficient quantities consistent with our desired distribution. But we will concede to Messrs. von Mises and Hayek that economic calculation is hard, that as central planners, however benevolent, we would be incapable of choosing the correct technology and allocating the goods correctly. Those choices depend upon the preferences of Kaldor and Hicks, which are invisible and unknown to us. Even if we could elicit consumer preferences somehow, our calculation would become very complex in an economy containing many more than two people and a near infinity of goods. We’d probably screw it up.

Enter the welfare theorems. The first welfare theorem tells us that, in the absence of “market failure” conditions, free trade under a price system will find a Pareto-efficient equilibrium for us. The second welfare theorem tells us that for every point in the “Pareto frontier”, there exists a money distribution such that free trade under a price system will take us to this point. We have been secretly using the welfare theorems all along, ever since we defined distributions as rays, fully characterized by an angle. Under the welfare theorems, we can characterize distributions in terms of money rather than worrying about quantities of specific goods, and we can be certain that each point on a Pareto frontier will map to a distribution, which motivates the geometric representation as rays. The second welfare theorem tells us how to solve our economic calculation problem. We can achieve our green goal point in two steps. (Figure 3) First, we transfer money from Hicks to Kaldor, in order to achieve the desired distribution. Then we let Kaldor and Hicks buy, sell, and trade as they will. Price signals will cause competitive firms to adopt the optimal technology (represented by the yellow curve), and the economy will end up at the desired green point.

welfare4_fig3
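The two-step recipe can be made concrete with a toy exchange economy. This is a minimal sketch under assumptions that are mine, not the post's: two agents with identical Cobb-Douglas preferences u(x, y) = x^a · y^(1−a) over fixed total supplies, so that competitive demands and market-clearing prices have closed forms.

```python
# Toy illustration of the second welfare theorem's two-step recipe:
# (1) transfer money to set the distribution, (2) let prices clear markets.
# Assumes identical Cobb-Douglas preferences u(x, y) = x**a * y**(1 - a),
# a simplification chosen for tractability.

def equilibrium(total_x, total_y, wealth, a=0.5):
    """Competitive equilibrium allocation given each agent's money wealth."""
    W = sum(wealth.values())
    # Market-clearing prices: aggregate Cobb-Douglas demand equals supply.
    p_x = a * W / total_x
    p_y = (1 - a) * W / total_y
    # Each agent spends fraction a of wealth on x, (1 - a) on y.
    return {name: (a * w / p_x, (1 - a) * w / p_y)
            for name, w in wealth.items()}

# Status quo distribution: Hicks holds 80% of the money.
before = equilibrium(100, 100, {"kaldor": 20, "hicks": 80})
# Step 1: transfer 30 from Hicks to Kaldor. Step 2: trade at the new prices.
after = equilibrium(100, 100, {"kaldor": 50, "hicks": 50})

print(before["kaldor"])  # (20.0, 20.0)
print(after["kaldor"])   # (50.0, 50.0) -- a different Pareto-efficient point
```

With identical homothetic preferences, every equilibrium allocation is simply proportional to wealth shares, which is exactly why distributions can be drawn as rays: the money distribution picks the point on the frontier, and trade does the rest.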

The welfare theorems are often taken as the justification for claims that distributional questions and market efficiency can be treated as “separate” concerns. After all, we can choose any distribution, and the market will do the right thing. Yes, but the welfare theorems also imply we must establish the desired distribution prior to permitting exchange, or else markets will do precisely the wrong thing, irreversibly and irredeemably. Choosing a distribution is prerequisite to good outcomes. Distribution and market efficiency are about as “separable” as mailing a letter is from writing an address. Sure, you can drop a letter in the mail without writing an address, or you can write an address on a letter you keep in a drawer, but in neither case will the letter find its recipient. The address must be written on the letter before the envelope is mailed. The fact that any address you like may be written on the letter wouldn’t normally provoke us to describe these two activities as “separable”.

Figure 4 illustrates the folly of the reverse procedure, permitting market exchange and then setting a distribution.

welfare4_fig4

In both panels, we first let markets “do their magic”, which takes us to the orange point, the Pareto-efficient point associated with the status quo distribution. Then we try to redistribute to the desired distribution. In Panel 4a, we face a very basic problem. The whole reason we required markets in the first place was because we are incapable of determining Pareto-efficient allocations by central planning. So, if we assume that we have not magically solved the economic calculation problem, when we try to redistribute in goods ex post (rather than in money ex ante), we are exceedingly unlikely to arrive at a desirable or Pareto-efficient distribution. In Panel 4b, we set aside the economic calculation problem, and presume that we can, somehow, compute the Pareto-efficient allocation of goods associated with any given distribution. But we’ll find that despite our remarkable abilities, the best that we can do is redistribute to the red point, which is Pareto-inferior to the should-be-attainable green point. Why? Because, in the process of market exchange, we selected the technology optimal for the status quo distribution (the light blue curve) rather than the technology optimal for the desired distribution (the yellow curve). Remember, our choice of “technology” is really the choice of which goods get produced and in what quantities. Ex post, we can only redistribute the goods we’ve actually produced, not the goods we wish we would have produced. There is no way to get to the desired green point unless we set the distribution prior to market exchange, so that firms, guided by market incentives, select the correct technology.

The welfare theorems, often taken as some kind of unconditional paean to markets, tell us that market allocation cannot produce a desirable Pareto-efficient outcome unless we have ensured a desirable distribution of money and initial endowments prior to market exchange. Unless you claim that Pareto-efficient allocations are lexicographically superior to all other allocations, that is, unless you rank any Pareto-efficient allocation as superior to all non-Pareto-efficient allocations — an ordering which reflects the preferences of no agent in the economy — unconditional market allocation is inefficient. That is to say, unconditional market allocation is no more or less efficient than holding a lottery and choosing a dictator.

In practice, of course, there is no such thing as “before market allocation”. Markets operate continuously, and are probably better characterized by temporary equilibrium models than by a single, eternal allocation. The lesson of the welfare theorems, then, is that at all times we must restrict the distribution of purchasing power to the desired distribution or (more practically) to within an acceptable set of distributions. Continuous market allocation while the pretransfer distribution stochastically evolves implies a regime of continuous transfers in order to ensure acceptable outcomes. Otherwise, even in the absence of any conventional “market failures”, markets will malfunction. They will provoke the production of a mix of goods and services that is tailored to a distribution our magic can opener considers unacceptable, goods and services that cannot in practice or in theory be redistributed efficiently because they are poorly suited to more desirable distributions.

By the way, if you think that markets themselves should choose the distribution of wealth and income, you are way off the welfare theorem reservation. The welfare theorems are distribution preserving, or more accurately, they are distribution defining — they give economic meaning to money distributions by defining a deterministic mapping from those distributions to goods and services produced and consumed. Distributions are inputs to a process that yields allocations as outputs. If you think that the “free market” should be left alone to determine the distribution of wealth and income, you may or may not be wrong. But you can’t pretend the welfare theorems offer any help to your case.

There is nothing controversial, I think, in any of what I’ve written. It is all orthodox economics. And yet, I suspect it comes off as very different from what many readers have learned (or taught). The standard introductory account of “market efficiency” is a parade of plain fallacies. It begins, where I began, with market supply and demand curves and “surplus”, then shows that market equilibria maximize surplus. But “surplus”, defined as willingness to pay or willingness to sell, is not commensurable between individuals. Maximizing market surplus is like comparing 2 miles against 12-feet-plus-32-millimeters, and claiming the latter is longer because 44 is bigger than 2. It is “smart” precisely in the Shel Silverstein sense. More sophisticated catechists then revert to a compensation principle, and claim that market surplus is coherent because it represents transfers that could have been made: the people whose willingness to pay is measured in miles could have paid off the people whose willingness to pay is measured in inches, leaving everybody better off. But, as we’ve seen, hypothetical compensation — the principle of “potential Pareto improvements” — does not define an ordering of outcomes. Even actual compensation fails to redeem the concept of surplus: the losers in an auction, paid off as compensation much more than they had been willing to pay for an item, might nonetheless be willing to return the full compensation plus their original bid to gain the item, if their original bid was bound by a hard budget constraint, or (more technically) did not reflect an interior solution to their constrained maximization problem. No use of surplus, consumer or producer, is coherent or meaningful if derived from market (rather than individual) supply or demand curves, unless strong assumptions are made about transactors’ preferences and endowments. The welfare theorems tell us that market allocations will not produce outcomes that are optimal for all distributions. If the distribution of wealth is undesirable, markets will misdirect capital and make poor decisions with respect to real resources even while they maximize perfectly meaningless “surplus”.
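The miles-versus-feet fallacy is literal arithmetic, so it can be checked literally. A trivial sketch, using standard unit-conversion factors:

```python
# The "44 beats 2" fallacy: summing magnitudes in incommensurable units.
# Converting to a common unit (meters) shows 2 miles dwarfs 12 ft + 32 mm.
MILE_M, FOOT_M, MM_M = 1609.344, 0.3048, 0.001

naive = 12 + 32                     # adding feet to millimeters: "44 units"
proper = 12 * FOOT_M + 32 * MM_M    # = 3.6896 meters
two_miles = 2 * MILE_M              # = 3218.688 meters

print(naive > 2)           # True  -- the meaningless comparison
print(proper > two_miles)  # False -- the meaningful one
```

Interpersonal surplus aggregation is the same move: each person's willingness to pay is denominated in a private, unconvertible unit, and adding the raw numbers proves nothing.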

So, is there a case for market allocation at all, for price systems and letting markets clear? Absolutely! The welfare theorems tell us that, if we get the distribution of wealth and income right, markets can solve the profoundly difficult problem of converting that distribution into unfathomable multitudes of production and consumption decisions. The real world is more complex than the math of welfare theorems, and “market failures” can muddy the waters, but that is still a great result. The good news in the welfare theorems is that markets are powerful tools if — but only if — the distribution is reasonable. There is no case whatsoever for market allocation in the absence of a good distribution. Alternative procedures might yield superior results to a bad Pareto optimum under lots of plausible notions of superior.

There are less formal cases for markets, and I don’t necessarily mean to dispute those. Markets are capable of performing the always contentious task of resource allocation with much less conflict than alternative schemes. Market allocation with tolerance of some measure of inequality seems to encourage technological development, rather than the mere technological choice foreseen by the welfare theorems. In some institutional contexts, market allocation may be less corruptible than other procedures. There are lots of reasons to like markets, but the virtue of markets cannot be disentangled from the virtue of the distributions to which they give effect. Bad distributions undermine the case for markets, or for letting markets clear, since price controls can be usefully redistributive.

How to think about “good” or “bad” distributions will be the topic of our final installment. But while we still have our diagrams up, let’s consider a quite different question, market legitimacy. Under what distributions will market allocation be widely supported and accepted, even if we’re not quite sure how to evaluate whether a distribution is “right”? Let’s conduct the following thought experiment. Suppose we have two allocation schemes, market and random. Market allocation will dutifully find the Pareto-efficient outcome consistent with our distribution. Random allocation will place us at an arbitrary point inside our feasible set of outcomes, with uniform probability of landing on any point. Under what distributions would agents in our economy prefer market to random allocation?

Let’s look at two extremes.

welfare4_fig5

In Panel 5a, we begin with a perfectly equal distribution. The red area delineates a region of feasible outcomes that would be superior to the market allocation from Kaldor’s perspective. The green area marks the region inferior to market allocation. The green area is much larger than the red area. Under equality, Kaldor strongly prefers market allocation to alternatives that tend to randomize outcomes. “Taking a flyer” is much more likely to hurt Kaldor than to help him.

In Panel 5b, Hicks is rich and Kaldor is poor under the market allocation. Now things are very different. The red region is much larger than the green. Throwing some uncertainty into the allocation process is much more likely to help Kaldor than to hurt. Kaldor will rationally prefer schemes that randomize outcomes over deterministic market allocation. He will prefer such schemes knowing full well that it is unlikely that a random allocation will be Pareto efficient. You can’t eat Pareto efficiency, and the only Pareto-efficient allocation on offer is one that’s worse for him than rolling the dice. If Kaldor is a rational economic actor, he will do his best to undermine and circumvent the market allocation process. Note that we are not (necessarily) talking about a revolution here. Kaldor may simply support policies like price ceilings, which tend to randomize who gets what amid oversubscribed offerings. He may support rent control and free parking, and oppose congestion pricing. He may prefer “fair” rationing of goods by government, even of goods that are rival, excludable, informationally transparent, and provoke no externalities. Kaldor’s behavior need not be taken as a comment on the virtue or absence of virtue of the distribution. It is what it is, a prediction of positive economics, rational maximizing.

Of course, if Kaldor alone is unhappy with market allocation, his hopes to randomize outcomes are unlikely to have much effect (unless he resorts to outright crime, which can be rendered costly by other channels). But in a democratic polity, market allocation might become unsupportable if, say, the median voter found himself in Kaldor’s position. Now we come to conjectures that we can try to quantify. How much inequality-not-entirely-in-his-interest would Kaldor tolerate before turning against markets? What level of wealth must the median voter have to prevent a democratic polity from working to circumvent and undermine market allocation?

Perfect equality is, of course, unnecessary. Figure 6, for example, shows an allocation in which Kaldor remains much poorer than Hicks, yet Kaldor continues to prefer the market allocation to a random outcome.

welfare4_fig6

We could easily compute from our diagram the threshold distribution below which Kaldor prefers random to market allocation, but that would be pointless since we don’t live in a two-person economy with a utility possibilities curve I just made up. With a little bit of math [very informal: pdf nb], we can show that for an economy of risk-neutral individuals with identical preferences under constant returns to scale, as the number of agents goes to infinity the threshold value beneath which random allocation is preferred to the market tends to about 69% of mean income. (Risk neutrality implies constant marginal utility, enabling us to map from utility to income.) That is, people in our simplified economy support markets as long as they can claim at least 69% of what they would enjoy under an equal distribution. This figure is biased upwards by the assumption of risk-neutrality, but it is biased downwards by the assumption of constant returns to scale. Obviously don’t take the number too seriously. There’s no reason to think that the magnitudes of the biases are comparable and offsetting, and in the real world people have diverse preferences. Still, it’s something to think about.
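The flavor of the computation can be sketched in the two-person case. The assumptions here are mine, not the informal pdf’s: a linear utility-possibility frontier x + y ≤ 1 (constant returns), a risk-neutral Kaldor, and random allocation drawn uniformly over the whole feasible triangle. Under those assumptions, the two-person threshold works out to about 2/3 of mean income, in the same neighborhood as the many-agent ~69% figure quoted above.

```python
# Monte Carlo sketch of the market-vs-random threshold, two-person case.
# Assumed model: feasible set is the triangle x + y <= 1, random allocation
# is uniform over that triangle, and Kaldor is risk-neutral.
import random

random.seed(0)
draws = []
while len(draws) < 200_000:
    x, y = random.random(), random.random()
    if x + y <= 1.0:          # rejection-sample the feasible triangle
        draws.append(x)       # Kaldor's payoff under random allocation

expected_random = sum(draws) / len(draws)  # converges to 1/3 of total output
mean_income = 0.5                          # equal split of total output 1

# A risk-neutral Kaldor prefers dice to markets when his market share
# falls below the expected random payoff:
threshold = expected_random / mean_income
print(round(threshold, 2))   # ≈ 0.67 of mean income
```

This is only an illustration of the mechanics; the limiting 69% result depends on the many-agent model in the linked note, not on this toy.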

According to the Current Population Survey, at the end of 2012, median US household income was 71.6% of mean income. But the Current Population Survey fails to include data about top incomes, and so its mean is an underestimate. The median US household likely earns well below 69% of the mean.

If it is in fact the case that the median voter is coming to rationally prefer random claims over market allocation, one way to support the political legitimacy of markets would be to compress the distribution, to reduce inequality. Another approach would be to diminish the weight in decision-making of lower-income voters, so that the median voter is no longer the “median influencer” whose preferences are reflected by the political system.


Note: There will be one more post in this series, but I won’t get to it for at least a week, and I’ve silenced commenters for way too long. Comments are (finally!) enabled. Thank you for your patience and forbearance.

Welfare economics: inequality, production, and technology (part 3 of a series)

This is the third part of a series. See parts 1, 2, 4, and 5.

Last time, we concluded that output cannot be measured independently of distribution, “the size of the proverbial pie in fact depends upon how you slice it.” That’s a clear enough idea, but the example that we used to get there may have seemed forced. We invented people with divergent circumstances and preferences, and had a policy decision rather than “the free market” slice up the pie.

Now we’ll consider a more natural case, although still unnaturally oversimplified. Imagine an economy in which only two goods are produced, loaves of bread and swimming pools. Figure 1 below shows a “production possibilities frontier” for our economy.

IPT-Bread-Pools-Fig-1

The yellow line represents locations of efficient production. Points A, B, C, D, and E, which sit upon that line, are “attainable”, and the production of no good can be increased without a corresponding decrease in the other good. Point Z is also attainable, but it is not efficient: by moving from Z to B or C, more of both goods could be made available. Assuming (as we generally have) that people prefer more goods to fewer (or that they have the option of “free disposal”), points B and C are plainly superior to point Z. However, from this diagram alone, there is no way to rank points A, B, C, D, and E. Is possibility A, which produces a lot of swimming pools but not so much bread, better or worse than possibility E, which bakes aplenty but builds pools just a few?

Under the usual (dangerous) assumptions of “base case” economics — perfect information, complete and competitive markets, no externalities — markets with profit-seeking firms will take us to somewhere on the production possibilities frontier. But precisely which point will depend upon the preferences of the people in our economy. How much bread do they require or desire? How much do they like to swim? How much do they value not having to share the pools that they swim in? Except in very special cases, which point will also depend upon the distribution of wealth among the people in our economy. Suppose that the poor value an additional loaf of bread much more than they value the option of privately swimming, while the rich have full bellies, and so allocate new wealth mostly towards personal swimming pools. Then if wealth is very concentrated, the market allocation will be dominated by the preferences of the wealthy, and we’ll end up at points A or B. If the distribution is more equal and few people are so sated they couldn’t do with more bread, we’ll find points D or E. All of the points represent potential market allocations — we needn’t posit any state or social planner to make the choice. But the choice will depend upon the wealth distribution.
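A crude way to see how the same total wealth can call forth different production is to give everyone a satiation point in bread. This is a sketch under my own simplifying assumption (not the post’s model): each person spends income on bread up to a fixed satiation level and everything beyond that on swimming pools.

```python
# Toy demand sketch for the bread-and-pools story. Assumption (mine):
# spending goes to bread up to a satiation level, then entirely to pools.
SATIATION = 10  # hypothetical bread spending at which a person is sated

def aggregate_demand(wealths):
    bread = sum(min(w, SATIATION) for w in wealths)
    pools = sum(max(w - SATIATION, 0) for w in wealths)
    return bread, pools

concentrated = aggregate_demand([91, 3, 3, 3])    # one rich, three poor
equalized    = aggregate_demand([25, 25, 25, 25]) # same total wealth, 100

print(concentrated)  # (19, 81): pool-heavy demand -- points A or B
print(equalized)     # (40, 60): bread-heavy demand -- points D or E
```

Same endowment, same preferences, same prices implicit in the toy: only the distribution differs, and with it the point on the production possibilities frontier that markets would select.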

Let’s try to understand this in terms of the diagrams we developed in the previous piece. We’ll contrast points A and E as representing different technologies. Don’t mistake this for different levels of technology. We are not talking about new scientific discoveries. By a “technology” we simply mean an arrangement of productive resources in the world. One technology might involve devoting a large share of productive resources to the construction of very efficient large-scale bakeries, while another might redirect those resources to the mining and mixing of the materials in concrete. Humans, whether via markets or other decision-making institutions, can choose either of these technologies without anyone having to invent things. (By happenstance, Paul Krugman drew precisely this distinction yesterday.)

Figure 2 shows a diagram of Technology A and Technology E in our two person (“Kaldor” and “Hicks”) economy.

IPT-Fig-2

The two technologies are not rankable independently of distribution. I hope that this is intuitive from the diagram, but if it is not, read the previous post and then persuade yourself that the two orange points in Figure 3 below are subject to “Scitovsky reversals”. One can move from either orange point to the other, and it would be possible to compensate the “loser” for the change in a way that would leave both parties better off. So, by the potential Pareto criterion, each point is superior to the other; there is no well-defined ordering.

IPT-Fig-3
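The reversal can be checked numerically with made-up crossing linear frontiers standing in for the two technologies (my numbers, not the post’s figure). A status quo point is “potential-Pareto dominated” by the other technology whenever it lies strictly inside that technology’s feasible set, since redistribution along the other frontier could then make both Kaldor and Hicks better off.

```python
# Numerical sketch of a Scitovsky reversal with two crossing linear
# utility-possibility frontiers a*k + b*h = 1 (illustrative numbers).

def inside(frontier, point):
    """Is (kaldor, hicks) strictly inside the frontier a*k + b*h <= 1?"""
    a, b = frontier
    k, h = point
    return a * k + b * h < 1

tech_A = (0.5, 1.0)   # frontier 0.5*k + h = 1
tech_E = (1.0, 0.5)   # frontier k + 0.5*h = 1 (crosses tech_A's frontier)

orange_a = (0.2, 0.9)  # a point on frontier A
orange_e = (0.9, 0.2)  # a point on frontier E

# Each orange point sits strictly inside the *other* technology's set:
print(inside(tech_E, orange_a))  # True: switching A -> E "could" help both
print(inside(tech_A, orange_e))  # True: switching E -> A "could" help both
```

Both switches pass the potential Pareto test, so the criterion endorses the move and then endorses undoing it: no ordering.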

In contrast to our previous example of an unrankable change, Kaldor and Hicks here have identical and very natural preferences. Both devote most of their income to bread when they are poor but shift their allocation towards swimming pool construction as they grow rich. As a result, both prefer Technology A when the distribution of wealth is lopsided (the light blue points), while both prefer Technology E (the yellow point) when the distribution is very equal. It’s intuitive, I think, that whoever is rich prefers swimming-pool-centric Technology A. What may be surprising is that, if the wealth distribution is held constant, the choice of technology is always unanimous. If Hicks is rich and Kaldor is poor, even Kaldor prefers Technology A, because his meager share of the pie includes claims on swimming pools that he can offer to The Man in exchange for disproportionate quantities of bread.

This is more obvious if we consider an extreme. Suppose there were a technology that produced all bread and no swimming pools under a very unequal wealth distribution. Then, putting aside complications like altruism, whoever is rich eats a surfeit of bread that provides almost no satisfaction, and perhaps even throws away a large excess. The poor have nothing but bread to trade for bread, so there is no trade. They are stuck with no way to expand the small meals they are endowed with. But, add some swimming pools to the economy and give the poor a pro rata share of everything (i.e. define the initial distribution in terms of money), then all of a sudden the poor have something that the rich value, which they can exchange for excess bread that the rich value not at all. The rich are willing to surrender a lot of (useless to them) bread in exchange for even small claims on the swimming pools that they really want. When things are very unequal, the benefit to the poor of having something to trade exceeds the cost of an economy whose aggregate production is not well matched with their consumption. Aggregate production goes to the rich; the poor are in the business of maximizing their crumbs.

So, which organization of resources, Technology A or Technology E, is “most efficient”, “maximizes the size of the pie”? There is no distribution-independent answer to that question. If the pie will be sliced up equally, then Technology E is superior. If the pie will be sliced up very unequally, then Technology A is superior. The size of the pie depends upon how you slice it, given very natural, very ordinary sorts of preferences. Patterns of resource utilization, of what gets produced and what does not, depend very much on the distribution of wealth within an economy. It’s not coherent to claim that economic arrangements are “more efficient” than they would be under some alternative distribution. If what you mean by “efficiency” is mere Pareto efficiency, there are Pareto-efficient outcomes consistent with any distribution. If you have a broader notion of economic efficiency in mind, then which arrangements are “most efficient” cannot be defined independently of the distribution of wealth.

I’ll end with a speculative thought experiment, about technological development. Remember, up until now, we’ve been considering alternative choices among already known technologies. Now let’s think about the relationship between distribution and the invention of new technologies. Consider Figure 4 below:

IPT-Fig-4

In our two-person economy, technological improvement shifts utility possibility curves outward, making it feasible for both individuals to increase their enjoyment without any tradeoff. In Figure 4, we have shown outward shifts from the two technologies that we considered above. Panel 4a shows incremental improvements on Technology A. Panel 4b shows incremental improvements on Technology E. Not all technological improvements are incremental, but most are, even most of what gets marketed as “revolutionary”. We assume, per the discussion above, that our economy chooses the distribution-dependent superior technology and iterates from that. We also assume that, absent political intervention, the deployment of new technology leaves the distribution of wealth pretty much unchanged. That may or may not be realistic, but it will serve as a useful base case for our thought experiment.

In both panels, after four iterative improvements, technological improvement dominates the choice of technologies in a rankable Kaldor-Hicks sense. After four rounds of technological change, regardless of which technology we started from, there is some distribution under the new technology that would be a Pareto improvement over any feasible distribution prior to the technological development. (My choice of four iterations is completely arbitrary; this is just an illustration.) If we assume that adoption of the new technology is accompanied by optimal social choice of distribution (however the “optimality” of that choice is defined), technological improvement quickly overwhelms the initial, distribution-dependent, choice of technology. A futurist, technoutopian view naturally follows: whatever sucks about now, technological change will undo it, overcome it.

But “optimal social choice of distribution” is a hard assumption to swallow. What if we suppose, more realistically, inertia — that there’s a great deal of status quo bias in distributive institutions, that the distribution after technology adoption remains similar to the distribution prior to it? Worse, but realistically, what if we imagine that distribution-preserving technological change and redistribution are perceived within political institutions as alternative means of addressing economically induced unhappiness and dissatisfaction, as substitutes rather than complements? Some voices hail “innovation” as the solution to problems like poverty and precarity, while other voices argue that redistribution, however contentious, represents a surer path.

Under what circumstances would distribution-preserving innovation dominate distributional conflict as a strategy for overcoming economic discontent? A straightforward criterion would be when technological change could yield outcomes better than any change in distributional arrangements or choice of status quo technologies. In Figure 4 (both panels), this dominant region is represented by the purple region northeast of the purple dashed lines.

Distribution-preserving innovation implies moving outward with technological change along the current “distribution ray”, represented by the red dashed line. Qualitatively, loosely, informally, the distance that one would have to travel along a distribution ray before intersecting with the dominant region is a measure of the plausibility of innovation as a universally acceptable alternative to distributional conflict. The shorter the distance from the status quo to the dominant technology region, the more attractive innovation, rather than distributional conflict, becomes for all parties. Conversely, if the distance from the status quo to a sure improvement is very long, one party is likely to find contesting distributive arrangements a more plausible strategy than supporting innovation.

In the right-hand panel of Figure 4, representing an equal current distribution, innovation along the distribution ray would pretty quickly reach the dominant region. Just a few more rounds than are shown and the yellow-dot status quo could travel along the red-dashed distribution ray to the purple promised land. But in the left-hand panel, where we start with a very unequal distribution, the distribution ray would not intersect the purple region for a long, long time, well beyond the top boundary of the figure. When the status quo is this unequal, innovation is unlikely to be a credible alternative to distributional conflict. In the limiting case of a perfectly unequal distribution, the distribution ray would sit at 90° (or 0°) and even infinite innovation would fail to intersect the redistribution-dominating region. For the status quo loser, no possible distribution-preserving innovation would be superior to contesting distributional arrangements.

For agents with similar preferences, more equal distributions will be “closer” to the dominant region for three reasons:

  • perfect equality is “minimax”, that is, it minimizes the maximum benefit achievable by either party from redistribution, reducing the attractiveness of distributive fights;
  • under equality, for a given level of technology, the choice among available technologies will fall closer (or at least as close) to the dominant region as under less equal distributions, giving iterations from that choice a head start;
  • the closest-in point of the dominant region (the point closest to the origin) sits on the equal-distribution ray; it is there that one finds the “lowest hanging fruit”. More unequal “distribution rays” point to ever more distant frontiers of the dominant region.

Note that there is a continuum, not a stark choice between perfectly equal and very unequal distributions. The more equal the distribution of wealth, the more attractive will be innovation as an alternative to distributive conflict. As the distribution of wealth becomes more unequal, distributive losers will come to perceive calls for innovation as a fig-leaf that distracts from a more contentious but superior strategy, while distributive winners will preach technoutopianism with ever greater fervor.

There’s lots to argue with in our little thought experiment. Technological change needn’t be distribution-preserving, innovation and redistribution needn’t be mutually exclusive priorities, the “distance” in our diagrams — in joint utility space along contours of technological change — may defy the Euclidean intuitions I’ve invited you to indulge. Nevertheless, I think there’s a consonance between our story and the current politics of technology and innovation. The best way to build a consensus in favor of innovation and technological development may be to address distributional issues that make cynics of potential enthusiasts.


Note: With continued apologies, comments remain closed until the completion of this series of posts on welfare economics. Please do write down your thoughts and save them! I think there will be two more posts, with comments finally open on the last.

Update History:

  • 2-Jul-2014, 4:25 a.m. PDT: replaced “contentions” with “contentious” in “other voices argue that redistribution, however contentious, represents a surer path.”

Welfare economics: the perils of Potential Pareto (part 2 of a series)

This is the second part of a series. See parts 1, 3, 4, and 5.

When economics tried to put itself on a scientific basis by recasting utility in strictly ordinal terms, it threatened to perfect itself to uselessness. Summations of utility or surplus were rendered incoherent. The discipline’s new pretension to science did not lead to reconsideration of its (unscientific) conflation of voluntary choice with welfare improvement. So it remained possible for economists to recommend policies that would allow some people to be made better off (in the sense that they would choose their new circumstance over the old), so long as no one was made worse off (no one would actively prefer the status quo ante). “Pareto improvements” remained defensible as welfare-improving. But, very little of what economists had previously understood to be good policy could be justified under so strict a criterion. Even the crown jewel of classical liberal economics, the Ricardian case for free trade, cannot meet the test. As John Hicks memorably put it, the caution implied by the new “economic positivism might easily become an excuse for the shirking of live issues, very conducive to the euthanasia of our science.”

Hicks, following Nicholas Kaldor and Harold Hotelling, thought he had a way out. Suppose there were an economy that, in isolation, could produce 50 bottles of wine and 40 bolts of cloth. If the borders were opened, the country would specialize in wine-making. Devoting its full capacity to the task, it would produce enough wine so as to be able to keep 60 bottles for domestic use, even while trading for a full 50 bolts of cloth. Under the presumption that people prefer more to less, “the economy” would clearly be made better off by opening the borders. There would be more wine and more cloth “to go ’round”. However, in practice, skilled cloth-makers would be impoverished by the change. They would be reemployed as menial grape-pickers, leading to a reduction of earnings so great that they’d have less cloth and less wine to consume, despite the increase in overall wealth. Opening the borders is not a Pareto improvement: the “pie” grows larger, but some people are made badly worse off. So, on what basis might a “scientific” economist recommend the policy?
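The arithmetic of the example is easy to lay out. The capacity and price figures here are my own fill-ins, chosen only to reproduce the post’s 60 bottles kept and 50 bolts imported; the post itself specifies neither.

```python
# Arithmetic of the wine-and-cloth example. Hypothetical assumptions:
# full specialization yields 100 bottles of wine, and the world price
# is 0.8 bottles of wine per bolt of cloth.
autarky = {"wine": 50, "cloth": 40}

capacity_wine = 100      # hypothetical full-specialization output
world_price = 0.8        # hypothetical bottles of wine per bolt of cloth
cloth_imported = 50
wine_exported = cloth_imported * world_price   # 40 bottles shipped abroad

open_borders = {"wine": capacity_wine - wine_exported,  # 60 bottles kept
                "cloth": cloth_imported}                # 50 bolts imported

print(all(open_borders[g] > autarky[g] for g in autarky))  # True
```

In aggregate the open economy has more of both goods, which is precisely why the distributional sting (the impoverished cloth-makers) is invisible at this level of accounting.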

The insight that Kaldor, Hicks, and Hotelling brought to the problem is simple. Opening the borders represents a potential Pareto improvement, if we imagine that those who benefit from the change compensate those who lose out. In our example, since the total quantities of wine and cloth available are greater with free trade than without, there must be some way of distributing the bounty that leaves everyone at least as well off as they were before, and some better off. Economists could, in good conscience, argue for policies that would be Pareto improvements, if they were bundled with some redistribution, regardless of whether or not the redistribution would, in the event, actually happen. Such a change is now said to be “Kaldor-Hicks efficient”, or, more straightforwardly, a “Potential Pareto improvement”.

At first blush, this sounds dumb. Nobody harmed by a change can eat a “potential” Pareto improvement. But there is, nonetheless, a case to be made for the criterion. The distribution of scarce goods and services is inherently a question of competing values. But quantities of goods are objective and measurable. So a “scientific” economics could concern itself with “efficiency” — maximizing objective economic output, while the distribution of that output and concerns about “equity” could be left to the political institutions that adjudicate competing values. An activity that could leave everybody with all the goods and services they might otherwise have while providing some people with even more necessarily implies an increase in the quantity of goods and services made available, and is objectively superior on efficiency grounds. If those goods and services get distributed poorly, that may be a terrible problem. But it represents a failure of politics, one outside the scope of a scientific economics. Let economics concern itself with the objective problem of maximizing output, and remain silent on the inherently political question of how output should be distributed.

This might be a clever answer to the threat of the “euthanasia of our science”, but it is incoherent as the basis for a welfare economics. In reality, economic output cannot be objectively measured. The quantity of corn or cars or manicures produced can be counted. An action that increases the availability of all goods, actual and potential, might be pronounced an objective increase in the size of the economy. But most economic activities provoke tradeoffs in production: more of something gets produced, while less of something else does. There is no way to determine whether such an event represents an increase or decrease in the size of the economy without making interpersonal comparisons of value. Dollar values can’t be used in place of goods and services unless the dollars actually change hands, prices change to reflect the new patterns of wealth and production, and all parties consent that their new situation is superior to the old. When there are trade-offs made in patterns of production, only an actual Pareto improvement counts as an objective increase in the size of an economy.
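The partial nature of this ranking can be made mechanical. The sketch below (with hypothetical quantities of my own) scores a change as an objective increase only in the special case described above: no good shrinks and some good grows, the production analog of a Pareto improvement.

```python
# Sketch: goods vectors admit only a partial, distribution-free ranking.
def objectively_bigger(old, new):
    """True only when no good shrinks and some good grows --
    the production analog of a Pareto improvement."""
    return (all(new[g] >= old[g] for g in old)
            and any(new[g] > old[g] for g in old))

# The free-trade example: more of both goods, so rankable.
assert objectively_bigger({"wine": 50, "cloth": 40},
                          {"wine": 60, "cloth": 50})

# A typical tradeoff -- more wine, less bourbon -- cannot be ranked
# in either direction without interpersonal comparisons of value:
assert not objectively_bigger({"wine": 50, "bourbon": 40},
                              {"wine": 70, "bourbon": 30})
assert not objectively_bigger({"wine": 70, "bourbon": 30},
                              {"wine": 50, "bourbon": 40})
```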

Tibor de Scitovsky demonstrated very elegantly the incoherence of Kaldor-Hicks efficiency in a world with multiple goods. I’m going to present the argument in detail, stealing a pedagogical trick from Matthew Adler and Eric Posner, but adding my own overdone diagrams.

Let’s start charitably. Figure 1 shows some pictures of the special case that might be scored as an objective increase in efficiency:

[Figure 1: WellOrderedComic]

We have an economy of two people, Nicholas Kaldor and John Hicks. In Panel 1, the bright green curve represents a “utility possibilities curve”. For each point on the curve, the x value represents “how much utility” Kaldor enjoys while the y value represents how much Hicks enjoys. Utility is strictly ordinal, so the axes are unlabeled, and the exact shapes are meaningless. You could stretch or squeeze the diagram as much as you like, rescale it to any aspect ratio, and nothing would change. Any transformation that preserves the x- and y-ordering of things is fine.

At a given time, the economy is represented by a point on the curve. Each location reflects a different distribution of economic output. The point where the curve intersects the y-axis represents an economy in which Hicks gets literally all of the goods, while Kaldor dies starving. As we rotate clockwise along the curve, Hicks gets less and less, while Kaldor gets more and more. Again, the exact shape is meaningless. All we can tell is that, as control over economic output shifts, Hicks’ utility declines while Kaldor’s rises. Finally we reach the x-axis, where it is Hicks who starves while Kaldor feasts. At the moment, the economy sits at the yellow point marked “status quo”.

A distribution can be summarized by the angle marked θ in Panel 1. When θ is 0°, Kaldor owns the whole economy. When θ is 90°, Hicks owns everything. We can locate Kaldor’s and Hicks’ satisfaction under any distribution by following the “distribution ray” to the utility possibilities curves. [1]

In Panel 2, a policy change is proposed. It might be deployment of a new technology, or construction of high-return infrastructure. But let’s imagine that it is trade liberalization under circumstances where Ricardian comparative advantage logic unproblematically holds.

It turns out that John Hicks is a skilled cloth-maker. That’s how he earns an honest living. If trade were liberalized, textile manufacture would be outsourced, and he would be out of a job. Nicholas Kaldor, on the other hand, owns acres and acres of vineyards. His real income would dramatically increase, as cloth would grow cheaper and the market for his wine would expand. If the borders were simply thrown open, the economy would end up at the position marked “Uncompensated Project” in Panel 2. Trade liberalization is not Pareto improving. As you can see, relative to the status quo, we shift rightwards (Kaldor benefits big time!) but also downwards (Hicks loses) if the project is implemented without compensating redistribution. Can we state, as a matter of objective science rather than value judgment, that trade liberalization would represent an efficiency improvement?

Kaldor, Hicks, and Hotelling ask us to perform a thought experiment represented on Panel 3. Suppose that we did throw open the borders. We’d be thrust along the yellow arrow from the current status quo to the new “uncompensated project” point. Would it be possible to redistribute along the new utility possibilities frontier in a way that would render the policy-change-plus-redistribution a Pareto improvement, a boon both for Kaldor and for Hicks? The existence of the purple region, above and to the right of our original status quo, shows that it is indeed possible. Our trade liberalization is a “potential Pareto improvement”, and should be scored by economists as an objective efficiency gain, regardless of whether or not the political institutions that adjudicate rival claims actually impose compensation. Political institutions might not compensate Hicks at all, leaving him where he lands in Panel 3. Or they might compensate only partially, as in Panel 4. Maybe it is best to retain market incentives for fogies like Hicks to anticipate change and learn new skills. Maybe the resentment that would be provoked by full compensation overwhelms the benefit of making Hicks whole. Maybe there is no good reason, but the political system is plagued by inertia and so fails to compensate. Or maybe Kaldor has bought the politicians with his good wine. Those are questions beyond the scope of economic science. Nevertheless, say Kaldor, Hotelling, even penurious Hicks, we can objectively declare the proposed policy an efficiency improvement. If poor Hicks starves when all is said and done, well, that will be the fault of the politicians. Or perhaps it will be optimal. As economists, we really can’t say. Incomparable subjectivities are involved.

I have to admit to feeling queasy about this, like a surgeon who opens the chest of an awake screaming patient and then blames the anesthesiologist for sleeping in. But this is the procedure Kaldor and Hicks propose for us. (Hotelling, to his great credit, admits the possibility that imperfect politics might imply revision of his economic prescriptions.) But we’ll put our reservations aside for now, and declare this policy change an “efficiency increase”, distinct and separable from distributional concerns.

Now let’s examine a different project. Hicks has abandoned his cloth-making (a folly of youth!) and has entered a respectable profession, bourbon distilling. Kaldor, never a fool, has stuck with his wine-making.

Here is the thing, though. Each gentleman has come to despise the good he himself produces. The grapes stain Kaldor’s fingers, his clothes, his bare soles. Hicks is plagued by the smell of corn mash and the weight of oak barrels. If Hicks were a rich man, he’d never look at a bottle of bourbon. He’d sip wine like a gentleman. If Kaldor were a rich man, he would drown the nightmares (out, out, damned wine stain!) in a bottle of whiskey.

In Panel 1 of Figure 2, we start very much like before. Kaldor and Hicks ply their trades, they get what they get, represented in joint utility terms by the yellow-dot status quo.

[Figure 2: Scitovsky-Comic]

In Panel 2, a rezoning of some land is considered, which would prevent “industrial agriculture” on acreage currently devoted to the growing of corn. There’d be nothing for this land but to transition it to bucolic vineyards. Both of our protagonists are ambivalent about the proposal. In his role as producer, Kaldor finds the rezoning great for business. Hicks would have to sell the land for a song, enabling more and cheaper wine production. But the rezoning would shift the composition of output in a manner opposed to Kaldor’s consumption preferences. If Kaldor could be made rich in some manner independent of the proposed change — if we drew a “distribution ray” in Panel 2 at 0° signifying Kaldor’s complete ownership of output — Kaldor would strongly prefer the status quo and the abundant bourbon it produces to the proposed repurposing of land for wine. Conversely, the businessman in Hicks hates the proposal: selling out to Kaldor for a song would really sting! But the wine-lover in Hicks would be delighted, if only he’d be rich enough to afford the wine. If the “distribution ray” were at 90° — if Hicks were very rich — he’d strongly prefer that the land be rezoned!

So, can economic science tell us whether the rezoning is efficient? According to Messrs. Kaldor, Hicks, and Hotelling (when they dabble at economics), the proposal is efficient. In Panel 3, you can see that, subsequent to the rezoning, it would be possible to redistribute output in a manner that would leave both parties better off than the status quo, exactly as in Panel 3 of Figure 1 above! The change would survive any cost-benefit analysis.

But. Here comes Mr. Scitovsky, who is a real sourpuss. He points out (Panel 4) that, subsequent to the rezoning, analysis under the very same criterion would declare a reversal of the rezoning efficient! Does it make sense to declare the rezoning an “increase in economic efficiency” and then to declare the undoing another increase in economic efficiency? I have an idea: Get the zoning authority to re-re-re-re-re-re-rezone the land. We’ll have so many economic efficiency increases, all scarcity will be vanquished!
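The reversal can be made concrete with a toy calculation. Everything below is my own invention for illustration, not taken from the post’s diagrams: two crossing linear utility-possibility frontiers (the numbers are arbitrary ordinal labels, chosen only so that the frontiers cross), and a brute-force scan standing in for “does some redistribution dominate this point?”.

```python
# Hypothetical frontiers, chosen only so that they cross:
#   status quo (bourbon):  u_K + 2*u_H = 10
#   rezoned (wine):      2*u_K +   u_H = 10
def frontier_sq(u_k):
    return (10 - u_k) / 2

def frontier_rz(u_k):
    return 10 - 2 * u_k

def potential_pareto(point, frontier):
    """Kaldor-Hicks test: does any distribution on `frontier` leave
    both parties at least as well off as `point`, one strictly better?
    (Checked on a grid, which suffices for these linear frontiers.)"""
    u_k0, u_h0 = point
    for i in range(501):
        u_k = i / 100.0
        u_h = frontier(u_k)
        if u_k >= u_k0 and u_h >= u_h0 and (u_k, u_h) != (u_k0, u_h0):
            return True
    return False

status_quo = (2.0, 4.0)        # a point on the status-quo frontier
after_rezoning = (4.0, 2.0)    # uncompensated outcome on the rezoned frontier

assert potential_pareto(status_quo, frontier_rz)      # the rezoning "passes"...
assert potential_pareto(after_rezoning, frontier_sq)  # ...and so does undoing it
```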

Or not. What Scitovsky showed, quite definitively, is that the Potential Pareto criterion is incoherent as a measure of economic efficiency. It just doesn’t work. In a fallen world, it may in practice be used to evaluate potential changes, just as in a fallen world interpersonal comparisons of utility are used to evaluate changes. Both are equally (un)scientific under the axioms of liberal economics. Scitovsky proved that, in general, it is simply not possible to score the efficiency of a change without taking into account effects both on output and on distribution. The two are not independent, except in the special case illustrated by Figure 1.

Scitovsky didn’t think he was destroying the Potential Pareto criterion entirely. He pointed out that, for some distributions, reversals are not possible. Panel 5 of Figure 2 divides the utility possibilities frontier after the proposed change into distributions that are Pareto-improving (which implies making actual, full compensation for the change), into regions that are reversible and therefore not rankable as efficiency improvements, and into regions that are Potential Pareto but not Pareto and still irreversible. Scitovsky thought that changes that led to these distributions might still be scored as efficiency increasing under Kaldor-Hicks-Hotelling logic. It took subsequent work to show that, no, even these irreversible regions aren’t safe. (See Blackorby and Donaldson for a mathematical review.) Scitovsky’s proposed modification of the Kaldor-Hicks criterion is intransitive, permitting cycles if more than two projects are compared. Project A can be “more efficient” than the status quo, Project B can be “more efficient” than Project A, but the status quo can be “more efficient” than Project B. Hmm. Panel 6 of Figure 2 shows an example. I won’t go through it in detail, but if you’ve understood the diagrams, you should be able to persuade yourself that 1) each transition is both Kaldor-Hicks efficient and irreversible; 2) there is no coherent efficiency ordering between them.

While it is impossible to rank alternatives at arbitrary distributions, it is possible to rank projects if we fix a distribution. In Figure 2, Panel 2, extend a “distribution ray” outward from the origin at any angle. The outermost project is preferred. At a slight angle, when Kaldor enjoys most of the output, the bourbon-producing status quo is preferable. At a steep angle, when it is Hicks who will do most of the consuming, the wine-drenched rezoning is preferable. There is some distribution where both Kaldor and Hicks would be indifferent to the proposed rezoning, where the curves cross.
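Fixing the distribution, the comparison becomes computable. The sketch below uses two hypothetical linear frontiers of my own (not the post’s diagrams) and finds, by bisection, how far out each frontier lies along a ray at angle θ; the outermost frontier at that θ is the preferred project, and which one that is flips as the ray steepens.

```python
import math

# Two hypothetical linear frontiers (u_H as a function of u_K),
# crossing at u_K = u_H = 10/3:
def status_quo(u_k):       # bourbon-producing arrangement
    return (10 - u_k) / 2

def rezoned(u_k):          # wine-producing arrangement
    return 10 - 2 * u_k

def radius_along_ray(theta_degrees, frontier):
    """Distance from the origin to the frontier along a ray at theta
    (0 degrees = Kaldor owns everything, 90 = Hicks does), by bisection."""
    t = math.radians(theta_degrees)
    lo, hi = 0.0, 100.0
    for _ in range(60):
        mid = (lo + hi) / 2
        x, y = mid * math.cos(t), mid * math.sin(t)
        if y < frontier(x):
            lo = mid       # still inside the frontier
        else:
            hi = mid
    return (lo + hi) / 2

# At a shallow angle (Kaldor consumes most), the status quo wins;
# at a steep angle (Hicks consumes most), the rezoning wins:
assert radius_along_ray(15.0, status_quo) > radius_along_ray(15.0, rezoned)
assert radius_along_ray(75.0, rezoned) > radius_along_ray(75.0, status_quo)
```

At the angle where the frontiers cross (45° for these symmetric made-up curves), the two projects tie, which is the indifference point described above.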

Given the rather elaborate story we told to rationalize the shape of the curves in Figure 2, you might wonder whether we might rescue a “scientific” efficiency from value-laden distributional concerns by suggesting that these “reversals” and “intransitivities” are rare, pathological cases that can in practice be ignored. They are not. We will encounter a simpler example soon. The likelihood that these sorts of issues arise increases with the number of people and goods in an economy, unless you restrict the form of people’s utility functions unrealistically. Allowing for (nearly) unrestricted preferences (people are assumed always to prefer more goods to less or to have the option of “free disposal”), the only projects that can be ranked independently of distribution are those that increase the quantities of some goods and services without any cost in availability of other goods or services, an analog to Pareto efficiency in the sphere of production.

As one economist put it:

The only concrete form that has been proposed for [a social welfare function grounded in ordinal utilities] is the compensation principle developed by Hotelling. Suppose the current situation is to be compared with another possible situation. Each individual is asked how much he is willing to pay to change to the new situation; negative amounts mean that the individual demands compensation for the change. The possible situation is said to be better than the current one if the algebraic sum of all the amounts offered is positive. Unfortunately, as pointed out by T. de Scitovsky, it may well happen that situation B may be preferred to situation A when A is the current situation, while A may be preferred to B when B is the current situation.

Thus, the compensation principle does not provide a true ordering of social decisions. It is the purpose of this note to show that this phenomenon is very general.

That economist was Kenneth Arrow. “This note”, circulated at The Rand Corporation, was the first draft of what later became known as Arrow’s Impossibility Theorem.

It is not, actually, an obscure result, this impossibility of separating “efficiency” from distribution. The only place you will not find it is in most introductory economics textbooks, which describe an “equity” / “efficiency” trade-off without pointing out that the size of the proverbial pie in fact depends upon how you slice it.

I wonder why that is missing.


Note: This was the second of a series of posts on welfare economics. The first was here. With apologies, I’m disabling comments until the end of the series, so I can get through my little plan untempted by the brilliant and enticing diversions that I know commenters would offer. Please do write down your comments, and save them for the final post in the series. I thought this would go faster; I feel very guilty for leaving no forum for responses for so long. I really am sorry about that!


[1] Because the scales are arbitrary, the numerical values of θ between 0° and 90° are also arbitrary. Each angle represents a concrete distribution, but the number associated with the angle depends on how we draw the diagram. Despite that, we will find θ to be meaningful in its ordering when we draw comparisons between arrangements and policies. We will find that, once we fix a representation of the utilities possibilities curves, there are regions of θ representing distributions of wealth over which one policy is superior, regions over which another policy is superior, and points at which Kaldor and Hicks would be indifferent to the alternatives. The ordering of these regions will be conserved, even though the numerical values of θ associated with them will not be. Keep reading!

Update History:

  • 5-Jun-2014, 10:45 a.m. PDT: “known as the Arrow’s Impossibility Theorem”
  • 6-Jun-2014, 12:30 p.m. PDT: “these ‘reversals’ and ‘intransitivities’ represent are rare, pathological cases that can in practice be ignored. They cannot be are not.”
  • 2-Jan-2016, 2:05 p.m. PST: Some fixes: “counterclockwise clockwise“; add footnote [1] re the arbitrariness of θ values; “He pointed out that, for some distributions, reversals are not possible.”; “Note that wWhile it is impossible”

Welfare economics: an introduction (part 1 of a series)

This is the first part of a series. See parts 2, 3, 4, and 5.

Commenters at interfluidity are usually much smarter than the author whose pieces they scribble beneath, and the previous post was no exception. But there were (I think) some pretty serious misconceptions in the comment thread, so I thought I’d give a bit of a primer on “welfare economics”, as I understand the subject. It looks like this will go long. I’ll turn it into a series.

Utility, welfare, and efficiency

Our first concern will be a question of definitions. What is the difference between, and the relationship of, “welfare” and “utility”? The two terms sound similar, and seem often to be used in similar ways. But the difference between them is stark and important.

“Utility” is a construct of descriptive or “positive” economics. The classical tradition asserts that economic behavior can be usefully described and predicted by imagining economic agents who rank the consequences of possible actions and choose the action associated with the highest-ranking. Utility, strictly speaking, has nothing whatsoever to do with well-being. It is simply a modeling construct that (it is hoped) helps organize and describe observed behavior. To claim that “people value utility” is a claim very similar to “nature abhors a vacuum”. It’s a useful way of putting things, but nature’s abhorrence is not meant to signal an actual discomfort demanding remedy in an ethical sense. Subjective well-being, of an individual human or of the universe at large, is simply not a topic amenable to empirical science. By hypothesis, human agents “strive” to maximize utility, just as molecules “strive” to find lower-energy states over the course of a chemical reaction. Utility is important not as a desideratum of scientifically inaccessible minds, but as a tool invented by economists, a technique for describing and modeling human behavior that may (or may not!) turn out to be useful.

“Welfare” is a construct of normative economics. While “utility” is a thing we imagine economic agents maximize, “welfare” is what economists seek to maximize when they offer policy advice. There is no such thing as, and can be no such thing as, a “scientific welfare economics”, although the discipline is still burdened by a failed and incoherent attempt to pretend to one. Whenever a claim about “welfare” is asserted, assumptions regarding ethical value are necessarily invoked as well. If you believe otherwise, you have been swindled.

If claims about welfare can’t be asserted in a value-neutral way, then neither can claims of “efficiency”. Greg Mankiw teaches that “[under] free markets…[transactors] are together led by an invisible hand to an equilibrium that maximizes total benefit to buyers and sellers”. That assertion becomes completely insupportable. Even the narrow and technical notion of Pareto efficiency, often omitted from undergraduate treatments, is rendered problematic, as nonmarket allocations can also be Pareto efficient and value-neutral ranking of allocations becomes impossible. Welfare economics is the very heart of introductory economics. Market efficiency, deadweight loss, tax incidence, price discrimination, international trade — all of these topics are diagrammed and understood in terms of what happens to the area between supply and demand curves. If we cannot redeem those diagrams, all of that becomes little more than propaganda. (We’ll think later on about how we might redeem them!)

The prehistory of a problem

The term “utility” is associated with Jeremy Bentham’s “utilitarianism”, which sought to provide “the greatest good for the greatest number”. Prior to the 20th Century, utility was an intuitive quantifier of this “goodness”. It represented a cardinal quantity — 15 Utils is better than 10 Utils, and we could think about comparing and summing Utils enjoyed by multiple people. Classical utilitarianism made no distinction between utility and welfare. Individuals were hypothesized to maximize something that could be understood as “well-being” in a moral sense, and this well-being was at least in theory quantifiable and comparable across individuals. “Maximizing aggregate utility” and “maximizing social welfare” amounted to the same thing. Utility had a meaningful magnitude; it represented an amount of something, even if that something was as unobservable as the free energy in a chemist’s flask.

The 20th Century saw an attempt to “scientificize” economics. The core choice associated with this scientificization was a decision to reconceive of utility as strictly “ordinal”. A posited value for utility was to serve as a tool for ranking of potential actions, significant only by virtue of whether it was greater than or less than some other value, with no meaning whatsoever attached to the distance between. If an agent must choose between a chocolate bar and a banana, and reliably goes for the Ghirardelli, then it is equivalent to attribute 3 Utils or 300 Utils to the candy, as long as we have attributed less than 3 Utils to the banana. The ordering alone determines agents’ choices. Any values that preserve the ordering are identical in their implications and their accuracy.
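The point about ordinal equivalence can be sketched in a few lines: any order-preserving relabeling of the utility numbers predicts exactly the same choices. (The particular transform below is arbitrary, my own choice for illustration.)

```python
utils = {"chocolate": 3, "banana": 1}

def choose(u):
    # the agent picks the highest-ranked alternative
    return max(u, key=u.get)

def relabel(u, f):
    # apply any strictly increasing transform f to the utility numbers
    return {good: f(x) for good, x in u.items()}

# Stretch the scale wildly; the ordering, and hence the choice, survives.
stretched = relabel(utils, lambda x: 100 * x ** 3)   # 3 -> 2700, 1 -> 100
assert choose(utils) == choose(stretched) == "chocolate"
```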

There is nothing inherently more scientific about using an ordinal rather than a cardinal quantity to describe an economic construct. Chemists’ free energy, or the force required to maintain the pressure differential of a vacuum, are cardinal measures of constructs as invisible as utility and with a much stronger claim to validity as “science”.

The reconceptualization of utility in strictly ordinal terms represented a contestable methodological choice. It carries within it a substantive assertion that the only useful measure of preference intensity is a ranking of alternatives. If one person claims to be near indifferent between the banana and the chocolate, but reliably chooses the chocolate, while another person claims to love chocolate and hate bananas, economic methodology declares the two equivalent and the verbal distinction of value (or observable differences in heart rates or skin tone or whatever else may accompany the choice) unworthy or not useful to measure. It could be the case, for example, that a cardinal measure of preference intensity based on heart rates and brainwaves would predict behavior more effectively than a strictly ordinal measure (just as measuring the heat generated by a chemical reaction provides information useful in addition to the fact that the reaction does occur). But, wisely or not (I’m agnostic on the point), economists of the early 20th Century decided that mere rankings of choices offered a sufficient, elegant, and straightforwardly measurable basis for a scientific economics and that subjective or objective covariates that might be interpreted as intensity were best discarded. (Perhaps this will change with some “neuroeconomics”. Most likely not.)

An entirely useful and salutary effect of the reconceptualization was that it forced a distinction, blurred in traditional utilitarianism, between positive and normative conceptions of utility, or in the language now used, between “utility” and “welfare”. It rendered this distinction particularly obvious with respect to notions of aggregate welfare or utility. Ordinal values can’t meaningfully be summed. If we attach the value 3 utils to one individual’s chocolate bar and 300 utils to another’s, these numbers are arbitrary, and it does not follow that giving the candy to the second person will “improve overall well-being” any more than giving it to the first would. A scientific economics whose empirical data are “revealed preferences” — which, among multiple alternatives, does an individual choose? — has nothing analogous to measure with respect to the question of group choice. Given one chocolate bar and two individuals, the “revealed preference” of the group might be determined by which has the stronger fist, a characteristic that seems conceptually distinct from the unobservable determinants of action within an individual.

However, it is an error, and quite a grievous one, to interpret (as a commenter did) this limited use of “revealed preference” as a predictor of group behavior as an “ethical principle” of welfare economics. Strictly speaking, when we are talking about utility, there are no ethical principles whatsoever, just observations and predictions. Even within one individual, even when we can observe that an individual reliably chooses chocolate bars over bananas, it does not follow as ethical matter that supplying the chocolate in preference to the fruit improves well-being.

Within a single individual, to jump from utility to welfare, to equate satisfying a “preference” that is epistemologically equivalent to nature’s abhorrence of vacuum with improving an individual’s well-being in a morally relevant way requires a categorical leap, out of the realm of “scientific economics” and into what might be referred to as “liberal economics”. It is philosophical liberalism, associated with writers like John Stuart Mill and John Locke, that bridges the gap between observations about how people behave when faced with alternatives and “well being” in a morally relevant sense. The liberal conflation of revealed preference with well-being is deeply contestable and much contested, for obvious reasons. Should we attach moral force to the choice of a chocolate bar over a banana, even under circumstances where the choice seems straightforwardly destructive of the chooser’s health? Philosophical liberalism depends on a mix of a priori assumptions about the virtue of freedom and consequentialist claims about “least bad” outcomes given diverse preferences (in a subjective and morally important sense, rather than as a scientist’s shorthand for morally neutral observed or predicted behavior).

I don’t wish to contest philosophical liberalism (I am mostly a liberal myself), just to point out that it is contestable and not remotely “scientific”. However, philosophical liberalism permits a coherent recasting of value-neutral “scientific” economics into a normative welfare economics, but only at the level of the individual. Liberal economics permits us to interpret the preference maximization process summarized by increased utility rankings as welfare maximization in a moral sense. A liberal economist can assert that a person’s welfare is increased by trading a banana for a chocolate bar, if she would do so when given the option. She can even try to overcome the strictly ordinal nature of utility and uncover a morally meaningful preference intensity by, say, bundling the banana with some US dollars and asking how many dollars would be required to persuade her to stick with the banana. There are a variety of such cardinal measures of welfare, which go under names like “compensating variation” (very loosely, how much a person would pay to get the chocolate rather than the banana) and “equivalent variation” (how much you’d have to pay the person to keep the banana, again loosely). However, what all of these measures have in common is that they are only valid within the context of a single individual making the choice. Scientifico-liberal economics simply has no tools for ranking outcomes across individuals, and the dollar value preference intensities that might be measurable for one individual are not commensurable with the dollar values that might be measured for someone else unless one imagines that those dollars actually change hands.
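Here is a toy sketch of those two dollar measures for a single chooser. I assume (my own simplification, not in the text) quasilinear utility u = v(good) + dollars, a special case in which compensating and equivalent variation happen to coincide; with income effects they generally differ.

```python
# Assumed quasilinear utility: u = v(good) + dollars.
v = {"banana": 1.0, "chocolate": 3.0}

def utility(good, dollars):
    return v[good] + dollars

# Compensating variation: the most she would pay to swap the banana
# for the chocolate, i.e. the payment leaving her indifferent.
cv = v["chocolate"] - v["banana"]
assert utility("chocolate", -cv) == utility("banana", 0.0)

# Equivalent variation: the payment that makes keeping the banana
# exactly as good as getting the chocolate for free.
ev = v["chocolate"] - v["banana"]
assert utility("banana", ev) == utility("chocolate", 0.0)

# Both are dollar measures for THIS chooser only; nothing licenses
# comparing them with anyone else's unless dollars actually move.
```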

Aha! So what if we imagine the dollars actually do change hands? Could that serve as the basis for a scientifico-liberal interpersonal welfare economics? In a project most famously associated with John Hicks and Nicholas Kaldor, economists strove to claim that, yes, it could! They were mistaken, irredeemably I think, although most of the discipline seems not to have noticed. The textbooks continue to present deeply problematic normative claims as scientific and indisputable. (See the previous post, and more to follow!)

But before we part, let’s think a bit about what it would mean if we find that we have little basis for interpersonal welfare comparisons. Or more precisely, let’s think about what it does not mean. To claim that we have little basis for judging whether taking a slice of bread from one person and giving it to another “improves aggregate welfare” is very different from claiming that it cannot or does not improve aggregate welfare. The latter claim is as “unscientific” as the former. One can try to dress a confession of ignorance in normative garb and advocate some kind of precautionary principle, primum non nocere in the face of an absence of evidence. But strict precautionary principles are not followed even in medicine, and are celebrated much less in economics. They are defensible only in the rare and special circumstance where the costs of an error are so catastrophic that near perfect certainty is required before plausibly beneficial actions are attempted. If the best “scientific” economics can do is say nothing about interpersonal welfare comparison, that is neither evidence for nor evidence against policies which, like all nontrivial policies, benefit some and harm others, including policies of outright redistribution.

I do actually think we can do a bit better than plead ignorance, but for that you’ll have to wait, breathlessly I hope, until the end of our series.


Note: Unusually, and with apologies, I’ve disabled comments on this post. This is the first of a series of planned posts. I wish to write the full series, and I don’t have the discipline not to be deflected by your excellent responses. The final post in the series will have comments enabled. Please write down your thoughts and save them for just a few days!

Update History:

  • 30-May-2014, 2:25 p.m. PDT: “that is epistemologically equivalent to natures nature’s abhorrence”, “just to point out that it is deeply contestable and not remotely”
  • 31-May-2014, 3:40 a.m. PDT: “tool invented by economists, a as technique”
  • 2-Jun-2014, 3:50 p.m. PDT: “rather than as the a scientist’s shorthand”, “value-neutral “scientific” economic economics”
  • 5-Jun-2014, 6:55 p.m. PDT: “some pretty serious misconception misconceptions

Should markets clear?

David Glasner has a great line:

[A]s much as macroeconomics may require microfoundations, microeconomics requires macrofoundations, perhaps even more so.

Macroeconomics is where all the booming controversies lie. Some economists like to argue that the field has an undeservedly bad reputation because the part that “just works”, microeconomics, has such a low profile. That view is mistaken. Microeconomic analysis, whenever it escapes the elegance of theorem and proof and is applied to the actual world, always makes assumptions about the macroeconomy. One assumption microeconomists frequently forget they are making is an assumption of rough distributional equality. Once that goes away, even basic conclusions like “markets should clear” go away as well.

The diagrams above should be familiar to you if you’ve had an introductory economics course. The top graph shows supply and demand curves, with an equilibrium where they meet. At the equilibrium price where quantity supplied is equal to quantity demanded, markets are said to “clear”. The bottom two diagrams show “pathological” cases where prices are fixed off-equilibrium, leading to (misleadingly named) “shortage” or “glut”.

We’ll leave unchallenged (although it is a thing one can challenge) the coherence of the supply-demand curve framework, and the presumption that supply curves slope upward and demand curves slope downward. So we can note, as most economists would, that the equilibrium price is the one that maximizes the quantity exchanged. Since a trade requires a willing buyer and a willing seller, the quantity sold is the minimum of quantity supplied and quantity demanded, which will always be highest where the curves meet.

But the goal of market exchange is to maximize welfare, not to generate trade for the sheer churn of it. In order to make the case that the market-clearing price maximizes well-being as well as trade, your introductory economics professor introduced the concept of surplus, represented by the shaded regions in the diagram. The light blue “consumer surplus” represents in a very straightforward way the difference between the maximum consumers would have been willing to pay for the goods they received and what they actually paid for the goods. The green “producer surplus” represents how much money was received in excess of what suppliers would have been minimally willing to accept for the goods they have sold. Intuitively (and your economics instructor is unlikely to have challenged this intuition), “surplus over willingness to pay” seems a good measure of consumer welfare. After all, if I would have been willing to pay $100 for some goods, and it turns out I can buy them for only $80, I have in some sense been made $20 better off by the trade. If I can buy the same bundle for only $50, I’ve been made better off still. For an individual consumer or producer, under usual economic assumptions, welfare does vary monotonically with the surpluses represented in the graph above. And market-clearing maximizes the total surplus enjoyed by consumer and producer both. (The naughty red triangles in the diagram represent the loss of surplus that occurs if prices are fixed at other than the market-clearing value.) Markets are “efficient” with respect to total surplus.
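The geometry of surplus is easy to make concrete. Here is a minimal sketch using toy linear curves of my own choosing (inverse demand P = 10 - Q, inverse supply P = Q; the numbers are illustrative, not from the post), showing how surplus shifts between the parties, and total surplus shrinks, when the price is fixed away from the market-clearing value:

```python
def quantity_demanded(p):
    return max(0.0, 10 - p)  # from inverse demand P = 10 - Q

def quantity_supplied(p):
    return max(0.0, p)       # from inverse supply P = Q

def surpluses(p):
    """Consumer and producer surplus when trade is rationed to the
    short side of the market at price p."""
    q = min(quantity_demanded(p), quantity_supplied(p))
    consumer = 10 * q - 0.5 * q * q - p * q  # area under demand curve, above price
    producer = p * q - 0.5 * q * q           # area above supply curve, below price
    return consumer, producer

print(surpluses(5))  # market clearing: (12.5, 12.5), total 25
print(surpluses(3))  # price ceiling:   (16.5, 4.5),  total 21
```

The ceiling at p = 3 makes the consumer better off while total surplus falls by 4; the missing 4 is the red deadweight triangle.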

Unfortunately, in realistic contexts, surplus is not a reliable measure of welfare. An allocation that maximizes surplus can be destructive of welfare. The lesson you probably learned in an introductory economics course is based on a wholly unjustifiable slip between the two concepts.

Maximizing surplus would be sufficient to maximize welfare in a world in which one individual traded with himself. (Don’t laugh: that is a coherent description of “cottage production”.) But that is not the world to which these concepts are usually applied. Very frequently, surplus is defined with respect to market supply and demand curves, aggregations of individuals’ desire rather than one person’s demand schedule or willingness to sell, with producers and consumers represented by distinct people.

Even in the case of a single consumer and a different, single producer, one can no longer claim that market-clearing necessarily maximizes welfare. If you retreat to the useless caution into which economists sometimes huddle when threatened, if you abjure all interpersonal comparisons of welfare, then one simply cannot say whether a price below, above, or at the market-clearing value is welfare maximizing. As you see in the diagrams above, a price ceiling (a below-market-clearing price) can indeed improve our one consumer’s welfare, and a price floor (an above-market-clearing price) can make our producer better off. (Remember, within a single individual, surplus and welfare do covary, so increasing one individual’s surplus increases her welfare.) There are winners and losers, so who can say what’s right if utilities are incommensurable?

Here at interfluidity, we are not in the business of useless economics, so we will adopt a very conventional utilitarianism, which assumes that people derive similar but steadily declining marginal welfare from the wealth they get to allocate. Which brings us to our first result: If our single producer and our single consumer begin with equal endowments, and if the difference between consumer and producer surplus is not large, then letting the market clear is likely to maximize welfare. But if our producer begins much wealthier than our consumer, enforcing a price ceiling may increase welfare. If it is our consumer who is wealthy, then the optimal result is a price floor. This result, a product of unassailably conventional economics, comports well with certain lay intuitions that economists sometimes ridicule. If workers are very poor, then perhaps a minimum wage (a price floor) improves welfare even if it does turn out to reduce the quantity of labor engaged. If landlords are typically wealthy, perhaps rent control (a price ceiling) is, in fact, optimal housing policy. Only in a world where the endowments of producers and those of consumers are equal is market-clearance incontrovertibly good policy. The greater the macro- inequality, the less persuasive the micro- case for letting the price mechanism do its work.
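Our first result can be checked with a toy calculation. The sketch below (my own illustrative numbers, not the post’s) adopts the same conventional utilitarianism as log utility of wealth, with money surpluses from a simple linear market rationed to the short side at any fixed price. With equal endowments, the market-clearing price wins; with a poor consumer facing a rich producer, a price ceiling yields higher total welfare:

```python
import math

def surpluses(p):
    """Money surpluses in a toy linear market (inverse demand P = 10 - Q,
    inverse supply P = Q), with trade rationed to the short side at price p."""
    q = min(max(0.0, 10 - p), max(0.0, p))
    return 10 * q - 0.5 * q * q - p * q, p * q - 0.5 * q * q

def welfare(p, consumer_wealth, producer_wealth):
    """Total log-utility welfare, given each party's initial endowment."""
    cs, ps = surpluses(p)
    return math.log(consumer_wealth + cs) + math.log(producer_wealth + ps)

# Equal endowments: the market-clearing price (p = 5) beats a ceiling (p = 3).
assert welfare(5, 50, 50) > welfare(3, 50, 50)

# Poor consumer, rich producer: the price ceiling now maximizes total welfare.
assert welfare(3, 10, 100) > welfare(5, 10, 100)
```

The mechanism is exactly diminishing marginal utility: a dollar of surplus shifted to the poorer party buys more welfare than the dollar of total surplus destroyed by the off-equilibrium price.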

Of course we have cheated already, and jumped from the case of a single buyer and seller to a discussion of populations. Fudging aggregation is at the heart of economic instruction, and I do love to honor tradition. If producers and consumers represent distinct groupings, but each group is internally homogeneous, aggregation doesn’t present us with terrible problems. So we’ll stand with the previous discussion. But what if there is a great diversity of circumstance within groupings of consumers or producers?

Let’s consider another common case about which many economists differ with views that might be characterized as “populist”. Suppose there is a limited, inelastic supply of road-lanes flowing onto the island of Manhattan. If access to roads is ungated, unpleasant evidence of shortage emerges. Thousands of people lose time in snarling, smoking, traffic jams. A frequently proposed solution to this problem is “congestion pricing”. Access to the bridges and tunnels crossing onto the island might be tolled, and the cost of the toll could be made to rise to the point where the number of vehicles willing to pay the price of entry was no more than what the lanes can fluidly accommodate. The case for price-rationing of an inelastically supplied good is very strong under two assumptions: 1) that people have diverse needs and preferences related to the individual circumstances of their lives; and 2) that willingness to pay is a good measure of the relative strength of those needs and preferences. Under these assumptions, the virtue of congestion pricing is clear. People who most need to make the trip into Manhattan quickly, those who most value a quick journey, will pay for it. Those who don’t really need the trip or don’t mind waiting will skip the journey, or delay it until the price of the journey is cheap. When willingness to pay is a good measure of contribution to welfare, price rationing ensures that those more willing to pay travel in preference to those less willing, maximizing welfare.

Unfortunately, willingness to pay cannot be taken as a reasonable proxy for contribution to welfare if similar individuals face the choice with very different endowments. Congestion pricing is a reasonable candidate for near-optimal policy in a world where consumers are roughly equal in wealth and income. The more unequal the population of consumers, the weaker the case for price rationing. Schemes like congestion pricing become impossibly dumb in a world where a poor person might be rationed out of a life-saving trip to the hospital by a millionaire on a joy ride. Your position on whether congestion pricing of roads, or many analogous price-rationing schemes, would be good policy in practice has to be conditioned on an evaluation of just how unequal a world you think we live in. (Alternatively, maybe under some “just deserts” theory you think inequality of endowment in the context of an individual choice is determined by more global factors that justify rationing schemes that are plainly welfare-destructive and would be indefensible in isolation. I, um, disagree. But if this is you, your case in favor of microeconomic market-clearing survives only through the intervention of a very contestable macro- model.)
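Why willingness to pay decouples from need under unequal endowments can be made concrete. Under log utility, a traveler’s maximum willingness to pay for a trip worth v in utility terms solves log(wealth - WTP) + v = log(wealth), i.e. WTP = wealth * (1 - exp(-v)), so willingness to pay scales with wealth. A hypothetical sketch (the travelers and all numbers are invented for illustration):

```python
import math

def willingness_to_pay(wealth, trip_value):
    """Payment leaving a log-utility traveler indifferent about the trip:
    log(wealth - wtp) + trip_value == log(wealth)."""
    return wealth * (1 - math.exp(-trip_value))

# Hypothetical travelers: (wealth, utility value of the trip to them).
joyrider = (1_000_000, 0.01)  # millionaire, trivial whim
patient = (1_000, 2.00)       # poor person, urgent hospital trip

wtp_rich = willingness_to_pay(*joyrider)  # about 9,950
wtp_poor = willingness_to_pay(*patient)   # about 865

# Price rationing awards the scarce lane to the higher bidder...
assert wtp_rich > wtp_poor
# ...even though the trip matters far more, in welfare terms, to the poor traveler.
assert patient[1] > joyrider[1]
```

The toll price sorts by wealth times need, not by need, so the joyride crowds out the hospital trip whenever the wealth gap is large enough.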

Inequality’s evisceration of the case for market-clearing does not require any conventional market failures. We need not invoke externalities or information asymmetries. The goods exchanged can be rival and excludable, the sort of goods that markets are presumed to allocate best. Under inequality, administered prices might be welfare maximizing even when suppliers are perfectly competitive (a price floor might be optimal) or when demand is perfectly elastic (in which case a price ceiling might help).

But this analysis, I can hear you say, cruel reader, is so very static. Even if the case for market-clearing, or price-rationing, is not as strong as the textbooks say in the short run, in the long run — in the dynamic future of our brilliant transhuman progeny — price rationing is best because it creates incentives for increased supply. Isn’t at least that much right? Well, maybe! But there is no general reason to think that the market-clearing price is the “right” price that maximizes dynamic efficiency, and any benefits from purported dynamic efficiency have to be traded off against the real and present welfare costs of price rationing in the context of severe inequality. It’s quite difficult to measure real-world supply and demand curves, since we only observe the price and volume of transactions, and observed changes can be due to shifts in supply or demand. To argue for “dynamic market efficiency” one must posit distinct short- and long-run supply curves, a dynamic process by which one evolves to the other with a speed sensitive to price, and argue that the short-term supply curve over continuous time provides at every moment prices which reflect a distribution-sensitive optimal tradeoff between short-term well-being and long-run improved supply. If not, perhaps a high price floor would better encourage supply than the short-run market equilibrium, at acceptable cost (as we seem to think with respect to intellectual property), or perhaps a price ceiling would help consumers at minimal cost to future supply. There is no introductory-economics-level case to establish the “dynamic efficiency” of laissez-faire price rationing, and no widely accepted advanced case either. We do have lots of claims of the form, “we must let XXX be priced at whatever the market bears in order to encourage future supply”. That’s a frequent argument for America’s rent-dripping system of health care finance, for example. 
But, even if we concede that the availability of high producer surplus does incentivize innovation in health care, that provides us with absolutely no reason to think that existing supply and demand curves (which emerge from a crazy patchwork of institutional factors) equilibrate to make the correct short- and long-term tradeoffs. Maybe we are paying too little! Our great grandchildren’s wings and gills and immortality hang in the balance! Often it is simply incorrect to posit long-term price elasticity masked by short-term tight supply. The New Urbanists are heartbroken that, in fact, the supply of housing in coveted locations seems not to be price elastic, in the short-term or long. Their preferred solution is to cling manfully to price rationing but alter the institutions beneath housing markets in hope that they might be made price elastic. An alternative solution would be to concede the actual inelasticity and just impose price controls.

But… but… but… If we don’t “let markets clear”, if we don’t let prices ration access to supply, won’t we have day-long Soviet meat lines? If the alternative to price-rationing automobile lanes creates traffic jams and pollution and accidents, isn’t price-rationing superior because it avoids those costs, which are in excess of mere lack of access to the goods being rationed? Avoiding unnecessary costs occasioned by alternative forms of rationing is undoubtedly a good thing. But bearing those costs may be welfare-superior to bearing the costs of market allocation under severe inequality. There is a lot of not-irrational nostalgia among the poor in post-Communist countries for lives that included long queues. And there are lots of choices besides “whatever price the market bears” and allocation by waiting in line all day. Ration coupons, for example, are issued during wartime precisely because the welfare costs of letting the rich bid up prices while the poor starve are too obvious to be ignored. Under sufficiently high levels of inequality, rationing scarce goods by lottery may be superior in welfare terms to market allocation.

The point of this essay is not, however, to make the case for nonmarket allocation mechanisms. There are lots of things to like about letting the market-clearing price allocate goods and services. Market allocations arise from a decentralized process that feels “natural” (even though in a deep sense it is not), which renders the allocations less likely to be contested by welfare-destructive political conflict or even violence. It is not market-clearing I wish to savage here, but the inequality that renders the mechanism welfare-destructive and therefore unsustainable. Under near equality, market allocation can indeed be celebrated as (nearly) efficient in welfare terms. However, if reliance on market processes yields the macroeconomic outcome of severe inequality, the microeconomic foundations of market allocation are destroyed. Chalk this one up as a “contradiction of capitalism”. If you favor the microeconomic genius of market allocation, you must support macroeconomic intervention to ensure a distribution sufficiently equal that the mismatch between “surplus” and “welfare” is modest, or see the balance tilt toward alternative mechanisms. Inequality may be generated by capitalism, like pollution. Like pollution, inequality may be a necessary correlate of important and valuable processes, and so should be tolerated to a degree. But like pollution, inequality without bound is inconsistent with the efficient functioning of free markets. If you are a lover of markets, you ought to wish to limit inequality in order to preserve markets.

Update History:

  • 14-May-2014, 1:50 a.m. PDT: “wholly unjustifiable conceptual slip between the two concepts.”
  • 14-May-2014, 12:25 p.m. PDT: “absolutely no reason”, thanks Christian Peel!
  • 3-Aug-2014, 10:50 p.m. EEDT: “and log-run long-run supply curves”
  • 23-Mar-2021, 1:40 p.m. EDT: “…people derive the similar but steadily declining…”; “They The greater the macro- inequality, the less persuasive the micro- case…”

VC for the people

Oddly (very oddly), I found myself last week at the INET Economics conference in Toronto. Larry Summers was the final speaker. His presentation was excellent. Whatever I might object to in Summers’ history or politics, he’s brought to the mainstream a set of views I’ve long held, and he is an engaging, cogent presenter.

I had a question for Summers that I didn’t get to ask. So I’ll ask it here.

Early in his talk, Summers pointed out, accurately, that economists really need to rethink the standard “labor / leisure tradeoff”. Almost no one prefers a life of pure “leisure”. Human beings like to regard themselves and to be regarded by others as “productive”. They like to “make a contribution” or “pay their own way” or “kick ass” or “dominate others”, to do something that they believe confers value and status. As Summers pointed out, retirement is often not so good for people. The luckiest people, young or old, are those whose work is fulfilling and enjoyable, not those who do not work at all. As people grow wealthy, they become more free to choose the ways by which, and the terms under which, they will do useful or important things. Wealth is better understood as conferring upon individuals a greater freedom of choice over what kinds of work they wish to do than as endowing lives of “leisure”. A person with wealth can explore roundabout and risky production processes (become an artist, write a novel, start a business), can opt for work with no hope of remuneration (volunteer, help raise a child or grandchild), or can hold out for only the most fulfilling or best-paid market labor. A person without wealth may be forced to accept degrading and poorly paid work, just to pay the bills.

Summers’ talk was the capstone of a conference whose theme was “innovation”. In an excellent session a day earlier (see John Cassidy for a full write-up, ht Mark Thoma), there was surprising agreement among several panelists that speculative bubbles help support innovation. William Janeway distinguished between bubbles in productive vs nonproductive sectors, financed by banks vs nonbanks, and argued that productive-sector, not-bank-financed bubbles promote socially useful innovation at modest social cost, despite high private costs to investors. He went so far as to suggest that agency problems in the delegated investment process, specifically the inability of career-minded fund managers to stay away from bubbles regardless of any personal reservations, make an important contribution to innovation. Steven Fazzari (whose work on inequality this blog has featured before) described research showing that R&D expenditures of young firms are constrained by external finance and increase in bubblicious periods. Ramana Nanda investigated whether investments made at the top of bubbles were poor, and found that they were not. They were just riskier. Firms funded by venture capitalists in heat were unusually likely to crash and burn, sure, but they were also unusually likely to succeed spectacularly. In an earlier panel, Mariana Mazzucato described the importance of “mission-oriented” investment by the public sector. States determined to gain military advantages or put humans in space accept experimentation and failure that would be intolerable to private venture capitalists (whose enthusiasm for risk, she argues, is in general overstated). The common thread in all these accounts is that too much market discipline can be socially counterproductive. 
If (nonbank-financed) speculative bubbles create social value that exceeds the costs borne by investors and entrepreneurs, then the fact that market participants fail to impose privately optimal discipline on their own portfolios is beneficial. If revolutionary developments in technology depend upon states accepting large, nonrecoverable expenses, a managerialist insistence on quantifiable performance metrics may be foolish. Even in the private sector, powerhouses of invention like Bell Labs and Xerox PARC thrive primarily within cushy monopolies, where they are sheltered from quotidian fretting over the bottom line, where market incentives are present but blunted.

So, Summers argued (as he has now argued for a while), Western economies may have entered a period of “secular stagnation” in which the “natural rate of interest” (the rate at which the resources of the economy, human or otherwise, would be fully employed) is so low that we cannot achieve it, or should not try (because rates so low become ineffective at spurring demand or carry with them other costs). He emphasized infrastructure investment as a solution, a near free-lunch which simultaneously increases the economy’s capacity as it spurs aggregate demand. I have no quarrel with that. Infrastructure investment would be a great thing to do, if we could solve the political and regulatory problems that have rendered competent public enterprise nearly impossible.

But we do have other options. If it is true, as Summers seems to think, that humans prefer to do important things even when they are not forced by a labor-market cudgel, and if it is also true that financial constraint causes people to accept safe and sure work rather than take chances on activities that might be speculative but more valuable, then there might be social return in having the state absorb some of the risk of failure faced by individual humans. In effect, the state could provide venture capital to the people. If ordinary citizens had a small but reliable annuity, too modest to live comfortably but enough to prevent destitution, then at the margin, we’d expect people who currently seek or accept unfulfilling, underpaid work to opt for entrepreneurship, or education, or art, or child-rearing, or just hold out for a better gig. “VC for the people” would combine a reduction in labor supply with a lot of new labor demand, forcing employers to increase wages and encouraging substitution of capital for the least desirable jobs. Both the wage effect and the annuity itself would increase the share of national income available to those without direct claims on capital, reducing inequality. In his talk, Summers mused (wonderfully) that he’d prefer we not evolve to an economy in which people are employed providing increasingly marginal services to the rich, working as specialized “knee masseurs” and the like. A straightforward way to preclude that is to ensure that everyone has the means to refuse those jobs and take chances on more meaningful and ultimately more valuable work.

“VC for the people” would reduce market discipline, but it would certainly not eliminate it. People do not require the threat of destitution to cultivate ambition. It is much better to supplement one’s modest annuity with a vigorous market income than to crouch inertly in a hovel. Most people (like most of you, my not-nearly-destitute readers) will still try hard to achieve economic success. It’s just that people who have options are much more likely to actually find success than people who don’t.

“VC for the people” has a more common name. It is called a universal basic income. Properly implemented, it is not means-tested and carries no disincentive to earn. It is inflationary via increased purchasing power of ordinary people, the best kind of inflation, especially desirable in disinflationary times. Its level is a policy instrument and need not be indexed to prices. If it “works too well”, positive interest rates can tamp down spending, and, presto, no more secular stagnation.

So, what do you say, Larry Summers? Would you support a universal basic income?


Note: The title of this post is a bit of a play on Anatole Kaletsky’s QE for the people, which is similar to my own Monetary Policy for the 21st century, as well as proposals by David Beckworth, Ryan Cooper, Ashwin Parameswaran, Matt Yglesias, Haitao Zhang, and I’m sure many others.

However, it’s important to note a difference between those proposals (for fiscalist central banks that “cut checks” to regulate the macroeconomy in addition to using traditional monetary tools) and proposals like this one, for a universal basic income. A fiscalist central bank must be able to tighten as well as loosen when macroeconomic conditions change. In order to retain policy flexibility, recipients of “helicopter money” must not come to depend upon it as permanent income. A fiscalist central bank would have to take care to cut its checks irregularly, or (as I suggested) wash its transfers to the public through a lottery to avoid recipient dependence.

A universal basic income, however, is intended to be depended upon. Its purpose is to alter people’s behavior, to render them more risk-tolerant, to increase their bargaining power in wage negotiations. Macroeconomically, a universal basic income might provide a low-frequency “reset” to positive interest rates, but it should not be adjusted monthly or quarterly like a central bank policy instrument. A universal basic income should be determined like the minimum wage, via acts of Congress. “Helicopter money”, on the other hand, should not depend upon acts of Congress. Its purpose is to offload a macro-stabilization component of fiscal policy from legislatures to central banks. (Larry Summers, in his talk, admitted confusion as to the point of helicopter money proposals. Don’t fiscal expenditures plus open market operations amount to the same thing? In terms of net flows to the private sector, they do amount to the same thing, but in institutional terms they are very different. Central banks are much more agile, more nimble, than legislative bodies. If fiscal policy is to be used as a macro stabilization tool, then some aspects of fiscal policy must be delegated to an agency capable of responding at the frequency required for macro stabilization. That is the attraction of “helicopter money”.)

Update History:

  • 16-Apr-2014, 3:55 p.m. PDT: Added David Beckworth and Haitao Zhang to list of helicopter money (-ish) proposals. Added “start a business” to list of risky, roundabout production process things.

“Incentives to produce” are incentives to rig the game

That’s obvious, right? But let’s belabor the point.

All too often in discussions about the vast dispersion of circumstance we call “inequality”, people concede a kind of trade-off. Yes, reducing rewards to those at the top of the wealth/income distribution might blunt their incentives to produce. But the cost of that might be offset by utilitarian benefits of transfers to the less well off, or by greater prosperity engendered by MPC effects on aggregate demand, or by whatever.

That’s all well and good as far as it goes. But at current margins, I suspect (with Paul Krugman) there is no tradeoff. There might be a tradeoff in measured GDP, but GDP happily tallies economic coercion and rent-capture along with genuinely productive activity. Suppose that a comic-book evil pharmaceutical company secretly unleashes a disfiguring virus for which — miracle of miracles! — it has an expensive, patented treatment. After the pandemic, consumers would have a choice: tolerate an odiferous oozing eczema (but remain otherwise healthy and productive!), or pay for the treatment. GDP would likely rise! In macroeconomic terms, this kind of thing is an example of the “broken window fallacy”. Causing a disease and then expensively treating it does not in fact make the world richer. But it may well inspire economic activity — the mass production of a new drug, visits to doctors, extra hours people choose to work in order to afford the treatment, etc. In aggregate, we work harder just to stay in place. But the distributional effects of the operation are very real. The extra personal income enjoyed by the conspirators spends nicely.

In real life, it’s not so common for comic-book villains to release icky pathogens and then charge for a cure. But it is very common for doctors to restrict entry into their profession and to act politically to inflate the cost of their services. Goaded by “incentives to produce”, participants in the financial industry do a lot of “innovating” that amounts to finding ways of skimming invisible or unexpected fees from people, or persuading customers to bear underappreciated and undercompensated risks, or maximizing the value to them (and costs to others) of guarantees implicitly or explicitly provided by the state. Nearly every industry hires lobbyists to carve out favorable loopholes and subsidies and regulatory schemes at everyone else’s expense. Tech firms make a business model of invasive surveillance and selling information about people who are their users but not their customers. Patent trolls send extortion letters to users and creators of new technology. Politicians “revolve” out of government into perfectly legal, extravagantly compensated sinecures in the private sector, and then often back into government. Senior members of the military become “private sector entrepreneurs”, garnering contracts from friends and former colleagues in a burgeoning defense and intelligence industry, often for work that used to be performed more cheaply internally. Executives collude with friendly boards who rely upon transparently idiotic consulting practices to extract huge salaries. Some of these things contribute to measured GDP, to “growth”, but their effect on the actual well-being of those outside their industries is, um, questionable.

This stuff isn’t marginal, nor should we expect it to be. In fact, we should expect the prevalence of rent capture (or worse) as a source of economic profit to increase with technological progress. Why? Because, absent chicanery, technology increases the ease of production and the efficiency of distribution. As Schumpeter pointed out, the source of profit in real-life capitalism is the fact that monopoly power is ubiquitous because of natural barriers to competition. The corner store has a monopoly on the convenience of its neighbors, and so can capture some of the surplus that might otherwise be bid away to customers by competitors. On-demand delivery drones would eliminate that monopoly. Yet the corner store industry might lobby to prevent residential rooftop deliveries, in which case it is no longer exploiting a natural inefficiency but capturing a rent. In business school, students are taught that a successful business has a “moat” that makes it difficult for competitors to bid away one’s margins. Technological progress renders moats that derive from nature harder to come by. Instead, successful businesses — and successful people (since under capitalism, a human is just a small business) — must rely increasingly on moats that result from social and political arrangements. We choose to grant monopoly rights to “creators” in the form of intellectual property and to expand their scope. We choose to limit the taxi business to medallion holders. We choose to prevent Indian doctors from competing in American hospitals, even though airplanes have eliminated locals’ natural monopoly. We choose to hire from the Ivy League. The distribution of profits is determined by social choices rather than by natural scarcities.

None of this is to say that any particular such choice is “wrong”. The static inefficiency inherent in patent monopolies may, at least under some circumstances, be overcome by the incentives to invent they yield. Minimum wage laws are restraints on competition that I enthusiastically support precisely because of to whom the “rents” are directed. Maybe sending a gigantic, very random fountain of money to producers of health-care inputs via an inscrutable hodge-podge of public and private payers really is the best way to ensure our cancers are cured before we are diagnosed with them. Who knows?

But the distribution of affluence is less and less a matter of direct attachment to production, and more and more a function of winning social games and political contests that determine to whom the fruits of production will be allocated. There’s no conspiracy in that. Nor is it an answer to say “capital” now determines who enjoys wealth. As technology improves, capital goods become mere commodities like everything else. Financial capital, whatever it is, is not an input into any material production process. It is a construct and artifact of a huge and ever-changing array of social and legal institutions. “Human capital”, “social capital”, and “organizational capital” are things we impute ex-post to winners of distributional contests as explanations of observed returns. They do not straightforwardly exist in the world.

“Inequality” — high dispersion of outcome — creates strong incentives to be on the side of winners. There are some circumstances where being on the side of winners means making an outsize contribution to economic production. There are other circumstances where winning means aligning oneself with coalitions capable of winning legal and political contests that may be orthogonal to, or much worse than orthogonal to, any contribution to production. The two strategies don’t preclude one another. Perhaps outsize rewards are shared between those who make unusual contributions to production and those who participate in politically potent guilds. But, at best, increased dispersion increases the incentive to engage in both sorts of behavior. Incentives to produce are also incentives to contest for rents. And at any given time, for any given person, one may be an easier or more reliable means of gaining outsize rewards than the other.

Suppose, reasonably I think, that ceteris paribus humans prefer to “be good”. That is, we prefer to do work that is productive and engage in behavior that is ethical. Suppose, also reasonably, that a well-ordered society depends upon people sometimes making choices opposed to their material interests on ethical or other grounds. Then it is obvious how inequality might be costly. Instead of talking about “incentives to” (produce, extract rents, whatever), we might describe outcome dispersion as a tax on refraining from mercenary behavior. If the difference between economic winners and losers is modest, people of ordinary virtue might refrain from participating in activities they consider corrupt, might even be willing to “blow the whistle”, because the cost of doing so is outweighed by their preference for behaving well. But as outcome dispersion grows, absenting oneself from or even opposing activities that would be personally remunerative but socially undesirable becomes too costly. The required sacrifice eventually overcomes a ceteris paribus preference for virtue. Preventing the misbehavior of large coalitions is a collective action problem. An isolated malcontent or whistleblower is likely to be evicted from the coalition without meaningfully improving its behavior, if others choose to “circle the wagons”. Outcome dispersion both increases the costs to individuals of engaging in pro-social behavior, and diminishes the likelihood that bearing those costs will be fruitful, since others will have strong incentives not to follow.

Wouldn’t it be odd to live in a country where, say, bankers individually acknowledge that their industry often behaves destructively, where insiders perceptively describe the conditions that create incentives for people to take bad risks or fleece “muppets”, but continue to work in those places and do nothing about it? Wouldn’t it be odd to live in a country where doctors privately apologize for the way their services are “priced”, but nevertheless take home their paychecks and pay AMA dues? Or in a country where economics instructors teach agency costs using textbook pricing as a case study, during a course for which students are required to purchase a $180 textbook?

I don’t mean to criticize anyone in particular. (I used to be the economics instructor.) In all of these cases, there really isn’t anything any one individual can do to remedy the bad practices. Making a big issue of them would lead to useless excommunication. Instead we shrug ironically. In our society, an ironic attitude is a token of sophistication (a telling word, which once meant corruption but now implies competence). An ironic attitude towards collective ethics is adaptive. It helps basically decent individuals participate in coalitions that ruthlessly contend for rents. But perhaps we’d have a better society if, rather than turning our ethical discomfort into an object of aesthetic consideration, lots of us worked straightforwardly to remedy it. And perhaps more of us would do so if the risk of losing our place were not so terrible. Ethical behavior is endogenous. “Inequality” renders it costly.

Update History:

  • 29-Mar-2014, 6:00 p.m. PDT: Struck near duplicate: “…treatment, etc. GDP rises! In aggregate…”; “hodge-podge of public and private institutions payers”.
  • 23-Sep-2015, 3:40 a.m. PDT: A bunch of small edits: “participants in the financial industry do a lot of ‘innovating’ that amounts to finding ways of skimming invisible or unexpected fees from people, or persuading them customers to bear underappreciated and undercompensated risks, or maximizing the value to them”; “as an explanation explanations of observed returns”; “agency costs with case study of using textbook pricing as a case study“; “for which students were are required”.

Followup: Pro-family, pro-children, anti-“marriage promotion”

Responding to the previous post, James Pethokoukis misreads the views of people like me. He writes:

Folks who agree with [Waldman’s] view often advocate a hugely expanded government safety net — universal pre-K, one-year paid parental leave, a universal basic income among other programs — to do the work of transmitting social and intellectual capital that intact families no longer can.

Folks who are me do advocate for vastly expanded government benefits for families. I’d support universal pre-K, and I especially support a universal basic income. (Paid parental leave not so much, if the payer would be a prior employer.) But the purpose of these programs is not to “do the work…that intact families no longer can”. On the contrary, I support these programs because they would enable and assist the work that couples must do to stay together and in love and raise children well.

As I tried to emphasize in the previous piece (maybe the goat sex joke obscured it): There is no nonmarginal constituency in the United States advocating for alternatives to the two-parent family as the core unit of childrearing. (Advocates of alternative forms of parenting by gay people might once have been an exception here, but the ascendancy of same-sex marriage has largely assimilated the gay community into the broad cultural norm.) While as a free society we should be open to alternative arrangements, my expectation is that in flourishing communities, traditional families will remain the norm. The quantitatively relevant challengers to the intact, two-parent household are divorced parents and single moms. Those households do not result from any decline in positive norms surrounding married life, though they may in part be enabled by a relaxation of negative norms surrounding single parenthood and divorce. Americans do not, in large numbers, choose to become single or divorced parents when they have the option of raising children in loving, economically secure marriages. They become single parents because they want to be parents and the loving, economically secure marriage is not available. People who imagine that nefarious alternatives to married childrearing are being promoted and must be countered in the cultural sphere are simply misguided.

The effective way to support traditional families would be to increase the likelihood that a marriage chosen remains loving and economically secure. Matt Yglesias (who is much nicer than me) helpfully suggests this as a means of finding common ground:

[R]ather than being skeptical about this rhetoric [of marriage promotion], a more productive posture might be for liberals to see the family stability angle as a way of getting social conservatives more invested in helping poor people. The suite of things most likely to make for more stable working class families are basically better demand management, better schools, more wage subsidies, better transportation connections to jobs, and overall the kind of stuff that makes things better.

That’s a good idea! But promoting the social and material conditions in which people would likely form durable marriages is very different from nagging people for making poor choices that may not be poor choices, given circumstances on the ground. And it is very different from trying to narrow people’s options by bullying them into marriage with a return of shotgun weddings or restrictions on divorce. That would be the worst kind of cargo cult: One cannot conclude from correlations between voluntary unions and good outcomes that more-or-less coerced marriages would be awesome. But the coercion would carry obvious costs and risks, to people who aren’t pundits or think-tank fellows. Too often, marriage promotion is presented as a substitute for, rather than a complement to, altering the material conditions that render people’s choices so difficult and outcomes so poor.

In a better world, social conservatives would have more confidence in the power of their own ideals. One doesn’t have to be cajoled or trapped into the good life. In the United States, people who have options — even irreligious urbanites with dissolute norms — freely choose marriage at high rates. Yes, Hollywood puts out a lot of prurient and violent movies. But the same industry produces scores of romantic comedies and sappy chick-flicks in which marriage epitomizes the happily-ever-after. Those films remain popular across all socioeconomic classes (if not across genders).

Even in social-conservative-nightmare-land, marriage-indifferent Scandinavia:

“Nowadays, it has become fashionable for the father to hand over the bride. This isn’t a Scandinavian custom, but is something that people have picked up from watching American TV programs,” according to Yvonne Hirdman, professor of history at Stockholms University. Another new imported trend is the practice of placing gifts on the table for guests at the wedding banquet. “That is another new custom that comes from America,” says Anna Lundgren, editor-in-chief of bridal magazine and internet site Bröllopsguiden.

Weddings are parties. They aren’t marriages. Nevertheless, the centrality of wedding fantasy in American cultural life reflects a powerful, durable aspiration. America really is exceptional in its attitude towards marriage.

There is every reason to believe that, if their options were better, many women who today become single moms would instead form traditional families. I know there is more to life and love than material wealth. But there is little more harmful to life and love than poverty and economic instability. Social conservatives are fond of pointing out that AFDC used to explicitly subsidize single motherhood, and that was obviously bad. (It was!) But present arrangements subsidize romantic cohabitation in preference to marriage in poorer, more precarious, communities. Household economies of scale turn into painful diseconomies when a partner neither brings in an income nor does much housework or childrearing. The option of kicking out an indigent partner is extremely valuable, especially for moms in communities where men are frequently out of work. Mothers are wise, not foolish, to retain that option. (The behavioral effects of being a male adult who brings nothing but a mouth to the dinner table ensure that exercise of this option will become emotionally justifiable, pretty fast.) Vigorous full employment, or a universal basic income, would eliminate the strong economic incentive for mothers to prefer cohabitation without commitment and make marriage rational where now it is not.

Conservatives often claim to have faith in America, in American exceptionalism. I wish they’d have a bit more faith in the institutions that they claim are valuable and in Americans who aren’t rich. Marriage “passes the market test” in America among people who could afford, in social and economic terms, to adopt more informal Scandinavian lifestyles. Rich liberals aren’t shamed, exhorted, counseled, bribed, or propagandized into marriage. They choose it. There are rational, remediable reasons why poorer Americans don’t make the same choice. I wish we would address those reasons rather than pretend the choices are mistakes or moral failures.