Tuesday, August 07, 2012

Proxy measures, sunk costs, and Chesterton's fence

G.K. Chesterton ponders a fence:
In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, "I don't see the use of this; let us clear it away." To which the more intelligent type of reformer will do well to answer: "If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it."

This paradox rests on the most elementary common sense. The gate or fence did not grow there. It was not set up by somnambulists who built it in their sleep. It is highly improbable that it was put there by escaped lunatics who were for some reason loose in the street. Some person had some reason for thinking it would be a good thing for somebody. And until we know what the reason was, we really cannot judge whether the reason was reasonable. It is extremely probable that we have overlooked some whole aspect of the question, if something set up by human beings like ourselves seems to be entirely meaningless and mysterious. There are reformers who get over this difficulty by assuming that all their fathers were fools; but if that be so, we can only say that folly appears to be a hereditary disease. But the truth is that nobody has any business to destroy a social institution until he has really seen it as an historical institution. If he knows how it arose, and what purposes it was supposed to serve, he may really be able to say that they were bad purposes, that they have since become bad purposes, or that they are purposes which are no longer served. But if he simply stares at the thing as a senseless monstrosity that has somehow sprung up in his path, it is he and not the traditionalist who is suffering from an illusion.

Contrast the sunk cost fallacy, according to one account:
When one makes a hopeless investment, one sometimes reasons: I can’t stop now, otherwise what I’ve invested so far will be lost. This is true, of course, but irrelevant to whether one should continue to invest in the project. Everything one has invested is lost regardless. If there is no hope for success in the future from the investment, then the fact that one has already lost a bundle should lead one to the conclusion that the rational thing to do is to withdraw from the project.
The sunk cost fallacy, according to another account:
Picture this: It's the evening of the Lady Gaga concert/Yankees game/yoga bootcamp. You bought the tickets months ago, saving up and looking forward to it. But tonight, it's blizzarding and you've had the worst week and are exhausted. Nothing would make you happier than a hot chocolate and pajamas, not even 16-inch pink hair/watching Jeter/nailing the dhanurasana.
But you should go, anyway, right? Because otherwise you'd be "wasting your money"?

Think again. Economically speaking, you shouldn't go.
Has Chesterton committed the sunk cost fallacy? Consider the concept of proxy measures:
The process of determining the value of a product from observations is necessarily incomplete and costly. For example, a shopper can see that an apple is shiny red. This has some correlation to its tastiness (the quality a typical shopper actually wants from an apple), but it's hardly perfect. The apple's appearance is not a complete indicator -- an apple sometimes has a rotten spot down inside even if the surface is perfectly shiny and red. We call an indirect measure of value -- for example the shininess, redness, or weight of the apple -- a proxy measure. In fact, all measures of value, besides prices in an ideal market, are proxy measures -- real value is subjective and largely tacit.
Cost can usually be measured far more objectively than value. As a result, the most common proxy measures are various kinds of costs. Examples include:
(a) paying for employment in terms of time worked, rather than by quantity produced (piece rates) or other possible measures. Time measures sacrifice, i.e. the cost of opportunities foregone by the employee.
(b) most numbers recorded and reported by accountants for assets are costs rather than market prices expected to be recovered by the sale of assets.
(c) non-fiat money and collectibles obtain their value primarily from their scarcity, i.e. their cost of replacement.
Proxy measures are important because we usually can't measure value directly, much less forecast future value with high confidence. And often we know little of the evidence and preferences that went into an investment decision. You may have forgotten the reason, or (if the original decision-maker was somebody else) never learned it. In that case the original decision-maker may have had more knowledge than you now do -- especially if that decision-maker was somebody else, but sometimes even if it was you -- and it can make a great deal of sense to use the sunk cost as a proxy measure of value.

In the first account of sunk cost, there seems to be no uncertainty: by definition we know that our investment is "hopeless." In such a case, valuing our sunk costs is clearly erroneous. But the second, real-world example is far less clear: "you've had the worst week and are exhausted." Does this mean you won't enjoy the concert as you originally envisioned? Or does it mean that in your exhaustion you've forgotten why you wanted to go to the concert? If the latter is more likely, then my generalization of Chesterton's fence, using the idea of proxy measures, suggests that you should use your sunk costs as a proxy measure of value, and weigh that value against the costs of the blizzard and the benefits of hot chocolate and pajamas, to decide whether going to the concert will still make you happier.
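
To make that weighing concrete, here is a toy sketch in Python. Every number below, and the "trust in your past self" factor, is made up purely for illustration (the example above specifies none of them); the point is only the shape of the comparison, with the discounted ticket price standing in for the value you can no longer articulate:

    # Toy sketch: use the ticket's sunk cost, discounted by how far you trust
    # your past self's judgment, as a proxy for the forgotten value of going.
    ticket_price = 90.0          # sunk cost of the ticket (hypothetical)
    trust_in_past_self = 0.8     # how well your past preferences match tonight's (hypothetical)
    cost_of_blizzard = 25.0      # hassle of going out in the storm (hypothetical)
    value_of_staying_in = 30.0   # hot chocolate and pajamas (hypothetical)

    proxy_value_of_going = trust_in_past_self * ticket_price
    if proxy_value_of_going - cost_of_blizzard > value_of_staying_in:
        print("Go to the concert.")   # 72 - 25 = 47 > 30
    else:
        print("Stay home in your pajamas.")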

If your evidence may be substantially incomplete you shouldn't just ignore sunk costs -- they contain valuable information about decisions you or others made in the past, perhaps after much greater thought or access to evidence than that of which you are currently capable. Even more generally, you should be loss averse -- you should tend to prefer avoiding losses over acquiring seemingly equivalent gains, and you should be divestiture averse (i.e. exhibit endowment effects) -- you should tend to prefer what you already have to what you might trade it for -- in both cases to the extent your ability to measure the value of the two items is incomplete. Since usually in the real world, and to an even greater degree in our ancestors' evolutionary environments, our ability to measure value is and was woefully incomplete, it should come as no surprise that people often value sunk costs, are loss averse, and exhibit endowment effects -- and indeed under such circumstances of incomplete value measurement it hardly constitutes "fallacy" or "bias" to do so.

In short, Chesterton's fence and proxy measures suggest that taking into account sunk costs, or more generally being averse to loss or divestiture, rather than always being a fallacy or irrational bias, may often lead to better decisions: in particular, when it is done in just those cases where substantial evidence or shared preferences that motivated the original investment decision have been forgotten or have not been communicated, or where the quality of the evidence that led to that decision may outweigh the quality of the evidence now motivating one to change one's mind. We generally have far more information about our past than about our future. Decisions that have already been made, by ourselves and others, are an informative part of that past, especially when their original motivations have been forgotten.

References:

Chesterton's Fence

Sunk Cost Fallacy (1), (2)

Endowment Effects / Divestiture Aversion

Loss Aversion

Cost as a Proxy Measure of Value

8 comments:

gwern said...

> We generally have far more information about our past than about our future. Decisions that have already been made, by ourselves and others, are an informative part of that past, especially when their original motivations have been forgotten.

If we cannot recall the motivation behind an investment, doesn't this suggest it was either not a good idea ('I don't know *why* I decided to jump off the balcony - it must have seemed like a good idea at the time...') or that whatever made it a quality idea was situational and the situation has since changed ('his quip was really good but I don't remember it exactly; I guess you had to be there')?

Some examples might help here.

nick said...

There are two examples in the article, the first being Chesterton's fence, the second being the Lady Gaga concert after an exhausting week of work that makes you no longer feel like going. If you don't like the Lady Gaga concert, think of it as a game of a team you like, or hearing a highly regarded speaker on the subject of rationality techniques, or anything else that floats your boat that you would have at some point decided to shell out money for. I also welcome suggestions of more examples where measures of value you or others made in the past are not easy to recreate.

Indeed one possibility is that one has had a genuine long-term change of preferences, or that there has been a genuine long-term change of evidence, or, in the case of the concert, a change that will last at least all night long. And forgetting why one wanted the concert ticket may indeed signal such a change. In the example, you may indeed not enjoy the concert because you are exhausted, or because you bought the ticket on a whim and never really wanted to go that badly, and thus can't remember why now.

On the other hand suppose -- and I'd argue this is more common in the real world, as opposed to the experimental labs of behavioral economists -- there's been no substantial and long-term change of preferences -- just a momentary distraction, such as being exhausted from a hard week's work. After you lose the distraction, and that pink hair or Yankee home runs or visions of the power of rationality make you forget the workweek, chances are you'll be fine, and going to the concert or meeting will recall to you why you wanted to go, and it will be more enjoyable than the pajamas and hot chocolate (or watching the game on TV, or reading the book instead of going to the meeting), which you can have on any other evening.

Furthermore, the way our brains work is that we often consider substantial amounts of information, come to a decision, and then go on to some other problem that requires some other information, so that we move the old information out of conscious memory -- and it may thereafter be difficult to retrieve most of it. So, except for the trivial situations (e.g. comparing X dollars against Y dollars) encountered in the academic experiments, we need to trust our old selves and not be constantly revisiting old decisions: or to put it economically, we need to put substantial weight on our previous investments, which signal to us, across our lossy and distracted memories, the value we placed in things.

In the case of the Lady Gaga concert (or other event you had bought a ticket for) it might also help, besides remembering the price, to remember how much thought and research, or lack thereof, you put into it, and how much time you spent daydreaming about it, just as a quantitative estimate -- or, if you keep a diary, remembering whether or not you left a note about it there -- rather than the often futile exercise of trying to remember the details behind why you had such preferences. There are things in addition to the sunk cost of the ticket that can be used as proxy measures of value. But the sunk cost is one of them, and if you're really exhausted or distracted it may be the only one you clearly remember.
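
One could imagine combining those signals into a single rough estimate, along these lines. This is only a toy sketch; the weights, inputs, and the function itself are entirely made up for illustration:

    # Toy sketch: combine several proxy measures of how much you valued the ticket.
    def proxy_value(ticket_price, hours_of_research, daydream_count, diary_note):
        value = 0.5 * ticket_price          # sunk cost of the ticket
        value += 10.0 * hours_of_research   # effort you spent choosing this event
        value += 5.0 * daydream_count       # times you caught yourself looking forward to it
        if diary_note:                      # you bothered to write it down
            value += 20.0
        return value

    print(proxy_value(ticket_price=90, hours_of_research=2, daydream_count=4, diary_note=True))
    # -> 105.0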

nick said...

It may help to change Chesterton's fence example around a little, to make it more of a personal economic decision, while maintaining the degree of ignorance about the reasons for the fence:

(1) You've inherited a small acreage with a puzzlingly located fence from a relative you never knew.

(2) You have no clue why the fence is there, but the very sparse estate records indicate that it cost Z to build.

(3) The property is too remote to visit personally, and the neighbors wouldn't tell you anyway, so you would have to recruit and pay an investigator at cost X to learn why the fence is there. You don't have any other information by which to estimate the probability that the investigator will turn up evidence that the fence satisfies your preferences (as opposed to your relative's former preferences), or the preferences of somebody you might sell the property to, sufficient to justify maintaining it.

(4) It costs Y / year to maintain the fence, such that the net present value of the Ys is slightly greater than X, but substantially less than Z.

Should you hire the investigator, or just keep paying the costs of maintaining the fence? I believe you should just keep paying to maintain the fence, assuming you (or somebody you are selling the property to) find the property as a whole valuable enough to justify the total expenditures you must make on the estate, including the cost of maintaining the fence. In short, you should use the fact that your relative invested Z as a proxy measure of the value of the fence, discounted by some factor to reflect the degree to which your or your buyer's preferences may not match your relative's, the probability (quite uncertain) that the value of the fence died with the relative, and the like.
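
A minimal sketch of that comparison in Python. The figures for X, Y, and Z, the discount rate, the time horizon, and the preference discount are all hypothetical, chosen only to respect the relative magnitudes stated above:

    def npv(annual_cost, discount_rate, years):
        # Net present value of a stream of equal annual payments.
        return sum(annual_cost / (1 + discount_rate) ** t for t in range(1, years + 1))

    # Hypothetical figures; the example above only fixes their relative sizes.
    Z = 10_000.0   # what the relative sank into building the fence
    X = 1_500.0    # cost of hiring an investigator to learn why it is there
    Y = 120.0      # annual cost of maintaining the fence

    maintenance_npv = npv(Y, discount_rate=0.05, years=30)  # ~1845: slightly above X, far below Z
    preference_discount = 0.5  # how far you trust the relative's preferences to track yours
    proxy_value = preference_discount * Z  # discounted sunk cost as a proxy for the fence's value

    if proxy_value > maintenance_npv:
        print("Keep maintaining the fence; the sunk-cost proxy already justifies the upkeep.")
    else:
        print("The proxy value doesn't cover the upkeep; paying X to investigate may be worthwhile.")

On these made-up numbers the discounted sunk cost comfortably exceeds the cost of upkeep, so you keep the fence without paying for the investigation.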

George Weinberg said...

The whole thing strikes me as being too much an argument by analogy.

I don't think it's true that the radical "social reformer" can see no reason why some law or custom or social structure is in place. Rather, he can imagine one and only one reason: it was put there by the people at the top of the social order, and its purpose is to make sure the people currently on top stay on top.

The conservative sees stability as something necessary for the functioning of society, the radical just sees it as a code word for maintaining the power structure.

nick said...

George, that suggests one of the reasons why my version is better, as well as being far more general in application. There is usually no single reason sufficient to justify, or to refute the justification of, a complicated institution, and this is often true even for personal decisions. One reason, especially just the naive application of a particular ideological theory, is usually woefully insufficient evidence for favoring tearing down the "fence."

The following is a better approach than Chesterton's, albeit somewhat more involved. It applies to almost every non-trivial decision we make about either ourselves, our investments, or our institutions. Indeed, it even tells us when we should make decisions and when we should put them off:

(1) Presume, in the absence of sufficient evidence to the contrary as described in (2), that the complex institution or investment or prior personal decision is still justified. In other words, maintain stability in your life and in our social lives until a sufficient threshold for change has been met.

(2) To the extent one is unsatisfied with the institution/investment/personal decision, fairly gather evidence for and against the utility of said state of affairs.

(3) To the same kind of extent, consider alternative states of affairs.

(4) When the quality of evidence both for and against becomes sufficiently high, and the weight of evidence against - (weight of evidence for + (discount factor)*(sunk cost of state of affairs)) becomes sufficiently greater than zero, abandon the institution or investment or personal decision in favor of the best alternative state of affairs you or we are comparing it to. The lower the quality of evidence, the higher the discount factor -- lower quality evidence means weighting the proxy measure of sunk costs more highly. (A rough sketch in code follows this list.)

(4a) One can derive an adjusted net present value calculation from (4) for analyzing investments.
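
As a rough illustration of step (4), here is a toy sketch in Python. The function name, the 0-to-1 evidence-quality scale, the particular mapping from evidence quality to discount factor, and the example numbers are my own assumptions, not anything specified in the list above:

    def should_abandon(evidence_for, evidence_against, sunk_cost,
                       evidence_quality, min_quality=0.7, threshold=0.0):
        # Abandon the current institution/investment/decision only when the
        # evidence is of high enough quality AND the case against it outweighs
        # the case for it plus the sunk-cost proxy.
        # Lower-quality evidence -> larger discount factor -> sunk cost weighs more.
        discount_factor = 1.0 - evidence_quality   # evidence_quality in [0, 1]
        net_case_for_change = evidence_against - (evidence_for + discount_factor * sunk_cost)
        return evidence_quality >= min_quality and net_case_for_change > threshold

    # Strong, high-quality evidence against the status quo, modest sunk cost -> change.
    print(should_abandon(evidence_for=2.0, evidence_against=8.0,
                         sunk_cost=3.0, evidence_quality=0.9))   # True
    # The same evidence, but of low quality -> keep the status quo.
    print(should_abandon(evidence_for=2.0, evidence_against=8.0,
                         sunk_cost=3.0, evidence_quality=0.3))   # False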

You can see the absurdity of never giving weight to sunk costs by imagining how frantic and pointless your life would be if you were constantly reconsidering every decision you had ever made, and never trusting any decision of anybody else. To the extent we don't want to live such a preposterously hyper-rational lifestyle, we must guide ourselves by procedures similar to the above.

This may well be a good approximation of what our subconscious minds do naturally, when we are not brainwashed by superficial hyper-rationality into ignoring our supposed "biases" and "fallacies". Albeit, such high weightings of sunk costs are probably not as suitable for cases, more common now than in our evolutionary environments, but still probably a small minority of cases, where we have better evidence and better capacity to consider that evidence than the original decision makers, or substantially different preferences than they, or both.

nick said...

A bit more formally, the above expression should be

relative value of two institutions/investments/decisions =
f(preferences, weight of evidence against - (weight of evidence for + (discount factor)*(sunk cost of state of affairs)))

Where evidence maps starting objective states of the world and preferences to objective outcomes (in other words the evidence tells us what outcomes would result and how well those would satisfy our preferences).

Of course, one quite likely needs sub-heuristics or further proxy measures to calculate such a thing -- or, to the extent you can't calculate it accurately or are unconfident of your hunches, greater reliance on sunk costs as a proxy measure of value.

Sister Y said...

I have argued (partially based on your amazing Shelling Out piece) that biases like the sunk cost fallacy and the labor theory of value are evolutionarily adaptive, in that they facilitate survival/reproduction by facilitating commitment to pair bond partners and the group, as well as helping us select what to invest in, in EEA terms.

Anonymous said...

Reading between the lines I can't help but think this is a way to suggest we should not yet raise the block size limit. Am I correct? I wish I had a fraction of your brilliance and breadth.