Markets and Behavioral Economics

In recent years, mentions of “behavioral economics” in books have reached a third of the mentions of The Simpsons, which is a big success for a science. As usual, this research revolves around cognitive biases. But how far do these biases go? I’d say biases end where markets start.

While most economic theories try to be general (often unsuccessfully so), behavioral economics suffers from the opposite. The experiments with college students that the field has been producing over the years describe how humans think in their early 20s while filling out a questionnaire right before a lecture. They don’t describe an experienced stockbroker who has been making million-dollar transactions every day for the last ten years.

Behavioral economics loses its explanatory power as the stakes go up. Nudging works in ecommerce and everyday services. But as money and competition grow, irrationality vanishes, and it happens well before multimillion-dollar deals.

One landmark finding of Kahneman and Tversky is the probability weighting function in prospect theory:

Kahneman and Tversky - 1979 - Prospect Theory

It says that people overestimate the probability of unlikely events. Kahneman and Tversky derived this from experiments with their undergrads, and you can see how the solid line deviates from the “correct” dotted line by a few percentage points.

These few points gain magnitude in big decisions. A high school grad who wants to be a movie star plays against the odds. Movie careers are skewed toward a few superstars, and the probability of becoming one of them has many zeros after the decimal point. How does the grad perceive his chances? He misses a few zeros, taking success to be more probable than it is. He makes choices he wouldn’t make if he knew the true probabilities.
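The curve in the figure has a standard parametric form. A minimal sketch, assuming the functional form and the gains parameter γ ≈ 0.61 that Tversky and Kahneman estimated in their 1992 follow-up paper (not the 1979 original):

```python
# Probability weighting function from cumulative prospect theory:
#   w(p) = p^g / (p^g + (1 - p)^g)^(1/g)
# With g ≈ 0.61, small probabilities are overweighted and
# large ones underweighted, as in the figure above.

def w(p, g=0.61):
    return p**g / (p**g + (1 - p)**g) ** (1 / g)

for p in (0.001, 0.01, 0.1, 0.5, 0.9):
    print(f"p = {p:<6} perceived as w(p) = {w(p):.4f}")
```

A true probability of 0.001 comes out weighted at roughly 0.014, which is the “missed zeros” of the aspiring movie star: a one-in-a-thousand shot felt as better than one in a hundred.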

But in money markets, humans learn probabilities faster. Markets do start with misestimated probabilities. For example, profitable sports betting strategies make money off the people who overbet underdogs:

Hausch and Ziemba - 2008 - Handbook of sports and lottery markets

The bets don’t break even because the winners also pay the house. This strategy yields positive returns in other popular sports, like soccer. But professional bettors leave these markets because big bets behave rationally and kill successful strategies. The bettors move to obscure sports, such as jai alai, or bet on goal differences or particular players, all in an attempt to find small markets, where irrationality still resides.
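The mechanics of this favorite-longshot bias can be sketched with parimutuel arithmetic. All numbers here are illustrative assumptions, not figures from Hausch and Ziemba:

```python
# In a parimutuel pool the house takes its cut first, and the
# payout odds are set by how the crowd splits its money.
# If the crowd overbets longshots, favorites lose less per dollar.

TAKE = 0.15  # assumed house take: 15% of the pool

def expected_return(true_win_prob, pool_share, take=TAKE):
    """Expected value of a $1 bet.

    pool_share: fraction of the pool the crowd placed on this entry,
    which determines the gross payout per $1 if it wins.
    """
    payout = (1 - take) / pool_share
    return true_win_prob * payout - 1

# Favorite: true probability 50%, crowd bets only 45% of the pool on it.
# Longshot: true probability 2%, crowd bets 5% of the pool on it.
print(expected_return(0.50, 0.45))  # small negative EV
print(expected_return(0.02, 0.05))  # much worse EV
```

Neither bet beats the take here, but the gap between the two is what a disciplined strategy exploits before the market learns.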

Financial markets dwarf sports betting. How likely is any human bias there, then? Not very. Money managers with biases leave the table quickly. Markets do have inefficiencies, but for other reasons and with other implications. A famous example is the statistical arbitrage by LTCM:

Long-Term bought the cheaper off-the-run bond, while simultaneously selling the more expensive on-the-run bond. This allowed it to lock in the spread between the two bonds while immunizing it from interest rate movements. LTCM didn’t want to bet on the future of interest rates, instead, it wanted to make a very specific bet on liquidity. By creating a long position in one bond and a short position in another similar bond, LTCM knew that any losses from interest rate movements in one bond would be wiped out by equivalent gains in the other bond.

One problem with this trade, however, was that the spread between the two types of treasuries tended to be very small. For example, in August 1993, before Long-Term entered the market, 30-year bonds yielded 7.24%, while 29½ year bonds yielded 7.36%. This 12 basis point spread would not allow it to earn the type of returns that its investors expected, so the traders at LTCM needed to leverage their trade in order to magnify this return.
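The leverage arithmetic in the passage above is simple to sketch. The 12 basis point spread comes from the quote; the leverage ratios below are illustrative assumptions, not LTCM’s actual financing terms:

```python
# Why a 12 bp spread needs leverage: the gross return on the fund's
# own capital scales linearly with how much the position is levered.

SPREAD = 0.0012  # 12 basis points between on- and off-the-run Treasuries

def return_on_capital(leverage, spread=SPREAD):
    """Gross return on own capital if the spread fully converges."""
    return leverage * spread

for lev in (1, 10, 25):
    print(f"{lev:>2}x leverage: {return_on_capital(lev):.2%} on capital")
```

Unlevered, the trade earns 0.12%; at 25x it earns 3% on capital. The same multiplier applies to losses when the spread widens instead, which is the next paragraph’s story.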

Hoping that the yields would converge, LTCM went long the 29½-year bonds and short the 30-year bonds. Then the 1997 Asian crisis and Russia’s 1998 default occurred, and investors fled to quality. They bought the 30-year bonds, not the 29½-year ones. As the 30-year bonds grew in price, their yield declined and the gap between the two types of bonds widened. LTCM suffered immediate losses that wiped out the profits of the previous four years.

LTCM was an unusual hedge fund. Its founders were careful academic researchers with solid models, so they found a good balance between profitability and bold assumptions about the markets. The result? The 12-basis-point spread is the sort of inefficiency that big markets offer, and hedge funds can’t capture it free of risk. The spread had nothing to do with human irrationality. The 29½-year bonds just happened to trade in a different market.

In between college questionnaires and Treasury bonds, where do cognitive biases cease to be a good approximation of reality? Behavioral economics can’t say. Social sciences don’t produce universal models; they only show how to do certain things better. Here, psychologists showed how to run elegant experiments that predict the future within a specific domain. So far, these experiments have been successfully adopted by UX designers and advertisers. Whatever one thinks about their ethics, ads now waste less than in the Mad Men era, exactly because the pros became as disciplined as the scientists before them. Meanwhile, governments, which could learn a lot from behavioral economics, adopt these ideas sluggishly, as recent results from the UK nudge unit show. But that, too, is connected with markets and competition.

Learning to Learn from Indonesia

The World Bank has published its 2015 World Development Report. Behavioral economics, which the report is about, has already earned a Nobel and best-seller status for popular books on the topic. The Bank now politely reminds us that nudges matter for public policy.

One literally illustrative case from the chapter on productivity:

[Figure: World Development Report 2015, chapter on productivity]

Which is about this:

Seaweed farmers in Indonesia, for example, had no problem noticing that the spacing between pods determined the amount of seaweed they could grow, and they could accurately report the spacing on their own lines. They failed to notice, however, that the length of the pod also mattered; they did not even know the lengths of the pods that they used, even though farmers had an average of 18 years of experience and harvested multiple crop cycles per year and thus had plenty of opportunities for learning by doing.

Even when randomized controlled trials on their own plots demonstrated the importance of both length and spacing—at least for researchers analyzing the data—the farmers did not notice the relationship between length and yields simply from looking at their yields in the experimental plots. Only after researchers presented them with data from the trials on their own plots that explicitly pointed out the relationship between pod size and revenues did farmers begin to change their production method and vary the length of the pods.

At this point, some think, “Oh, those stupid Indonesian farmers! This never happens to me.” Well, it does. Lawrence Summers offers a good example: the airport elevator that took longer to repair than the Empire State Building took to build.

Apart from the decline of social trust in the United States (which is Summers’ main point), slow construction has behavioral roots. Why doesn’t the owner fix his elevator faster? The losses from the broken elevator are not on his books; they’re opportunity costs that few care about. If thousands of people have to walk two extra minutes each day while the elevator stands still, the owner doesn’t notice the losses either. The people do, and not only Harvard professors. Customers become less satisfied with the service (walking around the place is a service, too) and less likely to leave their money there. It boomerangs on the elevator’s owner through a long chain of revenues and rents from airport shops.

Like the Indonesian farmers, managers would notice this “only after researchers presented them with data from the trials on their own [airports] that explicitly pointed out the relationship between” time to fix an elevator and revenue from operations.

And guess what? Managers would never agree to run a proper trial to find this out empirically. It would require some elevators to work and some to stand still in random order, a thing totally unacceptable to managers. After all, they realize that dysfunctional infrastructure isn’t that great.

A single dysfunctional elevator ain’t great either. The math is embarrassingly simple: if ten broken elevators reduce revenues by 10%, then one broken elevator reduces revenues by 1%. (Actually, by more than that: people don’t care much when nothing works, but they do notice small flaws when things run somewhat smoothly.)
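The naive split and the caveat can be written as a toy model. The square-root shape is purely an assumption, chosen only to make the first broken elevator cost more than the tenth:

```python
# Revenue loss from broken elevators: the naive linear split versus
# a concave loss where early failures hurt disproportionately.
# All numbers are illustrative, not measured.

FULL_OUTAGE_LOSS = 0.10  # assumed revenue loss when all 10 elevators fail
TOTAL = 10

def linear_loss(broken, total=TOTAL, worst=FULL_OUTAGE_LOSS):
    return worst * broken / total

def concave_loss(broken, total=TOTAL, worst=FULL_OUTAGE_LOSS):
    # Diminishing marginal damage: small flaws are noticed most
    # when everything else runs smoothly.
    return worst * (broken / total) ** 0.5

print(linear_loss(1))   # 1% under the naive split
print(concave_loss(1))  # about 3%: the first failure costs the most
```

The exact shape doesn’t matter; the point is that a single broken elevator is not one-tenth of the problem.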

If elevators still seem a small issue, big construction projects don’t. Repairing a road takes time, but contractors rarely use that time wisely. Why should they? A single daily 8-hour shift is a cheap way to complete the work in a year. Three shifts could do it in six months, but margins would be smaller. Meanwhile, drivers spend the extra six months in jams around the roadblocks.

A private contractor need not care about drivers, but the government must. Naturally, by paying the contractor more for completing the project fast. That may mean more taxes or more debt, but if drivers realized how much slow roadwork adds to their commute, they would pay to avoid the waste.

Again, like the Indonesian farmers and the airport managers, drivers rarely draw the connection between “downsizing the government” and the personal time lost in jams. First, politicians rarely put things this way; second, people routinely underestimate opportunity costs compared with direct expenses. So they choose fewer taxes and more jams.

Where does behavioral economics lead? Teaching yourself to spot biases is an option, but it probably won’t help with elevators, bridges, and roads. These are problems that deserve routine attention, which governments can provide. If governments prevent crimes, shouldn’t they also take charge of the things that kill our time?