Two plane rides the other day afforded me the opportunity to read Dan Ariely’s Predictably Irrational. Published in 2008, Ariely’s book is a popular treatment of the growing field of “behavioral economics.” This field combines economics and psychology (and sometimes neuroscience) to try to figure out whether people always behave the way the rational-actor model of economics says they will, and if not, why not. Behavioral economists use experimental methods to see how people react to various choice situations and to determine whether they pick the maximizing choice, as the standard economic model says they should.
The results, as Ariely’s very readable book summarizes, are pretty clear: People act “irrationally,” in the sense of not picking the utility-maximizing (that is, money-maximizing) choice, all the time. (Of course this notion of rationality is much more stringent than the Misesian idea of rationality as choosing the appropriate means for a desired end.) But, as his title suggests, the experimental evidence is also clear that these irrationalities are not random, but predictable. Our reasoning processes are subject to a variety of what seem to be built-in biases that lead us to deviate from the rational-actor model. Ariely doesn’t discuss the sources of these biases that much, but other literature on cognition indicates that they may be features of the very structure of our brains that reflect the long evolutionary path that created modern humans.
For example, one of the biases we have is “loss aversion.” We fear losing something more than we value gaining something. Faced with two bets, one with a slightly higher expected value but a greater risk of a large loss, people will tend to pick the one with the lower expected value and the smaller risk of loss. This loss-aversion bias may be the result of evolution, where avoiding danger was more important than improving one’s situation.
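The arithmetic behind this can be made concrete with a small sketch. The payoffs and probabilities below are invented for illustration, not taken from any experiment Ariely reports; the loss-weighting idea follows the stylized Kahneman–Tversky finding that losses loom roughly twice as large as equivalent gains.

```python
def expected_value(outcomes):
    """Expected value of a bet given (payoff, probability) pairs."""
    return sum(payoff * prob for payoff, prob in outcomes)

def loss_averse_value(outcomes, loss_weight=2.0):
    """Subjective value when losses are weighted more heavily than gains.
    A loss_weight around 2 is a common stylized estimate."""
    return sum((payoff if payoff >= 0 else loss_weight * payoff) * prob
               for payoff, prob in outcomes)

# Hypothetical Bet A: higher expected value, but a real chance of a large loss.
bet_a = [(100, 0.5), (-60, 0.5)]
# Hypothetical Bet B: lower expected value, but the worst case is harmless.
bet_b = [(30, 0.5), (0, 0.5)]

print(expected_value(bet_a))      # 20.0 -- A maximizes expected value
print(expected_value(bet_b))      # 15.0
print(loss_averse_value(bet_a))   # -10.0 -- but feels worse to a loss-averse chooser
print(loss_averse_value(bet_b))   # 15.0 -- so B gets picked
```

A strict money-maximizer takes Bet A; a loss-averse chooser, overweighting the possible 60-unit loss, predictably takes Bet B.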
Whatever the source of these “predictable irrationalities,” the evidence that they exist is strong. They matter because in the hands of some, they become an argument for limiting people’s scope for choice, or at least using the power of government to structure choices in ways that will, it is argued, reduce those biases and lead to “better” choices. For some behavioral economists irrationality and its predictability undercut the case for free markets.
But need this be so? Does the case for the market really rest on human rationality this way?
There are two kinds of responses that defenders of free markets can make to this apparent irrationality. First, we can ask whether the case for the market really rests on the rationality of individual actors. Unfortunately, many mainstream approaches to economics imply this is the case, which is what leads some behavioral economists to think that predictable irrationality undermines the market. However, other economists, including the Austrian school, do not require that actors be strictly rational in order to think that markets are good.
What Austrians and their fellow travelers can argue is that it’s not the rationality of market participants that matters, but the institutional context within which they act. In other words, rationality is not a feature of the individual choosers but of the market as a whole. Even if people make “mistakes” by not acting as the strict model would suggest, they will receive feedback from the competitive marketplace that will demonstrate their errors and give them the incentive and knowledge to correct them. Those who can recognize their biases and correct for them will do better than those who can’t, and markets enable us to do that when they are genuinely free and competitive. This is what Nobel laureate Vernon Smith calls “ecological rationality.” Even if individuals are irrational, the system as a whole produces rational outcomes.
The case for markets is not about people making perfectly rational choices. Rather the question is comparative: Under what set of institutions will people learn from and have incentives to correct the mistakes they will inevitably make? The standard is not perfection; it’s learning.
But there’s a second problem with the behavioral economics case against the market. Ariely occasionally notes how this or that government policy might help deal with the biases in our decision-making. It’s not heavy-handed, but it is present. What’s interesting, though, is that he never asks whether political actors will be plagued by the same biases! He seems to assume that people in government are capable of producing the ideal policies to deal with predictable irrationality. But if the biases are human biases, why should we trust that politicians can respond to them rationally? The question thus returns to the comparative one: in which institutional setting do people have the greatest potential to learn from their mistakes?
It’s worth noting as well that Ariely has a habit of explaining “American” problems by reference to these biases. Yet if the biases are human biases, then why are Americans particularly prone to them? Perhaps the problem lies in policy, not biases. Bad policies, as we saw in the housing boom and bust, can create incentives and block knowledge, leading to irrational decisions.
Humans will always be imperfect and less than totally rational, which is precisely why we cannot trust any of them to run the lives of others.