(Photo: Anderson Mancini)
This is the second part of a series of articles on behavioral finance. It was written by Hannu Kahra, professor of finance at the University of Oulu, and the text is largely based on John Nofsinger's book Psychology of Investing. Read the first part of the series at this link.
2.2 Long-Term Capital Management
Even Nobel Prize winners in the field of economics are prone to overestimating the precision of their knowledge. Consider the plight of the hedge fund Long-Term Capital Management (LTCM). The partners of the fund included John Meriwether, the famed bond trader from Salomon Brothers; David Mullins, a former vice chairman of the Federal Reserve Board; and Nobel Prize winners Myron Scholes and Robert Merton. The firm employed 24 people with Ph.D. degrees. The hedge fund began in 1994 and enjoyed stellar returns. In the beginning of 1998, LTCM had $4 billion in equity. It also had borrowed $100 billion to leverage its positions for higher returns. Its main strategy was to find arbitrage opportunities in the bond market.
In August 1998, Russia devalued its currency and defaulted on some of its debt. This action started a chain of events over the next four weeks that led to devaluation in many emerging countries. Bond and stock markets throughout the world declined. The prices of U.S. Treasury securities skyrocketed as investors fled to the safest investments.
The equity in the LTCM portfolio fell from $4 billion to $0.6 billion in one month. The Federal Reserve Bank feared that a margin call on LTCM would force it to liquidate its $100 billion worth of positions. Selling these positions during this precarious time might have precipitated a crisis that could endanger the financial system. By late September, a consortium of leading investment banks and commercial banks injected $3.5 billion into the fund in exchange for 90 percent of the equity.
How could a hedge fund with such brainpower lose 90 percent of its equity in one month? Apparently, in designing their models, the partners did not think so many things could go wrong at the same time. It appears that they set their range of possible outcomes too narrowly.
2.3 Behavioral finance
Even the smartest people are affected by psychological biases, but traditional finance has considered this irrelevant. Traditional finance assumes that people are "rational" and tells us how people should behave in order to maximize their wealth. These ideas have brought us arbitrage theory, portfolio theory, asset pricing theory, and option pricing theory.
In contrast, behavioral finance studies how people actually behave in a financial setting. Specifically, it is the study of how psychology affects financial decisions, corporations, and the financial markets. We focus on a subset of these issues: how psychological biases affect investors. The investor who truly understands these biases will also appreciate more fully the tools traditional finance has provided.
2.4 Sources of cognitive errors
Many of the behaviors of investors are outcomes of prospect theory. This theory describes how people frame and value a decision involving uncertainty. First, investors frame the choices in terms of potential gains and losses relative to a specific reference point. Although investors seem to anchor on various reference points, the purchase price appears to be important. Second, investors value the gains/losses according to an S-shaped function as shown in Figure 2. Notice several things about the value function in the figure. First, the function is concave for gains. Investors feel good (i.e., have higher utility) when they make a $500 gain. They feel better when they make a $1,000 gain. However, they do not feel twice as good when they gain $1,000 as when they gain $500.
(Figure 2: Prospect theory)
Second, notice that the function is convex for taking a loss. This means that investors feel bad when they have a loss, but twice the loss does not make them feel twice as bad.
Third, the function is steeper for losses than for gains. This asymmetry between gains and losses leads to different reactions in dealing with winning and losing positions.
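The three properties above can be made concrete with the power-function form of the prospect-theory value function. The sketch below uses the parameter estimates commonly attributed to Tversky and Kahneman (curvature 0.88, loss-aversion coefficient 2.25); the specific numbers are illustrative, not part of the text above.

```python
# A minimal sketch of the S-shaped prospect-theory value function,
# assuming the commonly cited power form and parameter estimates.
ALPHA = 0.88   # curvature: diminishing sensitivity to gains and losses
LAM = 2.25     # loss aversion: losses loom larger than equal gains

def value(x, alpha=ALPHA, lam=LAM):
    """Subjective value of a gain/loss x relative to the reference point."""
    if x >= 0:
        return x ** alpha              # concave for gains
    return -lam * ((-x) ** alpha)      # convex and steeper for losses

# Concave for gains: a $1,000 gain does not feel twice as good as $500.
print(value(1000) < 2 * value(500))        # True

# Steeper for losses: a $500 loss hurts more than a $500 gain pleases.
print(abs(value(-500)) > value(500))       # True
```

The same two comparisons hold for any curvature below 1 and any loss-aversion coefficient above 1; the exact parameter values only change the magnitudes.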
An additional aspect of prospect theory is that people segregate each investment in order to track gains and losses and periodically reexamine positions. These separate accounts are referred to as mental accounting. Viewing each investment separately rather than using a portfolio approach limits investors' ability to minimize risk and maximize return.
A different approach to the psychology of investing is to categorize behavioral biases by their source. Some cognitive errors result from self-deception, which occurs because people tend to think they are better than they really are. This self-deception helps them fool others and thus survive the natural selection process. Another source of bias comes from heuristic simplification. Simply stated, heuristic simplification exists because constraints on cognitive resources (like memory, attention, and processing power) force the brain to shortcut complex analyses. Prospect theory is considered an outcome of heuristic simplification. A third source of bias comes from a person's mood, which can overcome reason.
Human interaction and peer effects are also important in financial decision making. Human interactions are how people share information and communicate feelings about the information. The cues obtained about the opinions and emotions of others influence one's decisions.
2.5 Bias and wealth impact
Behavioral finance demonstrates how psychological biases, cognitive errors, and emotions affect investor decisions. It also shows the wealth ramifications of these biased decisions. In other words, not only do people make predictable errors, but those errors have financial costs. As an example, consider that people place too much emphasis on the few observations they have witnessed when making predictions about future outcomes. First consider three outcomes of flipping a coin: Heads, Heads, and Heads. We know that we should expect equal numbers of Heads and Tails in the long run. Observing an imbalance like three Heads leads people to behave as if there is a greater chance of Tails on the next flip. Since we know the underlying distribution (a 50 percent chance of Heads, a 50 percent chance of Tails), we tend to believe in a correction. This is known as the gambler's fallacy and is part of a larger misunderstanding referred to as the law of small numbers.
Consider how this behavior affects those who play the lottery. In the long run, people know that each number in a lottery should be picked an equal number of times. So they tend to avoid numbers that have been picked recently, because it seems less likely that those numbers should be picked again so soon. This fallacy therefore biases people toward picking lottery numbers that have not been picked in a while. You might ask how this affects their wealth; after all, the numbers they pick are as likely to be chosen as any others. Say that everyone who plays the lottery avoids the numbers that have recently been picked. Remember that lottery jackpots are split between all the winners. If I select the recent numbers and my numbers are chosen in the lottery, I am the only winner and keep the entire jackpot. If you are the winner, you are likely to split with others and thus receive only a small share of the jackpot. Our probabilities of winning are the same, but by following the crowd of people suffering from the gambler's fallacy, you would have a smaller expected payoff. Notice that by understanding this bias, I am able to change my decisions to avoid it and position myself to make more money than those who suffer from it.
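The jackpot-splitting argument can be sketched with back-of-the-envelope numbers. The jackpot size, the odds, and the number of co-winners below are all hypothetical; only the comparison between the two expected payoffs matters.

```python
# Hypothetical lottery: same winning probability for every combination,
# but the jackpot is split among everyone who picked the winning numbers.
jackpot = 10_000_000        # hypothetical jackpot size
p_win = 1 / 15_000_000      # hypothetical odds, identical for any pick

# If the crowd avoids recently drawn numbers, those numbers have no
# co-pickers, while "cold" numbers are shared with the fallacy-driven crowd.
co_winners_recent = 0       # hypothetical: nobody else plays recent numbers
co_winners_crowd = 4        # hypothetical: four others share the crowd's pick

payoff_recent = jackpot / (1 + co_winners_recent)   # keep the whole jackpot
payoff_crowd = jackpot / (1 + co_winners_crowd)     # split five ways

expected_recent = p_win * payoff_recent
expected_crowd = p_win * payoff_crowd
print(expected_recent > expected_crowd)   # True
```

The winning probabilities are identical by construction; the entire difference in expected payoff comes from how many other players share the winning ticket.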
Belief in the law of small numbers causes people to behave a little differently in the stock market. With coins and lotteries we believe that we understand the underlying distribution of outcomes. But we don't know the underlying distribution of outcomes for different stocks and mutual funds. In fact, we believe that some stocks and mutual funds are better than others. Here we take the small number of observations we see as representative of what to expect in the future. Unusual success is believed to continue: we believe in "hot hands". When people believe they understand the underlying distribution of outcomes, they predict unusual occurrences to reverse. Alternatively, when they do not know the underlying distribution, they predict unusual performance to continue. We thus see investors "chase" last year's high performing mutual funds.
2.6 Biases
2.6.1 Overconfidence
Extensive evidence shows that people are overconfident in their judgments. This appears in two guises. First, the confidence intervals people assign to their estimates of quantities - the level of the Dow in a year, say - are far too narrow. Their 98% confidence intervals, for example, include the true quantity only about 60% of the time [Alpert and Raiffa (1982)]. Second, people are poorly calibrated when estimating probabilities: events they think are certain to occur actually occur only around 80% of the time, and events they deem impossible occur approximately 20% of the time [Fischhoff, Slovic and Lichtenstein (1977)].
Hence, people can be overconfident. Psychologists have determined that overconfidence causes people to overestimate their knowledge, underestimate risks, and exaggerate their ability to control events. Does overconfidence occur in investment decision making? Security selection is a difficult task. It is precisely this type of task in which people exhibit the greatest degree of overconfidence. Are you overconfident?
Question: Are you a good driver? Compared with the drivers you encounter on the road, are you above average, average, or below average?
How did you answer this question? If overconfidence were not involved, approximately one-third of you would answer above average, one-third would say average, and one-third would say below average. However, people are overconfident in their abilities. In one published study, 82 percent of the sampled college students rated themselves above average in driving ability. Clearly, many of them are mistaken.
Consider this financially oriented example. Starting a business is a risky venture; in fact, most new businesses fail. When 2,994 new business owners were asked about their chances of success, they thought they had a 70 percent chance of success, but only 39 percent thought that any business like theirs would be as likely to succeed. Why do new business owners think they have nearly twice the chance of success as others? They are overconfident.
A Gallup/Paine Webber survey of individual investors conducted in early 2001 demonstrates this overconfidence. Of particular note is that many of those surveyed had recently experienced some negative outcomes after the technology stock bubble collapsed. When asked what they thought the stock market return would be during the next 12 months, the average answer was 10.3 percent. When asked what return they expected to earn on their portfolios, the average response was 11.7 percent. Typically, investors expect to earn an above-average return.
2.6.2 Optimism and wishful thinking
Most people display unrealistically rosy views of their abilities and prospects [Weinstein (1980)]. Typically, over 90% of those surveyed think they are above average in such domains as driving skill, ability to get along with people and sense of humor. They also display a systematic planning fallacy: they predict that tasks (such as writing survey papers) will be completed much sooner than they actually are [Buehler, Griffin and Ross (1994)].
2.6.3 Representativeness
Tversky and Kahneman (1974) show that when people try to determine the probability that a data set A was generated by a model B, or that an object A belongs to a class B, they often use the representativeness heuristic. This means that they evaluate the probability by the degree to which A reflects the essential characteristics of B.
Much of the time, representativeness is a helpful heuristic, but it can generate some severe biases. The first is base rate neglect. To illustrate, Kahneman and Tversky present this description of a person named Linda:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
When asked which of "Linda is a bank teller" (statement A) and "Linda is a bank teller and is active in the feminist movement" (statement B) is more likely, subjects typically assign greater probability to B. This is, of course, impossible. Representativeness provides a simple explanation. The description of Linda sounds like the description of a feminist - it is representative of a feminist - leading subjects to pick B.
Representativeness also leads to another bias, sample size neglect. When judging the likelihood that a data set was generated by a particular model, people often fail to take the size of the sample into account: after all, a small sample can be just as representative as a large one. Six tosses of a coin resulting in three heads and three tails are as representative of a fair coin as 500 heads and 500 tails are in a total of 1000 tosses. Representativeness implies that people will find the two sets of tosses equally informative about the fairness of the coin, even though the second set is much more so.
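The claim that 500 heads in 1,000 tosses is far more informative than 3 heads in 6 can be checked with a likelihood ratio. The biased alternative below (a coin with a 60 percent heads probability) is an arbitrary illustrative choice; any alternative would show the same pattern.

```python
from math import exp, log

def log_lr_fair_vs_biased(heads, tosses, p_biased=0.6):
    """Log likelihood ratio: fair coin (p = 0.5) vs a biased alternative.

    p_biased = 0.6 is an arbitrary illustrative choice; working in
    log space avoids floating-point underflow for large samples.
    """
    log_fair = tosses * log(0.5)
    log_biased = heads * log(p_biased) + (tosses - heads) * log(1 - p_biased)
    return log_fair - log_biased

small = exp(log_lr_fair_vs_biased(3, 6))       # ~1.13: almost no evidence
large = exp(log_lr_fair_vs_biased(500, 1000))  # ~7e8: overwhelming evidence
print(small, large)
```

Both samples are perfectly "representative" of a fair coin, yet the small sample barely distinguishes the fair coin from the biased one, while the large sample favors it by a factor of hundreds of millions.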
Sample size neglect means that in cases where people do not initially know the data-generating process, they will tend to infer it too quickly on the basis of too few data points. For instance, they will come to believe that a financial analyst with four good stock picks is talented because four successes are not representative of a bad or mediocre analyst. It also generates a "hot hand" phenomenon, whereby sports fans become convinced that a basketball player who has made three shots in a row is on a hot streak and will score again, even though there is no evidence of a hot hand in the data [Gilovich, Vallone and Tversky (1985)]. This belief that even small samples will reflect the properties of the parent population is sometimes known as the "law of small numbers" [Rabin (2002)].
In situations where people do know the data-generating process in advance, the law of small numbers leads to a gambler's fallacy effect. If a fair coin generates five heads in a row, people will say that "tails are due". Since they believe that even a short sample should be representative of the fair coin, there have to be more tails to balance out the large number of heads.
2.6.4 Conservatism
While representativeness leads to an underweighting of base rates, there are situations where base rates are over-emphasized relative to sample evidence. In an experiment run by Edwards (1968), there are two urns, one containing 3 blue balls and 7 red ones, and the other containing 7 blue balls and 3 red ones. A random draw of 12 balls, with replacement, from one of the urns yields 8 reds and 4 blues. What is the probability the draw was made from the first urn? While the correct answer is 0.97, most people estimate a number around 0.7, apparently overweighting the base rate of 0.5.
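The 0.97 figure from Edwards' experiment follows from a direct Bayesian update, which the short calculation below reproduces.

```python
# Bayesian update for Edwards' two-urn experiment.
# Urn 1: 7 red, 3 blue  -> P(red | urn 1) = 0.7
# Urn 2: 3 red, 7 blue  -> P(red | urn 2) = 0.3
# Observed: 8 reds and 4 blues in 12 draws with replacement.
lik_urn1 = 0.7 ** 8 * 0.3 ** 4
lik_urn2 = 0.3 ** 8 * 0.7 ** 4

# With an equal prior of 0.5 on each urn, the priors cancel and the
# posterior is just the normalized likelihood.
posterior_urn1 = lik_urn1 / (lik_urn1 + lik_urn2)
print(round(posterior_urn1, 2))   # 0.97
```

The posterior odds reduce to (0.7/0.3)^4, since four of the red draws are offset by the four blue ones; that ratio of about 30:1 gives the probability of roughly 0.97 quoted in the text.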
At first sight, the evidence of conservatism appears at odds with representativeness. However, there may be a natural way in which they fit together. It appears that if a data sample is representative of an underlying model, then people overweight the data. However, if the data is not representative of any salient model, people react too little to the data and rely too much on their priors. In Edwards' experiment, the draw of 8 red and 4 blue balls is not particularly representative of either urn, possibly leading to an overreliance on prior information.
2.6.5 Belief perseverance
There is much evidence that once people have formed an opinion, they cling to it too tightly and for too long [Lord, Ross and Lepper (1979)]. At least two effects appear to be at work. First, people are reluctant to search for evidence that contradicts their beliefs. Second, even if they find such evidence, they treat it with excessive skepticism. Some studies have found an even stronger effect, known as confirmation bias, whereby people misinterpret evidence that goes against their hypothesis as actually being in their favor. In the context of academic finance, belief perseverance predicts that if people start out believing in the Efficient Markets Hypothesis, they may continue to believe in it long after compelling evidence to the contrary has emerged.