A key insight utilised by behavioral economists is that the same choice can be more or less attractive depending on how it is framed - whether, for example, potential gains are emphasised or potential losses. This "framing effect", as it is known, was first explored in detail by the psychologists Daniel Kahneman and Amos Tversky, and forms a key plank of their prospect theory, which in turn was foundational in the development of behavioral economics.
It is always tempting to suppose that though other people fall into the traps set by cognitive biases, one is personally immune to their dangers. So let's put this to the test. Have a go at this quick experiment, inspired by the work of Kahneman and Tversky, and then return here for an analysis of its implications.
The experiment asks you to imagine a scenario where you're the leader of a country, there's an epidemic on the way that's going to kill exactly 600 people, and you've got to do something about it. You're told to choose between two alternative medical programs:
Program A - 200 people will be saved.
Program B - there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.
To date, the experiment has been completed by 31,000 people, and the results tell us that Program A is the preferred option 64% of the time (with Program B being chosen only 36% of the time).
An interesting result, in and of itself, but what's really striking is what happens if you present the two choices in a different way:
Program A - 400 people will die.
Program B - there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.
Given this setup, only 39% of people prefer Program A, with the majority, 61%, opting for Program B.
This result is striking, because Program A and Program B are identical to each other in both setups*, yet the majority preference flips from one setup to the other. According to Kahneman and Tversky, this occurs because we are loss averse - we dislike losses and seek to avoid them.
In the first setup, people likely prefer Program A because it guarantees the lives of 200 people, in comparison to which Program B seems like a risky gamble in which everybody could end up dead. But if you frame the choice differently, then you can get the intuition to run in the opposite direction. Thus, in the second setup, Program A seems unattractive, because it guarantees a terrible loss - 400 people dead - whereas Program B at least contains the chance that everybody will be saved.
This is of interest to behavioral economists because it shows that the attractiveness of choices is not determined solely by rational utility calculations. There are also non-conscious cognitive factors in play. For example, Simon Gächter et al. found that junior experimental economists were more likely to register early for a conference when a penalty fee for late registration was emphasised than when the same two fees - early and late - were presented as discounted and normal. In line with Kahneman and Tversky's findings, the junior economists were more concerned to avoid the loss of the fine than to gain the benefit of the discount, even though the prices involved were identical.
*In both versions, Program A means that 200 people will be saved and 400 will die; and Program B means there's a one-third probability that nobody will die and a two-thirds probability that 600 people will die.
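The arithmetic behind that footnote is easy to check for yourself. Here is a minimal sketch in Python (the variable names and the conversion of deaths into lives saved are mine, not Kahneman and Tversky's) showing that, in expected lives saved, all four options come to the same thing:

```python
TOTAL = 600  # population at risk in the scenario

# "Gain" framing: outcomes stated as lives saved
a_gain = 200                        # 200 saved with certainty
b_gain = (1/3) * 600 + (2/3) * 0    # one-third chance all 600 are saved

# "Loss" framing: outcomes stated as deaths, converted to lives saved
a_loss = TOTAL - 400                              # 400 die with certainty
b_loss = (1/3) * (TOTAL - 0) + (2/3) * (TOTAL - 600)  # expected survivors

print(a_gain, b_gain, a_loss, b_loss)  # every option works out to 200 expected survivors
```

The point, of course, is that no amount of arithmetic changes which framing people are shown - and it is the framing, not the expected value, that drives the reversal in preferences.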