Information, Autonomy and the Ethics of Nudging

In their article "To Nudge or Not To Nudge", the philosophers Daniel M. Hausman and Brynn Welch note that many of those who worry that the current enthusiasm for "nudging" and behavioral science will undermine liberty are concerned not only that individuals will be deprived of a full range of choices, but also about liberty in the broader sense of what might be called rational autonomy: "the control an individual has over his or her own evaluations and choices".

This wider concern is well-founded. We tend to object to subliminal advertising, for example, not because it closes off alternatives, but because it exploits our suggestibility in a way that subverts the decision-making process, so that the choices we make are likely to reflect the tactics of the advertiser rather than our own (relatively) stable preferences.

Hausman and Welch argue that exploiting flaws in the decision-making process to get people to choose a particular alternative is, in effect, "shaping" their choices; and, therefore, that a social policy can be said to be paternalistic if it aims to advance a person's interests by employing non-rational means to push them towards a particular choice. To give a concrete example, food arranged in a delicatessen so that all the healthy choices are at eye level would count as paternalistic, in that it would influence people's choices, but not by offering the sort of reasons that should figure in any rational deliberation between alternatives.

This conception allows Hausman and Welch to argue that merely providing information cannot be seen as an attempt to shape people's choices. They contend that placing nutritional information next to food choices, for example, does not count as paternalistic, because it merely provides the sort of information that might be drawn upon in a rational deliberation over alternatives. It does not exploit flaws in human decision-making processes, and therefore leaves people in control of their own decisions. To this extent, merely providing information seems to respect individual liberty and any moral imperative that states that people should always be treated as rational, autonomous agents.

However, arguably at least, this sort of case is not as clear-cut as Hausman and Welch apparently suppose. First, there is the obvious point that information can be true and yet entirely misleading. For example, if you place a notice next to a slab of bacon stating that scientists have established a clear link between cancer and the consumption of processed meat, you'll be saying something true, but you'll almost certainly cause people to overestimate the cancer risk associated with consumption (to understand why, you need to know the difference between statistical significance and effect size: the association is statistically robust, but the absolute increase in an individual's risk is small).

There is also a more subtle way in which information, even if true, can distort decision-making. Here's an example. If you've ever hiked in bear country in the United States, you'll know that a trailhead will often feature a notice warning of the possibility of a bear encounter. Such notices usually provide no information about how often encounters occur, nor about what proportion of encounters end badly, so they are probably misleading in the sense specified above: if you're not an experienced hiker, you may well think you're more at risk than you actually are.

But, more interestingly, such notices also function to frame a nervous hiker's decision about whether to embark on a hike, or whether to turn back on sensing the presence of a bear (unexplained noises in the undergrowth, say!), with the possibility of danger front and center. It is true, of course, that a risk assessment is properly part of any rational decision to embark on a potentially dangerous activity, but it should not be the only factor in play (if it were, base jumping would cease immediately). If the information provided about bears causes the hiker's attention to fall solely or largely on the cost end of a cost-benefit analysis, then the probability of a flawed decision increases. To put it simply, the nervous hiker is likely to forget that there are arguments for going ahead with the hike even in the face of some element of danger.

This probably sounds a bit counterintuitive. But imagine if a government health warning on cigarette packaging read: "Smoking is dangerous, but smoking is also a pleasure." Such a statement is undeniably true, but it would be much less effective as a nudge than the standard health warnings. The reason is obvious: "Smoking is dangerous" on its own frames the decision about whether to smoke solely in terms of its dangers - it's going to kill you, don't do it. If you add in the fact that it's pleasurable, the "right" decision is less obvious, because the information provided no longer supplies its own answer. If you want to nudge people towards not smoking, you're not going to mention its benefits - you're going to construct your notice so that it directs attention solely towards the cost side of the decision-making calculus. But if you do that, of course, you are exploiting a flaw in the way we make decisions - a tendency to oversimplify, or to attend only selectively - and to this extent you're using true information to shape a particular outcome.