The idea of preference is fundamental to the idea of purposive choice: We prefer some possible outcomes to others and try to select actions accordingly. This is not the same as the claim that people "have" values (or preferences, goals, purposes, desires, etc.), in the sense that they can instantaneously say which of two real or imagined states they prefer at a given moment. As Fischhoff (1991) pointed out, some researchers (e.g., economists, opinion pollsters) behave as though people
500 Judgment and Decision Making
would be decomposed, so that "job knowledge" might include scores for formal education, job experience, and recent training, and so on. Such trees help to connect high-level values to lower level operational measures. More complex interconnections among value elements are also possible (see, e.g., Keeney, 1992).
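A value tree of this kind can be made operational as a simple weighted scoring scheme. The following sketch uses the HR example from the text; the particular weights and the 0-100 measure scores are invented for illustration and do not come from the cited sources:

```python
# Hypothetical value tree for the HR example: high-level goals carry
# weights, and "job knowledge" decomposes into operational measures.
# All weights and scores are illustrative assumptions.

def score(node, measures):
    """Recursively score a value tree: a leaf reads its 0-100 measure;
    an internal node takes the weighted average of its children."""
    if "children" not in node:
        return measures[node["name"]]
    return sum(w * score(child, measures) for w, child in node["children"])

tree = {"name": "candidate attractiveness", "children": [
    (0.5, {"name": "job knowledge", "children": [
        (0.4, {"name": "formal education"}),
        (0.4, {"name": "job experience"}),
        (0.2, {"name": "recent training"})]}),
    (0.3, {"name": "motivation"}),
    (0.2, {"name": "growth potential"})]}

measures = {"formal education": 80, "job experience": 60,
            "recent training": 90, "motivation": 70,
            "growth potential": 50}

print(round(score(tree, measures), 1))  # 68.0
```

Scores roll up from the operational measures to the high-level goals, connecting the two levels of the tree exactly as the text describes.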
Utilities and Preferences
The term utility is used in two different ways in JDM. In the formal, mathematical sense (Coombs, Dawes, & Tversky, 1970), utilities are simply a set of real numbers that allow reconstruction or summary of a set of consistent choices. The rules for consistency are strict but appear to be perfectly reasonable. For example, choices must be transitive, meaning that if you choose A over B and B over C, then you must also choose A over C. Situations in which thoughtful people wish to violate these rules are of continuing interest to researchers (Allais, 1953; Ellsberg, 1961; Tversky, 1969). Utilities, in this sense, are defined in reference to a set of choices, not to feelings such as pain and pleasure.
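Because consistency rules such as transitivity are defined over choices, they can be checked mechanically against any set of observed pairwise preferences. A minimal sketch (the labels and choice sets are invented for illustration):

```python
from itertools import permutations

def transitivity_violations(prefers):
    """Given observed pairwise choices (a, b), each meaning 'a was
    chosen over b', return all triples that violate transitivity."""
    items = {x for pair in prefers for x in pair}
    return [(a, b, c) for a, b, c in permutations(items, 3)
            if (a, b) in prefers and (b, c) in prefers
            and (c, a) in prefers]

# Consistent choices: A over B, B over C, and A over C.
print(transitivity_violations({("A", "B"), ("B", "C"), ("A", "C")}))  # []

# Cyclic choices of the kind studied by Tversky (1969) are flagged.
print(transitivity_violations({("A", "B"), ("B", "C"), ("C", "A")}))
```

The second call reports violations because the cycle A over B, B over C, C over A cannot be summarized by any assignment of real-numbered utilities.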
A very powerful formulation of this choice-based view of utility (von Neumann & Morgenstern, 1947) relies on the idea of probabilistic "in-betweenness." Suppose A is (to you) the "best" in some choice set, and C is the "worst." You rank B somewhere in between. Von Neumann and Morgenstern (1947) suggested that you would be prepared to trade B for a suitable gamble, in which you win (get A) with probability p and lose (get C) with probability (1 - p). You could make the gamble very attractive by setting p close to 1.0, or very unattractive by setting it close to 0.0. Because you value B in between A and C, one of these gambles should be worth the same to you as B itself. The value of p at which this happens is your "utility" for B, and this expresses your preference for B in an unambiguous way.
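This probability-equivalence idea can be illustrated numerically. In the sketch below, the square-root utility over money and the dollar amounts for A, B, and C are assumptions made purely for the example; solving the indifference condition u(B) = p * u(A) + (1 - p) * u(C) for p yields the standard-gamble utility of B:

```python
def u(x):
    # Assumed concave utility over money (illustrative only).
    return x ** 0.5

A, B, C = 100, 50, 0   # best, middle, worst outcomes in dollars (assumed)

# Indifference between B for sure and the gamble (p: A, 1-p: C):
#   u(B) = p * u(A) + (1 - p) * u(C)  =>  solve for p.
p = (u(B) - u(C)) / (u(A) - u(C))
print(round(p, 3))  # 0.707: the utility of B on a 0-1 scale
```

Note that p lands strictly between 0 and 1 whenever B is genuinely ranked between A and C, which is what makes the measurement well defined.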
The beauty of this approach is that it allows a decision maker to evaluate every outcome on a decision tree by the same metric: an equivalent (to her) best/worst gamble. Further, if some of these outcomes are uncertain, their utility can be discounted by the probability of getting them—their "expected utility." If I value some outcome at 0.7 (i.e., as attractive to me as a best-worst gamble with .7 to win, .3 to lose), then I would value a toss-up for that same outcome at (.5 x .7), or .35. This provides a tight logic for expected utility as a guide to complex choices.
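The folding-back arithmetic (.5 x .7 = .35) generalizes to any tree of chance nodes. A minimal sketch reproducing the toss-up example; the tree representation is an assumption made for illustration:

```python
def expected_utility(node):
    """Fold back a tree: a leaf is a utility number; a chance node is
    a list of (probability, subtree) branches."""
    if isinstance(node, (int, float)):
        return node
    return sum(p * expected_utility(branch) for p, branch in node)

# The toss-up from the text: a .5 chance at an outcome with utility .7
# and a .5 chance at the worst outcome (utility 0).
toss_up = [(0.5, 0.7), (0.5, 0.0)]
print(expected_utility(toss_up))  # 0.35
```

Because every outcome is on the same 0-1 gamble scale, arbitrarily deep trees can be evaluated by the same recursive averaging.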
It is not clear how closely this formal view of utility conforms with the experience or anticipation of pleasure, desire, attractiveness, or other psychological reactions commonly thought of as reflecting utility or disutility. Indeed, the introduction of a gambling procedure for measurement gives many people problems because it seems to involve elements
have fully articulated preferences for all possible objects and states of being, whereas others (e.g., decision analysts) suppose that we have only a few, basic values and must derive or construct preferences from these for most unfamiliar choices. An articulated values theorist might study a series of hiring decisions with a view to inferring the relative importance that a particular HR manager gives to different candidate attributes, such as experience, age, and gender. In the same context a basic values theorist might work with the manager to improve the accuracy or consistency with which her values are applied to future hiring decisions. (Indeed, it is possible to imagine doing both studies with the same manager, first capturing her "policy" from a series of earlier decisions and then applying it routinely to subsequent decisions as a form of decision aiding.) Whichever view of valuing one assumes, there is plenty
of evidence to indicate that the process can be imperfect in both reliability and precision. Preferences for alternative medical treatments can shift substantially (for both patients and physicians) when the treatments are described in terms of their mortality rates rather than their survival rates (McNeil, Pauker, & Tversky, 1988). Subjects asked how much they would be prepared to pay to clean up one, several, or all the lakes in Ontario offered essentially the same amount of money for all three prospects (Kahneman, Knetsch, & Thaler, 1986). Simonson (1990) found that people's preferences for different snacks changed markedly from what they predicted a week ahead to what they chose at the time of consumption. Strack, Martin, and Schwarz (1988) found that students' evaluation of their current life satisfaction was unrelated to a measure of their dating frequency when the two questions were asked in that order, but strongly related (r = .66) when the dating question was asked first. Apparently, the evaluation of one's life overall is affected by the aspects one is primed to consider. MBA students' ratings of their satisfaction with and the fairness of potential salary offers were markedly influenced by the offers received by other students in their class (Ordonez, Connolly, & Coughlan, 2000). As these examples suggest, measures of preferences for real-life entities are sensitive to issues of framing, timing, order, context, and a host of other influences. It is unclear whether the problems are primarily those of imperfect measurement or of imperfect development of the respondents' values and preferences themselves.
A common assumption of basic values researchers is that complex value structures are organized in the form of hierarchies or value trees (e.g., Edwards & Newman, 1982). The HR manager, for example, might consider a candidate's attractiveness in terms of a few high-level goals, such as job knowledge, motivation, and growth potential, and assign some importance to each. At a lower level these attributes
of risk as well as outcome preferences. Many people turn down bets such as (.5 to win $10, .5 to lose $5), despite their positive expected value (EV): (.5 x $10) + (.5 x -$5) = $2.50, in the example. Why? One possibility is declining marginal utility: The $10 gain offers only a modest good feeling, whereas the $5 loss threatens a large negative feeling, so the 50-50 chance between the two is overall negative. This is referred to as risk aversion, although it may have little connection to the actual churn of feeling that the gambler experiences while the coin is in the air.
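Declining marginal utility can be made concrete with a concave utility curve. In this sketch the logarithmic utility function and the starting wealth of $8 are illustrative assumptions; with them, the bet from the text has a positive expected value yet lowers expected utility, so it is declined:

```python
import math

wealth = 8.0            # assumed current wealth (illustrative)

def u(x):
    return math.log(x)  # concave: each extra dollar adds less utility

# The bet from the text: .5 to win $10, .5 to lose $5.
ev = 0.5 * 10 + 0.5 * (-5)
eu_bet = 0.5 * u(wealth + 10) + 0.5 * u(wealth - 5)

print(ev)                  # 2.5 -- positive expected value
print(eu_bet < u(wealth))  # True -- the bet is declined anyway
```

The potential $5 loss cuts into the steep part of the curve while the $10 gain lands on its flat part, which is exactly the asymmetry of feeling the paragraph describes.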
The psychology of risk—what is seen as risky, how risk is talked about, how people feel about and react to risk—is a vast topic, beyond the scope of this brief chapter. Many studies (e.g., Fischhoff, Lichtenstein, Slovic, Derby, & Keeney, 1981; Peters & Slovic, 1996) raise doubts about our ability to assess different risks and show very large inconsistencies in our willingness to pay to alleviate them (Zeckhauser & Viscusi, 1990). Public policies toward risk are hampered by large discrepancies between expert and lay judgments of the risks involved (Fischhoff, Bostrom, & Quadrel, 1993; Slovic, 1987, 1993). The notion of risk aversion or risk tolerance as a stable personality characteristic guiding behavior across a range of situations finds little empirical support (Lopes, 1987). This rich and important literature is only imperfectly summarized by proposing a negatively accelerated utility function!