Every international event is the result, intended or unintended, of decisions made by individuals. IR does not just happen. President Harry Truman, who decided to drop U.S. nuclear bombs on two Japanese cities in 1945, had a sign on his desk: “The buck stops here.” As leader of the world’s greatest power, he had nobody to pass the buck to. If he chose to use the bomb (as he did), more than 100,000 civilians would die. If he chose not to, the war might drag on for months with tens of thousands of U.S. casualties. Truman had to choose. Some people applaud his decision; others condemn it. But for better or worse, Truman as an individual had to decide, and to take responsibility for the consequences. Similarly, the decisions of individual citizens, although they may not seem important when taken one by one, create the great forces of world history.

The study of individual decision making revolves around the question of rationality. To what extent are national leaders (or citizens) able to make rational decisions in the national interest—if indeed such an interest can be defined—and thus to conform to a realist view of IR? Individual rationality is not equivalent to state rationality: states might filter individuals’ irrational decisions so as to arrive at rational choices, or states might distort individually rational decisions and end up with irrational state choices. But realists tend to assume that both states and individuals are rational and that the goals or interests of states correlate with those of leaders. The most simplified rational-actor models assume that interests are the same from one actor to another. If this were so, individuals could be substituted for each other in various roles without changing history very much. And states would all behave similarly to each other (or rather, the differences between them would reflect different resources and geography, not differences in the nature of national interests).
This assumption is at best a great oversimplification; individual decisions reflect the values and beliefs of the decision maker. Individual decision makers not only have differing values and beliefs, but also have unique personalities—their personal experiences, intellectual capabilities, and personal styles of making decisions. Some IR scholars study individual psychology to understand how personality affects decision making. Psychoanalytic approaches hold that personalities reflect the subconscious influences of childhood experiences. For instance, Bill Clinton drew much criticism in his early years as president for a foreign policy that seemed to zigzag. A notable Clinton personality trait was his readiness to compromise. Clinton himself has noted that his experience of growing up with a violent, alcoholic stepfather shaped him into a “peacemaker, always trying to minimize the disruption.”

Beyond individual idiosyncrasies in goals or decision-making processes, individual decision making diverges from the rational model in at least three systematic ways. First, decision makers suffer from misperceptions and selective perceptions (taking in only some kinds of information) when they compile information on the likely consequences of their choices. Decision-making processes must reduce and filter the incoming information on which a decision is based; the problem is that such filtration often is biased. Information screens are subconscious filters through which people put the information coming in about the world around them. Often they simply ignore any information that does not fit their expectations. Information is also screened out as it passes from one person to another in the decision-making process. For example, prior to the September 2001 terrorist attacks, U.S. intelligence agencies failed to adequately interpret available evidence because too few analysts were fluent in Arabic.
Similarly, Soviet leaders in 1941 and Israeli leaders in 1973 ignored evidence of pending invasions of their countries. Misperceptions can affect the implementation of policy by low-level officials as well as its formulation by high-level officials. For example, in 1988, officers on a U.S. warship in the Persian Gulf shot down a civilian Iranian jet that they believed to be a military jet attacking them. The officers were trying to carry out policies established by national leaders, but because of misperceptions their actions instead damaged their state’s interests.

Second, the rationality of individual cost-benefit calculations is undermined by emotions that decision makers feel while thinking about the consequences of their actions—an effect referred to as affective bias. (Positive and negative affect refer to feelings of liking or disliking someone.) As hard as a decision maker tries to be rational in making a decision, the decision-making process is bound to be influenced by strong feelings held about the person or state toward which a decision is directed. (Affective biases also contribute to information screening, as positive information about disliked people or negative information about liked people is screened out.)

Third, cognitive biases are systematic distortions of rational calculations based not on emotional feelings but simply on the limitations of the human brain in making choices. The most important of these distortions seems to be the attempt to produce cognitive balance—or to reduce cognitive dissonance. These terms refer to the tendency people have to try to maintain mental models of the world that are logically consistent (this seldom succeeds entirely). One implication of cognitive balance is that decision makers place greater value on goals that they have put much effort into achieving—the justification of effort.
This is especially true in a democracy, in which politicians must face their citizens’ judgment at the polls and so do not want to admit failures. The Vietnam War trapped U.S. decision makers in this way in the 1960s. After sending half a million troops halfway around the world, U.S. leaders found it difficult to admit to themselves that the costs of the war were greater than the benefits.
Decision makers also achieve cognitive balance through wishful thinking—an overestimate of the probability of a desired outcome. A variation of wishful thinking is to assume that an event with a low probability of occurring will not occur. This could be a dangerous way to think about catastrophic events such as accidental nuclear war or a terrorist attack. Cognitive balance often leads decision makers to maintain a hardened image of an enemy and to interpret all of the enemy’s actions in a negative light (because the idea of bad people doing good things would create cognitive dissonance). A mirror image refers to two sides in a conflict maintaining very similar enemy images of each other (“we are defensive, they are aggressive,” etc.). A decision maker may also experience psychological projection of his or her own feelings onto another actor. For instance, if (hypothetically) Indian leaders wanted to gain nuclear superiority over Pakistan but found that goal inconsistent with their image of themselves as peaceful and defensive, the resulting cognitive dissonance might be resolved by believing that Pakistan was trying to gain nuclear superiority (the example works as well with the states reversed).

Another form of cognitive bias, related to cognitive balance, is the use of historical analogies to structure one’s thinking about a decision. This can be quite useful or quite misleading, depending on whether the analogy is appropriate. Because each historical situation is unique in some way, when a decision maker latches onto an analogy and uses it as a shortcut to a decision, the rational calculation of costs and benefits may be cut short as well. In particular, decision makers often assume that a solution that worked in the past will work again—without fully examining how similar the situations really are. For example, U.S.
leaders used the analogy of Munich in 1938 to convince themselves that appeasement in the Vietnam War would lead to increased communist aggression in Asia. In retrospect, the differences between North Vietnam and Nazi Germany made this a poor analogy (largely because of the civil war nature of the Vietnam conflict). Vietnam itself then became a potent analogy that helped persuade U.S. leaders to avoid involvement in certain overseas conflicts, such as Bosnia; this was called the “Vietnam syndrome” in U.S. foreign policy.

All of these psychological processes—misperception, affective biases, and cognitive biases—interfere with the rational assessment of costs and benefits in making a decision. Two specific modifications to the rational model of decision making have been proposed to accommodate psychological realities.

First, the model of bounded rationality takes into account the costs of seeking and processing information. Nobody thinks about every single possible course of action when making a decision. Instead of optimizing, or picking the very best option, people usually work on the problem until they come up with a “good enough” option that meets some minimal criteria; this is called satisficing, or finding a satisfactory solution. The time constraints faced by top decision makers in IR—who are constantly besieged with crises requiring their attention—generally preclude their finding the very best response to a situation. These time constraints were described by U.S. Defense Secretary William Cohen in 1997: “The unrelenting flow of information, the need to digest it on a minute-by-minute basis, is quite different from anything I’ve experienced before. . . . There’s little time for contemplation; most of it is action.”

Second, prospect theory provides an alternative explanation (rather than simple rational optimization) of decisions made under risk or uncertainty. According to this theory, decision makers go through two phases.
In the editing phase, they frame the options available and the probabilities of various outcomes associated with each option. Then, in the evaluation phase, they assess the options and choose one. Prospect theory holds that evaluations take place by comparison with a reference point, which is often the status quo but might be some past or expected situation. The decision maker asks whether he or she can do better than that reference point, but the value placed on outcomes depends on how far from the reference point they are.

Individual decision making thus follows an imperfect and partial kind of rationality at best. Not only do the goals of different individuals vary, but decision makers face a series of obstacles in receiving accurate information, constructing accurate models of the world, and reaching decisions that further their own goals. The rational model is a simplification at best and must be supplemented by an understanding of individual psychological processes that affect decision making.
What are the implications of group psychology for foreign policy decision making? In one respect, groups promote rationality by balancing out the blind spots and biases of any individual. Advisors or legislative committees may force a state leader to reconsider a rash decision. And the interactions of different individuals in a group may result in the formulation of goals that more closely reflect state interests rather than individual idiosyncrasies.

However, group dynamics also introduce new sources of irrationality into the decision-making process. Groupthink refers to the tendency for groups to reach decisions without accurately assessing their consequences, because individual members tend to go along with ideas they think the others support. The basic phenomenon is illustrated by a simple psychology experiment. A group of six people is asked to compare the lengths of two lines projected onto a screen. When five of the people are secretly instructed to say that line A is longer—even though anyone can see that line B is actually longer—the sixth person is likely to agree with the group rather than believe his or her own eyes. Unlike individuals, groups tend to be overly optimistic about the chances of success and are thus more willing to take risks. Participants suppress their doubts about dubious undertakings because everyone else seems to think an idea will work. Also, because the group diffuses responsibility from individuals, nobody feels accountable for actions.

In a spectacular case of groupthink, President Ronald Reagan’s close friend and director of the U.S. Central Intelligence Agency (CIA) bypassed his own agency and ran covert operations spanning three continents using the National Security Council (NSC) staff in the White House basement. The NSC sold weapons to Iran in exchange for the freedom of U.S. hostages held in Lebanon, and then used the Iranian payments to illegally fund Nicaraguan Contra rebels.
The Iran-Contra scandal resulted when these operations, managed by an obscure NSC aide named Oliver North, became public. The U.S. war in Iraq may also provide cautionary examples to future generations about the risks of misinformation, misperception, wishful thinking, and groupthink in managing a major foreign policy initiative.
The structure of a decision-making process—the rules for who is involved in making the decision, how voting is conducted, and so forth—can affect the outcome, especially when no single alternative appeals to a majority of participants. Experienced participants in foreign policy formation are familiar with the techniques for manipulating decision-making processes to favor outcomes they prefer. A common technique is to control a group’s formal decision rules. These rules include the items of business the group discusses and the order in which proposals are considered (especially important when participants are satisficing). Probably most important is the ability to control the agenda and thereby structure the terms of debate.

State leaders often rely on an inner circle of advisors in making foreign policy decisions. The composition and operation of the inner circle vary across governments. For instance, President Lyndon Johnson had “Tuesday lunches” to discuss national security policy with top national security officials. Some groups depend heavily on informal consultations in addition to formal meetings. Some leaders create a “kitchen cabinet”—a trusted group of friends who discuss policy issues with the leader even though they have no formal positions in government. For instance, Israel’s Golda Meir held many such discussions at her home, sometimes literally in the kitchen. Russian president Boris Yeltsin relied on the advice of his bodyguard, who was a trusted friend.
The difficulties in reaching rational decisions, both for individuals and for groups, are heightened during a crisis. Crises are foreign policy situations in which outcomes are very important and time frames are compressed. Crisis decision making is harder to understand and predict than is normal foreign policy making. In a crisis, decision makers operate under tremendous time constraints. The normal checks on unwise decisions may not operate. Communications become shorter and more stereotyped, and information that does not fit a decision maker’s expectations is more likely to be discarded, simply because there is no time to consider it. In framing options, decision makers tend to restrict the choices, again to save time, and tend to overlook creative options while focusing on the most obvious ones. (In the United States, shifting time constraints are measurable in a doubling or tripling of pizza deliveries to government agencies, as decision makers work through mealtimes.)

Groupthink occurs easily during crises. During the 1962 Cuban Missile Crisis, President John F. Kennedy created a small, closed group of advisors who worked together intensively for days on end, cut off from outside contact and discussion. Even the president’s communication with Soviet leader Nikita Khrushchev was rerouted through Kennedy’s brother Robert and the Soviet ambassador, cutting out the State Department. Recognizing the danger of groupthink, Kennedy left the room from time to time—removing the authority figure from the group—to encourage free discussion. Through this and other means, the group managed to identify an option (a naval blockade) between their first two choices (bombing the missile sites or doing nothing). Sometimes leaders purposefully designate someone in the group (known as a devil’s advocate) to object to ideas.

Participants in crisis decision making not only are rushed, but experience severe psychological stress, amplifying the biases just discussed.
Decision makers tend to overestimate the hostility of adversaries and to underestimate their own hostility toward those adversaries. Dislike easily turns to hatred, and anxiety to fear. More and more information is screened out in order to come to terms with decisions being made and to restore cognitive balance. Crisis decision making also leads to physical exhaustion. Sleep deprivation sets in within days as decision makers use every hour to stay on top of the crisis. Unless decision makers are careful about getting enough sleep, they may make vital foreign policy decisions under shifting perceptual and mood changes.

Because of the importance of sound decision making during crises, voters pay great attention to the psychological stability of their leaders. Before Israeli prime minister Yitzhak Rabin won election in 1992, he faced charges that he had suffered a one-day nervous breakdown when he headed the armed forces just before the 1967 war. Not so, he responded; he was just smart enough to realize that the crisis had caused both exhaustion and acute nicotine poisoning, and he needed to rest up for a day in order to go on and make good decisions.

Whether in crisis mode or normal routines, individual decision makers do not operate alone. Their decisions are shaped by the government and society in which they work. Foreign policy is constrained and shaped by sub-state actors such as government agencies, political interest groups, and industries.