When a sequence of decisions must be made, decision trees are much more appropriate tools than payoff matrices. A decision tree is a graphic decision-making tool typically used to evaluate decisions consisting of a series of steps.

A decision tree consists of three types of nodes:

1. Decision nodes/points are commonly represented by squares

2. Event nodes/points (chance nodes/points) are represented by circles

3. End nodes/points (outcomes) are represented by rectangles

The main elements of a decision tree are shown in Figure 1.

[Figure 1. The main elements of a decision tree: a decision node branches into Alternative 1 (cost 1) and Alternative 2 (cost 2); each alternative leads to an event node, whose branches represent Events 1–3 with probabilities 1–3 and each end in an expected outcome.]

A decision tree is drawn from left to right. One starts a decision tree with a decision node. This node indicates a decision that one needs to make. From this node at least two lines are drawn to the right. Each line corresponds to one option (alternative). The name of the corresponding alternative is written along the line. It is also necessary to indicate the costs associated with implementing the corresponding alternative.

The second component of a decision tree is an event node. This node shows the occurrence of states of nature or events over which the decision maker has no direct control. From a chance node several lines are drawn, each showing a different possible event. One should make a brief note on the line saying what it means. Different events have different probabilities, which should be indicated along the corresponding lines. The events or states of nature must be mutually exclusive, and thus the sum of the probabilities of the events must equal 100 % (or 1). Note that the chance node is identified by a circle. The distinction between circles and boxes (squares) indicates whether the decision maker has control over the events that follow a node.
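As a rough illustration of the node types described so far, they can be modeled as simple data structures; a minimal sketch (all names are hypothetical, not part of the original text):

```python
from dataclasses import dataclass

# Hypothetical minimal node types mirroring the three kinds described above.

@dataclass
class EndNode:
    """End node (outcome): a profit or loss at the tip of a branch."""
    payoff: float

@dataclass
class EventNode:
    """Chance node: each branch carries a probability and a child node."""
    branches: list  # list of (probability, child) pairs

@dataclass
class DecisionNode:
    """Decision node: each branch carries an alternative's name, cost, and child."""
    options: list   # list of (name, cost, child) triples

def probabilities_valid(node: EventNode) -> bool:
    """Events leaving a chance node are mutually exclusive and exhaustive,
    so their probabilities must sum to 1."""
    return abs(sum(p for p, _ in node.branches) - 1.0) < 1e-9
```

The validity check encodes the rule stated above: the probabilities on the branches of any event node must add up to 1.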

At the end of each line, one should indicate the outcomes – usually profits or losses associated with certain branches of a tree.

For convenience the nodes of a decision tree are numbered.

Let us draw a decision tree using the data from the example above. For the sake of simplicity, let us assume that the costs of all alternatives are the same (Figure 3).

Once a decision tree is drawn, the evaluation procedure starts. The evaluation is carried out in the opposite direction, i.e. from right to left. One starts on the right-hand side of the decision tree and moves back towards the left.

Knowing the outcomes and probabilities of events, one can calculate expected values for nodes 2, 3, and 4. The expected value for a node is calculated as the sum of outcomes for certain events multiplied by the probabilities of those events.
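This calculation can be sketched in a few lines; the probabilities and outcomes below are hypothetical illustration values, not figures from the example:

```python
# Expected value at an event node: the sum of each event's outcome
# multiplied by that event's probability.
# All numbers below are hypothetical illustration values.
branches = [
    (0.5, 100_000),   # (probability, outcome) for Event 1
    (0.3, 40_000),    # Event 2
    (0.2, -20_000),   # Event 3 (a loss)
]

expected_value = sum(p * outcome for p, outcome in branches)
print(expected_value)  # 0.5*100000 + 0.3*40000 + 0.2*(-20000) = 58000.0
```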

Completing a set of calculations on a node, one should record the result.

When evaluating a decision node, it is necessary to subtract the cost of implementing each alternative from the expected value that has already been calculated for the event node that alternative leads to. This gives a value that represents the net benefit of that decision.

In our case we did not specify the cost of each alternative. We consider those costs equal, so we would have to subtract the same value from each of the calculated expected values, which would not affect the final result. So, to choose among the alternatives, it is enough for us to compare the expected values for nodes 2, 3, and 4. As can be seen, the highest expected value of profit is for node 2, which corresponds to establishing a chain of travel agencies.
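The comparison at the decision node can be sketched as follows. The alternative names and all figures are hypothetical illustration values (the equal costs mirror the simplifying assumption in the example):

```python
# Roll-back step at the decision node: for each alternative, subtract its cost
# from the expected value of the event node it leads to, then pick the maximum.
# Alternative names and all numbers are hypothetical illustration values;
# costs are taken equal, as in the example.
alternatives = {
    "chain of travel agencies": {"cost": 10_000, "expected_value": 58_000},
    "single office":            {"cost": 10_000, "expected_value": 45_000},
    "franchise":                {"cost": 10_000, "expected_value": 30_000},
}

net = {name: a["expected_value"] - a["cost"] for name, a in alternatives.items()}
best = max(net, key=net.get)
print(best, net[best])  # chain of travel agencies 48000
```

Because the same cost is subtracted from every alternative, the ranking is identical to simply comparing the expected values, which is why the example can skip the subtraction step.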

For the sake of simplicity, just to illustrate how to draw and use a decision tree, we considered a situation where a decision maker had to make a single decision. For a single decision it is more convenient to use a payoff matrix. It makes sense to use decision trees when a decision maker has to make a series of sequential decisions, in which later decisions depend on earlier ones. Such decision trees have several decision nodes. The approach to constructing and evaluating more branched trees is the same. Decision trees involving a sequence of decisions are nothing more than a collection of smaller decision trees, each representing a single time period.