V. Extension Of The Small World Concept
Single contingent decisions
Decisions can be made regardless of whether more information has been acquired. Contingent decisions are decision rules that prescribe how decisions should be made in light of new information. A plan consists of a set of contingent decisions. Planning is equivalent to gathering information that reduces uncertainty so that contingent decisions can be made (Friend and Hickling, 1997; Hopkins, 1981; Hopkins and Schaeffer, 1985; Schaeffer and Hopkins, 1987). In this section, we provide a theoretical basis prescribing how such contingent decisions should be made.
Following Marschak (1974), assume, with slight modifications, that the planner of the unitary organization performs two kinds of activities:
The above activities can be reformulated to take planning behavior, and how small worlds are constructed accordingly, explicitly into account. That is, in addition to the above two kinds of activities, the planner specifies a desired small world by changing the probability distribution of the states in that world. This planning under goal setting will be discussed after a simpler situation in which the planner gathers more valuable information without anticipating changes in the probability distribution of the states.
Though the following axiomatic construct and notation are closely based on Marschak and Radner’s formulation (1978), my aim here is to interpret the normative information gathering process in planning in more rigorous terms, in combination with Savage’s work as depicted earlier. Most of the following equations (e.g., Equations (1) through (7)) and theorems (e.g., Lemma 1) are derived directly from Marschak and Radner’s formulation, with slight modifications to incorporate multiple, independent random variables and thereby provide a theoretical context for my illustration of planning behavior. More specifically, the following argument follows Marschak and Radner’s exactly, but generalizes it to the case of multiple, independent random variables; I have therefore modified their equations by adding subscripts to account for the multiple instances. The important point is that, as long as the multiple variables have independent distributions, Marschak and Radner’s argument for a single variable still holds (e.g., Theorem 1). The following elaboration can therefore be considered an extension of their original work.
Let $x_{ik}$ be a state in the grand world for a particular random variable $X_i$, for $k = 1, 2, \ldots, p$. The planner gathers information through an information gathering process, or information structure (Marschak and Radner, 1978), $\eta_i$, which yields a set of signals $y_{ij}$, $j = 1, 2, \ldots, n$, forming $n$ partitions of the outcomes of the random variable of the grand world. Each grand world state $x_{ik}$ corresponds to a signal $y_{ij} = \eta_i(x_{ik})$. The states in a small world are thus the partitions in the form of the signals.
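As a minimal computational illustration (the state labels, signal names, and the particular structure below are hypothetical, not drawn from the text), an information structure can be modeled as a function from grand world states to signals, whose preimages partition the state space into small world states:

```python
# Hypothetical sketch: an information structure eta maps each grand world
# state to a signal; the preimages of the signals partition the states,
# and these partitions serve as the states of the planner's small world.

states = ["x1", "x2", "x3", "x4"]  # grand world states for one random variable

def eta(x):
    """Information structure: a coarse observation of the true state."""
    return "y_low" if x in ("x1", "x2") else "y_high"

# Group the states by the signal they produce (the small world partition).
partition = {}
for x in states:
    partition.setdefault(eta(x), []).append(x)

print(partition)  # {'y_low': ['x1', 'x2'], 'y_high': ['x3', 'x4']}
```

A finer structure (more signals, smaller preimages) is more informative; the identity map recovers the grand world states themselves.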
Following Marschak and Radner, the action performed by a planner is determined by a decision rule $\alpha_i$, and the structure $(\eta_i, \alpha_i)$ establishes the “organizational form” of the planner's decision situation, which is equivalent to a small world according to my definition. Given the information structure $\eta_i$ and the true state of the grand world $x_i$, the information signal is determined by $y_i = \eta_i(x_i)$; the payoff function for the planner can be denoted as

$$\omega_i = \omega(x_i, a_i) = \omega\bigl(x_i, \alpha_i(\eta_i(x_i))\bigr), \tag{1}$$
where $a_i$ is an action taken in light of the signal $y_i$ and the decision rule $\alpha_i$. Assume the probability density function on the states $x_i$ to be $\phi(x_i)$. The expected payoff for the planner, considering all the possible grand world states, becomes

$$U_i = \Omega(\eta_i, \alpha_i) = \sum_{x_i} \phi(x_i)\, \omega\bigl(x_i, \alpha_i(\eta_i(x_i))\bigr). \tag{2}$$
The problem is then to determine the organizational form $(\eta_i, \alpha_i)$ such that $U_i$ is maximized. Note that $x_i$ and $\phi$ are noncontrollable and that $\alpha_i$ is a contingent decision in that the decision will be made based on the information gathered about the true state $x_i$, i.e., $a_i = \alpha_i(\eta_i(x_i))$. Solving this problem for all random variables in one time period is equivalent to making plans through gathering information ($\eta_i$) that are contingent on the resulting signals $y_{ij}$ and the true states of the grand world $x_{ik}$, which indirectly evoke different decisions through the decision rule $\alpha_i$. Plans consist of contingent decisions, which can be evaluated based either on the states of the environment or on the signals or information. If the relation between the signals and the states, that is $y_i = \eta_i(x_i)$, is known, selection of decisions based on either one of the two sets of arguments is sufficient, as shown above.
One way to explore the characteristics of the best decision rule is to fix $\eta_i$, thus leaving only $\alpha_i$ to the planner's choice. “The expected payoff yielded by the best decision function, given the information structure $\eta$” (Marschak and Radner, 1978, p. 50) is denoted by

$$\hat{\Omega}(\eta_i) = \max_{\alpha_i} \Omega(\eta_i, \alpha_i). \tag{3}$$
Let $Y_{ij} = \eta_i^{-1}(y_{ij})$, the set of states yielding the signal $y_{ij}$. The consequences of an action, $\alpha_i(y_{ij})$, are unknown to the planner because there is more than one state corresponding to the signal $y_{ij}$. In this situation, the best decision rule must be selected so as to maximize the conditional expectation of the payoff, given that the true state is in $Y_{ij}$, because a signal is a subset of states. That is, to maximize the expected payoff for a given decision function

$$\Omega(\eta_i, \alpha_i) = \sum_{x_i} \phi(x_i)\, \omega\bigl(x_i, \alpha_i(\eta_i(x_i))\bigr), \tag{4}$$
we can “group the states according to the corresponding signals $y_{ij} = \eta_i(x_i)$” and “the expected payoff above can be rewritten as” (Marschak and Radner, 1978, p. 51)

$$\Omega(\eta_i, \alpha_i) = \sum_{j=1}^{n} \sum_{x_i \in Y_{ij}} \phi(x_i)\, \omega\bigl(x_i, \alpha_i(y_{ij})\bigr). \tag{5}$$
“Choosing a decision function that maximizes $U$ above is equivalent to choosing, for each signal $y_{ij}$,” “an action $\alpha_i(y_{ij})$ that maximizes the term” (Marschak and Radner, 1978, p. 51)

$$\pi(y_{ij}) \sum_{x_i \in Y_{ij}} \phi(x_i \mid y_{ij})\, \omega\bigl(x_i, \alpha_i(y_{ij})\bigr), \tag{6}$$
where $\pi(y_{ij})$ and $\phi(x_i \mid y_{ij})$ are, respectively, the probability that the signal $y_{ij}$ is received and the conditional probability of $x_i$, given that $x_i$ is in $Y_{ij}$. Maximizing (5) is equivalent to maximizing each term (6) separately. The overall expected utility of the best decision function is

$$\hat{\Omega}(\eta_i) = \sum_{j=1}^{n} \pi(y_{ij}) \max_{a_i} \sum_{x_i \in Y_{ij}} \phi(x_i \mid y_{ij})\, \omega(x_i, a_i). \tag{7}$$
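The per-signal maximization just described can be sketched numerically. In the following Python sketch, the prior, the payoff table, the actions, and the two-signal information structure are all hypothetical values chosen for illustration:

```python
# Hypothetical sketch: for each signal, choose the action maximizing the
# conditional expected payoff, then weight the per-signal maxima by the
# signal probabilities to obtain the best decision function's value.

phi = {"x1": 0.4, "x2": 0.1, "x3": 0.3, "x4": 0.2}  # prior over states
actions = ["a1", "a2"]
payoffs = {("x1", "a1"): 5, ("x1", "a2"): 1,
           ("x2", "a1"): 0, ("x2", "a2"): 4,
           ("x3", "a1"): 2, ("x3", "a2"): 6,
           ("x4", "a1"): 3, ("x4", "a2"): 2}

def omega(x, a):
    """Payoff when the true state is x and action a is taken."""
    return payoffs[(x, a)]

def eta(x):
    """Two-signal information structure."""
    return "y_low" if x in ("x1", "x2") else "y_high"

# Group states by signal, maximize the conditional expected payoff per
# signal, and accumulate the signal-probability-weighted maxima.
groups = {}
for x in phi:
    groups.setdefault(eta(x), []).append(x)

alpha_hat, omega_hat = {}, 0.0
for y, xs in groups.items():
    pi_y = sum(phi[x] for x in xs)  # probability the signal y is received
    best = max(actions,
               key=lambda a: sum(phi[x] / pi_y * omega(x, a) for x in xs))
    alpha_hat[y] = best
    omega_hat += pi_y * sum(phi[x] / pi_y * omega(x, best) for x in xs)

print(alpha_hat, round(omega_hat, 3))  # {'y_low': 'a1', 'y_high': 'a2'} 4.2
```

Note that the best rule prescribes one action per signal, not per state: the planner commits in advance to what will be done once each signal is received, which is exactly the sense in which the decision is contingent.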
This leads to the following lemma.
The above formulation is confined to the assumptions that the decision rules or functions are determined for one time period and that the probability density function for the states of the grand world, $\phi(x_i)$, is given. It is possible that the planner would also select the goal of changing that probability density function in making plans, which will be addressed following the exposition of a prescribed information gathering procedure.
Multiple, independent contingent decisions
I have shown, based closely on Marschak and Radner's work (1978, pp. 51-52), how a best decision function for a particular random variable in the grand world should be determined so as to maximize the conditional expected payoff, given an information structure for that variable. When the planner needs to make more than one contingent decision based on signals corresponding to multiple random variables in the grand world, how should he or she make such decisions? Consider $m$ random variables $X_1, X_2, \ldots, X_m$. Lemma 1 depicts the overall expected utility of a decision function; the best decision function maximizes that utility. Because the random variables are assumed to be independent, maximizing the conditional expected utility of the decision function for each random variable, given the corresponding information structure, is equivalent to maximizing the overall expected utility of the decision functions for all random variables under consideration, given the respective information structures.
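This decomposition can be illustrated with a short sketch (again, the states, priors, payoffs, and structures are hypothetical, and payoffs are assumed separable across the independent variables). Each variable's contingent-decision problem is solved on its own, and the resulting expected payoffs simply add:

```python
def best_rule(phi, eta, omega, actions):
    """Best contingent decision rule and its expected payoff for one variable.

    phi: prior over states; eta: information structure (state -> signal);
    omega: payoff function omega(x, a); actions: the available actions.
    """
    groups = {}
    for x in phi:
        groups.setdefault(eta(x), []).append(x)
    rule, value = {}, 0.0
    for y, xs in groups.items():
        best = max(actions, key=lambda a: sum(phi[x] * omega(x, a) for x in xs))
        rule[y] = best
        value += sum(phi[x] * omega(x, best) for x in xs)  # unconditional weights
    return rule, value

# Variable 1: perfectly informative structure (identity map on states).
p1 = best_rule({"s1": 0.7, "s2": 0.3}, lambda x: x,
               lambda x, a: {("s1", "go"): 2, ("s1", "wait"): 1,
                             ("s2", "go"): 0, ("s2", "wait"): 3}[(x, a)],
               ["go", "wait"])

# Variable 2: uninformative structure (every state yields the same signal).
p2 = best_rule({"t1": 0.5, "t2": 0.5}, lambda x: "t",
               lambda x, a: {("t1", "buy"): 4, ("t1", "hold"): 0,
                             ("t2", "buy"): -2, ("t2", "hold"): 1}[(x, a)],
               ["buy", "hold"])

# Independence: the per-variable optima sum to the overall optimum.
overall = p1[1] + p2[1]
print(p1[0], p2[0], overall)
```

Because the variables are independent, no cross-variable coordination is needed: solving each problem separately yields the same rules as solving the joint problem would.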
Without loss of generality, assume for each random variable that there are $p$ states and $n$ signals, or partitions of the states. Thus (7) can be rewritten as

$$\hat{\Omega}(\eta_i) = \sum_{j=1}^{n} \pi(y_{ij}) \max_{a_i} \sum_{k=1}^{p} \phi(x_{ik} \mid y_{ij})\, \omega(x_{ik}, a_i), \quad i = 1, 2, \ldots, m. \tag{8}$$
According to Lemma 1 and (8), we can state Theorem 1 on best independent contingent decisions, covering multiple decisions rather than just the single decision depicted by Marschak and Radner in Lemma 1, as follows:
Theorem 1: Best Independent Contingent Decisions