Chapter 5
The Concerns of the Decision Sciences
While it may be difficult to establish a definition which unifies the
vast field of the decision sciences, there are some characteristics of
the methods commonly practiced in this field upon which most practitioners
would agree. These include: a) the methods are principally algorithmic;
b) their purposes are primarily to optimize and secondarily to satisfice
objectives; and c) the methods are primarily numeric and quantitative.
These characteristics are commonly applied to methods within the major
disciplines of: 1) Decision Analysis (DA) and its subset Multiple Criteria
Decision Making (MCDM), 2) Statistics, 3) Forecasting, and 4) Mathematical
Programming. Other areas contained within the decision sciences, but
emphasized less, include Production Quality Control & Scheduling, Markovian
Analysis, Project Management, Simulation, Game Theory, Queuing Theory,
Inventory Control, Material Requirements Planning, Influence Diagramming,
and Financial Modeling. Often the study of these techniques appears within
the disciplines of management science and operations research. We will now
look at the four major areas outlined above with a particular eye toward
demonstrating how the methods in each area contribute to problem solving,
decision making, and overcoming cognitive weaknesses.

Decision Analysis/MCDM Methods

There
are a number of decision techniques which have been developed to provide
a rational model for decision making. The term for the body of these techniques
is Decision Analysis (DA). Multiple Criteria Decision Making (MCDM) was
a term originally ascribed to the body of techniques known as linear programming,
but now MCDM and DA are largely used interchangeably. These techniques
may contain some or all the elements of the following general decision
model.

    Goal/Problem
    Alternatives
    Criteria or Attributes (may include sub-criteria)
    Preference or Likelihood of Occurrence (Uncertainty)
    Measurement Scales (e.g. $, yards, horsepower, yes/no)
    Synthesis Technique

    Figure 5-1. Elements of the General Decision Model

As an example, a goal may be to select a new automobile. Alternatives involved
with this goal might include a BMW 325i, an Acura Legend, and an Oldsmobile
Cutlass. Criteria would include such common characteristics as cost, performance,
obsolescence period, and styling. Preference would relate to a criterion
such as styling; likelihood would relate to a criterion such as maintenance.
The comparison scales would differ depending on the criterion. For example,
dollars would apply for cost, an ordinal scale such as great/good/fair/poor
for styling, and a scale such as years for obsolescence. Finally, a synthesis
technique would dictate how all the above would be "combined" to rank or
distance the alternatives. Methods here include additive, multiplicative,
geometric mean, and vector processing. See Johnson and Huber [DA2] for
more in-depth coverage. Researchers have developed a number of techniques
for organizing, controlling, and effecting these elements. Some of these
follow. DA/MCDM methods are largely utilized for the summary of, selection
from, and synthesis of a set of alternatives. Nine of these techniques
are listed and discussed below.

    Analytical Hierarchy Process
    Bayesian Updating
    Cost Benefit Analysis
    Cost Effectiveness Analysis
    Decision Trees
    Matrix
    Outranking
    Subjective Judgement Theory
    Utility Assessment

    Figure 5-2. Decision Analysis Techniques

More detail concerning each of these methods follows:

Analytical Hierarchy Process (AHP) is largely a satisficing technique developed
by Dr. Thomas Saaty [MC11]. It provides selection and ranking of alternatives
using criteria and pairwise relative comparisons. Its synthesis technique
uses eigenvector and eigenvalue processing, and it includes a normalized
consistency check. Inherent in AHP are the cognitive short-term memory
7±2 concept and a 1 to 9 numeric scale for evaluation; this scale was
developed using human cognitive experiments.
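
As a rough sketch of the eigenvector synthesis (not Saaty's full procedure),
the Python fragment below derives priority weights from a pairwise comparison
matrix and checks consistency; the 3x3 matrix entries and the random-index
constant for n = 3 are illustrative assumptions.

    import numpy as np

    # Illustrative pairwise comparison matrix for three alternatives
    # (entry [i, j] = strength of preference for i over j on the 1-9 scale).
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    # The principal eigenvector gives the priority weights.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                 # index of the largest eigenvalue
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                    # normalize to sum to 1

    # Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
    n = A.shape[0]
    lambda_max = eigvals.real[k]
    ci = (lambda_max - n) / (n - 1)
    ri = 0.58                                   # random index for n = 3
    cr = ci / ri

    print("priority weights:", np.round(weights, 3))
    print("consistency ratio:", round(cr, 3))   # below ~0.10 is usually acceptable
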
Bayesian Updating is an "a posteriori" technique postulated in the 18th
century by Rev. Thomas Bayes. It combines a user's beliefs with evidence
and hypotheses. It follows the basic tenets of mathematical probability
to help a user evaluate network paths for subsequent action. It was developed
in part because humans tend to understate changes in position warranted by
new information.
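
The update rule itself can be sketched in a few lines: the posterior belief
in each hypothesis is the prior times the likelihood of the new evidence,
renormalized. The hypotheses and probabilities below are purely illustrative.

    # Prior beliefs over two competing hypotheses (illustrative values).
    prior = {"market_up": 0.5, "market_down": 0.5}

    # Likelihood of the observed evidence (say, a strong earnings report)
    # under each hypothesis (also illustrative).
    likelihood = {"market_up": 0.8, "market_down": 0.3}

    # Bayes' rule: the posterior is proportional to prior * likelihood.
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    posterior = {h: p / total for h, p in unnormalized.items()}

    print(posterior)   # {'market_up': 0.727..., 'market_down': 0.272...}
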
Cost Benefit Analysis (CBA) is a relatively simple technique in which a dollar
assignment is made to a list of benefits and costs. The final evaluation is
made through an additive or ratio comparison of costs and benefits. A major
complaint about this method is the difficulty in determining benefits.
Cost Effectiveness Analysis (CEA) is a technique in the same genre as CBA.
An effectiveness measure is created for each criterion. The ratio of cost to
effectiveness then provides a ranking of the alternatives. For example, with
effectiveness measured as time to reach 60 mph and cost measured in dollars:

                      Time (0-60 mph)    Cost       Ratio
    Alternative A        5.0 sec        $20,000      250
    Alternative B       15.0 sec        $ 5,000      333

One criticism of this method is the lack of consumer "preference" inherent
in these ratios. In the example above, cost may be a significant factor to
one consumer but not to another; the technique assumes cost "indifference."
Another problem is the lack of a specific synthesis technique to combine
the scaled criteria.
factors and "payoffs" to outcomes (alternatives). A tree is created representing
the outcome of all possible states within the stages of a multifaceted
decision. One criticism of the tree method is that it uses the "expected
value" approach. This approach does not account for the element of risk,
which varies from decision maker to decision maker. Matrix is likely the
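
The expected-value calculation at the heart of the tree method can be
sketched briefly; the alternatives, probabilities, and payoffs below are
fabricated for illustration.

    # Two alternatives, each with probabilistic outcomes (probability, payoff in dollars).
    alternatives = {
        "launch_new_product": [(0.6, 500_000), (0.4, -200_000)],
        "expand_existing":    [(0.9, 150_000), (0.1, -50_000)],
    }

    # Expected value of each alternative: sum of probability * payoff.
    expected_value = {
        name: sum(p * payoff for p, payoff in outcomes)
        for name, outcomes in alternatives.items()
    }

    best = max(expected_value, key=expected_value.get)
    print(expected_value)   # {'launch_new_product': 220000.0, 'expand_existing': 130000.0}
    print("best by expected value:", best)
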
Matrix is likely the most commonly used technique. It is a satisficing
technique which utilizes a simple matrix for the selection of a "best"
alternative. It relies on subjective assignment of weights to the criteria
and of scores to each alternative on each criterion. While simple, it fails
two important tests of DA techniques - accounting for interdependence
between criteria, and establishing distance measures among alternatives
on every criterion.
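
A minimal sketch of the matrix technique, using the automobile example from
earlier in the chapter: subjective criterion weights and subjective scores
are combined by a simple weighted sum. All numbers are illustrative.

    import numpy as np

    criteria = ["cost", "performance", "styling"]
    weights = np.array([0.5, 0.3, 0.2])          # subjective weights, summing to 1

    # Rows are alternatives, columns are scores (1-10) on each criterion.
    alternatives = ["BMW 325i", "Acura Legend", "Oldsmobile Cutlass"]
    scores = np.array([[4, 9, 8],
                       [6, 8, 7],
                       [9, 5, 5]])

    totals = scores @ weights                    # weighted sum for each alternative
    for name, total in zip(alternatives, totals):
        print(f"{name}: {total:.2f}")
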
Outranking was created by B. Roy at the University of Paris. It is less
concerned with a method for applying weights to attributes, and more with a
holistic comparison of Alternative A to Alternative B. Roy's method utilizes
both concordance and discordance measures for accomplishing this. The
concordance measure is a ratio computed by summing the weights of those
attributes on which Alternative A is superior to Alternative B, divided by
the sum of the weights for all attributes. The closer this ratio is to 1.0,
the more strongly Alternative A outranks Alternative B. The discordance
measure looks at the largest difference between the attribute values of A
and B, compared to the largest such difference over all alternatives.
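
As a simplified sketch of the concordance measure described above (discordance
is omitted), with the weights and attribute scores being illustrative
assumptions:

    import numpy as np

    weights = np.array([0.4, 0.35, 0.25])    # importance of each attribute (sums to 1)
    a = np.array([7, 5, 9])                  # Alternative A's scores on each attribute
    b = np.array([6, 8, 4])                  # Alternative B's scores

    # Concordance: total weight of attributes on which A is at least as good as B,
    # divided by the total weight of all attributes.
    concordance_ab = weights[a >= b].sum() / weights.sum()
    print(f"concordance(A over B) = {concordance_ab:.2f}")   # 0.65 here
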
Subjective Judgement Theory (SJT) is a statistically oriented technique which
requires the user to evaluate "holistic" hypothetical combinations of criteria.
SJT converts these evaluations into weights to be applied to each pre-defined
criterion using the least squares method. One criticism is the large number
of evaluations which must be performed to elicit these numbers.
Utility Assessment encompasses several known techniques for extracting a
decision maker's preferences. These include simple ranking, category methods,
direct methods, gamble methods, and indifference methods. There is considerable
value in establishing what is known as a utility curve for each attribute of
a decision. This curve establishes a utility score (e.g. a number from 0 to 10)
over the range of values a criterion can assume. For example, an automobile's
acceleration (from 0 to 60 mph) may take a range of 4.0 to 25.0 seconds,
assigned respectively scores of 10 and 0. A linear curve for this attribute
falls in a straight line from a score of 10 at 4.0 seconds to a score of 0
at 25.0 seconds.

    Figure 5-3. Example of a Utility Curve (utility score on the vertical
    axis, 0-60 mph time in seconds on the horizontal axis)

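The linear utility curve of Figure 5-3 amounts to a straight-line interpolation,
as the sketch below shows; the endpoint values come from the acceleration
example, and the function name is simply illustrative.

    def linear_utility(value, worst=25.0, best=4.0, low_score=0.0, high_score=10.0):
        """Map a 0-60 mph time (seconds) onto a 0-10 utility score, linearly.

        `best` (4.0 s) earns the high score, `worst` (25.0 s) earns the low score;
        values outside the range are clamped to the endpoints.
        """
        value = min(max(value, best), worst)          # clamp into [best, worst]
        fraction = (worst - value) / (worst - best)   # 1.0 at best, 0.0 at worst
        return low_score + fraction * (high_score - low_score)

    print(linear_utility(4.0))    # 10.0
    print(linear_utility(25.0))   # 0.0
    print(linear_utility(14.5))   # 5.0
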
Keeney and Raiffa [DM4] proposed some characteristics for evaluating the
ability of a technique to properly reflect the decision environment. These
include:

    - expression of all dimensions of the problem (DIM),
    - a meaningful link between the alternatives and the criteria (LNK),
    - independence of certainty and preference in an attribute (IND),
    - clear independence in the measures (MEA), and
    - minimal expression of relationships (REL).

It may be somewhat helpful to look at these characteristics in the context
of the techniques just covered. An arbitrary scale of high, moderate, and low
is applied to indicate the degree to which each attribute is present within
a technique. In addition to the stated criteria, two have been added. The
first, valuation (VAL), has been created to evaluate how effectively a
technique attaches value to a specific measure - for example, how well a
specific dollar measure is attached to a criterion, or how a velocity measure
can be utilized within the context of the technique. The second addition,
cognitive assistance (COG), has been added to reflect the inherent ability
of a technique to compensate for cognitive weaknesses in decision making.
    TECHNIQUE        DIM   LNK   IND   MEA   REL   VAL   COG
    AHP              mod   hgh   hgh   mod   mod   mod   hgh
    Bayesian         mod   mod   hgh   low   mod   hgh   hgh
    CBA              mod   mod   low   low   low   mod   low
    CEA              mod   mod   mod   hgh   mod   hgh   low
    Decision Trees   low   mod   hgh   low   low   mod   low
    Matrix           mod   mod   low   mod   low   hgh   low
    Outranking       mod   mod   low   mod   mod   low   mod
    SJT              mod   mod   low   mod   mod   low   mod
    Utility          low   mod   hgh   mod   mod   mod   mod

    Figure 5-4. Decision Analysis Technique Comparison

Statistical Methods

The fundamental purpose of statistical methods, specifically the subclass
of inferential statistics, is to utilize available but limited data to make
decisions. Information about the limited data is gathered and summarized
in a numeric indicator called a statistic. This process is, in effect,
induction. Deduction is then applied via a general technique termed hypothesis
testing to fix the limits within which the statistic may be considered valid.
The statistic to be used varies depending upon the form of the data and the
decision to be made. Some major statistical tests include: 1) the t-test,
2) the Coefficient of Correlation, 3) Chi Square, 4) Analysis of Variance,
and 5) Analysis of Covariance.
Some typical decisions and associated questions which could be answered based
upon these techniques follow:

    The t-test: Should we tighten up our grading system? Is our college GPA
    in line with the mean national college GPA?

    Coefficient of Correlation: Should we change the composition of our work
    groups? Is there a correlation between the effectiveness of a group and
    the personalities of group members?

    Chi Square: Should we market more aggressively in our Southern region?
    Has the demand for automobiles decreased equivalently across the regions
    of the country?

    Analysis of Variance: Should we institute new measures to reduce our
    production error rates? Is the number of errors increasing?

    Analysis of Covariance: Should we strengthen the personnel in the North
    District Office? Is the time that it takes to process a form greater for
    the North District than for the Central Office, all other factors
    remaining constant?
Hypothesis Testing

The purpose of hypothesis testing is to pose questions concerning the possible
conclusions that can be reached through an analysis of the data. This type of
testing is only done with data that is representative of the population, not
the population itself. For example, let's say we want to know the average
number of apples eaten daily by American men. To be 100% sure of our answer
we would have to poll every man in America. Since this is an impractical task,
we instead gather a representative sample of men, ask them how many apples
they eat, and then use this data to make inferences about the population as
a whole. This is done by constraining the limits within which one can fail
to reject those conclusions. We can never fully "accept" these conclusions
because we do not have 100% of the data. Typical parts of a hypothesis test
include:

    - a null hypothesis,
    - an alternative hypothesis,
    - a specific statistical test,
    - a rejection region, and
    - assumptions.

The null hypothesis sets up a statement of fact which the full test will
attempt to disprove or reject. For example, a null hypothesis might be "The
average age of apple eaters is greater than or equal to 60." The alternative
hypothesis is complementary to the null hypothesis, and is actually the thing
you are trying to prove. In this case it would be "The average age of apple
eaters is less than 60." The specific statistical test to select depends upon
the data and what you are trying to prove; in this case it happens to be the
t-test. The rejection region demarcates how certain you must be of the results
derived from the data you do collect. This again revolves around the fact that
we can never be 100% certain of the results indicated by our data unless we
have the entire population. Finally, some assumptions need to be made explicit.
An example is that our sample is representative. If we were to collect our
apple data in Miami Beach, where many retirees live, the data may not be
representative of the population as a whole. Most statistical tests only work
under certain assumptions.
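
A minimal sketch of a one-sample test in the spirit of the apple-eater example,
using SciPy's t-test; the sample of ages is fabricated purely for illustration.

    import numpy as np
    from scipy import stats

    # Fabricated sample of apple eaters' ages (illustrative only).
    ages = np.array([52, 61, 47, 58, 55, 49, 63, 51, 54, 46])

    # Null hypothesis: mean age >= 60.  Alternative: mean age < 60.
    # A one-sided, one-sample t-test against 60:
    t_stat, p_value = stats.ttest_1samp(ages, popmean=60, alternative='less')

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Reject the null hypothesis: the mean age appears to be below 60.")
    else:
        print("Fail to reject the null hypothesis.")
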
In summary, statistical methods attack those cognitive weaknesses in dealing
with data in the perception, thinking, and memory areas. When exposed to only
portions of the population of data, our views cannot always be seen as
representative. These methods force the problem into a rigorous structure
upon which constrained, yet valid, conclusions can be drawn.

Forecasting

Frequently decision makers
are asked to project either the current path of an organization's policies
or the effects of alternative decisions on future paths of the organization.
For example, what will be the predicted effect on sales if we add two salespersons,
or reduce our staff by one? Or what are the possible consequences of pursuing
a more aggressive operations policy? Forecasting techniques developed to help
answer questions of both varieties fall under the headings quantitative and
qualitative.

Quantitative

Quantitative forecasting techniques
utilize past data to help determine relationships between organizational
(and/or outside) elements, or to predict what the future will look like.
There are two general types - regressive and time series. Regressive methods
attempt to point out relationships between or among variables. For example
a rise in the overall value in the stock market may be related to a fall
in the interest rates charged. Time series methods attempt to model the
behavior of variables over time. For example the average cost of new homes
rose at a rate of 10% per year for the last ten years. The available methods
within each major group vary from simple to complex. Some of these major
method types are outlined below with explanations of their use.

Regression

Linear Regression is a simple regressive model where the behavior of a single
variable is predicted based upon the values taken on by another variable.
For example, we could attempt to predict the weight of a college student by
asking him his height. We would use past data on other college students to
create an equation such as:

    WEIGHT = 2.5 x HEIGHT
    (dependent)     (independent)

A common method termed least squares is used to compute this relationship.
The terms used to label these variables are the dependent and independent
variables: WEIGHT is dependent upon the value of HEIGHT.
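
A minimal sketch of fitting such an equation by least squares with NumPy; the
height and weight observations are fabricated for illustration, and the fitted
coefficients will differ from the 2.5 used above.

    import numpy as np

    # Fabricated (height in inches, weight in pounds) observations for past students.
    height = np.array([64, 66, 68, 70, 72, 74])
    weight = np.array([150, 160, 171, 178, 185, 195])

    # Least squares fit of WEIGHT = slope * HEIGHT + intercept.
    slope, intercept = np.polyfit(height, weight, deg=1)
    print(f"WEIGHT = {slope:.2f} x HEIGHT + {intercept:.2f}")

    # Predict the weight of a new student who is 69 inches tall.
    print("predicted weight:", slope * 69 + intercept)
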
Multiple Regression (MR) is an extension of linear regression which may use
two or more variables for predicting the weight. As an example:

    WEIGHT = 2.5 x HEIGHT + 3.2 x WAIST - 250

The least squares method may also be used in MR as long as the relationships
remain first order. MR models may also involve equations of greater than
first order, i.e., powers, roots, and trigonometric functions, but these
require more sophisticated methods for establishing the relationships.
Another regressive method is two stage least squares. This method is used for
establishing complex relationships in areas such as sophisticated economic
forecasting, where more than one equation is involved, the equations contain
error terms, and the dependent variables in some equations appear as
independent variables in others. The method is called two stage because the
estimation proceeds in two passes of least squares: the problematic
independent variables are first estimated from the other variables, and those
estimates are then used in the second, final regression.
Correlation

Correlation is a measure of how strongly two variables relate to each other.
For example, take the following pairs of numbers: (1,2); (2,4); (3,6); (4,8);
(5,10). Note that the relationship between the numbers in each pair remains
the same - the second is double the first. We would say that there is perfect
correlation in these numbers, which in the language of forecasting is given a
value of 1.0, termed the correlation coefficient. Now note the following
pairs: (2,.5); (3,.3); (4,.25); (5,.20); (6,.166). Again the pattern in the
relationship is consistent, but now the relationship is inverse - as the
first number rises, the second one falls. This relationship is given a
negative correlation coefficient, at or near -1. Where there is no discernible
relationship, a coefficient of 0 is assigned.
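
The correlation coefficients for the two sets of pairs above can be computed
directly; a brief sketch with NumPy:

    import numpy as np

    # First set of pairs: the second number is always double the first.
    x1, y1 = [1, 2, 3, 4, 5], [2, 4, 6, 8, 10]
    print(np.corrcoef(x1, y1)[0, 1])   # 1.0 - perfect positive correlation

    # Second set of pairs: as the first number rises, the second falls.
    x2, y2 = [2, 3, 4, 5, 6], [0.5, 0.3, 0.25, 0.20, 0.166]
    print(np.corrcoef(x2, y2)[0, 1])   # roughly -0.92 - a strong inverse relationship
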
Time Series

As explained earlier, time series methods involve tracking a variable over a
time axis. There are two major thrusts in the time series methods - traditional
and auto-regressive. The traditional methods are simpler ones which extrapolate
past data into the future. Variations on this theme provide for "smoothing"
the predictions, for "decomposing" elements such as seasonal fluctuations and
business cycles, and for "filtering" out noise or random fluctuations in the
data. The auto-regressive techniques assume that there is dependence between
or among the data in the time periods. For example, an increase in the
purchase of new cars this month may have been caused, at least in part, by
the lack of car sales last month. The traditional methods do not make this
assumption, or look for these relationships. Some time series methods are
presented in the figure below.

    Moving Average
    Exponential Smoothing
    Classical Decomposition
    Auto-Regressive Integrated Moving Average (ARIMA)
    Box-Jenkins

    Figure 5-5. Time Series Analysis Techniques

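A minimal sketch of two of the traditional methods in Figure 5-5, a simple
moving average and exponential smoothing, applied to a fabricated monthly
sales series; the smoothing constant alpha is an illustrative choice.

    import numpy as np

    # Fabricated monthly sales figures (illustrative only).
    sales = np.array([120, 135, 128, 150, 142, 160, 158, 170], dtype=float)

    # Three-period simple moving average: the forecast for the next period
    # is the mean of the last three observations.
    window = 3
    moving_avg_forecast = sales[-window:].mean()

    # Exponential smoothing: each new observation updates the level with weight alpha.
    alpha = 0.3
    level = sales[0]
    for y in sales[1:]:
        level = alpha * y + (1 - alpha) * level
    exp_smoothing_forecast = level

    print("moving average forecast:", round(moving_avg_forecast, 1))
    print("exponential smoothing forecast:", round(exp_smoothing_forecast, 1))
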
Qualitative Forecasting Methods

In addition to the quantitative methods just outlined,
a number of qualitative approaches for predicting the results of current
and proposed policy exist. Since the focus of this chapter is on algorithmic
approaches, these will just be mentioned. The primary focus of these methods
is to gather a "gut feel" consensus about the future path of events. This
consensus may be gathered from small groups, large groups or by sampling.
On a gross basis, these methods include questionnaires and interviews.
Questionnaires can be sent anonymously or with attribution. A well regarded
method known as the Delphi Technique first polls individuals, compiles
the results, distributes the results and then re-polls the participants.
Another well regarded approach is the Influence Diagram. This technique
requires decision makers to diagram out cause and effect relationships
among all the variables in a problem. In this manner the effects of changes
made to some variables can be followed through. Qualitative methods generally
place more emphasis on the experience, intuition, and wisdom of the method
participants. In summary, forecasting is available in both quantitative and
qualitative varieties. The best approach to predicting future events is often
a combination of the two. In the absence of a crystal ball, however, any
forecasting technique will only be as good as the soundness and quality of
the data which goes into it.

Mathematical Programming

Another
general class of decision science methods, mathematical programming, focuses
upon optimizing some objective. The form of this objective may vary substantially.
It may be to optimally procure or allocate resources, whether they be material
goods, human or financial resources. Or it may be to provide the most efficient
scheduling, movement, balancing, or packaging of goods and services. While
many sophisticated techniques have been developed in this area, many significant
problems remain to be solved. Additionally, many of the techniques already
developed are being constantly replaced by even more efficient algorithms.
The complexity and mass of elements in this genre of problem make this
area particularly challenging. Some major areas of application and research
include Linear Programming, Integer Programming, Goal Programming, Network
Modeling, and Dynamic Programming. Each of these areas is briefly covered
below. Linear Programming (LP) is a technique which optimizes some objective,
subject to constraints we impose. It involves the application of criteria
constraints to an objective function describing the alternatives. For example
we may wish to allocate our research and development funds among the various
divisions of our company. Our objective then may be to maximize the allocation
such that the highest potential return is realized on our research investment.
While the prediction of this return may be subjective in nature, nonetheless
we could apply some quantitative measure to predict it. But we also would
have constraints surrounding this allocation. For example, to spread our
research "risk", we may wish to allocate some minimum amount to each division.
And we of course have some overall limit on the amount of funds we can
allocate. This example portrays only one of many areas that can be approached
using this technique. Typical areas of application for LP include mixing,
assignments, scheduling, and selection. Therefore LP can be used to optimize many
things - hours, dollars, pounds of material, or miles travelled. Three
algorithmic approaches to solving these kinds of problems are available.
In order of increasing sophistication and power, they are the graphical
technique, the simplex method, and the Karmarkar method.
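
A minimal sketch of the R&D allocation example using SciPy's linear programming
routine; the expected returns, budget, and minimum allocations are fabricated
assumptions. Because the routine minimizes, the expected returns are negated
to turn the problem into a maximization.

    from scipy.optimize import linprog

    # Expected return per dollar allocated to each of three divisions (fabricated).
    returns = [0.12, 0.09, 0.15]

    # Maximize total return = minimize its negative.
    c = [-r for r in returns]

    # Total allocation cannot exceed the $1,000,000 R&D budget.
    A_ub = [[1, 1, 1]]
    b_ub = [1_000_000]

    # Spread the risk: every division receives at least $100,000.
    bounds = [(100_000, None)] * 3

    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    print("allocation:", result.x)            # most funds flow to the 15% division
    print("expected return:", -result.fun)
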
Integer Programming (IP) is an extension of Linear Programming for problems which require "integer"
solutions. While solutions using the LP techniques take a "continuous"
form, i.e. real numbers such as 25345.89901, solutions using IP can only
take an integer form. For many problems only an integer answer makes sense:
we cannot purchase 12.365 trucks, or move 16.876 people to Minneapolis. For
many problems we also cannot simply take the LP solution and round it up or down.
This sometimes results in a less than optimal choice. Goal Programming
(GP) is another extension of LP. While LP focuses upon a single objective
and answer, goal programming focuses upon multiple objectives, or stages
of a problem. Frequently problems have multiple, layered, and competing
objectives. Goal programming would establish as its central objective the
resolution of all the sub-objectives. Each sub-objective then becomes a
constraint on the entire problem, and is solved as a unique problem. The
unique results are then folded into the larger problem. Network Modeling
is a very broad area where the algorithms are concerned with working through
optimal paths in networks. These networks can represent physical objects,
activities, or events. For example, the network may be a group of computers
with wire interconnections. Or a network may be a group of roads in a city,
with interspersed retail outlets. In the area of activities, it may be
the stages of building a house - foundation, framing, plumbing, electrical,
roofing, etc. Or in events, the stages of planning surrounding a major
conference. The objectives in establishing a network model vary from minimizing
or maximizing distance covered, to evenly spreading the workload, to simply
determining that the foundation activities are completed before the finishing
touches. Dynamic Programming (DP) is an extension of the general network
model. It works by dividing a problem into stages, then working backward
from the last stage, analyzing all possible states the network can assume.
In this manner a user's objective can assume different optimal configurations
at each stage. DP is effective in multi-period planning where an organization
can work backward from a desired state through periods, analyzing what
decision paths need to be modified at the various stages of development.
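
A minimal sketch of the backward recursion that underlies DP: starting from
the final stage and working toward the first, the cheapest decision is recorded
at every state. The small three-stage network below is fabricated purely for
illustration.

    # Each stage maps a state to its possible decisions: {state: {next_state: cost}}.
    stages = [
        {"start": {"A": 2, "B": 4}},                        # stage 1
        {"A": {"C": 7, "D": 2}, "B": {"C": 2, "D": 6}},     # stage 2
        {"C": {"end": 1}, "D": {"end": 5}},                 # stage 3
    ]

    # Work backward from the last stage: value[state] = cheapest cost to reach "end".
    value = {"end": 0}
    policy = {}
    for stage in reversed(stages):
        new_value = {}
        for state, decisions in stage.items():
            best_next = min(decisions, key=lambda s: decisions[s] + value[s])
            new_value[state] = decisions[best_next] + value[best_next]
            policy[state] = best_next
        value = new_value

    # Recover the optimal path forward from "start".
    path, state = ["start"], "start"
    while state in policy:
        state = policy[state]
        path.append(state)

    print("minimum cost:", value["start"])       # 7 (4 + 2 + 1 along the best path)
    print("optimal path:", " -> ".join(path))    # start -> B -> C -> end
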
Summary

The objective of development in the decision sciences has been to improve the
ability of the human decision maker to make more timely, higher-quality
decisions. Toward this end, extensive algorithmic techniques have been
developed for discovering information about current operations, for optimally
using resources, for looking ahead, and for properly framing the decision
itself. Practicing decision science departments have been highly successful
in embedding these techniques in daily private and public sector operations.
But they are also aware of the limitations and shortfalls in their usage.
Development continues in the DS arena, and should be enhanced significantly
by parallel developments in the AI arena. At the same time, education in this
area needs to be expanded. Many decision makers continue to rely upon faulty,
personal, inductive methods and are unaware of the efficacy of decision
science methods.