Chapter 6
Cross-Fertilization in AI & DS
This chapter examines the common elements underlying AI and DS, and also looks at combinations of their theory and technique. It will be shown that these common elements and combinations can be synthesized to create an enhanced approach to problem solving. The following sections address the issue of technique commonality by: 1) combining elements of purpose, 2) addressing common underlying theory, and 3) establishing an awareness of specific techniques which accomplish "like" objectives. The last part of this chapter examines a number of examples where AI has been added to DS based systems and vice versa. The systems examined include real world implementations, prototypes, and commercial software packages.

A Combined Purpose

The beginnings of chapters 4 and 5 outlined the principal characteristics
of AI and DS systems. By combining these characteristics we find our new AI/DS goals to be: the enhancement of algorithmic methods with heuristic extensions; the extension of analysis by satisficing goals where optimization is not possible or practical; and the addition of qualitative/symbolic techniques to models which are largely quantitative. In the instances above, the reverse should also prove true; i.e., AI methods can, in many cases, be improved via DS technology. For example, in early AI languages such as LISP and PROLOG, the handling of mathematical variables and the accessing of databases were very difficult. After much complaining by analysts and programmers, vendors and language creators made numeric and large scale data handling an integral part of their products. Another example is the proposed use of AHP to help determine the value of certainty factors in expert systems. [ES2]

An interesting sidebar to any effort to combine these fundamental purposes is the preexisting commonality in the underlying nature of the two areas. There is a wide variety of techniques available in the AI and DS worlds. Andriole [GN1] has catalogued over 1000, and Hopple [DS3] has created a taxonomy for classifying them, shown below.

Figure 6-1. Hopple's Taxonomy of Techniques

Why look at commonality and/or
extensionality in AI/DS techniques? There are two major potential problems with the use of techniques. The first is that we need to be sure we do not get trapped into what Hopple terms "methodolatry", where we attempt to use one technique to solve all problems. As the adage goes, "To a three year old with a hammer, all the world is a nail." The second is that we must not spend too much time reinventing the wheel, the Not Invented Here (NIH) syndrome. Overcoming these problems is difficult because it requires a large body of knowledge on both what techniques are available and how to apply them. Researchers including Andriole [GN1], Nunamaker [GN8], and Westling [GN13] have researched and built systems to help mitigate this situation. Despite their work, differentiating among techniques remains difficult. Potential solutions to this problem are discussed in the final chapter.
We all know that there are many ways to travel from here to California, e.g., by plane, train, bus, car, or foot. Which method is better? What criteria exist for establishing the "better" method? Do we use processing time? Processing cost? Simplicity of the technique? Validity and reliability of the solution? Least cognitive dissonance? All of these? If so, how do we discover and/or test our evaluation, especially when minor differences exist and substitutions from a body of thousands of techniques may prove equally effective? There is no easy answer, except perhaps a better understanding of the fundamental thrust of the individual areas. The following diagram outlines some of the criteria which may be helpful in determining commonality and/or differences within technique classes.
                      Neural Net         Decision System        Expert System

Learning              some               none                   none
Problem domains       limited            broad scope            moderate scope
Repeatable            part of purpose    usually unique         part of purpose
Complexity of         black box;         simple & complex;      simple; linear;
problem solution      non-linear         linear & non-linear    heuristic
                                         algorithmic
Ability to handle     built in           poor; model needs      none
changing environment                     respecification
Time and difficulty   short; moderate    depends on technique   short to long;
to implement                                                    moderate

Figure 6-2. Technique Class Attribute Comparison

Underlying Commonality in Axioms & Theory

The overlap in AI and DS methods and theory exists from both a practical standpoint and a theoretical one. From a practical
standpoint there is an overlap in three principal areas. These include
the extensive use of the computer for data storage and access, the computational
burden of numeric or symbolic manipulation, and the heavy use of modeling.
In a more theoretical vein, White[GN14] pointed out the overlap in the
conceptual bases of graphs, networks and search. To this should be added
the topics of subjectivity and uncertainty.

Figure 6-3. Theoretical Underpinnings in AI and DS: Graph & Network Theory, Search Algorithms, Uncertainty Handling, Subjectivity Measures
Graph theory is a heavily researched area in the decision sciences. The same is true in AI, because the underlying knowledge representation mechanisms (semantic nets, frames, and neural networks) have graphs as their basis. Classic operations research problems such as the traveling salesman problem and source-to-destination routing utilize various search algorithms to minimize distances. Likewise, in the AI discipline, "Search problems are ubiquitous, popping up everywhere Artificial Intelligence researchers and students go" [Winston [AI11] p. 87].
The handling of uncertainty is a large domain within generic OR, including classical probability, decision trees, and Bayesian methods. Virtually all statistical methods utilize probability distributions as a partial measure of uncertainty. In artificial intelligence we find certainty factors, Dempster-Shafer techniques, and fuzzy logic, all for dealing with inexact reasoning. See Ng and Abramson [GN7] for a discussion of six techniques for handling uncertainty in expert systems, three of which are in common use in decision systems.
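As a small illustration of the AI side, the sketch below implements the EMYCIN-style rule for combining two certainty factors bearing on the same hypothesis; the values are invented and the sketch is not taken from Ng and Abramson's survey itself.

    def combine_cf(cf1, cf2):
        # EMYCIN-style combination: two confirming factors reinforce each
        # other without ever exceeding 1; mixed evidence partially cancels.
        if cf1 >= 0 and cf2 >= 0:
            return cf1 + cf2 * (1 - cf1)
        if cf1 < 0 and cf2 < 0:
            return cf1 + cf2 * (1 + cf1)
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

    # Two rules each lend partial support to the same conclusion.
    print(combine_cf(0.6, 0.4))   # 0.76, stronger than either rule alone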
Finally, subjectivity measures exist strongly within the decision sciences in the form of risk preference, and within AI as the very basis of design, resident in rules, frames, or nets. Both practitioners and theoreticians need to be aware of these common elements to eliminate unnecessary incursions into research areas which have already been explored. Some examples of that duplication follow.

Commonality/Extensionality in Techniques

AI and DS cannot escape
the old cliche "There is nothing new under the sun." A number of these techniques, while perhaps having radically different origins, accomplish virtually identical tasks with equal alacrity. Some examples follow in the table below. Lippman, in the NeuralWorks manual, provides a list of statistical methods compared to their NN counterparts. The Anderson et al. article reviews in some depth the use of eigenvector/eigenvalue processing for feature analysis in neural network systems. Lawrence provides a critique of the Brainmaker software package, but also discusses the use of this tool as a "fully automated nonlinear multidimensional regression analysis tool". He also addresses the use of NNs against nondeterministic polynomial (NP) problems. Odom presents a comparative study of back propagation "generalization" (using Neuroshell) against multivariate discriminant analysis (MDA) (using the SAS DISCRIM procedure). White explains that back propagation is identical to a statistical procedure called the "stochastic approximation method". He differentiates the techniques by such characteristics as binary v. continuous valued input, supervised v. unsupervised learning, and pattern type. Rendell tests six different learning methods including curve fitting, response surface fitting, and genetic algorithms.
AI Based                  DS Based                         Reference

Neural Nets               Statistical Methods              Lippman [S4]
Neural Nets               AHP                              Anderson [S1]
Neural Nets               NonLinear Multiple Regression    Lawrence [S6]
Neural Nets               Multivariate Discriminant        Odom [S8]
                          Analysis
Neural Nets               Stochastic Approximation         White [S10]
Genetic Algorithms        Curve Fitting                    Rendell []
Learning Algorithms       Exploratory Data Analysis        Fisher & Langley
Neural Nets               Correlation                      Xenakis [S11]
Neural Nets               Bayes Classifier                 Guyver []
Constraint Satisfaction   Integer Programming              Dhar & Rang. [RA2]
Rules                     Algebraic Representation         Fordyce [GN5]

Figure 6-4. AI & DS Techniques with Similar Bases or Purpose

We have now examined
the issue of commonality. It is now time to turn to specific supporting examples in both the research and commercial sectors.

Practical AI Applied to DS

What follows is an examination of 38 systems or proposals for systems. Ten of these come from "in place", real world systems. Twenty could best be labeled research prototypes or proposed, detailed directions for applying AI to DS. In addition, eight commercial software packages, which in some form provide an enhanced AI capability to a fundamental DS function, are examined. These systems exist in all DS areas including MCDM, statistics, forecasting, linear programming, planning, scheduling, queuing, and project management. It should be understood, however, that in some cases the thrust of the real world or prototype system really emanates from the AI world, and the minor emphasis is on DS. The order of presentation follows the 1) real world, 2) prototype or proposed, and 3) software format already established. The table below provides a complete list.
REAL WORLD

    AI Based          DS Based         System Name      Reference
 1) Rules             MCDM             n/a              Levine et al
 2) Rules             MAUT             n/a              Madey et al
 3) Expert System     Statistics       PREVISTA         Walker & Miller
 4) Rules             Statistics       REX              Gale
 5) Frames            Statistics       ZEERA            Marcoulides
 6) Expert System     Forecasting      IFFX             Walker & Miller
 7) Expert System     Forecasting      EMEX             Walker & Miller p. 158
 8) Expert System     Forecasting      SMARTFORECAST    Walker & Miller p. 192
 9) Rules             Resource Alloc.  n/a              Levine & Pomerol
10) Expert System     Project Mgmt     XPM              Walker & Miller p. 189

PROTOTYPES/PROPOSALS

    AI Based          DS Based         System Name      Reference
 1) Rules             Decision Analy.  n/a              Holtzmann
 2) Neural Nets       Assessment MCDM  n/a              Wang
 3) Neural Nets       MAUT             n/a              Madey
 4) Learning          MAUT             n/a              Madni et al
 5) Fuzzy Logic       Cost/Effectiv.   n/a              Dockery
 6) Predicate Logic   Decision Theory  n/a              Fox et al
 7) Induction         Decision Trees   n/a              Quinlan
 8) Rules & NL        Statistics       IS               Remus
 9) Inference         Statistics/Math  AR               Lacy
10) Expert System     Statistics       ASA              Walker & Miller p. 211
11) Expert System     Statistics       CARDS            Walker & Miller p. 211
12) Expert System     Statistics       Experiplan       Walker & Miller p. 212
13) Expert System     Forecasting      n/a              Kumar & Hsu
14) Expert System     Forecasting      n/a              Kuo
15) Neural Nets       Linear Progr.    n/a              Mort
16) Rules             Linear Progr.    n/a              Murphy and Stohr
17) Frames            Linear Progr.    n/a              Binbasioglu/Jarke
18) Semantic Nets     Linear Progr.    n/a              Evans and Camm
19) Natural Lang.     Queuing          NLPQ             Feigenbaum
20) Expert System     Queuing          SQS              Hossein et al.

SOFTWARE

    AI Based          DS Based         System Name
 1) Cognitive         MCDM             SmartEdge
 2) Rules Matrix      MCDM             Lightyear
 3) Induction Matrix  MCDM             Expert 87
 4) Cognitive AHP     MCDM             Expert Choice
 5) Expert Systems    Statistics       Statistical Navigator
 6) Expert Systems    Statistics       Knowledge Seeker
 7) Expert Systems    Project Mgmt     Proj. Mgmt. Advantage
 8) Expert System     Influence Diag.  AIDA

Figure 6-5. Existing AI/DS Systems

Some caution must be expressed when
looking at these real world and proposed systems. For example, the systems either proposed or built by Remus & Kotteman and Kumar & Hsu, as well as the software products Statistical Navigator and Knowledge Seeker, all fall into the category of advisory systems. That is, they simply prescribe a statistical or forecasting technique based upon some description of what the user needs and a description of the data. Obviously a similar system could be built for guidance through the family of linear programming techniques and other disciplines in the OR/MS world. There is therefore little technological symbiosis here, just a simple treelike advisor in a black box. However, there are systems which have been created which do exhibit a powerful melding of the AI and DS disciplines. Some of these follow.

Some Real World Systems

Four "real world" projects in which technological
symbiosis is particularly evident include the Madey et al contract bidding
system, the Levine & Pomerol French Railway system, the Levine, Pomerol,
and Saneh postal sorting machine selection, and the Gale et al REX system.
Madey et al developed a combination expert system/multiattribute utility model to help an aerospace firm bid on selected projects. Resident in the expert system component are certain guidelines revolving around financial areas such as NPV and payback, and around technical areas such as estimating the work involved. The MAUT component acts as an evaluation function. Ultimately the user is provided with a continuous (versus discrete) score for ranking the projects.

Levine and Pomerol were concerned with railcar distribution
throughout France's 33 nationwide regions. The use of strictly linear programming was considered, but it left two gaps: the first in computational time and the second in top level strategy. Processing the routing of 100 different types of railcars, using 1056 variables in 33 regions, and so on, required an all night LP run to come up with a static solution. When sticky problems occurred after the run, the operators would use some top level reasoning to tweak the new routings. The new system combines a traditional allocation algorithm with operator level reasoning placed in schema trees. These trees define the priority of requests and arising deficiencies. Diagrams of the general architecture and a schema tree are shown below.

Figure 6-6A. Railcar System Architecture
Figure 6-6B. Region Schema Tree
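The schema trees' internals are not reproduced in this text, so the fragment below is only a guessed-at sketch of the general idea, with invented regions, rules, and weights: after the allocation run, operator-level rules rank the outstanding requests the LP left unresolved.

    # Hypothetical priority rules in the spirit of the schema trees.
    requests = [
        {"region": "Nord", "cars": 40, "deficit": True,  "perishable": True},
        {"region": "Est",  "cars": 25, "deficit": False, "perishable": False},
        {"region": "Sud",  "cars": 30, "deficit": True,  "perishable": False},
    ]

    def priority(req):
        # Higher score is served first; the weights are illustrative only.
        return 3 * req["perishable"] + 2 * req["deficit"] + req["cars"] / 100

    for req in sorted(requests, key=priority, reverse=True):
        print(req["region"], round(priority(req), 2))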
Levine, Pomerol, and Saneh used an interesting transform of rules into a decision matrix to help the postal service select a sorting machine. The essence of this system is a set of rules which analyze data either to aggregate some subcriteria qualities or to evaluate nonquantitative criteria such as political risk. That is, the system begins with raw data and produces a matrix with established values. An example of some of the rules and the resultant decision matrix is depicted below.
Figure 6-7A. Rules for Sorting Machine Selection
Figure 6-7B. Decision Matrix Derived from Rules

The REX system was built by Gale and Pregibon
at Bell Labs. REX is used to guide a user through building a regression
analysis. It extends traditional advisory systems in that it conducts an
ongoing dialogue with the user while performing data analysis. Some features
that it offers include depth first search of problem possibilities, test
interpretation including a lexicon, and graphics with interpretation. Test offerings include granularity, extreme point analysis, spacing analysis, missing data analysis, and skewness tests. As Gale states, "Existing statistical software, although quite powerful, requires considerable statistical knowledge to be used effectively. The capability to serve more people would be the basis for a viable product because it would increase productivity and reduce training requirements." The diagram below depicts a screen from a session with REX.

Figure 6-8. Screen of Session with REX
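REX's dialogue and test battery are far richer than anything shown here, but the toy fragment below, with invented data and an invented advisory threshold, conveys the flavor of one of its checks: compute a skewness statistic, then talk to the user about what was found.

    import numpy as np
    from scipy.stats import skew

    data = np.array([1, 2, 2, 3, 3, 3, 40])   # one extreme point
    s = skew(data)
    # Invented threshold for illustration, not REX's actual rule.
    if abs(s) > 1:
        print(f"skewness = {s:.2f}; consider a transform or examine extreme points")
    else:
        print(f"skewness = {s:.2f}; no strong asymmetry detected")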
Statistics is an interesting area for the application of AI techniques. Inferential statistics can be viewed as a marriage of inductive and deductive reasoning. The number crunching to produce a statistic from data is an inductive process, and the purposeful limitation of the scope of application through a hypothesis test is recognized as deduction. Some of the mainstream research in inductive reasoning follows from basic statistical analysis such as hypothesis testing of a single dependent variable. See Barr & Feigenbaum [AI4] for a description of BACON, CLS, and ID3, three inductive reasoning systems.
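A one-sample t-test makes the marriage visible in a few lines of Python: the statistic is induced from the raw data, and the accept/reject decision follows deductively from a rule fixed in advance (the data and threshold here are illustrative only).

    import numpy as np
    from scipy import stats

    # Induction: condense raw observations into a single statistic.
    data = np.array([5.1, 4.8, 5.4, 5.0, 4.7, 5.3])
    t, p = stats.ttest_1samp(data, popmean=5.0)

    # Deduction: apply the decision rule fixed before the data were seen.
    print("reject H0" if p < 0.05 else "fail to reject H0")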
Some Prototypes and Proposals

A number of interesting prototypes and system proposals exist. Five of these, dealing with a wide array of possibilities, are highlighted here. Three apply learning: two within the realm of MCDM, and the third to limited resource allocation. Another example looks at using natural language processing to create queuing models. And the last example utilizes a combined expert system/Bayesian model to support forecasting.

Learning in neural networks
(NNs) has become a hot topic in the scientific world. The effects of applying
it to decision making show promise. Wang [MC13] has proposed a model for using NNs as an assessment technique in MCDM. As he states, "The motivation of this approach is to capture the essence of the decision maker's rational preferential behavior with artificial neural networks via supervised learning." While other accepted methods now elicit preference behavior, two weaknesses in those approaches are the assumption of conditional independence among attributes and inflexibility in the defined, a priori state. White [GN14] examines
this same problem, pointing to the use of rules as a partial solution.
Wang's method creates a mapping mechanism resembling the decision maker's
preference behavior. His mechanism should be contrasted to decomposition
methods now in common use. It shows superiority in its assumptions of independence and in its immunity to noisy data. Weaknesses, however, arise when examining the likely limited data set which must be used for training. Figure 6-9 below portrays this system.

Figure 6-9. Neural Network Preference Assessment Model
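Wang's actual architecture is not reproduced here; the sketch below merely shows the kind of supervised mapping the proposal describes, with a tiny invented network trained to mimic a decision maker's scores for alternatives described by three attributes.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((20, 3))                        # 20 alternatives, 3 attributes
    y = 0.5*X[:, 0] + 0.3*X[:, 1] + 0.2*X[:, 2]    # stand-in for elicited preferences

    W1 = 0.5 * rng.normal(size=(3, 4))             # one small hidden layer
    W2 = 0.5 * rng.normal(size=4)
    for _ in range(5000):                          # plain batch gradient descent
        h = np.tanh(X @ W1)
        err = h @ W2 - y
        W2 -= 0.1 * h.T @ err / len(X)
        W1 -= 0.1 * X.T @ (np.outer(err, W2) * (1 - h**2)) / len(X)
    print(abs(np.tanh(X @ W1) @ W2 - y).max())     # residual shrinks with training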
Madni et al proposed an interesting system which combines the advantages of an adaptive learning
model with the attributes of a MCDM model. These researchers wished to
establish a model which would filter relevant messages from a large incoming
stream. A user would begin by evaluating a message by applying weights
to a set of criteria for judging the value of that message. These criteria
included content area, age, specificity, familiarity, precedence, and locale. This
process would be repeated for a series of messages. At the same time each
message would be evaluated on a rank order basis, establishing its place
within the message set. Based upon previous weightings, the system would
also estimate the message ranking. Differences between the user's rank
and the system rank would be used to adjust the attribute weight factors.
Once the system error margin had been reduced to an acceptable level, new
incoming messages could then be channeled by the system. Figure 610 below
portrays this system. ÉÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍ»
º º º º º º º º º º º
º º º º º º º º º º º
º º º º º º º º º º º
º º º º º º º º º º º
º º º º º º ÈÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍͼ
Figure 6-10. Message Attribute Training Schematic Mort has suggested the
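Madni et al's system worked from rank-order comparisons; the simplified sketch below substitutes numeric scores and a delta-rule update, so it is an analogy rather than their algorithm. The gap between the user's evaluation and the system's prediction drives the adjustment of the criterion weights.

    import numpy as np

    def train_weights(features, user_scores, lr=0.05, epochs=500):
        # Error-driven (delta rule) adjustment of criterion weights.
        w = np.ones(features.shape[1]) / features.shape[1]
        for _ in range(epochs):
            error = user_scores - features @ w
            w += lr * features.T @ error / len(features)
            w = np.clip(w, 0.0, None)        # keep weights non-negative
        return w / w.sum()

    # Hypothetical messages rated on three criteria (content, age, precedence).
    X = np.array([[0.9, 0.2, 0.7],
                  [0.1, 0.8, 0.3],
                  [0.6, 0.5, 0.9]])
    y = np.array([0.8, 0.2, 0.7])            # user's overall evaluations
    print(train_weights(X, y).round(2))      # learned relative weights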
Mort has suggested the application of artificial neural networks to limited resource allocation.
Unlike traditional DS methods, such as integer programming, Mort's method
uses a time dimension. This time dimension is used to apply learning. His
system contains the elements of system effectiveness, constraints, and
how to best respond. System effectiveness corresponds roughly to an objective
function evaluation in traditional mathematical programming. Constraints
act in a manner identical to their mathematical programming cousins. Since
this is a time based system, resources may be released (as responses) in
different time periods. Mort has postulated a unique approach toward understanding
system effectiveness which he calls "differential ratio learning". It is
based on a common neural network learning algorithm developed by Hebb[NN1].
While Mort's approach is of limited value in a static environment, it would
have application in environments with changing available resource levels
and changing goals. As an example, he uses it in a battle scenario where
force levels and threats may be very dynamic. Figures 6-11A and 6-11B below provide a depiction of his approach.

Figure 6-11A. Layered Limited Resources Network
Figure 6-11B. Neural Network Structure
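Mort's "differential ratio learning" is not spelled out here in enough detail to reproduce, so the fragment below shows only the common Hebbian rule the text cites as its basis: weights grow where input and output activity coincide (all values invented).

    import numpy as np

    def hebb_update(w, x, y, lr=0.1):
        # Plain Hebbian learning: w_ij += lr * y_i * x_j
        return w + lr * np.outer(y, x)

    w = np.zeros((2, 3))                  # 3 resource inputs -> 2 responses
    x = np.array([1.0, 0.0, 1.0])         # resource availability pattern
    y = np.array([0.0, 1.0])              # response observed to be effective
    print(hebb_update(w, x, y))           # weight grows only for co-active pairs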
The Natural Language Programming for Queuing (NLPQ) [Q2] simulation system was created by George Heidorn, who is now with IBM. Heidorn's system takes English statements and builds a GPSS program to simulate a queuing situation. Underlying the system are production rules which guide the question and answer series and build the program. The process includes about 300 English decoding rules and 500 English and GPSS encoding rules. Through the decoding rules, the Q&A session guides the building of a semantic network, which becomes the internal description of the problem (IDP). Once the IDP is built, the encoding rules create the GPSS simulation.
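NLPQ's roughly 800 rules obviously dwarf anything shown here, but the toy sketch below, with two invented decoding rules, illustrates the pipeline the text describes: English statements populate an internal description of the problem (here a dict rather than a semantic network), which is then encoded into GPSS blocks.

    import re

    def decode(sentence, idp):
        # Toy decoding rules: English fragments -> internal description (IDP).
        m = re.search(r"arrive every (\d+) minutes", sentence)
        if m:
            idp["arrival_interval"] = int(m.group(1))
        m = re.search(r"service takes (\d+) minutes", sentence)
        if m:
            idp["service_time"] = int(m.group(1))

    def encode(idp):
        # Toy encoding rules: IDP -> GPSS block sequence.
        return [f"GENERATE {idp['arrival_interval']}",
                "QUEUE line", "SEIZE server", "DEPART line",
                f"ADVANCE {idp['service_time']}", "RELEASE server", "TERMINATE"]

    idp = {}
    decode("customers arrive every 5 minutes", idp)
    decode("service takes 3 minutes", idp)
    print("\n".join(encode(idp)))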
Figure 6-12 below is an example of a conversation with NLPQ.

Figure 6-12. NLPQ Conversation

Some Off the Shelf Software

This section will
look at five off-the-shelf software packages which combine elements of DS and AI. The first three are MCDM packages with some overt or inherent AI functionality. The last two are packaged expert systems which deal with DS topics. The packages are, respectively, Lightyear, Expert Choice, Expert 87, Project Management Advantage, and Statistical Navigator.

Lightyear

Lightyear
(LY) is a matrix based multiple criteria decision making package. Unlike
other MCDM packages on the market, LY has a rule capability. The rule capability
may be used for alternative elimination, or adding/subtracting weight to/from
an alternative's score. Despite the general classification of rules as
AI, it could be persuasively argued that this capability provides little
in the way of "intelligence" in this software. There is no inference engine
to interpret cascading rules. Each rule is simply evaluated one by one
against each alternative. Figure 6-13A below provides an example set of rules, and Figure 6-13B shows the rule creation process.

Figure 6-13A. Rules in Lightyear
Figure 6-13B. Creation of Rules in Lightyear
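A hypothetical sketch of the point just made (alternatives, rules, and numbers all invented): each rule below is a free-standing test applied alternative by alternative, either eliminating an option or adjusting its score, with no chaining between rules.

    alternatives = [
        {"name": "Car A", "price": 9000,  "mpg": 34, "score": 70},
        {"name": "Car B", "price": 16000, "mpg": 22, "score": 85},
    ]

    def rule_eliminate(alt):        # IF price > 15000 THEN eliminate
        return alt["price"] > 15000

    def rule_bonus(alt):            # IF mpg >= 30 THEN add 10 to score
        return 10 if alt["mpg"] >= 30 else 0

    survivors = [a for a in alternatives if not rule_eliminate(a)]
    for alt in survivors:           # rules fire one by one, no inference engine
        alt["score"] += rule_bonus(alt)
    print([(a["name"], a["score"]) for a in survivors])   # [('Car A', 80)]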
Expert Choice

Expert Choice (EC) is an AHP based MCDM package. The AI features in EC are less overt than in the other packages. Three features point toward artificial intelligence: a short term memory feature, the inherent concept of relative pairwise comparison, and a consistency measure. Expert Choice limits the number of attributes which may be considered on a level to seven, in deference to Miller's research [CL5]. The concept of pairwise relative comparison has been shown by Saaty [MC11] to be cognitively superior to other MCDM value applying methods, and is recognized by Hofstadter as a primary criterion for intelligence. Finally, a measure is built in which evaluates a user's consistency when making pairwise comparisons. It indicates to the user if inconsistency is present and allows them, if they feel it appropriate, to reevaluate the options.
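The arithmetic behind both the pairwise comparisons and the consistency measure can be sketched in a few lines; the comparison values below are invented, and the code is a generic AHP computation, not Expert Choice's implementation.

    import numpy as np

    def ahp_weights(pairwise):
        # Priority weights from the principal eigenvector of the pairwise
        # comparison matrix, plus Saaty's consistency ratio.
        A = np.array(pairwise, dtype=float)
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()
        n = A.shape[0]
        ci = (eigvals[k].real - n) / (n - 1)     # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
        return w, ci / ri

    # Hypothetical comparisons: cost vs. quality vs. speed.
    m = [[1,   3,   5],
         [1/3, 1,   3],
         [1/5, 1/3, 1]]
    w, cr = ahp_weights(m)
    print(w.round(3), round(cr, 3))   # CR below 0.10 is conventionally acceptable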
Expert 87

Expert 87 (E87) is also a matrix based MCDM package. It utilizes an inductive technique termed Social Judgement Theory (SJT) for eliciting preference assessments. The creators of E87 bill this technique as an "intuitive" method for applying weights to attributes. The user is presented with combinations of criteria and asked to grade each combination as a whole. Figure 6-14 provides a view of the SJT elicitation process.

Figure 6-14. Expert 87 SJT Elicitation Process

Project Management Advantage

PMA is an expert system which provides
generic advice. It was written in an expert system shell called "1stClass", which utilizes an example based approach. Packaged with the expert system are some spreadsheets which create typical PM tools such as a Work Breakdown Structure and an Earned Value Analysis. PMA divides its advice into six phases: Definition & Justification; Planning and Budgeting; Design; Development/Construction; Launch Preparation; and Delivery & Conclusion. Each phase has approximately 10 small knowledge bases (KBs) from which question and answer sequences are drawn. These KBs range in mission from creating novel concepts to printing out contracts between the project sponsors and the project leader. Some do simple math calculations, such as expected time estimates and probability of completion computations, as sketched below.
Statistical Navigator

Statistical Navigator (SN) is another expert system which provides generic advice. It was written in an expert system shell called "EXSYS", which is a rule based system. SN also accesses some external programs to do calculations. The program asks for the user's assumptions and objectives; during this process, confused users may ask for an explanation. The SN output is a report on which statistical tests seem to best fit the described situation.

A Summary of Commercial Packages

An analysis of these packages using Hofstadter's
characteristics of intelligence follows. As can be seen, no full (or nearly full) implementation of any of these characteristics exists.

Does the system (not the user) have:         LY  EC  E87 PMA SN

Flexible response ability                     N   N   P   P   P
Fortuitous circumstance recognition           N   N   N   N   N
Ambiguous/contradictory message recognition   N   P   P   N   N
Relative importance recognition               N   P   N   N   N
Similarities recognition                      P   P   P   P   P
Distinction recognition                       P   P   P   P   P
Old & new concept synthesis                   N   N   N   N   N
Novel idea generation                         N   N   N   P   P

F = Full Implementation, P = Partial Implementation, N = None

Generally, progress toward commercially available PC based packages combining AI and DS techniques has been slow. A number of reasons account for this, many of which are highlighted in the following chapter.

Chapter Conclusions

As has been demonstrated, research efforts to combine the powers of AI
and DS are growing. But little of what has been created, and even less of what has emigrated from the laboratory, capitalizes upon the strengths of each area. While the DS side of the house has matured, bringing strong
AI into the mainstream to help DS remains a weak link. AI researchers will
need to make AI techniques more accessible. This and other similar issues
are addressed in the next chapter.