perm filename NINE.PUB[BIB,CSR] blob sn#537615
filedate 1980-09-26 generic text, type C, neo UTF8
.require "setup.csr[bib,csr]" source file;
%3Stanford University Computer Science Reports%1
List Number 9↔October 1980
Listed here are abstracts of the most recent reports published by the
Department of Computer Science at Stanford University.
%3Request Reports:%1 Complete the enclosed order
form, and return the entire order form page (including mailing label)
as soon as possible. In many cases we can print only a limited number of copies,
and requests will be filled on a first-come, first-served basis as the reports
become available. If the code
(FREE) is printed on your mailing label, you will not be charged for hardcopy.
This exemption from payment is limited primarily to libraries. (The costs
shown include all applicable sales taxes. %2Please send
no money now, wait until you get an invoice%1.)
%3Alternatively:%1 Copies of most Stanford CS Reports may be obtained by writing
(about 2 months after the "Most Recent CS Reports" listing) to
National Technical Information Service, 5285 Port Royal Road, Springfield, Virginia 22161.
Stanford Ph.D. theses are available from University Microfilms, 300 North
Zeeb Road, Ann Arbor, Michigan 48106.
%2On the Approximate Solution of Hyperbolic Initial-Boundary Value Problems%1
by William M. Coughran, Jr. (Thesis, 177 pages, June 1980)
Hyperbolic initial-boundary value problems arise in a number of scientific
disciplines, such as meteorology, oceanography, geophysics, aerodynamics, acoustics,
and magnetohydrodynamics. These problems usually cannot be solved analytically,
so approximate methods must be used. Unfortunately, the construction of stable
finite difference approximations is a subtle matter, which often confuses the
practitioner; the existing theories for establishing the well-posedness of
continuous initial-boundary value problems and the stability of discrete analogs
involve the verification of complicated algebraic conditions. Moreover, the
stability theory for discrete initial-boundary value problems in more than one
space dimension is not well developed.
In this thesis, the existing stability theory for discrete initial-boundary value
problems, which has only been applied to (essentially) scalar model problems, is
used to analyze the stability of some %22 %4x %22%1 model problems, not written
in characteristic variables; it is noted that the most accurate interior/boundary
difference scheme combinations are the least stable to perturbations in the
coefficients. (A practical numerical procedure for verifying the stability of
discrete initial-boundary value problems is also introduced.) The stability
results for %22 %4x %22%1 systems can be used in the stability analysis of larger
systems where characteristics occur only singly and in pairs; in particular,
discretizations of the wave equation, the shallow water equations, and the Eulerian
equations for gas dynamics, which involve boundary conditions written in "natural"
variables, are examined. The stability theory is also extended to multi-dimensional
initial-boundary value problems by means of the concept of "tangential dissipativity";
as an application, a tangentially dissipative leap-frog method is shown to be stable
with Euler boundary conditions for a two-dimensional wave equation problem. The
viability and limitations of the theory are demonstrated with some computational
experiments. Finally, combining stability results with accuracy considerations,
various approximations and boundary conditions are ranked.
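The kind of discretization the thesis analyzes can be illustrated with a brief sketch (our example, not the thesis code) of the leap-frog scheme for the one-dimensional wave equation; the names and parameters here are ours, and stability holds only when the Courant number is at most one.

```python
import numpy as np

# Illustrative sketch (ours, not the thesis code): leap-frog approximation of
# the one-dimensional wave equation u_tt = c^2 u_xx on [0,1] with Dirichlet
# boundaries.  Stability requires the Courant number lam = c*dt/dx <= 1.

def leapfrog_wave(nx=101, nt=200, c=1.0, lam=0.5):
    dx = 1.0 / (nx - 1)
    dt = lam * dx / c                      # time step chosen from lam
    x = np.linspace(0.0, 1.0, nx)
    u_prev = np.sin(np.pi * x)             # initial displacement
    u_curr = u_prev.copy()                 # zero initial velocity (first-order start)
    for _ in range(nt):
        u_next = np.empty_like(u_curr)
        u_next[1:-1] = (2.0 * u_curr[1:-1] - u_prev[1:-1]
                        + lam**2 * (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]))
        u_next[0] = u_next[-1] = 0.0       # boundary conditions
        u_prev, u_curr = u_curr, u_next
    return u_curr

u = leapfrog_wave()                        # stable run: solution stays bounded
```

With lam greater than one, the highest-frequency grid modes are amplified at every step, which is the kind of instability the theory above is designed to detect.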
%2Path-Regular Graphs%1 by David W. Matula and Danny Dolev
(39 pages, June 1980)
A graph is vertex-[edge-]path-regular if a list of shortest paths, allowing
multiple copies of paths, exists where every pair of vertices are the endvertices
of the same number of paths and each vertex [edge] occurs in the same number of
paths of the list. The dependencies and independencies between the various
path-regularity, regularity of degree, and symmetry properties are investigated.
We show that every connected vertex-[edge-]symmetric graph is vertex-[edge-]path-regular,
but not conversely. We show that the product of any two vertex-path-regular
graphs is vertex-path-regular but not conversely, and the iterated product
%2G %4x %2G %4x * * * %2G%1 is edge-path-regular if and only if
%2G%1 is edge-path-regular. An interpretation of path-regular graphs is given regarding
the efficient design of concurrent communication networks.
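As a small check of the definition (an example of ours, not from the paper), the 5-cycle is both vertex- and edge-path-regular: an odd cycle has a unique shortest path between each pair of vertices, so the list of all shortest paths is itself a witness list.

```python
from itertools import combinations

# Illustrative check (ours, not from the paper): the 5-cycle C_5 is vertex-
# and edge-path-regular.  Each pair of vertices has a unique shortest path,
# and in the list of all such paths every vertex occurs equally often, as
# does every edge.

def shortest_path_c5(a, b, n=5):
    # walk forward or backward around the cycle, whichever is shorter
    fwd = (b - a) % n
    if fwd <= n - fwd:
        return [(a + i) % n for i in range(fwd + 1)]
    return [(a - i) % n for i in range((n - fwd) + 1)]

paths = [shortest_path_c5(a, b) for a, b in combinations(range(5), 2)]

vertex_counts = {v: sum(p.count(v) for p in paths) for v in range(5)}
edge_counts = {}
for p in paths:
    for u, w in zip(p, p[1:]):
        e = frozenset((u, w))
        edge_counts[e] = edge_counts.get(e, 0) + 1

print(vertex_counts)   # every vertex lies in the same number of paths
print(edge_counts)     # every edge lies in the same number of paths
```

Each pair of vertices is the endvertex pair of exactly one path, each vertex occurs in five paths, and each edge in three, so both path-regularity conditions hold.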
%2Basic Research in Artificial Intelligence and Foundations of Programming%1
by John McCarthy (Principal Investigator), Thomas Binford, David Luckham,
Zohar Manna, Richard Weyhrauch (Associate Investigators)
(75 pages, May 1980)
Recent research results are reviewed in the areas of formal reasoning,
mathematical theory of computation, program verification, and image understanding.
%2An Extension of Screw Theory and its Application to the Automation of
Industrial Assemblies%1 by Jorgan S. Ohwovoriole (Thesis, 186 pages, April 1980)
Interest in mathematical models that adequately predict what happens in the
process of assembling industrial parts has heightened in recent times. This is
a result of the desire to automate the assembly process. Up to this point there
has not been much success in deriving adequate mathematical models of the assembly process.
This thesis is an attempt to develop mathematical models of parts assembly.
Assembly involves motion of bodies which generally contact each other during
the process. Hence, we study the kinematics of the relative motion of contacting bodies.
Basic to the theory of assembly is the classical theory of screws which,
however, required substantial extensions for this application. The thesis begins
with a review of basic screw theory, including line geometry and reciprocal screw
systems, and new and more general derivations of some of these screw systems.
We then extend the screw theory by introducing such concepts as "repelling" and
"contrary" screw pairs, and "total freedom."
Finally, we give a method of characterizing assemblies of industrial parts.
Using the extended screw theory, we then analyze the "general peg-in-hole assembly"
and subsequently give a mathematical description of this particular assembly.
%2Lower Bounds for Algebraic Decision Trees%1 by
J. Michael Steele and Andrew C. Yao
(12 pages, July 1980)
A topological method is given for obtaining lower bounds for the height of
algebraic decision trees. The method is applied to the knapsack problem where
a %6W%2(n↑2)%1 bound is obtained for trees with bounded-degree polynomial tests,
thus extending the Dobkin-Lipton result for linear trees. Applications to the
convex hull problem and the distinct element problem are also indicated. Some
open problems are discussed.
%2An Extended Semantic Definition of Pascal for Proving the Absence of
Common Runtime Errors%1 by Steven M. German
(57 pages, June 1980)
We present an axiomatic definition of Pascal which is the logical basis of the
Runcheck system, a working verifier for proving the absence of runtime errors
such as arithmetic overflow, array subscripting out of range, and accessing an
uninitialized variable. Such errors cannot be detected at compile time by
most compilers. Because the occurrence of a runtime error may depend on the
values of data supplied to a program, techniques for assuring the absence of
errors must be based on program specifications. Runcheck accepts Pascal programs
documented with assertions, and proves that the specifications are consistent
with the program and that no runtime errors can occur. Our axiomatic definition
is similar to Hoare's axiom system, but it takes into account certain restrictions
that have not been considered in previous definitions. For instance, our
definition accurately models uninitialized variables, and requires a variable to
have a well defined value before it can be accessed. The logical problems of
introducing the concept of uninitialized variables are discussed. Our definition
of expression evaluation deals more fully with function calls than previous definitions.
Some generalizations of our semantics are presented, including a new method of
verifying programs with procedure and function parameters. Our semantics can
be easily adapted to similar languages, such as Ada.
One of the main potential problems for the user of a verifier is the need to
write detailed, repetitious assertions. We develop some simple logical properties
of our definition which are exploited by Runcheck to reduce the need for such assertions.
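In miniature, proving the absence of a subscript error amounts to checking that the interval of values a subscript expression can take, derived from the program's specification, lies inside the declared array bounds. The sketch below is our own toy illustration of that idea, not Runcheck itself.

```python
# Toy sketch (ours, not Runcheck): deciding whether an array subscript can go
# out of range, given interval bounds on a variable taken from the program's
# specification.  We check the derived subscript interval against the
# declared array bounds.

def subscript_safe(expr_lo, expr_hi, array_lo, array_hi):
    """True if every possible subscript value lies within the array bounds."""
    return array_lo <= expr_lo and expr_hi <= array_hi

# a: array [0..9]; access a[i+1] under the specification 0 <= i <= 8
print(subscript_safe(0 + 1, 8 + 1, 0, 9))   # True: subscripts 1..9 are in range
# under the weaker specification 0 <= i <= 9, a[i+1] may evaluate to a[10]
print(subscript_safe(0 + 1, 9 + 1, 0, 9))   # False: subscript may be out of range
```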
%2Knowledge Engineering: The Applied Side of Artificial Intelligence%1
by Edward A. Feigenbaum (14 pages, September 1980)
Expert System research is an emerging area of computer science that
exploits the capabilities of computers for symbolic manipulation and
inference to solve complex and difficult reasoning problems at the
level of performance of human experts. The methods of this area are
designed to acquire and represent both the formal and the informal
knowledge that experts hold about the tasks of their discipline.
Numerous applications to science, engineering, and medicine have been
accomplished. Expert System projects represent applied artificial
intelligence research, though they also make salient numerous
fundamental research issues in the acquisition, representation, and
utilization of knowledge by computer programs. Knowledge engineering
approaches promise significant cost savings in certain applications;
intelligent computer-based aids for practitioners in fields whose
knowledge is primarily nonmathematical; and the elucidation of the
heuristic knowledge of experts -- the largely private knowledge of
practice. There are major problems of knowledge engineering
including the shortage of adequate computer equipment, the shortage
of trained specialists in applied artificial intelligence, and the
lack of an adequate scientific base for knowledge acquisition.
%2Obstacle Avoidance and Navigation in the Real World
by a Seeing Robot Rover%1 by Hans Peter Moravec
(Thesis, 174 pages, September 1980)
The Stanford AI lab cart is a card-table sized mobile robot
controlled remotely through a radio link, and equipped with a TV camera
and transmitter. A computer has been programmed to drive the cart through
cluttered indoor and outdoor spaces, gaining its knowledge about the world
entirely from images broadcast by the onboard TV system.
The cart determines the three dimensional location of objects around
it, and its own motion among them, by noting their apparent relative
shifts in successive images obtained from the moving TV camera. It
maintains a model of the location of the ground, and registers objects it
has seen as potential obstacles if they are sufficiently above the surface,
but not too high. It plans a path to a user-specified destination which
avoids these obstructions. This plan is changed as the moving cart
perceives new obstacles on its journey.
The system is moderately reliable, but very slow. The cart moves
about one meter every ten to fifteen minutes, in lurches. After rolling a
meter, it stops, takes some pictures and computes for a long
time. Then it plans a new path, and executes a little of it, and pauses again.
The program has successfully driven the cart through several 20
meter indoor courses (each taking about five hours) complex enough to
necessitate three or four avoiding swerves. A less successful outdoor run,
in which the cart swerved around two obstacles but collided with a third,
was also done. Harsh lighting (very bright surfaces next to very dark
shadows) resulting in poor pictures, and movement of shadows during the
cart's creeping progress, were major reasons for the poorer outdoor
performance. These obstacle runs have been filmed (minus the very dull pauses).
%2Prototypes and Production Rules: A Knowledge Representation for
Computer Consultations%1 by Janice S. Aikins
(Thesis, 204 pages, August 1980)
This thesis presents a system called CENTAUR, which demonstrates the
effectiveness of representing prototypical knowledge in a combination of frames and
production rules for performing computer consultations. Key knowledge representation and
control structure problems in %2production rule%1 systems similar to MYCIN are identified, and a
set of important characteristics of the structures used for representing problem-solving
knowledge is given. CENTAUR's frames, or %3prototypes%1, complement the production rules to
satisfy these characteristics and represent expected patterns of data that permit a more
focused, hypothesis-directed approach to problem solving.
Among the characteristics identified as desirable in the representation structures are
the ability to %2explicitly%1 represent (a) prototypical cases, (b) the %2context%1 in which knowledge
is applied, and (c) the %2strategies%1 for applying that knowledge. CENTAUR's prototypes
consist of patterns of knowledge in the domain which serve as broad contexts, guiding the
more detailed processing of the production rules. Strategies for the consultation, or control
knowledge, are represented in the prototypes separately from other kinds of domain
knowledge. This allows the domain expert to specify control knowledge that is specific to
each prototype. Examples are presented which demonstrate how this explicit
representation facilitates explanations of the system's reasoning. Further, the organization
of knowledge in CENTAUR provides a useful framework for acquiring new knowledge.
CENTAUR has been applied to the domain of pulmonary (lung) physiology in which it
provides interpretations of pulmonary function tests. The prototypes represent standard
pulmonary disease patterns, and the production rules serve as a stylized form of %2attached
procedure%1. At the highest level, the stages of the consultation itself are represented in a
%2Consultation%1 prototype. Thus the advantages of explicit representation apply to control of
the consultation process as well.
Other important features of the representation include the use of prototypes as a
standard of comparison in order to detect inconsistent or erroneous data, and the
representation in production rules of domain expertise to deal with data discrepancies.
Several experiments demonstrating the flexibility of the representation have also
been performed. These include the implementation of different top-level prototype
selection strategies (%2confirmation, elimination%1, and %2fixed-order%1), and the use of a second
high-level prototype which can review knowledge stored in the domain-level prototypes.
%2Two Papers on Medical Computing -- (1) Medical Cybernetics: The Challenges of
Clinical Computing, (2) Consultation Systems for Physicians: The Role of Artificial
Intelligence Techniques%1 by Edward H. Shortliffe, M.D., Ph.D.
(56 pages, July 1980)
This memo contains two papers that deal with medical computing. The
first, written for a book on cybernetics and society, examines the range of
medical computing systems, plus some of the logistical and human engineering
challenges limiting their utility or acceptance. It addresses five recurring
themes that characterize the introduction of medical computing systems: 1)
the need for the proposed application, 2) the system users, 3) the logistics
of system introduction, 4) the required computational techniques, and 5) the
required technological resources. In the context of these topics,
suggestions are offered for long-range research and resource policies that
are appropriate for assuring the development of practical clinical computing.
The second paper, presented at a meeting on artificial intelligence
in May 1980, takes a more detailed look at the reasons that medical computing
systems have had a limited impact on clinical medicine. When one examines
the most common reasons for poor acceptance for such systems, the potential
relevance of artificial intelligence techniques becomes evident. The paper
proposes design criteria for clinical computing systems and demonstrates
their relationship to current research in knowledge engineering. The MYCIN
System is used to illustrate the ways in which one research group has
responded to the design criteria cited.
%2Automating the Study of Clinical Hypotheses on a Time-Oriented Data Base:
The RX Project%1 by Robert L. Blum, M.D.
(12 pages, November 1979)
The existence of large chronic disease data bases offers the
possibility of studying hypotheses of major medical importance. An
objective of the RX Project is to assist a clinical researcher with
the tasks of experimental design and statistical analysis. A major
component of RX is a knowledge base of medicine and statistics,
organized as a frame-based, taxonomic tree. RX determines confounding
variables, study design, and analytic techniques. It then gathers data,
analyzes it, and interprets results. The American Rheumatism Association
Medical Information System is used.
%2Analysis of Coalesced Hashing%1 by Jeffrey Scott Vitter
(Thesis, 111 pages, August 1980)
This thesis analyzes the %2coalesced hashing%1 method, in which a portion
of memory (called the %2address region%1) serves as the range of the hash function
while the rest of memory (called the %2cellar%1) is devoted solely to storing records
that collide when inserted. If the cellar should get full, subsequent colliders
must be stored in empty slots in the address region and, thus, may cause later
collisions. Varying the relative size of the cellar affects search performance.
The main result of this thesis expresses the average search times as a
function of the number of records and the cellar size, solving the long-standing
open problem described in [Knuth, %2The Art of Computer Programming%1, Vol. 3,
section 6.4-43]. We use these formulas to pick the cellar size that leads to %2optimum%1
search performance and then show that this "tuned" method is competitive
with several well-known hashing schemes.
We also address other important practical issues, such as developing good
deletion algorithms and efficient implementations, and outline some interesting
problems for future research.
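The mechanics of the method described above can be sketched in a few lines; this is our minimal illustration of coalesced hashing with a cellar, not the thesis's own implementation, and the class and parameter names are our assumptions.

```python
# Minimal sketch (ours, not the thesis code): coalesced hashing with a cellar.
# Slots [0, m) form the address region (the range of the hash function);
# slots [m, m + cellar) hold colliders.  When the cellar fills, later
# colliders take empty address-region slots and may cause further collisions.

class CoalescedHash:
    def __init__(self, address_size=8, cellar_size=3):
        self.m = address_size
        self.size = address_size + cellar_size
        self.keys = [None] * self.size
        self.next = [-1] * self.size       # chain links within the table

    def _hash(self, key):
        return hash(key) % self.m          # range is the address region only

    def insert(self, key):
        i = self._hash(key)
        if self.keys[i] is None:
            self.keys[i] = key
            return
        if self.keys[i] == key:
            return
        while self.next[i] != -1:          # walk to the end of the chain
            i = self.next[i]
            if self.keys[i] == key:
                return
        # take the highest-numbered empty slot: cellar first, by construction
        j = self.size - 1
        while j >= 0 and self.keys[j] is not None:
            j -= 1
        if j < 0:
            raise OverflowError("table full")
        self.keys[j] = key
        self.next[i] = j                   # splice the new record into the chain

    def search(self, key):
        i = self._hash(key)
        while i != -1:
            if self.keys[i] == key:
                return True
            i = self.next[i]
        return False

table = CoalescedHash(address_size=8, cellar_size=3)
for key in [0, 8, 16, 5, 13]:              # 0, 8, 16 all collide at slot 0
    table.insert(key)
print(all(table.search(k) for k in [0, 8, 16, 5, 13]))   # True
```

Enlarging the cellar keeps chains out of the address region longer, which is exactly the trade-off whose optimum the thesis computes.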
%2Expression Procedures and Program Derivation%1
by William Louis Scherlis
(Thesis, 178 pages, August 1980)
We explore techniques for systematically deriving programs from
specifications. The goal of this exploration is a better understanding
of the development of algorithms. Thus, the intention is not to develop
a theory of programs, concerned with the analysis of existing
programs, but instead to develop a theory of programming, dealing with the
process of constructing new programs. Such a theory would have
practical benefits both for programming methodology and for automatic programming.
We investigate the derivation of programs by program transformation.
By expanding an ordinary language of recursion equations to include
a generalized procedure construct (the %2expression procedure%1), our
ability to manipulate programs in that language is greatly facilitated.
The expression procedure provides a means of expressing information not
just about the properties of individual program elements, but also
about the way they relate to each other.
A set of three operations --- abstraction, application, and composition ---
for transforming programs in this extended language is presented.
We prove using operational semantics that these operations preserve the
strong equivalence of programs. The well-known systems of Burstall and
Darlington and of Manna and Waldinger are both based on an underlying rule
system that does not have this property.
Our transformation system is illustrated with several small examples,
which are examined in detail to give insight into the heuristic
problems of program development. A tactic of %2program specialization%1
is shown to underlie many of these derivations.
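The flavor of program specialization can be conveyed by a tiny sketch of our own (not one of the thesis's derivations): unfolding a general recursive program at a fixed argument yields a residual program with the recursion removed.

```python
# Illustrative sketch (ours, not from the thesis): specializing a general
# recursive program to a fixed argument, in the spirit of the program
# specialization tactic.  Here power(x, n) is specialized at n = 3.

def power(x, n):
    # the general program: straightforward recursion on n
    return 1 if n == 0 else x * power(x, n - 1)

def specialize_power(n):
    # unfold the recursion for the fixed n, producing a residual program
    # that contains no test on n at all
    if n == 0:
        return lambda x: 1
    rest = specialize_power(n - 1)
    return lambda x: x * rest(x)

cube = specialize_power(3)
print(cube(2), power(2, 3))   # the residual program agrees with the original
```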
While we present no implemented system, some consideration is given
to issues related to the development of programming tools.
The practical value of our approach is demonstrated in the systematic
development of a partial family tree of parsing algorithms, similar in
spirit to the treatment of sorting by Darlington and by Green and Barstow.
Several intricate parsing algorithms (Earley's, Cocke-Younger-Kasami,
and Top-Down) are derived in detail.
%2Computational Uses of the Manipulation of Formal Proofs%1
by Christopher Alan Goad
(Thesis, 122 pages, August 1980)
Mechanical procedures for the manipulation of formal proofs have played a central
role in proof theory for more than fifty years. However, such procedures have not
been widely applied to computational problems. One reason for this is that work in computer
science to do with formal proof systems has emphasized the use of formal proofs
as evidence -- as tools for automatically establishing the truth of propositions.
As a consequence of this emphasis, the problem of mechanizing the construction
of proofs has received much attention, whereas the manipulation of proofs -- that
is, the conversion of one form of evidence into another -- has not.
However, formal proofs can serve purposes other than the presentation of evidence.
In particular, a formal proof of a proposition having the form, "for each %2x%1
there is a %2y%1 such that the relation %2R%1 holds between %2x%1 and %2y%1"
provides, under the right conditions, a method for computing values of %2y%1
from values of %2x%1. That is, such a proof describes an algorithm %2A%1 where
%2A%1 satisfies the specification %2R%1 in the sense that for each %2x%1,
%2R(x,A(x))%1 holds. Thus formal proof systems can serve as programming
languages -- languages for the formal description of algorithms. A proof which
describes an algorithm may be "executed" by use of any of a variety of procedures
developed in proof theory.
A proof differs from more conventional descriptions of the same algorithm in that
it formalizes additional information about the algorithm beyond that formalized in
the conventional description. This information expands the class of transformations
on the algorithm which are amenable to automation. For example, there is a
class of "pruning" transformations which improve the computational efficiency of a
natural deduction proof regarded as a program by removing unneeded case analyses.
These transformations make essential use of dependency information which finds
formal expression in a proof, but not in a conventional program. Pruning is
particularly useful for removing redundancies which arise when a general purpose
algorithm is adapted to a special situation by symbolic execution.
This thesis concerns (1) computational uses of the additional information contained
in proofs, and (2) efficient methods for the representation and transformation of
proofs. An extended lambda-calculus is presented which allows compact expression
of the computationally significant part of the information contained in proofs.
Terms of the calculus preserve dependency data, but can be efficiently executed
by an interpreter of the kind used for lambda-calculus based languages such as
LISP. The calculus has been implemented on the Stanford Artificial Intelligence
Laboratory PDP-10 computer. Results of experiments on the use of pruning
transformations in the specialization of a bin-packing algorithm are reported.
%2A Domain-Independent System That Aids in Constructing Knowledge-Based
Consultation Programs%1 by William van Melle
(Thesis, 192 pages, June 1980)
This thesis demonstrates an effective domain-independent system, called EMYCIN,
for constructing one class of expert computer programs: rule-based consultants.
Such a consultant uses knowledge specific to a problem domain to provide consultative
advice to a client. Domain knowledge is represented in EMYCIN primarily as production
rules, which are applied by a goal-directed backward-chaining control structure.
Rules and consultation data may have associated measures of certainty, and
incomplete data are allowed. The system includes an explanation facility that
can display the line of reasoning followed by the consultation program, or answer
questions from the client about the contents of its knowledge base. Other built-in
human-engineering features allow the system architect to produce, with a minimum
of effort, a consultation program that is pleasing in appearance to the client.
To aid the system designer in producing a knowledge base for a domain quickly and
accurately, EMYCIN provides a terse, stylized, but easily understood language
for writing rules; performs extensive checks to catch common user errors, such as
misspellings; and handles all necessary bookkeeping chores. To improve efficiency
in a running consultation program, EMYCIN provides a %2rule compiler%1 that
transforms the system's production rules into a decision tree, eliminating the
redundant computation inherent in a rule interpreter. It then compiles the
resulting tree into machine code. The program can thereby use an efficient deductive
mechanism for running the actual consultation, while the flexible rule format remains available
for acquisition, explanation, and debugging.
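The control structure described above can be sketched in miniature. The rules, the certainty-combination function, and the premise threshold below are our assumptions for illustration, not EMYCIN's actual knowledge base or code.

```python
# Much-reduced sketch (assumed rules and threshold, not EMYCIN's code) of
# goal-directed backward chaining over production rules with certainty
# factors.  A rule concludes its goal with cf = min(cf of premises) * rule cf;
# several rules for the same goal are combined incrementally.

RULES = [
    # (premises, conclusion, rule certainty)
    (["leaks", "old-pipes"], "plumbing-fault", 0.8),
    (["damp-walls"], "plumbing-fault", 0.4),
]

def combine(cf1, cf2):
    # combination of two positive certainty factors
    return cf1 + cf2 * (1 - cf1)

def backchain(goal, facts, rules=RULES):
    """Return the certainty of `goal` given known facts {name: cf}."""
    if goal in facts:
        return facts[goal]
    total = 0.0
    for premises, conclusion, rule_cf in rules:
        if conclusion != goal:
            continue
        premise_cf = min((backchain(p, facts) for p in premises), default=0.0)
        if premise_cf > 0.2:               # premise threshold (our assumption)
            total = combine(total, premise_cf * rule_cf)
    return total

facts = {"leaks": 1.0, "old-pipes": 0.75, "damp-walls": 0.5}
print(round(backchain("plumbing-fault", facts), 3))   # → 0.68
```

The first rule contributes 0.75 * 0.8 = 0.6, the second 0.5 * 0.4 = 0.2, and the combination function merges them into 0.68.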
Several consultation programs have been written using EMYCIN, including consultants
for medical problems and a consultant for a structural engineering problem.
%2Semiantichains and Unichain Coverings in Direct Products of Partial Orders%1
by Douglas B. West and Craig A. Tovey
(20 pages, September 1980)
We conjecture a generalization of Dilworth's theorem to direct products of
partial orders. In particular, we conjecture that the largest "semiantichain"
and the smallest "unichain covering" have the same size. We consider a special
class of semiantichains and unichain coverings and determine when equality holds
for them. This conjecture implies the existence of %2k%1-saturated partitions.
A stronger conjecture, for which we also prove a special case, implies the
Greene-Kleitman result on simultaneous %2k%1 and %2(k + 1)%1-saturated partitions.