.require "setup.csr[bib,csr]" source file;
.once center
%3Stanford University Computer Science Reports%1
List Number 11↔March 1981

Listed here are abstracts of the most recent reports published by the Department of Computer Science at Stanford University.

%3Request Reports:%1 Complete the enclosed order form, and return the entire order form page (including mailing label) as soon as possible. In many cases we can print only a limited number of copies, and requests will be filled on a first-come, first-served basis as the reports become available. If the code (FREE) is printed on your mailing label, you will not be charged for hardcopy. This exemption from payment is limited primarily to libraries. (The costs shown include all applicable sales taxes. %2Please send no money now; wait until you get an invoice.%1)

%3Alternatively:%1 Copies of most Stanford CS Reports may be obtained by writing (about 2 months after the "Most Recent CS Reports" listing appears) to National Technical Information Service, 5285 Port Royal Road, Springfield, Virginia 22161. Stanford Ph.D. theses are available from University Microfilms, 300 North Zeeb Road, Ann Arbor, Michigan 48106.

%3STAN-CS-80-828: %2Breaking Paragraphs Into Lines%1 by Donald E. Knuth and Michael F. Plass (66 pages, November 1980)

This paper discusses a new approach to the problem of dividing the text of a paragraph into lines of approximately equal length. Instead of simply making decisions one line at a time, the method considers the paragraph as a whole, so that the final appearance of a given line might be influenced by the text on succeeding lines. A system based on three simple primitive concepts called 'boxes', 'glue', and 'penalties' provides the ability to deal satisfactorily with a wide variety of typesetting problems in a unified framework, using a single algorithm that determines optimum breakpoints. This algorithm avoids backtracking by a judicious use of the techniques of dynamic programming. Extensive computational experience confirms that the approach is both efficient and effective in producing high-quality output. The paper concludes with a brief history of line-breaking methods, and an appendix presents a simplified algorithm that requires comparatively few resources.
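[Editor's note: to make the dynamic-programming idea concrete, here is a minimal sketch of breakpoint optimization for plain text, in the spirit of the simplified algorithm the abstract mentions. It is not the paper's algorithm: the quadratic badness measure, the fixed character width, and the name break_paragraph are choices made for this illustration only.]

    # Minimum-raggedness line breaking by dynamic programming over
    # breakpoints: best[i] is the least total badness for words[i:],
    # where a line's badness is the square of its unused space
    # (the last line is free).  Not the paper's algorithm; a sketch.

    def break_paragraph(words, width):
        n = len(words)
        best = [0.0] * (n + 1)
        choice = [n] * (n + 1)   # choice[i] = first word of the next line
        for i in range(n - 1, -1, -1):
            best[i] = float("inf")
            length = -1          # cancels the space added before word i
            for j in range(i + 1, n + 1):
                length += 1 + len(words[j - 1])
                if length > width:
                    break
                badness = 0.0 if j == n else float((width - length) ** 2)
                if badness + best[j] < best[i]:
                    best[i], choice[i] = badness + best[j], j
        lines, i = [], 0         # recover the optimal breakpoints
        while i < n:
            lines.append(" ".join(words[i:choice[i]]))
            i = choice[i]
        return lines

    words = "the quick brown fox jumps over the lazy dog".split()
    print("\n".join(break_paragraph(words, 12)))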
%3STAN-CS-80-829: %2The Dinner Table Problem%1 by Bengt Aspvall and Frank Liang (13 pages, December 1980)

This report contains two papers inspired by the "dinner table problem": if %2n%1 people are seated randomly around a circular table for two meals, what is the probability that no two people sit together at both meals? We show that this probability approaches %2e%1↑[-2] as %2n%1 → ∞, and also give a closed form. We then observe that in many similar problems on permutations with restricted position, the number of permutations satisfying a given number of properties is approximately Poisson distributed. We generalize our asymptotic argument to prove such a limit theorem, and mention applications to the problems of derangements, ménages, and the asymptotic number of Latin rectangles.

%3STAN-CS-80-830: %2Two Linear-Time Algorithms for Five-Coloring a Planar Graph%1 by David Matula, Yossi Shiloach and Robert Tarjan (23 pages, December 1980)

A "sequential processing" algorithm using bicolor interchange that five-colors an %2n%1-vertex planar graph in O(%2n%1↑2) time was given by Matula, Marble, and Isaacson [MMI 72]. Lipton and Miller used a "batch processing" algorithm with bicolor interchange for the same problem and achieved an improved O(%2n%1 log %2n%1) time bound [LM 78]. In this paper we use graph contraction arguments instead of bicolor interchange and improve both the sequential processing and batch processing methods to obtain five-coloring algorithms that operate in O(%2n%1) time.

%3STAN-CS-80-831: %2An O(nm log n) Algorithm for Maximum Network Flow%1 by Daniel D. K. Sleator (Thesis, 81 pages, December 1980)

This thesis presents a new algorithm for the maximum network flow problem. The problem is this: given a directed network of %2m%1 edges and %2n%1 vertices with two distinguished vertices, a source and a sink, and with a nonnegative capacity associated with each edge, find a way of labeling the edges with nonnegative real numbers representing flow in such a way that:
(1) the flow into each vertex equals the flow out (except at the source and sink);
(2) the flow on each edge does not exceed the capacity of that edge;
(3) the flow out of the source is a maximum over all flows satisfying (1) and (2).
Our algorithm finds a maximum flow in O(%2nm%1 log %2n%1) time, which is a factor of log %2n%1 faster than the previous fastest algorithm. The central innovation of our algorithm is a sophisticated data structure that is used to represent a certain subset of edges that form a forest. The edges of this forest are further partitioned into two classes: broken and solid. Long paths of solid edges are grouped together and stored in a balanced tree data structure called a biased 2-3 tree. Each leaf of the biased 2-3 tree corresponds to an edge in the path.

%3STAN-CS-80-832: %2Scheduling Wide Graphs%1 by Danny Dolev (43 pages, December 1980)

The problem of scheduling a partially ordered set of unit-length tasks on %2m%1 identical processors is known to be %2NP%1-complete. There are efficient algorithms for only a few special cases of this problem. In this paper we explore the relations between the structure of the precedence graph (the partial order) and optimal schedules. We prove that in finding an optimal schedule for certain systems it suffices to consider at each step high roots which belong to at most the %2m%1-1 highest components of the precedence graph. This result reduces the number of cases we have to check during the construction of an optimal schedule. Our method may lead to the development of linear scheduling algorithms for many practical cases and to better bounds for complex algorithms. In particular, when the precedence graph contains only inforest and outforest components, this result leads to efficient algorithms for obtaining an optimal schedule on two or three processors.
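[Editor's note: for the inforest case mentioned at the end of the abstract, the classical point of reference is Hu's highest-level-first rule, which is optimal for unit-length tasks whose precedence graph is an inforest (every task has at most one successor). The sketch below illustrates that background rule only, not the report's own method; the succ representation and the name hu_schedule are choices made here.]

    # Hu's highest-level-first scheduling for an inforest of unit tasks:
    # at each time step, run up to m ready tasks of greatest level, where
    # the level of a task is its distance (in tasks) to its root.

    def hu_schedule(succ, m):
        """succ[t] = the unique successor of task t, or None for a root."""
        def level(t):
            n = 1
            while succ[t] is not None:
                t, n = succ[t], n + 1
            return n

        preds = {t: 0 for t in succ}       # count unfinished predecessors
        for s in succ.values():
            if s is not None:
                preds[s] += 1
        ready = [t for t in succ if preds[t] == 0]
        done, schedule = 0, []
        while done < len(succ):
            ready.sort(key=level, reverse=True)
            step, ready = ready[:m], ready[m:]
            schedule.append(step)
            done += len(step)
            for t in step:                 # release newly ready tasks
                s = succ[t]
                if s is not None:
                    preds[s] -= 1
                    if preds[s] == 0:
                        ready.append(s)
        return schedule

    succ = {"a": "d", "b": "d", "c": "e", "d": "f", "e": "f", "f": None}
    print(hu_schedule(succ, 2))   # [['a', 'b'], ['c', 'd'], ['e'], ['f']]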
%3STAN-CS-80-833: %2Graph Separator Theorems and Sparse Gaussian Elimination%1 by John Russell Gilbert (Thesis, 104 pages, December 1980)

Much of the process of making computer programs consists of creating methods for the manipulation of discrete, finite objects such as characters, vectors, sets, graphs, and permutations. "Combinatorial algorithms" is the awkward but serviceable name for this branch of computer science. Designing efficient combinatorial algorithms usually involves applying or inventing subtle data structures and search techniques, and is often associated with mathematics of significant depth and beauty. This association gives the subject an appeal beyond that of usefulness: combinatorial algorithms are fun. Like most branches of mathematics and science, the study of combinatorial algorithms began as a collection of loosely related tricks, constructions, and special insights. As the discipline has matured, unifying ideas and principles have begun to emerge. One central theme is the formation of complex data structures (together with algorithms for their manipulation) out of simpler data structures. A body of well-known representations for basic data structures such as queues and graphs has grown up, and continues to expand. Another theme is the use of such paradigms for algorithm design as divide-and-conquer, search, and dynamic programming.

%3STAN-CS-80-834: %2Numerical Solution of the Biharmonic Equation%1 by Petter E. Bjorstad (Thesis, 139 pages, December 1980)

The numerical solution of discrete approximations to the first biharmonic boundary value problem in rectangular domains is studied. Several finite difference schemes are compared, and a family of new fast algorithms for the solution of the discrete systems is developed. These methods are optimal, having a theoretical computational complexity of O(N↑2) arithmetic operations and requiring N↑2 + O(N) storage locations when solving the problem on an N by N grid. Several practical computer implementations of the algorithm are discussed and compared. These implementations require aN↑2 + bN↑2 log N arithmetic operations, with b << a. The algorithms take full advantage of vector or parallel computers and can also be used to solve a sequence of problems efficiently. A new fast direct method for the biharmonic problem on a disk is also developed. It is shown how the new method of solution is related to the associated eigenvalue problem. The results of extensive numerical tests and comparisons are included throughout the dissertation. It is believed that the material presented provides a good foundation for practical computer implementations, and that the numerical solution of the biharmonic equation in rectangular domains will henceforth be considered no more difficult than that of Poisson's equation.
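[Editor's note: for reference, the discrete operator underlying such finite difference schemes is commonly the 13-point stencil obtained by squaring the standard 5-point Laplacian. The sketch below merely applies that stencil on a uniform grid; it is background, not one of the fast solvers the thesis develops, and the name biharmonic is a choice made here.]

    # Apply the 13-point discrete biharmonic operator (the square of the
    # 5-point Laplacian) to the interior of a 2-D grid with spacing h:
    # weights 20 at the center, -8 at the four nearest neighbors, +2 at
    # the diagonals, +1 two grid points away, all divided by h^4.

    import numpy as np

    def biharmonic(u, h):
        c = u[2:-2, 2:-2]
        out = (20 * c
               - 8 * (u[1:-3, 2:-2] + u[3:-1, 2:-2]
                      + u[2:-2, 1:-3] + u[2:-2, 3:-1])
               + 2 * (u[1:-3, 1:-3] + u[1:-3, 3:-1]
                      + u[3:-1, 1:-3] + u[3:-1, 3:-1])
               + (u[:-4, 2:-2] + u[4:, 2:-2]
                  + u[2:-2, :-4] + u[2:-2, 4:]))
        return out / h**4

    x = np.linspace(0.0, 1.0, 32)
    u = np.tile(x**4, (32, 1)).T        # u(x, y) = x^4, constant in y
    print(biharmonic(u, x[1] - x[0]))   # ~24 everywhere: d4(x^4)/dx4 = 24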
%3STAN-CS-80-835: %2Approximation and Optimization of Electron Density Maps%1 by Eric H. Grosse (Thesis, 118 pages, December 1980)

The three-dimensional structure of proteins is commonly determined from x-ray diffraction data by manually inspecting the electron density, displayed as a stack of two-dimensional contour maps. A network connecting peaks and passes of the density function has been suggested as a representation more suitable for automatic interpretation. However, locating all critical points of such a complicated function is a difficult problem. The solution proposed here is to partition the domain into cubes and approximate the density function on each cube by a polynomial of three variables, chosen so that all first derivatives are continuous across the cube boundaries. Two algorithms are examined for this approximation: one is least squares fitting with smooth tent-shaped basis functions (known as tensor product B-splines) by repeated univariate least squares spline fits, and the other scales Fourier coefficients of the electron density function. Critical points can then be sought independently on each cube. One variable is solved for immediately in terms of the other two; the resulting two-dimensional problems are approximated on a subgrid and finally reduced to one-dimensional quadratic equations. This method locates critical points more accurately than discrete pattern matching and more quickly than repeated Newton searches.

%3STAN-CS-81-836: %2Temporal Verification of Concurrent Programs, Part I: The Temporal Framework for Concurrent Programs%1 by Zohar Manna and Amir Pnueli (70 pages, January 1981)

This is the first of a series of reports describing the application of temporal logic to the specification and verification of concurrent programs. We first introduce Temporal Logic as a tool for reasoning about sequences. Models of concurrent programs based both on transition graphs and on linear-text representations are introduced, and the notions of concurrent and fair executions are defined. The general temporal language is then specialized to reason about states which are execution states and sequences which are fair computations of a concurrent program. The temporal language is then used for the description of properties of concurrent programs. The set of interesting properties is classified into %2Invariance%1 (Safety), %2Eventualities%1 (Liveness), and %2Precedence%1 (Until) properties. Among the properties studied are: Partial Correctness, Global Invariances, Clean Behavior, Mutual Exclusion, Deadlock Absence, Termination, Total Correctness, Intermittent Assertions, Accessibility, Starvation Freedom, Responsiveness, Safe Liveness, Absence of Unsolicited Response, Fair Responsiveness, and Precedence. In the following reports of this series we will develop proof methodologies for proving the properties discussed here, using the temporal formalism. The proof methods will also be classified according to the three classes of properties: Invariance, Eventualities, and Precedence.
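[Editor's note: the three classes correspond to characteristic shapes of temporal formulas. The illustration below uses today's standard notation, which may differ in detail from the report's own symbols; the proposition names are invented for the examples.]

    % Invariance (safety): p holds in every state of every computation,
    % e.g., mutual exclusion of two critical sections.
    \Box\, p \qquad \text{e.g.}\qquad \Box\, \neg(\mathit{incs}_1 \land \mathit{incs}_2)

    % Eventuality (liveness): whenever p holds, q is eventually realized,
    % e.g., responsiveness -- every request is eventually serviced.
    p \supset \Diamond\, q

    % Precedence (until): p remains true at least until q first holds.
    p \;\mathcal{U}\; q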
%3STAN-CS-81-837: %2Research on Expert Systems%1 by Bruce G. Buchanan (37 pages, February 1981)

All AI programs are essentially reasoning programs. And, to the extent that they reason well about a problem area, all exhibit some expertise at problem solving. Programs that solve the Tower of Hanoi puzzle, for example, reason about the goal state and the initial state in order to find "expert-level" solutions. Unlike other programs, however, the claims made about expert systems are related to questions of usefulness and understandability as well as performance.

%3STAN-CS-81-838: %2Dynamic Program Building%1 by Peter Brown (13 pages, February 1981)

This report argues that programs are better regarded as dynamic running objects rather than as static textual ones. The concept of %2dynamic building%1, whereby a program is constructed as it runs, is described. The report then describes the %2Build%1 system, which is an implementation of dynamic building for an interactive algebraic programming language. Dynamic building aids in locating run-time errors, and is especially valuable in environments where programs are relatively short but run-time errors are frequent and/or costly.

%3STAN-CS-81-839: %2Short Waits%1 by Arthur L. Samuel (37 pages, February 1981)

This is an introductory manual describing the SU-AI timesharing system that is available primarily for sponsored research in the Computer Science Department. The present manual is written for the beginner and the user interested primarily in the message-handling capability, as well as for the experienced computer user and programmer who either is unfamiliar with the SU-AI computer or uses it infrequently. References are made to the available hard-copy manuals and to the extensive on-line document files where more complete information can be obtained. The principal advantages of this system are:
1) the availability of a large repertoire of useful system features;
2) the large memory;
3) the large file-storage system;
4) the ease with which one can access other computers via the ARPA net;
5) the file transfer facilities via the EFTP program and the ETHERNET;
6) the XGP and DOVER printers and the large collections of fonts available for them; and
7) the fast and convenient E editor with its macro facilities.

%3STAN-CS-81-840: %2Verification of Link-Level Protocols%1 by Donald E. Knuth (6 pages, January 1981)

Stein Krogdahl [1] has given an interesting demonstration of the partial correctness of a "protocol skeleton", by which the validity of the essential aspects of a large variety of data transmission schemes can be demonstrated. The purpose of this note is to present a simpler way to obtain the same results, by first establishing the validity of a less efficient skeleton and then "optimizing" the algorithms. The present approach, which was introduced for a particular protocol by N. V. Stenning [2], also solves a wider class of problems that do not require first-in-first-out transmissions.

%3STAN-CS-81-841: %2Huffman's Algorithm via Algebra%1 by Donald E. Knuth (6 pages, March 1981)

The well-known algorithm of David A. Huffman for finding minimum-redundancy codes has found many diverse applications, and in recent years it has been extended in a variety of ways. The purpose of this note is to discuss a simple algebraic approach that seems to fit essentially all of the applications of Huffman's method that are presently known.

%3STAN-CS-80-842: %2The Role of Plans in Intelligent Teaching Systems%1 by Michael R. Genesereth (19 pages, November 1980)

One of the keys to effective remedial instruction is information about the student's beliefs and misconceptions. This information can often be obtained by analyzing the student's efforts in solving problems that require knowledge of the subject matter. In some areas, it is possible to design diagnostic tests that can pinpoint a student's misconception directly from his answers. For example, in their work on the Buggy system, Burton and Brown developed a methodology for automatically generating diagnostic tests that discover the underlying problems responsible for a student's errors in subtraction. However, in subject areas where the student chooses his own problem, or when there are several ways of solving a problem, it is helpful to consider not only the student's final answer but also his steps in producing it. This paper is concerned with the process of automatically analyzing a student's steps to infer his rationale for taking those steps, and with the use of this information in providing remediation tailored to the student.
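[Editor's note: as background to STAN-CS-81-841 above, the classical form of Huffman's procedure repeatedly merges the two trees of least weight. The sketch below shows that familiar baseline, not the note's algebraic reformulation; the example frequencies are invented for illustration.]

    # Classical Huffman coding: repeatedly merge the two subtrees of least
    # total weight, prefixing '0' to codes in one and '1' to codes in the
    # other.  The integer counter breaks weight ties without comparing lists.

    import heapq

    def huffman_codes(freqs):
        """freqs: symbol -> weight; returns symbol -> bit string."""
        heap = [(w, i, [s]) for i, (s, w) in enumerate(freqs.items())]
        codes = {s: "" for s in freqs}
        heapq.heapify(heap)
        while len(heap) > 1:
            w0, _, syms0 = heapq.heappop(heap)
            w1, _, syms1 = heapq.heappop(heap)
            for s in syms0:
                codes[s] = "0" + codes[s]
            for s in syms1:
                codes[s] = "1" + codes[s]
            heapq.heappush(heap, (w0 + w1, len(codes) + len(heap), syms0 + syms1))
        return codes

    # weights for which the optimal code lengths are 1, 3, 3, 3, 4, 4
    print(huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))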