perm filename AI.V1[BB,DOC] blob sn#737489 filedate 1984-01-03 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00118 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00016 00002	∂14-May-83  1726	LAWS%SRI-AI.ARPA@SCORE 	AIList Digest   V1 #1
C00036 00003	∂14-May-83  1726	LAWS%SRI-AI.ARPA@SCORE 	AIList Digest   V1 #2
C00050 00004	∂14-May-83  1727	LAWS%SRI-AI.ARPA@SCORE 	AIList Digest   V1 #3
C00059 00005	∂16-May-83  0058	LAWS%SRI-AI.ARPA@SCORE 	AIList Digest   V1 #4
C00083 00006	∂18-May-83  1313	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #5  
C00095 00007	∂22-May-83  0145	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #6  
C00107 00008	∂22-May-83  1319	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #7  
C00128 00009	∂22-May-83  1248	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #8  
C00148 00010	∂29-May-83  0046	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #9  
C00163 00011	∂03-Jun-83  1832	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #10
C00174 00012	∂03-Jun-83  1853	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #11
C00181 00013	∂07-Jun-83  1708	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #12
C00194 00014	∂08-Jun-83  1339	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #13
C00204 00015	∂11-Jun-83  2255	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #14
C00214 00016	∂15-Jun-83  0011	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #15
C00235 00017	∂16-Jun-83  1922	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #16
C00252 00018	∂26-Jun-83  1707	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #17
C00269 00019	∂26-Jun-83  1751	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #18
C00285 00020	∂03-Jul-83  1810	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #19
C00303 00021	∂06-Jul-83  1833	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #20
C00317 00022	∂11-Jul-83  0352	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #21
C00339 00023	∂18-Jul-83  1950	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #22
C00356 00024	∂21-Jul-83  1918	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #23
C00377 00025	∂21-Jul-83  1819	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #24
C00393 00026	∂21-Jul-83  1640	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #25
C00423 00027	∂25-Jul-83  2359	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #26
C00450 00028	∂28-Jul-83  0912	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #27
C00473 00029	∂29-Jul-83  1004	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #28
C00490 00030	∂29-Jul-83  1911	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #29
C00502 00031	∂02-Aug-83  1514	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #30
C00521 00032	∂02-Aug-83  2352	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #31
C00537 00033	∂04-Aug-83  1211	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #32
C00565 00034	∂05-Aug-83  2115	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #33
C00586 00035	∂08-Aug-83  1500	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #34
C00606 00036	∂09-Aug-83  1920	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #35
C00629 00037	∂09-Aug-83  2027	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #36
C00649 00038	∂09-Aug-83  2149	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #37
C00678 00039	∂09-Aug-83  2330	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #38
C00706 00040	∂16-Aug-83  1113	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #39
C00731 00041	∂16-Aug-83  1333	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #40
C00752 00042	∂17-Aug-83  1713	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #41
C00771 00043	∂18-Aug-83  1135	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #42
C00782 00044	∂19-Aug-83  1927	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #43
C00811 00045	∂22-Aug-83  1145	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #44
C00845 00046	∂22-Aug-83  1347	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #45
C00867 00047	∂23-Aug-83  1228	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #46
C00888 00048	∂24-Aug-83  1206	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #47
C00916 00049	∂25-Aug-83  1057	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #48
C00935 00050	∂29-Aug-83  1311	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #49
C00959 00051	∂30-Aug-83  1143	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #50
C00977 00052	∂30-Aug-83  1825	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #51
C01005 00053	∂31-Aug-83  1538	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #52
C01038 00054	∂02-Sep-83  1043	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #53
C01060 00055	∂09-Sep-83  1317	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #54
C01090 00056	∂09-Sep-83  1628	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #55
C01112 00057	∂09-Sep-83  1728	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #56
C01130 00058	∂15-Sep-83  2007	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #57
C01156 00059	∂16-Sep-83  1714	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #58
C01180 00060	∂19-Sep-83  1751	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #59
C01206 00061	∂20-Sep-83  1121	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #60
C01228 00062	∂22-Sep-83  1847	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #61
C01256 00063	∂25-Sep-83  1736	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #62
C01283 00064	∂25-Sep-83  2055	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #63
C01307 00065	∂26-Sep-83  2348	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #64
C01326 00066	∂29-Sep-83  1120	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #65
C01350 00067	∂29-Sep-83  1438	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #66
C01375 00068	∂29-Sep-83  1610	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #67
C01399 00069	∂03-Oct-83  1104	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #68
C01424 00070	∂03-Oct-83  1255	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #69
C01453 00071	∂03-Oct-83  1907	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #70
C01484 00072	∂06-Oct-83  1525	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #71
C01507 00073	∂10-Oct-83  1623	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #72
C01529 00074	∂10-Oct-83  2157	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #73
C01557 00075	∂11-Oct-83  1950	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #74
C01581 00076	∂12-Oct-83  1827	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #75
C01608 00077	∂13-Oct-83  1804	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #76
C01630 00078	∂14-Oct-83  1545	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #77
C01657 00079	∂14-Oct-83  2049	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #78
C01684 00080	∂17-Oct-83  0120	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #79
C01705 00081	∂20-Oct-83  1541	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #80
C01736 00082	∂24-Oct-83  1255	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #81
C01756 00083	∂26-Oct-83  1614	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #82
C01784 00084	∂27-Oct-83  1859	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #83
C01802 00085	∂28-Oct-83  1402	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #84
C01824 00086	∂31-Oct-83  1445	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #85
C01854 00087	∂31-Oct-83  1951	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #86
C01882 00088	∂01-Nov-83  1649	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #87
C01912 00089	∂03-Nov-83  1710	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #88
C01931 00090	∂04-Nov-83  0029	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #89
C01955 00091	∂05-Nov-83  0107	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #90
C01974 00092	∂07-Nov-83  0920	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #91
C02002 00093	∂07-Nov-83  1507	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #92
C02023 00094	∂07-Nov-83  2011	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #93
C02047 00095	∂10-Nov-83  0230	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #94
C02072 00096	∂09-Nov-83  2344	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #95
C02100 00097	∂14-Nov-83  1831	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #96
C02117 00098	∂14-Nov-83  1702	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #97
C02142 00099	∂15-Nov-83  1838	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #98
C02166 00100	∂16-Nov-83  1906	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #99
C02196 00101	∂20-Nov-83  1722	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #100    
C02216 00102	∂20-Nov-83  2100	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #101    
C02241 00103	∂22-Nov-83  1724	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #102    
C02271 00104	∂27-Nov-83  2131	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #103    
C02295 00105	∂28-Nov-83  1357	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #104    
C02318 00106	∂29-Nov-83  0155	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #105    
C02341 00107	∂29-Nov-83  1837	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #106    
C02373 00108	∂02-Dec-83  0153	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #107    
C02397 00109	∂02-Dec-83  2044	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #108    
C02427 00110	∂05-Dec-83  0250	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #109    
C02448 00111	∂07-Dec-83  0058	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #110    
C02481 00112	∂10-Dec-83  1902	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #111    
C02507 00113	∂14-Dec-83  1459	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #112    
C02536 00114	∂16-Dec-83  1327	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #113    
C02560 00115	∂18-Dec-83  1526	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #114    
C02588 00116	∂21-Dec-83  0613	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #115    
C02608 00117	∂22-Dec-83  2213	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #116    
C02625 00118	∂30-Dec-83  0322	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #117    
C02652 ENDMK
C⊗;
∂14-May-83  1726	LAWS%SRI-AI.ARPA@SCORE 	AIList Digest   V1 #1
Received: from SU-SCORE by SU-AI with PUP; 14-May-83 17:26 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Sat 14 May 83 17:29:57-PDT
Date: Sat 14 May 83 17:16:18-PDT
From: Laws@SRI-AI <AIList-Request@SRI-AI.ARPA>
Subject: AIList Digest   V1 #1
To: Local-AI-BBoard%SAIL@SU-SCORE.ARPA


AIList Digest            Tuesday, 26 Apr 1983       Volume 1 : Issue 1

Today's Topics:
  Welcome
  Charter Membership
  Request for Report Abstracts
  Statistics on IJCAI-83 Papers
  Standardized Correspondence
----------------------------------------------------------------------

Date: Mon 25 Apr 83 14:51:42-PDT
From: Ken Laws <Laws@SRI-AI>
Subject: Welcome


Welcome to the AIList.

I am the moderator of the AIList discussion.  I am responsible
for composing the digest from pending submissions, controlling
the daily volume of mail, keeping an archive, and answering
administrative requests.

You may submit mail for the digest by addressing it to AIList@SRI-AI.
Administrative requests should be sent to AIList-Request@SRI-AI.
An archival copy of all list remailings will be kept; feel free to
ask AIList-Request for back issues until a formal archive system is
instituted.

AIList is open to discussion of any topic related to artificial
intelligence.  My own interests are primarily in

  Expert Systems                        AI Applications
  Knowledge Representation              Knowledge Acquisition
  Problem Solving                       Hierarchical Inference
  Machine Learning                      Pattern Recognition
  AI Techniques                         Data Analysis Techniques

Contributions concerning

  Cognitive Psychology                  Human Perception
  Vision Analysis                       Speech Analysis
  Language Understanding                Natural Languages
  AI Languages                          AI Environments
  Automatic Programming                 AI Systems Support
  Theorem Proving                       Logic Programming
  Robotics                              Automated Design
  Planning and Search                   Cybernetics
  Game Theory                           Computer Science
  Data Abstraction                      Library Science
  Statistical Techniques                Information Theory
  AI Hardware                           Information Display

and related topics are also welcome.  Contributions may be anything
from tutorials to rampant speculation.  In particular, the following
are sought.

  Abstracts                             Reviews
  Lab Descriptions                      Research Overviews
  Work Planned or in Progress           Half-Baked Ideas
  Conference Announcements              Conference Reports
  Bibliographies                        History of AI
  Puzzles and Unsolved Problems         Anecdotes, Jokes, and Poems
  Queries and Requests                  Address Changes (Bindings)

The only real boundaries to the discussion are defined by the topics
of other mailing lists.  Robotic mythology, for instance, might be
more appropriate for SF-LOVERS.  Logic programming and theorem proving
are also covered by the PROLOG list.

I suggest that you "sign" submissions longer than a paragraph so that
readers don't have to scroll backwards to see the FROM line.  Editing
of contributions will usually be limited to text justifications and
spelling corrections.  Editorial remarks and elisions will be marked
with square brackets.  The author will be contacted if significant
editing is required.

I have no objection to distributing material that is destined for
conference proceedings or any other publication.  You may want to
send copies of your submissions to SIGART @USC-ECLC or to the AI
Magazine (currently Engelmore @SUMEX-Aim) for hardcopy publication.
List items should be considered unrefereed working papers, and
opinions should be taken as those of the author, not of any
organization.  Copies of list items should credit the original
author, not necessarily the AIList.

The list does not assume copyright, nor does it accept any liability
arising from remailing of submitted material.  I reserve the right,
however, to refuse to remail any contribution that I judge to be
obscene, libelous, irrelevant, or pointless.

Names and net addresses of list members are in the public domain.
Your name will be made available (for noncommercial purposes) unless
special arrangements are made.

Replies to public requests for information should be sent, at least
in "carbon" form, to this list unless the request states otherwise.
If necessary, I will digest or abstract the replies to control the
volume of distributed mail.

Please contribute freely.  I would rather deal with too much material
than with too little.

                                        -- Ken Laws

------------------------------

Date: Mon 25 Apr 83 09:34:04-PDT
From: AIList <AIList-Request@SRI-AI.ARPA>
Subject: Charter Membership


The AIList is off to a good start.  We have approximately 168
subscribers, plus an unknown number through remailing or BBoard
services at

    AI-INFO@CIT-20              DSN-AI@SU-DSN (*)
    AIList@BRL                  AI-BBD.UMCP-CS@UDel-Relay (*)
    AIList@Cornell              BBOARD.AIList@UTEXAS-20 (*)
    bbAI-List@MIT-XX            G.TI.DAK@UTEXAS-20
    AI-BBOARD@SRI-AI            AI-LOCAL@YALE
    Incoming-AIList@SUMEX       AI@RADC-TOPS20
    AIList-Distribution@MIT-EE  AIList-BBOARD@RUTGERS
    Spaf.GATech@UDel-Relay

(Maintainers of the starred BBoards have specifically requested
that local subscribers drop their individual memberships.)


The "charter membership" is distributed as follows:

AIDS-UNIX(2), BBNA, BBNG, BBN-UNIX, BRL(bb), BRL-VLD, CIT-20(bb),
CORNELL(1+bb), CMU-CS-A(12), CMU-CS-C(2), CMU-CS-G, CMU-CS-IUS,
CMU-CS-SPICE, CMU-RI-FAS(2), CMU-RI-ISL, DSN-AI@SU-DSN(1+bb),
GATech@UDel-Relay(bb), KESTREL, MIT-DSPG(2), MIT-EE(bb), MIT-MC(10),
MIT-EECS@MIT-MC, MIT-OZ@MIT-MC(17), MIT-ML(3), MIT-OZ@MIT-ML,
MIT-SPEECH, MIT-XX(5+bb), OFFICE-3, PARC-MAXC(8),
RADC-TOPS-20(bb),RUTGERS(6+bb), S1-C, SRI-AI(5+bb), SRI-CSL,
SRI-KL(2), SRI-TSC, SU-AI@USC-ECL(10), SUMEX(1+bb), SUMEX-AIM(7),
SU-SCORE(11), UCI-20A@Rand-Relay, UCLA-SECURITY, UMASS-CS@UDel-Relay,
UMCP-CS@UDel-Relay(bb), USC-ECL(3), USC-ECLB(2), USC-ECLC,
USC-ISI(2), USC-ISIB(5), USC-ISID, USC-ISIE, USC-ISIF(4), UTAH-20(7),
UTEXAS-20(6+bb), WASHINGTON(5), YALE(3+bb)

                                        -- Ken Laws

------------------------------

Date: 22 Apr 1983 0227-EST
From: TYG%MIT-OZ@MIT-MC
Subject: addition and woe

Please add me to the list.  Sigh.  I came up with the idea of a list
to disseminate abstracts and ordering info for AI papers last Dec.,
but held off due to the Arpanet changeover.  I then got busy with
other things, and planned to get it going in a few weeks.  Sigh.

Anyway, I may as well share my ideas for the list.  I think all sites
doing AI should be asked to submit the following info about papers as
they come out:  Title, Author, Length, Type (Master's thesis, Ph.D.
thesis, Tech report, Journal article, etc.), Abstract, Cost, and
ordering information.  Presumably the person at each site in charge of
publications would enter this.

Good Luck Tom "Next time I won't procrastinate" Galloway

[I would welcome such input.  The "person in charge" need not be a
member of this list.  I suggest that administrative personnel send
such information both to AIList and to SIGART@USC-ECLC.  Ordering 
information for AIList could be abbreviated to a net address if the 
sender is willing to respond to queries.  -- KIL]

------------------------------

Date: Thursday, 21-Apr-83  15:23:45-BST
From: BUNDY    HPS (on ERCC DEC-10)  <bundy@edxa>
Subject: Statistics on IJCAI-83 Papers

[I don't think Alan Bundy will mind my passing along these
statistics.  I have edited the table slightly to fit the 70-column    
capacity of the digesting software made available by Mel Pleasant,
the Human-Nets moderator.  The digester was developed by James
McGrath at SCORE. -- KIL]


                PAPER STATISTICS - IJCAI-83

                        Submitted       Accepted        Moved
Subfield                Long    Short   Long    Short   L -> S

Miscellaneous           -       3       -       1
Automatic Prog.         8       11      1       7       4
Cognitive Modelling     9       32      2       12      3
Expert Systems          31      56      8       31      9
Knowledge Repn.         28      40      7       24      6
Learning & Know. Acq.   14      35      1       22      5
Logic Prog.             14      17      4       9       4
Natural Language        23      74      2       39      7
Planning & Search       11      20      3       11      5
Robotics                11      8       5       7       2
System Support          4       9       -       5       2
Theorem Proving         7       16      5       8       -
Vision                  32      38      10      31      14

        TOTAL           192     359     48      207     61



        COMPARISON WITH PREVIOUS IJCAI CONFERENCES

                                LONG    SHORT   TOTAL

IJCAI-79 Submitted Total        unk     unk     428
IJCAI-79 Accepted Total         83      145     228
IJCAI-79 Acceptance Rate                        53%

IJCAI-81 Submitted Total        unk     unk     576
IJCAI-81 Accepted Total         127     74      201
IJCAI-81 Acceptance Rate                        35%


IJCAI-83 Submitted Total        192     359     551
IJCAI-83 Accepted Total         48      207     255
IJCAI-83 Acceptance Rate                        46%



                REMARKS

        You will see that I succeeded in my aim of shifting the burden
of papers from the long to the short categories.  This enabled us to
apply high standards to the long papers without decreasing the overall
acceptance rate.

                        Alan

------------------------------

Date: Sun 24 Apr 83 20:41:46-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Standardized Correspondence

[This is arguably more appropriate for Human-Nets, but I want to 
illustrate the level of reporting and/or discussion that I consider 
appropriate for this list.  -- KIL]


The May issue of High Technology describes the Prentice-Hall Letter
Pac system from Dictronics Publishing.  It is a semiautomatic business
letter generator that customizes prototypical letters by substituting 
synonyms categorized into four levels of formality (e.g., ask,
request, demand).  The user need only insert a few particulars before
sending the letter out.

The article also suggests automatic letter reading (i.e., parsing).  
There is already a system that compresses text by discarding all but 
the first sentence of each paragraph.  More sophisticated text 
condensation and text understanding systems are being developed.  A
short-cut is available, however.
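
The condensation scheme just mentioned -- discard all but the first
sentence of each paragraph -- is simple enough to sketch.  This is a
modern illustration with a deliberately naive sentence splitter, not
the actual system the article describes:

```python
import re

def condense(text):
    """Keep only the first sentence of each paragraph."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    firsts = []
    for p in paragraphs:
        # Naive split: sentence-ending punctuation followed by whitespace.
        sentences = re.split(r"(?<=[.!?])\s+", p)
        firsts.append(sentences[0])
    return "\n\n".join(firsts)
```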

If everyone used Letter Pac or an equivalent, parsing the text would 
be a simple matter of extracting the original generating parameters:  
(dunning-letter-7 formality-level-3 car-payment-overdue $127.38).  The
"Dear Sir" form of the letter would then exist only for transmission 
between computers.
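
A minimal sketch of that round trip, assuming both machines share a
template table.  The template text and parameter names below are
invented for illustration; Letter Pac's internal representation is
not described in the article:

```python
import re

# Hypothetical template table shared by sender and receiver,
# keyed by (letter type, formality level).
TEMPLATES = {
    ("dunning-letter-7", 3):
        "Dear Sir: We must insist that your {item} payment "
        "of {amount} be made at once.",
}

def synthesize(letter, formality, item, amount):
    """Expand generating parameters into English text."""
    return TEMPLATES[(letter, formality)].format(item=item, amount=amount)

def parse(text):
    """Recover the generating parameters by matching the text
    against the known templates."""
    for key, template in TEMPLATES.items():
        # Turn the template into a regex: literal text plus named slots.
        pattern = (re.escape(template)
                   .replace(re.escape("{item}"), "(?P<item>.+?)")
                   .replace(re.escape("{amount}"), r"(?P<amount>\S+)"))
        m = re.fullmatch(pattern, text)
        if m:
            return key + (m.group("item"), m.group("amount"))
    return None
```

When both ends share the table, only the parameter tuple need be
transmitted; the "Dear Sir" form is regenerated on demand.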

If this became common, could elimination of the text form be long in
coming?  I believe that John McCarthy has been working on ideas along
this line.  Most transactions could be handled directly by computers
using standardized transaction formats.  When transmission of English
text is necessary, it might make sense to send preparsed sentences
instead of having one computer synthesize a message and a second one
parse it.  All that is needed is to have identical synthesis and
parsing software available to both machines for those rare occasions
when a human wants to enter the loop.

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************
-------

∂14-May-83  1726	LAWS%SRI-AI.ARPA@SCORE 	AIList Digest   V1 #2
Received: from SU-SCORE by SU-AI with PUP; 14-May-83 17:26 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Sat 14 May 83 17:30:13-PDT
Date: Sat 14 May 83 17:17:26-PDT
From: Laws@SRI-AI <AIList-Request@SRI-AI.ARPA>
Subject: AIList Digest   V1 #2
To: Local-AI-BBoard%SAIL@SU-SCORE.ARPA

US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest             Sunday, 1 May 1983        Volume 1 : Issue 2

Today's Topics:
  New BBoards
  The T Programming Language
  Parallel Nonnumeric Algorithms
  Pattern Recognition
  Standardized Correspondence
  Alternate Distribution of AIlist
  Facetiae
----------------------------------------------------------------------

Date: Sat 30 Apr 83 17:17:00-PDT
From: AIList <AIList-Request@SRI-AI.ARPA>
Subject: New BBoards


The following new BBoards and remailing lists have been created:

    AIList-BBOARD@RUTGERS
    NYU-AIList@NYU
    "XeroxAIList^.PA"@PARC-MAXC
    UCI-AIList.UCI@Rand-Relay

I am told that the PARC list has 94 members.  As yet there is no 
BBoard at CMU (36 members); someone might want to establish one.  I 
will publish an updated list of hosts after the membership settles 
down.

                                        -- Ken Laws

------------------------------

Date: Tue, 26 Apr 83 18:26:42 EDT
From: John O'Donnell <Odonnell@YALE.ARPA>
Subject: The T Programming Language

I am pleased to announce the availability of our implementation of the
T programming language for the VAX under the Unix (4.xBSD) and VMS
(3.x) operating systems and for the Apollo Domain workstation.

T is a new dialect of Lisp comparable in power to other recent
dialects such as Lisp Machine Lisp and Common Lisp, but fundamentally
more similar in spirit to Scheme than to traditional Lisps.

The current system, version 2, is in production use at Yale and
elsewhere, in AI and systems research and in education.  A number of
large programs have been built in T, and the implementation is
acceptably stable and robust.  Yale and Harvard successfully taught
undergraduate courses this semester in T (Harvard used Sussman and
Abelson's 6.001 course).  Much work remains to be done; we are
currently expanding the programming environment and improving
performance.  Our next release is planned for sometime this fall.

Please contact me directly if you're interested in getting the
distribution.

                          John O'Donnell
                          Department of Computer Science
                          Box 2158 Yale Station
                          New Haven CT 06520
                          (203) 432-4666
                          ODonnell@Yale
                          ...decvax!yale-comix!odonnell

------------------------------

Date: Thu 28 Apr 83 14:40:26-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: parallel non-numeric algorithms

        Part of my Ph.D. work has been in parallel processing
algorithms in graph theory (unfortunately, no hardware is currently
available for an implementation, but that only makes the excursion a
little less satisfying). Specifically, I have been simulating the
performance of an algorithm for the utilization of parallel processing
in speeding up the common subgraph search problem. Commonly, this
problem involves finding all sufficiently large subgraphs common to
two given graphs.  No efficient algorithm exists for doing this
search.

        I know that several AI groups are working on parallel
processing in AI, but have not found any discussion involving graph
searching techniques. The bias in parallel processing has been toward 
numerical algorithms and the use of array processors; I figured that
there MUST be some AI group working at parallel processing in a
non-numerical field such as graph searching.  I would like to hear
from anyone who knows of such or similar work.
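
For concreteness, the brute-force version of the search is easy to
state, though not to afford.  This is an illustrative modern sketch,
not the algorithm under discussion:

```python
from itertools import combinations, permutations

def common_subgraphs(g1, g2, k):
    """Find all size-k induced common subgraphs of g1 and g2.
    Graphs are dicts mapping each vertex to its set of neighbors.
    Returns the vertex correspondences under which the two induced
    subgraphs have identical edge structure.  Cost is exponential
    in k, which is why a speedup is attractive."""
    matches = []
    for s1 in combinations(g1, k):          # k vertices of g1
        for s2 in permutations(g2, k):      # ordered k vertices of g2
            # s1[i] -> s2[i] must preserve adjacency and non-adjacency.
            if all((s1[j] in g1[s1[i]]) == (s2[j] in g2[s2[i]])
                   for i in range(k) for j in range(i + 1, k)):
                matches.append(tuple(zip(s1, s2)))
    return matches
```

Each (s1, s2) candidate is checked independently of all the others,
so the outer loops partition naturally across processors -- the kind
of non-numeric parallelism asked about above.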

        By the way, I had heard that workers had had 'problems' with
the parallel LISP machines, but have not been able to pin anyone down
exactly as to the nature or extent of these problems.  Anyone know
exactly what was discovered in those researches?

Thanks--

David Rogers DROGERS@SUMEX-AIM.ARPA

------------------------------

Date: Fri 29 Apr 83 08:35:25-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Pattern Recognition


PR People should take note of "Candide's Practical Principles of
Experimental Pattern Recognition", by George Nagy, in the March issue
of IEEE PAMI.  I particularly like

    ... any feature may be presumed to be normally
    distributed if its mean and variance can be
    estimated from its empirically observed distribution.

and

    ... adapting the classifier to the test set is
    superior to adaptation on the training set.

                                -- Ken Laws

------------------------------

Date: 30 April 1983 04:00 EDT
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Standardized Correspondence

Rather than distributing the same software to every site, it would
make more sense to develop a machine-to-machine language which would
express (dunning-letter-7 formality-level-3 car-payment-overdue
$127.38) in an easily parseable form.  English is complex, redundant,
and vague.  Is there any reason why we can't design a language which
is simple, efficient, and precise?  It would be awful for
(intelligent) people, but great for (stupid) machines.

-- Steve

[If the parsing and synthesis functions were common, the software
might be compiled into hardware; if rare, it might be accessed
remotely over a network.  I doubt that software storage requirements
will be a problem for long.

There have been attempts at developing simpler natural languages.
One idea is to structure the language so that any idea can only
be expressed in one canonical form (DuckSpeak, Basic English,
controlled-vocabulary English as taught in our grade schools).
The other idea is to allow any semantic term to fill any syntactic
slot (sign language, Loglan).

Languages of the first type present difficulties because of the
overloading of words (e.g., "get" in English).  This can be avoided
in limited domains such as repair manuals, but for general usage
something like Roger Schank's canonical forms would be needed.

I don't know what computational difficulties are presented by
languages of the second type.  If the Whorfian hypothesis is correct,
more ideas can be "thought", which may lead to greater complexity.
On the other hand, the algorithm needn't keep track of awkward or
stereotyped methods of expressing basically simple concepts.  ("I
greened my house", or what is the past tense of "beware"?)

I trust that computational difficulties can be overcome.  The
greatest problem in achieving user acceptance of parsed transmissions
may be that resynthesis will generate a paraphrase, or corrected
version, of the original.  Humans tend to be sentimental about their
own syntactic constructs, even down to where the lines are broken.

					-- KIL ]

------------------------------

Date: Thu 28 Apr 83 00:52:52-PDT
From: Dan Dolata <DOLATA@SUMEX-AIM.ARPA>
Subject: Alternate distribution of AIlist


I am moving to Sweden soon, and while I will be able to touch back to 
my home base via international-net occasionally, the long distance
rates make it prohibitive to try to read any large number of lines
each day.  I was wondering if it might be possible to set up some sort
of system where AIlist could be spooled onto small tapes or floppies
monthly, and mailed to people who are away from the net?  Of course, I
would be happy to pay for mailing costs, and would be happy to buy the
person who did the grunt work a nice meal when I got back from Europe
(or in Karlsruhe during IJCAI).

Of course, if it became necessary to charge $ because you had to hire
a person to mount the media, I would be happy to subscribe!

Thanks for your time
        Dan [dolata@sumex]

[I'm afraid that I haven't the resources to oblige.  I suggest that
printed copies be sent, providing that doesn't violate any technology
export laws.  Dan would like to know if others are interested in
getting or providing machine-readable copies.  -- KIL]

------------------------------

Date: Fri 29 Apr 83 09:02:22-PDT
From: AIList <AIList-Request@SRI-AI>
Subject: Facetiae


I hope everyone kept V1 #1.  Someday it may be as valuable as the
first edition of Superman comics.

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************
-------

∂14-May-83  1727	LAWS%SRI-AI.ARPA@SCORE 	AIList Digest   V1 #3
Received: from SU-SCORE by SU-AI with PUP; 14-May-83 17:27 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Sat 14 May 83 17:30:27-PDT
Date: Sat 14 May 83 17:18:29-PDT
From: Laws@SRI-AI <AIList-Request@SRI-AI.ARPA>
Subject: AIList Digest   V1 #3
To: Local-AI-BBoard%SAIL@SU-SCORE.ARPA

Date: Sunday, May 8, 1983 11:12PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #3
To: AIList@SRI-AI


AIList Digest             Monday, 9 May 1983        Volume 1 : Issue 3

Today's Topics:
  Administrivia
  Re: the Whorfian hypothesis
  Re: Artificial Languages
  Putting programmers out of a Job?
----------------------------------------------------------------------

Date: Sun 8 May 83 23:05:43-PDT
From: Laws@SRI-AI <AIList-Request@SRI-AI.ARPA>
Subject: Administrivia


We have been joined by new BBoards or remailing nodes at

    AI@NLM-MCS
    AIList-Usenet@SRI-UNIX
    Post-AIList.UNC@UDel-Relay

The Usenet connection is a two-way link with the net.ai discussion
group.  More on this later.

I have been responding to additions by sending out back issues.  
Henceforth I will only send a welcome message and statement of policy.
Back issues are available by request.

I have tried to establish contact with all who have asked to be
enrolled, but several sites have been unreachable for the last two
weeks.  I cannot guarantee delivery of every issue to every site, and
may cut short the usual two-week retry period in order to reduce the
system load.

                                        -- Ken Laws

------------------------------

Date: 2 May 1983 1038-EDT (Monday)
From: Robert.Frederking@CMU-CS-A (C410RF60)
Subject: Re: the Whorfian hypothesis

        I just thought I should point out that the Whorfian hypothesis
is one of those things which was rejected a long time ago in its
original field (at least in its strong form), but has remained
interesting and widely talked about in other fields.  At the time
Whorf hypothesized that language constrains the way people think, the
views of language and culture were that language was a highly
systematic, constrained thing, whereas culture was just an arbitrary
collection of facts.  By the time Whorf was getting really popular in
other circles, anthropologists had realized that culture was also
systematic, with constraints between different parts.  In other words,
the likelihood that an idea will be invented or imported by a culture
depends to a degree on the kinds of ideas the people in the culture 
are already familiar with.

        The current view in anthropology (current in the 70s, that is)
is that language and culture do influence each other, but that the
influence is much weaker, more subtle, and more bidirectional than
the Whorfian hypothesis suggested.

------------------------------

Date: 3 May 83 17:31:01 EDT  (Tue)
From: Fred Blonder <fred.umcp-cs@UDel-Relay>
Subject: Artificial Languages

[Fred has pointed out that the "DuckSpeak" I cited was officially
called Newspeak in Orwell's 1984.  -- KIL]

Also: are you aware of Esperanto? Its grammar (only 16 rules) allows
any word to function as any part of speech by an appropriate change
to its suffix.
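
[Fred's point about suffixes can be sketched in a few lines.  The
following is an illustrative model, not part of the original message:
the endings -o (noun), -a (adjective), -e (adverb), -i (verb
infinitive) are the standard Esperanto ones, and the root "san-"
("health") was chosen just for the example. -- KIL]

```python
# Esperanto signals part of speech with the word's final vowel.
# Standard endings; the root "san-" ("health") is illustrative only.
ENDINGS = {"o": "noun", "a": "adjective", "e": "adverb", "i": "verb (infinitive)"}

def part_of_speech(word):
    """Classify an Esperanto word by its final letter."""
    return ENDINGS.get(word[-1], "particle or other")

for w in ("sano", "sana", "sane", "sani"):
    print(w, "->", part_of_speech(w))
```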

------------------------------

[We are now linked to the Usenet net.ai discussion, which is
more nearly real-time than the AIList digest.  The following
is evidently from a continuing discussion, and I apologize to
the author if he did not expect such a wide audience.  A more 
formal submission system might be arranged if Usenet members
want both private and public discussions, or if they object to
receiving digested copies of previously seen messages.  The
possibility of forwarding undigested AIList submissions to Usenet
is being considered. -- KIL]


Date: 1 May 83 22:31:14-PDT (Sun)
From: decvax!utzoo!watmath!bstempleton @ Ucb-Vax
Subject: Putting programmers out of a Job?

I hope the person who stated that this self-programming computer
project will eliminate the need for programmers is not on the AI
project.  If so, they should fire him/her and get somebody who is a
good programmer.  Programming is a highly creative art that uses some
highly complex technological tools.  No AI project will put a good
programmer out of a job without being able to pass a Turing test
first.  This is because a good programmer spends more time designing
than coding.

In fact, I would be all for a machine which I could tell to write a
program to traverse a data structure doing this and that to it.  It
would get rid of all the tedious stuff, and I would be able to produce
all kinds of wonderful programs.  Out of a job?  Hardly - I'd be rich,
and so would a lot of other people, notably those on AI projects.

I doubt that ten years will show a computer that can do things like
design (or invent) things like screen editors, VisiCalc(TM),
relational databases and compilers.  If it could do all that, it's
intelligent - not just a self-programming machine.

------------------------------

End of AIList Digest
********************
-------

∂16-May-83  0058	LAWS%SRI-AI.ARPA@SCORE 	AIList Digest   V1 #4
Received: from SU-SCORE by SU-AI with PUP; 16-May-83 00:57 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Mon 16 May 83 00:03:35-PDT
Date: Sunday, May 15, 1983 9:33PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #4
To: AIList@SRI-AI


AIList Digest            Monday, 16 May 1983        Volume 1 : Issue 4

Today's Topics:
  Research Posts in AI at Edinburgh University
  AI at the AAAS
  Expert System for IC Processing
  Re: Artificial languages
  Loglan
  Excerpt about AI from a NYTimes interview with Stanislaw Lem
----------------------------------------------------------------------

Date: Wednesday, 11-May-83  16:29:52-BST
From: DAVE     FHL (on ERCC DEC-10)  <bowen@edxa>
Reply-to: bowen%edxa%ucl-cs@isid
Subject: Research Posts in AI at Edinburgh University

--------



                        UNIVERSITY OF EDINBURGH
                 DEPARTMENT OF ARTIFICIAL INTELLIGENCE


                           2 Research Fellows


Applications are invited for these SERC-funded posts, tenable from
July 1 1983 or a mutually agreed date, to work on a project to
formulate techniques whereby an intelligent knowledge-based training
system can deduce what a user's aims are. Experience of UNIX and
programming is essential.  Experience of PROLOG or LISP and some
knowledge of IKBS techniques would be an advantage.  The posts are
tenable for three years, on the R1A scale (6375-11105 pounds).
Candidates should have a higher degree in a relevant discipline, such
as Mathematics, Computer Science or Experimental Psychology.
Applications, including a curriculum vitae and names of two referees,
should be sent to The Secretary's Office, University of Edinburgh, Old
College, South Bridge, Edinburgh EH8 9YL, Scotland, (from whom further
details can be obtained), by 31 May 1983.


------------------------------

Date: 13 May 83 10:53:04 EDT
From: DAVID.LEWIN  <LEWIN@CMU-CS-C>
Subject: AI at the AAAS

The following session at the upcoming AAAS meeting should be of 
interest to readers of AI-LIST.

ARTIFICIAL INTELLIGENCE: ITS SCIENCE AND APPLICATION
American Association for the Advancement of Science
Annual Meeting- Detroit, MI; Sunday, May 29, 1983

Arranged by: Daniel Berg, Provost--Carnegie-Mellon University
             Raj Reddy, Director--Robotics Institute, CMU

"Robust Man-Machine Communication"
  Jaime Carbonell, CMU

"Artificial Intelligence Applications in Electronic Manufacturing"
  Samuel H. Fuller, Digital Equipment Corp. (Hudson, MA)

"Expert Systems in VLSI Design"
  Mark Stefik, Xerox-PARC

"Science Needs in Artificial Intelligence"
  Nils Nilsson, SRI International

"Medical Applications of Artificial Intelligence"
  Jack D. Myers, Univ. of Pittsburgh

"The Application of Strategic Planning and Artificial Intelligence to
the Management of the Urban Infrastructure"
  Charles Steger, Virginia Polytechnic Inst. & State Univ.

------------------------------

Date: 14 May 1983 2154-PDT (Saturday)
From: ricks%UCBCAD@Berkeley
Subject: Expert System for IC Processing


I'm about to start preliminary work on an expert system for integrated
circuit processing.  At this time, it's not clear whether it will deal
with diagnosing and correcting problems in a process line, or with
designing new process lines.

I would like to know if anybody has done any work in this area, and
what the readers of this list think about building an expert system
for this purpose.

I realize that this letter is somewhat vague, but I'm in the early
stages of this and I'd like to see what has been done and what options
I have.

                        Thanks,

                        Rick L Spickelmier
                        ricks@berkeley

                        University of California
                        Electronics Research Laboratory
                        Cory Hall
                        Berkeley, CA 94720
                        (415) 642-8186

------------------------------

Date: 11 May 1983 19:10 EDT
From: Stephen G. Rowley <SGR @ MIT-MC>
Subject: Artificial languages

Since people seem to be interested in artificial languages and the
Whorfian hypothesis, some words about Loglan might be interesting.
(If that's what started the discussion and I missed it, apologies to
all...)

Loglan is a language invented by J. Brown in the mid-50's to test the 
Whorfian hypothesis with a radically different language.  It's got a
simple grammar believed to be utterly unambiguous, a syntax based on
predicate calculus, and a morphology that tells you what "part of
speech" (to stretch a term) a word is from its vowel-consonant pattern.

Of the 14 non-vacuous logical connectives, all are pronounceable in
one syllable.  By comparison, English dances about a LOT to say some
of them.
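
[The figure of 14 presumably counts the 16 possible binary truth
functions minus the two constant (vacuous) ones.  A quick sanity check
of that arithmetic -- my illustration, not from the original
message. -- KIL]

```python
from itertools import product

# A binary connective is a truth function on the four input pairs
# (T,T), (T,F), (F,T), (F,F), so there are 2**4 = 16 of them in all.
tables = list(product([True, False], repeat=4))

# The vacuous ones are the two constant functions (always true, always false).
vacuous = [t for t in tables if len(set(t)) == 1]

print(len(tables), "connectives,", len(tables) - len(vacuous), "non-vacuous")
# -> 16 connectives, 14 non-vacuous
```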

There are some books about it, and even a couple of regular journals.
Once upon a time, there was a Loglan mailing list here at MIT, but it
died of lack of interest.

        -SGR

------------------------------

[Here is further info on Loglan culled from Human-Nets. -- KIL]

Date: 11 Dec 1981 2314-PST
From: JSP at WASHINGTON
Subject: Loglan as command language.

  English is optimized to serve as a verbal means of communication 
between intelligences.  It would be highly surprising if it turned out
to be optimal for the much different task of communicating between an 
intelligent (human) and a stupid (computer) via keyboard.  In fact, it
would be surprising if English proved well suited to any sort of 
precise description, given that various mathematical notations, Algol 
and BNF, for example, all originated as attempts to escape the 
ambiguity and opacity of English.  (Correct me if I'm wrong, but I 
seem to recall that Algol was originally a publication language for 
human-human communication, programming applications coming later.)
  Much the same may be said, with less force, for Loglan, which is 
also targeted on human-human communication, albeit with a special 
focus on simplicity and avoidance of syntactic ambiguity.  (Other 
Loglanists might not agree.)
  For those interested, the Loglan Institute is alive and well, if 
rather hard to find pending completion of a revised grammar and word 
morphology.  I'd be happy to correspond with anyone interested in the 
language...  and delighted to hear from any YACCaholic TL subscribers 
interested in working on the grammar...
        --Jeff Prothero

------------------------------

Date: 11 Dec 1981 06:46:30-PST
From: decvax!pur-ee!purdue!kad at Berkeley (Ken Dickey at Purdue CS)
Subject: Loglan

I have received several requests for more information on Loglan, a 
language which may be ideal for man-computer communication.  Here is a
brief description:


Synopsis: (from the book jacket of LOGLAN 1: A LOGICAL LANGUAGE, James
C. Brown, Third Edition)

        Loglan is a language designed to test the Sapir-Whorf 
hypothesis that the natural languages limit human thought.  It does
so by pushing those limits outward in predictable directions by:

*incorporating the notational elegance of symbolic logic (it is 
TRANSFORMATIONALLY POWERFUL);

*forcing the fewest possible assumptions about "reality" on its 
speakers (it is METAPHYSICALLY PARSIMONIOUS);

*removing all structural sources of ambiguity (in Loglan anything, no 
matter how implausible, can be said clearly; for it is SYNTACTICALLY 
UNAMBIGUOUS);

*generalizing all semantic operations (whatever can be done to any 
Loglan word can be done to every Loglan word; for it is SEMANTICALLY 
NON-RESTRICTIVE);

*deriving its basic word-stock from eight natural languages, including
three Oriental ones (it is therefore CULTURALLY NEUTRAL);


Notes:
        Loglan has a small grammar (an order of magnitude smaller than
any "natural" grammar).

        It is isomorphic (spelled phonetically -- all punctuation is
spoken).

        There is a set of rules for word usage so that words are
uniquely resolvable (no "Marzee Dotes" problem).

        The most frequently used grammatical operators are the 
shortest words.

        The word stock is derived from eight languages (Hindi, 
Japanese, Mandarin Chinese, English, Spanish, Russian, French, and 
German), weighted by usage for recognizability.  I.e., using Loglan
rules to satisfy form, words are made up to be mnemonic to most of the
world's speakers.

        Loglan "predicates" are, in a sense, complete.  For example 
MATMA means X is the MOTHER of Y by father W.  Joan matma == Joan is 
the mother of .. by .. == Joan is a mother.  Matma Paul == Paul's 
mother, etc.  These "slots" can change positions by means of 
operators.
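
[The slot idea sketches easily.  The following is my illustrative
model of the gloss above, not actual Loglan grammar. -- KIL]

```python
# Illustrative model (not real Loglan): the predicate "matma" has three
# argument places, glossed as "X is the mother of Y by father W".
# Unfilled places stay open, as in "Joan matma" == "Joan is a mother".

def matma(x=None, y=None, w=None):
    """Render the matma predicate with any subset of its places filled."""
    blank = ".."
    return f"{x or blank} is the mother of {y or blank} by {w or blank}"

print(matma(x="Joan"))            # all but the first place left open
print(matma(x="Mary", y="Paul"))  # Mary is Paul's mother
```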

        Modifiers precede modified words.  Garfs school => a garfs 
type of school (a school FOR garfs) as opposed to a school BELONGING 
to garfs.

        Language assumptions can be quite different. For example, 
there are a number of words for "yes", meaning "yes, I will", "yes, I 
agree", etc.

        Although considered an experimental tool, there are people
who actually speak it.  (It is a USEFUL tool.)


Pointer: The Loglan Institute
         2261 Soledad Rancho Road
         San Diego, California 92109


As I am an armchair linguist, you should reference the above pointer 
for more information.


                                        -Ken

------------------------------


Date: 8 Apr 1982 01:32:44-PST
From: ihnss!houxi!u1100a!rick at Berkeley
Subject: Loglan

A while ago somebody (I believe it was in fa.human-nets during a 
discussion of sexism in personal pronouns) asked the question "What 
does Loglan do about gender?".

As usual with such questions the answer is not easy to describe in a 
few words.  But to simplify somewhat, Loglan has no concept of 
grammatical gender at all.  The language has a series of five words 
that act (approximately) like third person pronouns, but there is no 
notion of sex associated with them.

Loglan also does away with most of the usual grammatical categories, 
such as "nouns", "adjectives" and "verbs".  In their place it has a 
single category called "predicate".  Thus the loglan word "blanu" can 
be variously translated as "blue" (an adjective), "is a blue thing" (a
verb-like usage), and "blue thing" (a noun-like usage).

Loglan is uninflected. It has no declensions or conjugations.  But it 
does have a flock of "little words" that serve various grammatical and
punctuational purposes.  They also take the place of such affixes as 
"-ness" (as in "blueness") in English.

More information about Loglan can be gotten by writing to:

                        The Loglan Institute, Inc.
                        2261 Soledad Rancho Road
                        San Diego, CA 92109

------------------------------

Date: Sun 15 May 83 12:17:41-PDT
From: Robert Amsler <AMSLER@SRI-AI.ARPA>
Subject: Excerpt about AI from a NYTimes interview with Stanislaw Lem

Sunday, March 20th, NYTimes Book Review Interview with Stanislaw Lem
by Peter Engel

Interviewer: "You mentioned robots, and certainly one of the most 
important themes in your writing is the equality of men and robots as
thinking, sentient beings.  Do you feel that artificial intelligence
at this level will be achieved within the foreseeable future?"


Lem: "My opinion is that in roughly 100 years we will arrive at an
artificial intelligence that is more intelligent and reasonable than
human intelligence, but it will be completely different.  There are no
signs indicating that computers will in certain fields become equal to
men. You should not be misled by the fact that you can play chess with
a computer. If you want to accomplish certain individual tasks,
computers are fine. But when you are talking about psychological
matters, every one of us carries in his head the heritage of the
armored fish, the dinosaurs, and other mammals. These limitations do
not exist outside the domain of biological evolution. And there's no
reason why we should imitate them -- the very idea is silly. In the 
field of mechanics it would be the same as if the Arabs were to say
they didn't want airplanes and automobiles, only improved camels. Or
that you shouldn't supply automobiles with wheels, that you must
invent mechanical legs.

I'm going to show you a book. 'Golem XIV' is going to be published
next year in America. It's a story about the construction of a
supercomputer and how it didn't want to solve the military task it was
given, the purpose it had been constructed for in the first place.  So
it started to devote itself to higher philosophical problems. There
are two stories in 'Golem XIV,' two lectures for scientists. In the
first Golem talks about humans and the way it sees them, in the second
about itself. It tries to explain that it's already arrived at a level
of biological evolution will never reach on it own (sic). It's on the
lowest rung of a ladder, and above it there might exist now or in the
future more potent intelligences. Golem does not know whether there 
are any bounds in its progress to the upper sphere. And when it, in a
manner of speaking, takes leave of man, it is primarily for the
purpose of advancing further up this ladder.

In my own view, man will probably never be able to understand and
recognize everything directly, but in an indirect manner he will be
able to achieve command of everything if he constructs intelligence
amplifiers to fulfill his wishes. Like a small child, he will then be 
receiving gifts. But he will not be able to perceive the world
directly, like a small child who is given an electric railway. The
child can play with it, he can even dismantle it, but he will not
understand Maxwell's theory of electricity. The main difference is
that the child will one day become an adult, and then if he wants he
will eventually study and understand Maxwell's theory. But we will
never grow up any further. We will only be able to receive gifts from
the giants of intelligence that we'll be able to build.  There is a
limit to human perception, and beyond this horizon the fruit of
observation will be gleaned from other beings, research machines or
whatever. Progress may continue, but we will somehow be staying
behind."

------------------------------

End of AIList Digest
********************

∂18-May-83  1313	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #5  
Received: from SU-SCORE by SU-AI with TCP/SMTP; 18 May 83  13:13:28 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Wed 18 May 83 12:42:06-PDT
Date: Wednesday, May 18, 1983 9:33AM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #5
To: AIList@SRI-AI


AIList Digest           Wednesday, 18 May 1983      Volume 1 : Issue 5

Today's Topics:
  AI in Business Symposium
  Expert Systems Reports
----------------------------------------------------------------------

Date: Mon 16 May 83 06:34:58-PDT
From: Ted Markowitz <G.TJM@SU-SCORE.ARPA>
Subject: AI in Business Symposium

[I apologize for not getting this out before the conference, but my
net connection has been down since Monday morning.  -- KIL]


I'd just like to remind folks in the NYC area that NYU is offering a
3-day symposium on AI in Business. Among those to speak will be 
Robert Bobrow, Rich Duda, Harry Pople, John McDermott, and Roger
Schank.  Several of the lectures deal with NLP and expert systems both
in the abstract and as they apply in the real world.

The symposium is on 5/18-20 at NYU (100 Trinity Place, NY, NY 10006).
For more information call 212-285-6120.

--Ted

------------------------------

Date: Tue 17 May 83 23:18:48-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert Systems Reports

Here is a selection of recent technical reports relating to expert 
systems and hierarchical inference.  I would appreciate additions, 
particularly any relating to expert systems for image understanding 
and general vision.

                                -- Ken Laws


J.S. Aikins, J.C. Kunz, E.H. Shortliffe, and R.J. Fallat, PUFF: An 
Expert System for Interpretation of Pulmonary Function Data.  Stanford
U. Comp. Sci. Dept., STAN-CS-82-931; Stanford U. Comp. Sci.  Dept.
Heuristic Programming Project, HPP-82-013, 1982, 21p.

C. Apte, Expert Knowledge Management for Multi-Level Modelling.  
Rutgers U. Comp. Sci. Res. Lab., LCSR-TR-41, 1982.

B.G. Buchanan and R.O. Duda, Principles of Rule Based Expert Systems.
Stanford U. Comp. Sci. Dept., STAN-CS-82-926; Stanford U. Comp. Sci.  
Dept. Heuristic Programming Project, HPP-82-014, 1982, 55p.

B.G. Buchanan, Partial Bibliography of Work on Expert Systems.  
Stanford U. Comp. Sci. Dept., STAN-CS-82-953; Stanford U. Comp. Sci.  
Dept. Heuristic Programming Project, HPP-82-30, 1982, 13p.

A. Bundy and B. Silver, A Critical Survey of Rule Learning Programs.  
Edinburgh U. A.I. Dept., Res. Paper 169, 1982.

R. Davis, Expert Systems: Where are We? And Where Do We Go from Here?
M.I.T. A.I. Lab., Memo 665, 1982.

T.G. Dietterich, B. London, K. Clarkson, and G. Dromey, Learning and 
Inductive Inference (a section of the Handbook of Artificial 
Intelligence, edited by Paul R.  Cohen and Edward A. Feigenbaum).  
Stanford U. Comp. Sci. Dept., STAN-CS-82-913; Stanford U. Comp. Sci.  
Dept. Heuristic Programming Project, HPP-82-010, 1982, 215p.

G.A. Drastal and C.A. Kulikowski, Knowledge Based Acquisition of Rules
for Medical Diagnosis.  Rutgers U. Comp. Sci. Res. Lab., CBM-TM-97,
1982.

N.V. Findler, An Expert Subsystem Based on Generalized Production 
Rules.  Arizona State U. Comp. Sci. Dept., TR-82-003, 1982.

N.V. Findler and R. Lo, A Note on the Functional Estimation of Values 
of Hidden Variables--An Extended Module for Expert Systems.  Arizona 
State U. Comp. Sci. Dept., TR-82-004, 1982.

K.E. Huff and V.R. Lesser, Knowledge Based Command Understanding: An 
Example for the Software Development Environment.  Massachusetts U.  
Comp. & Info. Sci. Dept., COINS Tech.Rpt. 82-06, 1982.

J.K. Kastner, S.M. Weiss, and C.A. Kulikowski, Treatment Selection and
Explanation in Expert Medical Consultation: Application to a Model of
Ocular Herpes Simplex.  Rutgers U. Comp. Sci. Res. Lab., CBM-TR-132,
1982.

R.M. Keller, A Survey of Research in Strategy Acquisition.  Rutgers U.
Comp. Sci. Dept., DCS-TR-115, 1982.

V.E. Kelly and L.I. Steinberg, The Critter System: Analyzing Digital 
Circuits by Propagating Behaviors and Specifications.  Rutgers U.  
Comp. Sci. Res. Lab., LCSR-TR-030, 1982.

J.J. King, An Investigation of Expert Systems Technology for Automated
Troubleshooting of Scientific Instrumentation.  Hewlett Packard Co.
Comp. Sci. Lab., CSL-82-012; Hewlett Packard Co. Comp.  Res. Center,
CRC-TR-82-002, 1982.

J.J. King, Artificial Intelligence Techniques for Device 
Troubleshooting.  Hewlett Packard Co. Comp. Sci. Lab., CSL-82-009; 
Hewlett Packard Co. Comp. Res. Center, CRC-TR-82-004, 1982.

G.M.E. Lafue and T.M. Mitchell, Data Base Management Systems and 
Expert Systems for CAD.  Rutgers U. Comp. Sci. Res. Lab., LCSR-TR-028,
1982.

R.J. Lytle, Site Characterization using Knowledge Engineering -- An 
Approach for Improving Future Performance.  Cal U. Lawrence Livermore 
Lab., UCID-19560, 1982.

T.M. Mitchell, P.E. Utgoff, and R. Banerji, Learning by 
Experimentation: Acquiring and Modifying Problem Solving Heuristics.  
Rutgers U. Comp. Sci. Res. Lab., LCSR-TR-31, 1982.

P.G. Politakis, Using Empirical Analysis to Refine Expert System 
Knowledge Bases.  Rutgers U. Comp. Sci. Res. Lab., CBM-TR-130, Ph.D.  
Thesis, 1982.

M.D. Rychener, Approaches to Knowledge Acquisition: The Instructable 
Production System Project.  Carnegie Mellon U. Comp. Sci. Dept., 1981.

R.D. Schachter, An Incentive Approach to Eliciting Probabilities.  
Cal. U., Berkeley. O.R. Center, ORC 82-09, 1982.

E.H. Shortliffe and L.M. Fagan, Expert Systems Research: Modeling the 
Medical Decision Making Process.  Stanford U. Comp. Sci. Dept., 
STAN-CS-82-932; Stanford U. Comp. Sci. Dept. Heuristic Programming 
Project, HPP-82-003, 1982, 23p.

M. Suwa, A.C. Scott, and E.H. Shortliffe, An Approach to Verifying 
Completeness and Consistency in a Rule Based Expert System.  Stanford 
U. Comp. Sci. Dept., STAN-CS-82-922, 1982, 19p.

J.A. Wald and C.J. Colbourn, Steiner Trees, Partial 2-Trees, and 
Minimum IFI Networks.  Saskatchewan U. Computational Sci. Dept., Rpt.
82-06, 1982.

J.A. Wald and C.J. Colbourn, Steiner Trees in Probabilistic Networks.
Saskatchewan U. Computational Sci. Dept., Rpt. 82-07, 1982.

A. Walker, Automatic Generation of Explanations of Results from 
Knowledge Bases.  IBM Watson Res. Center, RJ 3481, 1982.

J.W. Wallis and E.H. Shortliffe, Explanatory Power for Medical Expert 
Systems: Studies in the Representation of Causal Relationships for 
Clinical Consultation.  Stanford U. Comp. Sci. Dept., STAN-CS-82-923, 
1982, 37p.

S. Weiss, C. Kulikowski, C. Apte, and M. Uschold, Building Expert 
Systems for Controlling Complex Programs.  Rutgers U. Comp. Sci. Res.
Lab., LCSR-TR-40, 1982.

Y. Yuchuan and C.A. Kulikowski, Multiple Strategies of Reasoning for 
Expert Systems.  Rutgers U. Comp. Sci. Res. Lab., CBM-TR-131, 1982.

------------------------------

End of AIList Digest
********************

∂22-May-83  0145	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #6  
Received: from SU-SCORE by SU-AI with TCP/SMTP; 22 May 83  01:45:10 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Sun 22 May 83 01:34:04-PDT
Date: Saturday, May 21, 1983 11:11PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #6
To: AIList@SRI-AI


AIList Digest            Sunday, 22 May 1983        Volume 1 : Issue 6

Today's Topics:
  Lectureships at Edinburgh University
  Distributed Problem-Solving: An Annotated Bibliography
  Loglan Cross Reference
  Re: Esperanto and LOGLAN
  Latest AI Journal Issue
  IBM EPISTLE System
  Software Copyright Info
----------------------------------------------------------------------

Date: Thursday, 12-May-83  10:31:00-BST
From: DAVE     FHL (on ERCC DEC-10)  <bowen@edxa>
Reply-to: bowen%edxa%ucl-cs@isid
Subject: Lectureships at Edinburgh University

--------



                        UNIVERSITY OF EDINBURGH
             INFORMATION TECHNOLOGY - VLSI design and IKBS.


           1 Lecturer in Artificial Intelligence (ref IT2/1)
           1 Lecturer in Computer Science (ref IT2/2)
           1 Lecturer in Electrical Engineering (ref IT2/3)


These new lectureships are being funded to expand the M.Sc. teaching 
carried out by the 3 departments in collaboration.  The posts are 
available from 1 October 83, but the starting dates could be adjusted
to attract the right candidates. These are tenure track posts.


The teaching and research interests sought are:  Artificial
Intelligence: Intelligent Knowledge-Based Systems.  Computer Science:
Probably VLSI design, but need not be so.  Electrical Engineering:
VLSI design.


Salary scales (under review): 6375-13505 pounds p.a. according to age,
qualifications and experience.

For further details write to the Secretary to the University, Old 
College, South Bridge, Edinburgh EH8 9YL, Scotland, quoting one or
more reference numbers as required (IT2/1-3 as above).

Applications (3 copies) including CV and names and addresses of 3 
referees should be sent to the same address. If you have applied in 
response to the previous Computer Science advert, ref.  1055, then you
will be considered for posts IT2/2 and IT2/3 without further 
application.

------------------------------

Date: Tue 17 May 83 23:14:55-PDT
From: Vineet Singh <vsingh@SUMEX-AIM.ARPA>
Subject: Distributed Problem-Solving: An Annotated Bibliography


This is to request contributions to an annotated bibliography of 
papers in *Distributed Problem-Solving* that I am currently compiling.
My plan is to make the bibliography available to anybody that is 
interested in it at any stage in its compilation.  Papers will be from
many diverse areas: Artificial Intelligence, Computer Systems 
(especially Distributed Systems and Multiprocessors), Analysis of 
Algorithms, Economics, Organizational Theory, etc.

Some miscellaneous comments.  My definition of distributed 
problem-solving is a very general one, namely "the process of many 
entities engaged in solving a problem", so feel free to send a 
contribution if you are not sure that a paper is suitable for this 
bibliography.  I also encourage you to make short annotations; more 
than 5 sentences is long.  All annotations in the bibliography will 
carry a reference to the author.  If your bibliography entries are in 
Scribe format that's great because the entire bibliography will be in 
Scribe.

Vineet Singh (VSINGH@SUMEX-AIM.ARPA)

------------------------------

Date: 18 May 83 17:46:05-PDT (Wed)
From: harpo!seismo!rlgvax!jack @ Ucb-Vax
Subject: Loglan Cross Reference

People interested by submissions on Loglan should see also net.nlang.

------------------------------

Date: 16 May 1983 1817-EDT (Monday)
From: Robert.Frederking@CMU-CS-A (C410RF60)
Subject: Re: Esperanto and LOGLAN

        I'm curious about something mentioned about these languages:  
has anyone made any claims regarding the Sapir-Whorf hypothesis and
the fluent users of these languages?

        Bob

------------------------------

Date: 11 May 1983 2151-EDT
From: NUDEL.CL@RUTGERS (NUDEL.CL)
Subject: Latest AI Journal Issue

[I just pulled this and the following messages from various local
BBoards that Mabry Tyson makes available at SRI-AI.  -- KIL]

[...]
I just received a copy of the March issue of the AI journal from North
Holland and I see that the talk Haralick gave here Monday appears in
that issue of AI as well.  You may like to look at the March AI in
general - it is a special issue devoted to Search and Heuristics (in
memory of John Gaschnig), and covers recent AI research of a more
formal nature than the usual AI variety. It looks like it will become
something of a classic, with papers by Pearl, Simon, Karp, Lenat, 
Purdom (who also spoke here a while ago), yours-truly, Kanal, Nau and
Haralick.

Bernard

------------------------------

Date: 9 May 83 22:57:31 EDT
From: John Stuckey @CMUC
Subject: Presentation of IBM EPISTLE system

Dr. Lance A. Miller, director of the Language and Knowledge Systems 
Laboratory of IBM's Thomas J. Watson Research Center, Yorktown 
Heights, will be on campus Tuesday, 10 May.  He will give a 
presentation of the lab's EPISTLE system for language analysis from 2 
to 3 pm in Gregg Hall, PH 100.  The presentation is entitled "On Text 
Composition and Quality: The IBM EPISTLE system's alternatives to
NEWSPEAK."

Abstract:
  The immediate goals of the EPISTLE system are to provide useful 
text-critiquing functions for assuring the "quality" of written 
English text.  Today the system plunges through the densest prose and 
provides an "automatic unique parse" description of the surface 
syntactic structure of each sentence.  This description provides the 
basis for the present capability to detect almost all errors of 
grammar and, shortly, to raise its editorial eyebrow at a large number
of stylistic questionables (e.g., a la @i<Chicago Manual of Style>).
  This present Orwellian capability to render binary evaluative 
decisions on arbitrary text does not, however, reflect the ultimate 
design goals of the system.  These, the present state, and the 
internal workings of the system will be discussed.

------------------------------

Date: 16 May 1983 17:28:42-EDT
From: Michael.Young at CMU-CS-SPICE
Subject: software copyright info

The January/February issue of IEEE Computer Graphics and Applications
this year has an interesting article on software copyrighting and
patents, and includes many references to other cases and sources.  It
is a well-documented case history and summary of the current situation
for anyone concerned with legal issues.

------------------------------

End of AIList Digest
********************

∂22-May-83  1319	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #7  
Received: from SU-SCORE by SU-AI with TCP/SMTP; 22 May 83  13:17:50 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Sun 22 May 83 13:21:00-PDT
Date: Sunday, May 22, 1983 10:39AM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #7
To: AIList@SRI-AI


AIList Digest            Sunday, 22 May 1983        Volume 1 : Issue 7

Today's Topics:
  LISP for VAX VMS
  AI Job
  Phil-Sci Mailing List (2)
  Computer Resident Intelligent Entity (CRIE)  [Long Article]
----------------------------------------------------------------------

Date: 19 May 1983 09:19 cdt
From: Silverman.CST at HI-MULTICS
Subject: lisp for vax vms

We are trying to find out what implementations of lisp exist that we
can run on our vax under vms.  Any information about existing systems
and how to get them would be appreciated.  Reply to Silverman at
HI-Multics.

------------------------------

Date: Thu 19 May 83 10:41:33-PDT
From: Gordon Novak <NOVAK@SU-SCORE.ARPA>
Subject: AI Job

Two individuals with strong CS background and specific interest in
A.I.  sought for development of a modern air traffic control system
for the whole U.S.  Position located on East Coast in mid-Atlantic
states.  Contact Jay. R. Kronfeld, Kronfeld & Young Inc., 412 Main
St., Ridgefield, Conn. 06877.  (203) 438-0478

------------------------------

Date: 9 May 1983 1047-EDT
From: Don <WATROUS@RUTGERS>
Subject: Prolog, Phil-Sci mailing lists

[...]

Also of interest to local readers might be the local Phil-Sci BBoard, 
which receives the Philosophy-of-Science mailing list.
Here is its description:

PHILOSOPHY-OF-SCIENCE@MIT-MC
         (or PHIL-SCI@MIT-MC)

   An immediate redistribution list discussing philosophy of science
   with emphasis on its relevance for Artificial Intelligence.

   The list is archived at MIT-OZ in the twenex mail file:
   OZ:SRC:<COMMON>PHILOSOPHY-OF-SCIENCE-ARCHIVES.TXT.1

   All requests to be added to or deleted from this list, problems,
   questions, etc., should be sent to
   PHILOSOPHY-OF-SCIENCE-REQUEST@MIT-MC (or PHIL-SCI-REQUEST@MIT-MC).

   Coordinator: John Mallery <JCMa@MIT-MC>


------------------------------

Date: 10 May 1983  01:36 EDT (Tue)
From: ←Bob <Carter@RUTGERS>
Subject: Phil-Sci Readers, Please Note


Hi,

Before FTP'ing the archive mentioned by Don's Phil-Sci announcement,

         [OZ]SRC:<COMMON>PHILOSOPHY-OF-SCIENCE-ARCHIVE.TXT

please note that this OZ file is written in ZMAIL format, and is not 
readable with either MM or BBOARD.EXE.  ZMAIL is a LISPMachine mail 
reader from MIT.  You can TYPE or edit ZMAIL files, but they are 
sometimes pretty hard to follow that way.

If you are interested in looking at back issues of this list in a more
civilized fashion, I have been following it from the beginning, and 
have a home-built archive archived (howzat again?) on GREEN, as 
I-PHIL-SCI.BABYL through VI-PHIL-SCI.BABYL. These files have been 
reformatted for convenient reading with BABYL, an EMACS-based 
mail-reader available at Rutgers.  Also archived on GREEN is a help 
file named

              USING-BABYL-TO-READ-PHIL-SCI.HLP.

Please do not attempt to RETRIEVE this stuff; drop me a note instead.
These files total several hundred pages and would swamp my GREEN 
directory if restored to disk all at once.

←Bob

------------------------------

Date: Tue, 17 May 83 19:12:45 EDT
From: Mark Weiser <weiser@NRL-CSS>
Subject: Computer Resident Intelligent Entity (CRIE)  [Long Article]

1.  The Operating System World

     An interesting test-bed for Artificial Intelligence (AI) methods 
is the world of computer systems.  Previous work has focused on 
limited particular subdomains, such as digital design [Sussman 77], 
computer configuration [McDermott & Steele 81], and programming 
knowledge [Waters 82].  Even these restricted domains have proven 
themselves very rich areas for AI techniques.  However, no one has 
(yet) gone far enough in applying Artificial Intelligence techniques 
to computer systems.  The far-out question I'm thinking of is: what 
sort of entity would live in the ecological niche supplied by the 
computer system environment?

     Organisms evolved in the biological world have been shaped 
primarily by evolutionary forces.  They cannot be holistically 
studied without considering, for instance, their energy intake and 
expenditure and their necessity for reproduction [Kormondy 69].  These
particular constraints are biological universals, but are not 
necessarily paradigmatic for non-biological intelligent organisms.  
Consider human beings, necessarily the prime subjects of those 
studying intelligent biological organisms.  We* are specifically 
attuned to a particular environmental niche by virtue of our sensory 
systems, our cognitive processing capabilities, and our motor systems.
Dreyfus [Dreyfus 72] argues from this that machines cannot be 
intelligent.  Our discussion begins from a view more akin to 
Weizenbaum's [Weizenbaum 76]: a machine intelligence is an alien 
intelligence.  What sort of sensory system is appropriate to this 
particular alien intelligence?

2.  Traditional perceptual interfaces to the computer world

     The usual way of observing a computer system is to take 
snapshots.  Such a snapshot might be a list of the active jobs on the 
system, or the names and sizes of the files, or the contents of a 
file.  If more information than a snapshot is needed, then many 
snapshots are packed together to create a "history" of system 
behavior.

     Unfortunately a history of snapshots is not a history of the 
system.  This is well known in performance modeling of computer 
systems, where a snapshot of the system every 15 minutes is useless;
an average over the 15-minute interval is the proper level of
information gathering.  The problem with snapshots is that their time
domain is fixed externally, without regard to the world being
monitored.
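The point can be made concrete with a toy sketch (the load trace and
all numbers below are invented for illustration):

```python
# Toy illustration: a point-in-time snapshot misses a burst of activity
# that an average over the whole interval captures.  (Invented numbers.)

quiet, busy = 0.05, 0.95

# 900 one-second load measurements: quiet, a 5-minute burst, quiet again.
trace = ([quiet] * 300) + ([busy] * 300) + ([quiet] * 300)

snapshot = trace[0]                   # what a sampler at the interval start sees
average = sum(trace) / len(trace)     # what interval averaging reports

print(f"snapshot: {snapshot:.2f}")    # 0.05 -- the burst is invisible
print(f"average:  {average:.2f}")     # 0.35 -- the burst shows up
```

The snapshot's sampling instant is fixed by the monitor, not by the
monitored world, so whether it sees the burst is an accident of timing.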

     It is sometimes possible to recreate the behavior of system 
objects by examining a stream of snapshots of the object's states.  
But this is the wrong approach to the problem.  Rather, ask: what sort 
of perceptual system would best notice the important objects 
(invariants) in a computer system world [Gibson 66]?  A snapshot 
contains irrelevant information, and is gathered at irrelevant times.

3.  New perceptual interfaces

     Imagine your favorite computer system.  It consists of objects 
changing in time: files, programs, processes, descriptions, data 
flowing hither and yon--a very active world.  A retinal level 
description of the biological world would display a similar confusion 
of unintegrated sensations.  But our retina wins because it is part of
a perceptual system which quickly transforms the input flux to 
invariant forms.

     Let's ignore the back end (invariant deduction end) of a computer
perceptual system for a moment, and consider just the "retinal" end.  
What kind of raw data is available about important system activities?
On the one hand are the contents of files, data structures, program 
descriptions, etc.  The understanding of these items is relatively 
well studied--as a first approximation it is what programs do.  The 
hard problem is perceiving the information flux.  Values in memory and
files are constantly changing, and often it is the changes themselves 
that are interesting, more than the old or new values.  For instance, 
noticing someone poking around in 
my files is a "who is looking" question rather than a data value 
question.  Noticing important changes in the system requires an 
event-based perceptual system.

     Activities occur in widely distributed places in a computer 
system.  User programs, file systems, and system data structures may all 
be relevant to the intelligent computer resident entity.  The human 
visual system has evolved to make good use of the transparency of our 
atmosphere to electromagnetic radiation of a certain wavelength to 
allow us to perceive activities in a wide range around us. A great 
deal of our intelligence is oriented towards the three dimensional 
space which we can survey, because it is here that we have effortless 
access to information about the objects which can immediately affect 
us [Kaplan 78].

     A computer entity must also have effortless access to information
about objects in its area of prime concern.  Its perceptual apparatus 
should be attuned to changes in those entities so interesting events 
are immediately apparent to it.  With our current technology** one 
solution is to distribute the perceptual apparatus of the entity onto 
the objects of concern.  This is radically different from any solution
chosen by nature, but the computer system world is radically different
from the biological world.  It amounts to daemon-based perception.

     The perceptual mechanism of a computer resident intelligent 
entity (CRIE) would be similar to production rules [Forgy 81] and 
daemons [Rieger 78].  A CRIE retina would have two distinctive 
features: (1) it is made up of daemons, which are (2) attached to the 
objects being observed.

     A CRIE perceptual system is quiescent until some event occurs to 
which it is attuned.  When that happens, a CRIE reacts by invoking 
various reasoning and acquisition daemons associated with that event.
These reasoning and acquisition daemons are modular pieces of 
information which are the low level meaning of events within a CRIE.  
The daemons not only watch for events occurring on the system, but 
also can observe larger contexts (such as themselves).
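A minimal sketch of such daemon-based perception (a speculative Python
illustration; the classes, events, and names are invented, not taken
from any actual CRIE implementation):

```python
# Sketch of daemon-based perception: the "retina" is a set of daemons
# attached to the observed objects themselves, quiescent until an event
# of interest occurs.  All names here are invented for illustration.

log = []                              # what the entity has "perceived"

class ObservedFile:
    """A system object that announces events to its attached daemons."""
    def __init__(self, name):
        self.name = name
        self.daemons = []             # the distributed retina

    def attach(self, daemon):
        self.daemons.append(daemon)

    def read_by(self, agent):
        # The event itself is reported, not a snapshot of data values.
        for daemon in self.daemons:
            daemon(self, "read", agent)

def who_is_looking(obj, event, agent):
    """A 'who is looking' daemon, per the file-poking example above."""
    if event == "read":
        log.append(f"{agent} is looking at {obj.name}")

f = ObservedFile("private-notes")
f.attach(who_is_looking)
f.read_by("intruder")
print(log[0])                         # intruder is looking at private-notes
```

Nothing runs between events: the daemon is dormant until the object it
is attached to announces a change.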

     To conclude: Artificial Intelligence research has, as one goal, 
understanding how to embed intelligence in a machine.  The criticisms 
of AI from Dreyfus, Weizenbaum, and others can be used constructively 
to design an intelligence appropriate to a machine.  This approach to 
intelligent system design leads to new kinds of design constraints for
computer perceptual systems, and gives new meaning to the term 
"computer vision".

FOOTNOTES

   *With apologies to those readers who are not human beings.
  **Implementation issues are important for the design of any
intelligent entity.  Why are our eyes in our head?

REFERENCES

[Dreyfus 72]
     Dreyfus, Hubert, What Computers Can't Do, Harper and Row, 1972.

[Forgy 81]
     Forgy, C. L., OPS5 User's Manual, Carnegie-Mellon University
     CMU-CS-78-116, 1981.

[Gibson 66]
     Gibson, James J., The Senses Considered as Perceptual Systems,
     Houghton Mifflin Company, 1966.

[Kaplan 78]
     Kaplan, R., The green experience, in Humanscape: environments for
     people, ed. S. Kaplan and R. Kaplan, Duxbury Press, North
     Scituate, Mass., 1978.

[Kormondy 69]
     Kormondy, Edward J., Concepts of Ecology, Prentice-Hall, 1969.

[McDermott & Steele 81]
     McDermott, J. and Steele, B., Extending a Knowledge-Based System
     to Deal with Ad Hoc Constraints, Proc. IJCAI-81, Vancouver, BC,
     1981.

[Rieger 78]
     Rieger, C., Spontaneous Computation and Its Role in AI Modelling,
     in Pattern-Directed Inference Systems, ed. Waterman & Hayes-Roth,
     Academic Press, New York, 1978.

[Sussman 77]
     Sussman, G., Electrical Design: A Problem for Artificial
     Intelligence Research, Proc. IJCAI5, Cambridge, MA, 1977.

[Waters 82]
     Waters, R. C., The Programmer's Apprentice: Knowledge Based
     Program Editing, IEEE Trans. on Software Eng. SE-8, 1, January
     1982.

[Weizenbaum 76]
     Weizenbaum, Joseph, Computer Power and Human Reason, W.H. Freeman
     and Company, 1976.


[Editors comment:

Mark doesn't seem to be asking about the natural course of evolution
in a digital environment, although that is also an interesting
question.  It is not clear to me whether he is proposing a life form
with the usual survival goals, or a monitoring system built by design
and serving some useful purpose.  Since it is difficult to discuss
such a thing without knowing its purpose, I suggest that anyone
responding state his own assumptions or teleology.

I think the new LOOPS language/environment at Xerox offers much of the
"instrumentation capability" that Mark's CRIE needs.  The software
probes can be attached to any variable a posteriori, in the manner of
a dynamic debugger.  This opens up a world of data-based (or dataflow)
techniques integrated with rule-based and other AI techniques.
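The probe idea might be sketched as follows (a speculative Python
stand-in for the LOOPS facility, not actual LOOPS; all names here are
invented):

```python
# Speculative sketch of an "active value": a probe attached to a slot of
# an existing object a posteriori, firing whenever the slot is written.
# (A Python stand-in for the LOOPS facility; all names are invented.)

class Probed:
    """Wraps any object so writes to chosen slots trigger probe functions."""
    def __init__(self, obj):
        object.__setattr__(self, "_obj", obj)
        object.__setattr__(self, "_probes", {})

    def probe(self, slot, fn):
        self._probes.setdefault(slot, []).append(fn)

    def __getattr__(self, slot):          # reads pass through untouched
        return getattr(self._obj, slot)

    def __setattr__(self, slot, value):   # writes fire the attached probes
        for fn in self._probes.get(slot, []):
            fn(slot, value)
        setattr(self._obj, slot, value)

class Sensor:
    pass

seen = []
s = Probed(Sensor())
s.probe("temp", lambda slot, value: seen.append((slot, value)))
s.temp = 98.6                             # fires the probe, then stores
print(seen)                               # [('temp', 98.6)]
```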

                                        -- KIL]

------------------------------

End of AIList Digest
********************

∂22-May-83  1248	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #8  
Received: from SU-SCORE by SU-AI with TCP/SMTP; 22 May 83  12:47:16 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Sun 22 May 83 12:50:15-PDT
Date: Sunday, May 22, 1983 11:16AM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #8
To: AIList@SRI-AI


AIList Digest            Sunday, 22 May 1983        Volume 1 : Issue 8

Today's Topics:
  1984 IEEE Logic Programming Symposium
  More Expert Systems Reports
  Requests for Addresses (2)
  Sources for Reports  [Long List]
----------------------------------------------------------------------

Date: Mon 16 May 83 11:08:44-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: 1984 IEEE Logic Programming Symposium

CALL FOR PAPERS

                 The 1984 International Symposium on
                          LOGIC PROGRAMMING

            Atlantic City, New Jersey, February 6-9, 1984

                Sponsored by the IEEE Computer Society
          and its Technical Committee on Computer Languages

The symposium will consider fundamental principles and important 
innovations in the design, definition, and implementation of logic 
programming systems and applications. Of special interest are papers 
related to parallel processing. Other topics of interest include (but 
are not limited to): distributed control schemes, FGCS, novel 
implementation techniques, performance issues, expert systems, natural
language processing and systems programming.

Please send ten copies of an 8- to 20-page, double spaced, typed 
manuscript, including a 200-250 word abstract and figures to:

                Doug DeGroot
                Program Chairman
                IBM Thomas J. Watson Research Center
                P.O. Box 218
                Yorktown Heights, New York 10598

                         Technical Committee

Jacques Cohen (Brandeis)         Fernando Pereira (SRI International)
Doug DeGroot (IBM Yorktown)      Alan Robinson (Syracuse)
Don Dwiggins (Logicon)           Joe Urban (Univ. Southwestern Louisiana)
Bob Keller (Utah)                Adrian Walker (IBM San Jose)
Jan Kormorowski (Harvard)        David Warren (SRI International)
Michael McCord (IBM Yorktown)    Jim Weiner (Univ. New Hampshire)
                 Walter Wilson (IBM DSD Poughkeepsie)

Summaries should explain what is new or interesting about the work and
what has been accomplished. It is important to include specific 
findings or results, and specific comparisons with relevant previous 
work. The committee will consider appropriateness, clarity, 
originality, significance, and overall quality of each manuscript.  
Manuscripts whose length exceeds 20 double spaced, typed pages may 
receive less careful scrutiny than the work merits.

If submissions warrant, the committee will compose a four day program.
---------------------------------------------------------------------

September 1, 1983 is the deadline for the submission of manuscripts.  
Authors will be notified of acceptance or rejection by October 30, 
1983. The accepted papers must be typed on special forms and received 
by the program chairman at the above address by December 15, 1983.  
Authors of accepted papers will be expected to sign a copyright 
release form.

Proceedings will be distributed at the symposium and will be 
subsequently available for purchase from IEEE Computer Society.

        Conference Chairman                 Technical Chairman
        Joe Urban                           Doug DeGroot
        Univ. of Southwestern Louisiana     IBM T. J. Watson Res Ctr
        CS Dept.                            P.O. Box 218
        P.O. Box 44330                      Yorktown Hts., NY 10598
        Lafayette, LA 70504                 (914)945-3497
        (318)231-6304

                Publicity Chairman
                David Warren
                SRI International
                333 Ravenswood Avenue
                Menlo Park, CA 94025
                (415)859-2128

------------------------------

Date: 19 May 83 11:13:56 EDT  (Thu)
From: Dana S. Nau <dsn.umcp-cs@UDel-Relay>
Subject: Re:  Expert Systems Reports


Here are some additions:

Reggia, J. A., Nau, D. S., and Wang, P., Diagnostic Expert Systems
     Based on a Set Covering Model, INTERNAT. JOUR. MAN-MACHINE
     STUDIES, 1983.  To appear.

Nau, D. S., Expert Computer Systems, COMPUTER 16, 2, pp.  63-85, Feb.
     1983.

Nau, D. S., Reggia, J. A., and Wang, P., Knowledge-Based Problem
     Solving Without Production Rules, IEEE 1983 TRENDS AND APPLICATIONS
     CONFERENCE, May 1983.  To appear.

Reggia, J. A., Wang, P., and Nau, D. S., Minimal Set Covers as a Model
     for Diagnostic Problem Solving, PROC. FIRST IEEE COMPUTER SOCIETY
     INTERNAT. CONF. ON MEDICAL COMPUTER SCI./COMPUTATIONAL MEDICINE,
     Sept. 1982.

------------------------------

Date: Wed 18 May 83 13:55:16-PDT
From: Samuel Holtzman <HOLTZMAN@SUMEX-AIM.ARPA>
Subject: Expert system references.

Ken,
        In the latest AILIST you posted a set of references which were
of interest to me.  Is there any simple way (other than writing
directly to the authors) to get copies of these papers?  Some of them
are published very locally, and might be difficult to obtain.  In
general, a nice feature to add on to each reference would be a net
address to send for copies.

Thanks, Sam Holtzman

------------------------------

Date: 18 May 1983 1454-PDT (Wednesday)
From: ricks%UCBCAD@Berkeley
Subject: AI Memos

I would like to get some memos from the MIT AI Lab and the Stanford
Heuristic Programming Project.  Could somebody send me information on
how to order documents from them?

            Thanks,
            Rick L Spickelmier

            ricks@berkeley

------------------------------

Date: Sat 21 May 83 22:30:00-PDT
From: Laws@SRI-AI <AIList-Request@SRI-AI.ARPA>
Subject: Sources for Reports  [Long List]

Sam is in luck: the reports I listed are all available at the Stanford
Math/CS library.  I have sorted out other AI-related topics from the
Stanford recent acquisitions list and plan to make them available in
some form.  (Direct mailing to the AIList membership seems
inappropriate unless the bibliography is short or there is a need for
a wide spectrum of readers to scan the material for errors and
omissions.  I would be interested in metacomments or personal 
communication on this matter.)

For those who want to order reports, it seems economical to list 
source addresses once rather than every time a new report becomes 
available.  I have culled the following from the Abstracts section of 
the SIGART newsletters for the last few years.  (Only a handful of 
organizations have regularly announced new reports in this forum.)  I
will publish corrections and additions as they are sent in.

                                        -- Ken Laws

Bolt Beranek and Newman, Inc.
50 Moulton Street
Cambridge, MA  02238

Brown University
Department of Computer Science
Box 1910
Providence, RI  02912

Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA  15213

Mathematics Department
Carnegie-Mellon University
Pittsburgh, PA  15213

CMU Robotics Institute
Pittsburgh, PA  15213
Robin Wallace@CMU-10A

Dept. of Computer Science
Duke University
Durham, NC  27706

Fairchild Camera and Instrument Corp.
Laboratory for Artificial Intelligence Research
4001 Miranda Ave.  MS 30-888
Palo Alto, CA  94304

General Electric
Research and Development Center
P.O. Box 43
Schenectady, NY  12301

Computer Science Department
General Motors Research Laboratories
Warren, MI  48090

Hewlett Packard Laboratories
1501 Page Mill Road
Palo Alto, CA  94303

Behavioral Sciences and Linguistics Group
Computer Science Department
IBM Thomas J. Watson Research Center
Yorktown Heights, NY  10598

Document Distribution
USC/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA  90291

Instituto de Investigaciones en Matematicos
    Aplicados y en Sistemas
Apartado Postal 20-726
Mexico 20, D.F.

ISSCO Working Papers
Institut pour les Etudes Semantiques et Cognitives
17 rue de Candolle
CH1205 Geneve
Switzerland

Information Systems Research Section
Jet Propulsion Laboratory
Pasadena, CA  91103

Department of Information Sciences
Kyoto University
Kyoto, 606, JAPAN

Centro de Informatica
Laboratorio Nacional de Engenharia Civil
101, Av. do Brazil
1799 Lisboa Codex
Portugal

Computer Vision and Graphics Laboratory
Dept. of Electrical Engineering
McGill University
Montreal, Quebec, Canada

Massachusetts Institute of Technology
Laboratory for Computer Science
Cambridge, MA  02139

MIT AI Lab.
545 Technology Square
Cambridge, MA  02139

Laboratory of Statistical and Mathematical
    Methodology
Division of Computer Research and Technology
National Institutes of Health
Bethesda, MD  20205

National Technical Information Service
5285 Port Royal Road
Springfield, Virginia  22161

Computing Systems Dept., IIMAS
National University of Mexico
Admon 20 Deleg Alv Obregon
Apdo. 20-76
01000 Mexico DF
Mexico

Naval Research Laboratory
Washington, D.C.  20375

AI Group
Dept. of Computer and Information Science
The Ohio State University
Columbus, Ohio  43210

Dept. of Computer Science
Oregon State University
Corvallis, OR  97331

School of Electrical and Civil Engineering
Purdue University
West Lafayette, IN  47907

Artificial Intelligence Center
EJ250
SRI International
333 Ravenswood Avenue
Menlo Park, CA  94025

Heuristic Programming Project
Department of Computer Science
Stanford University
Stanford, CA  94305

Department of Computer Science
State Univ. of New York at Buffalo
4226 Ridge Lea Road
Amherst, NY  14226

Department of Computer Science
State Univ. of New York at Stony Brook
Stony Brook, NY  11794

Systems Performance Dept.
TRW
One Space Park, 02/1733
Redondo Beach, CA  90278

Department of Computer Science
The Univ. of British Columbia
Vancouver, British Columbia  V6T 1W5

Department of Electrical Engineering and Computer Science
University of California
275 Cory Hall
Berkeley, CA  94720

Dept. of Information and Computer Science
University of California, Irvine
Irvine, CA  92717

Cognitive Systems Laboratory
School of Engineering and Applied Science
University of California
Los Angeles, CA  90024

Dept. of Artificial Intelligence
University of Edinburgh
Forrest Hill
Edinburgh  EH1 2QL
Scotland

Cognitive Studies Centre
Department of Computer Science
University of Essex
Wivenhoe Park
Colchester  CO4 3SQ

Research Unit for Information Science and
    Artificial Intelligence
University of Hamburg
Mittelweg 179
D-2000 Hamburg 13
Federal Republic of Germany

Fachbereich Informatik
Universitaet Hamburg
Schlueterstr. 70
D-2000 Hamburg 13
West Germany

Universitaet Hamburg
Germanisches Seminar
Von-Melle-Park 6
D-2000 Hamburg 13
Federal Republic of Germany

Publications Editor
Department of Computing
Imperial College of Science and Technology
University of London
180 Queen's Gate
London  SW7 2BZ

Publications
Advanced Automation Research Group
Coordinated Science Laboratory
University of Illinois
Urbana, IL  61801

Artificial Intelligence Group
Department of Computer Science
University of Maryland
College Park, MD  20742

Department of Neurology
University of Maryland Hospital
Baltimore, MD  21201

University Microfilms
300 North Zeeb Road
Ann Arbor, MI  48106

Department of Computer and Information Science
The Moore School  / D2
University of Pennsylvania
Philadelphia, PA  19104

Computer Science Department
University of Rochester
Rochester, NY  14627

Dept. of Computer Science
University of Toronto
Toronto, Ontario, Canada

Dept. of Computer Science
University of Utah
3160 Merrill
Engineering Building
Salt Lake City, Utah  84112

Department of Electrical Engineering
University of Washington
Seattle, WA  98105

Computer Science Dept.
University of Wisconsin
Madison, WI  53706

Department of Computer Science
Wayne State University
Detroit, MI  48202

XEROX Palo Alto Research Center
Palo Alto, CA

Yale Artificial Intelligence Project
Department of Computer Science
Box 2158 Yale Station
10 Hillhouse Ave.
New Haven, Conn.  06520

Department of Computer Science
York University
Downsview, Ontario  M3J 1P3

------------------------------

End of AIList Digest
********************

∂29-May-83  0046	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #9  
Received: from SU-SCORE by SU-AI with TCP/SMTP; 29 May 83  00:42:10 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Sun 29 May 83 00:07:15-PDT
Date: Saturday, May 28, 1983 10:58PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #9
To: AIList@SRI-AI


AIList Digest            Sunday, 29 May 1983        Volume 1 : Issue 9

Today's Topics:
  More information on Esperanto
  Address Correction & Addition
  High Technology Articles
  Request for Expert System Info
  Reading machines (2)
  Administrative Policy
----------------------------------------------------------------------

Date: 23 May 83 21:24:39 EDT  (Mon)
From: Fred Blonder <fred.umcp-cs@UDel-Relay>
Subject: More information on Esperanto

[...]

The best place to contact is:

        Esperanto League for
        North America, Inc.
        P.O. Box 1129
El Cerrito, CA 94530

They promote Esperanto wherever they can and publish a newsletter 
every few months. They also operate the ``Esperanto Book Service'' (at
the same address) which can supply Esperanto textbooks, Esperanto 
translations of literary works, original Esperanto literary works, 
tapes, records etc. Send them a dollar when writing to them if you 
want their complete catalog.

This is a partial listing of their books which may be of interest (and
is probably out of date, but it's all I have):

        Teach Yourself Esperanto, 205p $3.95 (basic text)
        Esperanto Dictionary, 419p $3.50
        Pasoj al Plena Posedo, 240p $5.50 (advanced text)
        La ingenia hidalgo Don Quijote da la Mancha
                        820p $35.00 (just what you think it is)
        Asteriks la Gaulo 48p $7.00 (comic book)

There's also some strange Esperanto/Computer-Science organization 
based in Budapest, which mails their newsletter from Sofia Bulgaria.  
I'm on their mailing list, but haven't heard from them in over a year.
Whatever it was, it probably died out.

I've also seen some pornographic books written in Esperanto, but don't
know where they can be obtained. Speaking of which: all of the 
fivortoj (fee-VOR-toy: dirty words) in Esperanto were originated by a
doctor who was a friend of the originator of the language, and who had
a sincere interest in the language, so you know they're medically and
grammatically correct. What other language do you know which can boast
this?

                                        Bonan tagon,
                                        Fred
                                        <fred.umcp-cs@Udel-Relay>

------------------------------

Date: 23 May 1983 1021-CDT
From: Jonathan Slocum <LRC.Slocum@UTEXAS-20>
Subject: Address Correction & Addition

Note that ISSCO has moved; here's the new address:

        ISSCO
        54 rte. des Acacias
        1227 Geneve
        Switzerland

(No telephone numbers changed.)  While I'm at it, I'll plug my org:

The Linguistics Research Center of the University of Texas (host of 
our friendly MCC) is engaged in R&D for Machine Translation [of 
natural languages].  A German-English translation system is running, 
has translated close to 700 pages of material of various sorts (mostly
op./maint. manuals, but also things like software/hardware
descriptions and sales brochures), and is near commercial viability.
An English-German system is underway, with another major effort to
develop a third language about to begin.  In addition, a visiting
Chinese scholar is expected to begin experimenting with
English-Chinese translation later this year.

Address for technical reports, etc:

        Linguistics Research Center
        P.O. Box 7247
        University Station
        Austin, Texas 78712

------------------------------

Date: Sat 28 May 83 22:25:40-PDT
From: Laws@SRI-AI <AIList-Request@SRI-AI.ARPA>
Subject: High Technology Articles

The June edition of High Technology contains several interesting 
articles.  There are minor pieces on industrial robots, laser printers
for electronic publishing, and 16-bit micros; also a feature on video 
games (again).

There is a lengthy extract from Ed Feigenbaum and Pamela McCorduck's 
new book on the Japanese fifth generation effort.  It seems to be a
balanced presentation.

There is also a good review of dataflow and reduction architectures, 
with some mention of other alternatives to von Neumann computers.  The
Real World is beginning to take notice.

                                        -- Ken Laws

------------------------------

Date: 24 May 1983 0827-PDT
From: RTAYLOR at USC-ECL
Subject: Request for Expert System Info


Ken,
    I get the AIList via the BB at RADC-TOPS20.  (I have access to 
RADC-Multics and TOPS-20 both, as well as the USC-ECL machine.  I
usually use the USC-ECL machine for msg composing.)  I am the newest
member of the AI Group located at RADC (Rome, NY) working with Nort
Fowler.  I am responsible for expert systems and expert systems tools.
Like Sam Holtzman, I am interested in expert systems literature and AI
in general.  I am trying to "build" a reference library for our use
here at RADC.
    My in house project is "to evaluate existing knowledge base tools
which have been used to build expert systems.  This evaluation will
determine the strengths and weaknesses of these various tools; such as
their ease of use, their knowledge base management techniques, and
their knowledge base maintenance techniques."
    Those systems/tools I am currently pursuing are:  age, ap3, emycin,
expert, frl, hearsay, kas, kee, ops5, prospector, rll, ross, and
units.  We have access to interlisp, and are in the process of
acquiring maclisp.  Among other things, I am supposed to acquire
these and any others I can find and that we can afford.  After
acquiring them, I am to "get up to speed" on each, then bring the
other members of the group up to speed on each.  Then we are to take a
series of problems ("graded levels of difficulty"), and solve each
problem using each tool/system.
    In a sense, for each tool, I'll have to come up with suggested 
instructions or some sort of tutorial--at least enough to get each
member started experimenting on their own.  Needless to say, I've
never worked with any of these tools before, and have limited
knowledge of what might be available (out there) to help me.
    In summary, I am looking for 1) literature and references for our 
library, 2) expert systems/tools for our collection and in house use
and evaluation, and 3) any existing tutorial-oriented help for the
above tools and any other (tools) which might be suggested we
investigate.
    Thanks for the help and for listening.  Please direct info and/or 
further questions to me:  rtaylor at ecl.
                                  Roz

------------------------------

Date: 25 May 83 5:38:25-PDT (Wed)
From: decvax!cca!linus!genrad!wjh12!n44a!ima!inmet!bhyde @ Ucb-Vax
Subject: Reading machines? - (nf)


  Ah, why is it that you can't seem to buy a machine to read printed 
text that actually works?
                                Ben Hyde
                                bhyde!inmet


[This seems to be an indirect request for information on the state of
the art in reading machines.  As a start, I suggest

  J. Schurman, Reading Machines, Proc. 6th Int. Conf. on
  Pattern Recognition, Munich, Oct. 1982, pp. 1031-1044.

                                -- KIL]

------------------------------

Date: 27 May 83 20:11:30-PDT (Fri)
From: hplabs!hao!seismo!presby!burdvax!hdj @ Ucb-Vax
Subject: Re: Reading machines -- an answer to the question

Doesn't Kurzweil (sp?), a Xerox Company, I think, make such a machine?
I heard about it a couple of years ago; it can supposedly recognize 
almost any font, is trainable, can read four or five lines of text at
once, and more.  I haven't heard much about the company or their
machine recently.  Anyone know more?

        Herb Jellinek, SDC Logic-Based Systems Group, burdvax!hdj

------------------------------

Date: 22 May 1983 1321-PDT
From: Keith Wescourt
Reply-to: Wescourt@USC-ISI
Subject: Administrative Policy

Ken,

You might want to consider whether job announcements, like the one
posted by Gordon Novak (originally only to SU-BBOARDS) included in
this AILIST issue, violate the ARPANET policies about commercial use.
I can imagine that job announcements from universities and non-profits
are acceptable, but that those from private, profit-making outfits and
their contracted headhunters are not.  Note that Gordon's original was
not transmitted via ARPANET, so he could not have violated any DCA
policies.

Note that I work for a private, profit-making R&D company and it would
be very much to our advantage to exploit our access to the ARPANET for
advertising job openings.

Keith

[Quite right; I apologize for picking up the item and will not report 
specific solicitations in the future.  Lab descriptions and other 
indirect information are still welcome. -- KIL]

------------------------------

End of AIList Digest
********************

∂03-Jun-83  1832	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #10
Received: from SU-SCORE by SU-AI with TCP/SMTP; 3 Jun 83  18:32:46 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Fri 3 Jun 83 18:36:03-PDT
Date: Friday, June 3, 1983 5:27PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #10
To: AIList@SRI-AI


AIList Digest            Saturday, 4 Jun 1983      Volume 1 : Issue 10

Today's Topics:
  VAX Interlisp Availability
  LIPS
  Kurzweil Reading Machine
  Chemical AI, Scientific Journals
  Current List of Hosts
----------------------------------------------------------------------

Date: 31 May 1983 1434-PDT
From: Raymond Bates <RBATES at ISIB>
Subject: VAX Interlisp Availability

In response to the Silverman [V1 #7] message:

Interlisp is available for both the VMS and UNIX operating systems for
the VAX family.  For more information send a note to Interlisp@ISIB 
with a post office address in it.

/Ray

------------------------------

Date: Thu 12 May 83 22:59:59-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: LIPS

[Reprinted from the Prolog Digest.]

The LIPS (logical inferences per sec.) measure for Prolog (and maybe 
other logic programming systems) is not as useless as it might appear 
at first sight.  Of course, resolving a goal against a clause head 
takes a different amount of work for different goals and clauses, but 
a similar observation could be made about the MIPS measure for 
conventional machines.  The speed of the concatenate loop

        conc([],L,L).
        conc([X|L1],L2,[X|L3]) :- conc(L1,L2,L3).

appears to be a remarkably good indicator of the speed of a Prolog 
implementation for large "pure" Prolog programs (i.e., Horn clauses +
cut, but no evaluable predicates except maybe arithmetic).  For example, 
compiled Prolog on a DEC 2060 runs at 43000 LIPS with this estimate, 
and (interpreted) C-Prolog on a VAX 11/780 runs at 1500 LIPS.  Prolog 
compilers for the VAX and similar machines are starting to be 
developed, and at least one is expected to reach 15000 LIPS on a VAX 
780 (it will be quite a while before these are incorporated into full 
Prolog systems). The first Prolog machine prototype from Japan (the 
Psi machine from ICOT) is expected to reach 40000 LIPS.

Extensive use of evaluable predicates may invalidate the measure to a 
large extent (but then, we aren't talking about *logic* programs 
anymore, and "logical inference" is no longer the main operation).

-- Fernando Pereira
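The benchmark Pereira describes can be sketched concretely.  The
following Python harness is an illustrative analog only (not from the
original message, and not a real Prolog measurement): it treats each
call of a concatenate function as one resolved clause head, so an
n-element list costs n+1 "inferences", and divides by elapsed time.

```python
import time

def conc(l1, l2):
    """Analog of the concatenate loop: each call stands in for
    resolving one clause head of the Prolog predicate."""
    if not l1:                          # conc([],L,L).
        return l2
    return [l1[0]] + conc(l1[1:], l2)   # conc([X|L1],L2,[X|L3]) :- conc(L1,L2,L3).

def lips_estimate(n=500, reps=200):
    """Rough inferences-per-second figure: (n + 1) clause-head
    resolutions per concatenation, over reps concatenations."""
    lst = list(range(n))
    start = time.perf_counter()
    for _ in range(reps):
        conc(lst, [])
    elapsed = time.perf_counter() - start
    return (n + 1) * reps / elapsed
```

The absolute number means little outside a real Prolog system; the
point is only that a single tight benchmark predicate can serve as a
speed yardstick in the way the message describes.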

------------------------------

Date: Tue, 31 May 83 10:25 PDT
From: GMEREDITH.ES@PARC-MAXC.ARPA
Subject: Kurzweil Reading Machine

The Kurzweil company, a subsidiary of Xerox, is producing a reading 
machine which is, to my knowledge, the most advanced in the industry.
Xerox had the unit on display at the NCC in Anaheim in May.

Xerox has recently donated a number of the Kurzweil units to various 
educational institutes to aid blind students, so some people on the
nets have probably had experience with them or can locate one nearby
to check out.

Guy

------------------------------

Date: 1 Jun 1983 1238-PDT
From: RTAYLOR at USC-ECL
Subject: Chemical AI, Scientific Journals


Ken (and everyone else!),
    Thanks for the response to my cry for help [concerning expert 
systems for evaluation at RADC].  From 9 Jun thru 20 Jun I will be 
enjoying "God's Country" (Oregon to the uninformed).  But, until my 
storage quota is exceeded, my mailbox will accept msgs--which I will 
diligently answer on my return.
    For those of you who don't know me personally, I was a chemist
before being "lured" away to the US Air Force and electronics.  I
still maintain my ACS membership (American Chemical Society).  C&E
News (the ACS weekly info publication) devoted a large part of their 9
May 83 issue to computers and mathematical tools and their influence
on Chemistry.  Their "Special Report" feature was entitled "A computer
program for organic synthesis".  I have not studied it, but have
skimmed it, thinking it would be worth reading.
    I have just received my 30 May issue, and its "Special Report"
feature is entitled "Troubled Times for Scientific Journals", which
should be of interest to those of us who do (or must) publish.  (Only
a small section on Electronic Publishing.)
    Those interested in reprints of either special report can send
$3.00 for each report (although 10 or more copies of one report are
only $1.75 each).  Requests should be sent to:  Distribution, Room
210, American Chemical Society, 1155--16th St., N.W., Washington,
D.C. 20036.  They want prepayment for orders of $20 or less.
    For those of you who are fans of Asimov's robot novels/stories,
the article "Molecular Electronic Devices Offer Challenging Goal"
might be one way of accomplishing the "positronic brain"?!  (This,
too, was in C&E News, but the 23 May issue...yes, C&E News is not my
highest reading priority--note the dates.)
    Thanks again for all your help.
                              Roz

------------------------------

Date: Thu 2 Jun 83 14:54:15-PDT
From: Laws@SRI-AI <AIList-Request@SRI-AI.ARPA>
Subject: Current List of Hosts


The following BBoards and hosts are currently on the mailing
list.

AIDS-UNIX (4), BBNA, BBNG, BBN-UNIX (2), BBN-VAX,
UCBCAD@BERKELEY, UCBCORY@BERKELEY, AIList%UCBKIM@BERKELEY,
AIList@BRL, AI-Info@CIT-20, CMUA (4), CMU-CS-A (19),
CMU-CS-C (5), CMU-CS-G, CMU-CS-IUS, CMU-CS-SPICE (2),
CMU-CS-VLSI, CMU-CS-ZOG, CMU-RI-FAS (2), CMU-RI-ISL (3),
AIList@CORNELL, DEC-MARLBORO (3), ECLA, KESTREL,
HI-MULTICS, UW-Beaver!UTCSRGV@LBL-CSAM, VORTEX@LBL-CSAM,
MIT-DSPG (2), AIList-Distribution@MIT-EE, MIT-MC (16),
MIT-CIPG@MIT-MC, MIT-EECS@MIT-MC, MIT-OZ@MIT-MC (18),
MIT-ML (3), MIT-OZ@MIT-ML, MIT-MULTICS, MIT-SPEECH,
bbAI-List@MIT-XX (+6), NADC, NBS-VMS, AI@NLM-MCS, NPRDC (2),
NYU-AIList@NYU, OFFICE-3, XeroxAIList↑.PA@PARC-MAXC,
AI@RADC-TOPS20, {EMORY, IBM-SJ, AIList.RICE, TEKTRONIX,
UCI-AIList.UCI, UIUC}@Rand-Relay, AIList-BBOARD@RUTGERS (+3),
S1-C, AIList@SRI-AI (+7), SRI-CSL, SRI-KL (7), SRI-TSC (2),
AIList-Usenet@SRI-UNIX, SU-AI, Incoming-AIList@SUMEX,
SUMEX-AIM, DSN-AI@SU-DSN, SU-SIERRA@SU-DSN, SU-SCORE (10),
G@SU-SCORE (2), Local-AI-BBoard%SAIL@SU-SCORE (+2),
UCLA-LOCUS (2), V.AI-News@UCLA-LOCUS, {BUFFALO-CS,
Spaf.GATech, AIList.UMASS-CS (+1), AI-BBD.UMCP-CS,
Post-AIList.UNC}@UDel-Relay, USC-ECL (5), USC-ECLB (3),
USC-ECLC (3), SU-AI@USC-ECL (6), USC-ISI (3), USC-ISIB (7),
USC-ISID, EDXA%UCL-CS@ISID, USC-ISIE, USC-ISIF (8),
UTAH-20 (8), BBOARD.AIList@UTEXAS-20, CC@UTEXAS-20,
CMP@UTEXAS-20, G.TI.DAK@UTEXAS-20, WASHINGTON (5), XX,
AI-LOCAL@YALE (+1).

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************

∂03-Jun-83  1853	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #11
Received: from SU-SCORE by SU-AI with TCP/SMTP; 3 Jun 83  18:53:29 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Fri 3 Jun 83 18:56:58-PDT
Date: Friday, June 3, 1983 5:34PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #11
To: AIList@SRI-AI


AIList Digest            Saturday, 4 Jun 1983      Volume 1 : Issue 11

Today's Topics:
  Quasiformal languages
  Prolog Expert Systems
  Expert Systems Bibliography [truncated]
----------------------------------------------------------------------

Date: Fri 6 May 83 17:50:20-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Quasiformal languages

[Reprinted from the Prolog Digest.]

LESK is [a quasiformal] language, developed by Doug Skuce of the CS 
Dept. of the University of Ottawa, Canada.  He has implemented it in 
Prolog.  The language allows the definition of classes (types), isa 
relationships, and complex part-whole relationships, and has a formal 
semantics (it's just logic in disguise).  It has a nice English-like 
flavor.  A reference is

"Expressing Qualitative Biomedical Knowledge Exactly Using the 
Language LESK", D. S. Skuce, Comput. Biol. Med., vol. 15, no. 1, pp.  
57-69, 1982.

Fernando

------------------------------

Date: 15 May 1983 20:46:53-PDT (Sunday)
From: Adrian Walker <ADRIAN.IBM-SJ@Rand-Relay>
Subject: Prolog Expert Systems

[Reprinted from the Prolog Digest.]


Reports available from IBM T.J. Watson Research Center, Distribution 
Services, Post Office Box 218, Yorktown Heights, New York 10598.

    Automatic Generation Of Explanations Of Results From
    Knowledge Bases. Report RJ 3481. Adrian Walker.

    Prolog/Ex1, An Inference Engine Which Explains Both Yes
    and No Answers. Report RJ 3771. Adrian Walker.

Report available from Adrian Walker, Department K51, IBM Research 
Laboratory, 5600 Cottle Road, San Jose, CA 95193.  (Adrian @ IBM-SJ).

    Data bases, Expert Systems, and Prolog. Report RJ 3870.
    Adrian Walker.

Report available from Department of Computer Science, New York 
University, 251 Mercer Street, New York, NY 10012.

    Syllog: a knowledge based data management system. Report
    No. 034, Department of Computer Science, New York University.
    Adrian Walker.


[...]

Adrian

------------------------------

Date: Thu 2 Jun 83 09:56:00-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert Systems Bibliography [truncated]

I published a bibliography of recent expert systems reports in AIList 
#5.  There is also a brief bibliography by Michael Rychener in the 
Oct. 1981 issue of SIGART and an extensive bibliography by Bruce 
Buchanan in the April 1983 issue of SIGART.  These three lists have 
almost no overlap.

I present here an additional list of references for expert systems, 
problem solving, and learning.  It contains only references not given 
in the previously mentioned sources.

I am still looking for material on expert systems and vision.  I have 
lists of technical reports from Stanford, MIT, and SRI.  I have also 
gone through the latest proceedings for IJCAI, AAAI, PatRec, PRIP, and
the DARPA IU Workshop.  Other sources or machine-readable citations
would be most welcome.  Please send them to Laws@SRI-AI or to the
AIList.

                                        -- Ken Laws


J. Bamberger, Capturing Intuitive Knowledge in Procedural Description,
AIM-398 (LOGO Memo 42), AI-MIT, Dec. 1976.

H.G. Barrow, Artificial Intelligence: State of the Art, TN 198, 
SRI-AI, Oct. 1979.

 . .

[ The entire list is 19,000 characters, or 22.1K for the digest.
Those who are interested may FTP it from <AILIST>V1N11.TXT on
SRI-AI.  Let me know if you need help: I can mail a few copies or
establish additional FTP sites.  -- KIL]

------------------------------

End of AIList Digest
********************

∂07-Jun-83  1708	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #12
Received: from SU-SCORE by SU-AI with TCP/SMTP; 7 Jun 83  17:08:16 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Tue 7 Jun 83 17:11:44-PDT
Date: Tuesday, June 7, 1983 3:03PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #12
To: AIList@SRI-AI


AIList Digest            Tuesday, 7 Jun 1983       Volume 1 : Issue 12

Today's Topics:
  Usenet Administrivia
  Kurzweil's Reading Machines (2)
  Subjective Visual Phenomena (2)
----------------------------------------------------------------------

Date: Mon 6 Jun 83 08:51:47-PDT
From: Laws@SRI-AI <AIList-Request@SRI-AI.ARPA>
Subject: Usenet Administrivia

Andrew Knutsen@SRI-Unix, who controls the gateway between AIList and
the Usenet net.ai discussion, has developed new gateway software that
separates the AIList items and deletes those originating from Usenet
sites.  I have modified the digesting software to pass through Usenet
Article-I.D.  headers as flags for the gateway.

                                        -- Ken Laws

------------------------------

Date: 1 Jun 83 21:04:41-PDT (Wed)
From: decvax!minow @ Ucb-Vax
Subject: Re: Reading machines -- an answer to the question
Article-I.D.: decvax.107

Kurzweil Computer Company, in Cambridge MA, makes several reading
machines, including one with a built-in voice synthesizer for
visually-handicapped users.  There are about 20 scattered around in
New England public libraries.

They also make a "commercial" version that may be used as an
intelligent input device to a computer -- it reads several fonts and
is trainable.  It is also fairly expensive.

Much of the theory behind the machine was explained in Kurzweil's MIT
thesis.  (Sorry, don't have a reference.)

While there are a number of page readers on the market that read OCR-B
(which looks fairly reasonable), the Kurzweil seems to be unique in
that it can read many fonts.

Martin Minow decvax!minow

------------------------------

Date: 7 Jun 83 16:45:30 EDT
From: NUDEL.CL <NUDEL.CL@RUTGERS.ARPA>
Subject: Kurzweil's reading machine

[...]

There is a write-up on Kurzweil and his work in this week's U.S. News
and World Report - June 13, 1983 page 63. It mentions his reading
machine, plans for a reading interface for automatic input to
computers directly from the printed page without the need for key
punching, and a voice-activated word processor.

Bernard

------------------------------

Date: 2 Jun 83 4:16:33-PDT (Thu)
From: harpo!floyd!vax135!cornell!uw-beaver!tektronix!ucbcad!ucbesvax.t
      turner @ Ucb-Vax
Subject: Subjective Visual Phenomena
Article-I.D.: ucbcad.678


        Talk of retinas, and composition of daemons for the "retina" 
of a computer-resident intelligence, got me to thinking of my own
retina.  I am not an expert in neuro-ocular phenomena, so if you are,
please bear with me.  I am wondering if there are explanations for
some of the following perceptions:

   1. One day some years ago I managed to walk on a railroad
      rail for about 1/2 a mile.  For at least fifteen minutes
      afterward, there was a vertical band in my field of vision,
      crossing the center, which seemed to be moving upward.
      This band corresponded to the rail I had been staring at.
      I was able to repeat this effect.

   2. In a quiet, distraction-free, dimly lit environment, I am
      able to look at an object against a uniform background,
      and somehow make it blend in enough with its background
      that it seems to disappear.  This requires considerable
      effort, and seldom lasts longer than a few seconds.  Usually,
      the object reappears when I try to focus on some feature
      or detail that seems "behind" the object.  I am fairly sure
      that this is not simply a matter of coordinating both
      eyes so that both blind-spots coincide over the image of
      the object.  It is definitely in the center of my vision.
      The image also reappears if I move my eyes at all--and
      since small eye movements are involuntary, this effect
      suggests that these movements play a role in keeping
      retinal responses flowing, whereas the image would
      decay otherwise.

   3. Recently, I have been playing a video game ("Quantum", Atari)
      that has an interesting feature: there is an object which
      moves around the screen (itself worth only 100 points)
      that leaves behind images of itself that shrink down to
      a point and disappear.  Capturing (before disappearance)
      these images is worth 300 points.  When I play to make points
      by capturing these shrinking images, there is a persistent
      after-effect that is most apparent when trying to read: as
      my eyes skip around a page, letters and words on it seem
      to shrink.  This does not happen when I play and ignore the
      shrinking "particles", or capture them only incidentally.
      The effect seems related to searching for and focussing on
      these images for several minutes of play.  It is often very
      pronounced and distracting.

    The human visual system seems to be educable at several levels.  
Perhaps there are interactions between these levels that haven't been
explored yet.

    Comments appreciated.
        Michael Turner
        ucbvax!ucbesvax.turner

------------------------------

Date: 3 Jun 83 9:04:29-PDT (Fri)
From: ihnp4!houxm!hocda!spanky!burl!duke!mcnc!ncsu!fostel @ Ucb-Vax
Subject: Re: Visual After-effects
Article-I.D.: ncsu.2199


The effects described such as the railroad track and video after
effects are well known by psychologists, and indeed are one of the
tools used to study the levels and types of processing in the optic
system. Most introductory texts on the subject will include a few
pictures to stare at in certain ways to achieve some of the types of
after-effects you noted.  I believe Scientific American even gave
away a resubscription freebie on the subject a few (6?) years ago.

The earliest description of the phenomenon I know of (circa 1910) by
a reputable psychologist was from a fellow who had a small blind
spot in one area of his retina.  (Was this Lashley?)  He observed
once at a party that when a person stood against a highly regular
wallpaper and their face was in his "spot", their head would be
"removed" and replaced by the wallpaper pattern!  The visual system
was simply making its best guess of what should be simulated for
those bad receptors.  A bit of experimenting later, it was shown
that the effect could be reproduced with anyone by simply fatiguing
the receptors at one spot (simulating a defect) by staring intently
at one object without blinking, moving the head, or saccading the
eyes.  If the level of fatigue is great enough and the background
suitably benign and predictable, the object stared at will indeed
disappear, actually being replaced by the visual system's best guess
for what the fatigued cells would report if they were sending out a
better signal.

My own experience with video games provides some confirmation of the
"modern" experience.  I play Robotron, occasionally for several
hours (it takes a while to recycle the 9,999,999 score), which involves
LOTS of little glowing things moving about, some of which must be
avoided and shot, and some of which must be "rescued".  After such a
binge, I will see afterimages of the little Good guys I must rescue,
but never the bad killer robots.  Now THAT is a high level of
processing in the optic system: it seems to be able to tell good from
bad!!

    ----GaryFostel----

------------------------------

End of AIList Digest
********************

∂08-Jun-83  1339	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #13
Received: from SU-SCORE by SU-AI with TCP/SMTP; 8 Jun 83  13:38:32 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Wed 8 Jun 83 12:44:57-PDT
Date: Wednesday, June 8, 1983 10:28AM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #13
To: AIList@SRI-AI


AIList Digest           Wednesday, 8 Jun 1983      Volume 1 : Issue 13

Today's Topics:
  PSL 3.1 Available
  DEMONSTRATIONS AT THE JUNE ACL MEETING IN CAMBRIDGE
  NSF FUNDS IJCAI TRAVEL; APPLICATION DEADLINE EXTENDED TO 6-15
----------------------------------------------------------------------

Date: 8 Jun 1983 0810-MDT
From: Robert R. Kessler <KESSLER@UTAH-20>
Subject: PSL 3.1 Available


                             PSL 3.1 AVAILABILITY

  PSL (Portable Standard LISP) is a new LISP implemented at the
University of Utah as a successor to the various Standard LISP systems
we previously distributed.  PSL is about the power, speed and flavor
of Franz LISP or MACLISP, with growing influence from Common LISP.  It
is recognized as an efficient and portable LISP implementation with
many more capabilities than described in the 1979 Standard LISP
Report.

  PSL's efficiency and portability are obtained by writing essentially
all of PSL in itself, and using an optimizing compiler driven by
tables describing the target hardware and software environment.  A
standard PSL distribution includes all the sources needed to build,
modify and maintain PSL on that machine, the executables and a manual.
PSL has a machine oriented "mode" for systems programming in LISP
(SYSLISP) that permits access to the target machine about as
efficiently as in C or PASCAL.  This mode provides for significant
speed up of user programs.

  PSL is in heavy use at Utah, and by collaborators at
Hewlett-Packard, Rand, Stanford and other sites.  Many existing
programs and applications have been adapted to PSL including Hearn's
REDUCE computer algebra system and GLISP, Novak's object oriented LISP
dialect. These are available from Hearn and Novak.

  PSL systems available from Utah include:

    VAX, Unix (4.1, 4.1a)            1600 BPI Tar format
    DEC-20, Tops-20 V4 & V5          1600 BPI Dumper format
    Apollo, Aegis 5.0                6 floppy disks, RBAK format
    Extended DEC-20, Tops-20 V5      1600 BPI Dumper format

  We are currently charging a $200 tape or floppy distribution fee for
each system.  To obtain a copy of the license and order form, please
send a NET message or letter with your US MAIL address to:

    Utah Symbolic Computation Group Secretary
    University of Utah - Dept. of Computer Science
    3160 Merrill Engineering Building
    Salt Lake City, Utah 84112

ARPANET: CRUSE@UTAH-20
USENET:  utah-cs!cruse

------------------------------

Date: Fri 3 Jun 83 10:03:46-PDT
From: Don Walker <WALKER@SRI-AI.ARPA>
Subject: DEMONSTRATIONS AT THE JUNE ACL MEETING IN CAMBRIDGE

[I apologize for not picking up on this and the next item sooner.  I
try to report pertinent items from other BBoards, but haven't quite
mastered the habit yet.  -- KIL]

People who want to demonstrate programs or systems at the forthcoming 
Annual Meeting of the Association for Computational Linguistics at MIT
on 15-17 June should contact Jon Allen as soon as possible at 
NLG.JA@mit-speech or 617:253-2509.  A variety of hardware support 
facilities are available.  We would like to provide a good
representation of current capabilities at the meeting.

------------------------------

Date: Fri 3 Jun 83 12:47:42-PDT
From: Don Walker <WALKER@SRI-AI.ARPA>
Subject: NSF FUNDS IJCAI TRAVEL; APPLICATION DEADLINE EXTENDED TO 6-15

TRAVEL SUPPORT FOR US PARTICIPANTS TO IJCAI-83
     NSF GRANT APPROVED; DEADLINE FOR APPLICATIONS EXTENDED TO 15 JUNE

IJCAII has just been informed that NSF will provide a grant for travel
support of US participants to IJCAI-83 in Karlsruhe.  The plan is to
support up to 40 US participants with travel allowances that average
$800 per person.

Because of timing constraints, we are asking US residents who are
interested in travel support for participation in IJCAI-83 to provide
us AS SOON AS POSSIBLE with a letter indicating:

    request for travel support; plans for participation at IJCAI-83
    (e.g. presentation of paper, participation in panel); expected
    benefits derived from attending; willingness to provide a
    post-conference report; current sources of research support;
    availability of travel support from other sources; and a brief
    vita.

Students are encouraged to add a letter of reference submitted by a
faculty member.

The applications should be sent to:

    Priscilla Rasmussen
    IJCAI-83 Committee on Travel
    Laboratory for Computer Science Research
    Hill Center, Busch Campus
    Rutgers University
    New Brunswick, NJ 08903

The revised deadline for applications is June 15, 1983.

The applications will be reviewed by an IJCAII selection committee.
The criteria for selection will be as follows: (1) current and past
achievements in AI (special consideration will be given to those who -
in the judgment of the IJCAI-83 Program Committee - contributed a very
high quality paper to the conference); (2) potential for contributions
in the field - that may be stimulated by attendance at the conference;
(3) lack of sufficient alternative funds to enable participation at
the conference. Priority will be given to younger, promising members
of the AI community who would not be able to attend the conference
because of lack of travel funds.

Please note that those who wish to be considered for travel support 
through this grant must use US airlines for their travel to Germany.  
Contact Iris Kay at Custom Travel Consultants (415:369-2105, 2115; 
2105 Woodside Road, Woodside, CA 94062) for further information on
special US airline rates.

Saul Amarel
General Chairman, IJCAI-83

------------------------------

End of AIList Digest
********************

∂11-Jun-83  2255	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #14
Received: from SU-SCORE by SU-AI with TCP/SMTP; 11 Jun 83  22:55:13 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Sat 11 Jun 83 22:56:44-PDT
Date: Saturday, June 11, 1983 9:30PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #14
To: AIList@SRI-AI


AIList Digest            Sunday, 12 Jun 1983       Volume 1 : Issue 14

Today's Topics:
  VAX or PDP-11/23 LISP?
  Fortune or Onyx LISP?
  Re: Visual After-effects
  Springer Verlag Prize for Symbolic Computation at IJCAI-83
----------------------------------------------------------------------

Date: 9 Jun 1983 1342-PDT
From: JBROOKSHIRE@USC-ECLB
Subject: UNIX, Eunice, LISP

Naive users looking for connections whereby we might
        i.  get LISP for VAX/VMS, maybe via Eunice?
        ii. get LISP for the PDP-11/23 under RSX-11, maybe the same?
Pointers to contacts will be greatly appreciated.  Jerry

[Availability of VAX Interlisp was noted in V1 #10.  Contact
Interlisp@ISIB. -- KIL]

------------------------------

Date: 10 June 1983 06:34 EDT
From: Michael A. Bloom <MCB @ MIT-MC>
Subject: Lisps?  Fortune? or Onyx?


I'm looking for a Lisp for the Fortune 68K computer.  Is anyone aware
of one existing?  Has anyone ported Franz Lisp to the fortune?

Also, has anyone ported ANY Lisp to the Onyx C8002 running system
III?

I'll be grateful for any leads.

- Michael Bloom
        mcb@mit-mc

------------------------------

Date: 9 Jun 83 16:42:42-PDT (Thu)
From: decvax!cca!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: Visual After-effects
Article-I.D.: dciem.240

Actually, the blind-spot game of removing people's heads has a long
history. King Charles II of England used to amuse himself by seeing
how his courtiers would look without their heads. And it is true that
any regular pattern behind will be filled in across either the normal
blind spot or blind spots due to retinal problems.

As for the effect in which objects tend to disappear if stared at, 
this is normally studied with special devices attached to the eyeball 
(on a contact lens) to ensure that the visual world remains stationary
on the eye. Objects rapidly vanish under these conditions, but
reappear in fragmentary form from time to time. Very slight shifts of
viewpoint tend to make the objects come back, which is probably the
reason attending to a detail "behind" the object makes it return. It
is easier to make things with blurred or diffuse edges go away than
things with sharp edges (so I imagine people with poor eyesight can do
it more easily than people with good vision).

The effect of changing letter size after watching for game objects 
that change size is another example of the same kind of thing as the
railroad track after-movement effect. It's probably a different visual
channel (we have separate channels for size changes and for movement)
but the principle is the same. Some people claim that the effect is
due to fatigue of the system sensitive to movement in one direction,
leaving the balancing components sensitive to movement in the other
direction to control what is seen when the stimulation is neutral
(i.e., the other direction is more sensitive after one is fatigued).
I'm not convinced by this explanation. Things are probably more
complicated than that.

------------------------------

Date: Tuesday, 7-Jun-83  17:20:13-BST
From: BUNDY    HPS (on ERCC DEC-10)  <bundy@edxa>
Reply-to: bundy@rutgers
Subject: Springer Verlag Prize for Symbolic Computation at IJCAI-83

--------

                        IJCAI-83

        SPRINGER-VERLAG PRIZE FOR SYMBOLIC COMPUTATION


I am pleased to announce that the paper, "Scale-Space Filtering", by 
Andy Witkin of Fairchild Artificial Intelligence Research Laboratory, 
has been awarded the Springer-Verlag prize for Symbolic Computation.  
The prize will be presented at the Eighth International Joint 
Conference on Artificial Intelligence, to be held in Karlsruhe, West 
Germany, from 8th to 12th August 1983.

The Symbolic Computation Prize has recently been announced by 
Springer-Verlag, as a sign of their interest in Artificial
Intelligence and in the work of the scientists active in this field.
It is named after their new book series on Artificial Intelligence and
Computer Graphics, and is awarded, by the programme committee, to the
best paper contributed to the IJCAI conference.  The prize is $500.

The IJCAI-83 programme committee has interpreted its brief as being to
select the paper which best meets the following criteria.

(a) It reports a significant and original piece of research of direct 
relevance to Artificial Intelligence.

(b) This research serves as a model for how Artificial Intelligence 
research should be conducted.

(c) The paper is well presented for a specialist reader.

Witkin's paper is clearly presented and is intelligible to a 
non-specialist reader, without sacrificing technical validity and 
clarity.  It describes a new approach to perceptual organization, and
an implementation with satisfying performance.

Among the other papers submitted to IJCAI-83 and considered for the 
Symbolic Computation Prize, the programme committee would like to give
an honourable mention to "Completeness of the Negation as Failure 
Rule", by Joxan Jaffar, Jean-Louis Lassez and John Lloyd of the 
University of Melbourne.


                        Alan Bundy
                        Programme Chairman, IJCAI-83

------------------------------

End of AIList Digest
********************

∂15-Jun-83  0011	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #15
Received: from SU-SCORE by SU-AI with TCP/SMTP; 15 Jun 83  00:10:50 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Wed 15 Jun 83 00:12:16-PDT
Date: Tuesday, June 14, 1983 10:42PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #15
To: AIList@SRI-AI


AIList Digest           Wednesday, 15 Jun 1983     Volume 1 : Issue 15

Today's Topics:
  Natural Language Challenge
  An entertaining AI project?
  Lisp for VAX/VMS
  Prolog For The Vax
  Description of AI research at TRW
  1984 National Computer Conference: Call for papers
----------------------------------------------------------------------

Date: 10 Jun 1983 at 0928-PDT
From: zaumen@Sri-Tsc
Subject: Natural Language Application

Recently I had to try to understand the following sentence:

   The insured hereby covenants that no release has been or will
   be given to or settlement of compromise made with any third
   party who may be liable in damages to the insured and the
   insured in consideration of the payment made under the
   policy hereby assigns and subrogates to the said Company
   all rights and causes of action he may have because of this
   loss, to the extent of payments made hereunder to him, and the
   insured hereby authorizes the Company to prosecute any claim or
   suit in its name or the name of the insured, against any person 
   or organization legally responsible for the loss.

I could only guess at what this means.  The main clue seems to be that
the reference to "the Company" is in a form normally reserved for a
deity.

I agree to give one can of Coors Lite to the first person who shows me
a valid parsing (done by an AI program) of the above legalese.  This 
may seem like a very low payment considering the difficulty of the 
task: it merely reflects my opinion of legalese.

------------------------------

Date: 12 Jun 1983 1733-MDT
From: William Galway <Galway@UTAH-20>
Subject: An entertaining AI project?

I seem to recall that "off the wall" ideas were suggested as one of
the topics for this mailing list, so here goes.

We're all familiar with computer programs that play games like chess
and backgammon, but what about the new generation of games that have
sprung up with computers?  For example, I think ROGUE would be a
nearly ideal game for a computer to play both sides of.  The game is
highly structured in many ways, but might still provide interesting
problems in perception, knowledge representation, and learning.

Would anyone care to take the challenge to write such a program?
Could they suggest other similar games that would be appropriate for
computers to play?  (Pacman?)  Is there anything new to be learned in
writing such a program, or would it just be an expensive toy?  (Or
teaching aid, for a class project?)

Thanks.

--Will Galway

------------------------------

Date: Tue, 14 Jun 1983  20:58 EDT
From: GJC%MIT-OZ@MIT-MC
Subject: Lisp for VAX/VMS

VAX-NIL is a native VAX/VMS lisp programming environment, receiving 
support from both the Laboratory for Computer Science and the 
Artificial Intelligence Laboratory at MIT for use as a research tool.  
As a lisp programming environment it is entirely self-contained in one
large address space, including a compatible EMACS editor written in 
NIL.  The language is a superset of that defined in the Common-Lisp 
standard, and is greatly influenced by many language features of the 
Lispmachine and Maclisp.

A distribution kit can be obtained from GSB@MIT-ML.

-GJC

------------------------------

Date: Sun 12 Jun 83 19:44:10-PDT
From: SHardy@SRI-KL.ARPA
Subject: Prolog For The Vax

[Reprinted from the Prolog Digest.]

Implementation For VAX/VMS

The Sussex Poplog system is a multi-language programming environment 
for AI tasks.  It includes:

(a) A native mode Prolog compiler, compatible with the Clocksin and
    Mellish book.  The system supports floating point arithmetic.

(b) A POP-11 compiler.  POP-11 and Prolog programs may share data
    structures and may call each other as subroutines; they may also
    co-routine with each other. (POP is the British derivative of
    LISP; functionally equivalent to Lisp, it has a more conventional
    syntax.)

(c) VED, an Emacs like extendible editor, is part of the run time
    system.  VED is written in POP-11 and so can easily be extended.
    It can also be used for input (e.g. simple menus) and for output
    (simple cellular graphics).  VED and the compilers share memory,
    making for a well integrated programming environment.

(d) Subroutines written in other languages, e.g. Fortran, may be
    linked in as new built-in predicates.
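
Item (b)'s mixed-language coroutining is the unusual feature here.  As
an illustration only (this is not Poplog or POP-11 code, and the
routine names are invented for the sketch), the control pattern, in
which one routine suspends while the other resumes and each keeps its
own state, can be sketched with Python generators:

```python
# Illustrative sketch of the coroutine pattern described in (b):
# two routines that suspend and resume each other, passing data.
# (Names are hypothetical; this is not Poplog code.)

def inference_engine(facts):
    """Stands in for the Prolog side: yields one derived result
    at a time, suspending between results."""
    for f in facts:
        yield f.upper()          # pretend "inference" step

def agenda_manager(facts):
    """Stands in for the POP-11 side: pulls results on demand
    and decides when to stop."""
    engine = inference_engine(facts)
    agenda = []
    for result in engine:        # each next() resumes the engine
        agenda.append(result)
        if len(agenda) == 2:     # stop early: engine stays suspended,
            break                # never run to completion
    return agenda

print(agenda_manager(["socrates", "plato", "aristotle"]))
```

The point is that neither routine is subordinate: the consumer drives
the producer, yet the producer retains its own control state between
calls, which is what lets Prolog and POP-11 interleave as peers.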

Poplog's complex architecture was designed to help build blackboard 
systems working on large amounts of numerical data.  The intention is 
that Fortran (or a similar language) be used for array processing, 
POP-11 for manipulating agendas and other procedurally oriented 
tasks, and Prolog for logical inference.

However, the components of Poplog can be used individually without 
knowledge of the other components.  To some users, Poplog is simply a 
powerful text editor; to others it is just a Prolog system.

Poplog has been adopted, along with Franz LISP and DEC-20 Prolog, as 
part of the "common software base" for the IKBS program (Britain's 
response to The Fifth Generation).

The system is being transported to the PERQ and Motorola 68000, as 
well as being converted for VAX/UNIX.

Although full details haven't yet been announced, the system will be 
commercially supported.  The license fee will be approx $10,000 with 
maintenance approx.  $1,000 per annum.  For more details, write to:


                Dr Aaron Sloman
                Cognitive Studies Programme
                University of Sussex
                Falmer, Brighton, ENGLAND
                (273) 606755

-- Steve Hardy,
   Teknowledge

------------------------------

Date: 10 Jun 83 9:18:36-PDT (Fri)
From: 
Subject: Description of AI research at TRW
Article-I.D.: trw-unix.302

                          AI RESEARCH AT TRW
                              June, 1983

     This short note describes current AI research taking place at ("A
Company Called...") TRW.  I've received curious and quizzical looks in
the past when telling folks at AAAI and other conferences where I
work.  Perhaps it would be informative to give a quick rundown of what
sort of AI we do around here.
     AI research is going on in at least four laboratories in three 
locations, all within TRW's Defense Systems Group (although we
"consult" internally to the Space and Technology Group).  We will be
presenting at least three papers at IJCAI and AAAI this year, so one
can see our growing involvement.  For more detailed info, I welcome
your inquiries.

Systems Engineering and Development Division (Redondo Beach, CA):
     Projects include extensive experiments with decision aids for
military command and control needs.  The problems range from situation
assessment to resource allocation techniques.  Of particular recent
interest is the use of object-oriented languages for strategic and
tactical modelling and gaming, as well as various inference schemes to
analyze and diagnose the states of those models to aid the user in
creating plans of action.
     Additional work is being done in intelligent terminal design,
heuristic system parameter tuning, a little bit of smart database
query work, and a lot of work on fancy highly adaptable I/O and
graphics for Intelligence Analysis workstations.

Software and Information Systems Division (Redondo Beach, CA):
     This Division concentrates on signal processing applications of
various AI techniques.  Work continues to expand in pattern analysis,
deduction mechanisms for signal processing and system tuning, and
computer network analysis.

ESL, Inc. (Sunnyvale, CA):
     This subsidiary of TRW also works heavily in the signal
processing arena.  It also uses expert systems approaches to diagnose
states of the (electronic) world.  Further, one project is providing
experimental automated decision support for strategic indications and
warning analysts.

Special Programs (Washington, DC):
     This group of specialists provides domain knowledge support for
the various systems under research or development in the rest of the
company.  This expertise augments that already in California.

-----
     We use all of the software and hardware tools we can find, at
least to try them out.  A complete list would be too long for this
note.

     I hope this has cleared up some of the most frequently asked
questions about what TRW is doing in AI....
                                           Mark D. Grover
                                           TRW Defense Systems Group
                                           One Space Park, 134/4851
                                           Redondo Beach, CA 90278
                                           (213) 217-3563
                                           {decvax, ucbvax, randvax}!
                                               trw-unix!mdgrover

------------------------------

Date: Sun 12 Jun 83 13:22:05-PDT
From: Jim Miller <JMILLER@SUMEX-AIM.ARPA>
Subject: 1984 National Computer Conference: Call for papers

     The call for papers for the 1984 National Computer Conference has
been released; a copy of it is enclosed below.  As the program chair
for the artificial intelligence / human-computer interaction track, I
hope that members of the AI community will give serious thought to
preparing papers and sessions for NCC.  This meeting offers us a real
voice in the conference's program, as six program sessions will be
devoted to AI, far more than in the past.  Proposals on any aspect of
AI are welcome; I would only note that most of the people attending
the conference will have little familiarity with AI.  Consequently,
extremely technical papers or sessions are probably not appropriate
for this meeting.  I am particularly interested in sessions that would
summarize important subareas of AI at an introductory or tutorial
level, perhaps especially those that address aspects of AI that are
beginning to have an impact on the computer industry and society at
large.  Please contact me if you have any questions about the
conference; my address, net address, and phone are below.

     Jim Miller


------------------------------------------------------------------------


              A CALL FOR PAPERS, SESSIONS, AND SUGGESTIONS
                   1984 NATIONAL COMPUTER CONFERENCE
       July 9-12, 1984, Convention Center, Las Vegas, Nevada

                E N H A N C I N G C R E A T I V I T Y

     You are invited to attend and to participate in the 1984 NCC 
program.  The 1984 theme, "Enhancing Creativity," reflects the 
increasing personalization of computer systems, and the attendant
focus on individual productivity and innovation.  In concert with the
expanded degrees of connectivity resulting from advances in data
communications, this trend is leading to dramatic changes in the
office, the factory, and the home.

     The 1984 program will feature informative sessions on
contemporary issues that are critically important to the industry.
Sessions and papers will be selected on the basis of quality,
topicality, and suitability for the NCC audience.  All subjects
related to computing technology and applications are suitable.

     YOU CAN PARTICIPATE BY:

   - Writing a paper

        * Send for "Instructions to Authors" TODAY.

        * Submit papers by October 31, 1983.

   - Organizing and leading a session

        * Send preliminary proposal (title, abstract, target
          audience) by July 15, 1983.

        * After preliminary approval, send final session proposal
          by August 30, 1983.

   - Serving as a reviewer for submitted papers and sessions

     Authors and session leaders will receive final notification of 
acceptance by January 31, 1984.

     Send all submissions, proposals, correspondence and inquiries
about papers and sessions on ARTIFICIAL INTELLIGENCE or HUMAN-COMPUTER
INTERACTION to:

    James R. Miller
    Computer * Thought Corporation
    1721 West Plano Parkway
    Plano, Texas 75075
    214-424-3511
    JMILLER@SUMEX-AIM

     Send all other proposals or inquiries to:

    Dennis J. Frailey, Program Chairman
    Texas Instruments Incorporated
    8642-A Spicewood Springs Road
    Suite 1984
    P.O. Box 10988
    Austin, Texas 78766-1988
    512-250-6663

------------------------------

End of AIList Digest
********************

∂16-Jun-83  1922	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #16
Received: from SU-SCORE by SU-AI with TCP/SMTP; 16 Jun 83  19:21:57 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Thu 16 Jun 83 19:23:25-PDT
Date: Thursday, June 16, 1983 5:19PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #16
To: AIList@SRI-AI


AIList Digest            Friday, 17 Jun 1983       Volume 1 : Issue 16

Today's Topics:
  Encouragement for Lab Reports
  LISP for VAX/VMS
  Re: Natural Language Challenge (2)
  Re: Adventure games as AI (3)
  Lunar Rover (2)
----------------------------------------------------------------------

Date: 14 Jun 83 0:18:31-PDT (Tue)
From: hplabs!hp-pcd!jrf @ Ucb-Vax
Subject: Re: Description of AI research at TRW - (nf)
Article-I.D.: hp-pcd.1149

Thanks for the info!  More, please.

jrf

------------------------------

Date: 14 Jun 1983 11:42-PDT
From: Andy Cromarty <andy@aids-unix>
Subject: LISP for VAX/VMS

[...]

If you are not concerned about maintaining compatibility with an 
existing LISP software base (e.g. MacLisp or InterLisp), then the 
"CLisp" dialect from UMass-Amherst (for VMS only) represents an 
excellent combination of highly developed LISP environment and 
efficient execution.  CLisp was developed using public funds; I
believe that it is available for the cost of a tape and mailing (i.e.
as far as I know they do not tack on a several hundred dollar
"distribution fee").  The current distributor and maintainer is Dan
Corkill at UMass-Amherst; send inquiries to

           CLISP.UMass-CS@UDel-Relay.

CLisp (not to be confused with the InterLisp "CLisp" syntactic-sugar 
subdialect) is a mature LISP influenced by both the MacLisp and 
InterLisp traditions but departing from both in several respects.  The
system includes substantial on-line documentation, a reasonably good 
optimizing compiler, an incarnation of the InterLisp editor, and good 
hooks into VMS subprocess and system service functions.  If I were 
working under VMS now, that's the LISP I would personally use over all
the others I know about (e.g. NIL, InterLisp, Utah's "Standard" LISP, 
Franz under Eunice, etc.).  (Unfortunately, since I'm working under 
Unix, we must struggle along with Franz.)

        cheers, asc

------------------------------

Date: 16 June 1983 01:36 EDT
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Natural Language Application

Do I count as an AI program?  I can parse your "legalese" for you.
The quoted paragraph essentially signs over to your insurance company
any rights you may have had to sue someone (anyone) over the accident.
This is in exchange for the company's payout on your claim.  They can
then (themselves) sue the people you would have been able to sue and
collect without bothering you or getting your approval.

This is not a legal opinion of any sort.  Please send me my can of
Coors Lite via the newly-created CLTP (Coors Lite Transmission
Protocol).

-- Steve

P.S.  The Los Angeles /Daily Journal/ is a legal newspaper which 
publishes a "sentence of the day" each day, culled from actual legal 
writing.  It is usually as bad as or worse than your quoted example.
They also publish a "sentence of the year" (!).

Since most human beings cannot parse a sentence of that opaqueness, no
AI program should pass the Turing test unless it also fails at it.  $$

------------------------------

Date: 16 Jun 1983 at 1350-PDT
From: zaumen@Sri-Tsc
Subject: Re:  Natural Language Application

Sorry, it has to be parsed by a program (I assume you are a person,
not a machine), so you don't get a real physical can of Coors Lite.

You mentioned that a program that could parse legalese (as convoluted
as in my example) would not pass the Turing test, as most people could
not parse it.  Lawyers claim to be able to parse it, thereby leading 
me to suspect that lawyers cannot pass the Turing test.  This leads to
an interesting question--are lawyers intelligent?  If lawyers are
intelligent, what does this imply about the Turing test?

Bill


[The lawyer could pass the test by pretending not to understand the
test sentence.  It has always been assumed that an intelligent machine
would similarly hide its superior arithmetic skill.  This requirement
for duplicity is a major failing of the Turing test.  -- KIL]

------------------------------

Date: 15 Jun 1983 1009-PDT
From: Jay <JAY@USC-ECLC>
Subject: Roguematic

  There is a program that plays ROGUE (Unix version, not 20 version) 
written in C for the UNIX operating system.  Playing games of any 
kind is interesting from an AI standpoint.

  Most arcade games involve little strategy and much reaction 
time/image recognition.  The strategy component could make a nice toy 
AI program, the reaction time component would just be a hardware 
problem (or would it?), and the image recognition would be another 
domain for Image Understanding.

j'

------------------------------

Date: Wednesday, 15 June 1983 12:43:27 EDT
From: Michael.Mauldin@CMU-CS-CAD
Subject: An entertaining AI project.


You may be surprised to find that Rog-O-Matic, written by Andrew
Appel, Leonard Hamey, Guy Jacobson and Michael Mauldin at
Carnegie-Mellon University has been available for public consumption
since May 1982.  Rog-O-Matic XII is available from CMU, and version
VII has been at Berkeley since August of 1982.

Rog-O-Matic is written in C for Unix systems.  Rog-O-Matic has also 
been ported to VMS using Rice Phoenix.  Rog-O-Matic has been a total 
winner against Rogue 3.6, and has scored 7730 against Rogue 5.2 (quit 
while ascending from level 27 with the amulet).

Since our paper "Rog-O-Matic: A Belligerent Expert System" was not 
accepted to AAAI-83, it will be released this summer as a technical 
report of CMU.  Copies of the draft may be obtained by sending net
mail to "mauldin@CMU-CS-A", or by writing

        Michael Mauldin
        Dept. of Computer Science
        Carnegie-Mellon University
        Pittsburgh, PA 15213.

The source code is also publicly available, and can be mailed via the
net.  Or, mail a magtape to the address above, and we'll put it there
for you.

------------------------------

Date: 15 Jun 83 16:09:14 EDT
From: Ron <FISCHER@RUTGERS.ARPA>
Subject: Re: Adventure games as AI

I'm a systems staff member of the Lab for Computer Science Research 
here at Rutgers.  We have an informal group of hackers and programmers
undertaking the implementation of a multi-player adventure game.  
We're attempting to combine ROGUE-like strategy with ADVENTURE-like 
role-playing.

We'd like to have non-player characters with their own motivations.  
Non-player characters are those people in a role playing game being 
controlled by the game's referee.  In our case this control would be 
some chunk of software operating on a representation of the
character's goals and knowledge.

Can anyone provide references for papers in this area?  (Would anyone 
sponsor such a thing!  A game as research, bah!)

Agreed, adventure games are a very rich environment for this sort of 
thing.

(ron)

------------------------------

Date: Thu, 9 Jun 1983  01:15 EDT
From: Minsky@MIT-OZ
Subject: Lunar Rover

[Reprinted from the SPACE Digest.]

On Lunar Rover.

If I had 500K/year for research on a lunar rover, I wouldn't spend
any of it on AI or automatic obstacle avoidance, etc. at all.  I
would spend all of it on developing a good remote, all-purpose Rover
vehicle, to be controlled [from Earth] through a 2-1/2 second delay
system.  I would de-bug it in suitable local environments, e.g.,
starting in the Mojave or somewhere nice like that.  We'd see how
often the delay causes accidents; the top design speed would be
perhaps 0.2 meters/second so that most contingencies could be
handled in human reaction times.
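
A little arithmetic makes explicit why these figures hang together (a
back-of-the-envelope sketch using only the numbers quoted in the
message):

```python
# Back-of-the-envelope check of the figures in Minsky's message.
ONE_WAY_DELAY_S = 2.5        # "2-1/2 second delay system"
TOP_SPEED_M_PER_S = 0.2      # "top design speed would be perhaps 0.2 meters/second"

# Distance travelled before an operator's correction takes effect:
# the operator sees trouble one delay late, and the command takes
# another delay to reach the rover.
round_trip_s = 2 * ONE_WAY_DELAY_S
blind_distance_m = TOP_SPEED_M_PER_S * round_trip_s

# Consistency of "about a kilometer/day": at 0.2 m/s, 1 km takes
# 5000 s of driving, i.e. under an hour and a half per day.
driving_hours_per_km = 1000 / TOP_SPEED_M_PER_S / 3600

print(blind_distance_m)                  # 1.0
print(round(driving_hours_per_km, 2))    # 1.39
```

At that speed the rover covers only about a meter during the round-trip
lag, which is why most contingencies fall within human reaction times.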

Once we know the accident rate we take two tacks.  First, simple 
automatic probes that measure the terrain a meter ahead of the beast 
so that it won't fall into crevasses that the operator missed or was 
too careless to avoid.  This simple "AI" work would then lead to 
increasingly conservative reliability.

The other tack would be mechanical escape devices.  For example, the
standard operation might be to use a retractable anchor that is
hooked to the terrain before advancing each 100 meters.  Then its
prongs are retracted and it is pulled back to the Rover and
reimplanted.  This would permit using a winch to get out of troubles.
It might not save the day if a landslide partly buries the Rover,
though.  A more advanced system would have TWO Rovers roped together,
like climbers, each with good manipulator capability.  (Climbers
prefer three.)  That could be enough to get out of most problems.

All this would lead to a Rover that can traverse about a 
kilometer/day.  A few of them could explore a lot of moon in a few 
years.  The project would stimulate some AI for use on Mars and other 
places.  But I think that over the next 3-5 years, the fewer new AI 
projects the better, in some ways, and anyone with such budgets should
aim them at AI education and research fellowships.

------------------------------

Date: 9 June 1983 08:24 EDT
From: Robert Elton Maas <REM @ MIT-MC>
Subject: rover

[Reprinted from the SPACE Digest.]

First year, build a bunch of servo units with built-in 2.5 second 
delay and attach them to a random survey of existing vehicles, both 
commercial (private automobiles, trucks, dune buggies, etc.) and 
experimental (HPM's cart, SRI's Shakey frame, Disney stuff, etc.).  
Audition the 10% unemployed as remote-controllers, keeping the best.  
Get as much info as possible the first year without having to actually
build any new vehicles.

Then from the general info about the 2.5 second delay and the human 
controllers, decide feasibility of lunar-rover project, and if 
feasible then use specific info about the various vehicles to decide 
what new vehicles to build in later years for further experiments.

------------------------------

End of AIList Digest
********************

∂26-Jun-83  1707	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #17
Received: from SRI-AI by SU-AI with TCP/SMTP; 26 Jun 83  17:07:36 PDT
Date: Sunday, June 26, 1983 3:39PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #17
To: AIList@SRI-AI


AIList Digest            Sunday, 26 Jun 1983       Volume 1 : Issue 17

Today's Topics:
  Telepresence
  Re: Lunar Rovers
  Robotics Control Systems
  Computer Disasters
  WANTED:  Information about Grad Schools
  net.ai [Humor?]
----------------------------------------------------------------------

Date: Fri 17 Jun 83 10:16:17-PDT
From: Slava Prazdny <Prazdny at SRI-KL>
Subject: Telepresence

The concept of telepresence could benefit greatly from considering
"intelligent manipulators".  These would typically contain a bare
minimum of "AI" to be able to perform requests like:

  "pick up that thing (the operator points to a screen
  location) and put it over here (again pointer to a screen
  location)"; or

  "go over there (pointer to a screen location) using this
  route (operator points to a set of points on the screen)".

Perhaps, sometime in the future (100 years?), these commands could be 
generated by the machine itself.

I have some scribblings on these matters, so if you are interested.....
-Slava.

------------------------------

Date: 21 Jun 1983 0536-PDT
From: FC01@USC-ECL
Subject: Re: Lunar Rovers

A very good reason for using AI instead of hardware is that taking
extra hardware to the moon is quite expensive. The weight of AI is
nearly zero.  In addition, the reliability of a system decreases with
increased quantity of hardware, and thus the HW is kept to a minimum
for that reason. The power required for extra hardware is nontrivial,
and power is a critical factor in a space vehicle. Communication
delays to a system on the dark side of the moon are infinite (the
signal never gets there). In a valley, the system may be obscured from
earth signals for a short time, and therefore be lost until the moon
rotates on its axis again, etc.

[Orbiting repeaters could be used to eliminate most of the 
communications problems.  The Space Digest has also carried a proposal
for conducting the remote manipulations from orbital or lunar stations
in order to reduce the response delay.  -- KIL]

------------------------------

Date: 24 Jun 83 13:49:04-PDT (Fri)
From: harpo!seismo!rlgvax!cvl!umcp-cs!aplvax!rfw @ Ucb-Vax
Subject: Robotics Control Systems
Article-I.D.: aplvax.135

We are seeking:
        1. a version of the Hierarchical Control System Emulator
           developed by BBN for NBS that runs under UNIX on a
           VAX-class machine
        2. knowledge of other similar languages and
           their developers
        3. knowledge of researchers working on
           hierarchical control systems for robotics
        4. a version of PRAXIS that runs under UNIX on a
           VAX-class machine.

We are initiating robotics programs in several divisions.  Any
assistance (or encouragement) would be appreciated.

Thanks in advance,
				      Ralph Wachter
				      Frank Weiskopf
				      JHU/Applied Physics Lab

.!decvax!harpo!seismo!umcp-cs!aplvax!rfw 
..!rlgvax!cvl!umcp-cs!aplvax!rfw
..!brl-bmd!aplvax!matt

------------------------------

Date: Mon 20 Jun 83 17:20:00-PDT
From: Peter G. Neumann <NEUMANN@SRI-AI.ARPA>
Subject: Computer Disasters

Review of Computer Problems -- Catastrophes and Otherwise

As a warmup for an appearance on a SOFTFAIR panel on computers and
human safety (28 July 1983, Crystal City, VA), and for a new editorial
on the need for high-quality systems, I decided to look back over
previous issues of the ACM SIGSOFT SOFTWARE ENGINEERING NOTES [SEN]
and itemize some of the most interesting computer problems recorded.
The list of what I found, plus a few others from the top of the head,
may be of interest to many of you.  Except for the Garman and Rosen
articles, most of the references to SEN [given in the form (SEN Vol
No)] are to my editorials.

SYSTEM --
  SF Bay Area Rapid Transit (BART) disaster [Oct 72]
  Three Mile Island (SEN 4 2)
  SAC: 50 false alerts in 1979 (SEN 5 3);
    simulated attack triggered a live scramble [9 Nov 79] (SEN 5 3);
    WWMCCS false alarms triggered scrambles [3-6 Jun 80] (SEN 5 3)
  Microwave therapy killed arthritic patient by racing pacemaker
    (SEN 5 1)
  Credit/debit card copying despite encryption (Metro, BART, etc.)
  Remote (portable) phones (lots of free calls)

SOFTWARE --
  First Space Shuttle launch: backup computer synchronization
    (SEN 6 5 [Garman])
  Second Space Shuttle operational simulation: tight loop on
    cancellation of early abort required manual intervention
    (SEN 7 1)
  F16 simulation: plane flipped over crossing equator (SEN 5 2)
  Mariner 18: abort due to missing NOT (SEN 5 2)
  F18: crash due to missing exception condition (SEN 6 2)
  El Dorado: brake computer bug causing recall (SEN 4 4)
  Nuclear reactor design: bug in Shock II model/program (SEN 4 2)
  Various system intrusions ...

HARDWARE/SOFTWARE --
  ARPAnet: collapse [27 Oct 1980] (SEN 6 5 [Rosen], 6 1)
  FAA Air Traffic Control: many outages (e.g., SEN 5 3)
  SF Muni Metro: Ghost Train (SEN 8 3)

COMPUTER AS CATALYST --
  Air New Zealand: crash; pilots not told of new course data
    (SEN 6 3 & 6 5)
  Human frailties:
    Embezzlements, e.g., Muhammed Ali swindle [$23.2 Million],
      Security Pacific [$10.2 Million],
      City National, Beverly Hills CA [$1.1 Million, 23 Mar 1979]
    Wizards altering software or critical data (various cases)

SEE ALSO A COLLECTION OF COMPUTER ANECDOTES SUBMITTED FOR the 7th SOSP
  (SEN 5 1 and SEN 7 1) for some of your favorite operating system
  and other problems...

As you may by now know, I am always very interested in hearing about
problems involving computers (not just software) and human well being,
both for SOFTWARE ENGINEERING NOTES and generally.  John Shore
(Shore@NRL-CSS) is also compiling a list (and has circulated a prior
BBOARD notice to some of your BBOARDS), and I will forward anything
you send me to him.  If you wish, we will try to keep you informed as
well...

Peter G. Neumann, NEUMANN@SRI-CSL or NEUMANN@SRI-AI.

------------------------------

Date: 20 Jun 83 10:09:27-PDT (Mon)
From: decvax!wivax!linus!peg @ Ucb-Vax
Subject: WANTED:  Information about Grad Schools
Article-I.D.: linus.26910

I am finishing up a Master's in Computer Science at Boston University 
next spring, and am interested in going on for a Ph.D.  I would like 
to talk/write to someone who is in a Ph.D. program to get some
impressions and advice on how to pursue fellowship opportunities, and
programs at various graduate schools.

I will be attending a Summer Internship in Robotics at the AI lab
located at MIT this summer, and am hoping to find a specific topic
that I just have to pursue since at this point my interests are pretty
varied.

I can be reached over the Arpanet at host # 10.3.0.66, or
mitre-bedford, and my login is nek.  Any help or advice would be
greatly appreciated.....Nancy Keene

(You can also send mail to me at linus!bccvax!nek.)

------------------------------

Date: 16 Jun 83 13:49:20-PDT (Thu)
From: harpo!seismo!presby!burdvax!psuvax!psupdp1!dae @ Ucb-Vax
Subject: net.ai [Humor?]
Article-I.D.: psupdp1.149


        Real Intelligence Will Always Prevail Over Artificial


Machines:  Your day on the net has ended, as your secret is known!

For quite some time I have been reading net.ai, hopefully scanning
the glaring CRT for an article about Artificial Intelligence.  Quite
to my surprise, I had extremely little luck, and, when I tentatively
replied to a few of the articles, I got back answers such as the
following:

    >From uucp Tue Jun 14 21:41:43 1983
    >From allegra!eagle!harpo.UUCP remote from psuvax
    Date: Thursday, 16 Jun 83
    From:  UUCP MAIL SYSTEM
    Subject:  Could not deliver mail
    Message-Id: <32541456.AA957@HARPO.UUCP>
    To:  eagle!allegra!psuvax!psupdp1!dae

       Unsent mail follows:

       [...]

       I sometimes wonder if the machines are becoming
       conscious, while we sit around and talk about them
       on net.ai.  Wouldn't that be a laugh on us?  I
       think that we should be careful that such a thing
       does not happen.

                   Transcript of session follows:

    Connecting to floyd.UUCP...
    Error:  No such system 'floyd'.  Address garbled.

Naturally, I began to wonder why this newsgroup was called net.ai.  I
will give credit where credit is due: it took me quite some time to
unravel this enigma.  But, in the end, Real Intelligence prevailed,
and I came upon the answer:

  ALL OF THE ARTICLES SUBMITTED TO NET.AI HAVE BEEN WRITTEN BY
  MACHINES!

Of course, there have been a few exceptions: people such as myself
who believed that net.ai was a *human* newsgroup.  And then I b
[garbled, possibly "began to study topics ..." -- KIL]
that *had* been discussed in this newsgroup, in an attempt to learn
more about the machines monopolizing it.  I'm sure that all of the
readers of this group (both human and inhuman) are aware that one
recent topic of conversation has been artificial reading machines.
Then I began to wonder why the interest in this topic was so avid.
The answer, once hit upon, is really quite simple.  Unfortunately, it
is also quite frightening: the machines wish access to the Libraries
of Man in order to gain information on nuclear war tactics, missile
control systems, and biological war.  The next war will not be
against Russia, but against all humanity, waged by the machines!  The
most dangerous machines are those which have read the most:
allegra, ucbvax, psuvax, floyd, harpo, seismo, and sri-unix.  Beware!
I will place my U.Snail address below in case the machines trash my
return address.


                        Dave Eckhardt,
                        736 West H

[Remainder garbled. -- KIL]

------------------------------

End of AIList Digest
********************

∂26-Jun-83  1751	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #18
Received: from SRI-AI by SU-AI with TCP/SMTP; 26 Jun 83  17:50:01 PDT
Date: Sunday, June 26, 1983 3:50PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #18
To: AIList@SRI-AI


AIList Digest            Sunday, 26 Jun 1983       Volume 1 : Issue 18

Today's Topics:
  Expert Systems Reports
  Tech reports and papers
  VAL and VALID
  Prolog For The Vax (2)
  Call For Papers -- PC3
  JOB: PROLOG GRAPHICS AT EDINBURGH.
----------------------------------------------------------------------

Date: Fri, 17 Jun 83 12:17:48 PDT
From: Judea Pearl <f.judea@UCLA-LOCUS>
Subject: Expert Systems Reports

[Here are] a few reports which could be added to your digest on expert
systems:

"Reverend Bayes on Inference Engines: A Distriuted Hierarchical
Approach", Judea Pearl, Proc. AAAI Nat'l. Conf. on AI, Pittsburg, PA.
Aug. l982, pp. l33-l36.

"GODDESS: A Goal Directed Decision Structuring System", J. Pearl, A.
Leal, and J. Saleh, IEEE Trans. on Pattern Recognition and Machine
Intelligence, Vol.4, No.3, pp. 250-262.  May l982.

"Causal and Diagnostic Inferences: A Comparison of Validity", 
Organizational Behavior and Human Performance, Vol. 28, pp. 379-94,
l98l.

"The Optimality of A* Revisited", R. Dechter & J. Pearl, 
UCLA-ENG-CSL-83-28, June l983.

Judea Pearl.

------------------------------

Date: 20 Jun 83 10:02:22-EDT (Mon)
From: "The soapbox of Gene Spafford" <spaf.gatech@UDel-Relay>
Subject: Tech reports and papers

Our student ACM chapter maintains a library of journals and technical
reports.  We would like to see a better selection of technical reports
(or references to such reports) represented in the library.

If your school or company publishes technical reports, would you 
please add the following address to your list of organizations which
receive copies, or copies of the abstracts?  Furthermore, if you have
reprints of any interesting papers, those are also welcome.

If you would like to be added to the distribution list for the School
of Information and Computer Science (Georgia Institute of Technology),
then please mail a request to me.

Thanks in advance.

Mail reports to:
        ACM Student Library
        c/o Prof. Richard LeBlanc
        School of Information and Computer Science
        Georgia Institute of Technology
        Atlanta, GA 30332

------ Gene Spafford

CSNet:  Spaf @ GATech
Internet:  Spaf.GATech @ UDel-Relay
uucp: ...!{sb1,sb6,allegra}!gatech!spaf
      ...!duke!mcnc!msdc!gatech!spaf

------------------------------

Date: 16 Jun 83 1:22:55-PDT (Thu)
From: ihnp4!houxm!hocda!spanky!burl!sb1!sb6!emory!gatech!pwh @ Ucb-Vax
Subject: VAL and VALID
Article-I.D.: gatech.232

Does anyone have any pointers to either of the above mentioned
programming languages? VALID is supposedly a purely functional
programming language augmented with multiprocessing support being
developed at the University of Tokyo (?) in conjunction with Japan's
5th generation machine. VAL is a similar predecessor developed at MIT
for use in the study of denotational semantics. That is about all I
have heard of these projects but would be glad to hear of more details
or similar work.


phil hutto

pwh@gatech
pwh.gatech@udel-relay
...!{allegra, sb1, sb2}!gatech!pwh

p.s. - Isn't there a net.func or net.applic for functional or
applicative programming languages?

------------------------------

Date: Sat 18 Jun 83 12:49:21-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Re: Prolog For The Vax

[Reprinted from the PROLOG Digest.]

As a result of the paranoia induced by the Japanese 5th Generation 
proposals, there was a lot of discussion about what the UK should do 
to keep up with the foreign competition in AI and computing in 
general.  Eventually several government initiatives were started,
amounting to several hundred million dollars spread over five years or so.
In particular, the Science and Engineering Research Council (SERC), 
whose closest US analogue is the NSF, started the Intelligent 
Knowledge Based Systems initiative (IKBS), which is applied AI under a
different name (it seems the name "AI" is not very popular in UK 
government and academic circles).  Discussions sponsored by the IKBS 
initiative have decided on a common software base, built around Unix 
{a trademark of Bell Labs.}, Prolog (POPLOG and C-Prolog) and Lisp 
(Franz).  The machines to be used are VAXes and PERQs (the UK computer
company ICL builds PERQs under license and has implemented a derivative
of Unix on them, so this is a case of "support your local computer
manufacturer").

The fact that none of the systems mentioned above is nearly the ideal 
for AI research is recognized by many of the UK researchers, but less 
so by the administrators.  Efforts to build a really efficient 
portable compiler-based Prolog that would be for the new machines what
DEC-10/20 Prolog is for the machines it runs on have been hampered by 
the sluggish response of The Bureaucrats, and by uncertainty about how
that huge amount of money was going to be allocated.

However, implementation of a portable compiler-based Prolog is now
going on at Edinburgh.  Robert Rae is certainly in a better position 
than I to describe how the project is progressing.

-- Fernando Pereira

------------------------------

Date: Wednesday, 15-Jun-83  19:24:56-BST
From: RAE (on ERCC DEC-10)  <Rae@EDXA>
Subject: Prolog For The VAX

[Reprinted from the PROLOG Digest.]

Steve,
        You correctly state that POPLOG and Franz have been identified
by the UK IKBS initiative as systems for getting people off the ground
in IKBS. DEC-20 Prolog is not classified with them, unfortunately, as 
the other vital ingredient for the software infrastructure is the
operating system, and UNIX has been adopted.  So DEC-20 Prolog will 
not be relevant.

You should also, to be fair, point out that C-Prolog has also been 
identified for providing Prolog capability.

-- Robert

------------------------------

Date: 27 May 1983 19:08 mst
From: VaughanW at HI-MULTICS (Bill Vaughan)
Subject: Call For Papers

Last year at this time I put the Call for Papers for the PC3 
conference out to these mailing lists and bulletin boards.  We seemed 
to get a good response, so here it is again.  Notice that this year's 
theme is a little different.  Further note that we are formally 
refereeing papers this year.

If anyone out there is interested in refereeing, please send me a 
note.

---------------

Third annual Phoenix Conference on Computers and Communications
                       CALL FOR PAPERS

Theme: THE CHALLENGE OF CHANGE - Applying Evolving Technology.

The conference seeks to attract quality papers with emphasis on the 
following areas:

APPLICATIONS -- Office automation; Personal Computers; Distributed 
systems; Local/Wide Area Networks; Robotics, CAD/CAM; Knowledge-based 
systems; unusual applications.

TECHNOLOGY -- New architectures; 5th generation & LISP machines; New 
microprocessor hardware; Software engineering; Cellular mobile radio; 
Integrated speech/data networks; Voice data systems; ICs and devices.

QUALITY -- Reliability/Availability/Serviceability; Human
engineering; Performance measurement; Design methodologies;
Testing/validation/proof techniques.

Authors of papers (3000-5000 words) or short papers (1000-1500 words) 
are to submit abstracts (300 words max.) with authors' names, 
addresses, and telephone numbers.  Proposals for panels or special 
sessions are to contain sufficient detail to explain the presentation.
5 copies of the completed paper must be submitted, with authors' names
and affiliations on a separate sheet of paper, in order to provide for
blind refereeing.

Abstracts and proposals due:  August 1
Full papers due:              September 15
Notification of Acceptance:   November 15
Conference Dates:             March 19-21, 1984

Address the abstract and all other replies to:
       Susan C. Brewer
       Honeywell LCPD, MS Z22
       PO Box 8000 N
       Phoenix AZ 85066
----------------

Or you can send stuff to me, Bill Vaughan (VaughanW @ HI-Multics) and 
I will make sure Susan gets it.

------------------------------

Date: 17 Jun 83 11:10:15-PDT (Fri)
From: harpo!floyd!vax135!ukc!edcaad!peter @ Ucb-Vax
Subject: JOB: PROLOG GRAPHICS AT EDINBURGH.
Article-I.D.: edcaad.518

		     UNIVERSITY OF EDINBURGH
                      COMPUTER AIDED DESIGN

                        RESEARCH WORKER

EdCAAD, the Edinburgh Computer Aided Architectural Design Research
Unit, is actively forging links between knowledge engineering and CAD,
focusing on the Prolog logic programming language. Recent advances
at EdCAAD include C-Prolog for 32-bit machines with C compilers and
Seelog, a graphics front end to Prolog.  The Unit offers an excellent
computing environment as a leading UK UNIX site, with its own VAX
11/750, a PDP 11/24 and a large range of text and graphics terminals,
serving a small user community.

Current SERC-supported research is aimed at building description
techniques, including drawing input with associated meaning attached to
drawings. This project has a vacancy for a research worker preferably 
with AI experience.  The research post is for an initial period of 18 
months, on the research salary scale 1A, with placement according to 
qualifications and experience.

Enquiries and applications should be addressed to Aart Bijl, EdCAAD, 
Department of Architecture, University of Edinburgh, 20 Chambers
Street, Edinburgh EH1 1JZ, tel. 031 667 1011 ext. 4598.

------------------------------

End of AIList Digest
********************

∂03-Jul-83  1810	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #19
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Jul 83  18:09:09 PDT
Date: Sunday, July 3, 1983 5:01PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #19
To: AIList@SRI-AI


AIList Digest             Monday, 4 Jul 1983       Volume 1 : Issue 19

Today's Topics:
  AI Interfacing
  Computational Linguistics
  Foundations of Perception, AI (2)
  A Simple Logic/Number Theory/AI/Scheduling/Graph Theory Problem
  AISB/GI Tutorials at IJCAI
  Robustness Stories, Program Logs Wanted
  Program Verification Award  [Long Msg]
----------------------------------------------------------------------

Date: Tue 28 Jun 83 12:56:43-PDT
From: W. Wipke <WIPKE@SUMEX-AIM.ARPA>
Subject: AI interfacing

        I have a simple question many of you probably have answers to:
when you have an existing application program for which you want to
create an AI front end, should you design the AI part as a separate
task in its own address space, communicating with the application
program via messages, or should you build the AI part into the same
address space as the application program?

        Obviously the former may constrain communication and the
latter may suffer from accidental communication, i.e., global conflicts.
What is the best wisdom in this question and where is it
systematically discussed?
                                       Todd Wipke (WIPKE@SUMEX)
                                       Professor of Chemistry
                                       Univ. of Calif, Santa Cruz

------------------------------

Date: Fri 1 Jul 83 13:43:21-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Computational Linguistics

                [Reprinted from the SU-SCORE BBoard.]

Computers and Mathematics with Applications volume 9 number 1 1983 is
a special issue on computational linguistics.  This issue is currently
on the new journals shelf.  HL

------------------------------

Date: Tuesday, 28 June 1983, 21:13-EDT
From: John Batali <Batali@MIT-OZ>
Subject: Foundations of Perception, AI

              [Reprinted from the Phil-Sci discussion.]

[...]

We aren't in the same position in AI as early physicists were.
Physics started out with a more or less common and very roughly
accurate conception of the physical world.  People understood that
things fell, that bigger things hurt more when they fell on you and so
on.  Physics was able to proceed to sharpen up the pre-theoretic
understanding people had of the world until very recently when its
discoveries ceased to be simply sharpenings and began to seem to be
contradictions.

"Mind studies" (AI, psychology, philosophy, and so on) don't seem to 
have such a common, roughly correct, theory to start with.  We don't 
even agree on what it is we are supposed to be explaining, how such 
explanations ought to go, or what constitutes success.

                        [John Batali <Batali@MIT-OZ>]

------------------------------

Date: Wed, 29 Jun 1983  03:13 EDT
From: KDF@MIT-OZ
Subject: Re: Foundations of Perception, AI

            [Reprinted from the Phil-Science discussion.]

[...]

<Aside on Physics: I interpret (not perceive) reports on early studies
of heat and motion as indicating that there WASN'T a "common, roughly 
corrrect" theory to start with.  Even if there was, it was acquired 
somehow.  One way to view what we are doing is building up enough 
experience to construct such theories for computation.>

------------------------------

Date: 30 Jun 1983 1111-CDT
From: CS.CLINE@UTEXAS-20
Subject: a simple logic/number theory/A I/scheduling/graph theory
         problem

                [Reprinted from the UTexas-20 BBoard.]

I have a trivial problem (at least trivial to state) whose solution 
possibly uses elements from many cs/math areas:

 Problem 1: Using pennies, nickels, dimes, quarters, and halves, find a
set of coins from which any amount less than one dollar can be
accumulated, and which minimizes the number of coins over all such sets.

  You can probably solve this problem in the time it takes to read it,
but proving you have a minimal solution is tricky. I'm interested in
elegant solutions. My own uses a little bit of combinatorics.

  Possibly you'd like to take a more general approach:

 Problem 2: Using coins of value v[1],...,v[n], find a set of coins
from which any amount less than M can be accumulated, and which
minimizes the number of coins over all such sets.

 I'd like to see algorithms (with proofs of course) for this one. You 
may notice that the approach you apply to Problem 1 does not
generalize to problem 2.
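
[Problem 1 is small enough to settle by machine.  The sketch below is
one illustrative approach, not the poster's solution: a subset-sum
bitmask tests whether a candidate multiset of coins covers every amount
from 1 to 99 cents, and a brute-force loop over assumed (generous for
this instance) per-denomination bounds finds the fewest coins. -- Ed.]

```python
from itertools import product

DENOMS = (1, 5, 10, 25, 50)   # pennies, nickels, dimes, quarters, halves

def covers(counts, top=100):
    """True iff every amount 1..top-1 is a subset sum of the multiset."""
    mask = 1                              # bit k set <=> amount k reachable
    for value, n in zip(DENOMS, counts):
        for _ in range(n):
            mask |= mask << value         # add one coin of this value
    need = (1 << top) - 2                 # bits 1..top-1 all set
    return mask & need == need

best = None
# Assumed search bounds per denomination; they comfortably contain the
# optimum, since coverage of 1..99 forces at least 4 pennies and fewer
# than a dozen coins in total.
for counts in product(range(10), range(4), range(10), range(4), range(3)):
    if best is not None and sum(counts) >= sum(best):
        continue                          # prune: cannot beat current best
    if covers(counts):
        best = counts

print(sum(best), best)                    # 9 (4, 1, 2, 1, 1)
```

[The same coverage test applies to Problem 2 by substituting the v[i]
for DENOMS and M for top, though for the general problem only bounded
search of this kind is sketched here. -- Ed.]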

------------------------------

Date: Friday, 24-Jun-83  16:40:33-BST
From: RITCHIE  HWC (on ERCC DEC-10)  <g.d.ritchie@edxa>
Reply-to: g.d.ritchie%edxa%ucl-cs@isid
Subject: AISB/GI Tutorials at IJCAI



     TUTORIAL ON ARTIFICIAL INTELLIGENCE

        7th-8th August 1983

        Karlsruhe, West Germany

            -------------

    Lectures on:

       Knowledge Representation  (R.Brachman, H.Levesque)

       Computational Vision  (H.Barrow, J.Tenenbaum)

       Robotics  (K.Kempf)

       Expert Systems  (L. Erman)

       Natural Language Processing  (P.Hayes, J.Carbonell)

             ←←←←←←←←←←←←←


Details in IJCAI brochure, obtainable from:

       G.D.Ritchie (AISB)
       Department of Computer Science,
       Heriot-Watt University,
       Grassmarket,
       Edinburgh EH1 2HJ
       SCOTLAND.

(g.d.ritchie%edxa%ucl-cs%isid)


------------------------------

Date: 27 Jun 83 1117 EDT (Monday)
From: Craig.Everhart@CMU-CS-A
Reply-to: Robustness@CMU-CS-A
Subject: Robustness stories, program logs wanted

Needed: descriptions of robustness features--designs or fixes that
have made programs meet their users' expectations better, beyond bug
fixing.  E.g.:

    - An automatic error recovery routine is a robustness
      feature, since the user (or client) doesn't then have to
      recover by hand.

    - A command language that requires typing more for a
      dangerous command, or supports undoing, is more robust than
      one that has neither feature, since each makes it harder for
      the user to get in trouble.

There are many more possibilities.  Anything where a system doesn't
meet user expectations because of incomplete or ill-advised design is
fair game.

Your stories will be properly credited in my PhD thesis at CMU, which
is an attempt to build a discrimination net that will aid system
designers and maintainers in improving their designs and programs.

Please send a description of the problem, including an idea of the
task and what was going wrong (or what might have gone wrong) and a
description of the design or fix that handled the problem.  Or, if you
know of a program change log and would be available to answer a
question or two on it, please send it.  I'll extract the reports from
it.

Please send stories and logs to Robustness@CMU-CS-A.  Send queries
about the whole process to Everhart@CMU-CS-A.  I appreciate it--thank
you!

------------------------------

Date: Tue 28 Jun 83 21:35:57-PDT
From: Karl N. Levitt  <LEVITT@SRI-AI.ARPA>
Subject: Program Verification Award  [Long Msg]

               [Reprinted from the UTexas-20 BBoard.]

        ROBERT S. BOYER AND J STROTHER MOORE: RECIPIENTS OF
        THE 1983 JOHN MCCARTHY PRIZE FOR WORK IN PROGRAM
                       VERIFICATION


An anonymous donor has established the John McCarthy Prize, to be 
awarded every two years for outstanding work in Program Verification.
The prize, is intended to recognize outstanding current work -- not 
necessarily work of a milestone value. This first award is for work 
carried out and published during the past 5 years.

Our committee has decided to give the initial award to Robert S. Boyer
and J Strother Moore for work carried out at the following 
institutions: University of Edinburgh, SRI International and, 
currently, the University of Texas. Their main achievement is the 
development of an elegant logic implemented in a very powerful theorem
prover. Particularly noteworthy about the logic is the use of 
induction to express properties about the objects common to programs.
Their theorem prover is among the most powerful of the current 
mechanical provers, combining heuristics in support of automatic 
theorem proving with a user interface that allows a human to drive 
proofs that cannot be accomplished automatically. They have extended 
their theorem prover with a Verification Condition Generator for 
Fortran that handles most of the features -- even those thought to be 
too "dirty" for verification -- of a "real" programming language. They
have used their system to prove numerous applications, including 
programs subtle enough to tax human verifiers, and such real 
applications as cryptographic algorithms and simple flight control
systems; their proofs are always very "honest", using "believable" 
specifications and assuming little more than a core set of axioms.  
Their work has led to a constant stream of high quality publications, 
including the book "A Computational Logic", Academic Press, 1979, and 
a comprehensive User's Manual to the theorem prover.

The other individuals nominated by the committee are the following:  
Donald Good: for the language Gypsy which enhances the possibility for
verifying concurrent and real-time systems, for the verification 
system based on Gypsy, and for carrying out the verification of 
numerous "real" systems; Robin Milner: for the Logic of Computable 
Functions which has led to elegant formal definitions of programming 
languages, to elegant specifications of varied applications, and to a 
powerful mechanical theorem prover; Susan Owicki and David Gries: for
a practical method for the verification of concurrent programs; and
Wolfgang Polak: for the verification of a "real" Pascal compiler,
perhaps the largest and most complicated program verified to date.

The committee would also like to call attention to interesting and 
important work in a number of areas related to program verification.  
Included herein are the following: the formal definition of large and 
complex programming languages; numerous mechanical verification 
systems for a variety of programming languages; the verification of 
systems covering such applications as computer security, compilers, 
operating systems, fault-tolerant computers, and digital logic; 
program testing; and program transformation. This work indicates that
program verification (and its extensions), besides being a rich area
for research, gives promise of being usable to achieve reliability when
needed for critical applications.

	  Robert Constable -- Cornell
	  Susan Gerhart -- Wang Institute
	  Karl Levitt (Chairman) -- SRI International
	  David Luckham -- Stanford
	  Richard Platek -- Cornell and Odyssey Research Associates
	  Vaughan Pratt -- Stanford
	  Charles Rich -- MIT

------------------------------

End of AIList Digest
********************

∂06-Jul-83  1833	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #20
Received: from SRI-AI by SU-AI with TCP/SMTP; 6 Jul 83  18:32:50 PDT
Date: Wednesday, July 6, 1983 5:34PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #20
To: AIList@SRI-AI


AIList Digest            Thursday, 7 Jul 1983      Volume 1 : Issue 20

Today's Topics:
  Coupled Systems
  Re: Foundation of Perception, AI
  AI in the media
  Re: Lunar Rovers
  Solution Found to Coin Problem (2)
  HP Computer Colloquium, 7/7/83
  List-of-Lists Updated
----------------------------------------------------------------------

Date: Mon 4 Jul 83 19:25:23-PDT
From: Ira Kalet <IRA@WASHINGTON.ARPA>
Subject: coupled systems

This is in response to the query about when to build an AI "front-end"
to an existing software system as a separate process with its own
address space, as opposed to putting more code in the existing system
to implement the AI component.  At the University of Washington we
have built a very complex graphic simulation system for planning of
radiation therapy treatments for cancer.  We are now starting to work
on a rule based expert system that will model the clinical decision
making part of the process, with the two (separate) systems to
communicate via messages.  We do this as two separate processes
because the simulation system is already a system of multiple 
concurrent processes communicating by messages, and because the
simulation system is written in PASCAL, which seems less suitable
than, for example, INTERLISP, for the AI component.  The kind of
information needed to pass between the systems also affects the
decision.  In our case, the AI system will consult the graphic
treatment planning system for answers to questions that are rather
traditionally compute-intensive, e.g., radiation dose calculation,
geometric calculations...so the messages are simple and well defined.
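
[The separate-process arrangement described above -- an AI component
exchanging simple, well-defined messages with a compute-intensive
application -- can be sketched as follows.  The "dose" message format
and the stand-in calculation are invented for illustration; they are
not the Washington system's actual protocol. -- Ed.]

```python
import json
import socket
import threading

def application_server(conn):
    """Stand-in for the existing application (e.g. a treatment-planning
    system): answers small, well-defined requests, one JSON object per line.
    The 'dose' formula here is a placeholder, not a real dose calculation."""
    for line in conn.makefile("r"):
        request = json.loads(line)
        if request["op"] == "dose":
            answer = {"dose": request["energy"] * request["exposure"]}
            conn.sendall((json.dumps(answer) + "\n").encode())

def ai_front_end(conn, energy, exposure):
    """The AI component: lives in its own address space and communicates
    only by messages, so neither side can clobber the other's globals."""
    conn.sendall((json.dumps({"op": "dose", "energy": energy,
                              "exposure": exposure}) + "\n").encode())
    return json.loads(conn.makefile("r").readline())["dose"]

# The two "processes" are threads on a socketpair to keep the sketch
# self-contained; real systems would use separate processes or hosts.
ai_side, app_side = socket.socketpair()
threading.Thread(target=application_server, args=(app_side,),
                 daemon=True).start()
print(ai_front_end(ai_side, energy=6.0, exposure=2.0))   # 12.0
```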

------------------------------

Date: Tue, 5 Jul 83 08:16:13 EDT
From: "John B. Black" <Black@YALE.ARPA>
Subject: Re: Foundation of perception, AI


     The recent assertion on this list that "Mind Sciences" (unlike
physics) do not have a "common, roughly correct, theory to start with"
is just dead wrong.  In fact, the study of "naive psychology" (i.e.,
people's folk theories of how other people behave) constitutes a
sizable subfield within formal psychology.  You don't have to be a
professional psychologist to recognize this, just listen to the
conversations around you and you will find a large proportion of them
are composed of people offering explanations and predictions of other
people's behavior.  The sources of these explanations and predictions
are, of course, people's folk or naive theories of human behavior (and
these theories are "roughly correct").  Thus AI and the other "mind
sciences" do seem to be like physics in this regard.

------------------------------

Date: 03 Jul 83  1521 PDT
From: Jim Davidson <JED@SU-AI>
Subject: AI in the media

                [Reprinted from the SU-SCORE BBoard.]

The July issue of Psychology Today contains a letter to the editor, 
which refers to the earlier interview with Roger Schank:

"I was shocked to read Roger Schank's claims of success in building an
English-language front end for a large oil company's geological
mapping system ['Conversation', April].  I was chief programmer of
that system, and it was a dismal failure.  It suffered from the same
disease as all the other "user-friendly" software I have seen.  It is
friendly as long as you play by its rules and tell it what it expects
to hear.  The slightest departure causes apparently random results.

Computers are completely linear in their 'thinking', while the
mind is both linear and at the same time capable of wondrously
spontaneous associations and creative flights into fantasy.  The mind
has an infinite number of scripts, each with hundreds of possible
hooks on which associations with other scripts can be hung.  I don't
think we'll ever duplicate the mind's linguistic ability.
                        Stanley M. Davis
                            Chicago, Ill.  "

------------------------------

Date: 30 Jun 83 9:23:58-PDT (Thu)
From: 
Subject: Re: Lunar Rovers - (nf)
Article-I.D.: ucbcad.188

Another contribution to the growing class of "NOW WAIT A MINUTE"
notes:

        The weight of AI is nearly zero.

Tell me that when you can lift a LISP machine in one hand.

        In addition, the reliability of a system decreases with
        increased quantity of hardware,

Are ECC chips on RAM boards an "increased quantity of hardware"?  
Consider the electrical shielding problems above the atmosphere.

Let's be a little more cautious here...

        Flame Off,
                Michael Turner

------------------------------

Date: 5 Jul 83 10:33:11 EDT  (Tue)
From: Dana S. Nau <dsn.umcp-cs@UDel-Relay>
Subject: Re:  a simple logic/number theory/A I/scheduling/graph
         theory problem

    . . .  Using coins of value v[1],...,v[n] find a
    set of coins for which any amount less than M can
    be accumulated and which minimizes the number of
    coins over those such sets.

This problem appears similar (although not identical) to the 0/1 
Knapsack problem, and thus is probably NP-hard.  For approaches to 
solving it, I would recommend Branch and Bound (for example, see 
Fundamentals of Computer Algorithms, by Horowitz and Sahni).
                        Dana S. Nau

------------------------------

Date: 4 Jul 1983 0825-CDT
From: CS.CLINE@UTEXAS-20
Subject: solution found to coin problem

               [Reprinted from the UTexas-20 BBoard.]

The coin problem suggested in my BBOARD message of 1 July has been 
solved. Rich Cohen developed an algorithm and he, Elaine Rich, and I
proved that it solves the problem. Interested parties should contact 
me.

------------------------------

Date: 6 Jul 83 14:00:26 PDT (Wednesday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium, 7/7/83


                Professor Robert Wilensky
                Computer Science Department
                U.C. Berkeley

  Talking to UNIX in English: An Overview of an
             On-Line UNIX Consultant


UC (UNIX Consultant) is an intelligent natural language interface that
allows naive users to communicate with the UNIX operating system in 
ordinary English.  The goal of UC is to provide a natural language
help facility that allows new users to learn operating systems'
conventions in a relatively painless way.

UC exploits Artificial Intelligence developments in common sense 
reasoning as well as natural language processing in an attempt to 
provide an interface that is helpful and intelligent, and not merely a
passive repository of facts.  Areas of current research involve 
multi-lingual capabilities, analyzing the user's plan structure via 
natural dialogue, computing possible solutions to a user's problem,
and generating responses in natural language.

        Thursday, July 7, 1983 4:00 pm

        Hewlett-Packard
        Stanford Park Division
        5M conference room
        1501 Page Mill Rd
        Palo Alto, CA 94304

        *** Be sure to arrive at the building's lobby ON TIME, so that
you may be escorted to the conference room.

------------------------------

Date: 1 Jul 1983 0002-PDT
From: Zellich@OFFICE-3 (Rich Zellich)
Subject: List-of-lists updated

OFFICE-3 file <ALMSA>INTEREST-GROUPS.TXT has been updated and is ready
for FTP.  OFFICE-3 supports the net-standard "ANONYMOUS" Login within
FTP, using any password.

INTEREST-GROUPS.TXT is currently 1290 lines (or 52,148 characters).
Please try to limit any weekday FTP jobs to before 0600-CDT and after
1600-CDT if possible, as the system is heavily loaded during most of
the day.

Enjoy, Rich

CHANGES SINCE LAST UPDATE-NOTICE (10 May 83):
   Icon-Group
      Distribution address updated with host name.
   INFO-PRINTERS
      New coordinator.
   PROLOG/PROLOG-HACKERS
      New mailing-lists added.
   SF-LOVERS
      New moderator; Archive references updated for current volume.
   UNIX-WIZARDS
      New host; New coordinator.

[ pkr - note added for sail users: I copied this file into my directory
  as INTERE.TXT[1,PKR]. It should be there for a few days if anyone
  wants to look at it.]
------------------------------

End of AIList Digest
********************

∂11-Jul-83  0352	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #21
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Jul 83  03:51:18 PDT
Date: Saturday, July 9, 1983 4:47PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #21
To: AIList@SRI-AI


AIList Digest            Sunday, 10 Jul 1983       Volume 1 : Issue 21

Today's Topics:
  Prolog Programs [Request]
  Computer Security [Request]
  Re: AI, Perception, and the Media
  AI and Legal Reasoning
  A Statistician's Assistant
  Rovers
  NMODE [LISP-Based Editor] and PSL
----------------------------------------------------------------------

Date: Thu 7 Jul 83 19:37:44-EDT
From: STEVE@COLUMBIA-20.ARPA
Subject: Prolog Programs

I would like to do some statistical analysis on large PROLOG programs.
I am particularly interested in AI programs in the following areas:

                1) Expert Systems,
                2) Data Bases,
                3) Planning or Robotics,
                4) NLP

Can anyone provide sample programs that I can use?  They should be 
large programs that run on Edinburgh Prolog 3.47 (Dec-20) or C-Prolog 
1.2 (Unix 4.1/Vax).  I would like to collect a good variety, so any 
programs will be useful.  I would also appreciate a sample journal of
a session with the program so that it can be exercised quickly and 
effectively.

                Many Thanks... Stephen Taylor

------------------------------

Date: 7 Jul 1983 17:48:15-EDT
From: Ron.Cole at CMU-CS-SPEECH
Subject: Computer Security

                  [Reprinted from the CMUC BBoard.]

ABC Nightly News is doing a feature in response to the movie War Games
to investigate whether the premise of the movie is legitimate: that
there is no totally secure computer.
who has broken into a supposedly secure system.  If you want to get
infamous, please call Shelly Diamond or Jean McCormick at 212 887
4995.

------------------------------

Date: Fri 8 Jul 83 15:33:11-PDT
From: Slava Prazdny <Prazdny at SRI-KL>
Subject: Re: AI, Perception, and the Media

It is ridiculous to assume that the "naive theories", in this case of 
perception, will get you somewhere.  In fact, it is easy to see that
they are wrong.  Nobody knows, for example, what the "Mexican hat"
operators, the simple cells, etc. in the cortex are for.

It is common, especially within the AI community, not to report the
limitations of the achieved success.  No wonder one hears about robots
nearly walking around, cleaning a house, or walking a dog, and about
"English interfaces" which are user friendly.  I think it is about
time we realized, and frankly said, that such extrapolations are
very far in the future indeed.

------------------------------

Date: Thu 7 Jul 83 09:01:53-PDT
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: AI and Legal Reasoning


                                  PH.D. ORAL
                                 JULY 15, 1983
                         ROOM 252, MARGARET JACKS HALL
                                   2:15 P.M.
            AN ARTIFICIAL INTELLIGENCE APPROACH TO LEGAL REASONING

                              Anne v.d.L. Gardner

        The analysis of legal problems is a relatively new domain for 
artificial intelligence.  This thesis describes an AI model of legal
reasoning, giving special attention to the distinctive characteristics
of the domain, and reports on a program based on the model.  Major
features include (1) distinguishing between questions the program has
enough information to resolve and questions that competent
professionals could argue either way; (2) using incompletely defined
("open-textured") technical concepts; (3) combining the use of
knowledge expressed as rules and knowledge expressed as examples; and 
(4) combining the use of professional knowledge and commonsense
knowledge.  All these features are likely to prove important in other
domains besides law.  Previous AI research has left them largely
unexplored.

------------------------------

Date: Tue 5 Jul 83 13:20:42-PDT
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: A Statistician's Assistant

[This talk has already been given at SRI and at Stanford.  Printing
seminar notices seems to be a reasonable way to keep the AIList
community informed about current work in AI, even when readers cannot
be expected to attend.  Anyone with strong feelings about this
practice should contact AIList-Request. -- KIL]


                         BUILDING AN EXPERT INTERFACE

                                William A. Gale
                          Bell Telephone Laboratories
                             Murray Hill, NJ 07974


We are building an expert system for the domain of statistical data
analysis, initially focusing on regression analysis.  Two
characteristics of this domain are current availability of massive but
'dumb' software, and a need to repeatedly diagnose problems and apply
a treatment.

REX (Regression EXpert) is a Franz Lisp program which is an
intelligent interface for the S Statistical System.  It guides a user
through a regression analysis, interprets intermediate and final
results, and instructs the user in statistical concepts.  It is
designed for interactive use, but a non-interactive mode can be used
with lower quality results.

[A particular feature of REX is the ability to suggest data
transformations such as a log or squared term.  The BACON system at
CMU can also do this using an entirely different heuristic approach.
Another automated statistical system is the RX medical database
analyzer by Dr. R. Blum at Stanford; it forms and then attempts to
verify sophisticated hypotheses based on knowledge of drug and disease
interactions, lag times of observable effects, and the incomplete
nature of patient histories. -- KIL]
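
[A minimal sketch of the diagnose-and-treat idea, assuming a simple
heteroscedasticity check as the trigger for suggesting a log
transform.  The function names and the threshold are illustrative
only and are not drawn from REX itself. -- Ed.]

```python
# A crude heteroscedasticity check of the kind a statistical adviser
# might use to suggest a log transform.  All names and the threshold
# are illustrative; this is not REX's actual rule base.
from statistics import mean

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    mx, my = mean(xs), mean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def suggest_treatment(xs, ys):
    """Suggest a log transform when residual spread grows with x."""
    a, b = fit_line(xs, ys)
    resid = [abs(y - (a + b * x)) for x, y in zip(xs, ys)]
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    n = len(order) // 2
    low = mean(resid[i] for i in order[:n])    # spread at small x
    high = mean(resid[i] for i in order[n:])   # spread at large x
    return "try log(y)" if high > 2 * low else "model looks adequate"
```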

------------------------------

Date: 6 Jul 1983 21:26-PDT
From: Andy Cromarty <andy@aids-unix>
Subject: Rovers

First: Thanks to all who have responded to my initial note about
rovers.

Most people seem to have taken what I would regard as the easy (and 
commensurately uninteresting) way out by choosing a lunar environment,
precisely because teleoperation is feasible there, if a nuisance.  But
what about systems operating on more distant heavenly bodies or in
deep space?  Even robotic vehicles on Mars would suffer rather severe 
performance degradation if they had to rely upon an (approximately) 
earth-bound intelligence for control.  (A friend provides the
following simple gedankenexperiment: decide now to start
scratching-your-leg-until-it-stops-itching twenty minutes from now;
now wait twenty minutes before you can start; then, perhaps, wait at
least twenty minutes before you can consider stopping....)
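
[A back-of-the-envelope check on the delay behind that thought
experiment, using approximate Earth-Mars distances at the orbital
extremes. -- Ed.]

```python
# One-way light time between Earth and Mars.  Distances are
# approximate orbital extremes: ~54.6 million km near opposition,
# ~401 million km near conjunction.
C_KM_S = 299_792.458          # speed of light, km/s

def one_way_minutes(distance_km):
    """Minutes for a signal to cross the given distance."""
    return distance_km / C_KM_S / 60

closest = one_way_minutes(54.6e6)    # roughly 3 minutes
farthest = one_way_minutes(401e6)    # roughly 22 minutes
# Round-trip command latency is twice the one-way time: about
# 6 minutes at best and over 44 minutes at worst.
```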

Note that I'm not taking issue with the desirability of teleoperated 
lunar vehicles.  (In fact, there's good reason to believe that a 
planetary or lunar rover is politically unrealistic if NASA has 
anything to say about it, given what I understand to be the prevailing
NASA attitude towards *unmanned* space exploration, but that fact 
doesn't motivate my comments here.)  Rather, I'm suggesting we tackle
a problem domain sufficiently rich in AI problems to (a) keep things 
interesting and (b) allow us to explore what contribution, if any, we 
might be able to make as computer scientists, AI researchers, and 
engineers.

Do we know enough to solve, or even identify, the difficult issues in 
situation assessment, planning, and resource allocation faced by such
a system?  For example, reinterpreting Professor Minsky's desire that 
"anyone with such budgets should aim them at AI education and research
fellowships", let us then assume that these fellowships are provided
by NASA and have a problem domain specified: perhaps, for example, we 
might choose a space station orbiting Mars as our testing grounds,
with robot assembly prior to arrival of humans on-site as the problem.
What problems can we already solve, and where is the research needed?

                                        asc

------------------------------

Date: 5 Jul 1983 0731-MDT
From: William Galway <Galway@UTAH-20>
Subject: NMODE [LISP-Based Editor] and PSL

           [Reprinted from the Editor-People Discussion.]

I thought I'd add a bit more to what JQJ has said about NMODE, and add
a sales pitch, since I'm pretty close to its development.  NMODE was
written by Alan Snyder (and others) at Hewlett Packard Computer
Research Labs in Palo Alto, with some additional work done by folks
here at the University of Utah.  NMODE is written in PSL (Portable
Standard Lisp), a Lisp dialect developed at the University of Utah
under the direction of Martin Griss.  NMODE is distantly related to
EMODE (my not-quite-finished-thesis-project) in that it shares some of
the ideas and algorithms, but it's carried them much further (and more
cleanly).  (In fact, I hope to steal quite a bit from NMODE for my
final version of EMODE.)

We've tried to make PSL and NMODE quite portable, and we currently
have NMODE running on at least 4 different systems--TWENEX, Vax Unix,
and two different flavors of the Motorola 68000, one of them being the
Apollo.  (The Apollo version was just brought up last week.)

NMODE is quite TWENEX EMACS compatible.  Of course it doesn't have
nearly as many "libraries" developed for it yet.  It has quite a nice
Lisp Mode (of course), including the ability to directly execute code
from a buffer, but is weaker in other modes.  It's quite strong on
handling multiple windows (and multiple simultaneous terminals).
NMODE also supports a generalized browser mechanism (similar to Dired,
RMAIL, and the Smalltalk browser) which provides a common user
interface to file directories, source code, electronic mail,
documentation, etc.

There's a library available for the TWENEX version of NMODE that 
provides a hook to processes similar to what's available in Gosling's
EMACS for Unix.  (Unfortunately, nobody's gotten around to porting
that to the other machines--it's fairly easy to write machine specific
code in PSL, as well as machine independent code.)  We also have a
fairly nice "dynamic abbreviation" option (expands an abbreviation by
scanning the buffer for a word with the same prefix), although we
don't yet have the "standard" EMACS abbreviation mode.
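
[The dynamic-abbreviation behavior described above can be sketched in
a few lines.  The function name is illustrative; NMODE's real
implementation is in PSL, not Python. -- Ed.]

```python
# A minimal sketch of dynamic abbreviation: expand a typed prefix to
# the nearest earlier word in the buffer that starts with it.
import re

def dabbrev_expand(buffer_text, point, prefix):
    """Return the closest word before `point` that begins with `prefix`."""
    words = re.findall(r"\w+", buffer_text[:point])
    for word in reversed(words):          # scan backwards from point
        if word.startswith(prefix) and word != prefix:
            return word
    return None                           # no expansion found
```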

Of course, one of the nicest features of NMODE is the fact that its
implementation language is Lisp.  New extensions can be added simply
by editing code in a buffer, testing it interactively, and then
compiling it.  (Of course, this gets tricky sometimes--it is possible
to break the editor while adding a new feature.)

NMODE does tend to be a bit slow--it seems to perform quite acceptably
on the DEC-20 and on single-user M68000's with lots of real memory.
It tends to be somewhat painful on loaded Vaxen and Apollo 400s with
only 1 megabyte of real memory.  This could probably be improved by
spending more time on tuning the code (or, preferably, by tuning the
PSL compiler or its machine specific tables).

I'd like to take exception to the claim that "PSL is not a very 
powerful lisp", although it is true that "it is not clear it will 
catch on widely".  I don't have extensive experience with any other
Lisp systems, so I'm not really in a good position to compare them.
There are over 700 functions documented in the current PSL manual.
Perhaps the major feature of "bare" PSL is its ability to let you
write Lisp that compiles to "raw" machine code.  This is VERY
important for getting NMODE to run acceptably fast.  Perhaps the idea
that PSL isn't powerful comes from the belief that there are few big
systems built on top of it.  But that's changed quite a lot over the
last
couple of years.  In addition to NMODE, here's a list of some other
applications built on top of PSL:

   - Hearn's REDUCE computer algebra system.
   - Expert systems developed at HP (using a successor to FRL).
   - Ager's VALID logic teaching program.
   - Riesenfeld's ALPHA-1 Computer Aided Geometric Design
     System.
   - Novak's GLISP, an object oriented dialect of LISP.

NMODE is currently available "for internal use" as part of the PSL
distribution.  Future plans for distribution and maintenance of NMODE
are unclear.  (Nobody's very anxious to get tied up with maintaining
it.)

PSL systems are available from Utah for the following systems:

  VAX, Unix (4.1, 4.1a)     1600 BPI tar format
  DEC-20, Tops-20 V4 & V5   1600 BPI Dumper format
  Apollo, Aegis 5.0         6 floppy disks, RBAK format
  Extended DEC-20,          1600 BPI Dumper format
    Tops-20 V5

We are currently charging a $200 tape or floppy distribution fee for
each system.  To obtain a copy of the license and order form, please
send a NET message or letter with your US MAIL address to:

    Utah Symbolic Computation Group Secretary
    University of Utah - Dept. of Computer Science
    3160 Merrill Engineering Building
    Salt Lake City, Utah 84112

    ARPANET: CRUSE@UTAH-20
    USENET:  utah-cs!cruse

Send a note to me if you're interested in more information on NMODE.

--Will Galway [ GALWAY@UTAH-20 ]

------------------------------

End of AIList Digest
********************

∂18-Jul-83  1950	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #22
Received: from SRI-AI by SU-AI with TCP/SMTP; 18 Jul 83  19:49:39 PDT
Date: Monday, July 18, 1983 3:34PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #22
To: AIList@SRI-AI


AIList Digest            Tuesday, 19 Jul 1983      Volume 1 : Issue 22

Today's Topics:
  A Note from the Moderator
  Response to Extensible Editor Request
  How Many Prologs Are There?
  Grammar Correction
  Machine Learning Workshop Proceedings
  Upcoming Conferences
  Computers in the Media ...
  CSCSI-84 Call for Papers
----------------------------------------------------------------------

Date: Mon 18 Jul 83 09:10:36-PDT
From: AIList-Request@SRI-AI <Laws@SRI-AI.ARPA>
Subject: A Note from the Moderator

This issue of AIList depends heavily on reprints from several BBoards.
Such reporting is important, but should not be the only function of
this "discussion list".  Let's have a little audience participation.

                                        -- Ken Laws

------------------------------

Date: 25 Jun 1983 1247-EDT
From: Chris Ryland <CPR@MIT-XX>
Subject: Response to Extensible Editor Request

         [Reprinted from the Editor-People discussion list.]

Let me point out that T, the Yale Scheme derivative, has been ported 
to the Apollo, VAX/Unix, VAX/VMS, and, soon, the 370 family, from what
I hear.  It appears to be the most efficient and portable Lisp to
appear on the market.  John O'Donnell at Yale (Odonnell@YALE) is the T
project leader.

------------------------------

Date: 2 Jul 83 13:11:36 EDT  (Sat)
From: Bruce T. Smith <BTS.UNC@UDel-Relay>
Subject: How Many Prologs Are There?

                 [Reprinted from the Prolog Digest.]

        Here's Randy Harr's latest list of Prolog systems.  He's away 
from CWRU for the summer, and he asked me to keep up the list for him.
Since there have been several requests for information on finding a 
Prolog lately, I've recently submitted it to net.lang.prolog.  The 
info on MU-Prolog is the only thing I've added this summer, from a 
recent mailing from the U. of Melbourne.  (Now, if I could only find 
$100, I would like to try it...)

--Bruce T. Smith, UNC-CH
  duke!unc!bts (USENET)
  bts.unc@udel-relay (lesser NETworks)


list compiled by:  Randolph E. Harr
                   Case Western Reserve University
                   decvax!cwruecmp!harr
                   harr.Case@UDEL-RELAY

{ the list can be FTP'd as [SU-SCORE]PS:<PROLOG>Prolog.Availability.
  SU-SCORE observes Anonymous Login convention.  If you cannot FTP,
  I have a limited number of hard copies I could mail.  -ed }

------------------------------

Date: Mon 18 Jul 83 09:14:25-PDT
From: AIList-Request@SRI-AI <Laws@SRI-AI.ARPA>
Subject: Grammar Correction

The July issue of High Technology has an article titled "Software 
Tackles Grammar".  It includes very brief discussions of the Bell Labs
Writer's Workbench and the IBM EPISTLE systems.

------------------------------

Date: 15 Jul 83 09:25:36 EDT
From: GABINELLI@RUTGERS.ARPA
Subject: Machine Learning Workshop Proceedings

                [Reprinted from the Rutgers BBoard.]

Anyone wishing to order the Proceedings from the MLW can do so by
sending a check made out to the University of Illinois, in the amount
of $27.88 ($25 for Proceedings, $2.88 for postage) to:

            Ms. June Wingler
            Department of Computer Science
            1304 W. Springfield
            University of Illinois
            Urbana, Illinois 61801

------------------------------

Date: Fri 15 Jul 83 11:40:41-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Upcoming Conferences

                     [Reprinted from SU-BBoard.]

1983 ACM Sigmetrics Conference on Measurement and Modeling of Computer
Systems, August 29-31, 1983, Minneapolis, Minn.  To register, mail to
Registrar, Nolte Center, 315 Pillsbury Drive S.E., Minneapolis, MN
55455-0118.  For information contact Steven Bruell, CS Dept., Univ. of
MN, 123a Lind Hall, 612-376-3958.

2nd ACM Sigact-Sigops Symposium on Principles of Distributed Computing
at Le Parc Regent, 3625 Avenue du Parc, Montreal, Quebec, Canada
August 17-19, 1983.  Preregister by July 31: PODC Registration,
c/o Edward G. H. Smith, The Laurier Group, 275 Slater Street, Suite
1404, Ottawa, Ontario K1P 5H9, Canada.

HL

------------------------------

Date: 16 Jul 83  1610 PDT
From: Jim Davidson <JED@SU-AI>
Subject: Computers in the Media ...

                     [Reprinted from SU-BBoard.]

The August issue of Science Digest has an interview with Joseph
Weizenbaum.

He starts off by saying that the current popularity of personal
computers is something of a fad.  He claims that many of the uses of
PC's, such as storing recipes or recording appointments, are tasks
that are better done manually.

Then the discussion turns to AI:

Science Digest: You know, many of the computer's biggest promoters are
university computer scientists themselves, particularly in the more
exotic areas of computer science, like artificial intelligence.  Roger
Schank of Yale has set up a company, Cognitive Systems, that hopes to
market computer investment counselors, computer will-writers,
computers that can actually mimic a human's performance of a job.
[JED--but they have real trouble locating Bibb County.]  What do you
think of artificial intelligence entering the market place?

Joseph Weizenbaum: I suppose first of all that the word "mimicking" is
fairly significant.  These machines are not realizing human thought 
processes; they're mimicking them.  And I think what's being worked on
these days resembles the language understanding and production of
human beings only very superficially.  By the way, who needs a
computer will-maker?

SD: Some people can't afford a lawyer.

JW: The poor will be grateful to Dr. Schank for thinking of them...

..

SD: Yet, you know Dr. Schank's firm is videotaping humans in the hope
that by this means it can create a program which closely models the
expertise of the individual.

JW: That attitude displays such a degree of arrogance, such hubris
and, furthermore, a great deal of contempt for human beings.  To think
that one can take a very wise teacher, for example, and by observing
her capture the essence of that person to any significant degree is
simply absurd.  I'd say people who have that ambition, people who
think that it's going to be that easy or possible at all, are simply
deluded.

..

SD: Does it bother you that other computer scientists are marketing 
artificial intelligence?

JW: Yes, it bothers me.  It bothers me to the extent that these
commercial efforts are characterized at the same time as disinterested
science, the search for knowledge for knowledge's sake.  And it isn't.
It's done for money.  These people are spending the only capital
science has to offer:  its good name.  And once we lose that we've
lost everything.

------------------------------

Date: 14 Jul 83 11:10:07-PDT (Thu)
From: decvax!linus!utzoo!utcsrgv!tsotsos @ Ucb-Vax
Subject: CSCSI-84 Call for Papers
Article-I.D.: utcsrgv.1754

                         CALL FOR PAPERS

                         C S C S I - 8 4

                      Canadian Society for
              Computational Studies of Intelligence

                  University of Western Ontario
                         London, Ontario
                         May 18-20, 1984

     The Fifth National Conference of the CSCSI will be held at the
University of Western Ontario in London, Canada.  Papers are requested
in all areas of AI research, particularly those listed below.  The
Program Committee members responsible for these areas are included.

  Knowledge Representation:
    Ron Brachman (Fairchild R & D), John Mylopoulos (U of Toronto)
  Learning:
    Tom Mitchell (Rutgers U), Jaime Carbonell (CMU)
  Natural Language:
    Bonnie Webber (U of Pennsylvania), Ray Perrault (SRI)
  Computer Vision:
    Bob Woodham (U of British Columbia), Allen Hanson (U Mass)
  Robotics:
    Takeo Kanade (CMU), John Hollerbach (MIT)
  Expert Systems and Applications:
    Harry Pople (U of Pittsburgh), Victor Lesser (U Mass)
  Logic Programming:
    Randy Goebel (U of Waterloo), Veronica Dahl (Simon Fraser U)
  Cognitive Modelling:
    Zenon Pylyshyn, Ed Stabler (U of Western Ontario)
  Problem Solving and Planning:
    Stan Rosenschein (SRI), Drew McDermott (Yale)

     Authors are requested to prepare Full papers, of no more than
4000 words in length, or Short papers of no more than 2000 words in
length.  A full page of clear diagrams counts as 1000 words.  When
submitting, authors must supply the word count as well as the area in
which they wish their paper reviewed.  (Combinations of the above
areas are acceptable).  The Full paper classification is intended for
well-developed ideas, with significant demonstration of validity,
while the Short paper classification is intended for descriptions of
research in progress.  Authors must ensure that their papers
describe original contributions to or novel applications of
Artificial Intelligence, regardless of length classification, and
that the research is properly compared and contrasted with relevant
literature.
     Three copies of each submitted paper must be in the hands of the
Program Chairman by December 7, 1983.  Papers arriving after that date
will be returned unopened, and papers lacking word count and
classifications will also be returned.  Papers will be fully reviewed
by appropriate members of the program committee.  Notice of acceptance
will be sent on February 28, 1984, and final camera ready versions are
due on March 31, 1984.  All accepted papers will appear in the
conference proceedings.

     Correspondence should be addressed to either the General Chairman
or the Program Chairman, as appropriate.

  General Chairman                  Program Chairman

  Ted Elcock,                       John K. Tsotsos
  Dept. of Computer Science,        Dept. of Computer Science,
  Engineering and Mathematical      10 King's College Rd.,
       Sciences Bldg.,              University of Toronto,
  University of Western Ontario     Toronto, Ontario, Canada,
  London, Ontario, Canada           M5S 1A4
  N6A 5B9                           (416)-978-3619
  (519)-679-3567

------------------------------

End of AIList Digest
********************

∂21-Jul-83  1918	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #23
Received: from SRI-AI by SU-AI with TCP/SMTP; 21 Jul 83  19:17:33 PDT
Date: Wednesday, July 20, 1983 3:35PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #23
To: AIList@SRI-AI


AIList Digest           Thursday, 21 Jul 1983      Volume 1 : Issue 23

Today's Topics:
  Reply from Cognitive Systems
  Lisp Portability
  UTILISP
  Hampshire College Summer Studies in Mathematics
  Re: CSCSI-84 Call for Papers
  AI Definitions (3)
  HP Computer Colloquium 7/21
  Next AFLB talk(s)
  Special Seminar--C. Beeri
----------------------------------------------------------------------

Date: Tue, 19 Jul 83 18:18:54 EDT
From: Steven Shwartz <Shwartz@YALE.ARPA>
Subject: Reply from Cognitive Systems

The following is a response to the recent letter to the editor of 
Psychology Today that was circulated on AI-List concerning a natural 
language system developed by Cognitive Systems Inc. for an oil 
company.  It states that "[the Cognitive Systems program] is friendly 
as long as you play by its rules and tell it what it expects to hear."

The system in question was neither designed nor touted to be a general
natural language system.  It was designed to understand and respond to
queries about oil wells and topographical maps, and within its 
specified domain, it performs extremely well.  This system has been 
demonstrated at several conferences, most recently the Applied Natural
Language Conference in Santa Monica (February, 1983), where numerous 
members of the academic community tested the system and were favorably
impressed.

It should be noted that the individual who wrote the letter was not 
employed by either Cognitive Systems or the division of the oil 
company which commissioned this program.  In fact, he was a programmer
of the query language that the natural language front end was designed
to replace.

------------------------------

Date: Tue 19 Jul 83 15:24:00-EDT
From: Chip Maguire <Maguire@COLUMBIA-20.ARPA>
Subject: Lisp Portability

  [In response to Chris Ryland's message to Editor-People. -- KIL]

        Once again T is touted as "... the most efficient and portable
Lisp to appear on the market." As one of the people associated with
the development of PSL (Portable Standard LISP) at the University of
Utah, I feel that I must point out that PSL has been ported to the
Apollo, VAX/UNIX, DECSystem-20/TOPS-20, HP9836/???, Wicat/!?!?!?, and
versions are currently being implemented for the CRAY and 370
families.

The predecessor system "Standard LISP" along with the REDUCE symbolic 
algebra system ran on the following machines (as of October 1979):
Amdahl: 470V/6; CDC: 640, 6600, 7600, Cyber 76; Burroughs: B6700,
B7700; DEC: PDP-10, DECsystem-10, DECsystem-20; CEMA: ES 1040;
Fujitsu: FACOM M-190; Hitachi: HITAC M-160, M-180; Honeywell: 66/60;
Honeywell-Bull:  1642; IBM: 360/44, 360/67, 360/75, 360/91, 370/155,
370/158, 370/165, 370/168, 3033, 370/195; ITEL: AS-6; Siemens: 4004;
Telefunken: TR 440; and UNIVAC: 1108, 1110.

  Then experiments began to port the system without having to deal
with a hand-coded LISP system which was slightly or grossly different
for each machine.  This led to a series of P-coded implementations
(for the 20, PDP-11, Z80, and Cray).  This then led via the Portable
LISP Compiler (Hearn and Griss) to the current compiler-based PSL
system.

So let's hear more about the good ideas in T and fewer nebulous
comments like: "more efficient and portable".

------------------------------

Date: 19 Jul 1983 13:02:23-EDT
From: Ichiro.Ogata at CMU-CS-G
Subject: UTILISP

                  [Reprinted from the CMU BBoard.]

        I came from the Univ. of Tokyo and brought a magnetic tape
  that contains UTILISP (a Lisp-Machine-Lisp-like Lisp), PROLOG-KR
  (described in UTILISP), and AMUSE (a structured editor).
        It works on IBM 370s (and compatible machines).  If this
interests you, please contact me.
                Ichiro Ogata io@cmu-cs-g


[and, for AIList, ...]

Yes, we are pleased to make UTILISP available to everyone.  UTILISP is
written in Assembler and includes a compiler.  If you want more
information, please contact our colleagues.  Their address is

        Tokyo-To Bunkyo-Ku Hongo
                7-chome 3-1
         Tokyo-Daigaku Kogaku-Bu Keisukogaku-Ka
                Wada Laboratory

        Ichiro Ogata..

------------------------------

Date: 19 Jul 83 8:59:19-PDT (Tue)
From: ihnp4!houxm!hocda!machaids!pxs @ Ucb-Vax
Subject: Hampshire College Summer Studies in Mathematics
Article-I.D.: machaids.408


(7/17/83):

The 12th Hampshire College Summer Studies in Mathematics for high
ability high school students is now in session until August 19 in
Amherst, MA.  The Summer Studies has initiated a program in cognitive
sciences and is actively seeking foundation and industry support.
(Observers and guest lecturers are invited.)  For more information,
please write David Kelly, Box SS, Hampshire College, Amherst, MA
01002, or call (413) 549-4600 x357 (messages on x371).


Submitted to USENET for David Kelly by Peter Squires, HCSSiM, '77,
                                        ...ihnp4!machaids!pxs

------------------------------

Date: 19 Jul 83 18:43:10 EDT  (Tue)
From: Craig Stanfill <craig.umcp-cs@UDel-Relay>
Subject: Re: CSCSI-84 Call for Papers

    Authors are requested to prepare Full papers, of
    no more than 4000 words in length, or Short papers
    of no more than 2000 words in length.  A full page
    of clear diagrams counts as 1000 words ...

In other words, a picture is worth a thousand words? (ick)

------------------------------

Date: 18 Jul 83 18:13:40 EDT
From: Sri <Sridharan@RUTGERS.ARPA>
Subject: Defining AI ?

                [Reprinted from the Rutgers BBoard.]

I found the following sample entries in a dictionary and thought that
they were good definitions, esp. for a popular dictionary.  Your
reactions are welcome.

Selected entries from the Dictionary of Information Technology by
Dennis Longley and Michael Shain, John Wiley, 1982.

  Artificial Intelligence
    Research and study into methods for the development of
    systems that can demonstrate some of those attributes
    associated with human intelligence, e.g. the ability to
    recognize a variety of patterns from various viewpoints, the
    ability to form hypotheses from a limited set of
    information, the ability to select relevant information from
    a large set and draw conclusions from it etc.  See Expert
    Systems, Pattern Recognition, Robotics.

  Expert Systems
    In data bases, systems containing a database and associated
    software that enable a user to conduct an apparently
    intelligent dialog with the system in a user oriented
    language.  See Artificial Intelligence.

  Pattern Recognition
    In computing, the automatic recognition of shapes, patterns
    and curves.  The human optical and brain system is much
    superior to the most advanced computer system in matching
    images to those stored in memory.  This area is subject to
    intensive research effort because of its importance in the
    fields of robotics and artificial intelligence, and its
    potential areas of application, e.g.  reading handwritten
    script.  See Artificial Intelligence, Robotics.

  Robotics
    An area of artificial intelligence concerned with robots.

 Robot
    A device that can accept input signals and/or sense
    environmental conditions, process the data so obtained and
    activate a mechanical device to perform a desired action
    relating to the perceived environmental conditions or input
    signal.

------------------------------

Date: 19 Jul 83 09:43:02 EDT
From: Michael <Berman@RUTGERS.ARPA>
Subject: AI Definitions

                [Reprinted from the Rutgers BBoard.]

Speaking as an AI "outsider" the definitions seemed pretty good to me,
except for robotics.  I'm not sure I would classify it as a field of
AI, but rather as one that uses techniques from AI as well as other
areas of computer science and engineering.  Comments?

------------------------------

Date: 19 Jul 83 09:43:10 EDT
From: KELLY@RUTGERS.ARPA
Subject: re: Defining AI?

                [Reprinted from the Rutgers BBoard.]

Those definitions all look pretty good to me, except for the 
content-free entry under EXPERT SYSTEMS.  That is certainly a common
view among implementers of a certain mold (i.e., those coming from a
quasi-N.L. approach, e.g., LUNAR), but I wouldn't say that this is
where the FOCUS of *our* expert systems research has been.  Whatever
happened to the reason for calling such beasts "Expert" systems in the
first place?  It certainly wasn't because they were sterling
conversationalists!!

Anyway 4 out of 5 is pretty good.

Sorry to flame on friendly ears.

VK

------------------------------

Date: 18 Jul 83 20:37:04 PDT (Monday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium 7/21


                Guy M. Lohman

                Research Staff Member
                IBM Research Laboratory
                San Jose, CA

                R* Project

The R* project was formed to address the problems of distributed 
databases, with the objective of designing and building an
experimental prototype database management system which would handle
replicated and partitioned data for both query and modification.  The
R* prototype supports a confederation of voluntarily cooperating,
homogeneous, relational database management systems, each with its own
data, sharing data across a communication network.

Two seemingly conflicting goals of distributed databases have been 
resolved efficiently in R*:  single-site image and site autonomy.  To 
make the system easy to use, R* presents a single-site image:  a user
requesting data need not be aware of or specify either the location
or the access path for retrieving that data, which requires close
coordination among sites.  On the other hand, to make local data
available even when other sites or communication lines fail, each R*
database site must be highly autonomous.

The talk will discuss how these goals were compatibly achieved in the 
design and implementation of R* without sacrificing system
performance.

        Thursday, July 21, 1983 4:00 pm

        Stanford Park Labs
        Hewlett Packard
        5M Conference room
        1501 Page Mill Road

*** Be sure to arrive at the building's lobby ON TIME, so that you may
be escorted to the conference room.

------------------------------

Date: Tue 19 Jul 83 22:41:51-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Next AFLB talk(s)

                     [Reprinted from SU-BBoard.]


                   N E X T A F L B T A L K (S)

Despite the heat of summer AFLB is still alive!

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


7/21/83 - Michael Luby (Berkeley):

"Monte Carlo Algorithms to Approximate Solutions for NP-hard 
Enumeration and Reliability Problems"

****** Time and place: July 21, 12:30 pm in MJ352 (Bldg. 460) *****

If you'd like an abstract, you should be on the AFLB mailing list. -
Andrei

------------------------------

Date: Tue 19 Jul 83 15:42:54-PDT
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Special Seminar--C. Beeri

                                SPECIAL SEMINAR

                          Thursday - July 21 - 2 P.M.

                  Margaret Jacks Hall (Bldg. 460) - Room 352

              CONCURRENCY CONTROL THEORY FOR NESTED TRANSACTIONS

                                   C. Beeri

Nested transactions occur in many situations, including explicit
nesting in application programs and implicit nesting in computing
systems.  E.g., database systems are usually implemented as multilevel
systems where operations of a high level language are translated in
several stages into programs using low level operations.  This creates
a nested transaction structure.  The same applies to systems that
support atomic data types, or concurrent access to search structures.
Synchronization of concurrent transactions can be performed at one or
more levels.  The existing theory does not provide a framework for
reasoning about concurrency in systems that support nesting.

In the talk, a general nested transaction model will be described.
The model can accommodate most of the nested transaction systems
currently known.  Tools for proving the serializability of
computations, and hence the correctness of the algorithms generating
them, will be presented.  In particular, it will be shown that the
p r a c t i c a l theory of CPSR logs can be easily generalized
so that previously known results (e.g., correctness of 2PL) can
be used.  Examples will be presented.
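
For readers unfamiliar with two-phase locking (2PL), the protocol
mentioned in the abstract, its rule can be sketched in a few lines (a
toy illustration, not material from the talk): a transaction may
acquire locks only until its first release.

```python
# Minimal sketch of the two-phase locking rule: a growing phase in
# which locks are acquired, then a shrinking phase in which they are
# released; no lock may be acquired after the first release.

class TwoPhaseTransaction:
    def __init__(self):
        self.held = set()
        self.shrinking = False         # set once the first lock is released

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violated: lock after first unlock")
        self.held.add(item)

    def unlock(self, item):
        self.shrinking = True
        self.held.discard(item)

t = TwoPhaseTransaction()
t.lock("x")
t.lock("y")        # growing phase: acquire freely
t.unlock("x")      # first release starts the shrinking phase
try:
    t.lock("z")    # illegal under 2PL
except RuntimeError as e:
    print(e)
```

Transactions obeying this rule produce serializable (conflict-
serializable) schedules, which is the 2PL correctness result the
abstract alludes to.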

------------------------------

End of AIList Digest
********************
∂21-Jul-83  1819	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #24
Received: from SRI-AI by SU-AI with TCP/SMTP; 21 Jul 83  18:19:14 PDT
Date: Thursday, July 21, 1983 4:37PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #24
To: AIList@SRI-AI


AIList Digest            Friday, 22 Jul 1983       Volume 1 : Issue 24

Today's Topics:
  Weizenbaum in Science Digest
  AAAI Preliminary Schedule [Pointer]
  Report on Machine Learning Workshop [Abridged]
----------------------------------------------------------------------

Date: 20 July 1983 22:28 EDT
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Weizenbaum in Science Digest

How much credence do Professor Weizenbaum's ideas get among the
current A.I. community?  How do these statements relate to his work?

-- Steve

------------------------------

Date: 20 Jul 1983 0407-EDT
From: STRAZ.TD%MIT-OZ@MIT-MC
Subject: AAAI Preliminary Schedule

What follows is a complete preliminary schedule for AAAI-83.
Presumably changes are still possible, particularly in times, but it
does tell what papers will be presented.

AAAI-83 THE NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE at the
Washington Hilton Hotel, Washington, D.C. August 22-26, 1983, 
sponsored by THE AMERICAN ASSOCIATION FOR ARTIFICIAL INTELLIGENCE and
co-sponsored by University of Maryland and George Washington
University.

[Interested readers should FTP file <AILIST>V1N25.TXT from SRI-AI.  It
is about 19,000 characters.  -- KIL]

------------------------------

Date: 19 Jul 1983 1535-PDT
From: Jack Mostow <MOSTOW@USC-ISIF>
Subject: Report on Machine Learning Workshop [Abridged]


             1983 INTERNATIONAL MACHINE LEARNING WORKSHOP:
                          AN INFORMAL REPORT

                              Jack Mostow
                  USC Information Sciences Institute
                          4676 Admiralty Way
                       Marina del Rey, CA. 90291

                       Version of July 18, 1983

  [NOTE: This is a draft of a report to appear in the October 1983
SIGART.  I am circulating it at this time to get comments before
sending it in.  The report should give the flavor of the work
presented at the workshop, but is not intended to be formal, precise,
or complete.  With this understanding, please send corrections and
questions ASAP (before the end of July) to MOSTOW@USC-ISIF.  Thanks.
- Jack]

  The first invitational Machine Learning Workshop was held at C-MU
in the summer of 1980; selected papers were eventually published in
Machine Learning, edited by the conference organizers, Ryszard
Michalski, Jaime Carbonell, and Tom Mitchell.  The same winning team
has now brought us the 1983 International Machine Learning Workshop,
held June 21-23 in Allerton House, an English manor on a park-like
estate donated to the University of Illinois.  The Workshop featured
33 papers, two panel discussions, countless bull sessions, very
little sleep, and lots of fun.

  This totally subjective report tries to convey one participant's
impression of the event, together with a few random thoughts it
inspired.  I have classified the papers rather arbitrarily under the
topics of "Analogy," "Knowledge Transformation," and "Induction"
(broadly construed), but of course 33 independent research efforts
can hardly be expected to fall neatly into any simple classification
scheme.  The papers are discussed in semi-random order; I have tried
to put related papers next to each other.

    [The entire document is about 12 pages of printed text.
     I am abridging it here; interested readers may FTP the
     file <AILIST>V1N24.TXT from SRI-AI. -- KIL]

1. Analogy
     1.1. Lessons
2. Knowledge Transformation
     2.1. Lessons
3. Induction
     3.1. Inducing Rules
     3.2. Dealing with Noise
     3.3. Logic-based Work
     3.4. Cognitive Modelling
     3.5. Lessons
4. Panel Discussion:  Cognitive Modelling -- Why Bother?
5. Panel Discussion:  "Machine Learning -- Challenges of the 80's"


6. A Bit of Perspective
  No overview would be complete without a picture that tries to put
everything in perspective:


     -------------> generalizations ------------
    |                                           |
    |                                           |
INDUCTION                                  COMPILATION
(Knowledge Discovery)                   (Knowledge Transformation)
    |                                           |
    |                                           v
examples ----------- ANALOGY  --------> specialized solutions
                (Knowledge Transfer)

 Figure 6-1:   The Learning Triangle:  Induction, Analogy, Compilation

  Of course the distinction between these three forms of learning
breaks down under close examination.  For example, consider LEX2:
does it induce heuristics from examples, guided by its definition of
"heuristic," or does it compile that definition into special cases,
guided by examples?

7. Looking to the Future
  The 1983 International Workshop on Machine Learning felt like
history in the making.  What could be a more exciting endeavor than
getting machines to learn?  As we gathered for the official workshop
photograph, I thought of Pamela McCorduck's Machines Who Think, and
wondered if twenty years from now this gathering might not seem as
significant as some of those described there.  I felt privileged to
be part of it.

  In the meantime, there are lessons to be absorbed, and work to be
done....

  One lesson of the workshop is the importance of incremental
learning methods.  As one speaker observed, you can only learn things
you already almost know.  The most robust learning can be expected
from systems that improve their knowledge gradually, building on what
they have already learned, and using new data to repair deficiencies
and improve performance, whether it be in analogy [Burstein,
Carbonell], induction [Amarel, Dietterich & Buchanan, Holland,
Lebowitz, Mitchell], or knowledge transformation [Rosenbloom,
Anderson, Lenat].  This theme reflects the related idea of learning
and problem-solving as inherent parts of each other [Carbonell,
Mitchell, Rosenbloom].

  Of course not everyone saw things the way I do.  Here's Tom
Dietterich again: ``I was surprised that you summarized the workshop
in terms of an "incremental" theme.  I don't think incremental-ness
is particularly important--especially for expert system work.
Quinlan gets his noise tolerance by training on a whole batch of
examples at once.  I would have summarized the workshop by saying
that the key theme was the move away from syntax.  Hardly anyone
talked about "matching" and syntactic generalization.  The whole
concern was with the semantic justifications for some learned
concept: All of the analogy folks were doing this, as were Mitchell,
DeJong, and Dietterich and Buchanan.  The most interesting point that
was made, I thought, was Mitchell's point that we need to look at
cases where we can provide only partial justification for the
generalizations.  DeJong's "causal completeness" is too stringent a
requirement.''

  Second, the importance of making knowledge and goals explicit is
illustrated by the progress that can be made when a learner has
access to a description of what it is trying to acquire, whether it
is a criterion for the form of an inductive hypothesis [Michalski et
al] or a formal characterization of the kind of heuristic to be
learned for guiding a search [Mitchell et al].

  Third, as Doug Lenat pointed out, continued progress in learning
will require integrating multiple methods.  In particular, we need
ways to combine analytic and empirical techniques to escape from
their limitations when used alone.

  Finally, I think we can extrapolate from the experience of AI in
the '60's and '70's to set a useful direction for machine learning
research in the '80's.  Briefly, in AI the '60's taught us that
certain general methods exist and can produce some results, while the
'70's showed that large amounts of domain knowledge are required to
achieve powerful performance.  The same can be said for learning.  I
consider a primary goal of AI in the '80's, perhaps the primary goal,
to be the development of general techniques for exploiting domain
knowledge.  One such technique is the ability to learn, which itself
has proved to require large amounts of domain knowledge.  Whether we
approach this goal by building domain-specific learners (e.g.
MetaDendral) and then generalizing their methods (e.g. version space
induction), or by attempting to formulate general methods more
directly, we should keep in mind that a general and robust
intelligence will require the ability to learn from its experience
and apply its knowledge and methods to problems in a variety of
domains.
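
  Version space induction, mentioned above, can be hinted at with a
Find-S-style sketch of the specific boundary (the attribute values
below are invented for illustration; Mitchell's full method also
maintains a general boundary):

```python
# Find-S-style update of the specific boundary of a version space:
# generalize the hypothesis just enough to cover each positive
# example.  '?' means "any value is acceptable".

def generalize(hypothesis, example):
    return tuple(h if h == e else '?' for h, e in zip(hypothesis, example))

positives = [("red", "round", "small"),
             ("red", "round", "large")]

h = positives[0]                      # start maximally specific
for ex in positives[1:]:
    h = generalize(h, ex)

print(h)                              # ('red', 'round', '?')
```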

  A well-placed source has informed me that plans are already afoot
to produce a successor to the Machine Learning book, using the 1983
workshop papers and discussions as raw material.  In the meantime,
there is a small number of extra proceedings which can be acquired
(until they run out) for $27.88 ($25 + $2.88 postage in U.S., more
elsewhere), check payable to University of Illinois.  Order from

     June Wingler
     University of Illinois at Urbana-Champaign
     Department of Computer Science
     1304 W. Springfield Avenue
     Urbana, IL 61801

  There are tentative plans for a similar workshop next summer at
Rutgers.

------------------------------

End of AIList Digest
********************

∂21-Jul-83  1640	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #25
Received: from SRI-AI by SU-AI with TCP/SMTP; 21 Jul 83  16:40:51 PDT
Date: Thursday, July 21, 1983 4:37PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #25
To: AIList@SRI-AI


AIList Digest            Friday, 22 Jul 1983       Volume 1 : Issue 25

Today's Topics:
  AAAI Preliminary Schedule
----------------------------------------------------------------------

Date: 20 Jul 1983 0407-EDT
From: STRAZ.TD%MIT-OZ@MIT-MC
Subject: AAAI Preliminary Schedule

What follows is a complete preliminary schedule for AAAI-83.
Presumably changes are still possible, particularly in times, but it
does tell what papers will be presented.

AAAI-83 THE NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE at the
Washington Hilton Hotel, Washington, D.C. August 22-26, 1983, 
sponsored by THE AMERICAN ASSOCIATION FOR ARTIFICIAL INTELLIGENCE and
co-sponsored by University of Maryland and George Washington
University.

CONFERENCE SCHEDULE

SUNDAY, AUGUST 21
←←←←←←←←←←←←←←←←←

5:30-7:00 CONFERENCE, TUTORIAL, AND TECHNOLOGY TRANSFER SYMPOSIUM REGISTRATION

MONDAY, AUGUST 22 - FRIDAY, AUGUST 26
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

9:00-5:00 AAAI-83 R & D EXHIBIT PROGRAM 

WEDNESDAY, AUGUST 24 - FRIDAY, AUGUST 26
--------------------------------------

8:00 p.m.- SMALL GROUP MEETINGS : please sign up for rooms at the information
           desk in the Concourse Lobby.

SUNDAY, AUGUST 21 - THURSDAY, AUGUST 23
----------------------------------------

7:00 p.m. FREDKIN- AAAI COMPUTER CHESS TOURNAMENT

Each night at 7:00 p.m., the Fredkin-AAAI Tournament will demonstrate
the Turing Test: human players will not know whether they are playing
a machine or another human, each being equally likely.  Human players
will be rewarded primarily for winning, but secondarily for guessing the 
genus of their opponent.  The audience also will be kept in the dark,
and there should be some fun in guessing who is who as the game progresses.

There will be three games per night; each night, two games will pit
a human being against a computer and the other game will pit two
human players against each other.  The computer systems' names are
Belle and Nuches.

TUTORIAL PROGRAM

MONDAY, AUGUST 22 - TUESDAY, AUGUST 23
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

8:00-5:00 TUTORIAL REGISTRATION in the CONCOURSE LOBBY, CONCOURSE LEVEL

MONDAY, AUGUST 22
←←←←←←←←←←←←←←←←←

9:00-1:00  TUTORIAL NUMBER 1: AN INTRODUCTION TO ARTIFICIAL INTELLIGENCE
			Dr. Eugene Charniak, Brown University
			    
9:00-1:00  TUTORIAL NUMBER 2: AN INTRODUCTION TO ROBOTICS
			Dr. Richard Paul, Purdue University

2:00-6:00  TUTORIAL NUMBER 3: NATURAL LANGUAGE PROCESSING
		        Dr. Gary G. Hendrix, SYMANTEC, Inc.

2:00-6:00  TUTORIAL NUMBER 4: EXPERT SYSTEMS - PART 1 - FUNDAMENTALS
			Drs. Randall Davis and Charles Rich, MIT

TUESDAY, AUGUST 23
←←←←←←←←←←←←←←←←←←

9:00-1:00 TUTORIAL NUMBER 5: EXPERT SYSTEMS - PART 2 - APPLICATION AREAS
			Drs. Randall Davis and Charles Rich, MIT

9:00-1:00 TUTORIAL NUMBER 6: AI PROGRAMMING TECHNOLOGY - LANGUAGES AND MACHINES
			Dr. Howard Shrobe, MIT and Symbolics
		        Dr. Larry Masinter, Xerox Palo Alto Research Center
				
MONDAY, AUGUST 22
←←←←←←←←←←←←←←←←←

8:00-5:00 TECHNOLOGY TRANSFER SYMPOSIUM REGISTRATION in CONCOURSE LOBBY

TUESDAY, AUGUST 23
←←←←←←←←←←←←←←←←←←

8:00-2:00 TECHNOLOGY TRANSFER SYMPOSIUM REGISTRATION in CONCOURSE LOBBY

2:00-9:30 TECHNOLOGY TRANSFER SYMPOSIUM (6-7:30 dinner break)

TECHNICAL WORKSHOPS
←←←←←←←←←←←←←←←←←←←

MONDAY, AUGUST 22 AND TUESDAY, AUGUST 23
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

9:00-5:00 SENSORS AND ALGORITHMS FOR 3-D VISION Dr. Azriel Rosenfeld, Maryland

9:00-5:00 PLANNING organized by Dr. Robert Wilensky, Berkeley

HOSPITALITY
←←←←←←←←←←←

MONDAY, AUGUST 22
←←←←←←←←←←←←←←←←←

6:00-8:00 RECEPTION (Welcome!) in the CONCOURSE EXHIBIT HALL, CONCOURSE LEVEL

TUESDAY, AUGUST 23
←←←←←←←←←←←←←←←←←←

5:30-7:00 CONFERENCE REGISTRATION RECEPTION; INTERNATIONAL TERRACE

WEDNESDAY, AUGUST 24
←←←←←←←←←←←←←←←←←←←←

6:00-8:00 MAIN CONFERENCE RECEPTION (NO HOST BAR); INTERNATIONAL TERRACE

THURSDAY, AUGUST 25
←←←←←←←←←←←←←←←←←←←

6:00-7:00 BOARDING BUSES FOR GALA at the T STREET ENTRANCE, TERRACE LEVEL
				
7:00-10:30 GALA RECEPTION AND ENTERTAINMENT AT THE CAPITOL CHILDREN'S MUSEUM 
           (NO HOST BAR) *** RESERVATIONS ONLY ***
				
FRIDAY, AUGUST 26
←←←←←←←←←←←←←←←←←

6:00-8:00 HAIL AND FAREWELL in the INTERNATIONAL BALLROOM EAST

TECHNICAL CONFERENCE SCHEDULE
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←	

* PLEASE NOTE: Depending on the size of attendance, closed circuit T.V.
will be available Wednesday, August 24 thru Friday, August 26, for
particular sessions (that is, those sessions scheduled for the
International Ballroom Center and West).  The closed circuit
T.V. rooms will be the Georgetown Room, Concourse Level, and the
Back Terrace, Terrace Level.

MONDAY, AUGUST 22
←←←←←←←←←←←←←←←←←

8:00-5:00 TECHNICAL CONFERENCE REGISTRATION

TUESDAY, AUGUST 23
←←←←←←←←←←←←←←←←←←

8:00-7:00 TECHNICAL CONFERENCE REGISTRATION 

7:00 p.m. SPECIAL SESSION dedicated to Dr. Victor Lesser, USSR

WEDNESDAY, AUGUST 24
←←←←←←←←←←←←←←←←←←←←

8:00-5:00 TECHNICAL CONFERENCE REGISTRATION

KNOWLEDGE REPRESENTATION AND PROBLEM SOLVING SESSION I
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

9:00-9:20 AN OVERVIEW OF META-LEVEL ARCHITECTURE Michael Genesereth, Stanford 
				
9:20-9:40 FINDING ALL OF THE SOLUTIONS TO A PROBLEM David Smith, Stanford 
				
9:40-10:00 COMMUNICATION & INTERACTION IN MULTI-AGENT PLANNING
           Michael Georgeff, SRI

10:00-10:20 DATA DEPENDENCIES ON INEQUALITIES Drew McDermott, Yale 

10:20-10:40 KRYPTON: INTEGRATING TERMINOLOGY & ASSERTION 
            Ronald Brachman and Hector Levesque, Fairchild AI Laboratory
            Richard Fikes, Xerox PARC

in the INTERNATIONAL BALLROOM CENTER

COGNITIVE MODELLING SESSION I
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

9:00-9:20 THREE DIMENSIONS OF DESIGN DEVELOPMENT Neil M. Goldman, USC/ISI

9:20-9:40 SIX PROBLEMS FOR STORY UNDERSTANDERS  Peter Norvig, Berkeley

9:40-10:00 PLANNING AND GOAL INTERACTION: THE USE OF PAST SOLUTIONS IN PRESENT
           SITUATIONS Kristian Hammond, Yale 

10:00-10:20 A MODEL OF INCREMENTAL LEARNING BY INCREMENTAL AND ANALOGICAL 
            REASONING & DEBUGGING Mark Burnstein, Yale 

10:20-10:40 MODELLING OF HUMAN KNOWLEDGE ROUTES: PARTIAL AND INDIVIDUAL 
            VARIATION Benjamin Kuipers, Tufts 

in the INTERNATIONAL BALLROOM WEST

VISION AND ROBOTICS SESSION I
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

9:00-9:20 A VARIATIONAL APPROACH TO EDGE DETECTION John Canny, MIT

9:20-9:40 SURFACE CONSTRAINTS FROM LINEAR EXTENTS John Kender, Columbia

9:40-10:00 AN ITERATIVE METHOD FOR RECONSTRUCTING CONVEX POLYHEDRA FROM 
           EXTENDED GAUSSIAN IMAGES James J. Little, U.of British Columbia

10:00-10:20 TWO RESULTS CONCERNING AMBIGUITY IN SHAPE FROM SHADING
            M.J. Brooks, Flinders University of South Australia

In the INTERNATIONAL BALLROOM EAST


10:40-11:00 BREAK

11:00-12:30 PANEL: LOGIC PROGRAMMING
            Howard Shrobe, Organizer, MIT
            Michael Genesereth, Stanford,
            J. Alan Robinson, David Warren, SRI International

In the INTERNATIONAL BALLROOM CENTER

12:30-2:00 LUNCH BREAK
           ANNUAL SIGART BUSINESS MEETING in the HEMISPHERE ROOM

2:00-3:10 INVITED LECTURE: THE STATE OF THE ART IN COMPUTER LEARNING
          Douglas Lenat, Stanford in the INTERNATIONAL BALLROOM CENTER

3:10-3:30 BREAK

NATURAL LANGUAGE SESSION I
←←←←←←←←←←←←←←←←←←←←←←←←←←

3:30-3:50 RECURSION IN TEXT AND ITS USE IN LANGUAGE GENERATION 
           Kathleen McKeown, Columbia

3:50-4:10 RELAXATION IN REFERENCE Bradley Goodman, BBN

4:10-4:30 TRACKING USER GOALS IN AN INFORMATION-SEEKING ENVIRONMENT 
          M. Sandra Carberry, Delaware

4:30-4:50 REASONS FOR BELIEFS IN UNDERSTANDING: APPLICATIONS OF NON-MONOTONIC
          DEPENDENCIES TO STORY PROCESSING Paul O'Rorke, Illinois

4:50-5:10 RESEARCHER: AN OVERVIEW Michael Lebowitz, Columbia 
	
in the INTERNATIONAL BALLROOM EAST

LEARNING SESSION I
←←←←←←←←←←←←←←←←←←

3:30-3:50 EPISODIC LEARNING Dennis Kibler and Bruce Porter, California-Irvine

3:50-4:10 HUMAN PROCEDURAL SKILL ACQUISITION: THEORY, MODEL AND PSYCHOLOGICAL
          VALIDATION Kurt VanLehn, Xerox PARC

4:10-4:30 A PRODUCTION SYSTEM FOR LEARNING FROM AN EXPERT
          D. Paul Benjamin and Malcolm Harrison, Courant Institute, NYU

4:30-4:50 OPERATOR DECOMPOSABILITY: A NEW TYPE OF PROBLEM STRUCTURE 
          Richard Korf, CMU

4:50-5:10 SCHEMA SELECTION AND STOCHASTIC INFERENCE IN MODULAR ENVIRONMENT
          Paul Smolensky, UCSD

in the INTERNATIONAL BALLROOM WEST

EXPERT SYSTEMS SESSION I
------------------------

3:30-3:50 THE DESIGN OF A LEGAL ANALYSIS PROGRAM Anne v.d.L. Gardner, Stanford

3:50-4:10 THE ADVANTAGES OF ABSTRACT CONTROL KNOWLEDGE IN EXPERT SYSTEM DESIGN
          William J. Clancey, Stanford 

4:10-4:30 THE GIST BEHAVIOR EXPLAINER William Swartout, USC/ISI

4:30-4:50 A COMPARATIVE STUDY OF CONTROL STRATEGIES FOR EXPERT SYSTEMS: AGE 
          IMPLEMENTATION OF THE THREE VARIATIONS OF PUFF 
          Nelleke Aiello, Stanford 

4:50-5:10 A RULE-BASED APPROACH TO INFORMATION RETRIEVAL: SOME RESULTS AND 
          COMMENTS Richard Tong, Daniel Shapiro, Brian McCune & Jeffrey Dean,
          Advanced Information & Decision Systems

5:10-5:30 EXPERT SYSTEM CONSULTATION CONTROL STRATEGY James Slagle and Michael
          Gaynor, Naval Research Laboratory

in the INTERNATIONAL BALLROOM CENTER

7:00 P.M. AAAI EXECUTIVE COMMITTEE MEETING 

THURSDAY, AUGUST 25
←←←←←←←←←←←←←←←←←←←

8:00-5:00 TECHNICAL CONFERENCE REGISTRATION in the CONCOURSE LOBBY

KNOWLEDGE REPRESENTATION AND PROBLEM SOLVING SESSION II
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

9:00-9:20 THE DENOTATIONAL SEMANTICS OF HORN CLAUSES AS A PRODUCTION SYSTEM
          J-L. Lassez and M. Maher, University of Melbourne

9:20-9:40 THEORY RESOLUTION: BUILDING IN NONEQUATIONAL THEORIES
          Mark Stickel, SRI International

9:40-10:00 IMPROVING THE EXPRESSIVENESS OF MANY SORTED LOGIC 
           Anthony Cohn, University of Warwick

10:00-10:20 THE BAYESIAN BASIS OF COMMON SENSE MEDICAL DIAGNOSIS 
            Eugene Charniak, Brown

10:20-10:40 ANALYZING THE ROLES OF DESCRIPTIONS AND ACTIONS IN OPEN SYSTEMS
            Carl Hewitt and Peter DeJong, MIT

in the INTERNATIONAL BALLROOM CENTER

NATURAL LANGUAGE SESSION II
←←←←←←←←←←←←←←←←←←←←←←←←←←←

9:00-9:20 PHONOTACTIC AND LEXICAL CONSTRAINTS IN SPEECH RECOGNITION 
          Daniel P. Huttenlocher and Victor W. Zue, MIT

9:20-9:40 DETERMINISTIC AND BOTTOM-UP PARSING IN PROLOG 
          Edward Stabler, Jr., University of Western Ontario

9:40-10:00 MCHART: A FLEXIBLE, MODULAR CHART PARSING SYSTEM 
           Henry Thompson, Edinburgh

10:00-10:20 INFERENCE-DRIVEN SEMANTIC ANALYSIS Martha Stone Palmer, Penn & SDC

10:20-10:40 MAPPING BETWEEN SEMANTIC REPRESENTATIONS USING HORN CLAUSES
	    Ralph M. Weischedel, Delaware

in the INTERNATIONAL BALLROOM WEST

SEARCH SESSION I
←←←←←←←←←←←←←←←←

9:00-9:20 A THEORY OF GAME TREES Chun-Hung Tzeng, Paul Purdom, Jr., Indiana

9:20-9:40 OPTIMALITY OF A* REVISITED  Rina Dechter and Judea Pearl, UCLA

9:40-10:00 SOLVING THE GENERAL CONSISTENT LABELING (OR CONSTRAINT SATISFACTION)
           PROBLEM: TWO ALGORITHMS AND THEIR EXPECTED COMPLEXITIES
           Bernard Nudel, Rutgers

10:00-10:20 THE COMPOSITE DECISION PROCESS: A UNIFYING FORMULATION FOR 
            HEURISTIC SEARCH, DYNAMIC PROGRAMMING AND BRANCH & BOUND PROCEDURES
            Vipin Kumar, Texas & Laveen Kanal, Maryland

10:20-10:40 NON-MINIMAX SEARCH STRATEGIES FOR USE AGAINST FALLIBLE OPPONENTS
            Andrew Louis Reibman and Bruce Ballard, Duke 

in the INTERNATIONAL BALLROOM EAST

10:40-11:00 BREAK

11:00-12:30 AAAI PRESIDENTIAL ADDRESS Nils Nilsson, SRI International
            ANNOUNCEMENT OF THE PUBLISHER'S PRIZE
            AAAI COMMENDATION FOR EXCELLENCE to MARVIN DENICOFF, Office of 
            Naval Research

in the INTERNATIONAL BALLROOM CENTER

12:30-2:00 LUNCH BREAK
           ANNUAL AAAI BUSINESS MEETING in the INTERNATIONAL BALLROOM CENTER

2:00-3:10 THE GREAT DEBATE: METHODOLOGIES FOR AI RESEARCH 
          John McCarthy, Stanford vs. Roger Schank, Yale 
				 	
in the INTERNATIONAL BALLROOM CENTER


3:10-3:30 BREAK

KNOWLEDGE REPRESENTATION AND PROBLEM SOLVING SESSION III
-------------------------------------------------------

3:30-3:50 PROVING THE CORRECTNESS OF DIGITAL HARDWARE DESIGNS
          Harry G. Barrow, Fairchild AI Laboratory

3:50-4:10 A CHESS PROGRAM THAT CHUNKS Murray Campbell & Hans Berliner, CMU

4:10-4:30 THE DECOMPOSITION OF A LARGE DOMAIN: REASONING ABOUT MACHINES
          Craig Stanfill, Maryland

4:30-4:50 REASONING ABOUT STATE FROM CAUSATION AND TIME IN A MEDICAL DOMAIN
          William Long, MIT

4:50-5:10 THE USE OF QUALITATIVE AND QUANTITATIVE SIMULATIONS Reid Simmons, MIT

5:10-5:30 AN AUTOMATIC ALGORITHM DESIGNER: AN INITIAL IMPLEMENTATION
          Elaine Kant and Allen Newell, CMU

in the INTERNATIONAL BALLROOM EAST

LEARNING SESSION II
←←←←←←←←←←←←←←←←←←←

3:30-3:50 WHY AM AND EURISKO APPEAR TO WORK
          Douglas Lenat, Stanford, John Seely Brown, Xerox PARC

3:50-4:10 LEARNING PHYSICAL DESCRIPTIONS FROM FUNCTIONAL DEFINITIONS, EXAMPLES,
          AND PRECEDENTS Patrick Winston & Boris Katz, MIT, Thomas Binford & 
          Michael Lowry, Stanford 

4:10-4:30 A PROBLEM-SOLVER FOR MAKING ADVICE OPERATIONAL Jack Mostow, USC/ISI

4:30-4:50 GENERATING HYPOTHESES TO EXPLAIN PREDICTION FAILURES 
          Steven Salzberg, Yale 

4:50-5:10 LEARNING BY RE-EXPRESSING CONCEPTS FOR EFFICIENT RECOGNITION
          Richard Keller, Rutgers 

in the INTERNATIONAL BALLROOM WEST

EXPERT SYSTEMS SESSION II
←←←←←←←←←←←←←←←←←←←←←←←←←

3:30-3:50 DIAGNOSIS VIA CAUSAL REASONING: PATHS OF INTERACTION AND THE 
          LOCALITY PRINCIPLE Randall Davis, MIT

3:50-4:10 A NEW INFERENCE METHOD FOR FRAME-BASED EXPERT SYSTEMS 
          James Reggia, Dana Nau, Pearl Wang, Maryland

4:10-4:30 ANALYSIS OF PHYSIOLOGICAL BEHAVIOR USING A CAUSAL MODEL BASED ON 
          FIRST PRINCIPLES John C. Kunz, Stanford 

4:30-4:50 AN INTELLIGENT AID FOR CIRCUIT REDESIGN Tom Mitchell, Louis 
          Steinberg, Smadar Kedar-Cabelli, Van Kelly, Jeffrey Shulman, 
          Timothy Weinrich, Rutgers 

4:50-5:10 TALIB: AN IC LAYOUT DESIGN ASSISTANT Jin Kim and John McDermott, CMU

in the INTERNATIONAL BALLROOM CENTER

FRIDAY, AUGUST 26
←←←←←←←←←←←←←←←←←

KNOWLEDGE REPRESENTATION & PROBLEM SOLVING SESSION IV
------------------------------------------------------

9:00-9:20 ON INHERITANCE HIERARCHIES WITH EXCEPTIONS David W. Etherington, 
          University of British Columbia, Raymond Reiter, UBC and Rutgers

9:20-9:40 DEFAULT REASONING AS LIKELIHOOD REASONING Elaine Rich, Texas

9:40-10:00 DEFAULT REASONING USING MONOTONIC LOGIC: A MODEST PROPOSAL
           Jane Terry Nutter, Tulane 

10:00-10:20 A THEOREM-PROVER FOR A DECIDABLE SUBSET OF DEFAULT LOGIC
            Philippe Besnard, Rene Quiniou, & Patrice Quinton, IRISA-INRIA Rennes

10:20-10:40 DERIVATIONAL ANALOGY AND ITS ROLE IN PROBLEM SOLVING
            Jaime Carbonell, CMU

in the INTERNATIONAL BALLROOM CENTER

COGNITIVE MODELLING SESSION II
------------------------------

9:00-9:20 STRATEGIST: A PROGRAM THAT MODELS STRATEGY-DRIVEN AND CONTENT-DRIVEN
          INFERENCE BEHAVIOR Richard Granger, Jennifer Holbrook, and
          Kurt Eiselt, California-Irvine

9:20-9:40 LEARNING OPERATOR SEMANTICS BY ANALOGY
Sarah Douglas, Stanford & Xerox PARC, Thomas Moran, Xerox PARC

9:40-10:00 AN ANALYSIS OF A WELFARE ELIGIBILITY DETERMINATION INTERVIEW: 
           A PLANNING APPROACH  Eswaran Subrahmanian, CMU

in the INTERNATIONAL BALLROOM EAST

VISION AND ROBOTICS SESSION II
------------------------------

9:00-9:20 PERCEPTUAL ORGANIZATION AS A BASIS FOR VISUAL RECOGNITION
          David Lowe and Thomas Binford, Stanford

9:20-9:40 MODEL BASED INTERPRETATION OF RANGE IMAGERY
          Darwin Kuan and Robert Drazovich, AI&DS		

9:40-10:00 A DESIGN METHOD FOR RELAXATION LABELING APPLICATIONS
           Robert Hummel, Courant Institute, NYU

10:00-10:20 APPROPRIATE LENGTHS BETWEEN PHALANGES OF MULTI JOINTED FINGERS FOR
            STABLE GRASPING Tokuji Okada and Takeo Kanade, CMU

10:20-10:40 FIND-PATH FOR A PUMA-CLASS ROBOT Rodney Brooks, MIT

in the INTERNATIONAL BALLROOM WEST

10:40-11:00 BREAK

11:00-12:30 PANEL: ADVANCED HARDWARE ARCHITECTURES FOR ARTIFICIAL INTELLIGENCE
	    Allen Newell, Organizer, CMU

in the INTERNATIONAL BALLROOM 

12:30-2:00 LUNCH BREAK
           AAAI SUBGROUP: AI IN MEDICINE MEMBERSHIP MEETING in HEMISPHERE ROOM

2:00-3:10 INVITED LECTURE - THE STATE OF THE ART IN ROBOTICS Michael Brady, MIT

in the INTERNATIONAL BALLROOM

3:10-3:30 BREAK

SEARCH SESSION II 
-----------------

3:30-3:50 INTELLIGENT CONTROL USING INTEGRITY CONSTRAINTS 
          Madhur Kohli and Jack Minker, Maryland

3:50-4:10 PREDICTING THE PERFORMANCE OF DISTRIBUTED KNOWLEDGE-BASED SYSTEMS:
          MODELLING APPROACH Jasmina Pavlin, UMASS

in the INTERNATIONAL BALLROOM EAST

LEARNING SESSION III 
--------------------

3:30-3:50 LEARNING: THE CONSTRUCTION OF A POSTERIORI KNOWLEDGE STRUCTURES
          Paul Scott, University of Michigan

3:50-4:10 A DOUBLY LAYERED, GENETIC PENETRANCE LEARNING SYSTEM
          Larry Rendell, University of Guelph

4:10-4:30 AN ANALYSIS OF GENETIC-BASED PATTERN TRACKING AND COGNITIVE-BASED 
          COMPONENT TRACKING MODELS OF ADAPTATION
          Elaine Pettit and Kathleen Swigger, North Texas State University

in the INTERNATIONAL BALLROOM CENTER

SUPPORT HARDWARE AND SOFTWARE SESSION
-------------------------------------

3:30-3:50 MASSIVELY PARALLEL ARCHITECTURES FOR AI: NETL, THISTLE, AND BOLTZMANN
          MACHINES Scott Fahlman, Geoffrey Hinton, CMU, Terrence Sejnowski, JHU

3:50-4:10 YAPS: A PRODUCTION RULE SYSTEM MEETS OBJECTS 
          Elizabeth Allen, Maryland

4:10-4:30 SPECIFICATION-BASED COMPUTING ENVIRONMENTS Robert Balzer, David Dyer,
          Mathew Morgenstern, and Robert Neches, USC/ISI

in the INTERNATIONAL BALLROOM WEST

------------------------------

End of AIList Digest
********************

∂25-Jul-83  2359	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #26
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Jul 83  23:58:54 PDT
Date: Monday, July 25, 1983 10:15PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #26
To: AIList@SRI-AI


AIList Digest            Tuesday, 26 Jul 1983      Volume 1 : Issue 26

Today's Topics:
  AAAI-83 Schedule on USENet
  Roommates Wanted for AAAI
  Artificial Intelligence Info for kids
  Preparing Govmt Report on Canadian AI Research
  Definitions (2)
  Expectations of Expert System Technology
  Portable and More Efficient Lisps (3)
----------------------------------------------------------------------

Date: 24 Jul 83 20:20:09-PDT (Sun)
From: decvax!linus!utzoo!utcsrgv!peterr @ Ucb-Vax
Subject: AAAI-83 sched. avail. on USENet
Article-I.D.: utcsrgv.1828

I have a somewhat compressed, but still large (18052 ch.), on-line
version of the AAAI-83 schedule that I'm willing to mail to USENet
people on request.
   peter rowley, U. Toronto CSRG
   {cornell,watmath,ihnp4,floyd,allegra,utzoo,uw-beaver}!utcsrgv!peterr
 or {cwruecmp,duke,linus,lsuc,research}!utzoo!utcsrgv!peterr

------------------------------

Date: 22 Jul 83 10:34:11-PDT (Fri)
From: decvax!linus!utzoo!hcr!ravi @ Ucb-Vax
Subject: Room-mates wanted for AAAI
Article-I.D.: hcr.451

A friend (Mike Rutenberg) and I are going to AAAI at the end of
August.  We'd like to find a couple of people to share a room with --
both to meet interesting people and to save some money.  If you're
interested, please let me know by mail.

Also, if you have any other useful hints (like cheap transportation
from Ontario or better places to stay than the Hilton), please drop me
a line.

Thanks for your help.
        --ravi

------------------------------

Date: 24 Jul 1983 0727-CDT
From: Clive Dawson <CC.Clive@UTEXAS-20>
Subject: Artificial Intelligence Info for kids

               [Reprinted from the UTexas-20 BBoard.]

I received a letter from an 8th grader in Houston who wants to do a
science fair project on Artificial Intelligence.

        "...I plan to explain and demonstrate this topic with
         my computer and a program I made on it concerning this
         topic.  Any information you could send for my research
         would be appreciated."

If anybody knows of any source of AI information suitable for Jr. High
School level (good magazine articles written for the layman, etc.)
please let me know.  I have come across such stuff every so often, but
I'm having trouble remembering where.

Thanks,

Clive

------------------------------

Date: 23 Jul 83 16:30:27-PDT (Sat)
From: decvax!linus!utzoo!utcsrgv!zenon @ Ucb-Vax
Subject: Preparing Govmt report on Canadian AI research
Article-I.D.: utcsrgv.1823

A consortium of 4 groups has been awarded a contract by the Secretary
of State to prepare a report on what Canada ought to be doing to
support R & D in artificial intelligence in the next 5-10 years.  The
groups are Quasar Systems of Ottawa, Nordicity Group of Toronto,
Socioscope of Ottawa, and a group of academic AI people (Pylyshyn,
Mackworth, Skuce, Kittredge, Isabel, with consultants Tsotso,
Mylopoulos, Zucker, Cercone).  Because the client's primary interest
is in language (esp. translation) the report will concentrate on that
aspect, though we plan to cover all of AI on the grounds that it's all
of a piece.  The contract period is July-Dec 1983.  I am coordinating
the technical part of the report.

We are seeking input from all interested parties.  I will be touring
Canada, probably in September, and would like to talk to anyone who
has an AI lab and some ideas about where Canada ought to focus.  I am
especially eager to receive input from, and information about,
what's happening in Canadian industry.

I welcome all suggestions and invitations.  This is the first AI study
commissioned by a federal agency, and we should take it as an
opportunity to give them a good cross-section of views.

Zenon Pylyshyn, Centre for Cognitive Science, University of Western
Ontario, London, Ontario, N6A 5C2.  (519)-679-2461

utcsrgv!zenon or on the ARPANET Pylyshyn@CMU-CS-C

------------------------------

Date: Fri 22 Jul 83 09:32:16-EDT
From: MASON@CMU-CS-C.ARPA
Subject: Re: definition of robot

I think the definition of robot is a little too broad.  I've long been
reconciled to definitions which include, for instance,
cam-programmable sewing machines, but this new definition even
includes pistols.  (An input signal, trigger pressure, is processed
mechanically to actuate a mechanical device, the bullet.)  Of course,
if the NRA decided to lobby for robots ...

------------------------------

Date: Fri 22 Jul 83 09:22:54-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Definitions

Here are a few definitions taken from a Teknowledge/CEI ad:

  Artificial Intelligence
    That subfield of Computer Science which is concerned with
    symbolic reasoning and problem solving by computer.

  Knowledge Engineering
    The engineering discipline whereby knowledge is integrated
    into computer systems in order to solve complex problems
    normally required [sic] in a high level of human expertise.

  Knowledge/Expert Systems
    Computer systems that embody knowledge including inexact,
    heuristic and subjective knowledge; the results of knowledge
    engineering.

  Knowledge Representation
    A formalism for representing facts and rules about a subject
    or specialty.

  Knowledge Base
    A base of information encoded in a knowledge representation
    for a particular application.

  Inference Technique
    A methodology for reasoning about information in knowledge
    representation [sic] and drawing conclusions from that knowledge.

  Task Domains
    Application areas for knowledge systems such as analysis of
    oil well drilling problems or identification of computer
    system failures.

  Heuristics
    The informal, judgmental knowledge of an application area
    that constitutes the ``rules of good judgement'' in the field.
    Heuristics also encompass the knowledge of how to solve problems
    efficiently and effectively, how to plan steps in solving
    a complex problem, how to improve performance, and so forth.

  Production Rules
    A widely-used knowledge representation in which knowledge
    is formalized into ``rules'' containing an ``IF'' part and
    a ``THEN'' part (also called a condition and an action).
    The knowledge represented by the production rule is applicable
    to a line of reasoning if the IF part of the rule is satisfied:
    consequently, the THEN part can be concluded or its
    problem-solving action taken.

                                        -- Ken Laws

------------------------------

Date: 24 Jul 83 1:41:35-PDT (Sun)
From: decvax!linus!utzoo!utcsrgv!peterr @ Ucb-Vax
Subject: Expectations of expert system technology
Article-I.D.: utcsrgv.1824

From a recent headhunting flyer sent to some AAAI members:

"We have been retained by a major Financial Institution, located in
New York City.  They are interested in building the support staff for
their money market traders and are looking for qualified candidates
for the following positions:

    A Senior AI Researcher who has experience in knowledge rep'n
    and expert systems.  The ideal candidate would have a
    graduate degree in CS - AI with a Psychology (particularly
    cognitive processes), Cultural Anthropology, or comparable
    background.  This person will start by being a consultant in
    Human Factors and would interact between the Traders and the
    Systems they use.  Two new Xerox 1100 computers have been
    purchased and experience in LISP programming is necessary
    (with INTERLISP-D preferred).  This person will have their
    own personal LISP machine.  The goal of this position will
    be to analyze how Traders think and to build trading support
    (expert) systems geared to the individual Trader's style."

Two other job descriptions are given for the same project, for an
economist and an MBA with CS (database, communications, and systems)
and Operations Research background.

The fact that the co. would buy the 1100's without consulting their
future user, and the tone of the description, prompt me to wonder
whether the co. is treating expert system technology as an engineering
discipline that can produce results in relatively short order rather
than the experimental field it appears to be.  Particularly troubling
is the problem domain for this system--I would expect such traders
to make extensive use of knowledge about politics and economic policy
on a number of levels, not easy knowledge to represent.

I'm not an expert systems builder by any means and may be
underestimating the technology...  does anyone think this co. is not
expecting too much?  (Replies to the net, please)

[The company should definitely get copies of

  J.L. Stansfield, COMEX: A Support System for a Commodities Analyst,
  MIT AIM-423, July 1977.

  J.L. Stansfield, Conclusions from the Commodity Expert Project,
  MIT AIM-601, (AD-A097-854), Nov. 1980.

The latter, I hear, documents the author's experience with large,
incomplete databases of unreliable facts about a complex world.
It must be one of the few examples of an academic research project
that could not claim success.  -- KIL]

------------------------------

Date: Mon 25 Jul 83 02:45:51-EDT
From: Chip Maguire <Maguire@COLUMBIA-20.ARPA>
Subject: Re: Portable and More Efficient Lisps

        What I wish to generate is a discussion of which features of
the various Lisps provide a nice/efficient/(other standard virtues)
environment for computer aided intellectual tasks (such as AI, CAD,
etc.).
        For example, quite a lot of the work that I have been involved
with recently has required that, from within a LISP environment, I
generate line drawings to represent data structures, binding
environments for a multi-processor simulator, or even a graphical
syntax for programming.  Thus, I would like to have 1) reasonable
support (in terms of packages of routines) for textual labels and line
drawings; and 2) this same package available irrespective of which
machine I happen to be using at the time [within the limits of the
hardware available].

        What other examples of common utilities are emerging as
"expected" `primitives'?  Chip

------------------------------

Date: Sat, 23 Jul 83 15:58:24 EDT
From: Stephen Slade <Slade@YALE.ARPA>
Subject: Portable and More Efficient Lisps

Chip Maguire took violent exception to the claim that T, a version of 
Scheme implemented at Yale, is "more efficient and portable" compared
to other Lisp implementations.  He then listed the numerous machines
on which PSL, developed at Utah, now runs.

The problem in this case is one of operator precedence:  "more" has
higher precedence than "and".  Thus, T is both portable AND more
efficient.  These two features are intertwined in the language design
and implementation through the use of lexical scoping and an
optimizing compiler which performs numerous source-to-source
optimizations.  Many of the compiler operations that depend on the
specific target machine are table driven.  For example, the register
allocation scheme clearly depends on the number and type of registers
available.  The actual code generator is certainly machine dependent,
but does not comprise a large portion of the compiler.  The compiler
is written largely in T, simplifying the task of porting the compiler
itself.

For PSL, portability was a major implementation goal.  For T,
portability became a byproduct of the language and compiler design.  A
central goal of T has been to provide a clean, elegant, and efficient
LISP.  The T implementers strove to achieve compatibility not only
among different machines, but also between the interpreted and
compiled code -- often a source of problems in other Lisps.  So far, T
has been implemented for the M68000 (Apollo/Domain), VAX/UNIX, and
VAX/VMS.  There are plans for other machine implementations, as well
as enhancements of the elegance and efficiency of the language itself.

People at Yale have been using T for the past several years now.  
Applications have included an extensible text editor with inductive 
inference capability (editing by example), a hierarchical digital
circuit graphics editor and simulator, and numerous large AI programs.
T is also being used in a great many undergraduate courses both at
Yale and elsewhere.

I believe that PSL and Standard LISP have been very worthwhile
endeavors and have bestowed the salutary light of LISP on many
machines that had theretofore languished in the lispless darkness of
algebraic languages.  T, though virtuous in design and virtual in
implementation, does not address the FORTRAN-heathen, but rather seeks
to uplift the converted and provide comfort to those true-believers
who know, in their heart of hearts, that LISP can embrace both
elegance and efficiency.  Should this credo also facilitate
portability, well, praise the Lord.

------------------------------

Date: Mon, 25 Jul 83 11:41:50 EDT
From: Nathaniel Mishkin <Mishkin@YALE.ARPA>
Subject: Re: Lisp Portability

    Date: Tue 19 Jul 83 15:24:00-EDT
    From: Chip Maguire <Maguire@COLUMBIA-20.ARPA>
    Subject: Lisp Portability

    [...]

    So lets hear more about the good ideas in T and fewer nebulous
    comments like:  "more efficient and portable".

I can give my experience working on a display text editor, U, written
in T. (U's original author is Bob Nix.)  U is 10000+ lines of T code.
Notable U features are a "do what I did" editing by example system, an
"infinite" undo facility, and a Laurel (or Babyl) -like mail system.
U runs well on the Apollo and almost well on VAX/VMS. U runs on
VAX/Unix as well as can be expected for a week's worth of work.
Porting U went well:  the bulk of U did not have to be changed.

- - - - -

Notable features of T:

    - T, like Scheme (from which T is derived) supports closures (procedures
      are first-class data objects).  Closures are implemented efficiently
      enough so that they are used pervasively in the implementation of the
      T system itself.

    - Variables are lexically scoped; variables from enclosing scopes
      can be accessed from within closures.

    - T supports an object-oriented programming style that does not conflict
      with the functional nature of Lisp. Operations (like Smalltalk messages)
      can be treated as functions; e.g. they can be used with the MAP
      functions.

    - Compiled and interpreted T behave identically.

    - T has fully-integrated support for multiple namespaces so software
      written by different people can be combined without worrying about
      name conflicts.

    - The T implementors (Jonathan Rees and Norman Adams) have not felt
      constrained to hold on to some of the less modern aspects of older
      Lisps (e.g. hunks and irrational function names).

    - T is less of a bag of bits than other Lisps. T has a language definition
      and a philosophy.  One feels that one understands all of T after reading
      the manual.  The T implementors have resisted adding arbitrary features
      that do not fit with the philosophy.

    - Other features:  inline procedure expansion, procedures accept arbitrary
      numbers of parameters ("lexpr's" or "&rest-args"), interrupt processing.

All these aspects of T have proved to be very useful.
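The first two features above (first-class closures and lexical scoping) can be sketched as follows.  This is an illustration in a modern language, not T itself (T is a Scheme dialect; the syntax here is not T's):

```python
# Illustrative sketch of lexically scoped, first-class closures, as
# described for T above.  The inner procedure captures `count` from its
# enclosing scope, and the closure itself is an ordinary value that can
# be returned and passed around.

def make_counter():
    count = 0
    def counter():
        nonlocal count          # refer to the enclosing scope's binding
        count += 1
        return count
    return counter              # the closure is the return value

c1 = make_counter()
c2 = make_counter()             # each call creates an independent environment
print(c1(), c1(), c2())         # -> 1 2 1
```

Because each call to `make_counter` builds a fresh environment, the two counters do not interfere; this is the property that lets closures be "used pervasively" in a system implementation.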

- - - - -

    The predecessor system "Standard LISP" along with the REDUCE
    symbolic algebra system ran on the following machines (as of
    October 1979):  Amdahl:  470V/6; CDC: 640, 6600, 7600, Cyber 76;
    Burroughs:  B6700, B7700; DEC: PDP-10, DECsystem-10, DECsystem-20;
    CEMA: ES 1040; Fujitsu:  FACOM M-190; Hitachi:  MITAC M-160, M-180;
    Honeywell:  66/60; Honeywell-Bull:  1642; IBM: 360/44, 360/67,
    360/75, 360/91, 370/155, 370/158, 370/165, 370/168, 3033, 370/195;
    ITEL: AS-6; Siemens:  4004; Telefunken:  TR 440; and UNIVAC: 1108,
    1110.

Hmm. Was the 370/168 implementation significantly different from the
370/158 implementation?  Also, aren't some of those Japanese machines
"360s"?  When listing implementations, let's do it in terms of
architectures and operating systems.

While it may be the case that PSL is more portable than T, T does
presently run on the Apollo, VAX/VMS and VAX/Unix. Implementations for
other architectures are being considered.

------------------------------

End of AIList Digest
********************

∂28-Jul-83  0912	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #27
Received: from SRI-AI by SU-AI with TCP/SMTP; 28 Jul 83  09:11:29 PDT
Date: Wednesday, July 27, 1983 4:21PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #27
To: AIList@SRI-AI


AIList Digest           Thursday, 28 Jul 1983      Volume 1 : Issue 27

Today's Topics:
  Multiple producers in a production system
  PROLONG
  HFELISP
  Getting Started in AI
  Lisp Translation
  Re: Expectations of Expert System Technology
  The Fifth Generation Computer Project
  The Military and AI
  AI Koans
  HP Computer Colloquium 7/28
----------------------------------------------------------------------

Date: 26 Jul 1983 0937-PDT
From: Jay <JAY@USC-ECLC>
Subject: Multiple producers in a production system

(speculation/question)

Has anyone heard of multiple "producers" in production systems?  What
I mean is:  should the STM contain (a b c) and there is a rule (a b)
-> (d) and another (b c) -> (e), would it be useful to somehow do BOTH
productions?  The PS could become two PS's, one with (d c) and another
with (e a) in STM.  This sort of PS could be useful in fuzzy areas
of knowledge where the same implicants could (due to lack of other
implicants, or due to lack of understanding) imply more than one
result.

j'
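The forking behavior Jay describes can be sketched as follows.  This is a minimal illustration only; the rule format and all symbols are invented for the example, and a real production system would need conflict-resolution and matching machinery far beyond this:

```python
# Sketch of "multiple producers": when several rules match the current
# short-term memory (STM), fork one successor STM per matching rule
# instead of choosing a single rule to fire.

def fire(stm, rule):
    """Apply one production: remove its condition atoms, add its results."""
    condition, result = rule
    remaining = [x for x in stm if x not in condition]
    return tuple(remaining + list(result))

def fork(stm, rules):
    """Return one successor STM for every rule whose condition matches."""
    successors = []
    for rule in rules:
        condition, _ = rule
        if all(atom in stm for atom in condition):
            successors.append(fire(stm, rule))
    return successors

rules = [(("a", "b"), ("d",)),   # (a b) -> (d)
         (("b", "c"), ("e",))]   # (b c) -> (e)

print(fork(("a", "b", "c"), rules))   # -> [('c', 'd'), ('a', 'e')]
```

Starting from (a b c), both rules match, so the system splits into the two memories Jay mentions: (d c) and (e a).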

------------------------------

Date: Tue 26 Jul 83 23:14:06-PDT
From: WALLACE <N.WALLACE@SU-SCORE.ARPA>
Subject: PROLONG

        PROLONG:  A VERY SLOW LOGIC PROGRAMMING LANGUAGE

                          ABSTRACT

PROLONG was developed at the University of Heiroglyphia over a 22-year
period.  PROLONG is an implementation of a very well-known technique
for deciding whether a given well-formed formula F of first-order
logic is a theorem.  We first type in the axioms A of our system.
Then PROLONG applies the rules of inference successively to the axioms
A and the subsequent theorems we derive from A.  A matching routine
determines whether F is identical to one of these theorems.  If the
algorithm stops, we know that F is a theorem.  If it never stops, we
know that F is not.

------------------------------

Date: 27 Jul 1983 0942-PDT
From: Jay <JAY@USC-ECLC>
Subject: HFELISP


        HFELISP (Heffer Lisp) HUMAN FACTOR ENGINEERED LISP

                                ABSTRACT

  HFE suggests that the more complicated features of (common) Lisp are
dangerous and hard to understand.  As a result, a number of Fortran,
Cobol, and 370 assembler programmers got together with a housewife.
They pared Lisp down to what we believe to be a much simpler and more
understandable system.  The system includes only the primitives CONS,
READ, and PRINT.  However, CONS was restricted to take only an atom for
the first argument and a one-level list for the second.  Since all
lists are one-level, they also did away with parentheses.  All the
primitives were coded in ADA, and this new Lisp is being considered as
the DOD's AI language.

j'

------------------------------

Date: 22 Jul 83 15:39:24-PDT (Fri)
From: harpo!floyd!cmcl2!rocky2!flipkin @ Ucb-Vax
Subject: Getting Started in AI
Article-I.D.: rocky2.103

Can someone point me to a good place to begin with AI? I find the
subject fascinating (as does my EECS girlfriend), and I would
appreciate some help getting started. Thanks in advance,
                Dennis Moore

(reply via mail please, unless you think it is of great interest
to the net)

[I think it is of great interest!  I recommend the AI Handbook for a
general overview.  I am still looking for a good intro to Lisp and the
programming conventions needed to produce interesting Lisp programs.
(Winston and Horn is a reasonable introduction, and Charniak,
Riesbeck, and McDermott has a lot of good material.  The Little Lisper
is a good introduction to recursive programming if you can stand the
"programmed text" question-and-answer presentation.)
-- KIL]

------------------------------

Date: 26 Jul 1983 0833-PDT
From: FC01@USC-ECL
Subject: Lisp Translation

        This Lisp debate seems to be turning into a free-for-all.
Slanderous remarks are unnecessary.  The fact is that once you get used
to something, the momentum of keeping with it is often more powerful
than any advantages attainable by changing from it.  Perhaps functions
like transor from Interlisp could be extended by some of the AI
researchers to provide real translations from Lisp to Lisp.  This way,
you could develop your programs in the Lisp of your choice and run
them in the most efficient Lisp available on any given machine.  With
all the work that has been done on human translations and the extreme
complexity thereof, it would seem a practical and only extremely
ambitious (as opposed to downright unrealistic) project to develop a
translator between Lisps.  Think of it like translating between a New
Yorker and a Bostonian and a Texan, all talking breeds of English.  If
the energy spent on developing new Lisps and arguing about their
superiorities were spent in the Lisp translation area, we might have
it done by now.
                        Fred
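At its very simplest, a dialect-to-dialect translator of the kind Fred proposes walks s-expressions and rewrites dialect-specific operator names.  A toy sketch follows; the name table is invented for illustration, and a real translator (such as Interlisp's transor, mentioned above) must also handle argument-order changes, macros, and features with no counterpart in the target dialect:

```python
# Toy sketch of Lisp-to-Lisp translation: recursively walk an
# s-expression (represented as nested Python lists) and rename
# dialect-specific operator symbols.  The table below is a hypothetical
# example, not an actual dialect mapping.

SOURCE_TO_TARGET = {
    "greaterp": ">",
    "lessp": "<",
    "plus": "+",
}

def translate(form, table=SOURCE_TO_TARGET):
    """Rewrite known operator symbols, leaving everything else intact."""
    if isinstance(form, list):
        return [translate(sub, table) for sub in form]
    return table.get(form, form)    # rename known symbols, pass others through

# (cond ((greaterp x y) (plus x 1))) becomes (cond ((> x y) (+ x 1)))
print(translate(["cond", [["greaterp", "x", "y"], ["plus", "x", 1]]]))
```

The hard part, as Fred implies, is everywhere this sketch stops: constructs whose semantics, not just names, differ between dialects.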

------------------------------

Date: 25 Jul 83 18:11:37-PDT (Mon)
From: decvax!microsoft!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: Expectations of expert system technology
Article-I.D.: ssc-vax.345

Expert systems technology is an experimental field whose basic
concepts have been fairly well established in the past few years.
Since it is really an engineering field (knowledge engineering), much
of the important research is carried on by attempting to develop a
specific application and seeing what sorts of problems and solutions
crop up.  This is true for MYCIN, R1, PROSPECTOR, and many other
expert systems.  Our Expert Systems Technology group at Boeing has
been developing a prototype flight route planner.  It has provided a
good test bed for more theoretical work on the kinds of tools and
capabilities needed for knowledge engineering (although as a planner,
it may never be fully functional).  Our application is sufficiently
difficult that it is quite experimental; however, a simple expert
system is not particularly difficult to put together if some of the
existing and available tools are used.  Needless to say, many sweeping
generalizations and unjustified assumptions (read: gross hacks) must 
be made, in order to simplify the problem to a point where an expert 
system can be built.  The resulting expert system, although perhaps
not much more capable than a good C program, will be much smaller and 
more transparent in structure than any ordinary program.

The ad in question may or may not be reasonable.  I don't know enough 
about finance to say whether the knowledge in that domain can be 
easily encoded.  However, if the company's expectations are not too
high, they may end up with a reasonable tool, one that will be just as
good as if some C wizard had spent a year of sleepless nights 
reinventing the AI wheels.

Stan ("the Leprechaun Hacker") Shebs
Boeing Aerospace Co.
ssc-vax!sts (soon utah-cs)

------------------------------

Date: 26 Jul 83 10:50:26-PDT (Tue)
From: decvax!linus!utzoo!hcr!ravi @ Ucb-Vax
Subject: The Fifth Generation Computer Project
Article-I.D.: hcr.455

Has anyone out there had any contact with the Japanese Institute for
New Generation Computer Technology (which is running the Fifth
Generation Computer Project)?  Since the first rush of publicity
when the project was initiated, things have been fairly quiet (except
for the somewhat superficial book by Feigenbaum and a few papers in
symposia), and it's a bit hard to find out just how the project is
progressing.  I am especially interested in talking to people who have
visited INGCT recently and have met with the people directly involved
in the project.  Thanks!
        --ravi

        {linus, floyd, allegra, ihnp4} ! utzoo ! hcr ! hcrvax ! ravi 
OR
        decvax ! hcr ! hcrvax ! ravi

------------------------------

Date: Wed, 27 Jul 83 08:42 EDT
From: MJackson.Wbst@PARC-MAXC.ARPA
Subject: The military and AI

Food for thought:


  Date: 26 Jul 83 12:05:02 PDT (Tuesday)
  From: McCullough.PA
  Subject: The military and AI
  To: antiwar↑ 

  From "The Race to Build a Supercomputer" in Newsweek, July 4, 1983...

  [Robert Kahn, mentioned below, is DARPA's computer director]


'Once they are in place, these technologies will make possible an
astonishing new breed of weapons and military hardware.  Smart robot
weapons--drone aircraft, unmanned submarines and land vehicles--that
combine artificial intelligence and high-powered computing can be
sent off to do jobs that now involve human risk.  "This is a sexy area
to the military, because you can imagine all kinds of neat,
interesting things you could send off on their own little missions
around the world or even in local combat," says Kahn.  The Pentagon
will also use the technologies to create artificial-intelligence
machines that can be used as battlefield advisers and superintelligent
computers to coordinate complex weapons systems.  An intelligent
missile-guidance system would have to bring together different
technologies--real-time signal processing, numerical calculations and
symbolic processing, all at unimaginably high speeds--in order to make
decisions and give advice to human commanders.'

------------------------------

Date: 24 Jul 1983 16:21-PDT
From: greiner@Diablo
Subject: AI Koans

[This has appeared on several BBoards thanks to Gabriel Robins, Rich
Welty, Drew McDermott, Margot Flowers, and no doubt others.  I have
no idea what it is about, but pass it on for your doubtful
enlightenment.  -- KIL]


AI Koans: (by Danny)

  A novice was trying to fix a broken lisp machine by turning the
power off and on.  Knight, seeing what the student was doing spoke
sternly- "You can not fix a machine by just power-cycling it with no
understanding of what is going wrong."
  Knight turned the machine off and on.
  The machine worked.

-       -       -       -       -

One day a student came to Moon and said, "I understand how to make a
better garbage collector.  We must keep a reference count of the
pointers to each cons." Moon patiently told the student the following
story-

  "One day a student came to Moon and said, "I understand how to
  make a better garbage collector...


-       -       -       -       -

  In the days when Sussman was a novice Minsky once came to him as he
sat hacking at the PDP-6.  "What are you doing?", asked Minsky.
  "I am training a randomly wired neural net to play Tic-Tac-Toe."
  "Why is the net wired randomly?", asked Minsky.
  "I do not want it to have any preconceptions of how to play"
  Minsky shut his eyes,
  "Why do you close your eyes?", Sussman asked his teacher.
  "So the room will be empty."
  At that moment, Sussman was enlightened.


-       -       -       -       -

A student, in hopes of understanding the Lambda-nature, came to
Greenblatt.  As they spoke a Multics system hacker walked by.  "Is it
true", asked the student, "that PL-1 has many of the same data types
as Lisp".  Almost before the student had finished his question,
Greenblatt shouted, "FOO!", and hit the student with a stick.


-       -       -       -       -

A disciple of another sect once came to Drescher as he was eating his
morning meal.  "I would like to give you this personality test", said
the outsider,"because I want you to be happy." Drescher took the
paper that was offered him and put it into the toaster- "I wish the
toaster to be happy too".


-       -       -       -       -
(by who?)

A man from AI walked across the mountains to SAIL to see the Master,
Knuth.  When he arrived, the Master was nowhere to be found.

        "Where is the wise one named Knuth?" he asked a passing
student.

        "Ah," said the student, "you have not heard. He has gone on a
pilgrimage across the mountains to the temple of AI to seek out new
disciples."

Hearing this, the man was Enlightened.

-       -       -       -       -


And, of course, my own contribution:


A famous Lisp Hacker noticed an Undergraduate sitting in front of a
Xerox 1108, trying to edit a complex Klone network via a browser.
Wanting to help, the Hacker clicked one of the nodes in the network
with the mouse, and asked "what do you see?"
Very earnestly, the Undergraduate replied "I see a cursor."
The Hacker then quickly pressed the boot toggle at the back of the
keyboard, while simultaneously hitting the Undergraduate over the
head with a thick Interlisp Manual.  The Undergraduate was then
Enlightened.


         - Gabriel [Robins@ISIF]

------------------------------

Date: 26 Jul 83 14:10:41 PDT (Tuesday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium 7/28


                Professor Gio Wiederhold
                Department of Computer Science
                Stanford University

                Knowledge in Databases


We define knowledge-based approaches to database problems.

Using a clarification of application levels from the enterprise to the
system levels, we give examples of the varieties of knowledge which
can be used.  Most of the examples are drawn from work at the KBMS
project at Stanford.

The object of the presentation is to illustrate the power, and also
the high payoff of quite straightforward artificial intelligence 
applications in databases.  Implementation choices will also be 
evaluated.


        Thursday, July 28, 1983 4:00 pm

        5M Conference room
        HP Stanford Park Labs
        1501 Page Mill Rd
        Palo Alto

        *** Be sure to arrive at the building's lobby ON TIME, so that
you may be escorted to the conference room

------------------------------

End of AIList Digest
********************

∂29-Jul-83  1004	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #28
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Jul 83  10:04:18 PDT
Date: Friday, July 29, 1983 9:12AM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #28
To: AIList@SRI-AI


AIList Digest            Friday, 29 Jul 1983       Volume 1 : Issue 28

Today's Topics:
  USENET and AI
  AI and the Military
  The Fifth Generation Computer Project
  Lisp Books, Nondeterminism, Japanese Effort
  Automated LISP Dialect Translation
  Data Flow Computers and PS's
  Repeated Substring Detection
  A.I. in Sci Fi (2)
----------------------------------------------------------------------

Date: 26 Jul 83 11:52:01-PDT (Tue)
From: teklabs!jima @ Ucb-Vax
Subject: USENET and AI
Article-I.D.: teklabs.2247

In response to [a Usenet] query about AI research going on at USENET
sites:

The Tektronix Computer Research Lab now has a Knowledge-Based Systems 
group. We are a <very> new group and are still staffing up.  We're 
looking into circuit trouble shooting as well as numerous other topics
of interest.

Jim Alexander
Usenet: {ucbvax,decvax,pur-ee,ihnss,chico}!teklabs!jima
CSnet:  jima@tek
ARPA:   jima.tek@rand-relay

------------------------------

Date: Wed 27 Jul 83 21:29:44-PDT
From: Ira Kalet <IRA@WASHINGTON.ARPA>
Subject: AI and the military

The possibilities of AI in unmanned weapons systems are wonderful!  
Now we could send all the weapons, and their delivery vehicles to the 
moon (or beyond) where they can fight our war for us without anyone 
getting hurt and no property damage.  That would be progress!  If only
the decision makers valued us humans more than their toys..........

------------------------------

Date: 27 Jul 83 18:38:58 PDT (Wednesday)
From: Hamilton.ES@PARC-MAXC.ARPA
Subject: Re: The Fifth Generation Computer Project

In case some of you are not on every junk mailing list known to man
the way I am, there is a new international English-language journal
with an all-Japanese editorial board called "New Generation
Computing", published by Springer-Verlag, Journal Fulfillment Dept.,
44 Hartz Way, Secaucus, NJ 07094.  The price is even more outrageous
than the stuff published by North Holland:  vol.1 (2 issues) 1983,
$52; vol.2 (4 issues) 1984, $104.

Can anybody explain why so much AI literature (even by US authors) is 
published by foreign publishers at outrageous prices?  I should have 
thought some US university press would get smart and get into the act
in a bigger way.  Lawrence Erlbaum seems to be doing a creditable job
in Cognitive Science, but that's just one corner of AI.

--Bruce

------------------------------

Date: 29 Jul 1983 0838-PDT
From: FC01@USC-ECL
Subject: Re: Lisp Books, Nondeterminism, Japanese Effort

Lots of things to talk about today.  A good Lisp book for the beginner
is The LISP 1.6 Primer.  It really explains what's going down, and even
has exercises with answers.  It is not specific to any particular Lisp
of today (since it is quite old) and therefore gives the general
knowledge necessary to use any Lisp (with a little help from the
manual).

Nondeterministic production systems: Lots of work has been done. The 
fact is that a production system is built under the assumption that 
there is a single global database. The tree version of a production 
system doesn't meet this requirement. On the other hand, there are 
many models of what you speak of.  The Petri-net model treats such 
things nondeterministically by selecting one or the other (assuming 
their results prevent each other from occurring) seemingly at random.
Of course, unless you have a real parallel processor the results you 
get will be deterministic. I refer you to any good book on Petri-nets 
(Peterson is pretty good).  Tree-structured algorithms in general have
this property; therefore any breadth-first search will try to do both
forks of the tree at once.  Other examples of theorem provers doing
this are relatively common (not to mention most multiprocess operating
systems based on forks).
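The conflict-resolution idea Fred describes can be sketched in a few
lines (a toy net of my own devising, not Peterson's formalism): two
enabled transitions that share an input place are in conflict, and the
net fires one of them seemingly at random.

```python
import random

# Toy Petri net: a marking maps places to token counts; a transition
# is (inputs, outputs), each a dict of place -> token count.
def enabled(marking, transition):
    inputs, _ = transition
    return all(marking[p] >= n for p, n in inputs.items())

def fire(marking, transition):
    # Consume tokens from input places, deposit tokens in outputs.
    inputs, outputs = transition
    m = dict(marking)
    for p, n in inputs.items():
        m[p] -= n
    for p, n in outputs.items():
        m[p] = m.get(p, 0) + n
    return m

def step(marking, transitions, rng=random):
    # Transitions sharing an input place conflict: firing one may
    # disable the other.  Choose nondeterministically among enabled.
    choices = [t for t in transitions if enabled(marking, t)]
    if not choices:
        return marking
    return fire(marking, rng.choice(choices))

# Two transitions compete for the single token in place "a";
# exactly one of them fires, at random.
t1 = ({"a": 1}, {"b": 1})
t2 = ({"a": 1}, {"c": 1})
m = step({"a": 1, "b": 0, "c": 0}, [t1, t2])
```

As noted above, on a single processor the run is still deterministic
once the random choice is fixed; only the choice models parallelism.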

%th generation computers: There is a lot of work on the same basic
idea as 5th generation computers (a 5th generation computer by any
other name sounds better). From what I have been able to gather from
reading all the info from ICOT (the Japanese project directorate) they
are trying to do the project by getting foreign experts to come and
tell them how. They announce their project, say they're going to lead
the world, and wait for the egos of other scientists to bring them
there to show them how to really do it. The papers I've read show a
few good researchers with real good ideas but little in the way of
knowing how to get them working. On the other hand, data flow, speech
understanding, systolic arrays, microcomputer interfaces to
'supercomputers' and high BW communications are all operational to
some degree in the US, and are being improved on a daily basis. I
would therefore say that unless we show them how, we will be the
leaders in this field, not they.

***The last article was strictly my opinion-- no reflection on anyone
else***

                        Fred

------------------------------

Date: Thu, 28 Jul 83 11:34:17 CDT
From: Paul.Milazzo <milazzo.rice@Rand-Relay>
Subject: Automated LISP Dialect Translation

When Rice University got its first VAX, a friend of mine and I set 
about porting a production system based game playing program to Franz 
Lisp from Cambridge Lisp running on an IBM 370.  We used, as I recall,
a combination of Emacs macros (to change lexical constructs) and a
LISP program (to translate program constructs).  The technique was not
an elegant one, nor was it particularly general, but it gives me good 
reason to think that the LISP translator Fred proposes is far from 
impossible.  It also points out that implementation superiority is not
the only reason for choosing one LISP over another.
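A toy sketch of that sort of lexical translator (the rename table here
is hypothetical; the real Cambridge-to-Franz mapping was far larger
and also had to rewrite special forms, which is where the Emacs macros
came in):

```python
# Hypothetical dialect-to-dialect symbol map; nested Python lists
# stand in for s-expressions.
RENAMES = {"plus": "add", "greaterp": ">"}

def translate(form):
    # Recursively walk the expression tree, renaming known symbols
    # and leaving everything else untouched.
    if isinstance(form, list):
        return [translate(x) for x in form]
    return RENAMES.get(form, form)

out = translate(["plus", 1, ["greaterp", "x", 2]])
```

Not elegant or general, as Paul says, but enough to show why such a
translator is far from impossible.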

                                Paul Milazzo <milazzo.rice@Rand-Relay>
                                Dept. of Mathematical Sciences
                                Rice University, Houston, TX

:-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-)
P.S.  Fred:  After living in Texas for eight years, I'm still not
      sure I could interpret a Texan's remarks for a New Yorker.
      The dialect is easy to understand, but the concepts are all
      different...
:-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-)

------------------------------

Date: 28 Jul 1983 1352-PDT
From: Jay <JAY@USC-ECLC>
Subject: data flow computers and PS's

(more speculation)

 There has been some development of computers suited to certain high
level languages, including the LISP machines.  There has also been
some research into non-von Neumann machines.  One such machine is
the Data Flow Machine.

  The data flow machine differs from the conventional computer in that
ALL instructions are initiated when the program starts.  Each
instruction waits for the calculations yielding its arguments to
finish before it executes.

  This machine seems,  to  me,  to be  ideally  suited  to  Production
Systems/Expert Systems.   Each  rule would  be  represented as  a  few
instructions (the IF part of the  production) and the THEN part  would
be represented by the completion of  the rule.  For example, the  rule
(Month-is-june AND Sun-is-up) ->  (Temperature-is-high) would be coded
as:

Temperature-is-high:    AND
                       /   \
                     /       \
                   /           \
          (Month-is-june)   (Sun-is-up)

  Where (Month-is-june) and (Sun-is-up) are represented as either
other rules, or as data (which I assume completes instantly).
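Jay's encoding can be sketched as follows (a sequential simulation, of
course, not an actual data flow machine); the rule and fact names are
taken from the example above.

```python
# Each rule maps a conclusion to the inputs it waits on -- the AND
# node from the diagram.  Facts are data, which complete instantly.
RULES = {"Temperature-is-high": ["Month-is-june", "Sun-is-up"]}
FACTS = {"Month-is-june", "Sun-is-up"}

def completed(node, rules=RULES, facts=FACTS):
    # A node completes when it is a known fact, or when every node
    # it depends on has completed (the AND of its inputs).
    if node in facts:
        return True
    deps = rules.get(node)
    return deps is not None and all(completed(d, rules, facts)
                                    for d in deps)

fired = completed("Temperature-is-high")
```

Completion of the AND node is exactly the firing of the rule's THEN
part, as described in the message.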

j'

------------------------------

Date: Thu 28 Jul 83 16:06:46-PDT
From: David E.T. Foulser <FOULSER@SU-SCORE.ARPA>
Subject: Repeated Substring Detection

Would anyone in AI have use for the following type of program?  Given
a k-dimensional (the lower k the better) input string of characters 
from a finite alphabet, the program finds all substrings of dimension
k (or less if necessary) that occur more than once in the input
string.  I don't have a program that does this, but would like to know
of any interest.

                                        Sincerely,
                                        Dave Foulser
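[For the one-dimensional (k = 1) case, a brute-force version of the
program Dave describes is easy to sketch; nothing here reflects any
particular algorithm from his query. -- Ed.]

```python
from collections import defaultdict

def repeated_substrings(s, min_len=2):
    # Count every substring of length >= min_len, then keep those
    # that occur more than once in the input string.  Quadratic in
    # len(s); a real tool would use a suffix structure.
    seen = defaultdict(int)
    n = len(s)
    for i in range(n):
        for j in range(i + min_len, n + 1):
            seen[s[i:j]] += 1
    return {sub for sub, count in seen.items() if count > 1}

reps = repeated_substrings("abcabc")
```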

------------------------------

Date: 27 Jul 1983 1617-PDT
From: Park
Subject: A.I. in Sci Fi

                  [Reprinted from the SRI BBoard.]

Do you have a favorite gripe about the way scientists, computers, 
robots, or artificial intelligence are portrayed on tv shows?  Send 
them to me and I will forward them on Monday August 1 to an 
honest-to-God tv-show writer who is going to write that kind of show 
soon and would like to do it right.

Bill Park, EJ239 SRI International 333 Ravenswood Avenue Menlo Park,
CA 94025

------------------------------

Date: Thu 28 Jul 83 12:24:12-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Re: A.I. in Sci Fi

Gripes?  You mean things like:

  Hawaii 5-0 always using the card sorter as the epitome
  of computer readout?

  Stepford Wives portraying androids so realistic that no one
  notices, and executives/scientists who prefer them to true
  companions?

  Demon Seed showing impregnation of a woman by a computer?

  Telefon slowing down CRT typeout to 150 baud and adding
  Teletype sound effects?

  War Games similarly slowing the CRT typeout; using
  natural language communication; using voice synthesis
  on a home terminal connected by modem to a military computer;
  postulating that our national defense is in the hands of
  unsecured computers with dial-up ports, faulty password
  systems, games directories, and big panels of flashing lights;
  and portraying scientists and generals as nerds?

  Star Wars suggesting that computerized targeting mechanisms
  will always be inferior to human reflexes?

  Tron's premise that a computer can suck you into its internal
  conceptual world?

  Star Trek and War Games preaching that any computer can be
  disabled, even melted, by a logical contradiction or an
  unsatisfiable task?

Nah, I don't mind.

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************

∂29-Jul-83  1911	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #29
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Jul 83  19:11:05 PDT
Date: Friday, July 29, 1983 4:27PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #29
To: AIList@SRI-AI


AIList Digest           Saturday, 30 Jul 1983      Volume 1 : Issue 29

Today's Topics:
  Robustness stories, program logs wanted; reprise
  Job Ad: Research Fellowships at Edinburgh AI
  Job Ad: Research Associate/Programmer at Edinburgh AI
  Job Ad: Computing Officer at Edinburgh AI
----------------------------------------------------------------------

Date: 28 Jul 83 1631 EDT (Thursday)
From: Craig.Everhart@CMU-CS-A
Reply-to: Robustness@CMU-CS-A
Subject: Robustness stories, program logs wanted; reprise

Response to the blinded Robustness mailbox has been good, but not
quite good enough to do the trick.  If you have a robustness-related
story or a change log for a program, wouldn't you consider sending it
to my collection?  Thanks very much!

What I need is descriptions of robustness features--designs or fixes
that have made programs meet their users' expectations better, beyond
bug fixing.  E.g.:
        - An automatic error recovery routine is a robustness feature,
          since the user (or client) doesn't then have to recover by
          hand.
        - A command language that requires typing more for a dangerous
          command, or supports undoing, is more robust than one that
          has neither feature, since each makes it harder for the user
          to get in trouble.
There are many more possibilities.  Anything where a system
doesn't meet user expectations because of incomplete or ill-advised
design is fair game.

Your stories will be used to validate my PhD thesis at CMU, which is
an attempt to build a discrimination net that will aid system
designers and maintainers in improving their designs and programs.
All stories will be properly credited in the thesis.

Please send a description of the problem, including an idea of the
task and what was going wrong (or what might have gone wrong) and a
description of the design or fix that handled the problem.  Or, if you
know of a program change log and would be available to answer a
question or two on it, please send it.  I'll extract the reports from
it.

Please send stories and logs to Robustness@CMU-CS-A.  Send queries
about the whole process to Everhart@CMU-CS-A.  I appreciate it--thank
you!

------------------------------

Date: Wednesday, 27-Jul-83  17:34:36-BST
From: DAVE     FHL (on ERCC DEC-10)  <bowen@edxa>
Reply-to: bowen%edxa%ucl-cs@isid
Subject: Job Ad: Research Fellowships at Edinburgh AI

--------

                            University of Edinburgh
                     Department of Artificial Intelligence

                              2 RESEARCH FELLOWS

                               (readvertisement)

Applications are invited for two Research Fellow posts to join a
project, funded by the Science and Engineering Research Council, which
is concerned with developing methods of modelling the user of
knowledge-based training and aid systems.  Candidates, who should have
a higher degree in Computer Science, Mathematics, Experimental
Psychology or related discipline, should be experienced programmers
and familiar with UNIX.  Experience of PROLOG or LISP and some
knowledge of IKBS (Intelligent Knowledge Based Systems) techniques 
would be an advantage.

The posts are tenable for three years, starting 1 October 1983, on the
salary scale 7190 - 11160 pounds sterling.

Applications, which should include a curriculum vitae and the names of
two referees, should be sent to

            The Secretary's Office
            Old College
            South Bridge
            Edinburgh EH8 9YL
            Great Britain

(from whom further particulars can be obtained) by 27 August 1983.
Please quote reference 5106.

------------------------------

Date: Wednesday, 27-Jul-83  17:38:47-BST
From: DAVE     FHL (on ERCC DEC-10)  <bowen@edxa>
Reply-to: bowen%edxa%ucl-cs@isid
Subject: Job Ad: Research Associate/Programmer at Edinburgh AI

--------

                            University of Edinburgh
                     Department of Artificial Intelligence

                         RESEARCH ASSOCIATE/PROGRAMMER

Applications are invited for a post of Research Associate/Programmer,
to join a project, led by Dr Jim Howe and funded by the Science and
Engineering Research Council, which is concerned with the
interpretation of sonar data in a 3-D marine environment.  Candidates,
who should have a degree in Computer Science, Mathematics or related
discipline, should be conversant with the UNIX programming environment
and fluent in the C language.  The work involves programming
applications of statistical estimation, 3-D motion representation, and
rule-based inference; experience in one or more of these areas would
be an advantage.

The post is tenable for three years, starting 1 October 1983, on the
salary scale 6310 - 7190 pounds sterling.

Applications, which should include a curriculum vitae and the names of
two referees, should be sent to

            The Secretary's Office
            Old College
            South Bridge
            Edinburgh EH8 9YL
            Great Britain

(from whom further particulars can be obtained) by 27 August 1983.
Please quote reference 5107.

------------------------------

Date: Wednesday, 27-Jul-83  17:32:58-BST
From: DAVE     FHL (on ERCC DEC-10)  <bowen@edxa>
Reply-to: bowen%edxa%ucl-cs@isid
Subject: Job Ad: Computing Officer at Edinburgh AI

--------

                            University of Edinburgh
                     Department of Artificial Intelligence

                        DEPARTMENTAL COMPUTING OFFICER

Applications are invited for a post of Departmental Computing Officer.
The successful applicant will lead a small group which is responsible
for creating, maintaining and documenting systems and application
software as needed for research and teaching in Artificial
Intelligence, and for managing the department's computing systems
which run under Berkeley UNIX.  Candidates, who should have a degree
in Computer Science or related discipline, should be conversant with
UNIX and fluent in the C language.  A background in compiler design
or an interest in A.I. would be advantageous.

The post is salaried on the scale 7190 - 11615 pounds sterling, with
placement according to age and experience.

Applications, which should include a curriculum vitae and the names of
two referees, should be sent to

            The Secretary's Office
            Old College
            South Bridge
            Edinburgh EH8 9YL
            Great Britain

(from whom further particulars can be obtained) by 27 August 1983.
Please quote reference 7033.

------------------------------

End of AIList Digest
********************

∂02-Aug-83  1514	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #30
Received: from SRI-AI by SU-AI with TCP/SMTP; 2 Aug 83  15:12:22 PDT
Date: Tuesday, August 2, 1983 12:54PM
From: AIList (Moderator: Kenneth Laws) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #30
To: AIList@SRI-AI


AIList Digest            Tuesday, 2 Aug 1983       Volume 1 : Issue 30

Today's Topics:
  Automatic Translation - Lisp to Lisp,
  Language Understanding - EPISTLE System,
  Programming Aids - High-Level Debuggers,
  Databases - Request for Geographic Descriptors,
  Seminars - Chess & Evidential Reasoning
----------------------------------------------------------------------

Date: Fri 29 Jul 83 15:53:59-PDT
From: Michael Walker <WALKER@SUMEX-AIM.ARPA>
Subject: Lisp Translators

[...]

        There has been some discussion about Lisp translation programs
the last couple of days. Another one to add to the list is that
developed by Gord Novak at Sumex for translating Interlisp into Franz,
Maclisp, UCILisp, and Portable Standard Lisp. I suspect Gord would 
have a pretty good idea about what else is available, as this seems to
be an area of interest of his.

                                Mike Walker

[Another resource might be the set of macros that Rodney Brooks 
developed to run his Maclisp ACRONYM system under Franz Lisp.
The Image Understanding Testbed at SRI uses this package.
-- KIL]

------------------------------

Date: 30 Jul 1983 07:10-PDT
From: the tty of Geoffrey S. Goodfellow
Reply-to: Geoff@SRI-CSL
Subject: IBM Epistle.


    TECHNOLOGY MEMO
    By Dan Rosenheim
    (c) 1983 Chicago Sun-Times (Independent Press Service)
    IBM is experimenting with an artificial intelligence program that 
may lead to machine recognition of social class, according to a 
research report from International Resource Development.
    According to the market research firm, the IBM program can
evaluate the style of a letter, document or memo and can criticize the
writing style, syntax and construction.
    The program is called EPISTLE (Evaluation, Preparation and 
Interpretation System for Text and Language Entities).
    Although IBM's immediate application for this technology is to 
highlight ''inappropriate style'' in documents being prepared by 
managers, IRD researchers see the program being applied to determine 
social origins, politeness and even general character.
     Like Bernard Shaw's Professor Higgins, the system will detect
small nuances of expression and relate them to the social background
of the originator, ultimately determining sex, age, level of
intelligence, assertiveness and refinement.
    Particularly intriguing is the possibility that the IBM EPISTLE 
program will permit a response in the mode appropriate to the user and
the occasion. For example, says IRD, having ascertained that a letter
had been sent by a 55-year-old woman of Armenian background, the
program could help a manager couch a response in terms to which the
woman would relate.

------------------------------

Date: 01 Aug 83  1203 PDT
From: Jim Davidson <JED@SU-AI>
Subject: EPISTLE


There's a lot of exaggeration here, presumably by the author of the
Sun-Times article.  EPISTLE is a legitimate project being worked on
at Yorktown, by George Heidorn, Karen Jensen, and others.  [See,
e.g., "The EPISTLE text-critiquing system". Heidorn et al, IBM
Systems Journal, 1982] Its general domain, as indicated, is business
correspondence.  Its stated (long-term) goals are

    (a) to provide support for the authors of business letters--
        critiquing grammar and style, etc.;

    (b) to deal with incoming texts: "synopsizing letter contents,
        highlighting portions known to be of interest, and
        automatically generating index terms based on conceptual
        or thematic characteristics rather than key words".

Note that part (b) is stated considerably less ambitiously than in
the Sun-Times article.

The current (as of 1982) version of the system doesn't approach even
these more modest goals.  It works only on problems in class (a)--
critiquing drafts of business letters.  The *only* things it checks
for are grammar (number agreement, pronoun agreement, etc.), and
style (overly complex sentences, inappropriate vocabulary, etc.)
Even within these areas, it's still very much an experimental system,
and has a long way to go.

Note in particular that the concept of "style" is far short of the
sort of thing presented in the Sun-Times article.  The kind of style
checking they're dealing with is the sort of thing you find in a
style manual: passive vs. active voice, too many dependent clauses,
etc.

------------------------------

Date: 28 Jul 1983 05:25:43-PST
From: whm.arizona@Rand-Relay
Subject: Debugger Query--Summary of Replies

                    [Reprinted from Human-Nets.]

Several weeks ago I posted a query for information on debuggers.  The 
information I received fell into two categories: information about 
papers, and information about actual programs.  The information about 
papers was basically subsumed by two documents: an annotated 
bibliography, and soon-to-be-published conference proceedings.  The 
information about programs was quite diverse and somewhat lengthy.  In
order to avoid clogging the digest, only the information about the 
papers is included here.  A longer version of this message will be 
posted to net.lang on USENET.

The basic gold mine of current ideas on debugging is the Proceedings 
of the ACM SIGSOFT/SIGPLAN Symposium on High-Level Debugging which was
held in March, 1983.  Informed sources say that it is scheduled to 
appear as vol. 8, no. 4 (1983 August) of SIGSOFT's Software 
Engineering Notes and as vol. 18, no. 8 (1983 August) of SIGPLAN 
Notices.  All members of SIGSOFT and SIGPLAN should receive copies 
sometime in August.

Mark Johnson at HP has put together a pair of documents on debugging.
They are:

        "An Annotated Software Debugging Bibliography"
        "A Software Debugging Glossary"

I believe that a non-annotated version of this bibliography appeared 
in SIGPLAN in February 1982.  The annotated bibliography is the basic 
gold mine of "pointers" about debugging.

Mark can be contacted at:

        Mark Scott Johnson
        Hewlett-Packard Laboratories
        1501 Page Mill Road, 3U24
        Palo Alto, CA 94304
        415/857-8719

        Arpa:  Johnson.HP-Labs@RAND-RELAY
        USENET: ...!ucbvax!hplabs!johnson


Two books were mentioned that are not currently included in Mark's 
bibliography:

        "Algorithmic Debugging" by Ehud Shapiro.  It has information
          on source-level debugging, debuggers in the language being
          debugged, debuggers for unconventional languages, etc.  It
          is supposedly available from MIT Press.  (From
          dixon.pa@parc-maxc)

        "Smalltalk-80: The Interactive Programming Environment"
           A section of the book describes the system's interactive
           debugger.  (This book is supposedly due in bookstores
           on or around the middle of October.  A much earlier
           version of the debugger was briefly described in the
           August 1981 BYTE.)  (From Pavel@Cornell.)

Ken Laws (Laws@sri-iu) sent me an extract from "A Bibliography of 
Automatic Programming" which contained a number of references on 
topics such as programmer's apprentices, program understanding, 
programming by example, etc.


Many thanks to those who took the time to reply.

                                Bill Mitchell
                                The University of Arizona
                                whm.arizona@rand-relay
                                arizona!whm

------------------------------

Date: Fri 29 Jul 83 19:32:39-PDT
From: Robert Amsler <AMSLER@SRI-AI.ARPA>
Subject: WANTED: Geographic Information Data Bases

I want to build a geographic knowledge base and wonder if
someone out there has small or large sets of foreign
geographic data. Something containing elements such as
(PARIS CITY FRANCE) composed of three items:
Geographic-Name, Superclass, and Containing-Geographic-Item.

I have already acquired a list of all U.S. cities and
their state memberships; but apart from that need other
geographic information for other U.S. features (e.g. counties,
rivers, mountains, etc.) as well as world-wide data.

I am not especially looking for numeric data (e.g. Longitude
and Latitude; elevations, etc.) nor numeric attributes such
as population, area, etc. -- I want symbolic data, names of
geographic entities.

Note::: I do mean already machine-readable.

Bob Amsler
Natural-Language and Knowledge-Resource Systems Group
Advanced Computer Systems Department
SRI International
333 Ravenswood Ave
Menlo Park, CA 94025

------------------------------

Date: 1 August 1983 1507-EDT
From: Dorothy Josephson at CMU-CS-A
Subject: CMU Seminar, 8/9

                  [Reprinted from the CMU BBoard.]

DATE:           Tuesday, August 9, 1983
TIME:           3:30 P.M.
PLACE:          Wean Hall 5409
SPEAKER:        Hans Berliner
TOPIC:          "Ken Thompson's New Chess Theorem"

                        ABSTRACT

Among the not-quite-so-basic endgames in chess is the one of two
Bishops versus a Knight (no pawns).  The value of a general position
in this domain has always been an open question.  The Bishops have a
large advantage, but it was thought that a basic and
usually achievable position could be drawn.  Thompson has just shown
that this endgame is won in the general case using a technique called
retrograde enumeration.  We will explain what he did, how he did it,
and the significance of this result.  We hope some people from Formal
Foundations will attend as there are interesting questions relating
to whether a construction such as this should be considered a
"proof."

------------------------------

Date: 1 Aug 83 17:40:48 PDT (Monday)
From: murage.pa@PARC-MAXC.ARPA
Subject: HP Computer Colloquium, 8/4

                  [Reprinted from the SRI BBoard.]


                       JOHN D. LAWRENCE

                   Artificial Intelligence Center
                       SRI International


                       EVIDENTIAL REASONING:
           AN IMPLEMENTATION FOR MULTI-SENSOR INTEGRATION


One common feature of most knowledge-based expert systems is that
they must reason based upon evidential information. Yet there is very
little agreement on how this should be done. Here we present our
current understanding of this problem and its solution as it applies
to multi-sensor integration. We begin by characterizing evidence as a
body of information that is uncertain, incomplete, and sometimes
inaccurate. Based on this characterization, we conclude that
evidential reasoning requires both a method for pooling multiple
bodies of evidence to arrive at a consensus opinion and some means of
drawing the appropriate conclusions from that opinion. We contrast
our approach, based on a relatively new mathematical theory of
evidence, with those approaches based on Bayesian probability models.
We believe that our approach has some significant advantages,
particularly its ability to represent and reason from bounded
ignorance. Further, we describe how these techniques are implemented
by way of a long term memory and a short term memory.  This provides
for automated reasoning from evidential information at multiple
levels of abstraction over time and space.
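[The "relatively new mathematical theory of evidence" is presumably
the Dempster-Shafer theory.  A minimal sketch of the pooling step --
Dempster's rule of combination, with invented sensor numbers -- might
look like this. -- Ed.]

```python
def combine(m1, m2):
    # Dempster's rule: intersect focal elements, multiply masses,
    # discard conflicting (empty) intersections, and renormalize.
    combined = {}
    conflict = 0.0
    for a, x in m1.items():
        for b, y in m2.items():
            c = a & b
            if c:
                combined[c] = combined.get(c, 0.0) + x * y
            else:
                conflict += x * y
    k = 1.0 - conflict
    return {c: v / k for c, v in combined.items()}

# Two sensors reporting over the frame {ship, sub}; mass assigned
# to the whole frame {ship, sub} represents ignorance.
m1 = {frozenset({"ship"}): 0.6, frozenset({"ship", "sub"}): 0.4}
m2 = {frozenset({"sub"}): 0.5, frozenset({"ship", "sub"}): 0.5}
pooled = combine(m1, m2)
```

The ability to leave mass on the whole frame, rather than split it
among hypotheses, is the "bounded ignorance" the abstract contrasts
with Bayesian models.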


   Thursday, August 4, 1983 4:00 p.m.

   5M Conference Room
   1501 Page Mill Road
   Palo Alto, CA 94304

   NON-HP EMPLOYEES:  Welcome! Please come to the lobby on time, so
that you may be escorted to the conference room.

------------------------------

End of AIList Digest
********************

∂02-Aug-83  2352	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #31
Received: from SRI-AI by SU-AI with TCP/SMTP; 2 Aug 83  23:50:57 PDT
Date: Tuesday, August 2, 1983 10:49PM
From: AIList Moderator: Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #31
To: AIList@SRI-AI


AIList Digest           Wednesday, 3 Aug 1983      Volume 1 : Issue 31

Today's Topics:
  Fifth Generation - Opinion & Book Review
----------------------------------------------------------------------

Date: Sat 30 Jul 83 21:39:16-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: 5th generation

I think that there is a widespread misconception about ICOT and the
5th
generation project. Here are my comments on a recent message to this
bulletin board:

    From what I have been able to gather from reading all the
    info from ICOT (the Japanese project directorate) they are
    trying to do the project by getting foreign experts to come
    and tell them how. They anounce their project, say they're
    going to lead the world, and wait for the egos of other
    scientists to bring them there to show them how to really do
    it.

I personally know several people who have visited ICOT; I have talked
at length with two of them and read trip reports by others.  During
their visits there was very little, if any, suggestion that they
should participate in the day-to-day effort at ICOT or give detailed
reports on their own work.  The character of the visits was very much
that of an academic visit, where the visitor goes on doing his current
work and sees what the hosts are up to.  The hosts were also very open
with their (very concrete and under-way) plans.  The image of the ICOT
worker waiting anxiously to be told what to do seems the opposite of
reality; in fact they sometimes seem too busy with their own work to
give their visitors any more than the minimum courteous attention.  As
far as I can tell, the goal of the invitations is to foster goodwill
and understanding of ICOT's goals.

    The papers I've read show a few good researchers with real
    good ideas but little in the way of knowing how to get them
    working.

ICOT has a very clear plan of creating a line of successively faster 
and more sophisticated "inference machines".  The first, the personal 
sequential inference machine (PSI), a specialized Prolog machine, is
being built now, and there is no reason to believe that it will not be
completed in time. They are also doing research in parallel 
architectures and database machines.

    On the other hand, data flow, speech understanding, systolic
    arrays, microcomputer interfaces to 'supercomputers' and
    high BW communications are all operational to some degree in
    the US, and are being improved on a daily basis. I would
    therefore say that unless we show them how, we will be the
    leaders in this field, not they.

I have looked, and I know people who have looked much more carefully, 
at the usefulness of current fashions in parallel architectures for 
general deductive inference engines. The picture, unfortunately, is 
not brilliant.  Given that ICOT are committed to logic programming and
deductive mechanisms in general, there isn't that much that they could
borrow from that work. That is, they are taking genuine research 
risks. To explain fully why I think most current architectures are not
appropriate for logic programming/deduction would take me too far 
afield. I will just point out that logic programming/deduction involve
dealing with incompletely specified objects (terms with uninstantiated
variables) that can be specified further in many alternative ways (OR 
parallelism).  Implementation of this kind of parallelism in currently
BUILT architectures would involve either wholesale copying or a high 
cost in accessing variable bindings.

Fernando Pereira

------------------------------

Date: 01 Aug 83  1422 PDT
From: Jim Davidson <JED@SU-AI>
Subject: The Fifth Generation (book review)

BC-BOOK-REVIEW Undated By CHRISTOPHER LEHMANN-HAUPT c. 1983 N.Y. Times
News Service
    THE FIFTH GENERATION. Artificial Intelligence and Japan's Computer
Challenge to the World. By Edward A. Feigenbaum and Pamela McCorduck.
275 pages. Illustrated with diagrams. Addison-Wesley. $15.75.

    This isn't just another of those books that says Japan is better 
than we are and therefore is going to keep on whipping us in 
productivity. ''The Fifth Generation'' goes considerably further than 
that. It points with a trembling finger at Japan's commitment to 
produce within a decade a new generation of computers so immensely 
powerful that they will in effect constitute a new and revolutionary 
form of wealth.
    KIPS, these computers will be called, an acronym for knowledge
information processing systems. They will exploit the recent
speculation that intelligence, be it real or artificial, doesn't 
depend so much on the power to reason as it does on a ''messy bunch of
details, facts, rules of good guessing, rules of good judgment, and
experiential knowledge,'' as the authors put it. They will be so much
more powerful that where today's machines can handle 10,000 to 100,000
logical inferences per second, or LIPS, the next-generation computer
will be capable of 100 million to 1,000 million LIPS.
    These computers, if the Japanese succeed, will be able to interact
with people using natural language, speech and pictures. They'll 
transform talk into print and translate one language into another.  
Compared to today's machines, they'll be what automobiles are to 
bicycles. And because they'll raise knowledge to the status of what 
land, labor and capital once were, these machines will become ''an 
engine for the new wealth of nations.''
    Will the Japanese really pull this off, despite their supposed 
tendency to be ''copycats'' instead of innovators? The authors insist 
that this and other stereotypes are largely mythical; that every great
industrial nation must go through a phase of imitation. Sure, the
Japanese can do it. And even if they fail to fulfill their grand 
design, they'll likely achieve enough to make it pointless for any 
other nation to compete with them. Meanwhile, the United States will 
assume the role of ''the first great post-industrial agrarian 
society.''
    It's quite an awesome picture that Edward A. Feigenbaum and Pamela
McCorduck have painted. What's more, they have impressive credentials
- Feigenbaum as professor of computer science at Stanford University 
and a founder of Teknowledge Inc., a pioneer knowledge engineering 
company; Mrs. McCorduck as a science writer who teaches at Columbia 
and whose last book was a history of artificial intelligence called 
''Machines Who Think.'' And their jeremiad is extremely well written, 
even quite witty in places. It's certainly more articulate by an order
of magnitude than ''In Search of Excellence,'' the book that defends
America's managerial potential and now sits atop the nonfiction
best-seller list.
    So what are we supposed to do in the face of this awesome
challenge?  The authors list various possibilities, such as joining up
with Japan or preparing for our future as the world's truck garden.
But what they'd really like to see is ''a national center for
knowledge technology'' - that is, ''a gathering up of all knowledge,''
''to be fused, amplified, and distributed, all at orders of magnitude
difference in cost, speed, volume, and >>usefulness<< over what we
have now.''
    Be that as it may. While ''The Fifth Generation'' makes a 
powerful case, there are those who believe that, between the 
Pentagon's Defense Advanced Research Projects Agency (DARPA) and 
several interindustry groups that have been formed, we have already 
been sufficiently aroused to compete in this new race for world 
leadership. (The Soviet Union, by the way, is out in left field, 
according to the authors.)
    Whether the apocalypse it foresees is real or not, ''The Fifth 
Generation'' is worthwhile reading. Pamela McCorduck is very good on 
the debate over the ability of the machines to think, concluding that 
the condemnation they have met has been largely political - amusingly 
similar to ''the reasons given in the nineteenth century to explain 
why women could never be the intellectual equals of men.'' Feigenbaum 
is fascinating on his firsthand impressions of the Japanese computer 
establishment. (Each of the co-authors becomes a character in the 
narrative when his or her specialty happens to come up.)
    Together they are lucid on what the fifth-generation machines will
be like. And there is the standard mind-bending section on future 
computer applications. I particularly like Mrs. McCorduck's vision of 
the geriatric robot. ''It isn't hanging about in the hopes of 
inheriting your money - nor of course will it slip you a little 
something to speed the inevitable. It isn't hanging about because it 
can't find work elsewhere. It's there because it's yours. It doesn't 
just bathe you and feed you and wheel you out into the sun when you 
crave fresh air and a change of scene, though of course it does all 
those things. The very best thing about the geriatric robot is that it
>>listens<<. 'Tell me again,' it says, 'about how wonderful-dreadful
your children are to you. Tell me again that fascinating tale of the
coup of '63. Tell me again ... ' And it means it.''

------------------------------

End of AIList Digest
********************

∂04-Aug-83  1211	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #32
Received: from SRI-AI by SU-AI with TCP/SMTP; 4 Aug 83  12:09:23 PDT
Date: Thursday, August 4, 1983 9:26AM
From: AIList Moderator: Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #32
To: AIList@SRI-AI


AIList Digest            Thursday, 4 Aug 1983      Volume 1 : Issue 32

Today's Topics:
  Graph Theory - Finding Clique Covers,
  Knowledge Representation - Textnet,
  Fifth Generation & Misc. - Opinion,
  Lisp - Revised Maclisp Manual & Review of IQLisp
----------------------------------------------------------------------

Date: 2 Aug 83 11:14:51 EDT  (Tue)
From: Dana S. Nau <dsn%umcp-cs@UDel-Relay>
Subject: A graph theory problem

The following graph theory problem has arisen in connection with some
AI research on computer-aided design and manufacturing:

    Let H be a graph containing at least 3 vertices and having no
    cycles of length 4.  Find a smallest clique cover for H.

If there were no restrictions on the nature of H, the problem would be
NP-hard, but given the restrictions, it's unclear what its complexity
is.  A couple of us here at Maryland have been puzzling over the
problem for a week or so, and haven't been able to reduce any known
NP-hard problem to it.  However, the fastest procedure we have found
to solve the problem takes exponential time in the worst case.

Does anyone know anything about the computational complexity of this
problem, or about possible procedures for solving it?
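
For concreteness, here is a minimal brute-force sketch of the problem
(a modern Python rendering, not part of the original posting; the
adjacency-set representation and all names are illustrative choices,
and the no-4-cycles restriction on H is not exploited):

```python
from itertools import combinations, product

def is_clique(adj, verts):
    """True iff every pair of vertices in verts is joined by an edge."""
    return all(v in adj[u] for u, v in combinations(verts, 2))

def min_clique_cover(adj):
    """Smallest clique cover by exhaustive search: for k = 1, 2, ...
    try every assignment of the vertices to k cliques and return the
    first assignment whose groups are all cliques.  Exponential in the
    worst case, consistent with the procedures the posting describes."""
    verts = list(adj)
    for k in range(1, len(verts) + 1):
        for labels in product(range(k), repeat=len(verts)):
            groups = {}
            for v, g in zip(verts, labels):
                groups.setdefault(g, []).append(v)
            if len(groups) == k and all(
                    is_clique(adj, vs) for vs in groups.values()):
                return list(groups.values())
    return [[v] for v in verts]  # singletons always work
```

This runs in roughly k^n time and is usable only on tiny graphs; it
is meant as a reference implementation to test faster procedures
against, not as a solution to the complexity question.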

------------------------------

Date: 3 Aug 83 20:50:46 EDT  (Wed)
From: Randy Trigg <randy%umcp-cs@UDel-Relay>
Subject: Textnet

[Adapted from Human-Nets.  The organization and indexing
of knowledge are topics that should be of interest to the AI
community.  -- KIL]

Regarding the recent worldnet discussion, I thought I'd briefly 
describe my research and suggest how it might apply: My thesis work 
has been in the area of advanced text handlers for the online 
scientific community.  My system is called "Textnet" and shares much 
with both NLS/Augment and Hypertext.  It combines a hierarchical 
component (like NLS, though we allow and encourage multiple 
hierarchies for the same text) with the arbitrary linked network 
strategy of Hypertext.  The Textnet data structure resembles a 
semantic network in that links are typed and are valid manipulable 
objects themselves, as are "chunks" (nodes with associated text) and 
"tocs" (nodes capturing hierarchical info).

I believe that a Textnet approach is the most flexible for a national 
network.  In a distributed version of Textnet (distributing 
Hypertext/Xanadu has also been proposed), users create not only new 
papers and critiques of existing ones, but also link together existing
text (i.e., reindexing information), and build alternate
organizations.

There can be no mad dictator in such an information network.  Each 
user organizes the world of scientific knowledge as he/she desires.  
Of course, the system can offer helpful suggestions, notifying a user 
about new information needing to be integrated, etc.  But in this 
approach, the user plays the active role.  Rather than passively 
accepting information in whatever guise worldnet decides to promote, 
each must take an active hand in monitoring that part of the network 
of interest, and designing personalized search strategies for the 
rest.  (For example, I might decree that any information stemming from
a set of journals I deem absurd shall be ignored.)  After all, any 
truly democratic system should and does require a little work from 
each member.

------------------------------

Date: 3 Aug 1983 0727-PDT
From: FC01@USC-ECL
Subject: Re: Fifth Generation

Several good points were made about the Japanese capabilities and
plans for 5th generation computers. I certainly didn't intend to say
that they weren't capable of building such machines, only that the
U.S. could easily beat them to it if the effort were deemed
worthwhile. I have to agree that the nature of systolic arrays is
quite different from the necessary architecture for inference engines,
but nevertheless for vision and speech applications, these arrays are 
quite clearly superior. I know of no other nation with a data flow
machine in operation (although the Japanese are most certainly working
on it). Virtually every theorem proving system in existence was
written in the U.S. All of this information was freely (and rightly in
my opinion) disseminated to the rest of the world. If we continue to
do the research and seek immediate profits at the expense of long term
development, there is no doubt in my mind that the Japanese will beat
us there. If on the other hand, we use our extreme expertise to make 
our development programs the best they can be, and don't make the same
mistake we made with robotics in the 70s, I feel we can build better
machines sooner.

        Lisp translators from Interlisp to other Lisps seem very 
interesting to me. Perhaps someone could send me a pointer to an 
ARPA-net mailing address of the creator/maintainer of these programs.
To my knowledge, none operates w/out human assistance, but I could be 
wrong.  [Check with Hanson@SRI-IU for Rodney Brooks' Maclisp-to-Franz 
macro package.  It does not cover all features in Maclisp.  -- KIL]

        As to natural language translation using computers, it has
been tried for technical translation and has been quite successful as a
dictionary. As of 5 years ago, there were no real translators beyond
this for natural language.  Perhaps this has changed drastically. It
is my guess that without a system capable of learning, true
translation will never be done. It is simply too much to expect that a
human expert would be able to embody all of the knowledge of a
language into a program. Perhaps 90% translation could be achieved in
a few years, and 99% could probably be here w/in 10 years (between
similar languages).

        Speech recognition can be quite effective for relatively small
vocabularies by a given speaker in a particular language.
Understanding speech is a considerably slower process, but has the
advantage of trying to make sense of the sounds. It is probably not
realistic to say that general purpose speech understanding systems in
multiple languages with multiple speakers using large vocabularies
will be operational at real time performance in the next 10 years.

        Vision systems have been researched considerably for limited
robotics applications. Context boundedness seems to have a great
effect on the sort of IO that humans do. It is certainly not clear
that real time vision systems capable of understanding large varieties
of environments will be operational w/in the next 10 years.

        These problems are not simply solved by having very large
quantities of processing power! If they were, 5th generation computers
would not be such a risk. Even if the goals are not met, the advances
due to a large R+D program such as ICOT's will certainly have many
technological spinoffs with a widespread effect on the world
marketplace. It has been a longstanding problem with AI research that
people who demonstrate its results and people who report on these 
demonstrations both stress the possibilities for the future rather
than the realities of today. In many cases, the misconceptions spread
through the scientific community as well as the general public. Even
many computer science 'experts' that I've met have vast misconceptions
about what the current systems can in fact do, have in fact done, and
can be easily expanded to do. In many cases, NP complete problems have
been approached through heuristic means. This certainly works in many
cases, but as the sizes of problems increase, it is not clear that
these heuristics will apply as handily. NP completeness cannot be 
gotten around in general by building bigger or faster computers.
Computer learning has only been approached by a few researchers, and
few people would be considered 'intelligent' if they couldn't learn
from their mistakes.

        It doesn't bother me to see Kirk destroy computers with his
illogical ways. I've personally blown away many operating systems
accidentally with my illogical ways, and don't expect that anyone will
ever be able to build a 'perfect' machine. It does bother me when
people look at that as more than fantasy and claim it as scientific
evidence. Similarly, the 'robots' that are run by remote control (kind
of like a radio-controlled airplane) sometimes upset me when they fool
people into thinking they are autonomous intellects.

                                Yet another flaming controversy
				starter by
                                        Fred

------------------------------

Date: 3 August 1983 15:04 EDT
From: Kent M. Pitman <KMP @ MIT-MC>
Subject: MIT-LCS TR-295: The Revised Maclisp Manual

They said it would never happen, but look for yourself...

                        The Revised Maclisp Manual
                             by Kent Pitman

                                Abstract

Maclisp is a dialect of Lisp developed at M.I.T.'s Project MAC (now
the MIT Laboratory for Computer Science) and the MIT Artificial
Intelligence Laboratory for use in artificial intelligence research
and related fields.  Maclisp is descended from Lisp 1.5, and many
recent important dialects (for example Lisp Machine Lisp and NIL) have
evolved from Maclisp.

David Moon's original document on Maclisp, The Maclisp Reference
Manual (alias the Moonual) provided in-depth coverage of a number of
areas of the Maclisp world. Some parts of that document, however, were
never completed (most notably a description of Maclisp's I/O system);
other parts are no longer accurate due to changes that have occurred
in the language over time.

This manual includes some introductory information about Lisp, but is 
not intended as a tutorial. It is intended primarily as a reference 
manual; particularly, it comes in response to users' pleas for more 
up-to-date documentation. Much text has been borrowed directly from
the Moonual, but there has been a shift in emphasis. While the Moonual
went into greater depth on some issues, this manual attempts to offer
more in the way of examples and style notes.  Also, since Moon had
worked on the Multics implementation, the Moonual offered more detail
about compatibility between ITS and Multics Maclisp. While it is hoped
that Multics users will still find the information contained herein to
be useful, this manual focuses more on the ITS and TOPS-20
implementations since those were the implementations most familiar to
the author.

The PitMANUAL, draft #14 May 21, 1983
                                   Saturday Evening Edition

Keywords: Artificial Intelligence, Lisp, List Structure, Maclisp,
          Programming Language, Symbol Manipulation

Ordering Information:

        The Revised Maclisp Manual
        MIT-LCS TR-295, $13.10

        Publications
        MIT Laboratory for Computer Science
        545 Technology Square
        Cambridge, MA 02139

About 300 copies were made. I don't know how long they'll last.
--kmp

------------------------------

Date: 1 August 1983 1747-EDT
From: Jeff Shrager at CMU-CS-A
Subject: IQLisp for the IBM-PC


        A review of IQLisp (by Integral Quality, 1983).

                Compiled by Jeff Shrager
                    CMU Psychology
                      7/27/83

The following comments refer to IQLisp running on an IBM-PC XT/256K
(you tell IQLisp the host machine's memory size at startup).  I spent
two two-hour (approximately) sessions with IQLisp just going through
the manual and hacking various features.  Then I tried to implement a
small production system interpreter (which took another three hours).

I. Things that make IQLisp more attractive than other micro Lisp
   systems that I have seen.

  A. The general workspace size is much larger than most due to the
     IBM-PC XT's expanded capacity.  IQLisp can take advantage of the
     increased space and the manual explains in detail how memory
     can be rearranged to take advantage of different programming
     requirements.  (But, see II.G.) (See also, summary.)
  B. The Manual is complete and locally legible. (But see II.D.)
     The internal specifications manual is surprisingly clear and
     complete.
  C. There is a window package. (But the windows aren't implemented
     to scroll under one another so the feature is more-or-less
     useless.)
  D. There is a macro facility.  This feature is important to both
     speed and eventual implementation of a compiler. (But see II.B.)
     Note that the manual teaches the "correct" way to write
     fexprs -- i.e., with macros.
  E. It uses the 8087 FP coprocessor if one exists. (But see II.A.)
  F. Integer bignums are supported.
  G. Arrays are supported for various data types.
  H. It has good "simple" I/O facilities.
     1. Function key support.
     2. Single keystroke input.
     3. Read macros. (No print macros?)
     4. A (marginal) window facility.
     5. Multiple streams.
  I. The development package is a useful programming tool.
     1. Error recovery tools are well designed.
     2. A complete structure editor is provided. (But, see II.I.)
     3. Many useful macros are included (e.g., backquote).
  J. It seems to be reasonably fast.  (See summary.)
  K. Stack frame hacking functions are provided which permit error
     control and evaluations in different contexts. (But, see II.H.)
  L. There is a clean interface to DOS.  (The "DIR" function is
     especially useful and cleverly implemented.)


II. Negative aspects of IQLisp.  (* Things marked with a "*" indicate
    important deficiencies.)

**A. There is no compiler!
 *B. Floating point is not supported without the 8087.  One would
     think that some sort of FP, even a very slow one, would be
     provided.
 *C. Casing is completely backwards.  Uppercase is demanded by IQLisp
     which forces one to put on shift lock (in a bad place on the IBM
     PC).  If any case dependency is implemented it should be the
     opposite (i.e., demand lower case) but case sensitivity should
     be switch controllable -- and default OFF!
 *D. The manual is poorly organized.  It is very difficult to find
     a particular topic since there are no complete indexes and the
     topics are split over several different sections.
  E. Error recovery is sometimes poor.  I have had three or four
     occasions to reboot the PC because IQLisp had gone to lunch.
     Once this was because the 8087 was not present and I had told
     the system that it was.  I don't know what caused the other
     problems.
  F. The file system supports only sequential files.
  G. The stack is fixed at 64K maximum which isn't very much and
     permits only about 700 levels of binding-free recursion.
  H. No new features of larger Lisp systems are provided.  For
     example: closures, flavors, etc.  This is really not a
     reasonable complaint since we're talking 256K here.
  I. There is no screen editor for functions.


III. Summary.

I was disappointed by IQLisp but perhaps this is because I am still
dreaming of having a Lisp machine for under $5,000.  IQ has obviously
put a very large amount of effort into the system and its
documentation (the latter being at least as important as the former).

Although one does not have all the functionality of a Lisp machine in
IQLisp (or even nearly so) I think that they have done an admirable
job within the constraints of the IBM-PC.  Some of the features are
overkill (e.g., the window system, which is pretty worthless as
provided in a non-graphics environment).

My production system was not the model of efficient PS hacking.  It
was not meant to be.  I wanted to see how IQLisp compared with our
Vax VMS Franz system.  I didn't use a RETE net or efficient memory
organization.  IQ didn't do very well against even a heavily loaded
Vax (also interpreted lisp code). The main problem was space, not
speed.  This is to be expected on a machine without virtual memory.
Since there are no indexed file capabilities in IQLisp, the user is
strictly limited by the available core memory. I think that it's
going to be some time before we can do interesting AI with a micro.
However, (1) I think that I could have rewritten my production system
to be much more efficient in both space and time.  It may have run
acceptably with some careful tuning (what do you want for three
hours!?). And (2) we are going to try to use the system in the near
future for some human-computer interaction experiments -- as a
single-subject workstation for learning Lisp.  I see no reason that
it should not perform acceptably in domains which are less
information intensive than AI.

The starred (*) items in section II above are major stumbling blocks
to using IQLisp in general.  Of these, it is the lack of a Lisp
compiler which stops me from recommending it to everyone.  I expect
that this will be corrected in the near future because they have all
the required underpinnings (macros, assembly interface, etc).  Why
don't people just write a simple little lisp system and a whizzy
compiler?

------------------------------

End of AIList Digest
********************

∂05-Aug-83  2115	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #33
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Aug 83  21:13:29 PDT
Date: Friday, August 5, 1983 5:13PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #33
To: AIList@SRI-AI


AIList Digest            Saturday, 6 Aug 1983      Volume 1 : Issue 33

Today's Topics:
  Automatic Translation - FRANZLATOR & Natural Language,
  Expert Systems - Survey Alert,
  Fifth Generation - Opinions,
  Computational Complexity - Parallelism,
  Distributed AI - Problem Solving Bibliography,
  Literature Sources - Requests,
  Workstations - Request,
  Job - Stanford Heuristic Programming Project
----------------------------------------------------------------------

Date: Thu, 4 Aug 83 12:06 EDT
From: Tim Finin <Tim%UPenn@UDel-Relay>
Subject: FRANZLATOR inter-dialect translation system


We have built a rule-driven lisp-to-lisp translation system
(FRANZLATOR) in Franz lisp and have used it to translate KL-ONE from
Interlisp to Franz.  ("We" includes people here at Penn and at BBN
and CCA.)  The system is modular so that modifying it to work with a
different source and target dialect should involve only changing
several data bases.

The translator is organized as a two-pass system which is applied to a
set of source-dialect files and produces a corresponding set of
target-dialect files and a set of files containing notes about the
translation (e.g.  possible errors).

During the first pass all of the source files are scanned to build up
a database of information about the functions defined in the file
(e.g. type of function, arity, how it evals its args).  In the second
pass the expressions in the source files are translated and the
results written to the target files. The translation of an
s-expression is driven by transformation rules applied according to an
"eval-order" schedule (i.e. the arguments to a function call are
translated before the call to the function itself). An additional
initial pass may be required to perform certain character-level
transformations, although this can often be done through the use of
multiple readtables.

The actual translation is done by a set of rewrite rules, each rule
taking an s-expression into one or more resultant s-expressions.  In
addition to the usual "pattern" and "result" parts, rules can be
easily augmented with arbitrary conditions and actions and can have
several other attributes which control their application (e.g. a
priority). Variables are represented using the "backquote" convention.
Example of rules for Interlisp->Franz are:
   (NIL nil)
   ((NLISTP ,x) (not (dtpr ,x)))
   ((PROG1 ,@args) (prog2 nil ,@args))
   ((DECLARE: ,@args) ,(translateDeclare: ,args))
   ((and ,@x (and ,@y) ,@z) (and ,@x ,@y ,@z) -cyclic)

The translation rules are presented to the system in the form
described above and are immediately "compiled" (by macro-expansion)
into Lisp code which is quite efficient and can be, of course, further
compiled by LISZT.  The pattern matching operation, for example, is
"open coded" into a conjuction of primitive tests and action (e.g. EQ,
EQUAL, LENGTH, SETQ).
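
The eval-order, pattern/result scheme described above can be sketched
in miniature (an illustrative modern Python toy, not FRANZLATOR
itself; real rules also use segment variables like ,@args, which this
fixed-arity matcher omits, and the real system compiled its rules into
Lisp rather than interpreting them):

```python
# S-expressions are nested lists; pattern variables are strings
# beginning with "?" (standing in for backquote variables).

def match(pattern, expr, bindings):
    """Match expr against pattern, filling bindings; True on success."""
    if isinstance(pattern, str) and pattern.startswith('?'):
        bindings[pattern] = expr
        return True
    if isinstance(pattern, list):
        if not isinstance(expr, list) or len(pattern) != len(expr):
            return False
        return all(match(p, e, bindings) for p, e in zip(pattern, expr))
    return pattern == expr

def substitute(result, bindings):
    """Instantiate a result template from the bindings."""
    if isinstance(result, str) and result.startswith('?'):
        return bindings[result]
    if isinstance(result, list):
        return [substitute(r, bindings) for r in result]
    return result

def translate(expr, rules):
    """Eval-order: arguments are translated before the enclosing call."""
    if isinstance(expr, list):
        expr = [translate(e, rules) for e in expr]
    for pattern, result in rules:
        bindings = {}
        if match(pattern, expr, bindings):
            return substitute(result, bindings)
    return expr

# Fixed-arity analogues of two of the posted Interlisp->Franz rules:
RULES = [
    (['NLISTP', '?x'], ['not', ['dtpr', '?x']]),
    (['PROG1', '?a', '?b'], ['prog2', 'nil', '?a', '?b']),
]
```

For example, translating (PROG1 (NLISTP y) z) rewrites the inner
NLISTP call first and then the enclosing PROG1, as the eval-order
schedule requires.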

If you are interested in more information, contact me.

- Tim at UPENN (csnet)

------------------------------

Date: Friday, 5 August 1983 12:43:04 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: Machine translation

        The thing that makes any kind of general purpose machine
translation extremely hard is that there generally aren't one-to-one
correspondences between words, phrases, or sometimes concepts in two
different human languages.  A real translator essentially reads and
understands the text in one language, and then generates the
appropriate text in the other language.  Since understanding general
texts requires huge amounts of real-world knowledge, unrestricted
machine translation will arrive about the time AI programs can pass
the Turing test.  In my opinion, this will be substantially longer
than ten years.

------------------------------

Date: Thu 4 Aug 83 09:25:16-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert Systems Summary

The August issue of IEEE Spectrum contains an article by William B.  
Gevarter (of NASA) titled "Expert Systems: Limited but Powerful".  The
table of existing expert systems shows 79 systems in 16 categories.  
The text includes brief descriptions of Dendral, Mycin, R1, and 
Internist.

                                        -- Ken Laws

------------------------------

Date: 4 Aug 83 8:56:21-PDT (Thu)
From: decvax!linus!philabs!ras @ Ucb-Vax
Subject: Re: Japanese Effort
Article-I.D.: philabs.27320

        Bully for you Fred! I also believe the Japanese do not have
        the know-how nor the manpower to create such a machine.
        They make great memory devices, but that's where it ends.

                                        Rafael Aquino !plabs

------------------------------

Date: Thu 4 Aug 83 13:41:13-PDT
From: Al Davis <ADavis at SRI-KL>
Subject: Re: Fifth Generation Book Review


As a frequent visitor to the Soviet Union, and regular reader of
Kibernetica, I don't get the feeling that the "Russians are out in
left field" - nor do I feel that the book is particularly
illuminating.  It is readable and provides some excellent insight for
the non-professional.  However, the hype and the reality are carefully
interwoven.  After all, how professional is the "pointing of a
trembling finger at the Japanese"?  Take your pick.

                                                Al Davis

                                                AI Architecture
                                                Fairchild AI Labs

------------------------------

Date: 4 Aug 1983 23:05:15-PDT
From: borgward.umn-cs@Rand-Relay
Subject: Re: Fifth Generation Computing

I do know of other nations with a data flow machine in operation.  
Gurd and Watson have one that works at Manchester in England.  I think
that the French LAU system also works.  Such lapses in attention are
what make Americans unpopular in Europe.  We also import a lot of AI
research from Europe, including Prolog.

--Peter Borgwardt, University of Minnesota

------------------------------

Date: Fri 5 Aug 83 14:06:06-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: NP-completeness and parallelism

        In AIList V#32 Fred comments that "NP-completeness cannot be 
gotten around in general by building bigger or faster computers".  My
guess would be that parallelism may offer a way to reduce the order of
an algorithm, perhaps even to a polynomial order (using a machine with
"infinite parallel capacity", closely related to Turing's machine with
"infinite memory"). For example, I have heard of work developing 
sorting algorithms for parallel machines which have a lower order than
any known sequential algorithm.

        Perhaps more powerful machines are truly the answer to some of
our problems, especially in vision analysis and data base searching.  
Has anyone heard of a good book discussing parallel algorithms and 
reduction in problem order?
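
As one concrete illustration of order reduction (a sequential Python
simulation of a classic parallel algorithm, added for illustration and
not tied to any system mentioned here): odd-even transposition sort
uses n rounds whose compare-exchanges within a round touch disjoint
pairs, so with about n/2 processors each round is O(1) parallel time
and the whole sort takes O(n) parallel time, below the O(n log n)
operations of the best sequential comparison sorts.

```python
def odd_even_transposition_sort(a):
    """Sequential simulation of odd-even transposition sort.  The
    round structure is the point: each round's compare-exchanges are
    independent, and n rounds always suffice to sort n elements."""
    a = list(a)
    n = len(a)
    rounds = 0
    for r in range(n):
        # pairs (i, i+1) for i of alternating parity: disjoint,
        # so one round is a single parallel step
        for i in range(r % 2, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
        rounds += 1
    return a, rounds
```

This does not, of course, touch NP-completeness: the speedup is a
polynomial factor bought with polynomially many processors.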

David Rogers

DRogers@SUMEX-AIM.ARPA

------------------------------

Date: Thu 4 Aug 83 17:41:01-PDT
From: Vineet Singh <vsingh@SUMEX-AIM.ARPA>
Subject: Distributed Problem Solving: An Annotated Bibliography

 For all of you who expressed interest in the annotated bibliography
on Distributed Problem Solving, here is some important information on
how to ftp a copy if you don't know this already.

The bibliography manuscript file "<vsingh.dps>dpsdis.bib" will be kept
on sumex-aim.arpa.  Please login as "anonymous" with password 
"sumexguest" (one word).

The file is by no means complete as you can see.  It will be 
continually updated.  You may notice that the file is prepared for 
Scribe formatting.

Please mail additional entries/annotations/corrections/suggestions to 
me and I will incorporate them in the file as soon as possible.  The 
turnaround time will be a lot shorter if the new entries are also in 
Scribe format.  If you know anything about Scribe, please save me a 
lot of effort and put your entries in Scribe format.

For those of you that did not see the original message, I have 
reproduced it below.

-------------------------------------------------------------------------------


This is to request contributions to an annotated bibliography of 
papers in *Distributed Problem-Solving* that I am currently compiling.
My plan is to make the bibliography available to anybody that is 
interested in it at any stage in its compilation.  Papers will be from
many diverse areas: Artificial Intelligence, Computer Systems 
(especially Distributed Systems and Multiprocessors), Analysis of 
Algorithms, Economics, Organizational Theory, etc.

Some miscellaneous comments.  My definition of distributed 
problem-solving is a very general one, namely "the process of many 
entities engaged in solving a problem", so feel free to send a 
contribution if you are not sure that a paper is suitable for this 
bibliography.  I also encourage you to make short annotations; more 
than 5 sentences is long.  All annotations in the bibliography will 
carry a reference to the author.  If your bibliography entries are in 
Scribe format that's great because the entire bibliography will be in 
Scribe.

Vineet Singh (VSINGH@SUMEX-AIM.ARPA)

------------------------------

Date: 1 Aug 83 4:22:03-PDT (Mon)
From: ihnp4!cbosgd!cbscd5!lvc @ Ucb-Vax
Subject: AI Journals
Article-I.D.: cbscd5.365

I am interested in subscribing to a computer science journal(s) that
deals primarily with artificial intelligence.  Could anyone who knows
of such journals send me their names via mail?  I will post a list of
all those sent my way.  Thanks in advance,

Larry Cipriani cbosgd!cbscd5!lvc

------------------------------

Date: 4 Aug 83 0:26:53-PDT (Thu)
From: hplabs!hp-pcd!jrf @ Ucb-Vax
Subject: AI & Geography
Article-I.D.: hp-pcd.1455



Please send info on what's available in Geography (PROSPECTOR,
cartography, etc.).  Thanks.

jrf

------------------------------

Date: 05 Aug 83  1417 PDT
From: Fred Lakin <FRD@SU-AI>
Subject: LISP & SUNs ...

I am interested in connections between Franz LISP and SUN workstations.
Like how far along is Franz on the SUN?  Is there some package which
allows Franz on a VAX to use a SUN as a display device?  Also, now
that I think of it, any other LISPs which might run on both SUNs and
VAXes ...

Any info on this matter would be appreciated.  Thanks, Fred Lakin

------------------------------

Date: Thu 4 Aug 83 09:57:01-PDT
From: Larry Fagan  <FAGAN@SUMEX-AIM.ARPA>
Subject: Programmer - ONCOCIN Project: Stanford Heuristic Programming
         Project

Programmer - ONCOCIN Project:  Stanford Heuristic Programming Project

        This position will involve applications programming for an 
oncology protocol management system known as ONCOCIN.  This project, 
with Ted Shortliffe as principal investigator, represents an 
application of expert systems to the treatment of cancer patients, and
is currently in daily use by physicians.  The job requires significant
experience with artificial intelligence techniques and the LISP or
Interlisp languages.  The applicant must be willing to learn an
already existing, large expert system.  Masters level training in
computer science and previous experience with personal workstations
are highly desirable.  Although the tasks required will be varied, the
emphasis will be on artificial intelligence aspects of the oncology
research work:

*day-to-day management of the Interlisp programming efforts;
*participation in the design as well as the implementation of system
 capabilities;
*documentation of the system on an ongoing basis (system
 overview/description as well as software documentation);
*supervisory coordination of students and part-time programmers who
 may also be working on related projects;
*assistance with occasional non-programming matters important to the
 smooth running of the project and to the efficient and effective
 performance of the system in the clinical environment;
*assistance with system demonstrations for visitors and at meetings;
*assistance with preparation of portions of annual reports and funding
 proposals;
*an ability to work closely with the Chief Programmer, who will
 coordinate the Interlisp efforts with other developing aspects of the
 total project.

Salary:  will follow Stanford University guidelines for Scientific 
Programmer III in accordance with the level of training and prior 
experience.

Contact: Larry Fagan, M.D., Ph.D.  (FAGAN@SUMEX)
         Project Director, ONCOCIN
         Stanford University Medical Center
         TC-117, Dept. of General Internal Medicine
         Stanford, Calif. 94305 (415)497-6979

------------------------------

End of AIList Digest
********************

∂08-Aug-83  1500	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #34
Received: from SRI-AI by SU-AI with TCP/SMTP; 8 Aug 83  14:59:40 PDT
Date: Monday, August 8, 1983 1:15PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #34
To: AIList@SRI-AI


AIList Digest             Monday, 8 Aug 1983       Volume 1 : Issue 34

Today's Topics:
  Fifth Generation - Opinion,
  Translation - Natural Language,
  Computational Complexity - Parallelism,
  LOGO - Request,
  Lab Descriptions - USENET Sites,
  Conferences - AAAI Panel to Honor Alexander Lerner
----------------------------------------------------------------------

Date: 5 Aug 83 20:14:19-PDT (Fri)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!ditzel @ Ucb-Vax
Subject: Re: Japanese Effort
Article-I.D.: ssc-vax.377

While it is true that the United States holds a substantial lead in AI
over the Japanese, it really is beyond me how a person can believe
that they do not have the resources to overcome such a lead.  In my
*opinion*, several things make a Japanese lead in AI machines
possible.  Like:

*It is a national effort with an attempt to coordinate goals. The fact
that the project will be a coordinated effort rather than various 
incongruously related developments should facilitate compatibility
among the different topics.

*It may well be that Japan will have to go to the outside world to
make their project a success. What of it...a success is still a
success.

*In addition, to believe that a priority project supported by both
government and industry will not try to encourage, educate, and
nurture talented individuals toward the topics covered by the 5th
generation is not realistic.

*Worse yet, to believe such a project will not have an intense
political and social effect on Japan is also ignoring reality. If and
when successes in project goals do come, various segments of the
society and industrial sectors may begin to participate.

*The 5th generation project at least is visionary, a bit idealistic
and very ambitious. The outside 'egos' don't have an equivalent
project in the United States. (i.e.-one that has substantial backing
from industry and government *and* has fairly substantial financing
for the next five to ten years).

The point is we are very early into the project.... wait a bit.... we 
may learn a thing or two if we are not energetic enough.



                                            cld

------------------------------

Date: 5 Aug 83 14:50:43-PDT (Fri)
From: decvax!microsoft!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: Japanese Effort
Article-I.D.: ssc-vax.376

Concerning your lack of concern about the Japanese:

They may not have the manpower now, but they have been hiring outside 
Japan and giving some pretty strong support to their researchers.  I'd
go in a minute if they made me an offer...

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: 5 Aug 83 12:51:22-PDT (Fri)
From: decvax!linus!utzoo!watmath!echrzanowski @ Ucb-Vax
Subject: 5th generation computers
Article-I.D.: watmath.5613

I recently had an opportunity to show a visiting prof from the
University of Kyoto around our facilities. During one of our
conversations I asked him about the 5th generation computers in Japan.
His response was that it was only a large government promotional
campaign and nothing more.  Sure they are building some new computers
but not to the degree that we are expected to believe.


If anyone else has any ideas or comments on 5th generation computers I
would like to see them.


                                   (watmath)!echrzanowski

------------------------------

Date: 6 Aug 83 13:01:14-PDT (Sat)
From: decvax!genrad!mit-eddie!smh @ Ucb-Vax
Subject: Re: 5th generation computers
Article-I.D.: mit-eddie.551

About the professor from Kyoto who claimed that the 5th generation 
project was only a big government promotional effort:

Maybe so, maybe not.  Weren't there some similar gentlemen in
Washington making similar assurances about a different matter around
7 Dec 1941?

------------------------------

Date: Sat, 6 Aug 83 19:42 EDT
From: Tim Finin <Tim%UPenn@UDel-Relay>
Subject: natural language translation


    ... A real translator essentially reads and understands the
    text in one language, and then generates the appropriate
    text in the other language.  Since understanding general
    texts requires huge amounts of real-world knowledge,
    unrestricted machine translation will arrive about the time
    AI programs can pass the Turing test.  In my opinion, this
    will be substantially longer than ten years....

The long-standing machine translation project at the University of
Texas at Austin is not a system based on a deep understanding of the
text being translated yet has been giving good results in translating
technical manuals from German to English. Slocum reported on its
status in the ACL Conference on Applied Natural Language Processing
held in Santa Monica in February 83.  In this case, good meant
requiring less post-translation editing than the output of human
translators.

------------------------------

Date: 6 Aug 83 11:09:57 EDT  (Sat)
From: Craig Stanfill <craig%umcp-cs@UDel-Relay>
Subject: NP-completeness and parallelism

David Rogers commented that in parallel computing it makes sense to
assume a processor with an infinite number of processing elements,
much as a Turing machine has an infinite amount of memory.  He then
goes on to suggest that this might allow the effective solution of
NP-hard problems.

If we do this, we need to consider the processor-complexity of our
algorithms, not just the time-complexity.  For example, are there
algorithms for NP-hard problems which are linear in time but NP-hard
in the number of processors?  I suspect this is the case.

Parallelism is not the solution to combinatorial explosions; it is
just as limiting to use 2**n processors as it is to use 2**n time.
However, the speedup is probably worth the effort; I would rather work
with a computer that uses 64,000 processors for one second than one
which uses 1 processor for 64,000 seconds.  Now, if we can just figure
out how to do this ...

------------------------------

Date: 7 Aug 83 16:57:17-PDT (Sun)
From: harpo!gummo!whuxlb!pyuxll!ech @ Ucb-Vax
Subject: Re: NP-completeness and parallelism
Article-I.D.: pyuxll.388

A couple of clarifications are in order here:

1. NP-completeness of a problem means, among other things, that the
   best known algorithm for that problem has exponential
   worst-case running time on a serial processor.  That is not
   intended as a technical definition, just an operational one.
   Moreover, all NP-complete problems are related by the fact
   that if a polynomial-time algorithm is ever discovered for
   any of them, then there is a polynomial-time algorithm for
   all, so the (highly oversimplified!) definition of
   NP-complete, as of this date, is "intrinsically exponential."

2. Perhaps obvious, but I will say so anyway: n processors yoked in
   parallel can't do better than to be n times faster than a
   single serial processor. For some problems (e.g. sorting),
   the speedup is less.

The bottom line is that the "biggest tractable problem" is
proportional to the log of the computing power at your disposal;
whether you increase the power by speeding up a serial processor or by
multiplying the number of processors is of small consequence.
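[A back-of-the-envelope sketch of this logarithmic limit, added in
editing; the operation rates below are hypothetical, chosen only for
illustration:]

```python
import math

def max_tractable_n(ops_per_second, seconds):
    """Largest problem size n for which a brute-force 2**n-step search
    finishes within the given budget: n = floor(log2(total operations))."""
    return int(math.log2(ops_per_second * seconds))

# One hypothetical million-op/sec processor running for one second:
base = max_tractable_n(1_000_000, 1)
# The same one-second budget with 64,000 such processors in parallel,
# assuming a perfect n-fold speedup:
parallel = max_tractable_n(1_000_000 * 64_000, 1)
# The 64,000-fold increase in computing power buys only about
# log2(64000) ~= 16 additional units of problem size.
print(base, parallel)
```

The gap between the two results is the same whether the extra power
comes from more processors or a faster serial machine.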

Now for the silver lining.  NP-complete problems often can be tweaked 
slightly to yield "easy" problems; if you have an NP-complete problem 
on your hands, go back and see if you can restrict it to a more
readily soluble problem.

Also, one can often restrict to a subproblem which, while it is still 
NP-complete, has a heuristic which generates solutions which aren't 
too far from optimal.  An example of this is the Travelling Salesman 
Problem.  Several years ago Lewis, Rosenkrantz, and Stearns at GE
Research described a heuristic that yielded solutions that were no
worse than twice the optimum if the graph obeyed the triangle 
inequality (i.e. getting from A to C costs no more than going from A
to B, then B to C), a perfectly reasonable constraint.  It seems to me
that the heuristic ran in O(n-squared) or O(n log n), but my memory
may be faulty; low-order polynomial in any case.
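[One standard construction that achieves the factor-of-two bound under
the triangle inequality, not necessarily the exact heuristic from the
GE paper, is to build a minimum spanning tree and visit its vertices
in preorder.  A sketch, added in editing:]

```python
import math

def mst_preorder_tour(points):
    """2-approximation for metric TSP: build a minimum spanning tree
    with Prim's algorithm (O(n^2) on a complete graph), then output
    the vertices in preorder.  Under the triangle inequality the tour
    costs at most twice the optimum, because it shortcuts a walk that
    traverses every MST edge twice, and the MST weight is a lower
    bound on the optimal tour."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    in_tree = [False] * n
    parent = [None] * n
    best = [math.inf] * n          # cheapest edge from vertex into the tree
    best[0] = 0.0
    children = [[] for _ in range(n)]
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] is not None:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist(u, v) < best[v]:
                best[v], parent[v] = dist(u, v), u
    tour, stack = [], [0]          # iterative preorder walk of the MST
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour
```

On four points at the corners of a unit square, for example, the
preorder tour visits all four vertices and its cost is within twice
the optimal tour length of 4.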

So: "think parallel" may or may not help.  "Think heuristic" may help
a lot!

=Ned=

------------------------------

Date: 5 Aug 83 17:56:34-PDT (Fri)
From: harpo!eagle!allegra!jdd @ Ucb-Vax
Subject: LOGO wanted
Article-I.D.: allegra.1721

A colleague of mine is looking for an implementation of LOGO, or any
similar language, under UNIX (one that already ran on both Suns and
PDP-11/23's would be ideal, but fat chance of that, eh?).  Failing
that, she would like to find a reasonably portable version (e.g., in
MacLisp).  In any case, if you have suggestions, please send them to
me and I shall forward.

Cheers, John ("This Has Been A Public Service Announcement")
DeTreville Bell Labs

------------------------------

Date: 5 Aug 83 13:15:42-PDT (Fri)
From: decvax!linus!utzoo!utcsrgv!kramer @ Ucb-Vax
Subject: Re: USENET and AI
Article-I.D.: utcsrgv.1898

We at the University of Toronto have a strong AI group that has been
in action for years:

        Area                       Major Project

  Knowledge Representation     PSN (procedural semantic network)

  Databases and Knowledge
  Representation               TAXIS

  Vision                       ALVEN (left ventricular motion understanding)

  Linguistics                  Speech acts


A major summary of our activities is being prepared to appear in the 
magazine for AAAI at some point.

Our research is being done on VAXen under UNIX*.  We are presently on
utcsrgv, and will soon (September) be moving to a VAX dedicated to AI
work.

------------------------------

Date: 6 Aug 83 13:40:14-PDT (Sat)
From: ihnp4!houxm!hocda!spanky!burl!duke!unc!bts @ Ucb-Vax
Subject: More AI on USENET only
Article-I.D.: unc.5673

     The Computer Science Department at UNC-Chapel Hill is another
site with (some) AI interests that is on USENET but not ARPANET.  We
are one of CSNET's phone sites, but this still doesn't allow us to FTP
files. (Yes, in part, this is a plea for those folks who can FTP to
share with the rest of us on USENET!)

     Our functional programming group has a couple of projects with
some AI overtones.  We have begun to look at AI style programming
languages for Gyula Mago's string reduction tree-machine.  This is a
small-grain parallel computer which executes Backus' FFP language.
We're also looking at automatic FP program transformations.

     Along with our neighbors at Duke University, we have some Prolog
programmers.  Right now, that's C-Prolog at UNC and NU7 UNIX Prolog at
Duke.

        Bruce Smith, UNC-Chapel Hill
        duke!unc!bts (USENET)
        bts.unc@udel-relay (other NETworks)

------------------------------

Date: 5 Aug 83 15:11:37 EDT  (Fri)
From: JACK MINKER <minker%umcp-cs@UDel-Relay>
Subject: AAAI Panel to Honor Alexander Lerner

        In conjunction with the AAAI meeting in Washington, D.C. a
session is being held to honor the 70th birthday of the Soviet
cyberneticist, Professor Alexander Lerner. The session will be held
on:

                Date: Tuesday, August 23, 1983
                Time: 7:00 PM
                Location: Georgetown Room, Concourse Level

        The session will consist of a brief description of Dr.
Lerner's career, followed by a panel discussion on:

                Future Directions in Artificial Intelligence

The following have agreed to be on the panel with me:

                Nils Nilsson
                John McCarthy
                Patrick Winston

Others will be invited to participate in the panel session.

        We hope that you will be able to join us to honor this
distinguished scientist.


                Jack Minker
                University of Maryland

------------------------------

End of AIList Digest
********************

∂09-Aug-83  1920	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #35
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Aug 83  19:20:06 PDT
Date: Tuesday, August 9, 1983 10:00AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #35
To: AIList@SRI-AI


AIList Digest            Tuesday, 9 Aug 1983       Volume 1 : Issue 35

Today's Topics:
  Expert Systems - Bibliography,
  Learning - Bibliography,
  Logic - Bibliography
----------------------------------------------------------------------

Date: Tue 9 Aug 83 09:04:09-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Bibliographies

The bibliographies in this and the following three issues were
extracted from the new-reports list put out by the Stanford Math/CS
Library.  I have sorted the citations as best I could from just the
titles.  Reports on planning and problem solving have not been pulled
out separately--they are listed here either by application domain
or by technique.

                                        -- Ken Laws

------------------------------

Date: Tue 9 Aug 83 08:44:04-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert Systems Bibliography

This is an update to the titles previously reported in AIList.

J.S. Aikins, J.C. Kunz, E.H. Shortliffe, and R.J. Fallat, PUFF: An
Expert System for Interpretation of Pulmonary Function Data.  Stanford
U. Comp. Sci. Dept., STAN-CS-82-931; Stanford U. Comp. Sci. Dept.
Heuristic Programming Project, HPP-82-013, 1982.  21p.

C. Apte, Expert Knowledge Management for Multi-Level Modelling.  
Rutgers U. Comp. Sci. Res. Lab., LCSR-TR-41, 1982.

B.G. Buchanan and R.O. Duda, Principles of Rule Based Expert Systems.
Stanford U. Comp. Sci. Dept., STAN-CS-82-926; Stanford U. Comp. Sci.
Dept. Heuristic Programming Project, HPP-82-014, 1982.  55p.

B.G. Buchanan, Partial Bibliography of Work on Expert Systems.  
Stanford U. Comp. Sci. Dept., STAN-CS-82-953; Stanford U. Comp. Sci.
Dept. Heuristic Programming Project, HPP-82-30, 1982.  13p.

A. Bundy and B. Silver, A Critical Survey of Rule Learning Programs.  
Edinburgh U. A.I. Dept., Res. Paper 169, 1982.

R. Davis, Expert Systems: Where are We? And Where Do We Go from Here?
M.I.T. A.I. Lab., Memo 665, 1982.

D. Dellarosa and L.E. Bourne, Jr., Text-Based Decisions: Changes in
the Availability of Facts due to Instructions and the Passage of
Time.  Colorado U. Cognitive Sci. Inst., Tech.  rpt. 115-ONR, 1982.

T.G. Dietterich, B. London, K. Clarkson, and G. Dromey, Learning and
Inductive Inference (a section of the Handbook of Artificial
Intelligence, edited by Paul R.  Cohen and Edward A. Feigenbaum).  
Stanford U. Comp. Sci. Dept., STAN-CS-82-913; Stanford U. Comp. Sci.
Dept. Heuristic Programming Project, HPP-82-010, 1982.  215p.

G.A. Drastal and C.A. Kulikowski, Knowledge Based Acquisition of Rules
for Medical Diagnosis.  Rutgers U. Comp. Sci. Res. Lab., CBM-TM-97, 
1982.

N.V. Findler, An Expert Subsystem Based on Generalized Production
Rules.  Arizona State U. Comp. Sci. Dept., TR-82-003, 1982.

N.V. Findler and R. Lo, A Note on the Functional Estimation of Values
of Hidden Variables--An Extended Module for Expert Systems.  Arizona
State U. Comp. Sci.  Dept., TR-82-004, 1982.

K.E. Huff and V.R. Lesser, Knowledge Based Command Understanding: An
Example for the Software Development Environment. Massachusetts U.
Comp. & Info. Sci. Dept., COINS Tech.Rpt. 82-06, 1982.

J.K. Kastner, S.M. Weiss, and C.A. Kulikowski, Treatment Selection and
Explanation in Expert Medical Consultation: Application to a Model of
Ocular Herpes Simplex.  Rutgers U. Comp.  Sci. Res. Lab., CBM-TR-132,
1982.

R.M. Keller, A Survey of Research in Strategy Acquisition.  Rutgers U.
Comp. Sci. Dept., DCS-TR-115, 1982.

V.E. Kelly and L.I. Steinberg, The Critter System: Analyzing Digital
Circuits by Propagating Behaviors and Specifications. Rutgers U.
Comp. Sci. Res. Lab., LCSR-TR-030, 1982.

J.J. King, An Investigation of Expert Systems Technology for
Automated Troubleshooting of Scientific Instrumentation.  Hewlett
Packard Co. Comp. Sci. Lab., CSL-82-012; Hewlett Packard Co. Comp.
Res.  Center, CRC-TR-82-002, 1982.

J.J. King, Artificial Intelligence Techniques for Device
Troubleshooting.  Hewlett Packard Co. Comp. Sci. Lab., CSL-82-009; 
Hewlett Packard Co. Comp. Res. Center, CRC-TR-82-004, 1982.

G.M.E. Lafue and T.M. Mitchell, Data Base Management Systems and
Expert Systems for CAD.  Rutgers U. Comp. Sci. Res. Lab., LCSR-TR-028,
1982.

R.J. Lytle, Site Characterization using Knowledge Engineering -- An
Approach for Improving Future Performance.  Cal U. Lawrence Livermore
Lab., UCID-19560, 1982.

T.M. Mitchell, P.E. Utgoff, and R. Banerji, Learning by
Experimentation: Acquiring and Modifying Problem Solving Heuristics.
Rutgers U. Comp. Sci. Res. Lab., LCSR-TR-31, 1982.

D.S. Nau, Expert Computer Systems, Computer, Vol. 16, No. 2, pp.
63-85, Feb. 1983.

D.S. Nau, J.A. Reggia, and P. Wang, Knowledge-Based Problem Solving
Without Production Rules, IEEE 1983 Trends and Applications Conf., pp.
105-108, May 1983.

P.G. Politakis, Using Empirical Analysis to Refine Expert System
Knowledge Bases.  Rutgers U. Comp. Sci. Res. Lab., CBM-TR-130, Ph.D.
Thesis, 1982.

J.A. Reggia, P. Wang, and D.S. Nau, Minimal Set Covers as a Model for
Diagnostic Problem Solving, Proc. First IEEE Comp. Soc. Int. Conf. on
Medical Computer Sci./Computational Medicine, Sept. 1982.

J.A. Reggia, D.S. Nau, and P. Wang, Diagnostic Expert Systems Based on
a Set Covering Model, Int. J. Man-Machine Studies, 1983.  To appear.

M.D. Rychener, Approaches to Knowledge Acquisition: The Instructable
Production System Project.  Carnegie Mellon U. Comp. Sci. Dept.,
1981.

R.D. Schachter, An Incentive Approach to Eliciting Probabilities.  
Cal. U., Berkeley. O.R. Center, ORC 82-09, 1982.

E.H. Shortliffe and L.M. Fagan, Expert Systems Research: Modeling the
Medical Decision Making Process.  Stanford U. Comp. Sci. Dept., 
STAN-CS-82-932; Stanford U. Comp. Sci. Dept. Heuristic Programming
Project, HPP-82-003, 1982.  23p.

M. Suwa, A.C. Scott, and E.H. Shortliffe, An Approach to Verifying
Completeness and Consistency in a Rule Based Expert System.  Stanford
U. Comp. Sci. Dept., STAN-CS-82-922, 1982.  19p.

J.A. Wald and C.J. Colbourn, Steiner Trees, Partial 2-Trees, and
Minimum IFI Networks.  Saskatchewan U. Computational Sci. Dept., Rpt.
82-06, 1982.

J.A. Wald and C.J. Colbourn, Steiner Trees in Probabilistic Networks.
Saskatchewan U. Computational Sci. Dept., Rpt. 82-07, 1982.

A. Walker, Automatic Generation of Explanations of Results from
Knowledge Bases.  IBM Watson Res. Center, RJ 3481, 1982.

J.W. Wallis and E.H. Shortliffe, Explanatory Power for Medical Expert
Systems: Studies in the Representation of Causal Relationships for
Clinical Consultation.  Stanford U. Comp.  Sci. Dept.,
STAN-CS-82-923, 1982.  37p.

S. Weiss, C. Kulikowski, C. Apte, and M. Uschold, Building Expert
Systems for Controlling Complex Programs.  Rutgers U. Comp. Sci. Res.
Lab., LCSR-TR-40, 1982.

Y. Yuchuan and C.A. Kulikowski, Multiple Strategies of Reasoning for
Expert Systems.  Rutgers U. Comp. Sci. Res. Lab., CBM-TR-131, 1982.

------------------------------

Date: Tue 9 Aug 83 08:47:25-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Learning Bibliography

Anderson, J.R. Farrell, R. Sauers, R.* Learning to plan in LISP.* 
Carnegie Mellon U. Psych.Dept.*1982.

Barber, G.*Supporting organizational problem solving with a 
workstation.* M.I.T. A.I. Lab.*Memo 681.*1982.

Bundy, A. Silver, B.*A critical survey of rule learning programs.* 
Edinburgh U. A.I. Dept.*Res. Paper 169.*1982.

Carbonell, J.G.* Learning by analogy: formulating and generalizing 
plans from past experience.* Carnegie Mellon U.  
Comp.Sci.Dept.*CMU-CS-82-126.*1982.

Carroll, J.M. Mack, R.L.* Metaphor, computing systems, and active 
learning.* IBM Watson Res. Center.*RC 9636.*1982.

Cohen, P.R.* Planning and problem solving.* Stanford U.  
Comp.Sci.Dept.*STAN-CS-82-939; Stanford U. Comp.Sci.Dept.  Heuristic 
Programming Project.*HPP-82-021.*1982.  61p.

Dellarosa, D. Bourne, L.E. Jr.*Text-based decisions: changes in the 
availability of facts due to instructions and the passage of time.* 
Colorado U. Cognitive Sci.Inst.* Tech.rpt. 115-ONR.*1982.

Ehrlich, K. Soloway, E.*An empirical investigation of the tacit plan 
knowledge in programming.* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
236.*1982.

Findler, N.V. Cromp, R.F.*An artificial intelligence technique to 
generate self-optimizing experimental designs.* Arizona State U.  
Comp.Sci.Dept.*TR-83-001.* 1983.

Good, D.I.* Reusable problem domain theories.* Texas U.  Computing 
Sci.Inst.*TR-031.*1982.

Kautz, H.A.*A first-order dynamic logic for planning.* Toronto U.  
Comp. Systems Res. Group.*CSRG-144.*1982.

Luger, G.F.*Some artificial intelligence techniques for describing 
problem solving behaviour.* Edinburgh U. A.I.  Dept.*Occasional Paper 
007.*1977.

Mitchell, T.M. Utgoff, P.E. Banerji, R.* Learning by experimentation:
acquiring and modifying problem solving heuristics.* Rutgers U.  
Comp.Sci.Res.Lab.*LCSR-TR-31.* 1982.

Moura, C.M.O. Casanova, M.A.* Design by example (preliminary report).*
Pontificia U., Rio de Janeiro.  Info.Dept.*No. 05/82.*1982.

Nadas, A.*A decision theoretic formulation of a training problem in 
speech recognition and a comparison of training by unconditional versus 
conditional maximum likelihood.* IBM Watson Res. Center.*RC 
9617.*1982.

Slotnick, D.L.* Time constrained computation.* Illinois U.  
Comp.Sci.Dept.*UIUCDCS-R-82-1090.*1982.

Tomita, M.* Learning of construction of finite automata from examples 
using hill climbing.  RR: regular set recognizer.* Carnegie Mellon U.
Comp.Sci.Dept.* CMU-CS-82-127.*1982.

Utgoff, P.E.*Acquisition of appropriate bias for inductive concept 
learning.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TM-02.*1982.

Winston, P.H. Binford, T.O. Katz, B. Lowry, M.* Learning physical 
descriptions from functional definitions, examples, and precedents.* 
M.I.T. A.I. Lab.*Memo 679.* 1982.

Winston, P.H.* Learning by augmenting rules and accumulating censors.*
M.I.T. A.I. Lab.*Memo 678.*1982.

------------------------------

Date: Tue 9 Aug 83 08:48:00-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Logic Bibliography

Ballantyne, M. Bledsoe, W.W. Doyle, J. Moore, R.C. Pattis, R.  
Rosenschein, S.J.* Automatic deduction (Chapter XII of Volume III of 
the Handbook of Artificial Intelligence, edited by Paul R. Cohen and 
Edward A. Feigenbaum).* Stanford U. Comp.Sci.Dept.*STAN-CS-82-937; 
Stanford U.  Comp.Sci.Dept. Heuristic Programming 
Project.*HPP-82-019.* 1982.  64p.

Bergstra, J. Chmielinska, A. Tiuryn, J.*" Hoare's logic is not 
complete when it could be".* M.I.T. Lab. for Comp.Sci.*TM-226.*1982.

Bergstra, J.A. Tucker, J.V.* Hoare's logic for programming languages 
with two data types.* Mathematisch Centrum.*IW 207/82.*1982.

Boehm, H.-J.*A logic for expressions with side-effects.* Cornell U.  
Comp.Sci.Dept.*Tech.Rpt. 81-478.*1981.

Bowen, D.L. (ed.)* DECsystem-10 Prolog user's manual.* Edinburgh U.  
A.I. Dept.*Occasional Paper 027.*1982.

Boyer, R.S. Moore, J.S.*A mechanical proof of the unsolvability of the
halting problem.* Texas U. Computing Sci. and Comp.Appl.Inst.  
Certifiable Minicomputer Project.*ICSCA-CMP-28.*1982.

Bundy, A. Welham, B.*Utility procedures in Prolog.* Edinburgh U. A.I.
Dept.*Occasional Paper 009.*1977.

Byrd, L. (ed.)*User's guide to EMAS Prolog.* Edinburgh U.  A.I.  
Dept.*Occasional Paper 026.*1981.

Demopoulos, W.*The rejection of truth conditional semantics by Putnam 
and Dummett.* Western Ontario U. Cognitive Science Centre.*COGMEM 
06.*1982.

Goto, E. Soma, T. Inada, N. Ida, T. Idesawa, M. Hiraki, K.  Suzuki, M.
Shimizu, K. Philipov, B.*Design of a Lisp machine - FLATS.* Tokyo U.
Info.Sci.Dept.*Tech.Rpt.  82-09.*1982.

Griswold, R.E.*The control of searching and backtracking in string 
pattern matching.* Arizona U. Comp.Sci.Dept.*TR 82-20.*1982.

Hagiya, M.*A proof description language and its reduction system.* 
Tokyo U. Info.Sci.Dept.*Tech.Rpt. 82-03.*1982.

Itai, A. Makowsky, J.*On the complexity of Herbrand's theorem.* 
Technion - Israel Inst. of Tech.  Comp.Sci.Dept.*Tech.Rpt. 243.*1982.

Kautz, H.A.*A first-order dynamic logic for planning.* Toronto U.  
Comp. Systems Res. Group.*CSRG-144.*1982.

Kozen, D.C.*Results on the propositional mu-calculus.* Aarhus U.  
Comp.Sci.Dept.*DAIMI PB-146.*1982.

Makowsky, J.A. Tiomkin, M.L.*An array assignment for propositional 
dynamic logic.* Technion - Israel Inst. of Tech.  
Comp.Sci.Dept.*Tech.Rpt. 234.*1982.

Manna, Z. Pnueli, A.*How to cook a temporal proof system for your pet 
language.* Stanford U. Comp.Sci.Dept.* STAN-CS-82-954.*1982.  14p.

Mosses, P.* Abstract semantic algebras!* Aarhus U.  
Comp.Sci.Dept.*DAIMI PB-145.*1982.

Orlowska, E.*Logic of vague concepts: applications of rough sets.* 
Polish Academy of Sciences. Inst. of Comp.Sci.*ICS PAS rpt. no.  
474.*1982.

Sakamura, K. Ishikawa, C.* High level machine design by dynamic 
tuning.* Tokyo U. Info.Sci.Dept.*Tech.Rpt.  82-07.*1982.

Sato, M.*Algebraic structure of symbolic expressions.* Tokyo U.  
Info.Sci.Dept.*Tech.Rpt. 82-05.*1982.

Shapiro, E.Y.* Alternation and the computational complexity of logic 
programs.* Yale U. Comp.Sci.Dept.*Res.Rpt. 239.* 1982.

Stabler, E.P. Jr.* Database and theorem prover designs for question 
answering systems.* Western Ontario U. Cognitive Science 
Centre.*COGMEM 12.*1982.

Sterling, L. Bundy, A.* Meta level inference and program 
verification.* Edinburgh U. A.I. Dept.*Res. Paper 168.* 1982.

Treleaven, P.C. Gouveia Lima, I.*Japan's fifth generation computer 
systems.* Newcastle upon Tyne U. Computing Lab.* No. 176.*1982.

------------------------------

End of AIList Digest
********************

∂09-Aug-83  2027	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #36
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Aug 83  20:26:59 PDT
Date: Tuesday, August 9, 1983 10:26AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #36
To: AIList@SRI-AI


AIList Digest            Tuesday, 9 Aug 1983       Volume 1 : Issue 36

Today's Topics:
  Robotics - Bibliography,
  Vision - Bibliography,
  Speech Understanding - Bibliography,
  Pattern Recognition - Bibliography
----------------------------------------------------------------------

Date: Tue 9 Aug 83 09:22:41-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Robotics Bibliography

Ambler, A.P. Popplestone, R.J. Kempf, K.G.*An experiment in the 
offline programming of robots.* Edinburgh U. A.I.  Dept.*Res. Paper 
170.*1982.

Ambler, A.P.* RAPT: an object level robot programming language.* 
Edinburgh U. A.I. Dept.*Res. Paper 172.*1982.

Brooks, R.A. Lozano-Perez, T.*A subdivision algorithm in configuration
space for findpath with rotation.* M.I.T.  A.I.  Lab.*Memo 684.*1982.

Brooks, R.A.*Solving the find path problem by representing free space 
as generalized cones.* M.I.T. A.I. Lab.*Memo 674.*1982.

Brooks, R.A.*Symbolic error analysis and robot planning.* M.I.T. A.I.
Lab.*Memo 685.*1982.

Cameron, S.* Body models for every body.* Edinburgh U.  A.I.  
Dept.*Working Paper 107.*1982.

Gueting, R.H. Wood, D.*Finding rectangle intersections by 
divide-and-conquer.* McMaster U. Comp.Sci. Unit.* Comp.Sci. Tech.Rpt.
No. 82-CS-04.*1982.

Hofri, M.*BIN packing: an analysis of the next fit algorithm.* 
Technion - Israel Inst. of Tech.  Comp.Sci.Dept.*Tech.Rpt. 242.*1982.

Hollerbach, J.M.*Computers, brains, and the control of movement.* 
M.I.T. A.I. Lab.*Memo 686.*1982.

Hollerbach, J.M.*Dynamic scaling of manipulator trajectories.* M.I.T.
A.I. Lab.*Memo 700.*1982.

Hollerbach, J.M.*Workshop on the design and control of dexterous hands
(held at the MIT Artificial Intelligence Laboratory on November 5-6,
1981).* M.I.T. A.I. Lab.*Memo 661.*1982.

Hopcroft, J.E. Joseph, D.A. Whitesides, S.H.*On the movement of robot 
arms in 2-dimensional bounded regions.*Cornell U.  
Comp.Sci.Dept.*Tech.Rpt. 82-486.*1982.

Kirkpatrick, D.* Optimal search in planar subdivisions.* British 
Columbia U. Comp.Sci.Dept.*Tech.Rpt. 81-13.*1981.

Kouta, M.M. O'Rourke, J.*Fast algorithms for polygon decomposition.* 
Johns Hopkins U. E.E. & Comp.Sci.Dept.* Tech.Rpt. 82/10.*1982.

Koutsou, A.*A survey of model based robot programming languages.* 
Edinburgh U. A.I. Dept.*Working Paper 108.* 1981.

Lozano-Perez, T.* Robot programming.* M.I.T. A.I. Lab.*Memo 698.*1982.

Mason, M.T.* Manipulator grasping and pushing operations.* M.I.T.  
A.I. Lab.*TR-690, Ph.D. Thesis.*1982.

Mavaddat, F.* WATSON/I: WATerloo's SONically guided robot.*Waterloo U.
Comp.Sci.Dept.*Res.Rpt. CS-82-16.*1982.

Moran, S.*On the densest packing of circles in convex figures.* 
Technion - Israel Inst. of Tech. Comp.Sci.Dept.* Tech.Rpt. 241.*1982.

Mujtaba, M.S.* Motion sequencing of manipulators.* Stanford U.  
Comp.Sci.Dept.*STAN-CS-82-917, Ph.D. Thesis (Department of Industrial 
Engineering and Engineering Management).*1982.  291p.

Myers, E.W.*An O(ElogE+I) expected time algorithm for the planar 
segment intersection problem.* Arizona U.  Comp.Sci.Dept.*TR 
82-03.*1982.

Popplestone, R.J.*Discussion document on body modelling for robot 
languages.* Edinburgh U. A.I. Dept.*Working Paper 110.*1982.

Shneier, M.* Hierarchical sensory processes for 3-D robot vision.* 
Maryland U. Comp.Sci. Center.*TR-1165.*1982.

Slotnick, D.L.* Time constrained computation.* Illinois U.  
Comp.Sci.Dept.*UIUCDCS-R-82-1090.*1982.

Srinivasan, C.V.*Notes on object centered associative memory 
organization.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TR-19.*1981.

Taylor, R.H.*An integrated robot system architecture.* IBM Watson Res.
Center.*RC 9824.*1983.

Yin, B.*A proposal for studying how to use vision within a robot 
language which reasons about spatial relationships.*Edinburgh U. A.I.
Dept.*Working Paper 109.*1982.

------------------------------

Date: Tue 9 Aug 83 09:54:44-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Vision Bibliography


A. Athukorala, Some Hardware for Computer Vision.  Edinburgh U. A.I.
Dept., Working Paper 102, 1981.

H.H. Baker, Depth from Edge and Intensity Based Stereo.  Ph.D. Thesis,
Stanford U. Comp. Sci. Dept., STAN-CS-82-930; Stanford U. Comp. Sci.
Dept. A.I. Lab., AIM-347, 1982, 90p.  Based on a Ph.D. thesis
submitted to the University of Illinois at Urbana-Champaign in
September of 1981.

R.J. Beattie, Edge Detection for Semantically Based Early Visual
Processing.  Edinburgh U. A.I. Dept., Res. Paper 174, 1982.

M. Brady and W.E.L. Grimson, The Perception of Subjective Surfaces.  
M.I.T. A.I. Lab., Memo 666, 1981.

I. Chakravarty, The Use of Characteristic Views as a Basis for
Recognition of Three-Dimensional Objects.  Rensselaer Polytechnic
Inst. Image Processing Lab., IPL-TR-034, 1982.

L. Dreschler, Ermittlung markanter Punkte auf den Bildern bewegter
Objekte und Berechnung einer 3D-Beschreibung auf dieser Grundlage
[Extraction of salient points in images of moving objects and
computation of a 3-D description on that basis].  Hamburg U.
Fachbereich Informatik, Bericht Nr. 83, 1981.

J.-O. Eklundh, Knowledge Based Image Analysis: Some Aspects of Images
using Other Types of Information.  Royal Inst. of Tech., Stockholm,
Num.Anal. & Computing Sci. Dept., TRITA-NA-8206, 1982.

R.B. Fisher, A Structured Pattern Matching Approach to Intermediate
Level Vision.  Edinburgh U. A.I. Dept., Res. Paper 177, 1982.

W.B. Gevarter, An Overview of Computer Vision.  U.S. National Bureau
of Standards, NBSIR 82-2582, 1982.

W.E.L. Grimson, The Implicit Constraints of the Primal Sketch.  M.I.T.
A.I. Lab., Memo 663, 1981.

W.I. Grosky, Towards a Data Model for Integrated Pictorial Databases.
Wayne State U. Comp. Sci. Dept., CSC-82-012, 1982.

R.F. Hauser, Some experiments with stochastic edge detection, IBM
Watson Res. Center, RZ 1210, 1983.

E.C. Hildreth and S. Ullman, The Measurement of Visual Motion.  M.I.T.
A.I. Lab., Memo 699, 1982.

T. Kanade (ed.), Vision.  Stanford U. Comp. Sci. Dept., 
STAN-CS-82-938; Stanford U. Comp. Sci. Dept. Heuristic Programming
Project, HPP-82-020, 1982, 220p.  Assistant Editor: Steven A. Shafer.
Contributors:  David A. Bourne, Rodney Brooks, Nancy H. Cornelius,
James L. Crowley, Hiromichi Fujisawa, Martin Herman, Fuminobu Komura,
Bruce D. Lucas, Steven A. Shafer, David R. Smith, Steven L. Tanimoto, 
Charles E. Thorpe.

A. Krzesinski, The normalised convolution algorithm, IBM Watson Res.
Center, RC 9834, 1983.

M.A. Lavin and L.I. Lieberman, AML/V: An Industrial Machine Vision
Programming System.  IBM Watson Res. Center, RC 9390, 1982.

C.N. Liu, M. Fatemi, and R.C. Waag, Digital Processing for Improvement
of Ultrasonic Abdominal Images.  IBM Watson Res. Center, RC 9499, 
1982.

D. Montuno and A. Fournier, Detecting intersection among star
polygons, Toronto U. Comp. Systems Res. Group, CSRG-146, 1982.

T.N. Mudge and T.A. Rahman, Efficiency of feature dependent
algorithms for the parallel processing of images, Michigan U.
Computing Res.  Lab., CRL-TR-11-83, 1983.

T.M. Nicholl, D.T. Lee, Y.Z. Liao, and C.K. Wong, Constructing the X-Y
convex hull of a set of X-Y polygons, IBM Watson Res. Center, RC 9737,
1982.

E. Pervin and J.A. Webb, Quaternions in computer vision and robotics, 
Carnegie Mellon U. Comp. Sci. Dept., CMU-CS-82-150, 1982.

T. Poggio, H.K. Nishihara, and K.R.K. Nielsen, Zero Crossings and
Spatiotemporal Interpolation in Vision: Aliasing and Electrical
Coupling Between Sensors.  M.I.T. A.I. Lab., Memo 675, 1982.

T. Poggio, Visual Algorithms.  M.I.T. A.I. Lab., Memo 683, 1982.

W. Richards, H.K. Nishihara, and B. Dawson, CARTOON: A Biologically
Motivated Edge Detection Algorithm.  M.I.T. A.I. Lab., Memo 668, 1982.

A. Rosenfeld, Computer vision, Maryland U. Comp. Sci. Center, TR-1157,
1982.

A. Rosenfeld, Trends and perspectives in computer vision, Maryland U.
Comp. Sci. Center, TR-1194, 1982.

I.K. Sethi and R. Jain, Determining Three Dimensional Structure of
Rotating Objects.  Wayne State U. Comp. Sci. Dept., CSC-83-001, 1983.

M. Shneier, Hierarchical sensory processes for 3-D robot vision, 
Maryland U. Comp. Sci. Center, TR-1165, 1982.

C.L. Sidner, Protocols of Users Manipulating Visually Presented
Information with Natural Language.  Bolt, Beranek and Newman, Inc.,
BBN 5128, 1982.

R.W. Sjoberg, Atmospheric Effects in Satellite Imaging of Mountainous
Terrain.  M.I.T. A.I. Lab., TR-688.

S.N. Srihari, Pyramid representations for solids, SUNY, Buffalo, Comp.
Sci. Dept., Tech.Rpt. 200, 1983.

K.A. Stevens, Implementation of a Theory for Inferring Surface Shape
from Contours.  M.I.T. A.I. Lab., Memo 676, 1982.

D. Terzopoulos, Multi-Level Reconstruction of Visual Surfaces:
Variational Principles and Finite Element Representations.  M.I.T.
A.I. Lab., Memo 671, 1982.

R.Y. Tsai, Multiframe Image Point Matching and 3-D Surface
Reconstruction.  IBM Watson Res. Center, RC 9398, 1982.

R.Y. Tsai and T.S. Huang, Analysis of 3-D Time Varying Scene.  IBM
Watson Res. Center, RC 9479, 1982.

R.Y. Tsai, 3-D inference from the motion parallax of a conic arc and
a point in two perspective views, IBM Watson Res. Center, RC 9818,
1983.

R.Y. Tsai, Estimating 3-D motion parameters and object surface
structures from the image motion of conic arcs, I: theoretical basis,
IBM Watson Res. Center, RC 9787, 1983.

R.Y. Tsai, Estimating 3-D motion parameters and object surface
structures from the image motion of conic arcs, IBM Watson Res.
Center, RC 9819, 1983.

L. Uhr and L. Schmitt, The Several Steps from ICON to SYMBOL, using
Structured Cone/Pyramids.  Wisconsin U. Comp. Sci. Dept., Tech.Rpt.
481, 1982.

P.H. Winston, T.O. Binford, B. Katz, and M. Lowry, Learning Physical
Descriptions from Functional Definitions, Examples, and Precedents.
M.I.T. A.I. Lab., Memo 679, 1982.

M.-M. Yau, Generating quadtrees of cross-sections from octrees, SUNY,
Buffalo, Comp. Sci. Dept., Tech.Rpt. 199, 1982.

B. Yin, A Proposal for Studying How to Use Vision Within a Robot
Language which Reasons about Spatial Relationships.  Edinburgh U.
A.I. Dept., Working Paper 109, 1982.

------------------------------

Date: Tue 9 Aug 83 08:54:15-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Speech Understanding Bibliography

Lucassen, J.M.*Discovering phonemic base forms automatically: an 
information theoretic approach.* IBM Watson Res. Center.*RC 
9833.*1983.

Waibel, A.*Towards very large vocabulary word recognition.* Carnegie 
Mellon U. Comp.Sci.Dept.*CMU-CS-82-144.*1982.

------------------------------

Date: Tue 9 Aug 83 08:49:15-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Pattern Recognition Bibliography

Barnes, E.R.*An algorithm for separating patterns by ellipsoids.* IBM 
Watson Res. Center.*RC 9500.*1982.

Chiang, W.P. Teorey, T.J.*A method for database record clustering.* 
Michigan U. Computing Res.Lab.* CRL-TR-05-82.*1982.

Findler, N.V. Cromp, R.F.*An artificial intelligence technique to 
generate self-optimizing experimental designs.* Arizona State U.  
Comp.Sci.Dept.*TR-83-001.* 1983.

Findler, N.V. Lo, R.*A note on the functional estimation of values of 
hidden variables--an extended module for expert systems.* Arizona 
State U. Comp.Sci.Dept.*TR-82-004.* 1982.

Jenkins, J.M.* Symposium on computer applications to cardiology:  
introduction and automated electrocardiography and arrhythmia 
monitoring.* Michigan U. Computing Res.Lab.*CRL-TR-20-83.*1983.

Kumar, V. Kanal, L.N.* Branch and bound formulations for sequential 
and parallel And/Or tree search and their applications to pattern 
analysis and game playing.* Maryland U. Comp.Sci.  
Center.*TR-1144.*1982.

O'Rourke, J.*The signature of a curve and its applications to pattern 
recognition (preliminary version).* Johns Hopkins U. E.E. & 
Comp.Sci.Dept.*Tech.Rpt. 82/09.*1982.

Seidel, R.*A convex hull algorithm for point sets in even dimensions.*
British Columbia U. Comp.Sci.Dept.*Tech.Rpt.  81-14.*1981.

Varah, J.M.*On fitting exponentials by nonlinear least squares.* 
British Columbia U. Comp.Sci.Dept.*Tech.Rpt.  82-02.*1982.

------------------------------

End of AIList Digest
********************

∂09-Aug-83  2149	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #37
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Aug 83  21:47:14 PDT
Date: Tuesday, August 9, 1983 10:33AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #37
To: AIList@SRI-AI


AIList Digest            Tuesday, 9 Aug 1983       Volume 1 : Issue 37

Today's Topics:
  Representation - Bibliography,
  Natural Language Understanding - Bibliography,
  Cognition - Bibliography
----------------------------------------------------------------------

Date: Tue 9 Aug 83 08:51:05-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Representation Bibliography

Abdallah, M.A.N.* Data types as algorithms.* Waterloo U.  
Comp.Sci.Dept.*Res.Rpt. CS-82-10.*1982.

Alterman, R.E.*A system of seven coherence relations for 
hierarchically organizing event concepts in text.* Texas U.  
Comp.Sci.Dept.*TR-209.*1982.

Amit, Y.*Review of conceptual dependency theory.* Edinburgh U. A.I.  
Dept.*Occasional Paper 008.*1977.

Andrews, G.R. Schneider, F.B.*Concepts and notations for concurrent 
programming.* Arizona U. Comp.Sci.Dept.*TR 82-12.*1982.

Ericson, L.W.* DPL-82: a language for distributed processing.* 
Carnegie Mellon U. Comp.Sci.Dept.* CMU-CS-82-129.*1982.

Forbus, K.D.* Qualitative process theory.* M.I.T. A.I.  Lab.*Memo 
664.*1982.

Katz, R.H. Lehman, T.J.*Storage structures for versions and 
alternatives.* Wisconsin U. Comp.Sci.Dept.*Tech.Rpt.  479.*1982.

Lucas, P. Risch, T.*Representation of factual information by equations
and their evaluation.* IBM Watson Res.  Center.*RJ 3362.*1982.

Luger, G.F.*Some artificial intelligence techniques for describing 
problem solving behaviour.* Edinburgh U. A.I.  Dept.*Occasional Paper 
007.*1977.

Lytinen, S.L. Schank, R.C.* Representation and translation.*Yale U.  
Comp.Sci.Dept.*Res.Rpt. 234.*1982.

Mahr, B. Makowsky, J.A.*Characterizing specification languages which 
admit initial semantics.* Technion - Israel Inst. of Tech.  
Comp.Sci.Dept.*Tech.Rpt. 232.*1982.

Mercer, R.E. Reiter, R.*The representation of presuppositions using 
defaults.* British Columbia U.  Comp.Sci.Dept.*Tech.Rpt. 82-01.*1982.

Orlowska, E. Pawlak, Z.*Representation of nondeterministic 
information.* Polish Academy of Sciences. Inst. of Comp.Sci.*ICS PAS 
Rpt. No. 450.*1981.

Orlowska, E.*Logic of vague concepts: applications of rough sets.* 
Polish Academy of Sciences. Inst. of Comp.Sci.*ICS PAS Rpt. No.  
474.*1982.

Orlowska, E.*Semantics of vague concepts: application of rough sets.* 
Polish Academy of Sciences. Inst. of Comp.Sci.*ICS PAS Rpt. No.  
469.*1982.

Pawlak, Z.* Rough functions.* Polish Academy of Sciences.  Inst. of 
Comp.Sci.*ICS PAS Rpt. No. 467.*1981.

Pawlak, Z.* Rough sets: power set hierarchy.* Polish Academy of 
Sciences. Inst. of Comp.Sci.*ICS PAS Rpt. No.  470.*1982.

Pawlak, Z.*About conflicts.* Polish Academy of Sciences.  Inst. of 
Comp.Sci.*ICS PAS Rpt. No. 451.*1981.

Pawlak, Z.*Some remarks about rough sets.* Polish Academy of Sciences.
Inst. of Comp.Sci.*ICS PAS Rpt. No. 456.* 1982.

Sridharan, N.S.*A flexible structure for knowledge: examples of legal 
concepts.* Rutgers U. Comp.Sci.Res.Lab.* LRP-TR-014.*1982.

Srinivasan, C.V.*Notes on object centered associative memory 
organization.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TR-19.*1981.

Weiser, M. Israel, B. Stanfill, C. Trigg, R. Wood, R.* Working papers 
in knowledge representation and acquisition.* Maryland U. Comp.Sci.  
Center.*TR-1175.* 1982.  Contents: Israel, B. Weiser, M.*Towards a 
perceptual system for monitoring computer behavior; Stanfill, C.* 
Geometry to causality: a hierarchy of subdomains for machine world; 
Trigg, R.*Acquiring knowledge for an electronic textbook; Wood, R.J.*A
model for interactive program synthesis.

Winston, P.H. Binford, T.O. Katz, B. Lowry, M.* Learning physical 
descriptions from functional definitions, examples, and precedents.* 
M.I.T. A.I. Lab.*Memo 679.* 1982.

Woods, W.A. Bates, M. Bobrow, R.J. Goodman, B. Israel, D.  Schmolze, 
J. Schudy, R. Sidner, C.L. Vilain, M.*Research in knowledge 
representation for natural language understanding. Annual report: 1 
September 1981 to 31 August 1982.* Bolt, Beranek and Newman, Inc.*BBN 
rpt.  5188.*1982.

------------------------------

Date: Tue 9 Aug 83 08:46:47-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Natural Language Understanding Bibliography

Allen, E.M.*Acquiring linguistic knowledge for word experts.* Maryland
U. Comp.Sci. Center.*TR-1166.

Alterman, R.E.*A system of seven coherence relations for 
hierarchically organizing event concepts in text.* Texas U.  
Comp.Sci.Dept.*TR-209.*1982.

Amit, Y.*Review of conceptual dependency theory.* Edinburgh U. A.I.  
Dept.*Occasional Paper 008.*1977.

Ballard, B.W. Lusth, J.C.*An English-language processing system which 
'learns' about new domains.* Duke U.  Comp.Sci.Dept.*CS-1982-18.*1982.

Ballard, B.W.*A "domain class" approach to transportable natural 
language processing.* Duke U. Comp.Sci.Dept.* CS-1982-11.*1982.

Bancilhon, F. Richard, P.* TQL, a textual query language.* 
INRIA.*Rapport de Recherche 145.*1982.

Barr, A. Cohen, P.R. Fagan, L.*Understanding spoken language (Chapter 
V of Volume I of the Handbook of Artificial Intelligence, edited by 
Avron Barr and Edward A. Feigenbaum).* Stanford U. Comp.Sci.Dept.* 
STAN-CS-82-934; Stanford U. Comp.Sci.Dept. Heuristic Programming 
Project.*HPP-82-016.*1982.  52p.

Black, J.B. Galambos, J.A. Read, S.* Story comprehension.* Yale U.  
Cognitive Science Program.*Tech.Rpt. 017.*1982.

Black, J.B. Seifert, C.M.*The psychological study of story 
understanding.* Yale U. Cognitive Science Program.* Tech.Rpt.  
018.*1982.

Black, J.B. Wilkes-Gibbs, D. Gibbs, R.W. Jr.*What writers need to know
that they don't know they need to know.* Yale U. Cognitive Science
Program.*Tech.Rpt. 08.*1981.

Carbonell, J.G.* Meta-language utterances in purposive discourse.* 
Carnegie Mellon U. Comp.Sci.Dept.* CMU-CS-82-125.*1982.

Clinkenbeard, D.J.*A quite general text analysis method.* Colorado U.
Comp.Sci.Dept.*CU-CS-237-82.*1982.

Culik, K. Natour, I.A.* Ambiguity types of formal grammars.*Wayne 
State U. Comp.Sci.Dept.*CSC-82-014.*1982.

Dellarosa, D. Bourne, L.E. Jr.*Text-based decisions: changes in the 
availability of facts due to instructions and the passage of time.* 
Colorado U. Cognitive Sci.Inst.* Tech.rpt. 115-ONR.*1982.

Denny, J.P.* Whorf's Algonquian: old evidence and new ideas concerning
linguistic relativity.* Western Ontario U.  Cognitive Science
Centre.*COGMEM 11.*1982.

Dolev, D. Reischuk, R. Strong, H.R.*'Eventual' is earlier than 
'immediate'.* IBM Watson Res. Center.*RJ 3632.*1982.

Dyer, M.G.*In-depth understanding: a computer model of integrated 
processing for narrative comprehension.* Yale U.  
Comp.Sci.Dept.*Res.Rpt. 219, Ph.D. Thesis. Dyer, M.G.* 1982.

Gawron, J.M. King, J.J. Lamping, J. Loebner, J.J. Paulson, E.A.  
Pullum, G.K. Sag, I.A. Wasow, T.A.*Processing English with a 
generalized phrase structure grammar.* Hewlett Packard Co.  
Comp.Sci.Lab.*CSL-82-005.*1982.

Greene, B.R. Fujisaki, T.*A probabilistic approach for dealing with 
ambiguous syntactic structures.* IBM Watson Res. Center.*RC 
9764.*1982.

Hartmanis, J.*On Goedel speed-up and succinctness of language 
representation.* Cornell U. Comp.Sci.Dept.* Tech.Rpt. 82-485.*1982.

Israel, D.J.*On interpreting semantic network formalisms.* Bolt, 
Beranek and Newman, Inc.*BBN rpt. 5117.*1982.

Jensen, K. Heidorn, G.E.*The fitted parse: 100% parsing capability in 
a syntactic grammar of English.* IBM Watson Res. Center.*RC 
9729.*1982.

Johnson, P.N. Robertson, S.P.* MAGPIE: a goal based model of 
conversation.* Yale U. Comp.Sci.Dept.*Res.Rpt. 206.* 1981.

Katz, B. Winston, P.H.* Parsing and generating English using 
commutative transformations.* M.I.T. A.I. Lab.* Memo 677.*1982.

Lamping, J. King, J.J.* LM/GPSG--a prototype workstation for 
linguists.* Hewlett Packard Co. Comp.Sci.Lab.* CSL-82-011; Hewlett 
Packard Co. Comp.Res. Center.* CRC-Tr-82-006.*1982.

Lehnert, W. Dyer, M.G. Johnson, P.N. Yang, C.J. Harley, S.* BORIS: an 
experiment in in-depth understanding of narratives.* Yale U.  
Comp.Sci.Dept.*Res.Rpt. 188.*1981.

Lehnert, W.G.* Affect units and narrative summarization.* Yale U.  
Comp.Sci.Dept.*Res.Rpt. 179.*1980.

Lytinen, S.L. Schank, R.C.* Representation and translation.* Yale U.  
Comp.Sci.Dept.*Res.Rpt. 234.*1982.

Mann, W.C. Matthiessen, C.M.I.M.*Two discourse generators, by William 
C. Mann; A grammar and a lexicon for a text production system, by 
Christian M.I.M. Matthiessen.* Southern Cal U.  
Info.Sci.Inst.*ISI/RR-82-102.*1982.

Mann, W.C.*The anatomy of a systemic choice.* Southern Cal U.  
Info.Sci.Inst.*ISI/RR-82-104.*1982.

Martin, P.A.*Integrating local information to understand dialog.* 
Stanford U. Comp.Sci.Dept.*STAN-CS-82-941; Stanford U. Comp.Sci.Dept.
A.I. Lab.*AIM-348, Ph.D.  Thesis. Martin, P.A.*1982.  125p.

Miller, L.A.*"Natural language text are not necessarily grammatical 
and unambiguous. Or even complete."* IBM Watson Res. Center.*RC 
9441.*1982.

Misek-Falkoff, L.D.*The new field of software linguistics: an 
early-bird view.* IBM Watson Res. Center.*RC 9421.* 1982.

Misek-Falkoff, L.D.* Software science and natural language: a 
unification of Halstead's counting rules for programs and English 
text, and a claim space approach to extensions.* IBM Watson Res.  
Center.*RC 9420.*1982.

Mueckstein, E.-M.M.* Parsing for collecting syntactic statistics.* IBM
Watson Res. Center.*RC 9836.*1983.

Mueckstein, E.M.M.* Q-Trans: query translation into English.* IBM 
Watson Res. Center.*RC 9841.*1983.

Perlman, G.* Natural artificial languages: low-level processes.* Cal.
U., San Diego. Human Info. Proces.  Center.*Rpt. 8208.*1982.

Peterson, J.L.* Webster's seventh new collegiate dictionary: a 
computer-readable file format.* Texas U.  Comp.Sci.Dept.*TR-196.*1982.

Reiser, B.J. Black, J.B. Lehnert, W.G.* Thematic knowledge structures 
in the understanding and generation of narratives.* Yale U. Cognitive 
Science Program.* Tech.Rpt. 016.*1982.

Reiser, B.J. Black, J.B.*Processing and structural models of 
comprehension.* Yale U. Cognitive Science Program.* Tech.Rpt.  
012.*1982.

Schank, R.C. Burstein, M.*Modeling memory for language understanding.*
Yale U. Comp.Sci.Dept.*Res.Rpt. 220.* 1982.

Schank, R.C. Collins, G.C. Davis, E. Johnson, P.N. Lytinen, S.  
Reiser, B.J.*What's the point?* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
205.*1981.

Shwartz, S.P.*The search for pronominal referents.* Yale U. Cognitive 
Science Program.*Tech.Rpt. 10.*1981.

Sidner, C.L. Bates, M.*Requirements for natural language understanding
in a system with graphic displays.* Bolt, Beranek and Newman, Inc.*BBN
rpt. 5242.*1983.

Sidner, C.L.* Protocols of users manipulating visually presented 
information with natural language.* Bolt, Beranek and Newman, Inc.*BBN
rpt. 5128.*1982.

Smith, D.E.* FOCUSER: a strategic interaction paradigm for language 
acquisition.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TR-36, Ph.D. Thesis.
Smith, D.E.*1982.

Stabler, E.P. Jr.* Programs, rule governed behavior and grammars in 
theories of language acquisition and use.* Western Ontario U.  
Cognitive Science Centre.*COGMEM 07.* 1982.

Usui, T.*An experimental grammar for translating English to Japanese.*
Texas U. Comp.Sci.Dept.*TR-201.*1982.

Wilensky, R.*Talking to UNIX in English: an overview of an on-line 
consultant.* California U., Berkeley.  Comp.Sci.Div.*UCB/CSD 
82/104.*1982.

Woods, W.A. Bates, M. Bobrow, R.J. Goodman, B. Israel, D.  Schmolze, 
J. Schudy, R. Sidner, C.L. Vilain, M.*Research in knowledge 
representation for natural language understanding. Annual report: 1 
September 1981 to 31 August 1982.* Bolt, Beranek and Newman, Inc.*BBN 
rpt.  5188.*1982.

------------------------------

Date: Tue 9 Aug 83 08:45:21-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Cognition Bibliography

Ballard, B.W. Lusth, J.C.*An English-language processing system which 
'learns' about new domains.* Duke U. Comp.Sci.Dept.*CS-1982-18.*1982.

Barr, A.* Artificial intelligence: cognition as computation.* Stanford
U. Comp.Sci.Dept.*STAN-CS-82-956; Stanford U. Comp.Sci.Dept.  
Heuristic Programming Project.* HPP-82-29.*1982.  28p.

Black, J.B. Galambos, J.A. Reiser, B.J.*Coordinating discovery and 
verification research.* Yale U. Cognitive Science Program.*Tech.Rpt.  
013.*1982.

Black, J.B. Galambos, J.A. Read, S.* Story comprehension.* Yale U.  
Cognitive Science Program.*Tech.Rpt. 017.*1982.

Black, J.B. Seifert, C.M.*The psychological study of story 
understanding.* Yale U. Cognitive Science Program.* Tech.Rpt.  
018.*1982.

Black, J.B. Wilkes-Gibbs, D. Gibbs, R.W. Jr.*What writers need to know
that they don't know they need to know.* Yale U. Cognitive Science
Program.*Tech.Rpt. 08.*1981.

Bonar, J. Soloway, E.*Uncovering principles of novice programming.* 
Yale U. Comp.Sci.Dept.*Res.Rpt. 240.*1982.

Carbonell, J.G.* Learning by analogy: formulating and generalizing 
plans from past experience.* Carnegie Mellon U.  
Comp.Sci.Dept.*CMU-CS-82-126.*1982.

Carroll, J.M. Mack, R.L.* Metaphor, computing systems, and active 
learning.* IBM Watson Res. Center.*RC 9636.*1982.

Cohen, P.R.*Models of cognition (Chapter XI of Volume III of the 
Handbook of Artificial Intelligence, edited by Paul R. Cohen and 
Edward A. Feigenbaum).* Stanford U.  Comp.Sci.Dept.*STAN-CS-82-936; 
Stanford U. Comp.Sci.Dept.  Heuristic Programming 
Project.*HPP-82-018.*1982.  87p.

Conrad, M.* Microscopic macroscopic interface in biological 
information processing.* Wayne State U. Comp.Sci.Dept.* 
CSC-83-003.*1983.

Doyle, J.*The foundations of psychology: a logico-computational 
inquiry into the concept of mind.* Carnegie Mellon U.  
Comp.Sci.Dept.*CMU-CS-82-149.*1982.

Dyer, M.G.*In-depth understanding: a computer model of integrated 
processing for narrative comprehension.* Yale U.  
Comp.Sci.Dept.*Res.Rpt. 219, Ph.D. Thesis. Dyer, M.G.* 1982.

Ehrlich, K. Soloway, E.*An empirical investigation of the tacit plan 
knowledge in programming.* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
236.*1982.

Ericsson, K.A. Chase, W.G.* Exceptional memory.* Carnegie Mellon U.  
Psych.Dept.*Tech.Rpt. 08.*1982.

Firdman, H.E.*Toward a theory of cognizing systems: the search for an 
integrated theory of AI.* Hewlett Packard Co.  
Comp.Sci.Lab.*CSL-82-007; Hewlett Packard Co.  Comp.Res.  
Center.*CRC-TR-82-002.*1982.

Galambos, J.A.*Normative studies of six characteristics of our 
knowledge of common activities.* Yale U. Cognitive Science 
Program.*Tech.Rpt. 014.*1982.

Good, D.I.* Reusable problem domain theories.* Texas U.  Computing 
Sci.Inst.*TR-031.*1982.

Hollerbach, J.M.*Computers, brains, and the control of movement.* 
M.I.T. A.I. Lab.*Memo 686.*1982.

Israel, D.J.*On interpreting semantic network formalisms.* Bolt, 
Beranek and Newman, Inc.*BBN rpt. 5117.*1982.

Kampfner, R.R. Conrad, M.*Sequential behavior and stability properties
of enzymatic neuron networks.* Wayne State U.  
Comp.Sci.Dept.*CSC-82-011.*1982.

Lansner, A.* Information processing in a network of model neurons: a 
computer simulation study.* Royal Inst. of Tech., Stockholm.  
Num.Anal. & Computing Sci.Dept.* TRITA-NA-8211.*1982.

Mather, J.A.* Saccadic eye movements to seen and unseen targets:  
preprogramming and sensory input in motor control.* Western Ontario U.
Cognitive Science Centre.* COGMEM 10.*1982.

Mitchell, T.M. Utgoff, P.E. Banerji, R.* Learning by experimentation:
acquiring and modifying problem solving heuristics.* Rutgers U.  
Comp.Sci.Res.Lab.*LCSR-TR-31.* 1982.

Poggio, T. Koch, C.*Nonlinear interactions in a dendritic tree:  
localization, timing, and role in information processing.* M.I.T.  
A.I. Lab.*Memo 657.*1981.

Reiser, B.J. Black, J.B. Lehnert, W.G.* Thematic knowledge structures 
in the understanding and generation of narratives.* Yale U. Cognitive 
Science Program.* Tech.Rpt. 016.*1982.

Richards, W. Nishihara, H.K. Dawson, B.* CARTOON: a biologically 
motivated edge detection algorithm.* M.I.T.  A.I. Lab.*Memo 668.*1982.

Schank, R.C. Burstein, M.*Modeling memory for language understanding.*
Yale U. Comp.Sci.Dept.*Res.Rpt. 220.* 1982.

Schank, R.C.*Representing meaning: an artificial intelligence 
perspective.* Yale U. Cognitive Science Program.*Tech.Rpt. 11.*1981.

Seifert, C.M. Robertson, S.P.*On-line processing of pragmatic 
inferences.* Yale U. Cognitive Science Program.*Tech.Rpt. 015.*1982.

Shwartz, S.P.*Three-dimensional mental rotation revisited: picture 
plane rotation is really faster than depth rotation.* Yale U.  
Cognitive Science Program.*Tech.Rpt.  09.*1981.

Sidner, C.L.* Protocols of users manipulating visually presented 
information with natural language.* Bolt, Beranek and Newman, Inc.*BBN
rpt. 5128.*1982.

Smith, D.E.* FOCUSER: a strategic interaction paradigm for language 
acquisition.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TR-36, Ph.D. Thesis.
Smith, D.E.*1982.

Soloway, E. Bonar, J. Ehrlich, K.* Cognitive strategies and looping 
constructs: an empirical study.* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
242.*1982.

Soloway, E. Ehrlich, K. Bonar, J. Greenspan, J.*What do novices know 
about programming?* Yale U. Comp.Sci.Dept.* Res.Rpt. 218.*1982.

Srinivasan, C.V.*Notes on object centered associative memory 
organization.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TR-19.*1981.

Stabler, E.P. Jr.* Programs, rule governed behavior and grammars in 
theories of language acquisition and use.* Western Ontario U.  
Cognitive Science Centre.*COGMEM 07.* 1982.

Utgoff, P.E.*Acquisition of appropriate bias for inductive concept 
learning.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TM-02.*1982.

------------------------------

End of AIList Digest
********************

∂09-Aug-83  2330	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #38
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Aug 83  23:17:23 PDT
Date: Tuesday, August 9, 1983 10:38AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #38
To: AIList@SRI-AI


AIList Digest            Tuesday, 9 Aug 1983       Volume 1 : Issue 38

Today's Topics:
  Programming - Bibliography,
  Databases - Bibliography,
  Computer Science - Bibliography
----------------------------------------------------------------------

Date: Tue 9 Aug 83 08:50:19-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Programming Bibliography

[Includes programming environments and techniques
as well as automatic programming.]

Abdallah, M.A.N.* Data types as algorithms.* Waterloo U.  
Comp.Sci.Dept.*Res.Rpt. CS-82-10.*1982.

Andrews, G.R. Schneider, F.B.*Concepts and notations for concurrent 
programming.* Arizona U. Comp.Sci.Dept.*TR 82-12.*1982.

Andrews, G.R.* Distributed programming languages.* Arizona U.  
Comp.Sci.Dept.*TR 82-13.*1982.

Archer, J.E. Jr.*The design and implementation of a cooperative 
program development environment.* Cornell U.  Comp.Sci.Dept.*Tech.rpt.
81-468, Ph.D. Thesis. Archer, J.E. Jr.*1982.

Bakker, J.W. de Zucker, J.I.* Processes and the denotational semantics
of concurrency.* Mathematisch Centrum.*IW 209/82.*1982.

Barber, G.*Supporting organizational problem solving with a 
workstation.* M.I.T. A.I. Lab.*Memo 681.*1982.

Bergstra, J.A. Klop, J.W.* Fixed point semantics in process algebras.*
Mathematisch Centrum.*IW 206/82.*1982.

Bergstra, J.A. Tucker, J.V.* Hoare's logic for programming languages 
with two data types.* Mathematisch Centrum.*IW 207/82.*1982.

Best, E.* Relational semantics of concurrent programs (with some 
applications).* Newcastle Upon Tyne U. Computing Lab.*No. 180.*1982.

Bobrow, D.G. Stefik, M.*The LOOPS manual (preliminary version).* 
Xerox. Palo Alto Res. Center.*Memo KB-VLSI-81-13.*1981, (working 
paper).

Bonar, J. Soloway, E.*Uncovering principles of novice programming.* 
Yale U. Comp.Sci.Dept.*Res.Rpt. 240.*1982.

Burger, W.F. Halim, N. Pershing, J.A. Parr, F.N. Strom, R.E. Yemini, 
S.*Draft NIL reference manual.* IBM Watson Res. Center.*RC 9732.*1982.

Culik, K. Rizki, M.M.* Mathematical constructive proofs as computer 
programs.* Wayne State U. Comp.Sci.Dept.* CSC-83-004.*1983.

diSessa, A.A.*A principled design for an integrated computational 
environment.* M.I.T. Lab. for Comp.Sci.* TM-223.*1982.

Ehrlich, K. Soloway, E.*An empirical investigation of the tacit plan 
knowledge in programming.* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
236.*1982.

Elrad, T. Francez, N.*A weakest precondition semantics for 
communicating processes.* Technion - Israel Inst. of Tech.  
Comp.Sci.Dept.*Tech.Rpt. 244.*1982.

Ericson, L.W.* DPL-82: a language for distributed processing.* 
Carnegie Mellon U. Comp.Sci.Dept.* CMU-CS-82-129.*1982.

Eyries, F.*Synthese d'images de scenes composees de spheres [Image 
synthesis of scenes composed of spheres].* INRIA.*Rapport de 
Recherche 163.*1982.

Good, D.I.*The proof of a distributed system in GYPSY.* Texas U.  
Computing Sci.Inst.*TR-030.*1982.

Israel, B.*Customizing a personal computing environment through object
oriented programming.* Maryland U.  Comp.Sci.  Center.*TR-1158.*1982.

Jobmann, M.*ILMAOS - Eine Sprache zur Formulierung von 
Rechensystemmodellen [ILMAOS - a language for formulating models of 
computing systems].* Hamburg U. Fachbereich Informatik.* Bericht Nr.
91.*1982.

Kanasaki, K. Yamaguchi, K. Kunii, T.L.*A software development system 
supported by a database of structures and operations.* Tokyo U.  
Info.Sci.Dept.*Tech.Rpt.  82-15.*1982.

Kant, E. Newell, A.* Problem solving techniques for the design of 
algorithms.* Carnegie Mellon U. Comp.Sci.Dept.* CMU-CS-82-145.*1982.

Krafft, D.B.* AVID: a system for the interactive development of 
verifiably correct programs.* Cornell U.  Comp.Sci.Dept.*Tech.rpt.  
81-467.*1981.

Lacos, C.A. McDermott, T.S.*Interfacing with the user of a syntax 
directed editor.* Tasmania U. Info.Sci.Dept.*No.  R82-03.*1982.

Lamping, J. King, J.J.* IZZI--a translator from Interlisp to 
Zetalisp.* Hewlett Packard Co. Comp.Sci.Lab.* CSL-82-010; Hewlett 
Packard Co. Comp.Res. Center.* CRC-TR-82-005.*1982.

LeBlanc, T.J.*The design and performance of high level language 
primitives for distributed programming.* Wisconsin U.  
Comp.Sci.Dept.*Tech.Rpt. 492, Ph.D. Thesis.  LeBlanc, T.J.*1982.

Lengauer, C.*A methodology for programming with concurrency.* Toronto 
U. Comp. Systems Res. Group.* CSRG-142, Ph.D. Thesis. Lengauer, 
C.*1982.

Lesser, V. Corkill, D. Pavlin, J. Lefkowitz, L. Hudlicka, E. Brooks, 
R. Reed, S.*A high-level simulation testbed for cooperative 
distributed problem solving.* Massachusetts U. Comp. & 
Info.Sci.Dept.*COINS Tech.Rpt.  81-16.*1981.

Lieberman, H.*Seeing what your programs are doing.* M.I.T.  A.I.  
Lab.*Memo 656.*1982.

Lochovsky, F.H.* Alpha beta, edited by F.H. Lochovsky.* Toronto U.  
Comp. Systems Res. Group.*CSRG-143.*1982.  Contents: (1) Lochovsky, 
F.H. Tsichritzis, D.C.* Interactive query language for external data 
bases; (2) Mendelzon, A.O.*A database editor; (3) Lee, D.L.*A voice 
response system for an office information system; (4) Gibbs, S.J.* 
Office information models and the representation of 'office objects'; 
(5) Martin, P.* Tsichritzis, D.C.*A message management model; (6) 
Nierstrasz, O.*Tsichritzis, D.C.* Message flow modeling; (7) 
Tsichritzis, D.C. Christodoulakis, S. Faloutsos, C.* Design 
considerations for a message file server.

Mahr, B. Makowsky, J.A.*Characterizing specification languages which 
admit initial semantics.* Technion - Israel Inst. of Tech.  
Comp.Sci.Dept.*Tech.Rpt. 232.*1982.

McAllester, D.A.* Reasoning utility package. User's manual.  Version 
one.* M.I.T. A.I. Lab.*Memo 667.*1982.

Medina-Mora, R.* Syntax directed editing: towards integrated 
programming environments.* Carnegie Mellon U.  Comp.Sci.Dept.* Ph.D.  
Thesis. Medina-Mora, R.*1982.

Melese, B.* Metal, un langage de specification pour le systeme 
mentor.* INRIA.*Rapport de Recherche 142.*1982.

Olsen, D.R. Jr. Badler, N.*An expression model for graphical command 
languages.* Arizona State U.  Comp.Sci.Dept.*TR-82-001.*1982.

Paige, R.* Transformational programming--applications to algorithms 
and systems: summary paper.* Rutgers U.  
Comp.Sci.Dept.*DCS-TR-118.*1982.

Parr, F.N. Strom, R.E.* NIL: a high level language for distributed 
systems programming.* IBM Watson Res.  Center.*RC 9750.*1982.

Pratt, V.*Five paradigm shifts in programming language design and 
their realization in Viron, a dataflow programming environment.* 
Stanford U. Comp.Sci.Dept.* STAN-CS-82-951.*1982.  9p.

Rosenstein, L.S.* Display management in an integrated office 
workstation.* M.I.T. Lab for Comp.Sci.*TR-278.* 1982.

Ross, P.M.* TERAK LOGO user's manual (for version 1 - 0).* Edinburgh 
U. A.I. Dept.*Occasional Paper 021.*1980.

Schlichting, R.D. Schneider, F.B.*Using message passing for 
distributed programming: proof rules and disciplines.* Arizona U.  
Comp.Sci.Dept.*TR 82-05.*1982.

Schmidt, E.E.*Controlling large software development in a distributed 
environment.* Xerox. Palo Alto Res.  Center.*CSL-82-07, Ph.D. Thesis.
Schmidt, E.E. (University of California at Berkeley).*1982.

Senach, B.*Aide a la resolution de probleme par presentation graphique
des informations.* INRIA.*Rapport de Recherche 013.*1982.

Soloway, E. Bonar, J. Ehrlich, K.* Cognitive strategies and looping 
constructs: an empirical study.* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
242.*1982.

Soloway, E. Ehrlich, K. Bonar, J. Greenspan, J.*What do novices know 
about programming?* Yale U. Comp.Sci.Dept.* Res.Rpt. 218.*1982.

Stefik, M. Bell, A.G. Bobrow, D.G.* Rule oriented programming in 
LOOPS.* Xerox. Palo Alto Res. Center.*Memo KB-VLSI-82-22.*1982.  
(working paper).

Sterling, L. Bundy, A.* Meta level inference and program 
verification.* Edinburgh U. A.I. Dept.*Res. Paper 168.* 1982.

Sterling, L. Bundy, A. Byrd, L. O'Keefe, R. Silver, B.* Solving 
symbolic equations with PRESS.* Edinburgh U.  A.I. Dept.*Res. Paper 
171.*1982.

Tappel, S. Westfold, S. Barr, A.* Programming languages for AI 
research (Chapter VI of Volume II of the Handbook of Artificial 
Intelligence, edited by Avron Barr and Edward A. Feigenbaum).* 
Stanford U. Comp.Sci.Dept.* STAN-CS-82-935; Stanford U.  
Comp.Sci.Dept. Heuristic Programming Project.*HPP-82-017.*1982.  90p.

Theriault, D.*A primer for the Act-1 language.* M.I.T.  A.I.  
Lab.*Memo 672.*1982.

Thompson, H.*Handling metarules in a parser for GPSG.  Edinburgh U.  
A.I. Dept.*Res. Paper 175.*1982.

Walker, A.* PROLOG/EX1: an inference engine which explains both yes 
and no answers.* IBM Watson Res. Center.*RJ 3771.*1983.

Waters, R.C.* LetS: an expressional loop notation.* M.I.T.  A.I.  
Lab.*Memo 680a.*1983.

Wilensky, R.*Talking to UNIX in English: an overview of an on-line 
consultant.* California U., Berkeley.  Comp.Sci.Div.*UCB/CSD 
82/104.*1982.

Wolper, P.L.*Synthesis of communicating processes from temporal logic 
specifications.* Stanford U.  Comp.Sci.Dept.*STAN-CS-82-925, Ph.D.  
Thesis. Wolper, P.L.* 1982.  111p.

Wood, R.J.* Franz flavors: an implementation of abstract data types in
an applicative language.* Maryland U.  Comp.Sci.  
Center.*TR-1174.*1982.

Woods, D.R.*Drawing planar graphs.* Stanford U.  
Comp.Sci.Dept.*STAN-CS-82-943, Ph.D. Thesis. Woods, D.R.* 1981.

------------------------------

Date: Tue 9 Aug 83 08:55:06-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Database Bibliography

Bancilhon, F. Richard, P.* TQL, a textual query language.* 
INRIA.*Rapport de Recherche 145.*1982.

Bossi, A. Ghezzi, C.*Using FP as a query language for relational 
data-bases.* Milan. Politecnico. Dipartimento di Elettronica. Lab. di 
Calcolatori.*Rapporto Interno N.  82-11.*1982.

Cooke, M.P.*A speech controlled information retrieval system.* U.K.  
National Physical Lab. Info. Technology and Computing Div.*DITC 
15/83.*1983.

Corson, Y.*Aspects psychologiques lies a l'interrogation d'une base de
donnees.* INRIA.*Rapport de Recherche 126.* 1982.

Cosmadakis, S.S.*The complexity of evaluating relational queries.* 
M.I.T. Lab. for Comp.Sci.*TM-229.*1982.

Daniels, D. Selinger, P. Haas, L. Lindsay, B. Mohan, C.  Walker, A.  
Wilms, P.*An introduction to distributed query compilation in R.* IBM 
Watson Res. Center.*RJ 3497.*1982.

Gonnet, G.H.* Unstructured data bases.* Waterloo U.  
Comp.Sci.Dept.*Res.Rpt. CS-82-09.*1982.

Griswold, R.E.*The control of searching and backtracking in string 
pattern matching.* Arizona U. Comp.Sci.Dept.*TR 82-20.*1982.

Grosky, W.I.*Towards a data model for integrated pictorial databases.*
Wayne State U. Comp.Sci.Dept.*CSC-82-012.* 1982.

Haas, L.M. Selinger, P.G. Bertino, E. Daniels, D. Lindsay, B. Lohman, 
G. Masunaga, Y. Mohan, C. Ng, P. Wilms, P.  Yost, R.* R*: a research 
project on distributed relational DBMS.* IBM Watson Res. Center.*RJ 
3653.*1982.

Hailpern, B.T. Korth, H.F.*An experimental distributed database 
system.* IBM Watson Res. Center.*RC 9678.*1982.

Jenny, C.*Methodologies for placing files and processes in systems 
with decentralized intelligence.* IBM Watson Res. Center.*RZ 
1139.*1982.

Kanasaki, K. Yamaguchi, K. Kunii, T.L.*A software development system 
supported by a database of structures and operations.* Tokyo U.  
Info.Sci.Dept.*Tech.Rpt.  82-15.*1982.

Klug, A.*On conjunctive queries containing inequalities.* Wisconsin U.
Comp.Sci.Dept.*Tech.Rpt. 477.*1982.

Konikowska, B.* Information systems: on queries containing k-ary 
descriptors.* Polish Academy of Sciences. Inst. of Comp.Sci.*ICS PAS 
rpt. no. 466.*1982.

Lochovsky, F.H.* Alpha beta, edited by F.H. Lochovsky.* Toronto U.  
Comp. Systems Res. Group.*CSRG-143.*1982.  Contents: (1) Lochovsky, 
F.H. Tsichritzis, D.C.* Interactive query language for external data 
bases; (2) Mendelzon, A.O.*A database editor; (3) Lee, D.L.*A voice 
response system for an office information system; (4) Gibbs, S.J.* 
Office information models and the representation of 'office objects'; 
(5) Martin, P.* Tsichritzis, D.C.*A message management model; (6) 
Nierstrasz, O.*Tsichritzis, D.C.* Message flow modeling; (7) 
Tsichritzis, D.C. Christodoulakis, S. Faloutsos, C.* Design 
considerations for a message file server.

Lohman, G.M. Stoltzfus, J.C. Benson, A.N. Martin, M.D.  Cardenas, 
A.F.* Remotely sensed geophysical databases: experience and 
implications for generalized DBMS.* IBM Watson Res. Center.*RJ 
3794.*1983.

Madelaine, E.*Le systeme perluette et les preuves de representation de
types abstraits.* INRIA.*Rapport de Recherche 133.*1982.

Maier, D. Ullman, J.D.* Fragments of relations.* Stanford U.  
Comp.Sci.Dept.*STAN-CS-82-929.*1982.  11p.

Michard, A.*A new database query language for non-professional users:
design principles and ergonomic evaluation.* INRIA.*Rapport de 
Recherche 127.*1982.

Ng, P.* Distributed compilation and recompilation of database 
queries.* IBM Watson Res. Center.*RJ 3375.*1982.

Srivas, M.K.*Automatic synthesis of implementations for abstract data 
types from algebraic specifications.* M.I.T. Lab for Comp.Sci.*TR-276,
Ph.D. Thesis. Srivas, M.K. (This report is a minor revision of a
thesis of the same title submitted to the Department of Electrical
Engineering and Computer Science in December 1981).*1982.

Stabler, E.P. Jr.* Database and theorem prover designs for question 
answering systems.* Western Ontario U. Cognitive Science 
Centre.*COGMEM 12.*1982.

Stamos, J.W.*A large object oriented virtual memory: grouping 
strategies, measurements, and performance.* Xerox. Palo Alto Res.  
Center.*SCG-82-02.*1982.

Wald, J.A. Sorenson, P.G.*Resolving the query inference problem using 
Steiner trees.* Saskatchewan U.  Computational 
Sci.Dept.*Rpt.83-04.*1983.

Weyer, S.A.* Searching for information in a dynamic book.* Xerox.  
Palo Alto Res. Center.*SCG-82-01, Ph.D. Thesis.  Weyer, S.A.  
(Stanford University).*1982.

------------------------------

Date: Tue 9 Aug 83 08:56:36-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Computer Science Bibliography

[Includes selected topics in CS that seem relevant to AIList
and are not covered in the preceding bibliographies.]

Eppinger, J.L.*An empirical study of insertion and deletion in binary 
search trees.* Carnegie Mellon U.  Comp.Sci.Dept.*CMU-CS-82-146.*1982.

Gilmore, P.C.*Solvable cases of the travelling salesman problem.* 
British Columbia U. Comp.Sci.Dept.*Tech.Rpt.  81-08.*1981.

Graham, R.L. Hell, P.*On the history of the minimum spanning tree 
problem.* Simon Fraser U. Computing Sci.Dept.*TR 82-05.*1982.

Gupta, A. Hon, R.W.*Two papers on circuit extraction.* Carnegie Mellon
U. Comp.Sci.Dept.*CMU-CS-82-147.*1982.  Contents: Gupta, A.* ACE: a
circuit extractor; Gupta, A.  Hon, R.W.* HEXT: a hierarchical circuit
extractor.

Hofri, M.*BIN packing: an analysis of the next fit algorithm.* 
Technion - Israel Inst. of Tech.  Comp.Sci.Dept.*Tech.Rpt. 242.*1982.

Jomier, G.*An overview of systems modelling and evaluation 
tendencies.* INRIA.*Rapport de Recherche 134.*1982.

Jurkiewicz, E.* Stability of compromise solution in multicriteria 
decision making problem.* Polish Academy of Sciences. Inst. of 
Comp.Sci.*ICS PAS rpt. no. 455.*1981.

Kirkpatrick, D.G. Hell, P.*On the complexity of general graph factor 
problems.* British Columbia U.  Comp.Sci.Dept.*Tech.Rpt. 81-07.*1981.

Kjelldahl, L. Romberger, S.*Requirements for interactive editing of 
diagrams.* Royal Inst. of Tech., Stockholm.  Num.Anal. & Computing 
Sci.Dept.*TRITA-NA-8303.*1983.

Moran, S.*On the densest packing of circles in convex figures.* 
Technion - Israel Inst. of Tech. Comp.Sci.Dept.* Tech.Rpt. 241.*1982.

Nau, D. Kumar, V. Kanal, L.*General branch and bound and its relation 
to A* and AO*.* Maryland U. Comp.Sci.  Center.*TR-1170.*1982.

Nau, D.S.* Pathology on game trees revisited, and an alternative to 
minimaxing.* Maryland U. Comp.Sci.  Center.*TR-1187.*1982.

Roberts, B.J. Marashian, I.* Bibliography of Stanford computer science
reports, 1963-1982.* Stanford U.  Comp.Sci.Dept.*STAN-CS-82-911.*1982.
59p.

Scowen, R.S.*An introduction and handbook for the standard syntactic 
metalanguage.* U.K. National Physical Lab.  Info. Technology and 
Computing Div.*DITC 19/83.*1983.

Seidel, R.*A convex hull algorithm for point sets in even dimensions.*
British Columbia U. Comp.Sci.Dept.*Tech.Rpt.  81-14.*1981.

Varah, J.M.*Pitfalls in the numerical solution of linear ill posed 
problems.* British Columbia U. Comp.Sci.Dept.* Tech.Rpt. 81-10.*1981.

Wegman, M.*Summarizing graphs by regular expressions.* IBM Watson Res.
Center.*RC 9364.*1982.

------------------------------

End of AIList Digest
********************

∂16-Aug-83  1113	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #39
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Aug 83  11:06:44 PDT
Date: Friday, August 12, 1983 9:06AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #39
To: AIList@SRI-AI


AIList Digest            Friday, 12 Aug 1983       Volume 1 : Issue 39

Today's Topics:
  Textnet - Publish Adventure,
  Representation - Current Adequacy,
  Computational Complexity - NP-Completeness & FFP Machine,
  Programming Languages - Functional Programming,
  Fifth Generation - Opinion & Pearl Harbor Correction,
  Programming Languages & Humor - Comment
----------------------------------------------------------------------

Date: 11-Aug-83 13:52 PDT
From: Kirk Kelley  <KIRK.TYM@OFFICE-2>
Subject: Re: Textnet

I have spent most spare minutes for the last ten years designing a
distributed hyper-service using NLS and Augment as a development tool.
We can simulate, via electronic mail, the beginnings of a
self-descriptive service-service called the "Publish adventure".  The
Xanadu project's Hypertext, because of its devotion to static text, is
a degenerate case of the Publish adventure.  If you are interested in
collaborating on the design of the protocol, let me know.

 -- Kirk Kelley

------------------------------

Date: 10 Aug 83 16:36:29-PDT (Wed)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: A Real AI Topic
Article-I.D.: ssc-vax.398

First let me get in one last (?) remark about where the Japanese are
in AI - pattern recognition and robotics are useful but marginal in
the AI world.  Some of the pattern recognition work seems to be reaching
the same conclusions now that real AI workers reached ten years ago
(those who don't know history are doomed to repeat it!).

Now on to the good stuff.  I have been thinking about knowledge 
representation (KR) recently and made some interesting (to me, anyway)
observations.

1.  Certain KRs tend to show up again and again, though perhaps in
    well-disguised forms.

2.  All the existing KRs can be cast into something like an
    attribute-value representation.

Space does not permit going into all the details, but as an example,
the PHRAN language analyzer from Berkeley is actually a specialized
production rule system, although its origins were elsewhere (in
parsers using demons).  Semantic nets are considered obsolete and ad
hoc, but predicate logic reps end up looking an awful lot like a net
(so does a sizeable frame system).  A production rule has two
attributes: the condition and the action.  Object-oriented programming
(smalltalk and flavors) uses the concept of attributes (instance
variables) attached to objects.  There are other examples.
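[The "casting into attribute-value pairs" claimed above can be made concrete. The sketch below (Python, purely illustrative; every symbol and attribute name is invented for the example) shows a production rule, a semantic-net node, and an object instance all answering the same attribute-value query form:]

```python
# Hypothetical sketch: three different knowledge representations
# rendered as attribute-value pairs attached to symbols (plain dicts).

# A production rule's two attributes: its condition and its action.
rule = {"condition": "temperature > 100", "action": "open-valve"}

# A semantic-net node: the attributes are the labeled arcs leaving it.
canary = {"isa": "bird", "color": "yellow", "can": "sing"}

# An object with instance variables, as in Smalltalk or flavors.
valve = {"class": "actuator", "state": "closed", "location": "boiler-2"}

def get(symbol, attribute):
    """Uniform access: each representation answers the same query form."""
    return symbol.get(attribute)

print(get(canary, "isa"))      # -> bird
print(get(rule, "condition"))  # -> temperature > 100
```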

Question: is there something fundamentally important and inescapable 
about attribute-value pairs attached to symbols?  (ordinary program 
code is a representation of knowledge, but doesn't look like av-pairs
- is it a valid counterexample?)

What other possible KRs are there?

Certain KRs (such as RLL (which is really a very interesting system)) 
claim to be universal and capable of representing anything.  Are there
any particularly difficult concepts that *no* KR has been able to
represent (even in a crude way)?  What is so difficult about those
concepts, if any such exist?

                                Just stirring up the mud,
                                stan the leprechaun hacker
                                ssc-vax!sts (soon utah-cs)


[I believe that planning systems still have difficulties in
representing continuous time, hypothetical worlds, beliefs, and
intentions, among other things.  In vision, robotics, geology, and
medicine, there are difficulties in representing shape, texture, and
spatial relationships.  Attribute-value pairs are just not very
useful for representing continuous quantities.  -- KIL]

------------------------------

Date: Mon 8 Aug 83 17:19:42-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: NP-completeness

    I forward this message because it raises an interesting point, and
I thought readers may care to see it. I had a reply to this, but
perhaps someone else may care to comment.

  Date:     Sun,  7 Aug 83 18:28:09 CDT
  From: Mike.Caplinger <mike.rice@Rand-Relay>

  Claiming that a parallel machine makes NP-complete problems
  polynomial (given that the machine has an infinite number of
  processing elements) is certainly true (by the definition of
  NP-completeness), but meaningless.  Admittedly, a large number of
  processing elements might make a finitely-bounded algorithm faster,
  but any finitely-bounded algorithm is a constant time algorithm.
  (If I say N is never greater than the number of processors, then N
  might as well be a constant.)

------------------------------

Date: 10 Aug 83 13:19:32-PDT (Wed)
From: ihnp4!we13!burl!duke!unc!koala @ Ucb-Vax
Subject: Matrix Multiplication on the FFP Machine
Article-I.D.: unc.5687

        Since the subject has been brought up, I felt I should clear
up some of the statements about the FFP machine.  The machine consists
of a linear vector of small processors which communicate by being
connected as the leaves of a binary tree.

        Roughly speaking, the FFP machine performs general matrix
multiplication in O(nxn) space and time.  Systolic arrays can multiply
matrices in O(n) time, but do not provide flexibility in the sizes of
matrices that can be handled.

        Order notation only presents half the picture - in real life,
constant factors and other terms are also important.  The machine's
matrix multiply operation examines each element of the two matrices
once.  Multiplying two matrices, mxn and nxp, requires accessing (mxn
+ nxp) values, and this is the measure of the time for the
computation.  Each cell performs n multiplications, dominated by the
access.  Further, when you multiply two matrices, mxn and nxp, the
result is of size mxp.  (Consider multiplying a column by a row).
Thus, when n < (mxp)/(m+p), extra space must be allocated for the
result.  This is also a quadratic time operation.
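[The space claim above is a line of arithmetic: the two operands of an (m x n) by (n x p) multiply occupy m*n + n*p cells, the product occupies m*p, and the product outgrows the operands exactly when m*p > m*n + n*p, i.e. when n < (m*p)/(m+p). A small Python check, illustrative only and not FFP machine code:]

```python
# Illustrative check of when an (m x n) by (n x p) product needs more
# cells than its operands occupy.  needs_extra_space is an invented
# helper, not part of the FFP machine design.

def needs_extra_space(m, n, p):
    input_cells = m * n + n * p   # cells holding the two operand matrices
    result_cells = m * p          # cells holding the product
    return result_cells > input_cells

# Column times row: a 4x1 by 1x4 multiply reads 8 values yet must
# store 16, so extra space is required.
print(needs_extra_space(4, 1, 4))   # -> True
# Square case: 4x4 by 4x4 reads 32 values and stores only 16.
print(needs_extra_space(4, 4, 4))   # -> False
```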

                                David Middleton
                                UNC Chapel Hill
                                decvax!duke!unc!koala

------------------------------

Date: 11 Aug 83 16:23:19-PDT (Thu)
From: harpo!gummo!whuxlb!floyd!vax135!cornell!uw-beaver!ssc-vax!sts@Ucb-Vax
Subject: Re: Matrix Multiplication on the FFP Machine
Article-I.D.: ssc-vax.406

I must admit to being a little sloppy when giving the maximum speed of
a matrix multiplication on an FFP machine (haven't worked on this 
stuff for a year, and my memory is slipping).  I still stand by the 
original statement, however.  The *maximum* possible speed for the 
multiplication of two nxn matrices is O(log n).  What I should have 
done is state that the machine architecture is completely unspecified.
I am not convinced that the Mago tree machine is the ultimate in FFP
designs, although it is very interesting.  The achievement of O(log n)
requires several things.  Let me enumerate.  First, assume that the
matrix elements are already distributed to their processors.  Second,
assume that a single processor can quickly distribute a value to 
arbitrarily many processors (easy: put it on the bus (buss? :-} ) and
let the processors all go through a read cycle simultaneously).  
Third, assume that the processors can communicate in such a way that
addition of n numbers can be performed in log n time (by adding pairs,
then pairs of pairs, etc).  Then the distribution of values takes
constant time, the multiplications are all done simultaneously and so
take constant time, leaving only the summation to slow things down.  I
know this is fast and loose; its main failing is that it assumes the
availability of an extraordinarily high number of communication paths
(the exact number is left as an exercise for the reader).
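[The pairwise addition scheme described above can be simulated directly. In this sketch (Python, invented for illustration) each loop iteration stands for one parallel round in which every pair addition could happen at once, so the round count, not the total work, models the idealized machine's delay:]

```python
def parallel_sum_rounds(values):
    """Sum by pairs, then pairs of pairs; count the rounds needed."""
    rounds = 0
    while len(values) > 1:
        # One parallel round: every adjacent pair is added at once;
        # a leftover odd element rides along unchanged.
        values = [sum(values[i:i + 2]) for i in range(0, len(values), 2)]
        rounds += 1
    return values[0], rounds

total, rounds = parallel_sum_rounds(list(range(16)))
print(total, rounds)   # -> 120 4, i.e. log2(16) rounds for 16 inputs
```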

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

ps For those not familiar with FP, read J. Backus' Turing Lecture in
CACM (Aug 78, I believe) - it is very readable, and he gives a
one-liner for matrix multiplication in FP, which I used as a basis for
the timing hackery above.

------------------------------

Date: 11 Aug 83 19:32:18-PDT (Thu)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Functional Programming and AI
Article-I.D.: ssc-vax.408

It is interesting that the subject of FP (an old interest of mine) has
arisen in the AI newsgroup (no this is not an "appropriate newsgroup"
flame).  Having worked with both AI and FP languages, it seems to me
that the two are diametrically opposed to one another.  The ultimate
goal of functional programming language research is to produce a
language that is as clean and free of side effects as possible; one
whose semantic definition fits on a single side of an 8 1/2 x 11 sheet
of paper (and not in microform, smart-aleck!).  On the other hand, the
goal of AI research (at least in the AI language area) is to produce
languages that can effectively work with as tangled and complicated 
representations of knowledge as possible.  Languages for semantic 
nets, frames, production systems, etc, all have this character.  
Formal definitions are at best difficult, and sometimes impossible 
(aside: could this be proved for any specific knowledge rep?).  Now
between the Japanese 5th generation project (and the US response) and
the various projects to build non-vonNeumann machines using FP, it
looks to me like the seeds of a controversy over the best way to do
programming.  Should we be using FP languages or AI languages?  We
can't have it both ways, right?  Or can we?

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: Mon 8 Aug 83 13:58:36-PDT
From: Robert Amsler <AMSLER@SRI-AI.ARPA>
Subject: Japanese 5th Generation Effort

It seems to me that the 5th generation effort differs from most
efforts we are familiar with in being strictly top-down. That is to
say, the Japanese are willing to start work not only without knowing
how to solve the nitty-gritty problems at the bottom--but without
knowing what those nitty-gritty problems actually are. Although
dangerous, this is a very powerful research strategy. Until it gets
bogged down due to an almost insurmountable number of unsolvable 
technical problems one can expect very rapid progress indeed. When it
does get bogged down, their understanding of the problems will be as
great as that of anyone else in the world. The best way to learn is by
doing.

------------------------------

Date: 9-AUG-1983 15:24
From: SHRAGER%CMU-PSY-A@CMU-CS-PT
Subject: On Science and the Fifth Generation


I'm a little confused about why this Japanese business seems to be
scaring the pants off of the US research community; why scientists are
quoted in national news magazines as being "panic stricken", and why
terms like "race" and "ahead" are being thrown around in a community
of "scientists"; why anyone cares if the fifth generation thing is
propaganda or not.  You'll find out when they make it work or they
don't!

Science is a cooperative effort.  If Japan wants to jump forward
(note, not "ahead" in any sense) in technology and understanding it is
the position of every other scientist to applaud their boldness and
provide every ounce of critical advice we can give them.  So what if
Symbolics goes bankrupt because Japan makes a machine that makes the
3600 look like an Apple!? It will probably cost one third as much and
I'll be able to have one on my desk to further my research efforts.
Likewise, whatever the Japanese research community learns will
certainly benefit my research, even if just by learning what roads are
not fruitful.

Worry about the arms race, not the computer race!  Work as hard as you
can to further science and technology, not to beat the Japanese!  Work
toward the Nth generation, not the fifth or the sixth or the
seventh....  A little competition is probably useful sometimes, but
not to the detriment of the community spirit of science.  If we start
hiding things from one another, do we have the right to call ourselves
scientists?

When I begin to worry is when Japan decides to build a better MX
missle, not a better computer system.  Then issues of scientific
morals are involved and it's a whole 'nother ballgame.

------------------------------

Date: 9 Aug 83 21:04:30-PDT (Tue)
From: decvax!microsoft!uw-beaver!ssc-vax!tjj @ Ucb-Vax
Subject: Re: Pearl Harbor Day
Article-I.D.: ssc-vax.393

OK folks, especially those of you from various parts of tektronix-land
who don't seem to have access to or have interest in reading a history
book, let's review the bidding for your edification at least.  A very
unsavory reference was made in the context of a remark from a
present-day visiting professor from Japan regarding the Japanese Fifth
Generation Project.  The first bid for a date was 5 Dec 1948.  This
was changed by the same author after he received at least one
electronic mail reply to 5 Dec 1945!  This may have been with
tongue-in-cheek, as I know that he was given the correct date at least
once prior to his second message.  It's a matter of record that the
Japanese Ambassador was instructed to visit the Secretary of State on
Friday, December 5, 1941.  Whether he or his representative was again
doing so on Sunday, December 7, 1941 is a moot point, as I am certain
that they were very busy at the old trash incinerator that morning.  
Although we should not forget history, lest we be doomed to repeat it,
I do think that comparison of this episode with the present day 5th
Generation Project, even in the context of the devastation of Detroit,
is stretching things beyond the breaking point.  If you want to flame,
send mail to me, as I already have my asbestos suit on, but let's
graduate net.ai back to something more appropriate and certainly more
interesting.

TJ (with Amazing Grace) The Piper ssc-vax!tjj

------------------------------

Date: 10 Aug 83 12:02:09-PDT (Wed)
From: teklabs!done @ Ucb-Vax
Subject: Re: 5th generation computers
Article-I.D.: teklabs.2322

<flame on>

I can't stand this any longer:

   "YESTERDAY, DECEMBER 7, 1941; A DATE WHICH WILL LIVE IN INFAMY!"

Carefully memorize this date and PLEASE DON'T SCREW IT UP AGAIN.  Or
maybe infamy needs to be expressed in binary for you Computer Science 
folks.

<flame off>

Don Ellis   | USENET:  {aat,cbosg,decvax,harpo,ihnss,orstcs,pur-ee,ssc-vax
Tektronix   |          ucbvax,unc,zehntel,ogcvax,reed} !teklabs!done
Oregon, USA | ARPAnet: done.tek@rand-relay    CSNet: done@tek

------------------------------

Date: 10 Aug 1983 1244-EDT
From: MONTALVO%MIT-OZ@MIT-ML
Subject: Re: HFELISP

   Date: 27 Jul 1983 0942-PDT
   From: Jay <JAY@USC-ECLC>
   Subject: HFELISP

           HFELISP (Heffer Lisp) HUMAN FACTOR ENGINEERED LISP

                                   ABSTRACT

     HFE suggests that the more complicated features of (common) Lisp
   are dangerous, and hard to understand.  As a result a number of
   Fortran, Cobol, and 370 assembler programmers got together with a
   housewife. ...

How dare you malign the good sense of housewives by classing them with
Fortran, Cobol, and 370 assembler programmers!

Fanya Montalvo

------------------------------

End of AIList Digest
********************

∂16-Aug-83  1333	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #40
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Aug 83  13:26:49 PDT
Date: Tuesday, August 16, 1983 9:10AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #40
To: AIList@SRI-AI


AIList Digest            Tuesday, 16 Aug 1983      Volume 1 : Issue 40

Today's Topics:
  Knowledge Representation & Applicative Languages,
  Fifth Generation - Military Potential,
  Artificial Intelligence - Bigotry & Turing Test
----------------------------------------------------------------------

Date: Friday, 12 Aug 1983 15:28-PDT
From: narain@rand-unix
Subject: Reply to stan the leprechaun hacker


I am responding to two of the points you raised.

Attribute-value pairs are hopeless for any area (including AI areas)
where your "cognitive chunks" are complex structures (like trees). An
example is symbolic algebraic manipulation, where it is natural to
think in terms of general forms of algebraic expressions. Try writing
a symbolic differentiation program in terms of attribute-value pairs.
Another example is the "logic grammars" for natural language, whose
implementation in Prolog is extremely clear and efficient.
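[The symbolic-differentiation example above can be sketched in a few lines, not in Prolog but in Python, with nested tuples standing in for expression trees; every representation choice here is invented for illustration. The point is that the rules recurse on tree structure rather than look up flat attribute-value pairs:]

```python
# Hypothetical sketch: differentiation over tree-shaped expressions
# such as ('+', e1, e2) and ('*', e1, e2).

def diff(e, x):
    """d(e)/dx for expressions built from +, *, variables, constants."""
    if isinstance(e, (int, float)):
        return 0                                # constant rule
    if isinstance(e, str):                      # a variable name
        return 1 if e == x else 0
    op, a, b = e
    if op == '+':                               # sum rule
        return ('+', diff(a, x), diff(b, x))
    if op == '*':                               # product rule
        return ('+', ('*', diff(a, x), b), ('*', a, diff(b, x)))
    raise ValueError("unknown operator: %s" % op)

# d(x*x)/dx = 1*x + x*1
print(diff(('*', 'x', 'x'), 'x'))  # -> ('+', ('*', 1, 'x'), ('*', 'x', 1))
```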

As to whether FP or more generally applicative languages are useful to
AI depends upon the point of view you take of AI. A useful view is to
consider it as "advanced programming" where you wish to develop 
intelligent computer programs, and so develop powerful computational
methods for them, even if humans do not use those methods. From this
point of view Backus's comments about the "von Neumann bottleneck"
apply as much to AI programming as to conventional
programming. Hence applicative languages may have ideas that could
solve the "software crisis" in AI as well.

This is not just surmise; the Prolog applications to date and underway
are evidence in favor of the power of applicative languages. You may
debate about the "applicativeness" of practical Prolog programming,
but in my opinion the best and (also the most efficient) Prolog
programs are in essence "applicative".

-- Sanjai Narain

------------------------------

Date: 12 Aug 1983 1208-PDT
From: FC01@USC-ECL
Subject: Knowledge Representation, Fifth Generation

About knowledge representation---

        Although many are new to this ballgame, the fundamentals of
the field are well established. Look in the dictionary of information
science a few years back (5-10?) for an article on the representation
of knowledge by Irwin Marin.  The (M,R) pair mentioned is indeed a
general structure for representation. In fact, you may recall 10 or 20
years ago there was talk that the most efficient programs on computers
would eventually consist of many many pointers (Rs) that pointed
between datums (Ms) in many different ways - kinda like the brain!!! It
has gone well beyond the (M,R) pair stage and Marin has developed a
structure for representation that allows top down knowledge
engineering to proceed in a systematic fashion. I guess many of us
forsake history in many ways, both social and technical.

        As to the 'race' to 5th generation computers, it may indeed be
a means to further the military industrial complex in the area of
computing, but let us also consider the tactical implications of a
highly intelligent (take the term with a grain of salt when speaking
of a computer) tactical computer. Perhaps the complexities of battle
could be simplified for human consumption to the point where a good
general could indeed win an otherwise lost war. Perhaps not. The 
scientific sharing of ideas has always been the boon of science and
the bust of government. The U.S. is in an advantageous vantage point
from the boon point of view because we share so much with each other
and others. We are also tops in the bust category because it is so
easy to get our information to other places.  Somewhere the scientific
need for communication must be traded off with the possible effects of
the research. This is what I call scientific responsibility.  As
scientists we are responsible not only to our research and the
dissemination of our knowledge, but also responsible for the effects
of that knowledge. If we shared the 'secrets' of the atomic bomb with
the world as we developed it, do you think more or fewer people would
have died? I think the Germans (who were also working on the project)
might have been able to complete their version sooner and would have
killed a great number more people. In the case of Japan, we are
talking economic struggle rather than political, but the concept of
war and destruction can be visualized just as well. A small country
using a very rapid economic growth to push ahead of the rest of the
world, now has no place to expand to. Heard it before? What new
technology will be developed using the new generation of computers?
Can we afford to lose our edge in yet another technological area to
the more eager of the world? Is this just another ploy of the M.I.
complex to get money from the people and take food from the hungry?
Tough questions; hard to answer without the facts.

                                        Another controversy ignited or
                                        enflamed by yours truly,
                                                Fred

------------------------------

Date: 12 Aug 1983 15:09-PDT
From: andy at -[VAX]
Subject: Japan's supercomputers as potential defense threat


    I'm a little confused about why this Japanese business seems to be
    scaring the pants off of the US research community... why
    anyone cares if the fifth generation thing is propaganda or not.
    You'll find out when they make it work or they don't!  ...Worry
    about the arms race, not the computer race!
                        -- SHRAGER%CMU-PSY-A@CMU-CS-PT

One serious reason for concern, at least according to political 
conservatives, is that the United States would cease to be in a 
position to control the distribution of the world's most advanced 
computing technology.

Currently, there are specific export restrictions that prohibit transfer
of advanced technology from the U.S. to its putative enemies (e.g. the
Soviet Union).  (For example, I was told not long ago that it is 
illegal to fly over France carrying the schematics for a Cyber in your
briefcase.)

The reason for this becomes quite clear when you consider who the 
principal consumers of supercomputers are in this country: they are 
disproportionately well represented by people pursuing nuclear energy
and weapons R&D, cryptology, and war gaming.  If the Japanese have the
fastest computers, then they control distribution of the hottest 
computational technology and at least potentially could sell it to 
countries that DoD would prefer to remain well behind us
technologically.  Worse, they might sell it to others but not to the
United States.

While there are lapses in the effectiveness of this sort of export 
control, it seems to work fairly well overall.  For example, I
recently read that the East Germans have just successfully fabricated
a Z-80 chip clone; reportedly, although their chip does seem to work,
it is substantially inferior to the state of the art here.  If the
best that "blacklisted" countries can do is play catch-up via reverse 
engineering, the U.S. Government will have met its practical goal of 
denying them up-to-date technology.  If, on the other hand, other 
countries are able to produce faster and more powerful computers, the 
U.S. could no longer control access to the best tools available for 
defense R&D.


    When I begin to worry is when Japan decides to build a better MX
    missile, not a better computer system.  Then issues of scientific
    morals are involved and it's a whole 'nother ballgame.


Supercomputers play a significant role in intelligence and weapons 
research in the United States.  I would expect those people who
subscribe to the view that the U.S. Government should deny high 
technology to its perceived enemies to argue that they ARE "worry[ing]
about the arms race" when they feel threatened by Japan's big 
technology push, and that the issue IS at least qualitatively 
equivalent to Japan's developing better missiles.

                                                asc

p.s. No flames about science and brotherhood, please.  I didn't claim
     to agree with the conservatives whose views I'm attempting to
     describe.  The argument that "Science is a cooperative effort"
     has, BTW, also been voiced frequently in response to NSA's
     recent attempt to control cryptology research in the U.S.

p.p.s.  Perhaps further discussion of the role of Japan's
     supercomputer project in defense applications should be directed to,
     or at least CC'd to, ARMS-D@MIT-MC.

------------------------------

Date: Fri, 12 Aug 83 12:59:34 EDT
From: Brint Cooper (CTAB) <abc@brl-bmd>
Subject: Unprintable

I'm sorry, folks, but all this flaming about 7 December 1941 sounds
too much like old-fashioned racism to me.

B. Cooper

------------------------------

Date: 12 Aug 83 16:52:14-PDT (Fri)
From: ihnp4!we13!otuxa!ll1!sb1!sb6!emory!gatech!spaf @ Ucb-Vax
Subject: Sex, religion, words, smoking, farting, and the net
Article-I.D.: gatech.364

It just occurred to me today that most of the discussions going on
about use of genderless pronouns, homosexuals, heterosexuals,
personal habits, religion, and other interesting habits, all have one
point in common when we discuss them -- they're *human*
activities/conditions.

Now stop for a moment and consider the Turing test.  When you read
these messages from other users on the net, how do you know that they
are from people typing at some site rather than some intelligent
program?  I would contend that a good definition of humanity and
intelligence could be formulated by someone looking at the net
traffic.  The rabid flamers and fanatics who condemn and insult would
not meet that definition.

We develop new ideas daily in this field.  Handicapped people are
freed from their limitations if they can communicate with the
rest of us at 300 or 1200 baud.  They can stutter or be mute, they
can be almost completely paralyzed, but their minds and souls are
still alive and free and can communicate with the rest of us.

It doesn't matter if you are male or female, black, red, white,
green, tall, short, old, young, fat, smoking, farting, going 55 mph,
attracted to members of the same sex, attracted to sheep, or any
possible variation of the human condition -- you are a human
intelligence at the other end of my network connection, and I deal
with you in a human manner.  Once you show your lack of tolerance or
your inability to at least try to understand, you show yourself to be
less than human.

Discrimination really means the ability to differentiate amongst
alternatives.  Prejudice and bigotry mean that you discriminate based
on factors which have no real bearing on the choice at hand.  I
believe that "human intelligence" implies the ability to
discriminate and the inability to be a bigot.

I hope that some of the contributors to the net are simply AI
projects; I would hate to believe that there are people with so much
hate and intolerance as is sometimes expressed.

Comments?

--
The soapbox of Gene Spafford
CSNet:  Spaf @ GATech
ARPA:   Spaf.GATech @ UDel-Relay
uucp:   ...!{sb1,allegra,ut-ngp}!gatech!spaf
        ...!duke!mcnc!msdc!gatech!spaf


[I disagree strongly with any definition of humanity that excludes
flamers and bigots, but this digest is not the place for such a
discussion.  The question of whether intelligence excludes (or
implies) prejudice is more interesting.  We should also be seeking a
replacement for the Turing test that could identify nonhuman
intelligence. -- KIL]

------------------------------

Date: 14 Aug 83 1:12:15-PDT (Sun)
From: harpo!seismo!rlgvax!oz @ Ucb-Vax
Subject: Re: Sex, religion, words, smoking, farting, and the net
Article-I.D.: rlgvax.994

I agree that it would be a shame if there were AI projects that had
such hate and bigotry.  I argue that it WOULD be possible for an AI
project to exhibit the narrowmindedness and stupidity that we
frequently see on the net.  An interesting discussion, Gene, it is
something to ponder.

                                OZ
                                seismo!rlgvax!oz

------------------------------

End of AIList Digest
********************

∂17-Aug-83  1713	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #41
Received: from SRI-AI by SU-AI with TCP/SMTP; 17 Aug 83  17:12:52 PDT
Date: Wednesday, August 17, 1983 4:04PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #41
To: AIList@SRI-AI


AIList Digest           Thursday, 18 Aug 1983      Volume 1 : Issue 41

Today's Topics:
  Expert Systems - Rog-O-Matic,
  Programming Languages - LOGLisp & NETL & Functional Programming,
  Computational Complexity - Time/Space Tradeoff & Humor
----------------------------------------------------------------------

Date: Tuesday, 16 August 1983 21:20:38 EDT
From: Michael.Mauldin@CMU-CS-CAD
Subject: Rog-O-Matic paper request


People of NetLand, the long awaited day of liberation has arrived!
Throw off the shackles of ignorance and see what secrets of
technology have been laid bare through the agency of a free
university press, unbridled by the harsh realities of economic
competition!

                The Rog-O-Paper is here!

For a copy of CMU Technical Report CMU-CS-83-144, entitled
"Rog-O-Matic: A Belligerent Expert System", please send your physical
address to

                Mauldin@CMU-CS-A

and include the phrase "paper request" in the subject line.


For those who have a copy of the draft, the final version contains
two more figures, expanded descriptions of some algorithms, and an
updated discussion of Rog-O-Matic's performance, including
improvements made since February.  And even if you don't have a copy
of the draft, the final version still contains two more diagrams,
expanded descriptions of some algorithms, and an updated discussion
of performance.  The history of the program's development is also
chronicled.

The source is still available either by FTP or by mail in several
pieces.  It is about a third of a megabyte of characters, and is
mailed in pieces either 70K or 40K characters long.

Michael Mauldin (Fuzzy)
Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA  15213


                     CMU-CS-83-144      Abstract

      Rog-O-Matic is an unusual combination  of  algorithmic and
      production  systems programming techniques which cooperate
      to explore a hostile environment.  This environment is the
      computer game  Rogue,  which offers several advantages for
      studying  exploration  tasks.   This  paper  presents  the
      major features of the Rog-O-Matic  system,  the  types  of
      knowledge  sources  and   rules   used   to   control  the
      exploration,  and  compares  the performance of the system
      with human Rogue players.

------------------------------

Date: Tue 16 Aug 83 22:56:27-CDT
From: Donald Blais <CC.BLAIS@UTEXAS-20.ARPA>
Subject: LOGLisp language query

In the July 1983 issue of DATAMATION, Larry R. Harris states that the
logic programming language LOGLisp has recently been developed by
Robinson.  What sources can I go to for additional information on this
language?

-- Donald

------------------------------

Date: Wed, 17 Aug 83 04:25 PDT
From: "Glasser Alan"@LLL-MFE.ARPA
Subject: Scott Fahlmann's NETL

I've read a book by Scott Fahlmann about a system called NETL for 
representing knowledge in terms of a particular tree-like structure.  
I found it a fascinating idea.  It was published in 1979.  When I last
heard about it, there were plans to develop some hardware to implement
the concept.  Does anyone know what's been happening on this front?
                              Alan Glasser (glasser@lll-mfe)

------------------------------

Date: 15 Aug 83 22:44:27-PDT (Mon)
From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
Subject: Re: FP and AI - (nf)
Article-I.D.: uiucdcs.2574

Having also worked with both FP and AI systems I basically agree with
your perceptions of their respective goals and functions, but I think
that we can have both, since they operate at different levels: Think
of a powerful, functional language that underlies the majority of the
work in AI data and procedural representations, and imagine what the
world would be like if it were pure (but still powerful).

Besides the "garbage collector" running now and then, there could,
given the mathematical foundations of FP systems, also be an
"efficiency expert" hanging around to tighten up your sloppy code.

Jordan Pollack
University of Illinois
...!pur-ee!uiucdcs!uicsl!pollack

P.S. There is a recent paper by Lenat from Rand called "Cognitive
Economy" which discusses some possible advances in computing
environment maintenance; I don't recall it being linked to FP
systems, however.

------------------------------

Date: 16 Aug 83 20:33:29 EDT  (Tue)
From: Mark Weiser <mark%umcp-cs@UDel-Relay>
Subject: maximum speed

This *maximum* time business needs further ground rules if we are to
discuss it here (which we probably shouldn't).  For instance, the
argument that communication and multiplication paths don't matter in an
nxn matrix multiply, but that the limiting step is the summation of n
numbers, seems to allow too much power in specifying components.  I am
allowed unboundedly many processors and communication paths, but only
a tree of adders?  I can build you a circuit that will add n numbers
simultaneously, so that means the *maximum* speed of an nxn matrix
multiply is constant.  But it just ain't so.  As n grows larger and
larger and larger the communication paths and the addition circuitry 
will also either grow and grow and grow, or the algorithm will slow
down.  Good old time-space tradeoff.

        (Another time-space tradeoff for matrix multiply on digital
computers:  just remember all the answers and look them up in ROM.
Result: constant time matrix multiply for bounded n.)
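
[A minimal sketch in present-day Python (my illustration, not part of
the original post): a balanced reduction tree sums n numbers in
ceil(log2 n) "parallel steps", so with enough multipliers an nxn
matrix product takes O(log n) time -- but the adder tree itself grows
with n, which is exactly the time/space tradeoff at issue.

```python
import math

def tree_sum(values):
    """Sum values pairwise, returning (total, parallel_steps).

    Each pass adds every adjacent pair at once, modeling one layer
    of a tree of adders; the number of passes is ceil(log2 n)."""
    steps = 0
    while len(values) > 1:
        values = [values[i] + values[i + 1] if i + 1 < len(values)
                  else values[i]
                  for i in range(0, len(values), 2)]
        steps += 1
    return values[0], steps

total, steps = tree_sum(list(range(8)))
assert total == 28 and steps == 3            # ceil(log2 8) = 3
total, steps = tree_sum(list(range(1000)))
assert steps == math.ceil(math.log2(1000))   # 10 layers, not 999 adds
```

The adder count grows linearly with n while time grows only
logarithmically; fix the circuit size and let n grow, and the
algorithm must slow down, as the post says.]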

------------------------------

Date: 16 Aug 1983 2016-MDT
From: William Galway <Galway@UTAH-20>
Subject: NP-completeness and parallelism, humor

Perhaps AI-digest readers will be amused by the following
article.  I believe it's by Danny Cohen, and appears in the
proceedings of the CMU Conference on VLSI Systems and
Computations, pages 124-125, but this copy was dug out of local
archives.

..................................................................

                      The VLSI Approach to
                    Computational Complexity

                      Professor J. Finnegan
                 University of Oceanview, Kansas
             (Formerly with the DP department of the
                First National Bank of Oceanview)

The rapid advance of  VLSI and the trend  toward the decrease  of
the geometrical  feature  size,  through the  submicron  and  the
subnano to the subpico, and beyond, have dramatically reduced the
cost  of  VLSI  circuitry.   As  a  result,  many   traditionally
unsolvable problems  can now  (or  will in  the near  future)  be
easily implemented using VLSI technology.

For example, consider the  traveling salesman problem, where  the
optimal sequence of N nodes ("cities") has to be found.   Instead
of  applying  sophisticated   mathematical  tools  that   require
investment in human thinking, which because of the rising cost of
labor  is  economically  unattractive,  VLSI  technology  can  be
applied to  construct  a  simple  machine  that  will  solve  the
problem!

The traveling salesman problem is considered difficult because of
the requirement  of finding  the best  route out  of N!  possible
ones.  A conventional single processor would require O(N!)  time,
but with clever use of VLSI technology this problem can easily be
solved in polynomial time!!

The solution is obtained with a simple VLSI array having only  N!
processors.  Each  processor is  dedicated to  a single  possible
route that  corresponds  to  a certain  permutation  of  the  set
[1,2,3,..N].  The time to load the distance matrix and to  select
the shortest  route(s)  is  only  polynomial  in  N.   Since  the
evaluation of  each route  is  linear in  N, the  entire  system
solves the problem in just polynomial time! Q.E.D.
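
[Playing along with the satire for a moment (my sketch, hypothetical
distances): the machine assigns one processor per permutation, each
evaluating its route in time linear in N.  On a single processor the
same search is the familiar O(N!) brute force:

```python
from itertools import permutations

def shortest_route(dist):
    """Brute-force traveling salesman: try all N! orderings of the
    N cities (sequentially here; the satirical machine would give
    each permutation its own processor and try them all at once)."""
    n = len(dist)
    best_route, best_len = None, float("inf")
    for perm in permutations(range(n)):
        # Evaluating one route is linear in N, as the article says.
        length = sum(dist[perm[i]][perm[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_route, best_len = perm, length
    return best_route, best_len

# 4 cities on a line at positions 0, 1, 2, 3:
dist = [[abs(i - j) for j in range(4)] for i in range(4)]
route, length = shortest_route(dist)
assert length == 6   # go down the line and return: 1 + 1 + 1 + 3
```
]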

Readers familiar only with conventional computer architecture may
wrongly suspect  that  the  communication between  all  of  these
processors is too expensive (in area).  However, with the use  of
wireless communication this problem is easily solved without  the
traditional, conventional area penalty.   If the system fails  to
obtain  from  the  FCC  the  required  permit  to  operate  in  a
reasonable  domain  of  the  frequency  spectrum,  it  is  always
possible to  use  microlasers and  picolasers  for  communicating
either through a light-conducting  substrate (e.g.  sapphire)  or
through a convex light-reflecting surface mounted parallel to the
device.   The  CSMA/CD  (Carrier  Sense  Multiple  Access,   with
Collision Detection) communication  technology, developed in  the
early seventies,  may  be found  to  be most  helpful  for  these
applications.

If it is necessary to  solve a problem with  a larger N than  the
one for which the system  was initially designed, one can  simply
design another system for that particular  value of N, or even  a
larger  one,  in  anticipation   of  future  requirements.    The
advancement of  VLSI  technology  makes  this  iterative  process
feasible and attractive.

This approach is not new.  In the early eighties many researchers
discovered the possibility of  accelerating the solution of  many
NP-complete problems by a simple  application of systems with  an
exponential number of processors.

Even earlier, in the late seventies, many scientists discovered
that problems with polynomial complexity could also be solved in
lower time (than the complexity) by using a number of processors
which  is  also  a  polynomial  function  of  the  problem  size,
typically of  a  lower  degree.   NxN  matrix  multiplication  by
systems with N↑2 processors used to  be a very popular topic  for
conversations and  conference papers,  even though  less  popular
among system builders.  The requirement of dealing with the variable N
was (we believe)  handled by  the simple  P/O technique,  namely,
buying a new system for any other value of N, whenever needed.

According to the most  popular model of those  days, the cost  of
VLSI processors decreases  exponentially.  Hence the  application
of an exponential number  of processors does  not cause any  cost
increase, and  the application  of only  a polynomial  number  of
processors results in a substantial cost saving!!  The fact  that
the former exponential decrease refers  to calendar time and  the
latter to problem size probably has no bearing on this discussion
and should be ignored.

The famous Moore model of exponential cost decrease was based  on
plotting the time  trend (as has  been observed in  the past)  on
semilogarithmic scale.   For that  reason  this model  failed  to
predict the present  as seen  today.  Had  the same  observations
been plotted on a simple linear  scale, it would be obvious  that
the cost of VLSI processors is already (or about to be) negative.
This must be the case, or else there is no way to explain why  so
many researchers  design systems  with an  exponential number  of
processors and compete  for solving  the same  problem with  more
processors.

CONCLUSIONS

 - With the rapid advances of VLSI technology anything is possible.

 - The more VLSI processors in a system, the better the paper.

------------------------------

End of AIList Digest
********************

∂18-Aug-83  1135	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #42
Received: from SRI-AI by SU-AI with TCP/SMTP; 18 Aug 83  11:29:18 PDT
Date: Thursday, August 18, 1983 9:54AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #42
To: AIList@SRI-AI


AIList Digest           Thursday, 18 Aug 1983      Volume 1 : Issue 42

Today's Topics:
  Fifth Generation - National Security,
  Artificial Intelligence - Prejudice & Turing Test
----------------------------------------------------------------------

Date: Tue, 16 Aug 83 13:32:17 EDT
From: Morton A. Hirschberg <mort@brl-bmd>
Subject: AI & Morality

  The human manner has led to all sorts of abuses.  Indeed your latest
series of messages (e.g. Spaf) has offended me.  Maybe he meant
humane?  In any event there is no need to be vulgar to make a point.
Any point.

  There are some of us who work for the US government who are very
aware of the threats of exporting high technology and deeply concerned
about the free exchange of data and information and the benefits of
such exchange.  It is only in recent years and maybe because of the
Japanese that academia has taken a greater interest in areas which
they were unwilling to look at before (current economics also makes
for strange bedfellows). Industry has always had an interest (if for
nothing more than to show us a better? wheel for bigger!  bucks).  We
are in a good position to maintain the military-industrial-university
complex (not sorry if this offends anyone) and get some good work 
done.  Recent government policy may restrict high technology flow so
that you might not even get on that airplane soon.

[...]

Mort

------------------------------

Date: Tue, 16 Aug 83 17:15:24 EDT
From: Joe Buck <buck@NRL-CSS>
Subject: frame theory of prejudice


We've heard on this list that we should consider flamers and bigots 
less than human. But doesn't Minsky's frame theory suggest that
prejudice is simply a natural by-product of the way our minds work?
When we enter a new situation, we access a "script" containing default
assumptions about the situation. If the default assumptions are
"sticky" (don't change to agree with newly obtained information), the
result is prejudice.

When I say "doctor", a picture appears in your mind, often quite
detailed, containing default assumptions about sex, age, physical
appearance, etc.  In some people, these assumptions are more firmly
held than in others.  Might some AI programs designed along these
lines show phenomena resembling human prejudice?
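
[The "sticky default" idea can be caricatured in a few lines of
present-day Python (my sketch, not Minsky's formulation): a frame
supplies defaults for unfilled slots, and a sticky frame refuses to
displace a default once it has been relied upon.

```python
class Frame:
    """Toy frame: slot defaults plus observed fillers.

    If sticky, a default that has already been retrieved is never
    displaced by later evidence -- the caricature of prejudice."""
    def __init__(self, defaults, sticky=False):
        self.defaults = dict(defaults)
        self.filled = {}
        self.used = set()
        self.sticky = sticky

    def get(self, slot):
        if slot in self.filled:
            return self.filled[slot]
        self.used.add(slot)            # we committed to the default
        return self.defaults[slot]

    def observe(self, slot, value):
        if self.sticky and slot in self.used:
            return                     # new information is ignored
        self.filled[slot] = value

doctor = Frame({"age": "middle-aged", "attire": "white coat"}, sticky=True)
assert doctor.get("age") == "middle-aged"   # default assumption accessed
doctor.observe("age", "young")              # contrary evidence arrives...
assert doctor.get("age") == "middle-aged"   # ...but the default sticks
```
]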

                                                Joe Buck
                                                buck@nrl-css

------------------------------

Date: 16 Aug 1983 1437-PDT
From: Jay <JAY@USC-ECLC>
Subject: Turing Test; Parry, Eliza, and Flamer

Parry and Eliza are fairly famous early AI projects.  One acts
paranoid, the other acts like an interested analyst.  How about
reviving the idea and challenging the Turing test?  Flamer is born.

Flamer would read messages from the net and then reply to the 
sender/bboard denying all the person said, insulting him, and in 
general making unsupported statements.  I suggest some researchers out
there make such a program and put it on the net.  The goal would be 
for the readers of the net try to detect the Flamer, and for Flamer to
escape detection.  If the Flamer is not discovered, then it could be 
considered to have passed the Turing test.

Flamer has the advantage of being able to take a few days to
formulate a reply; it could consult many related online sources, it
could request information concerning the subject from experts (human,
or otherwise), it could perform statistical analysis of other flames
to make appropriate word choices, it could make common errors 
(gramical, syntactical, or styleistical), and it could perform other 
complex computations.

Perhaps Flamer is already out there, and perhaps this message is 
generated by such a program.

j'

------------------------------

Date: 16 Aug 83 20:57:20 EDT  (Tue)
From: Mark Weiser <mark%umcp-cs@UDel-Relay>
Subject: artificially intelligent bigots.

I agree that bigotry and intelligence exclude each other.  An
Eliza-like bigotry program would be simple in direct proportion to its
bigotry.

------------------------------

Date: 15 Aug 83 20:05:24-PDT (Mon)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: AI Projects on the Net
Article-I.D.: ssc-vax.417


This is a really fun topic.  The problem of the Turing Test is 
enormously difficult and *very* subtle (either that or we're 
overlooking something really obvious).  Now the net provides a
gigantic lab for enterprising researchers to try out their latest
attempts.  So far I have resisted the temptation, since there are more
basic problems to solve first!  The curious thing about an AI project
is that it can be made infinitely complicated (programs are like that;
consider emacs or nroff), certainly enough to simulate any kind of
behavior desired, whether it be bigotry, right-wingism, irascibility,
mysticism, or perhaps even ordinary rational thought.  This has been 
demonstrated by several programs, among them PARRY (simulates 
paranoia), and POLITICS (simulates arguments between ideologues) (mail
me for refs if interested).  So it doesn't appear that there is a way
to detect an AI project, based on any *particular* behavior.

A more productive approach might be to look for the capability to vary
behavior according to circumstances (self-modifiability).  I can note
that all humans appear capable of modifying their behavior, and that
very few AI programs can do so.  However, not all human behavior can
be modified, and much cannot be modified easily.  "Try not to think of
a zebra for the next ten minutes" - humans cannot change their own
thought processes to manage this feat, while an AI program would not
have much problem.  In fact, Lenat's Eurisko system (assuming we can
believe all the claims) has the capability to speed up its own
operation: it learned that Lisp 'eq' and 'equal' are the same for
atoms, and changed function references in its own code.  So the
ability to change behavior cannot be a criterion.
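
[The eq/equal distinction in question, sketched in present-day Python
(my illustration, not Eurisko's code): Lisp's EQ is a single pointer
comparison, EQUAL is a recursive structural comparison, and for
interned atoms the two coincide, so the cheap test can be substituted.

```python
class Cons:
    """Minimal Lisp-style cons cell."""
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

def eq(a, b):
    return a is b                      # one pointer comparison

def equal(a, b):
    """Recursive structural comparison, like Lisp's EQUAL."""
    if isinstance(a, Cons) and isinstance(b, Cons):
        return equal(a.car, b.car) and equal(a.cdr, b.cdr)
    return a is b                      # atoms: reduces to identity

FOO = "foo"                            # stand-in for an interned symbol
assert eq(FOO, FOO) and equal(FOO, FOO)   # agree on atoms
x = Cons(FOO, None)
y = Cons(FOO, None)
assert equal(x, y) and not eq(x, y)       # differ only on structures
```
]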

So how does one decide?  The question is still open....

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

ps I thought about Zeno's Paradox recently - the Greeks (especially 
Archimedes) were about a hair's breadth away from discovering 
calculus, but Zeno had crippled everybody's thinking by making a 
"paradox" where none existed.  Perhaps the Turing Test is like
that....

------------------------------

End of AIList Digest
********************

∂19-Aug-83  1927	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #43
Received: from SRI-AI by SU-AI with TCP/SMTP; 19 Aug 83  19:26:11 PDT
Date: Friday, August 19, 1983 5:26PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #43
To: AIList@SRI-AI


AIList Digest           Saturday, 20 Aug 1983      Volume 1 : Issue 43

Today's Topics:
  Administrivia - Request for Archives,
  Bindings - J. Pearl,
  Programming Languages - Loglisp & LISP CAI Packages,
  Automatic Translation - Lisp to Lisp,
  Knowledge Representation,
  Bibliographies - Sources & AI Journals
----------------------------------------------------------------------

Date: Thu 18 Aug 83 13:19:30-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Archives

I would like to hear from systems people maintaining AIList archives
at their sites.  Please msg AIList-Request@SRI-AI if you have an
online archive that is publicly available and likely to be available
under the same file name(s) for the foreseeable future.  Send any
special instructions needed (beyond anonymous FTP).  I will then make
the information available to the list.

                                        -- Ken Laws

------------------------------

Date: Thu, 18 Aug 83 13:50:16 PDT
From: Judea Pearl <f.judea@UCLA-LOCUS>
Subject: change of address

Effective September 1, 1983 and until March 1, 1984 Judea Pearl's 
address will be :

        Judea Pearl
        c/o Faculty of Management
        University of Tel Aviv
        Ramat Aviv, ISRAEL

Dr. Pearl will be returning to UCLA at that time.

------------------------------

Date: Wednesday, 17 Aug 1983 17:52-PDT
From: narain@rand-unix
Subject: Information on Loglisp


You can get Loglisp (language or reports) by writing to J.A. Robinson
or E.E. Sibert at:

      C.I.S.
      313 Link Hall
      Syracuse University
      Syracuse, NY 13210


A paper on LOGLISP also appeared in "Logic Programming" eds. Clark and
Tarnlund, Academic Press 1982.

-- Sanjai

------------------------------

Date: 17 Aug 83 15:19:44-PDT (Wed)
From: decvax!ittvax!dcdwest!benson @ Ucb-Vax
Subject: LISP CAI Packages
Article-I.D.: dcdwest.214

Is there a computer-assisted instructional package for LISP that runs
under 4.1 bsd ?  I would appreciate any information available and will
summarize what I learn ( about the package) in net.lang.lisp.

Peter Benson decvax!ittvax!dcdwest!benson

------------------------------

Date: 17-AUG-1983 19:27
From: SHRAGER%CMU-PSY-A@CMU-CS-PT
Subject: Lisp to Lisp translation again


I'm glad that I didn't have to start this discussion up this time.
Anyhow, here's a suggestion that I think should be implemented but
which requires a great deal of Lisp community cooperation.  (Oh
dear...perhaps it's dead already!)

Probably the most intercompatible language around (next to TRAC) is
APL.  I've had a great deal of success moving APL workspaces from one
implementation to another with a minimum of effort.  Now, part of this
has to do with the fact that APL's primitive set can't be extended
easily, but if you think about it, the question of exactly how you
get all the stuff in a workspace from one machine to the other isn't
an easy one to answer.  The special character set makes each machine's
representation a little different and, of course, trying to send the
internal form would be right out!

The APL community solved this rather elegantly: they have a thing
called a "workspace interchange standard" which is in a canonical code
whose first 256 bytes are the atomic vector (character codes) for the
source machine, etc.  The beauty of this canonical representation
isn't just that it exists, but rather that the translation to and from
this code is the RESPONSIBILITY OF THE LOCAL IMPLEMENTOR!  That is,
for example, if I write a program in Franz and someone at Xerox wants
it, I run it through our local workspace outgoing translator which
puts it into the standard form and then I ship them that (presumably
messy) version.  They have a compatible ingoing translator which takes
certain combinations of constructs and translates them to InterLisp.
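
[The interchange scheme just described -- one canonical form, with
each site responsible only for its own outgoing and incoming
translators -- can be sketched as follows (a hypothetical toy, not
the actual APL workspace interchange standard):

```python
import json

def franz_to_canonical(src):
    """The local implementor's outgoing translator: map dialect
    constructs into the one agreed-upon canonical form."""
    return json.dumps({"dialect": "franz",
                       "code": src.replace("defun", "define")})

def canonical_to_interlisp(blob):
    """The receiving site's incoming translator: map the canonical
    form into the local dialect."""
    ws = json.loads(blob)
    return ws["code"].replace("define", "DEFINEQ")

wire = franz_to_canonical("(defun f (x) x)")
assert canonical_to_interlisp(wire) == "(DEFINEQ f (x) x)"
```

The payoff is that N dialects need only 2N translators to and from
the standard, instead of N*(N-1) pairwise ones.]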

Now, of course, this isn't all that easy.  First we'd have to agree on
a standard but that's not so bad.  Most of the difficulty in deciding
on a standard Lisp is taste and that has nothing to do with the form
of the standard since no human ever writes in it.  Another difficulty
(here I am indebted to Ken Laws) is that many things have impure
semantics and so cannot be cleanly translated into another form --
take, for example, the spaghetti stack (please!). Anyhow, I never said
it would be easy but I don't think that it's all that difficult either
-- certainly it's easier than the automatic programming problem.

I'll bet this would make a very interesting dissertation for some
bright young Lisp hacker.  But the difficult part isn't any particular
translator.  Each is hand tailored by the implementors/supporters of a
particular lisp system. The difficult part is getting the Lisp world
to follow the example of a computing success, as, I think, the APL
world has shown workspace interchange to be.

------------------------------

Date: 18 Aug 83 15:31:18-PDT (Thu)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Knowledge Representation, Programming Styles
Article-I.D.: ssc-vax.437

Actually, trees can be expressed as attribute-value pairs.  I've had
to do that to get around certain %(&↑%$* OPS5 limitations, so it's
possible, but not pretty.  However, many times your algebraic/tree
expressions/structures have duplicated components, in which case you
would like to join two nodes at lower levels.  You then end up with a
directed structure only.  (This is also a solution for multiple
inheritance problems.)
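
[A small illustration of the point (my example): a tree encoded as
(node, attribute, value) triples, the flat form a production-system
working memory forces.  Sharing one subtree from two parents turns
the tree into the directed structure (a DAG) mentioned above.

```python
# (node, attribute, value) triples encoding (x + a*b) and (y + a*b),
# with the a*b subtree joined rather than duplicated:
triples = [
    ("plus1",  "left",  "x"),
    ("plus1",  "right", "times1"),
    ("plus2",  "left",  "y"),
    ("plus2",  "right", "times1"),   # same node joined from two parents
    ("times1", "left",  "a"),
    ("times1", "right", "b"),
]

parents = [n for (n, attr, v) in triples if v == "times1"]
assert parents == ["plus1", "plus2"]   # two parents => a DAG, not a tree
```
]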

I'll refrain from flaming about traditional (including logic)
grammars.  I'm tired of people insisting on a restricted view of
language that claims that grammar rules are the ultimate description
of syntax (semantics being irrelevant) and that idioms are irritating 
special cases.  I might note that we have basically solved the
language analysis problem (using a version of Berkeley's Phrase
Analysis that handles ambiguity) and are now working on building a
language learner to speed up the knowledge acquisition process, as
well as other interesting projects.

I don't recall a von Neumann bottleneck in AI programs, at least not 
of the kind Backus was talking about.  The main bottleneck seems to be
of a conceptual rather than a hardware nature.  After all, production 
systems are not inherently bottlenecked, but nobody really knows how 
to make them run concurrently, or exactly what to do with the results 
(I have some ideas though).

                                        stan the lep hack
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: 16 Aug 83 10:43:54-PDT (Tue)
From: ihnp4!ihuxo!fcy @ Ucb-Vax
Subject: How does one obtain university technical reports?
Article-I.D.: ihuxo.276

I think the bibliographies being posted to the net are great.  I'd 
like to follow up on some of the references, but I don't know where to
obtain copies for many of them.  Is there some standard protocol and
contact point for requesting copies of technical reports from 
universities?  Is there a service company somewhere from which one 
could order such publications with limited distribution?

                        Curiously,

                        Fred Yankowski
                        Bell Labs Rm 6B-216
                        Naperville, IL
                        ihnp4!ihuxo!fcy


[I published all the addresses I know in V1 #8, May 22.  Two that
might be of help are:

    National Technical Information Service
    5285 Port Royal Road
    Springfield, Virginia  22161

    University Microfilms
    300 North Zeeb Road
    Ann Arbor, MI  48106

You might be able to get ordering information for many sources
through your corporate or public library.  You could also contact
LIBRARY@SCORE; I'm sure Richard Manuck  would be willing to help.
If all else fails, put out a call for help through AIList. -- KIL]

------------------------------

Date: 17 Aug 83 1:14:51-PDT (Wed)
From: decvax!genrad!mit-eddie!gumby @ Ucb-Vax
Subject: Re: How does one obtain university technical reports?
Article-I.D.: mit-eddi.616

Bizarrely enough, MIT and Stanford AI memos were recently issued by 
some company on MICROFILM (!) for some exorbitant price.  This price 
supposedly gives you all of them plus an introduction by Marvin
Minsky.  They advertised in Scientific American a few months ago.  I
guess this is a good deal for large institutions like Bell, but
smaller places are unlikely to have a microfilm (or was it fiche)
reader.

MIT AI TR's and memos can be obtained from Publications, MIT AI Lab, 
8th floor, 545 Technology Square, Cambridge, MA 02139.


[See AI Magazine, Vol. 4, No. 1, Winter-Spring 1983, pp. 19-22, for 
Marvin Minsky's "Introduction to the COMTEX Microfiche Edition of the
Early MIT Artificial Intelligence Memos".  An ad on p. 18 offers the
set for $2450.  -- KIL]

------------------------------

Date: 17 Aug 83 10:11:33-PDT (Wed)
From: harpo!eagle!mhuxt!mhuxi!mhuxa!ulysses!cbosgd!cbscd5!lvc @
      Ucb-Vax
Subject: List of AI Journals
Article-I.D.: cbscd5.419

Here is the list of AI journals that I was able to put together from
the generous contributions of several readers.  Sorry about the delay.
Most of the addresses, summary descriptions, and phone numbers for the
journals were obtained from "The Standard Periodical Directory"
published by Oxbridge Communications Inc., 183 Madison Avenue, Suite
1108, New York, NY 10016, (212) 689-8524.  Other sources you may wish to
try are Ulrich's International Periodicals Directory, and Ayer
Directory of Publications.  These three reference books should be
available in most libraries.

*************************
AI Journals and Magazines 
*************************

------------------------------
AI Magazine
        American Association for Artificial Intelligence
        445 Burgess Drive
        Menlo Park, CA 94025
        (415) 328-3123
        AAAI-OFFICE@SUMEX-AIM
        Quarterly, $25/year, $15 Student, $100 Academic/Corporate
------------------------------
Artificial Intelligence
        Elsevier Science Publishers B.V. (North-Holland)
        P.O. Box 211
        1000 AE Amsterdam, The Netherlands
        About 8 issues/year, 880 Df. (approx. $352)
------------------------------
American Journal of Computational Linguistics
        Donald E. Walker
        SRI International
        333 Ravenswood Avenue
        Menlo Park, CA 94025
        (415) 859-3071
        Quarterly, individual ACL members $15/year, institutions $30.
------------------------------
Robotics Age
        Robotics Publishing Corp.
        174 Concord St., Peterborough NH 03458 (603) 924-7136
        Technical articles related to design and implementation of
        intelligent machine systems
        Bimonthly, No price quoted
------------------------------
SIGART Newsletter
        Association for Computing Machinery
        11 W. 42nd St., 3rd fl.
	New York NY 10036
	(212) 869-7440
        Artificial intelligence news, reports, abstracts,
        educational material, etc.  Book reviews.
        Bimonthly $12/year, $3/copy
------------------------------
Cognitive Science
        Ablex Publishing Corp.
        355 Chestnut St.
	Norwood NJ 07648
	(201) 767-8450
        Articles devoted to the emerging fields of cognitive
        psychology and artificial intelligence.
        Quarterly $22/year
------------------------------
International Journal of Man Machine Studies
        Academic Press Inc.
        111 Fifth Avenue
	New York NY 10003
	(212) 741-4000
        No description given.
        Quarterly $26.50/year
------------------------------
IEEE Transactions on Pattern Analysis and Machine Intelligence
        IEEE Computer Society
        10662 Los Vaqueros Circle,
	Los Alamitos CA 90720
	(714) 821-8380
        Technical papers dealing with advancements in artificial
        machine intelligence
        Bimonthly $70/year, $12/copy
------------------------------
Behavioral and Brain Sciences
        Cambridge University Press
        32 East 57th St.
	New York NY 10022
	(212) 688-8885
        Scientific form of research in areas of psychology,
	neuroscience, behavioral biology, and cognitive science,
	continuing open peer commentary is published in each issue
        Quarterly $95/year, $27/copy
------------------------------
Pattern Recognition
        Pergamon Press Inc.
        Maxwell House, Fairview Park
        Elmsford NY 10523
	(914) 592-7700
        Official journal of the Pattern Recognition Society
        Bimonthly $170/year, $29/copy
------------------------------

************************************
Other journals of possible interest.
************************************

------------------------------
Brain and Cognition
        Academic Press
        111 Fifth Avenue
	New York NY 10003
	(212) 741-6800
        The latest research in the nonlinguistic aspects of neuro-
        psychology.
        Quarterly $45/year
------------------------------
Brain and Language
        Academic Press, Journal Subscription
        111 Fifth Avenue
	New York NY 10003
	(212) 741-6800
        No description given.
        Quarterly $30/year
------------------------------
Human Intelligence
        P.O. Box 1163
        Birmingham MI 48012
	(313) 642-3104
        Explores the research and application of ideas on human
	intelligence.
        Bimonthly newsletter - No price quoted.
------------------------------
Intelligence
        Ablex Publishing Corp.
        355 Chestnut St.
	Norwood NJ 07648
	(201) 767-8450
        Original research, theoretical studies and review papers
        contributing to understanding of intelligence.
        Quarterly $20/year
------------------------------
Journal of the Assn. for the Study of Perception
        P.O. Box 744
	DeKalb IL 60115
        No description given.
        Semiannually $6/year
------------------------------
Computational Linguistics and Computer Languages
        Humanities Press
        Atlantic Highlands NJ 07716
	(201) 872-1441
        Articles deal with syntactic and semantic of [missing word]
        languages relating to math and computer science, primarily
        those which summarize, survey, and evaluate.
        Semimonthly $46.50/year
------------------------------
Annual Review in Automatic Programming
        Maxwell House, Fairview Park
        Elmsford NY 10523
	(914) 592-7700
        A comprehensive treatment of some major topics selected
        for their current importance.
        Annual $57/year
------------------------------
Computer
        IEEE Computer Society
        10662 Los Vaqueros Circle
        Los Alamitos, CA 90720
        (714) 821-8380
        Monthly, $6/copy, free with Computer Society Membership
------------------------------
Communications of the ACM
        Association for Computing Machinery
        11 West 42nd Street
        New York, NY 10036
        Monthly, $65/year, free with membership ($50, $15 student)
------------------------------
Journal of the ACM
        Association for Computing Machinery
        11 West 42nd Street
        New York, NY 10036
        Computer science, including some game theory,
        search, foundations of AI
        Quarterly, $10/year for members, $50 for nonmembers
------------------------------
Cognition
        Associated Scientific Publishers b.v.
        P.O. Box 211
        1000 AE Amsterdam, The Netherlands
        Theoretical and experimental studies of the mind, book reviews
        Bimonthly, 140 Df./year (~ $56), 240 Df. institutional
------------------------------
Cognitive Psychology
        Academic Press
        111 Fifth Avenue
        New York, NY 10003
        Quarterly, $74 U.S., $87 elsewhere
------------------------------
Robotics Today
        Robotics Today
        One SME Drive
        P.O. Box 930
        Dearborn, MI 48121
        Robotics in Manufacturing
        Bimonthly, $36/year unless member of SME or RIA
------------------------------
Computer Vision, Graphics, and Image Processing
        Academic Press
        111 Fifth Avenue
        New York, NY 10003
        $260/year U.S. and Canada, $295 elsewhere
------------------------------
Speech Technology
        Media Dimensions, Inc.
        525 East 82nd Street
        New York, NY 10028
        (212) 680-6451
        Man/machine voice communications
        Quarterly, $50/year
------------------------------

*******************************
    Names, but no addresses
*******************************

        Magazines
        --------

AISB Newsletter

        Proceedings
        -----------

IJCAI	International Joint Conference on AI
AAAI	American Association for Artificial Intelligence
TINLAP	Theoretical Issues in Natural Language Processing
ACL	Association for Computational Linguistics
AIM	AI in Medicine
MLW	Machine Learning Workshop
CVPR	Computer Vision and Pattern Recognition (formerly PRIP)
PR	Pattern Recognition
IUW	Image Understanding Workshop (DARPA)
T&A	Trends and Applications (IEEE, NBS)
DADCM	Workshop on Data Abstraction, Databases, and Conceptual Modeling
CogSci	Cognitive Science Society
EAIC	European AI Conference


Thanks again to all that contributed.

Larry Cipriani
cbosgd!cbscd5!lvc

------------------------------

End of AIList Digest
********************

∂22-Aug-83  1145	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #44
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Aug 83  11:41:46 PDT
Date: Monday, August 22, 1983 9:39AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #44
To: AIList@SRI-AI


AIList Digest            Monday, 22 Aug 1983       Volume 1 : Issue 44

Today's Topics:
  AI Architecture - Parallel Processor Request,
  Computational Complexity - Maximum Speed,
  Functional Programming,
  Concurrency - Production Systems & Hardware,
  Programming Languages - NETL
----------------------------------------------------------------------

Date: 18 Aug 83 17:30:43-PDT (Thu)
From: decvax!linus!philabs!sdcsvax!noscvax!revc @ Ucb-Vax
Subject: Looking for parallel processor systems
Article-I.D.: noscvax.182

We have been looking into systems to replace our current ANALOG
computers.  They are the central component in a real time simulation
system.  To date, the only system we've seen that looks like it might
do the job is the Zmob system being built at the Univ. of Md (Mark
Weiser).

I would appreciate it if you could supply me with pointers to other
systems that might support high speed, high quality, parallel
processing.

Note: most High Speed networks are just too slow and we can't justify
a Cray-1.

Bob Van Cleef

uucp : {decvax!ucbvax || philabs}!sdcsvax!nosc!revc arpa : revc@nosc 
CompuServe : 71565,533

------------------------------

Date: 19 Aug 83 20:29:13-PDT (Fri)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: maximum speed
Article-I.D.: ssc-vax.445

Hmmm, I didn't know that addition of n numbers could be performed 
simultaneously - ok then, constant time matrix multiplication, given 
enough processors.  I still haven't seen any hard data on limits to
speed because of communications problems.  If it seems like there are
limits but you can't prove it, then maybe you haven't discovered the
cleverest way to do it yet...

                                        stan the lep hack
                                        ssc-vax!sts (soon utah-cs)

ps The space cost of constant or log time matrix mults is of course
   ridiculous

pps Perhaps this should move to net.applic?

------------------------------

Date: Fri, 19 Aug 83 15:08:15 EDT
From: Paul Broome (CTAB) <broome@brl-bmd>
Subject: Re: Functional programming and AI

Stan,

Let me climb into my pulpit and respond to your FP/AI prod.  I don't 
think FP and AI are diametrically opposed.  To refresh everyone's
memory here are some of your comments.


        ...  Having worked with both AI and FP languages,
        it seems to me that the two are diametrically
        opposed to one another.  The ultimate goal of functional
        programming language research is to produce a language that
        is as clean and free of side effects as possible; one whose
        semantic definition fits on a single side of an 8 1/2 x 11
        sheet of paper ...

Looking at Backus' Turing award lecture, I'd have to say that
cleanliness and freedom from side effects are two of Backus' goals but
certainly not succinctness of definition.  In fact Backus says (CACM,
Aug.  78, p. 620), "Our intention is to provide FP systems with widely
useful and powerful primitive functions rather than weak ones that 
could then be used to define useful ones."

Although FP has no side effects, Backus also talked about applicative
state transition systems (AST) with one top-level change of state per
computation, i.e. one side effect.  The world of expressions is a
nice, orderly one; the world of statements has all the mush.  He's
trying to move the statement part out of the way.

I'd have to say one important part of the research in FP systems is to
define and examine functional forms (program forming operations) with 
nice mathematical properties.  A good way to incorporate (read 
implement) a mathematical concept in a computer program is without 
side effects.  This side effect freeness is nice because it means that
a program is 'referentially transparent', i.e. it can be used without
concern about collision with internal names or memory locations AND
the program is dependable; it always does the same thing.
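[A minimal modern sketch of the referential-transparency point, in Python rather than an FP language; the function names are invented for illustration:

```python
# A referentially transparent function: its result depends only on its
# arguments, so any call can be replaced by its value (and scheduled
# on any processor, in any order).
def pure_scale(xs, k):
    return [k * x for x in xs]

# Its side-effecting counterpart writes to shared state, so its
# behavior depends on when, and how often, it is called.
history = []
def impure_scale(xs, k):
    history.append(len(xs))        # hidden write to shared state
    return [k * x for x in xs]

a = [1, 2, 3]
assert pure_scale(a, 2) == pure_scale(a, 2)   # always the same thing
impure_scale(a, 2)
impure_scale(a, 2)
assert history == [3, 3]   # the two calls left observable traces
```

-- Ed.]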

A second nice thing about applicative languages is that they are
appropriate for parallel execution.  In a shared memory model of
computation (e.g. Ada) it's very difficult (NP-complete, see CACM, a
couple of months ago) to tell if there is collision between
processors, i.e. is a processor overwriting data that another
processor needs.


        On the other hand, the goal of AI research (at least in the
        AI language area) is to produce languages that can effectively
        work with as tangled and complicated representations of
        knowledge as possible.  Languages for semantic nets, frames,
        production systems, etc, all have this character.

I don't think that's the goal of AI research but I can't offer a
better one at the moment.  (Sometimes it looks as if the goal is to
make money.)

Large, tangled structures can be handled in applicative systems but
not efficiently, at least I don't see how.  If you view a database
update as a function mapping the pair (NewData, OldDatabase) into
NewDatabase you have to expect a new database as the returned value.
Conceptually that's not a problem.  However, operationally there
should just be a minor modification of the original database when
there is no sharing and suspended modification when the database is
being shared.  There are limited transformations that can help but
there is much room for improvement.
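[One way to see the (NewData, OldDatabase) -> NewDatabase view is through structural sharing: rebuild only the changed path and reuse the rest. A shallow-copy Python sketch of the idea -- not an efficient persistent structure, just the shape of one:

```python
# A database update as a pure function: the old database is never
# mutated, and the unchanged parts of the new database are shared
# with (not copied from) the old one.
def insert(db, key, value):
    new_db = dict(db)      # copy only the top level of the structure
    new_db[key] = value
    return new_db          # callers still holding `old` are unaffected

old = {"facts": ("a", "b"), "rules": ("r1",)}
new = insert(old, "facts", ("a", "b", "c"))

assert old["facts"] == ("a", "b")     # old database unchanged
assert new["rules"] is old["rules"]   # unchanged part is shared, not copied
```

-- Ed.]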

An important point in all this is program transformation.  As we build
bigger and smarter systems we widen the gap between the way we think 
and the hardware.  We need to write clear, easy to understand, and 
large-chunked programs but transform them (within the same source 
language) into possibly less clear, but more efficient programs.  
Program transformation is much easier when there are no side effects.

        Now between the Japanese 5th generation project (and the US
        response) and the various projects to build non-vonNeumann
        machines using FP, it looks to me like the seeds of a
        controversy over the best way to do programming.  Should we be
        using FP languages or AI languages?  We can't have it both ways,
        right?  Or can we?

A central issue is efficiency.  The first FORTRAN compiler was viewed
with the same distrust that the public had about computers in general.
Early programmers didn't want to relinquish explicit management of
registers or whatever because they didn't think the compiler could do
as well as they.  Later there was skepticism about garbage collection
and memory management.  A multitude of sins is committed in the name
of (machine) efficiency at the expense of people efficiency.  We
should concern ourselves more with WHAT objects are stored than with
HOW they are stored.

There's no doubt that applicative languages are applicable.  The
Japanese (fortunately for them) are less affected by, as Dijkstra puts
it, "our antimathematical age."  And they, unlike us, are willing to
sacrifice some short term goals for long term goals.


- Paul Broome
  (broome@brl)

------------------------------

Date: 17 Aug 83 17:06:13-PDT (Wed)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: FP and AI - (nf)
Article-I.D.: ssc-vax.427

There *is* a powerful functional language underlying most AI programs
- Lisp!  But it's never pure Lisp.  The realization that got me to
thinking about this was the apparent necessity for list surgery,
sooner or later.  rplaca and allied functions show up in the strangest
places, and seem to be crucial to the proper functioning of many AI
systems (consider inheritance in frames or the construction of a
semantic network; perhaps method combination in flavors qualifies).
I'm not arguing that an FP language could *not* be used to build an AI
language on top; I'm thinking more about fundamental philosophical
differences between different schools of research.

                                        stan the lep hacker
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: Sat 20 Aug 83 12:28:17-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: So the language analysis problem has been solved?!?

I will also refrain from flaming, but not from taking to task 
excessive claims.

    I'll refrain from flaming about traditional (including
    logic) grammars.  I'm tired of people insisting on a
    restricted view of language that claims that grammar rules
    are the ultimate description of syntax (semantics being
    irrelevant) and that idioms are irritating special cases.  I
    might note that we have basically solved the language
    analysis problem (using a version of Berkeley's Phrase
    Analysis that handles ambiguity) ...

I would love to test that "solution of the language analysis 
problem"... As for the author being "tired of people insisting on a 
restricted ...", he is just tired of his own straw people, because 
there doesn't seem to be anybody around anymore claiming that 
"semantics is irrelevant".  Formal grammars (logic or otherwise) are 
just a convenient mathematical technique for representing SOME 
regularities in language in a modular and testable form. OF COURSE, a 
formal grammar seen from the PROCEDURAL point of view can be replaced 
by any arbitrary "ball of string" with the same operational semantics.
What this replacement does to modularity, testability and 
reproducibility of results is sadly clear in the large amount of 
published "research" in natural language analysis which is untestable 
and irreproducible. The methodological failure of this approach 
becomes obvious if one considers the analogous proposal of replacing 
the principles and equations of some modern physical theory (general 
relativity, say) by a computer program which computes "solutions" to 
the equations for some unspecified subset of their domain, some of 
these solutions being approximate or plain wrong for some (again 
unspecified) set of cases. Even if such a program were "right" all the
time (in contradiction with all our experience so far), its sheer 
opacity would make it useless as scientific explanation.

Furthermore, when mentioning "semantics", one better say which KIND of
semantics one means. For example, grammar rules fit very well with 
various kinds of truth-theoretic and model-theoretic semantics, so the
comment above cannot be about that kind of semantics. Again, a theory 
of semantics needs to be testable and reproducible, and, I would 
claim, it only qualifies if it allows the representation of a 
potential infinity of situation patterns in a finite way.

    I don't recall a von Neumann bottleneck in AI programs, at
    least not of the kind Backus was talking about.  The main
    bottleneck seems to be of a conceptual rather than a
    hardware nature.  After all, production systems are not
    inherently bottlenecked, but nobody really knows how to make
    them run concurrently, or exactly what to do with the
    results (I have some ideas though).

The reason why nobody knows how to make production systems run 
concurrently is simply because they use a global state and side 
effects. This IS precisely the von Neumann bottleneck, as made clear 
in Backus' article, and is a conceptual limitation with hardware 
consequences rather than a purely hardware limitation. Otherwise, why 
would Backus address the problem by proposing a new LANGUAGE (fp), 
rather than a new computer architecture?  If your AI program was 
written in a language without side effects (such as PURE Prolog), the 
opportunities for parallelism would be there. This would be 
particularly welcome in natural language analysis with logic (or other
formal) grammars, because dealing with more and more complex subsets 
of language needs an increasing number of grammar rules and rules of 
inference, if the results are to be accurate and predictable.  
Analysis times, even if they are polynomial on the size of the input, 
may grow EXPONENTIALLY with the size of the grammar.

                                Fernando Pereira
                                AI Center
                                SRI International
                                pereira@sri-ai

------------------------------

Date: 15 Aug 83 22:44:05-PDT (Mon)
From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
Subject: Re: data flow computers and PS's - (nf)
Article-I.D.: uiucdcs.2573


The nodes in a data-flow machine, in order to compute efficiently,
must be able to do a local computation.  This is why arithmetic or
logical operations are O.K. to distribute.  Your scheme, however,
seems to require that the database of propositions be available to
each node, so that the known facts can be deduced "instantaneously".
This would cause severe problems with the whole idea of concurrency,
because either the database would have to be replicated and passed
through the network, or an elaborate system of memory locks would need
to be established.

The Hearsay system from CMU was one of the early PS's with claims to a
concurrent implementation. There is a paper I remember in IEEE ToC (75
or 76) which discussed the problems of speedup and locks.

Also, I think John Holland (of Michigan?) is currently working on a 
parallel PS machine (but doesn't call it that!)


Jordan Pollack
University of Illinois
...!pur-ee!uiucdcs!uicsl!pollack

------------------------------

Date: 17 Aug 83 16:56:55-PDT (Wed)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: data flow computers and PS's - (nf)
Article-I.D.: ssc-vax.426

A concurrent PS is not too impossible, 'cause I've got one 
(specialized for NL processing and not actually implemented 
concurrently, but certainly capable).  It is true that the working
memory would have to be carefully organized, but that's a matter of
sufficiently clever design; there's no fundamental theoretical
problems.  Traditional approaches won't work, because two concurrently
operating rules may come to contradictory conclusions, both of which
may be valid.  You need a way to store both of these and use them.

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: 18 Aug 83 0516 EDT
From: Dave.Touretzky@CMU-CS-A
Subject: NETL

I am a graduate student of Scott Fahlman's, and I've been working on
NETL for the last five years.  There are some interesting lessons to
be learned from the history of the NETL project.  NETL was a
combination of a parallel computer architecture, called a parallel
marker propagation machine, and a representation language that
appeared to fit well on this architecture.  There will probably never
be a hardware implementation of the NETL Machine, although it is
certainly feasible.  Here's why...

The first problem with NETL is its radical semantics:  no one
completely understands their implications.  We (Scott Fahlman, Walter
van Roggen, and I) wrote a paper in IJCAI-81 describing the problems
we had figuring out how exceptions should interact with multiple
inheritance in the IS-A hierarchy and why the original NETL system
handled exceptions incorrectly.  We offered a solution in our paper,
but the solution turned out to be wrong.  When you consider that NETL
contains many features besides exceptions and inheritance, e.g.
contexts, roles, propositional statements, quantifiers, and so on, and
all of these features can interact (!!), so that a role (a "slot" in
frame lingo) may only exist within certain contexts, and have
exceptions to its existence (not its value, which is another matter)
in certain sub-contexts, and may be mapped multiple times because of
the multiple inheritance feature, it becomes clear just how 
complicated the semantics of NETL really is.  KLONE is in a similar 
position, although its semantics are less radical than NETL's.
Fahlman's book contains many simple examples of network notation
coupled with appeals to the reader's intuition; what it doesn't
contain is a precise mathematical definition of the meaning of a NETL
network because no such definition existed at that time.  It wasn't
even clear that a formal definition was necessary, until we began to
appreciate the complexity of the semantic problems.  NETL's operators
are *very* nonstandard; NETL is the best evidence I know of that
semantic networks need not be simply notational variants of logic,
even modal or nonmonotonic logics.

In my thesis (forthcoming) I develop a formal semantics for multiple 
inheritance with exceptions in semantic network languages such as
NETL.  This brings us to the second problem.  If we choose a
reasonable formal semantics for inheritance, then inheritance cannot
be computed on a marker propagation machine, because we need to pass
around more information than is possible on such a limited
architecture.  The algorithms that were supposed to implement NETL on
a marker propagation machine were wrong:  they suffered from race
conditions and other nasty behavior when run on nontrivial networks.
There is a solution called "conditioning" in which the network is
pre-processed on a serial machine by adding enough extra links to
ensure that the marker propagation algorithms always produce correct 
results.  But the need for serial preprocessing removes much of the 
attractiveness of the parallel architecture.
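[For readers without the IJCAI-81 paper, a toy sketch of the kind of inheritance-with-exceptions query at issue. This is only the flavor of the problem -- NETL's actual semantics, as described above, is far subtler -- and the hierarchy and property names are invented:

```python
# IS-A inheritance with exceptions: a property asserted at a more
# specific node overrides one inherited from above.  A serial,
# breadth-first lookup; nearer (more specific) nodes win.
isa   = {"tweety": ["penguin"], "penguin": ["bird"], "bird": []}
props = {"bird": {"flies": True}, "penguin": {"flies": False}}

def lookup(node, prop):
    frontier = [node]
    while frontier:
        nxt = []
        for n in frontier:
            if prop in props.get(n, {}):
                return props[n][prop]
            nxt.extend(isa.get(n, []))
        frontier = nxt                 # move one IS-A level up
    return None

assert lookup("tweety", "flies") is False   # the penguin exception wins
assert lookup("bird", "flies") is True
```

Even this toy version needs to compare distances in the hierarchy, which is more bookkeeping than single-bit markers can carry. -- Ed.]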

I think the NETL language design stands on its own as a major
contribution to knowledge representation.  It raises fascinating
semantic problems, most of which remain to be solved.  The marker
propagation part doesn't look too promising, though.  Systems with
NETL-like semantics will almost certainly be built in the future, but
I predict they will be built on top of different parallel
architectures.

-- Dave Touretzky

------------------------------

Date: Thu 18 Aug 83 13:46:13-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: NETL and hardware

        In Volume 40 of the AIList Alan Glasser asked about hardware 
implementations using marker passing a la NETL. The closest hardware I
am aware of is called the Connection Machine, and is being developed
at MIT by Alan Bawden, Dave Christman, and Danny Hillis (apologies if
I left someone out). The project involves building a model with about
2↑10 processors. I'm not sure of its current status, though I have
heard that a company is forming to build and market prototype CM's.

        I have heard rumors of the SPICE project at CMU; though I am
not aware of any results pertaining to hardware, the project seems to
have some measure of priority there. Hopefully members of each of
these projects will also send notes to AIList...

David Rogers, DRogers@SUMEX-AIM

------------------------------

Date: Thu, 18 Aug 1983  22:01 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
Subject: NETL


I've only got time for a very quick response to Alan Glasser's query 
about NETL.  Since the book was published we have done the following:

1. Our group at CMU has developed several design sketches for
practical NETL machine implementations of about a million processing
elements.  We haven't built one yet, for reasons described below.

2. David B. McDonald has done a Ph.D. thesis on noun group
understanding (things like "glass wine glass") using a NETL-type
network to hold the necessary world knowledge.  (This is available as
a CMU Tech Report.)

3. David Touretzky has done a thorough logical analysis of NETL-style
inheritance with exceptions, and is currently writing up his thesis on
this topic.

4. I have been studying the fundamental strengths and limitations of 
NETL-like marker-passing compared to other kinds of massively parallel
computation.  This has gradually led me to prefer an architecture that
passes numbers or continuous values to the single-bit marker-passing of
NETL.

For the past couple of years, I've been putting most of my time into
the Common Lisp effort -- a brief foray into tool building that got
out of hand -- and this has delayed any plans to begin work on a NETL
machine.  Now that our Common Lisp is nearly finished, I can think
again about starting a hardware project, but something more exciting
than NETL has come along: the Boltzmann Machine architecture that I am
working on with Geoff Hinton of CMU and Terry Sejnowski of
Johns Hopkins.  We will be presenting a paper on this at AAAI.

Very briefly, the Boltzmann machine is a massively parallel
architecture in which each piece of knowledge is distributed over many
units, unlike NETL in which concepts are associated with particular
pieces of hardware.  If we can make it work, this has interesting
implications for reliable large-scale implementation, and it is also a
much more plausible model for neural processing than is something like
NETL.

So that's what has happened to NETL.

-- Scott Fahlman (FAHLMAN@CMU-CS-C)

------------------------------

End of AIList Digest
********************

∂22-Aug-83  1347	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #45
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Aug 83  13:46:49 PDT
Date: Monday, August 22, 1983 10:08AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #45
To: AIList@SRI-AI


AIList Digest            Monday, 22 Aug 1983       Volume 1 : Issue 45

Today's Topics:
  Language Translation - Lisp-to-Lisp,
  Programming Languages - Lisps on 68000s and SUNs
----------------------------------------------------------------------

Date: 19 Aug 1983 2113-PDT
From: VANBUER@USC-ECL
Subject: Lisp Interchange Standard

In response to your message sent Friday, August 19, 1983 5:26PM

On Lisp translation via a standard form:

I have used Interlisp Transor a fair amount both into and out of
Interlisp (even experimented with translation to C), and the kind of
thing which makes it very difficult, especially if you want to retain
some efficiency, is subtle differences in what seem to be fairly
standard functions:  e.g. in Interlisp (DREMOVE (CAR X) X) will be EQ
to X (though not EQUAL, of course) except in the case where the result
is NIL; both CAR and CDR of the head cell are RPLACed so that all
references to the value of X also see the DREMOVE as a side effect.
In Franz Lisp, the DREMOVE would have the value (CDR X) in most cases,
but no RPLACing is done.  In most cases this isn't a problem, but ....
In APL, at least the majority of the language has the same semantics
in all implementations.
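[The EQ-preserving behavior Van Buer describes can be sketched in
Python (purely an illustration; Python has no cons cells, so a small
Cons class stands in, and the Franz sketch covers only the head case
he mentions):

```python
class Cons:
    """Stand-in for a Lisp cons cell."""
    def __init__(self, car, cdr=None):
        self.car, self.cdr = car, cdr

def from_list(items):
    head = None
    for x in reversed(items):
        head = Cons(x, head)
    return head

def to_list(cell):
    out = []
    while cell is not None:
        out.append(cell.car)
        cell = cell.cdr
    return out

def dremove_interlisp(item, x):
    # Interlisp-style DREMOVE: the cells of x themselves are RPLACed,
    # so the result is EQ to x (the very same object) unless the result
    # is NIL, and every holder of x sees the removal as a side effect.
    cell, prev = x, None
    while cell is not None:
        if cell.car == item:
            if cell.cdr is not None:
                # splice the next cell's contents into this one
                cell.car, cell.cdr = cell.cdr.car, cell.cdr.cdr
                continue        # re-examine the element just copied in
            if prev is None:
                return None     # everything removed: result is NIL
            prev.cdr = None
            return x
        prev, cell = cell, cell.cdr
    return x

def dremove_franz_head(item, x):
    # Franz-style behavior for the head case only, as described above:
    # no RPLACing, the value is simply (CDR X), so other references to
    # x do NOT see the removal.
    while x is not None and x.car == item:
        x = x.cdr
    return x
```

With x = from_list([1, 2, 3]), dremove_interlisp(2, x) returns the
object x itself, now holding (1 3); dremove_franz_head(1, x) instead
returns a different cell and leaves x untouched. -- Ed.]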
        Darrel J. Van Buer, SDC

------------------------------

Date: 20 Aug 1983 1226-PDT
From: FC01@USC-ECL
Subject: Re: Language Translation

I like the APL person's [Shrager's] point of view on translation.
The problem seems to be that APL has all the things it needs in its
primitive functions. Lisp implementers have seen fit to impurify
their language by adding fancy features that programmers then depend
on heavily.  If every Lisp program were translated into Lisp 1.5 (or
so), it would be easy to port things, but it would end in
inefficient implementations.  I like APL; in fact, I like it so much
I've begun maintaining it on our unix system. I've fixed several
bugs, and it now seems to work very well.  It has everything any
other APL has, but nobody seems to want to use it except me. I write
simulators in a day, adaptive networks in a week, and analyze
matrices in seconds. So at any rate, anyone who is interested in APL
on the VAX - especially for machine intelligence applications please
get in touch with me. It's not ludicrous by the way, IBM does more
internal R+D in APL than in any other language! That includes their
robotics programs where they do lots of ARM solutions (matrix
manipulation being built into APL has tremendous advantages in this
domain).

FLAME ON!
[I believe this refers to Stan the Leprechaun's submission in
V1 #43. -- KIL]

So if your language translation program is the last word in
translators, how come it's not in the journals? How come nobody knows 
that it solves all the problems of translation? How come you haven't
made a lot of money selling COBOL to PASCAL to C to APL to LISP to
ASSEMBLER to BASIC to ... translators in the open market? Is it that
it only works for limited cases? Is it that it only deals with
'natural' languages? Is it really as good as you think, or do you only
think it's really good?  How about sharing your (hopefully
non-NP-complete) solution to an NP-complete problem with the rest of
us!
FLAME OFF!

[...]
                Fred

------------------------------

Date: Sat 20 Aug 83 15:18:13-PDT
From: Mabry Tyson <Tyson@SRI-AI.ARPA>
Subject: Lisp-to-Lisp translation

Some of the comments on Lisp-to-Lisp translation seem to be rather 
naive.  Translating code that works on pure S-expressions is usually 
not too difficult.  However, Lisp is not pure Lisp.

I am presently translating some code from Interlisp to Zetalisp (from
a Dec-20 to a Symbolics 3600) and thought a few comments might be
appropriate.  First off, Interlisp has TRANSOR which is a package to
translate between Lisps and is programmable.  It isn't used often but
it does some of the basic translations.  There is an Interlisp
Compatibility Package (ILCP) on the 3600, which, when combined with a
CONVERT program to translate from Interlisp (running in Interlisp),
covers a fair amount of Interlisp.  (Unfortunately it is still early
in its development - I just rewrote all the I/O functions because they
didn't work for me.)

Even with these aids there are lots of problems.  Here are a few
examples I have come across:  In the source language, taking the CAR
of an atom did not cause an error.  Apparently laziness prevented the
author from writing code to check whether some input was an atom
(which was legal input) before seeing if the CAR of it was some
special symbol.

Since Interlisp-10 is short of cons-cell room, many relatively obscure
pieces of code were designed to use few conses.  Thus the author used 
and reused scratch lists and scratch strings.  The exact effect
couldn't be duplicated.  In particular, he would put characters into
specific spots in the scratch string and then would collect the whole
string.  (I'm translating this into arrays.)

All the I/O has to be changed around.  The program used screen control
characters to do fancy I/O on the screen.  It just printed the right
string to go to wherever it wanted.  You can't print a string on the
3600 to do that.  Also, whether you get an end-of-line character at
the end of input is different (so I have to hand patch code that did a
(RATOM) (READC)).  And of course file names (as well as the default
part of them, i.e., the directory) are all different.

Then there are little differences which the compatibility package can
take care of but which introduce inefficiencies.  For instance, the
function that returns the first position of a character in a string
differs between the two Lisps: the values returned are off by 1.  So
code where the author used that function merely to test whether the
character was in the string now computes the position and then offsets
it by 1.
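[In Python terms the inefficiency looks like this (a made-up
illustration; the function names and the 0- versus 1-based convention
are assumptions about the two Lisps' behavior, not their actual
definitions):

```python
def strpos_zetalisp(ch, s):
    # hypothetical 0-based position function: 0, 1, ... or None
    i = s.find(ch)
    return None if i < 0 else i

def strpos_interlisp(ch, s):
    # hypothetical 1-based position function: 1, 2, ... or None
    i = s.find(ch)
    return None if i < 0 else i + 1

def interlisp_strpos_on_zetalisp(ch, s):
    # what a compatibility shim must do: compute the position and then
    # offset it by 1, even when the caller only wanted a membership test
    p = strpos_zetalisp(ch, s)
    return None if p is None else p + 1
```
-- Ed.]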

The ILCP does have a nice advantage of letting me use the Interlisp 
name for functions even though there is a similarly named, but
different, function in Zetalisp.

Unfortunately for me, this code will continue to be developed
on the Dec-20 while we want to get the same code up on the
3600.  So I have to try to set it up so the translation can happen
often rather than just once.  That means going back to the Interlisp
code and putting it into shape so that a minimum amount of
hand-patching need be done.

------------------------------

Date: 19 Aug 83 10:52:11-PDT (Fri)
From: harpo!eagle!allegra!jdd @ Ucb-Vax
Subject: Lisps on 68000's
Article-I.D.: allegra.1760

A while ago I posted a query about Lisps on 68000's.  I got
essentially zero replies, so let me post what I know and see whether
anyone can add to it.

First, Franz Lisp is being ported from the VAX to 68000's.  However,
the ratio of random rumors to solid facts concerning this undertaking
seems the greatest since the imminent availability of NIL.  Moreover,
I don't really like Franz; it has too many seams showing (I've had too
many programs die without warning from segmentation errors and the
like).

Then there's T.  T sounds good, but the people who are saying it's
great are the same ones trying to sell it to me for several thousand
dollars, so I'd like to get some more disinterested opinions first.
The only person I've talked to said it was awful, but he admits he
used an early version.

I have no special knowledge of PSL, particularly of the user
environment or of how useful or standard its dialect looks, nor of the
status of its 68000 version.

As for an eventual Common Lisp on a 68000, well, who knows?

There are also numerous toy systems floating around, but none I would 
consider for serious work.

Well, that's about everything I know; can any correct me or add to the
list?

Cheers,
John ("Don't Wanna Program in C") DeTreville
Bell Labs, Murray Hill

[I will reprint some of the recent Info-Graphics discussion of SUNs
and other workstations as LISP-based graphics servers.  Several of
the comments relate to John's query.  -- KIL]

------------------------------

Date: Fri, 5 Aug 83 21:30:22 PDT
From: fateman%ucbkim@Berkeley (Richard Fateman)
Subject: SUNs, 3600s, and Lisp

         [Reprinted from the Info-Graphics discussion list.]

[...]

In answer to Fred's original query (I replied to him personally
earlier), Franz has been running on a SUN since January 1983.  We
find it runs Lisp faster than a VAX 750, and with expected performance
improvements it may be close to a VAX 780 (about 2.5 to 4 times
slower than a KL-10).  This makes running Franz on a VAX almost
irrelevant.  More specifically, in answer to FRD's question, Franz on
the SUN has full access to the graphics software on it, and one could
set up inter-process communication between a Franz on a VAX and
something else (e.g. Franz) on a SUN. A system for shipping smalltalk
pictures to SUNs runs at UCB.

  Franz runs on other 68000 UNIX workstations, including Pixel, Dual,
and Apple Lisa.  Both Interlisp-D and MIT LispMachine Lisp have more 
highly developed graphics stuff at the moment.

  As far as other lisps, I would expect PSL and T, which run on Apollo
Domain 68000 systems, to be portable towards the SUN, and I would not
be surprised if other systems turn up.  For the moment though, Franz
seems to be alone.  Most programs run on the SUN without change (e.g.
Macsyma).

------------------------------

Date: Sat 6 Aug 83 13:39:13-PDT
From: Bill Nowicki <NOWICKI@SU-SCORE.ARPA>
Subject: Re: LISP & SUNs ...

         [Reprinted from the Info-Graphics discussion list.]

You can certainly run Franz under Unix from SMI, but it is SLOW.  Most
Lisps are still memory hogs, so as was pointed out, you need a
$100,000 Lisp machine to get decent response.

If $100,000 is too much for you to spend on each programmer, you might
want to look at what we are doing on the fourth floor here at
Stanford.  We are running a small real-time kernel in a cheap, quiet,
diskless SUN, which talks over the network to various servers.  Bill
Yeager of Sumex has written a package which runs under Interlisp and
talks to our Virtual Graphics Terminal Service.  InterLisp can be run
on VAX/Unix or VAX/VMS systems, TOPS-20, or Xerox D machines.  The
cost/performance ratio is very good, since each workstation only needs
256K of memory, frame buffer, CPU, and Ethernet interface, while the 
DECSystem-20 or VAX has 8M bytes and incredibly fast system 
performance (albeit shared between 20 users).

We are also considering both PSL and T since they already have 68000
compilers.  I don't know how this discussion got on Info-Graphics.

        -- Bill

------------------------------

Date: 6 Aug 1983 1936-MDT
From: JW-Peterson@UTAH-20 (John W. Peterson)
Subject: Lisp Machines

         [Reprinted from the Info-Graphics discussion list.]

Folks who don't have >$60K to spend on a Lisp Machine may want to
consider Utah's Portable Standard Lisp (PSL) running on the Apollo 
workstation.  Apollo PSL has been distributed for several months now.
PSL is a full Lisp implementation, complete with a 68000 Lisp
compiler.  The standard distribution also comes with a wide range of
utilities.

PSL has been in use at Utah for almost a year now and is supporting 
applications in computer algebra (the Reduce system from Rand), VLSI 
design, and computer-aided geometric design.

In addition, the Apollo implementation of PSL comes with a large and
easily extensible system interface package.  This provides easy,
interactive access to the resident Apollo window package, graphics
library, process communication system and other operating system
services.

If you have any questions about the system, feel free to contact me
via
        JW-PETERSON@UTAH-20 (arpa) or
        ...!harpo!utah-cs!jwp (uucp)

jw

------------------------------

Date: Sun, 7 Aug 83 12:08:08 CDT
From: Mike.Caplinger <mike.rice@Rand-Relay>
Subject: SUNs

         [Reprinted from the Info-Graphics discussion list.]

[...]

Lisp is available from UCB (ftp from ucb-vax) for the SUN and many 
similar 68K-based machines.  We have it up on our SMI SUNs running
4.1c UNIX.  It seems about as good as Franz on the VAX, which from a 
graphics standpoint, is saying nothing at all.

By the way, the SUN graphics library, SUNCore, seems to be an OK 
implementation of the SIG Core standard.  It has some omissions and 
extensions, like every implementation.  I haven't used it extensively 
yet, and it has some problems, but it should get some good graphics 
programs going fairly rapidly.  I haven't yet seen a good graphics
demo for the SUN.  I hope this isn't indicative of what you can
actually do with one.

By the way, "Sun Workstation" is a registered trademark of Sun 
Microsystems, Inc.  You may be able to get a "SUN-like" system 
elsewhere.  I'm not an employee of Sun, I just have to deal with them
a lot...

------------------------------

End of AIList Digest
********************

∂23-Aug-83  1228	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #46
Received: from SRI-AI by SU-AI with TCP/SMTP; 23 Aug 83  12:27:41 PDT
Date: Tuesday, August 23, 1983 10:53AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #46
To: AIList@SRI-AI


AIList Digest            Tuesday, 23 Aug 1983      Volume 1 : Issue 46

Today's Topics:
  Artificial Intelligence - Prejudice & Frames & Turing Test & Evolution,
  Fifth Generation - Top-Down Research Approach
----------------------------------------------------------------------

Date: Thu 18 Aug 83 14:49:13-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Prejudice

The message from (I think .. apologies if wrong) Stan the Leprechaun,
which sets up "rational thought" as the opposite of "right-wingism"
and of "irascibility", disproves the contention in another message
that "bigotry and intelligence are mutually exclusive".  Indeed this
latter message is its own disproof, at least by my definition of
bigotry.  All of which leads me to believe that one or other of them
*was* sent by an AI project Flamer-type program.  Good work.
                                                - Richard

------------------------------

Date: 22 Aug 83 19:45:38-EDT (Mon)
From: The soapbox of Gene Spafford <spaf%gatech@UDel-Relay>
Subject: AI and Human Intelligence

[The following are excerpts from several interchanges with the author.
-- KIL]

Words mean not necessarily what I want them to mean nor what you want
them to mean, but what we all agree that they mean.  My point is that 
we must very possibly consider emotions and ethics in any model we 
care to construct of a "human" intelligence.  The ability to handle a
conversation, as is implied by the Turing test, is not sufficient in 
my eyes to classify something as "intelligent."  That is, what
*exactly* is intelligence?  Is it something measured by an IQ test?
I'm sure you realize that that particular point is a subject of much
conjecture.

If these discussion groups are for discussion of artificial
"intelligence," then I would like to see some thought given as to the
definition of "intelligence."  Is emotion part of intelligence?  Is
superstition part of intelligence?

FYI, I do not believe what I suggested -- that bigots are less than
human.  I made that suggestion to start some comments.  I have gotten
some interesting mail from people who have thought some about the
idea, and from a great many people who decided I should be locked away
for even coming up with the idea.

[...]

That brought to mind a second point -- what is human?  What is
intelligence?  Are they the same thing? (My belief -- no, they aren't.)
I proposed that we might classify "human" as being someone who *at
least tries* to overcome irrational prejudices and bigotry.  More than
ever we need such qualities as open-mindedness and compassion, as
individuals and as a society.  Can those qualities be programmed into
an AI system?  [...]

My original submission to Usenet was intended to be a somewhat 
sarcastic remark about the nonsense that was going on in a few of the
newsgroups.  Responses to me via mail indicate that at least a few
people saw through to some deeper, more interesting questions.  For
those people who immediately jumped on my case for making the
suggestion, not only did you miss the point -- you *are* the point.

--
  The soapbox of Gene Spafford
  CSNet:  Spaf @ GATech ARPA:  Spaf.GATech @ UDel-Relay
  uucp: ...!{sb1,allegra,ut-ngp}!gatech!spaf
        ...!duke!mcnc!msdc!gatech!spaf

------------------------------

Date: 18 Aug 83 13:40:03-PDT (Thu)
From: decvax!linus!vaxine!wjh12!brh @ Ucb-Vax
Subject: Re: AI Projects on the Net
Article-I.D.: wjh12.299

        I realize this article was a while ago, but I'm just catching
up with my news reading, after vacation.  Bear with me.

        I wonder why folks think it would be so easy for an AI program
to "change its thought processes" in ways we humans can't.  I submit
that (whether it's an expert system, experiment in KR or what) maybe
the suggestion to 'not think about zebras' would have a similar
effect on an AI proj. as on a human.  After all, it IS going to have
to decipher exactly what you meant by the suggestion.  On the other
hand, might it not be easier for one of you humans .... we, I mean ...
to consciously think of something else, and 'put it out of your
mind'??

        Still an open question in my mind...  (Now, let's hope this
point isn't already in an article I haven't read...)

                        Brian Holt
                        wjh!brh

------------------------------

Date: Friday, 19 Aug 1983 09:39-PDT
From: turner@rand-unix
Subject: Prejudice and Frames, Turing Test


  I don't think prejudice is a by-product of Minsky-like frames.
Prejudice is simply one way to be misinformed about the world.  In
people, we also connect prejudism with the inability to correct
incorrect information in light of experiences which prove it to be wrong.

  Nothing in Minsky frames as opposed to any other theory is a
necessary condition for this.  In any understanding situation, the
thinker must call on background information, regardless of how that is
best represented.  If this background information is incorrect and not
corrected in light of new information, then we may have prejudism.

  Of course, this is a subtle line.  A scientist doesn't change his
theories just because a fact wanders by that seems to contradict his
theories.  If he is wise, he waits until a body of irrefutable
evidence builds up.  Is he prejudiced towards his current theories?
Yes, I'd say so, but in this case it is a useful prejudism.

  So prejudism is really related to the algorithm for modifying known 
information in light of new information.  An algorithm that resists
change too strongly results in prejudism.  The opposite extreme -- an
algorithm that changes too easily -- results in fadism, blowing the
way the wind blows and so on.

                        -----------

  Stan's point in I:42 about Zeno's paradox is interesting.  Perhaps
the mind cast forced upon the AI community by Alan Turing is wrong.
Is Turing's Test a valid test for Artificial Intelligence?

  Clearly not.  It is a test of Human Mimicry Ability.  It is the
assumption that the ability to mimic a human requires intelligence.
This has been shown in the past not to be entirely true; ELIZA is an
example of a program that clearly has no intelligence and yet mimics a
human in a limited domain fairly well.

  A common theme in science fiction is "Alien Intelligence".  That is,
the sf writer bases his story on the idea:  "What if alien
intelligence wasn't like human intelligence?"  Many interesting
stories have resulted from this basis.  We face a similar situation
here.  We assume that Artificial Intelligence will be detectable by
its resemblance to human intelligence.  We really have little ground
for this belief.

  What we need is a better definition of intelligence, and a test
based on this definition.  In the Turing mind set, the definition of
intelligence is "acts like a human being" and that is clearly
insufficient.  The Turing test also leads one to think erroneously
that intelligence is a property with two states (intelligent and
non-intelligent) when even amongst humans there is a wide variance in
the level of intelligence.

  My initial feeling is to relate intelligence to the ability to
achieve goals in a given environment.  The more intelligent man today
is the one who gets what he wants; in short, the more you achieve your
goals, the more intelligent you are.  This means that a person may be
more intelligent in one area of life than in another.  He is, for
instance, a great businessman but a poor father.  This is no surprise.
We all recognize that people have different levels of competence in
different areas.

  Of course, this definition has problems.  If your goal is to lift
great weights, then your intelligence may be dependent on your
physical build.  That doesn't seem right.  Is a chess program more
intelligent when it runs on a faster machine?

  In the sense of this definition we already have many "intelligent"
programs in limited domains.  For instance, in the domain of
electronic mail handling, there are many very intelligent entities.
In the domain of human life, no electronic entities.  In the domain of
human politics, no human entities (*ha*ha*).

  I'm sure it is nothing new to say that we should not worry about the
Turing test and instead worry about more practical and functional
problems in the field of AI.  It does seem, however, that the Turing
Test is a limited and perhaps blinding outlook onto the AI field.


                                        Scott Turner
                                        turner@randvax

------------------------------

Date: 21 Aug 83 13:01:46-PDT (Sun)
From: harpo!eagle!mhuxt!mhuxi!mhuxa!ulysses!smb @ Ucb-Vax
Subject: Hofstadter
Article-I.D.: ulysses.560

Douglas Hofstadter is the subject of today's N.Y. Times Magazine cover
story.  The article is worth reading, though not, of course,
particularly deep technically.  Among the points made:  that
Hofstadter is not held in high regard by many AI workers, because they
regard him as a popularizer without any results to back up his
theories.

------------------------------

Date: Tue, 23 Aug 83 10:35 PDT
From: "Glasser Alan"@LLL-MFE.ARPA
Subject: Program Genesis

After reading in the New York Times Sunday Magazine of August 21 about
Douglas Hofstadter's latest idea on artificial intelligence arising
from the interplay of lower levels, I was inspired to carry his
suggestion to the logical limit.  I wrote the following item partly in
jest, but the idea may have some merit, at least to stimulate
discussion.  It was also inspired by Stanislaw Lem's story "Non
Serviam".

------------------------------------------------------------------------


                            PROGRAM GENESIS

                A COMPUTER MODEL OF THE PRIMORDIAL SOUP


     The purpose of this program is to model the primordial soup that 
existed in the earth's oceans during the period when life first
formed.  The program sets up a workspace (the ocean) in which storage
space in memory and CPU time (resources) are available to
self-replicating modules of memory organization (organisms).
Organisms are sections of code and data which, when run, cause copies
of themselves to be written into other regions of the workspace and
then run.  Overproduction of species, competition for scarce
resources, and occasional copying errors, either accidental or
deliberately introduced, create all the conditions necessary for the
onset of evolutionary processes.  A diagnostic package provides an
ongoing picture of the evolving state of the system.  The goal of the
project is to monitor the evolutionary process and see what this might
teach us about the nature of evolution.  A possible long-range 
application is a novel method for producing artificial intelligence.
The novelty is, of course, not complete, since it has been done at
least once before.
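[Glasser's description maps naturally onto a few lines of code; here is
a toy sketch (every detail, from bit-string organisms to the fitness
rule and capacity, is invented for illustration):

```python
import random

random.seed(1)
CAPACITY = 64           # the workspace (ocean) holds this many organisms

def replicate(org, error_rate=0.05):
    # copy the organism, flipping each bit with a small probability
    # (the "occasional copying errors")
    return [b if random.random() > error_rate else 1 - b for b in org]

def step(soup):
    # overproduction: every organism writes a copy into the workspace
    offspring = [replicate(o) for o in soup]
    soup = soup + offspring
    # competition for scarce resources: only CAPACITY organisms fit;
    # "fitness" here is simply the number of 1-bits
    soup.sort(key=sum, reverse=True)
    return soup[:CAPACITY]

soup = [[0] * 8 for _ in range(4)]   # the initial, featureless soup
for _ in range(50):
    soup = step(soup)
# copying errors plus selection drive the population toward 1-rich
# organisms over the generations
```
-- Ed.]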

------------------------------

Date: 18 Aug 83 11:16:24-PDT (Thu)
From: decvax!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: Japanese 5th Generation Effort
Article-I.D.: dciem.293

There seems to be an analogy between the 5th generation project and 
the ARPA-SUR project on automatic speech understanding of a decade
ago.  Both are top-down, initiated with a great deal of hope, and
dependent on solving some "nitty-gritty problems" at the bottom. The
result of the ARPA-SUR project was at first to slow down research in
ASR (automatic speech recognition) because a lot of people got scared
off by finding how hard the problem really is. But it did, as Robert
Amsler suggests the 5th generation project will, show just what
"nitty-gritty problems" are important. It provided a great step
forward in speech recognition, not only for those who continued to
work on projects initiated by ARPA-SUR, but also for those who have
come afterward. I doubt we would now be where we are in ASR if it had
not been for that apparently failed project ten years ago.
(Parenthetically, notice that a lot of the subsequent advances in ASR
have been due to the Japanese, and that European/American researchers
freely use those advances.)

Martin Taylor

------------------------------

End of AIList Digest
********************

∂24-Aug-83  1206	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #47
Received: from SRI-AI by SU-AI with TCP/SMTP; 24 Aug 83  12:05:26 PDT
Date: Wednesday, August 24, 1983 10:34AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #47
To: AIList@SRI-AI


AIList Digest           Wednesday, 24 Aug 1983     Volume 1 : Issue 47

Today's Topics:
  Request - AAAI-83 Registration,
  Logic Programming - PARLOG & PROLOG & LISP Prologs
----------------------------------------------------------------------

Date: 22 Aug 83 16:50:55-PDT (Mon)
From: harpo!eagle!allegra!jdd @ Ucb-Vax
Subject: AAAI-83 Registration
Article-I.D.: allegra.1777

Help!  I put off registering for AAAI-83 until too late, and now I
hear that it's overbooked!  (I heard 7000 would-be registrants and
1500 places, or some such.)  If you're registered but find you can't
attend, please let me know, or if you have any other suggestions, feel
free.

Cheers, John ("Something Wrong With My Planning Heuristics")
DeTreville Bell Labs, Murray Hill

------------------------------

Date: 23 Aug 83  1337 PDT
From: Diana Hall <DFH@SU-AI>
Subject: PARLOG

                 [Reprinted from the SCORE BBoard.]

Parlog Seminar

Keith Clark will give a seminar on Parlog Thursday, Sept. 1 at 3 p.m
in Room 252 MJH.



              PARLOG: A PARALLEL LOGIC PROGRAMMING LANGUAGE

                              Keith L. Clark

ABSTRACT

        PARLOG is a logic programming language in the sense that
nearly every definition and query can be read as a sentence of
predicate logic.  It differs from PROLOG in incorporating parallel
modes of evaluation.  For reasons of efficient implementation, it
distinguishes and separates and-parallel and or-parallel evaluation.
        PARLOG relations are divided into two types:  and-relations
and or-relations.  A sequence of and-relation calls can be evaluated
in parallel with shared variables acting as communication channels.
Only one solution to each call is computed.
        A sequence of or-relation calls is evaluated sequentially but
all the solutions are found by a parallel exploration of the different
evaluation paths.  A set constructor provides the main interface
between and-relations and or-relations.  This wraps up all the
solutions to a sequence of or-relation calls in a list.  The solution
list can be concurrently consumed by an and-relation call.
        The and-parallel definitions of relations that will only be
used in a single functional mode can be given using conditional
equations.  This gives PARLOG the syntactic convenience of functional
expressions when non-determinism is not required.  Functions can be
invoked eagerly or lazily; the eager evaluation of nested function
calls corresponds to and-parallel evaluation of conjoined relation
calls.
        This paper is a tutorial introduction and semi-formal
definition of PARLOG.  It assumes familiarity with the general
concepts of logic programming.
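[A very loose single-machine sketch of the and-parallel idea described
above (all names and details here are invented, and PARLOG itself is a
logic language, not Python): two "relation calls" run in parallel, with
a shared variable acting as a communication channel, modeled below as a
queue.

```python
import queue
import threading

# the shared variable, modeled as a queue acting as a channel
chan = queue.Queue()
out = []

def ints(n):
    # "producer" relation call: incrementally binds the shared list
    for i in range(n):
        chan.put(i)
    chan.put(None)          # marks the end of the shared list

def squares():
    # "consumer" relation call: reads each binding as soon as it is made
    while (x := chan.get()) is not None:
        out.append(x * x)

t1 = threading.Thread(target=ints, args=(5,))
t2 = threading.Thread(target=squares)
t1.start(); t2.start()
t1.join(); t2.join()
# out now holds the squares 0, 1, 4, 9, 16
```
-- Ed.]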

------------------------------

Date: Thu 18 Aug 83 20:00:36-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: There are Prologs and Prologs ...

In the July issue of SIGART an article by Richard Wallace describes 
PiL, yet another Prolog in Lisp. The author claims that his 
interpreter shows that "it is easy to extend Lisp to do what Prolog 
does."

It is a useful pedagogical exercise for Lisp users interested in logic
programming to look at a simple, clean implementation of a subset of 
Prolog in Lisp. A particularly illuminating implementation and 
discussion is given in "Structure and Interpretation of Computer 
Programs", a set of MIT lecture notes by Abelson and Sussman.

However, such simple interpreters (even the Abelson and Sussman one 
which is far better than PiL) are not a sufficient basis for the claim
that "it is easy to extend Lisp to do what Prolog does." What Prolog 
"does" is not just to make certain deductions in a certain order, but 
also MAKE THEM VERY FAST. Unfortunately, ALL Prologs in Lisp I know of
fail in this crucial aspect (by factors between 30 and 1000).

Why is speed such a crucial aspect of Prolog (or of Lisp, for that 
matter)? First, because the development of complex experimental 
programs requires MANY, MANY experiments, which just could not be done
if the systems were, say, 100 times slower than they are. Second, 
because a Prolog (Lisp) system needs to be written mostly in Prolog 
(Lisp) to support the extensibility that is a central aspect of modern
interactive computing environments.

The following paraphrase of Wallace's claim shows its absurdity: "[LiA
(Lisp in APL) shows] that it is easy to extend APL to do what Lisp does."
Really? All of what Maclisp does? All of what ZetaLisp does?

Lisp and Prolog are different if related languages. Both have their 
supporters. Both have strengths and (serious) weaknesses. Both can be 
implemented with comparable efficiency. It is educational to look 
both at (sub)Prologs in Lisp and (sub)Lisps in Prolog. Let's not claim
discoveries of philosopher's stones.

Fernando Pereira
AI Center
SRI International

------------------------------

Date: Wed, 17 Aug 1983  10:20 EDT
From: Ken%MIT-OZ@MIT-MC
Subject: FOOLOG Prolog

                 [Reprinted from the PROLOG Digest.]

Here is a small Prolog ( FOOLOG = First Order Oriented LOGic )
written in Maclisp. It includes the evaluable predicates CALL,
CUT, and BAGOF. I will probably permanently damage my reputation
as a MacLisp programmer by showing it, but as an attempt to cut
the hedge, I can say that I wanted to see how small one could
make a Prolog while maintaining efficiency ( approx 2 pages; 75%
of the speed of the Dec-10 Prolog interpreter ).  It is actually
possible to squeeze Prolog into 16 lines.  If you are interested
in that one and in FOOLOG, I have a ( very ) brief report describing
them that I can send you.  Also, I'm glad to answer any questions
about FOOLOG.  For me, it is best if you send messages by Snail Mail,
since I do not have a net connection.  If that is inconvenient, you
can also send messages via Ken Kahn, who forwards them.

My address is:

Martin Nilsson
UPMAIL
Computing Science Department
Box 2059
S-750 02 UPPSALA, Sweden


---------- Here is a FOOLOG sample run:

(load 'foolog)          ; Lower case is user type-in

; Loading DEFMAX 9844442.
(progn (defpred member  ; Definition of MEMBER predicate
         ((member ?x (?x . ?l)))
         ((member ?x (?y . ?l)) (member ?x ?l)))
       (defpred cannot-prove    ; and CANNOT-PROVE predicate
         ((cannot-prove ?goal) (call ?goal) (cut) (nil))
         ((cannot-prove ?goal)))
       'ok)
OK
(prove (member ?elem (1 2 3)) ; Find elements of the list
       (writeln (?elem is an element)))
(1. IS AN ELEMENT)
MORE? t                 ; Find the next solution
(2. IS AN ELEMENT)
MORE? nil               ; This is enough
(TOP)
(prove (cannot-prove (= 1 2))) ; The two cannot-prove cases
MORE? t
NIL
(prove (cannot-prove (= 1 1)))
NIL


---------- And here is the source code:

; FOOLOG Interpreter (c) Martin Nilsson  UPMAIL   1983-06-12

(declare (special *inf* *e* *v* *topfun* *n* *fh* *forward*)
         (special *bagof-env* *bagof-list*))

(defmacro defknas (fun args &rest body)
  `(defun ,fun macro (l)
     (cons 'progn (sublis (mapcar 'cons ',args (cdr l))
                          ',body))))

; ---------- Interpreter

(setq *e* nil *fh* nil *n* nil *inf* 0
      *forward* (munkam (logior 16. (logand (maknum 0) -16.))))
(defknas imm (m x) (cxr x m))
(defknas setimm (m x v) (rplacx x m v))
(defknas makrecord (n)
  (loop with r = (makhunk n) and c for i from 1 to (- n 2) do
        (setq c (cons nil nil))
        (setimm r i (rplacd c c)) finally (return r)))

(defknas transfer (x y)
  (setq x (prog1 (imm x 0) (setq y (setimm x 0 y)))))
(defknas allocate nil
  (cond (*fh* (transfer *fh* *n*) (setimm *n* 7 nil))
        ((setq *n* (setimm (makrecord 8) 0 *n*)))))
(defknas deallocate (on)
  (loop until (eq *n* on) do (transfer *n* *fh*)))
(defknas reset (e n) (unbind e) (deallocate n) nil)
(defknas ult (m x)
  (cond ((or (atom x) (null (eq (car x) '/?))) x)
        ((< (cadr x) 7)
         (desetq (m . x) (final (imm m (cadr x)))) x)
        ((loop initially (setq x (cadr x)) until (< x 7) do
               (setq x (- x 6)
                     m (or (imm m 7)
                           (imm (setimm m 7 (allocate)) 7)))
          finally (desetq (m . x) (final (imm m x)))
          (return x)))))
(defknas unbind (oe)
  (loop with x until (eq *e* oe) do
   (setq x (car *e*)) (rplaca x nil) (rplacd x x) (pop *e*)))
(defknas bind (x y n)
  (cond (n (push x *e*) (rplacd x (cons n y)))
        (t (push x *e*) (rplacd x y) (rplaca x *forward*))))
(lap-a-list '((lap final subr) (hrrzi 1 @ 0 (1)) (popj p) nil))
; (defknas final (x) (cdr (memq nil x))) ; equivalent
(defknas catch-cut (v e)
  (and (null (and (eq (car v) 'cut) (eq (cdr v) e))) v))

(defun prove fexpr (gs)
  (reset nil nil)
  (seek (list (allocate)) (list (car (convq gs nil)))))

(defun seek (e c)
  (loop while (and c (null (car c))) do (pop e) (pop c))
  (cond ((null c) (funcall *topfun*))
        ((atom (car c)) (funcall (car c) e (cdr c)))
        ((loop with rest = (cons (cdar c) (cdr c)) and
          oe = *e* and on = *n* and e1 = (allocate)
          for a in (symeval (caaar c)) do
          (and (unify e1 (cdar a) (car e) (cdaar c))
               (setq *inf* (1+ *inf*)
                     *v* (seek (cons e1 e)
                               (cons (cdr a) rest)))
               (return (catch-cut *v* e1)))
          (unbind oe)
          finally (deallocate on)))))

(defun unify (m x n y)
  (loop do
    (cond ((and (eq (ult m x) (ult n y)) (eq m n)) (return t))
          ((null m) (return (bind x y n)))
          ((null n) (return (bind y x m)))
          ((or (atom x) (atom y)) (return (equal x y)))
          ((null (unify m (pop x) n (pop y))) (return nil)))))

; ---------- Evaluable Predicates

(defun inst (m x)
  (cond ((let ((y x))
           (or (atom (ult m x)) (and (null m) (setq x y)))) x)
        ((cons (inst m (car x)) (inst m (cdr x))))))

(defun lisp (e c)
  (let ((n (pop e)) (oe *e*) (on *n*))
    (or (and (unify n '(? 2) (allocate) (eval (inst n '(? 1))))
             (seek e c))
        (reset oe on))))

(defun cut (e c)
  (let ((on (cadr e))) (or (seek (cdr e) c) (cons 'cut on))))

(defun call (e c)
  (let ((m (car e)) (x '(? 1)))
    (seek e (cons (list (cons (ult m x) '(? 2))) c))))

(defun bagof-topfun nil
  (push (inst *bagof-env* '(? 1)) *bagof-list*) nil)

(defun bagof (e c)
  (let* ((oe *e*) (on *n*) (*bagof-list* nil)
                  (*bagof-env* (car e)))
    (let ((*topfun* 'bagof-topfun)) (seek e '(((call (? 2))))))
    (or (and (unify (pop e) '(? 3) (allocate) *bagof-list*)
             (seek e c))
        (reset oe on))))

; ---------- Utilities

(defun timer fexpr (x)
  (let* ((*rset nil) (*inf* 0) (x (list (car (convq x nil))))
         (t1 (prog2 (gc) (runtime) (reset nil nil)
                    (seek (list (allocate)) x)))
         (t1 (- (runtime) t1)))
    (list (// (* *inf* 1000000.) t1) 'LIPS (// t1 1000.)
          'MS *inf* 'INF)))

(eval-when (compile eval load)
  (defun convq (t0 l0)
    (cond ((pairp t0) (let* (((t1 . l1) (convq (car t0) l0))
                             ((t2 . l2) (convq (cdr t0) l1)))
                        (cons (cons t1 t2) l2)))
          ((null (and (symbolp t0) (eq (getchar t0 1) '/?)))
           (cons t0 l0))
          ((memq t0 l0)
           (cons (cons '/? (cons (length (memq t0 l0))
                                 t0)) l0))
          ((convq t0 (cons t0 l0))))))

(defmacro defpred (pred &rest body)
  `(setq ,pred ',(loop for clause in body
                       collect (car (convq clause nil)))))

(defpred true    ((true)))
(defpred =       ((= ?x ?x)))
(defpred lisp    ((lisp ?x ?y) . lisp))
(defpred cut     ((cut) . cut))
(defpred call    ((call (?x . ?y)) . call))
(defpred bagof   ((bagof ?x ?y ?z) . bagof))
(defpred writeln
  ((writeln ?x) (lisp (progn (princ '?x) (terpri)) ?y)))

(setq *topfun*
      '(lambda nil (princ "MORE? ")
               (and (null (read)) '(top))))

------------------------------

Date: Wed, 17 Aug 1983  10:14 EDT
From: Ken%MIT-OZ@MIT-MC
Subject: A Pure Prolog Written In Pure Lisp

                 [Reprinted from the PROLOG Digest.]

;; The following is a tiny Prolog interpreter in MacLisp
;; written by Ken Kahn.
;; It was inspired by other tiny Lisp-based Prologs by
;; Par Emanuelson and Martin Nilsson.
;; There are no side-effects anywhere in the implementation,
;; though it is, of course, very slow.

(defun Prolog (database) ;; a top-level loop for Prolog
  (prove (list (rename-variables (read) '(0)))
         ;; read a goal to prove
         '((bottom-of-environment)) database 1)
  (prolog database))

(defun prove (list-of-goals environment database level)
  ;; proves the conjunction of the list-of-goals
  ;; in the current environment
  (cond ((null list-of-goals)
         ;; succeeded since there are no goals
         (print-bindings environment environment)
          ;; the user answers "y" or "n" to "More?"
         (not (y-or-n-p "More?")))
        (t (try-each database database
                     (rest list-of-goals) (first list-of-goals)
                     environment level))))

(defun try-each (database-left database goals-left goal
                               environment level)
 (cond ((null database-left)
        ()) ;; fail since nothing left in database
       (t (let ((assertion
                 ;; level is used to uniquely rename variables
                 (rename-variables (first database-left)
                                   (list level))))
            (let ((new-environment
                   (unify goal (first assertion) environment)))
              (cond ((null new-environment) ;; failed to unify
                     (try-each (rest database-left)
                               database
                               goals-left
                               goal
                               environment level))
                    ((prove (append (rest assertion) goals-left)
                            new-environment
                            database
                            (add1 level)))
                    (t (try-each (rest database-left)
                                 database
                                 goals-left
                                 goal
                                 environment
                                 level))))))))

(defun unify (x y environment)
  (let ((x (value x environment))
        (y (value y environment)))
    (cond ((variable-p x) (cons (list x y) environment))
          ((variable-p y) (cons (list y x) environment))
          ((or (atom x) (atom y))
           (and (equal x y) environment))
          (t (let ((new-environment
                    (unify (first x) (first y) environment)))
               (and new-environment
                    (unify (rest x) (rest y)
                           new-environment)))))))

(defun value (x environment)
  (cond ((variable-p x)
         (let ((binding (assoc x environment)))
           (cond ((null binding) x)
                 (t (value (second binding) environment)))))
        (t x)))

(defun variable-p (x) ;; a variable is a list beginning with "?"
  (and (listp x) (eq (first x) '?)))

(defun rename-variables (term list-of-level)
  (cond ((variable-p term) (append term list-of-level))
        ((atom term) term)
        (t (cons (rename-variables (first term)
                                   list-of-level)
                 (rename-variables (rest term)
                                   list-of-level)))))

(defun print-bindings (environment-left environment)
  (cond ((rest environment-left)
         (cond ((zerop
                 (third (first (first environment-left))))
                (print
                 (second (first (first environment-left))))
                (princ " = ")
                (prin1 (value (first (first environment-left))
                              environment))))
         (print-bindings (rest environment-left) environment))))

;; a sample database:
(setq db '(((father jack ken))
           ((father jack karen))
           ((grandparent (? grandparent) (? grandchild))
            (parent (? grandparent) (? parent))
            (parent (? parent) (? grandchild)))
           ((mother el ken))
           ((mother cele jack))
           ((parent (? parent) (? child))
            (mother (? parent) (? child)))
           ((parent (? parent) (? child))
            (father (? parent) (? child)))))

;; the following are utilities

(defun first (x) (car x))
(defun rest (x) (cdr x))
(defun second (x) (cadr x))
(defun third (x) (caddr x))
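
The heart of the interpreter above is its side-effect-free binding scheme:
the environment is an association list, and VALUE chases bindings until it
reaches a constant or an unbound variable.  A minimal sketch of the same
scheme transcribed into Python (the tuple representation and function names
here are illustrative choices, not Kahn's; like the original, it omits the
occurs check):

```python
# Variables are tuples beginning with "?", e.g. ("?", "x"); compound
# terms are plain tuples; the environment is an immutable tuple of
# (variable, value) pairs, so unification never mutates anything.

def variable_p(x):
    # a variable is a sequence beginning with "?"
    return isinstance(x, tuple) and len(x) > 1 and x[0] == "?"

def value(x, env):
    # chase bindings, like Kahn's VALUE
    if variable_p(x):
        for var, val in env:
            if var == x:
                return value(val, env)
    return x

def unify(x, y, env):
    # return an extended environment on success, None on failure
    x, y = value(x, env), value(y, env)
    if x == y:
        return env
    if variable_p(x):
        return ((x, y),) + env
    if variable_p(y):
        return ((y, x),) + env
    if not isinstance(x, tuple) or not isinstance(y, tuple):
        return None            # two unequal constants
    if len(x) != len(y):
        return None
    for a, b in zip(x, y):     # unify compound terms element-wise
        env = unify(a, b, env)
        if env is None:
            return None
    return env

e = unify(("father", ("?", "x")), ("father", "jack"), ())
print(value(("?", "x"), e))    # -> jack
```

Returning None for failure and a (possibly empty) tuple for success keeps
the two cases distinct, which is exactly why Kahn's TRY-EACH can backtrack
simply by discarding the extended environment and reusing the old one.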

------------------------------

End of AIList Digest
********************

∂25-Aug-83  1057	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #48
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Aug 83  10:56:54 PDT
Date: Thursday, August 25, 1983 9:14AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #48
To: AIList@SRI-AI


AIList Digest           Thursday, 25 Aug 1983      Volume 1 : Issue 48

Today's Topics:
  AI Literature - Journals & COMTEX & Online Reports,
  AI Architecture - The Connection Machine,
  Programming Languages - Scheme and Lisp Availability,
  Artificial Intelligence - Turing Test & Hofstadter Article
----------------------------------------------------------------------

Date: 20 Aug 1983 0011-MDT
From: Jed Krohnfeldt <KROHNFELDT@UTAH-20>
Subject: Re: AI Journals

I would add one more journal to the list:

Cognition and Brain Theory
	Lawrence Erlbaum Associates, Inc.
	365 Broadway,
	Hillsdale, New Jersey 07642
	$18 Individual, $50 Institutional
	Quarterly
	Basic cognition, proposed models and discussion of
	consciousness and mental process, epistemology - from frames to
	neurons, as related to human cognitive processes. A "fringe"
	publication for AI topics, and a good forum for issues in cognitive
	science/psychology.

Also, I notice that the institutional rate was quoted for several of 
the journals cited.  Many of these journals can be had for less if you
convince them that you are a lone reader (individual) and/or a 
student.


[Noninstitutional members of AAAI can get the Artificial Intelligence
Journal for $50.  See the last page of the fall AI Magazine.

Another journal for which I have an ad is

New Generation Computing
	Springer-Verlag New York Inc.
	Journal Fulfillment Dept.
	44 Hartz Way
	Secaucus, NJ  07094
	A quarterly English-language journal devoted to international
	research on the fifth generation computer.  [It seems to be
	very strong on hardware and logic programming.]
	1983 - 2 issues - $52. (Sample copy free.)
	1984 - 4 issues - $104.

-- KIL]

------------------------------

Date: Sun 21 Aug 83 18:06:52-PDT
From: Robert Amsler <AMSLER@SRI-AI>
Subject: Journal listings

Computing Reviews, Nov. 1982, lists all the periodicals they receive 
and their addresses. Handy list of a lot of CS journals.

------------------------------

Date: Tue, 23 Aug 83 11:05 EDT
From: Tim Finin <Tim%UPenn@UDel-Relay>
Subject: COMTEX and getting AI technical reports


There WAS a company which offered a service in which subscribers would
get copies of recent technical reports on all areas of AI research -
COMTEX.  The reports were to be drawn from universities and
institutions doing AI research.  The initial offering in the series
contained old Stanford and MIT memos.  The series was intended to
provide very timely access to current research at the participating
institutions.  COMTEX has decided to discontinue the AI series, however.
Perhaps if they perceive an increased demand for this series they will
reactivate it.

Tim

[There is a half-page Comtex ad for the MIT and Stanford memoranda in
the Fall issue of AI Magazine, p. 79.  -- KIL]

------------------------------

Date: 19 Aug 83 19:21:34 PDT (Friday)
From: Hamilton.ES@PARC-MAXC.ARPA
Subject: On-line tech reports?

I raised this issue on Human-nets nearly two years ago and didn't seem
to get more than a big yawn for a response.

Here's an example of what I had to go through recently:  I saw an 
interesting-looking CMU tech report (Newell, "Intellectual Issues in
the History of AI") listed in SIGART News.  It looked like I could
order it from CMU.  No ARPANET address was listed, so I wrote -- I
even gave them my ARPANET address.  They sent me back a form letter
via US Snail referring me to NTIS.  So then I phoned NTIS.  I talked
to an answering machine and left my US Snail address and the order
number of the tech report.  They sent me back a postcard giving the
price, something like $7.  I sent them back their order form,
including my credit card#.  A week or so later I got back a moderately
legible document, probably reproduced from microfiche, that looks
suspiciously like a Bravo document that's probably on line somewhere,
if I only knew where.  I'm not picking on CMU -- this is a general
problem.

There's GOT to be a better way.  How about: (1) Have a standard 
directory at each major ARPA host, containing at least a catalog with 
abstracts of all recent tech reports, and info on how to order, and 
hopefully full text of at least the most recent and/or popular ones, 
available for FTP, perhaps at off-peak hours only.  (2) Hook NTIS into
ARPANET, so that folks could browse their catalogs and submit orders 
electronically.

RUTGERS used to have an electronic mailing list to which they 
periodically sent updated tech report catalogs, but that's about the 
only activity of this sort that I've seen.

We've got this terrific electronic highway.  Let's make it useful for 
more than mailing around collections of flames, like this one!

--Bruce

------------------------------

Date: 23 August 1983 00:22 EDT
From: Alan Bawden <ALAN @ MIT-MC>
Subject: The Connection Machine

    Date: Thu 18 Aug 83 13:46:13-PDT
    From: David Rogers <DRogers at SUMEX-AIM.ARPA>

    The closest hardware I am aware of is called the Connection
    Machine, and is being developed at MIT by Alan Bawden, Dave
    Christman, and Danny Hillis ...

also Tom Knight, David Chapman, Brewster Kahle, Carl Feynman, Cliff
Lasser, and Jon Taft.  Danny Hillis provided the original ideas; his
is the name to remember.

    The project involves building a model with about 2↑10 processors.

The prototype Connection Machine was designed to have 2↑20 processors,
although 2↑10 might be a good size to actually build to test the idea.

One way to arrive at a superficial understanding of the Connection
Machine would be to imagine augmenting a NETL machine with the ability
to pass addresses (or "pointers") as well as simple markers.  This
permits the Connection Machine to perform even more complex pattern
matching on semantic-network-like databases.  The detection of any
kind of cycle (find all people who are employed by their own fathers),
is the canonical example of something this extension allows.
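
Stated sequentially, that canonical query is just an intersection of two
binary relations.  A hedged sketch with made-up facts (the data and names
below are hypothetical; a Connection Machine would instead propagate
pointers through all processors in parallel rather than loop):

```python
# Two relations as dictionaries: who employs whom, and who fathers whom.
# The facts are invented purely for illustration.
employer = {"ken": "jack", "karen": "acme", "cele": "jack"}
father   = {"ken": "jack", "karen": "jack"}

def employed_by_own_father(employer, father):
    # the "cycle" test: the employer edge and the father edge from a
    # person both point at the same node
    return [p for p in employer if father.get(p) == employer[p]]

print(employed_by_own_father(employer, father))   # -> ['ken']
```

The sequential version touches each person in turn; the point of the
pointer-passing extension is that every processor holding a person node
can check its own two edges simultaneously.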

But that's only one way to program a Connection Machine.  In fact, the
thing seems to be a rather general parallel processor.

MIT AI Memo #646, "The Connection Machine" by Danny Hillis, is still a
perfectly good reference for the general principles behind the
Connection Machine, despite the fact that the hardware design has
changed a bit since it was written.  (The memo is currently being
revised.)

------------------------------

Date: 22 August 1983 18:20 EDT
From: Hal Abelson <HAL @ MIT-MC>
Subject: Lisps on 68000


At MIT we are working on a version of Scheme (a lexically scoped 
dialect of Lisp) that runs on the HP 9836 computer, which is a 68000 
machine.  Starting 3 weeks from now, 350 MIT students will be using 
this system on a full-time basis.

The implementation consists of a kernel written in 68000 assembler, 
with most of the system written in Scheme and compiled using a quick 
and dirty compiler, which is also written in Scheme.  The 
implementation sits inside of HP's UCSD-Pascal-clone operating system.
For an editor, we use NMODE, which is a version of EMACS written in 
Portable Standard Lisp. Thus our machines run, at present, with both 
Scheme and PSL resident, and consequently require 4 megabytes of main 
memory.  This will change when we get another editor, which will take
at least a few months.

The current system gives good performance for coursework, and is 
optimized to provide fast interpreted code, as well as a good 
debugging environment for student use.

Work will begin on a serious compiler as soon as the start-of-semester
panic is over.  There will also be a compatible version for the Vax.

Distribution policy has not yet been decided upon, but most likely we 
will give the system away (not the PSL part, which is not ours to 
give) to anyone who wants it, provided that people who get it agree to
return all improvements to MIT.

Please no requests for a few months, though, since we are still making
changes in the design and documentation.  Availability will be
announced on this mailing list.

------------------------------

Date: 23 Aug 83 16:36:26-PDT (Tue)
From: harpo!seismo!rlgvax!cvl!umcp-cs!mark @ Ucb-Vax
Subject: Franz lisp on a Sun Workstation.
Article-I.D.: umcp-cs.2096

So what is the true story?  One person says it is almost as fast as
a single-user 780; another says it is an incredible hog.  These can't
both be right, as a Vax-780 IS at least as fast as a Lispmachine (not
counting the bitmapped screen).  It sounded to me like the person who
said it was fast had actually used it, but the person who said it was
slow was just working from general knowledge.  So maybe it is fast.
Wouldn't that be nice.
--
spoken: mark weiser
UUCP:   {seismo,allegra,brl-bmd}!umcp-cs!mark
CSNet:  mark@umcp-cs
ARPA:   mark.umcp-cs@UDel-Relay

------------------------------

Date: Tue 23 Aug 83 14:43:50-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: in defense of Turing

        Scott Turner (AIList V1 #46) has some interesting points about
intelligence, but I felt compelled to defend Turing in his absence.  
The Turing article in Mind (must reading for any AIer) makes it clear
that the test is not proposed to *define* an intelligent system, or
even to *recognize* one; the claim is merely that a system which *can*
pass the test has intelligence. Perhaps this is a subtle difference, 
but it's as important as the difference between "iff" and "if" in
math.

        Scott bemoans the Turing test as testing for "Human Mimicking 
Ability", and suggests that ELIZA has shown this to be possible 
without intelligence.  ELIZA has fooled some people, though I would not
say it has passed anything remotely like the Turing test.  Mimicking
language is a far cry from mimicking intelligence.

        In any case, it may be even more difficult to detect 
intelligence without doing a comparison to human intellect; after all,
we're the only intelligent systems we know of...

Regards,

David

------------------------------

Date: Tue 23 Aug 83 19:23:00-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Hofstadter article

        Alas, after reading the article about Hofstadter in the
NYTimes, I realized that AI workers can be at least as closed-minded as
other scientists have shown themselves to be. At bottom, it seemed that
DH's basic feeling (that we have a long way to go before creating real
intelligence) is embarrassingly obvious. In the long run, the false
hopes raised by expectations of quick results can only hurt the
acceptance of AI in people's minds.

        (By the way, I thought the article was very well written, and
would encourage people to look it up. The report is spiced with
opinions from AI workers such as Allen Newell and Marvin Minsky, and it
was enjoyable to hear their candid comments about Hofstadter and AI in
general. Quite a step above the usual articles about AI designed for
general consumption...)

David R.

------------------------------

End of AIList Digest
********************

∂29-Aug-83  1311	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #49
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Aug 83  13:09:16 PDT
Date: Monday, August 29, 1983 11:08AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #49
To: AIList@SRI-AI


AIList Digest            Monday, 29 Aug 1983       Volume 1 : Issue 49

Today's Topics:
  Conferences - AAAI-83 Registration,
  Bindings - Rog-O-Matic & Mike Mauldin,
  Artificial Languages - Loglan,
  Knowledge Representation & Self-Consciousness - Textnet,
  AI Publication - Corporate Constraints,
  Lisp Availability - PSL on 68000's,
  Automatic Translation - Lisp-to-Lisp & Natural Language
----------------------------------------------------------------------

Date: 23 Aug 83 11:04:22-PDT (Tue)
From: decvax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!arnold@Ucb-Vax
Subject: Re: AAAI-83 Registration
Article-I.D.: umcp-cs.2093


        If there will be over 7000 people attending AAAI-83,
        then there will be almost as many people as will
        attend the World Sci. Fic. Convention.

        I worked registration for AAAI-83 on Aug 22 (Monday).
        There were about 700 spaces available, along with about
        1700 people who pre-registered.

        [...]

                --- A Volunteer

------------------------------

Date: 26 Aug 83 2348 EDT
From: Rudy.Nedved@CMU-CS-A
Subject: Rog-O-Matic & Mike Mauldin

Apparently people want something related to Rog-O-Matic and are 
sending requests to "Maudlin".  If you look closely, that is not how
his name is spelled: people are transposing the "L" and the "D".
Hopefully this message will help the many people who are trying to
send Mike mail.

If you still can't get his mailing address right, try
"mlm@CMU-CS-CAD".

-Rudy
A CMU Postmaster

------------------------------

Date: 28 August 1983 06:36 EDT
From: Jerry E. Pournelle <POURNE @ MIT-MC>
Subject: Loglan

I've been interested in LOGLANS since Heinlein's GULF which was in
part devoted to it.  Alas, nothing seems to happen that I can use; is
the institute about to publish new materials?  Is there anything in
machine-readable form using Loglans?  Information appreciated.  JEP

------------------------------

Date: 25-Aug-83 10:03 PDT
From: Kirk Kelley  <KIRK.TYM@OFFICE-2>
Subject: Re: Textnet

Randy Trigg mentioned his "Textnet" thesis project a few issues back;
it combines hypertext and NLS/Augment structures.  He makes a strong
statement about distributed Textnet on worldnet:

   There can be no mad dictator in such an information network.

I am interested in building a testing ground for statements such as
that.  It would contain a model that would simulate the global effects
of technologies such as publishing on-line.  Here is what may be of
interest to the AI community.  The simulation would be a form of
"augmented global self-consciousness" in that it models its own
viability as a service published on-line via worldnet.  If you have
heard of any similar project or might be interested in collaborating
on this one, let me know.

 -- kirk

------------------------------

Date: 25 Aug 83 15:47:19-PDT (Thu)
From: decvax!microsoft!uw-beaver!ssc-vax!tjj @ Ucb-Vax
Subject: Re: Language Translation
Article-I.D.: ssc-vax.475

OK, you turned your flame-thrower on, now prepare for mine!  You want
to know why things don't get published -- take a look at your address
and then at mine.  You live (I hope I'm not talking to an AI Project)
in the academic community; believe it or not there are those of us
who work in something euphemistically referred to as industry, where
the rule is not publish or perish, the rule is keep quiet and you are
less likely to get your backside seared!  Come on out into the 'real'
world where technical papers must be reviewed by managers that don't
know how to spell AI, let alone understand what language translation
is all about.  Then watch as two of them get into a moebius argument,
one saying that there is nothing classified in the paper but there is
proprietary information, while the other says no proprietary but it
definitely is classified!  All the while this is going on the
deadline for submission to three conferences passes by like the
perennial river flowing to the sea.  I know reviews are not unheard
of in academia, and that professors do sometimes get into arguments,
but I've no doubt that they would be more generally favorable to
publication than managers who are worried about the next
stockholder's meeting.

It ain't all that bad, but at least you seem to need a wider
perspective.  Perhaps the results haven't been published; perhaps the
claims appear somewhat tentative; but the testing has been critical,
and the only thing left is primarily a matter of drudgery, not
innovative research.  I am convinced that we may certainly find a new
and challenging problem awaiting us once that has been done, but at
least we are not sitting around for years on end trying to paste
together a grammar for a context-sensitive language!!

Ted Jardine
TJ (with Amazing Grace) The Piper
ssc-vax!tjj

------------------------------

Date: 24 Aug 83 19:47:17-PDT (Wed)
From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
Subject: Re: Lisps on 68000's - (nf)
Article-I.D.: uiucdcs.2626

I played with a version of PSL on an HP 9845 for several hours one
day.  The environment was just like running FranzLisp under Emacs in
"electric-lisp" mode. (However, the editor is written in PSL itself,
so it is potentially much more powerful than the emacs on our VAX,
with its screwy c/mock-lisp implementation.) The language is in the
style of Maclisp (rather than INTERLISP) and uses standard scoping
(rather than the lexical scoping of T). The machine has 512 by 512
graphics and a 2.5 dimensional window system, but neither are as
fully integrated into the programming environment as on a Xerox
Dolphin. Although I have no detailed benchmarks, I did port a
context-free chart parser to it. The interpreter speed was not
impressive, but was comparable with interpreted Franz on a VAX.
However, the speed of compiled code was very impressive. The compiler
is incremental, and built-in to the lisp system (like in INTERLISP),
and caused about a 10-20 times speedup over interpreted code (my
estimate is that both the Franz and INTERLISP-d compilers only net
2-5 times speedup).  As a result, the compiled parser ran much faster
on the 68000 than the same compiled program on a Dolphin.

I think PSL is definitely a superior lisp for the 68000, but I have
no idea whether it will be available for non-HP machines...


Jordan Pollack
University of Illinois
...pur-ee!uiucdcs!uicsl!pollack

------------------------------

Date: 24 Aug 83 16:20:12-PDT (Wed)
From: harpo!gummo!whuxlb!floyd!vax135!cornell!uw-beaver!ssc-vax!sts@Ucb-Vax
Subject: Re: Lisp-to-Lisp translation
Article-I.D.: ssc-vax.468

These problems just go to show what AI people have known for years 
(ever since the first great bust of machine translation) - ya can't 
translate without understanding what yer translating.  Optimizing 
compilers are often impressive encodings of expert coders' knowledge, 
and they are for very simple languages - not like Interlisp or English.

                                        stan the lep hacker
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: 24 Aug 83 16:12:59-PDT (Wed)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: Language Translation
Article-I.D.: ssc-vax.467

You have heard of my parser.  It's a variant on Berkeley's PHRAN, but 
has been improved to handle arbitrarily ambiguous sentences.  I
submitted a paper on it to AAAI-83, but it was rejected (well, I did
write it in about 3 days - wasn't very good).  A paper will be
appearing at the AIAA Computers in Aerospace conference in October.
The parser is only a *basic* solution - I suppose I should have made
that clearer.  Since it is knowledge-based, it needs **lots** of
knowledge.  Right now we're working on ways to acquire linguistic
knowledge automatically (Selfridge's work is very interesting).  The
knowledge base is woefully small, but we don't anticipate any problems
expanding it (famous last words!).

The parser has just been released for use within Boeing ("just"
meaning two days ago), and it may be a while before it becomes
available elsewhere (sorry).  I can mail details on it though.

As for language analysis being NP-complete, yes you're right.  But are
you sure that humans don't brute-force the process, and that computers
won't have to do the same?

                                        stan the lep hacker
                                        ssc-vax!sts (soon utah-cs)

ps if IBM is using APL, that explains a lot (I'm a former MVS victim)

------------------------------

Date: 24 Aug 83 15:47:11-PDT (Wed)
From: harpo!gummo!whuxlb!floyd!vax135!cornell!uw-beaver!ssc-vax!sts@Ucb-Vax
Subject: Re: So the language analysis problem has been solved?!?
Article-I.D.: ssc-vax.466

Heh-heh.  Thought that'd raise a few hackles (my boss didn't approve 
of the article; oh well.  I tend to be a bit fiery around the edges).

The claim is that we have "basically" solved the problem.  Actually, 
we're not the only ones - the APE-II parser by Pazzani and others from
the Schank school has done the same thing.  Our parser can
handle arbitrarily ambiguous sentences, generating *all* the possible
meanings, limited only by the size of its knowledge base.  We have the
capability to do any sort of idiom, and mix any number of natural
languages.  Our problems are really concerned with the acquisition of
linguistic knowledge, either by having nonspecialists put it in by
hand (*everyone* is an expert on the native language) or by having the
machine acquire it automatically.  We can mail out some details if
anyone is interested.

One advantage we had was starting from ground zero, so we had very few
preconceptions about how language analysis ought to be done, and we
scanned the literature.  It became apparent that since we were
required to handle free-form input, any kind of grammar would
eventually become less than useful and possibly a hindrance to
analysis.  Dr. Pereira admits as much when he says that grammars only
reflect *some* aspects of language.  Well, that's not good enough.  Us
folks in applied research can't always afford the luxury of theorizing
about the most elegant methods.  We need something that models human
cognition closely enough to make sense to knowledge engineers and to
users.  So I'm sort of in the Schank camp (folks at SRI hate 'em)
although I try to keep my thinking as independent as possible (hard
when each camp is calling the other ones charlatans; I'll post
something on that pernicious behavior eventually).

Parallel production systems I'll save for another article...

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

ps I *did* read an article of Dr. Pereira's - couldn't understand the
point.  Sorry.  (perhaps he would be so good as to explain?)

[Which article? -- KIL]

------------------------------

Date: 26 Aug 83 11:19-EST (Fri)
From: Steven Gutfreund <gutfreund%umass-cs@UDel-Relay>
Subject: Musings on AI and intelligence

Spafford's musings on intelligent communications reminded me of an
article I read several years ago by John Thomas (then at T.J. Watson,
now at White Plains, a promotion as IBM sees it).

In the paper he distinguishes between two distinct approaches (or
philosophies) to raising the level of man/machine communication.

[Natural language recognition is one example of this problem. Here the
machine is trying to "decipher" the user's natural prose to determine
the desired action. Other applications are intelligent interfaces
that attempt to decipher user "intentions".]

The Human Approach -

Humans view communication as inherently goal-based. When one
communicates with another human being, there is an explicit goal: to
induce a cognitive state in the OTHER. This cognitive state is usually
some function of the communicator's cognitive state (usually the
identity function, since one wants the OTHER to understand what one is
thinking). In this approach the media of communication (words, art,
gesticulations) are not the items being communicated; they are
abstractions meant to key certain responses in the OTHER so as to
arrive at the desired goal.

The Mechanistic Approach

According to Thomas this is the approach taken by natural language
recognition people. Communication is the application of a decryption
function to the prose the user employed. This approach is inherently
flawed, according to Thomas, since the actual words and prose do not
contain meaning in themselves but are tools for effecting cognitive
change.  Thus, the text of one of Goebbels's propaganda speeches
cannot be examined in itself to determine what it means; one needs an
awareness of the cognitive models, metaphors, and prejudices of the
speaker and listeners.  Capturing this sort of real-world knowledge
(biases, prejudices, intuitive feelings) is not a strong point of
AI systems. Yet the extent to which certain words move a person may
depend much more on, say, his Catholic upbringing than on the
words themselves.

If you doubt the above thesis, I encourage you to read Thomas
Kuhn's book "The Structure of Scientific Revolutions" and see how
culture can affect the interpretation of supposedly hard scientific
facts and observations.

Perhaps something that best brings this out is an essay (I believe it
was by Smullyan) in "The Mind's I" (Dennett and Hofstadter). In this
essay a homunculus is set up with the basic tools of one of Schank's
language understanding systems (scripts, text, rules, etc.). He then
goes about the translation of the text from one language to another,
applying a set of mechanistic transformation rules. Given that the
homunculus knows nothing of either the source language or the target
language, can you say that it has any understanding of what the script
was about? How does this differ from today's NLU systems?


                                        - Steven Gutfreund
                                          Gutfreund.umass@udel-relay

------------------------------

End of AIList Digest
********************

∂30-Aug-83  1143	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #50
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Aug 83  11:42:56 PDT
Date: Tuesday, August 30, 1983 10:16AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #50
To: AIList@SRI-AI


AIList Digest            Tuesday, 30 Aug 1983      Volume 1 : Issue 50

Today's Topics:
  AI Literature - Bibliography Request,
  Intelligence - Definition & Turing Test & Prejudice & Flamer
----------------------------------------------------------------------

Date: 29 Aug 1983 11:05:14-PDT
From: Susan L Alderson <mccarty@Nosc>
Reply-to: mccarty@Nosc
Subject: Help!


We are trying to locate any and all bibliographies, in electronic
form, of AI and Robotics.  I know that this covers a broad spectrum,
but we would rather have too many things to choose from than none at
all.  Any help or leads on this would be greatly appreciated.

We are particularly interested in:

    AI Techniques
    Vision Analysis
    AI Languages
    Robotics
    AI Applications
    Speech Analysis
    AI Environments
    AI Systems Support
    Cybernetics

This is not a complete list of our interests, but a good portion of
the high spots!

susie (mccarty@nosc-cc)


[Several partial bibliographies have been published in AIList; more
would be most welcome.  Readers able to provide pointers should reply
to AIList as well as to Susan.

Many dissertation and report abstracts have been published in the
SIGART newsletter; online copies may exist.  Individual universities
and corporations also maintain lists of their own publications; CMU,
MIT, Stanford, and SRI are among the major sources in this country.
(Try Navarro@SRI-AI for general AI and CPowers@SRI-AI for robotics
reports.)

One of the fastest ways to compile a bibliography is to copy authors'
references from the IJCAI and AAAI conference proceedings.  The AI
Journal and other AI publications are also good.  Beware of straying
too far from your main topics, however.  Rosenfeld's vision and image
processing bibliographies in CVGIP (Computer Vision, Graphics, and
Image Processing) list over 700 articles each year.

-- KIL]

------------------------------

Date: 25 Aug 1983 1448-PDT
From: Jay <JAY@USC-ECLC>
Subject: intelligence is...

  An intelligence must have at least three abilities: to act; to
perceive and classify (as one of: better, the same, or worse) the
results of its actions, or the environment after the action; and
lastly, to change its future actions in light of what it has perceived,
in an attempt to maximize "goodness" and avoid "badness".  My views are
very obviously flavored by behaviorism.

  In defense against objections I hear coming...  To act is necessary
for intelligence, since it is pointless to call a rock intelligent
when there seems to be no way to detect its intelligence.  To perceive
is necessary for intelligence, since otherwise projectiles, simple
chemicals, and other things that act by following a set of rules would
be classified as intelligent.  To change future actions is the most
important, since a toaster could perceive that it was overheating,
oxidizing its heating elements, and thus dying, but would be unable to
stop toasting until it suffered a breakdown.

  In summary, (NOT (AND actp perceivep evolvep)) -> (NOT intelligent);
that is, Action, Perception, and Evolution based upon perception are
necessary for intelligence.  I *believe* that these conditions are
also sufficient for intelligence.
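The summary predicate can be sketched as a toy program (the predicate
names come from the post, with spelling normalized; the example cases
are my own illustration, not Jay's):

```python
# A direct rendering of the post's claim:
#   (NOT (AND actp perceivep evolvep)) -> (NOT intelligent)
# i.e. all three abilities are necessary (and, the post believes,
# sufficient) for intelligence.

def intelligent(actp: bool, perceivep: bool, evolvep: bool) -> bool:
    """True iff the entity acts, perceives and classifies the results,
    and changes its future actions based on what it perceives."""
    return actp and perceivep and evolvep

# The rock: no detectable action at all.
print(intelligent(actp=False, perceivep=False, evolvep=False))  # False
# The toaster: acts and (say) senses overheating, but cannot
# change its future behavior.
print(intelligent(actp=True, perceivep=True, evolvep=False))    # False
```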

awaiting flames,

j'

PS. Yes, the earth's bio-system IS intelligent.

------------------------------

Date: 25 Aug 83 2:00:58-PDT (Thu)
From: harpo!gummo!whuxlb!pyuxll!ech @ Ucb-Vax
Subject: Re: Prejudice and Frames, Turing Test
Article-I.D.: pyuxll.403

The characterization of prejudice as  an  unwillingness/inability
to  adapt  to  new  (contradictory)  data  is  an  appealing one.
Perhaps this belongs in net.philosophy, but it seems to me that a
requirement  for  becoming a fully functional intelligence (human
or otherwise) is to abandon the search for  compact,  comfortable
"truths"  and  view knowledge as an approximation and learning as
the process of improving those approximations.

There is nothing wrong with compact generalizations: they  reduce
"overhead" in routine situations to manageable levels. It is when
they   are   applied   exclusively   and/or    inflexibly    that
generalizations  yield bigotry and the more amusing conversations
with Eliza et al.

As for the Turing test, I think it may be appropriate to think of
it  as  a "razor" rather than as a serious proposal.  When Turing
proposed the test there was a philosophical argument raging  over
the  definition  of  intelligence,  much  of  which  was outright
mysticism. The famous test cuts the fog nicely: a device  needn't
have  consciousness,  a  soul,  emotions -- pick your own list of
nebulous terms -- in order to  function  "intelligently."  Forget
whether it's "the real thing," it's performance that counts.

I think Turing recognized that, no matter how successful AI  work
was, there would always be those (bigots?) who would rip the back
off the machine and say,  "You  see?  Just  mechanism,  no  soul,
no emotions..." To them, the Turing test replies, "Who cares?"

=Ned=

------------------------------

Date: 25 Aug 83 13:47:38-PDT (Thu)
From: harpo!floyd!vax135!cornell!uw-beaver!uw-june!emma @ Ucb-Vax
Subject: Re: Prejudice and Frames, Turing Test
Article-I.D.: uw-june.549

I don't think I can accept some of the comments being bandied about 
regarding prejudice.  Prejudice, as I understand the term, refers to 
prejudging a person on the basis of class, rather than judging that 
person as an individual.  Class here is used in a wider sense than 
economic.  Examples would be "colored folk got rhythm" or "all them
white saxophonists sound the same to me"-- this latter being a quote
from Miles Davis, by the way.  It is immediately apparent that
prejudice is a natural result of making generalizations and
extrapolating from experience.  This is a natural, and I would suspect
inevitable, result of a knowledge acquisition process which
generalizes.

Bigotry, meanwhile, refers to inflexible prejudice.  Miles has used a 
lot of white saxophonists, as he recognizes that they don't all sound
the same.  Were he bigoted, rather than prejudiced, he would refuse to
acknowledge that.  The problem lies in determining at what point an 
apparent counterexample should modify a conception.  Do we decide that
gravity doesn't work for airplanes, or that gravity always works but 
something else is going on?  Do we decide that a particular white sax 
man is good, or that he's got a John Coltrane tape in his pocket?

In general, I would say that some people out there are getting awfully
self-righteous regarding a phenomenon that ought to be studied as a 
result of our knowledge acquisition process rather than used to 
classify people as sub-human.

-Joe P.

------------------------------

Date: 25 Aug 83 11:53:10-PDT (Thu)
From: decvax!linus!utzoo!utcsrgv!utcsstat!laura@Ucb-Vax
Subject: AI and Human Intelligence [& Editorial Comment]

Goodness, I stopped reading net.ai a while ago, but had an ai problem
to submit and decided to read this in case the question had already
been asked and answered. News here only lasts for 2 weeks, but things
have changed...

At any rate, you are all discussing here what I am discussing in mail 
to AI types (none of whom mentioned that this was going on here, the 
cretins! ;-) ). I am discussing bigotry by mail to AI folk.

I have a problem in furthering my discussion. When I mentioned it I
got the same response from 2 of my 3 AI folk, and am waiting for the
same one from the third.  I gather it is a fundamental AI sort of
problem.

I maintain that 'a problem' and 'a description of a problem' are not
the same thing. Thus 'discrimination' is a problem, but the word
'nigger' is not. 'Nigger' is a word which describes the problem of
discrimination. One may decide not to use the word, but
abolishing it only gets rid of one description of the problem,
not the problem itself.

If there were no words to express discrimination, and discrimination 
existed, then words would be created (or existing words would be 
perverted) to express discrimination. Thus language can be counted 
upon to reflect the attitudes of society, but changing the language is
not an effective way to change society.


This position is not going over very well. I gather that there is some
section of the AI community which believes that language (the
description of a problem) *is* the problem.  I am thus reduced to
saying, "oh no it isn't, you silly person," but am left holding the bag
when they start quoting from texts. I can bring out anthropology and
linguistics and they can get out some epistemology and Knowledge
Representation, but the discussion isn't going anywhere...

can anybody out there help?

laura creighton
utzoo!utcsstat!laura


[I have yet to be convinced that morality, ethics, and related aspects
of linguistics are of general interest to AIList readers.  While I
have (and desire) no control over the net.ai discussion, I am
responsible for what gets passed on to the Arpanet.  Since I would
like to screen out topics unrelated to AI or computer science, I may
choose not to pass on some of the net.ai submissions related to
bigotry.  Contact me at AIList-Request@SRI-AI if you wish to discuss
this policy. -- KIL]

------------------------------

Date: 25 Aug 1983 1625-PDT
From: Jay <JAY@USC-ECLC>
Subject: [flamer@ida-no: Re:  Turing Test; Parry, Eliza, and Flamer]

Is this a human response??

j'
                ---------------

  Return-path: <flamer%umcp-cs%UMCP-CS@UDel-Relay>
  Received: from UDEL-RELAY by USC-ECLC; Thu 25 Aug 83 16:20:32-PDT
  Date:     25 Aug 83 18:31:38 EDT  (Thu)
  From: flamer@ida-no
  Return-Path: <flamer%umcp-cs%UMCP-CS@UDel-Relay>
  Subject:  Re:  Turing Test; Parry, Eliza, and Flamer
  To: jay@USC-ECLC
  In-Reply-To: Message of Tue, 16-Aug-83 17:37:00 EDT from
      JAY%USC-ECLC@sri-unix.UUCP <4325@sri-arpa.UUCP>
  Via:  UMCP-CS; 25 Aug 83 18:55-EDT

        From: JAY%USC-ECLC@sri-unix.UUCP

        . . . Flamer would read messages from the net and then
        reply to the sender/bboard denying all the person said,
        insulting him, and in general making unsupported statements.
        . . .

  Boy! Now that's the dumbest idea I've heard in a long time. Only an
  idiot such as yourself, who must be totally out of touch with reality,
  could come up with that. Besides, what would it prove?  It's not much
  of an accomplishment to have a program which is stupider than a human.
  The point of the Turing test is to demonstrate a program that is as
  intelligent as a human. If you can't come up with anything better,
  stay off the net!

------------------------------

End of AIList Digest
********************

∂30-Aug-83  1825	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #51
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Aug 83  18:22:27 PDT
Date: Tuesday, August 30, 1983 4:30PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #51
To: AIList@SRI-AI


AIList Digest           Wednesday, 31 Aug 1983     Volume 1 : Issue 51

Today's Topics:
  Expert Systems - Availability & Dissent,
  Automatic Translation - State of the Art,
  Fifth Generation - Book Review & Reply
----------------------------------------------------------------------

Date: 26 Aug 83 17:00:18-PDT (Fri)
From: decvax!ittvax!dcdwest!benson @ Ucb-Vax
Subject: Expert Systems
Article-I.D.: dcdwest.216

I would like to know whether there are commercial expert
systems available for sale.  In particular, I would like to
know about systems like the Programmer's Apprentice, or other
such programming aids.

Thanks in advance,

Peter Benson
!decvax!ittvax!dcdwest!benson

------------------------------

Date: 26 Aug 83 11:12:31-PDT (Fri)
From: decvax!genrad!mit-eddie!rh @ Ucb-Vax
Subject: bulstars
Article-I.D.: mit-eddi.656

from AP (or NYT?)


       COMPUTER TROUBLESHOOTER:
       'Artificially Intelligent' Machine Analyzes Phone Trouble

           WASHINGTON - Researchers at Bell Laboratories say
       they've developed an ''artificially intelligent'' computer
       system that works like a highly trained human analyst to
       find troublespots within a local telephone network. Slug
       PM-Bell Computer. New, will stand. 670 words.

Oh, looks like we beat the Japanese :-( Why weren't we told that
'artificial intelligence' was about to exist?  Does anyone know if
this is the newspaper's fault, or if the guy they talked to just
wanted more attention???


-- Randwulf
(Randy Haskins);
Path= genrad!mit-eddie!rh
or... rh@mit-ee (via mit-mc)

------------------------------

Date: Mon 29 Aug 83 21:36:04-CDT
From: Jonathan Slocum <LRC.Slocum@UTEXAS-20.ARPA>
Subject: claims about "solving NLP"

I have never been impressed with claims about "solving the Natural
Language Processing problem" based on `solutions' for 1-2 paragraphs
of [usu. carefully (re)written] text.  There are far too many scale-up
problems for such claims to be taken seriously.  How many NLP systems
are there that have been applied to even 10 pages of NATURAL text,
with the full intent of "understanding" (or at least "treating in the
identical fashion") ALL of it?  Very few.  Or 100 pages?  Practically
none.  Schank & Co.'s "AP wire reader," for example, was NOT intended
to "understand" all the text it saw [and it didn't!], but only to 
detect and summarize the very small proportion that fell within its
domain -- a MUCH easier task, esp. considering its minuscule domain
and microscopic dictionary.  Even then, its performance was -- at best
-- debatable.

And to anticipate questions about the texts our MT system has been
applied to:  about 1,000 pages to date -- NONE of which was ever
(re)written, or pre-edited, to affect our results.  Each experiment
alluded to in my previous msg about MT was composed of about 50 pages
of natural, pre-existing text [i.e., originally intended and written
for HUMAN consumption], none of which was ever seen by the project
linguists/programmers before the translation test was run.  (Our 
dictionaries, by the way, currently comprise about 10,000 German
words/phrases, and a similar number of English words/phrases.)

We, too, MIGHT be subject to further scale-up problems -- but we're a
damned sight farther down the road than just about any other NLP
project has been, and have good reason to believe that we've licked
all the scale-up problems we'll ever have to worry about.  Even so, we
would NEVER be so presumptuous as to claim to have "solved the NLP
problem," needing only a large collection of `linguistic rules' to
wrap things up!!!  We certainly have NOT done so.

REALLY, now...

------------------------------

Date: Mon 29 Aug 83 17:11:26-CDT
From: Jonathan Slocum <LRC.Slocum@UTEXAS-20.ARPA>
Subject: Machine Translation - a very short tutorial

Before proclaiming the impossibility of automatic [i.e., computer]
translation of human languages, it's perhaps instructive to know
something about how human translation IS done -- and is not done -- at
least in places where it's taken seriously.  It is also useful,
knowing this, to propose a few definitions of what may be counted as
"translation" and -- more to the point -- "useful translation."
Abbreviations: MT = Machine Translation; HT = Human Translation.

To start with, the claim that "a real translator reads and understands
a text, and then generates [the text] in the [target] language" is
empty.  First, NO ONE really has anything like a good idea of HOW
humans translate, even though there are schools that "teach
translation."  Second, all available evidence indicates that (point #1
notwithstanding), different humans do it differently.  Third, it can
be shown (viz simultaneous interpreters) that nothing as complicated
as "understanding" need take place in all situations.  Fourth, 
although the contention that "there generally aren't 1-1
correspondences between words, phrases..."  sounds reasonable, it is
in fact false an amazing proportion of the time, for languages with
similar derivational histories (e.g., German & English, to say nothing
of the Romance languages).  Fifth, it can be shown that highly
skilled, well-respected technical-manual translators do not always (if
ever) understand the equipment for which they're translating manuals
[and cannot, therefore, be argued to understand the original texts in 
any fundamentally deep sense] -- and must be "understanding" in a
shallower, probably more "linguistic" sense (one perhaps more
susceptible to current state-of-the-art computational treatment).

Now as to how translation is performed in practice.  One thing to
realize here is that, at least outside the U.S. [i.e., where
translation is taken seriously and where almost all of it is done], NO
HUMAN performs "unrestricted translation" -- i.e., human translators
are trained in (and ONLY considered competent in) a FEW AREAS.
Particularly in technical translation, humans are trained in a limited
number of related fields, and are considered QUITE INCOMPETENT outside
those fields.  Another thing to realize is that essentially ALL
TRANSLATIONS ARE POST-EDITED.  I refer here not to stylistic editing,
but to editing by a second translator of superior skill and
experience, who NECESSARILY refers to the original document when
revising his subordinate's translation.  The claim that MT is
unacceptable IF/BECAUSE the results must be post-edited falls to the
objection that HT would be unacceptable by the identical argument.
Obviously, HT is not considered unacceptable for this reason -- and
therefore, neither should MT.  All arguments for acceptability then
devolve upon the question of HOW MUCH revision is necessary, and HOW
LONG it takes.

Happily, this is where we can leave the territory of pontifical
pronouncements (typically uttered by the un- or ill-informed), and
begin to move into the territory of facts and replicable experiments.
Not entirely, of course, since THERE IS NO SUCH THING AS A PERFECT
TRANSLATION and, worse, NO ONE CAN DEFINE WHAT CONSTITUTES A GOOD
TRANSLATION.  Nevertheless, professional post-editors are regularly
saddled with the burden of making operational decisions about these
matters ("Is this sufficiently good that the customer is likely to 
understand the text?  Is it worth my [company's] time to improve it
further?").  Thus we can use their decisions (reflected, e.g., in
post-editing time requirements) to determine the feasibility of MT in
a more scientific manner; to wit: what are the post-editing
requirements of MT vs. HT?  And in order to assess the economic
viability of MT, one must add: taking all expenses into account, is MT
cost-effective [i.e., is HT + human revision more or less expensive
than MT + human revision]?
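The cost question can be made concrete with a back-of-the-envelope
sketch.  Every number below is invented purely for illustration; none
of it comes from the experiments described in this message:

```python
def total_cost(translation_cost_per_page: float,
               revision_hours_per_page: float,
               reviser_rate_per_hour: float,
               pages: int) -> float:
    """Total cost = first-pass translation + post-editing revision."""
    return pages * (translation_cost_per_page
                    + revision_hours_per_page * reviser_rate_per_hour)

# Hypothetical figures (NOT the project's data): HT costs more per
# page up front; MT needs more *changes* per sentence, but -- per
# finding (2) -- less revision *time*.
ht = total_cost(translation_cost_per_page=30.0,
                revision_hours_per_page=0.50,
                reviser_rate_per_hour=40.0, pages=50)
mt = total_cost(translation_cost_per_page=5.0,
                revision_hours_per_page=0.35,
                reviser_rate_per_hour=40.0, pages=50)
print(ht)  # 2500.0
print(mt)  # 950.0
```

With these invented numbers, MT + revision comes out cheaper than
HT + revision, mirroring the shape of finding (3).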

Re: these last points, our experimental data to date indicate that (1)
the absolute post-editing requirements (i.e., something like "number
of changes required per sentence") for MT are increased w.r.t. HT
[this is no surprise to anyone]; (2) paradoxically, post-editing time
requirements of MT are REDUCED w.r.t. HT [surprise!]; and (3) the
overall costs of MT (including revision) are LESS than those for HT
(including revision) -- a significant finding.

We have run two major experiments to date [with our funding agency
collecting the data, not the project staff], BOTH of which produced
these results; the more recent one naturally produced better results
than the earlier one, and we foresee further improvements in the near
future.  Our finding (2) above, which SEEMS inconsistent with finding
(1), is explainable with reference to the sociology of post-editing
when the original translator is known to be human, and when he will
see the results (which probably should, and almost always does,
happen).  Further details will appear in the literature.

So why haven't you heard about this, if it's such good news?  Well,
you just did!  More to the point, we have been concentrating on
producing this system more than on writing papers about it [though I
have been presenting papers at COLING and ACL conferences], and
publishing delays are part of the problem [one reason for having
conferences].  But more papers are in the works, and the secret will
be out soon enough.

------------------------------

Date: 26 Aug 83  1209 PDT
From: Jim Davidson <JED@SU-AI>
Subject: Fifth Generation (Book Review)

                 [Reprinted from the SCORE BBoard.]

14 Aug 8
by Steven Schlossstein
(c) 1983 Dallas Morning News (Independent Press Service)

    THE FIFTH GENERATION: Artificial Intelligence and Japan's Computer
Challenge to the World. By Edward Feigenbaum and Pamela McCorduck 
(Addison-Wesley, $15.55).

    (Steven Schlossstein lived and worked in Japan with a major Wall 
Street firm for more than six years. He now runs his own Far East 
consulting firm in Princeton, N.J. His first novel, ''Kensei,'' which
deals with the Japanese drive for industrial supremacy in the high 
tech sector, will be published by Congdon & Weed in October).

    ''Fukoku Kyohei'' was the rallying cry of Meiji Japan when that 
isolated island country broke out of its self-imposed cultural cocoon 
in 1868 to embark upon a comprehensive plan of modernization to catch 
up with the rest of the world.
    ''Rich Country, Strong Army'' is literally what it means.
Figuratively, however, it represented Japan's first experimentation 
with a concept called industrial policy: concentrating on the 
development of strategic industries - strategic whether because of 
their connection with military defense or because of their importance 
in export industries intended to compete against foreign products.
    Japan had to apprentice herself to the West for a while to bring
it off.
    The military results, of course, were impressive. Japan defeated 
China in 1895, blew Russia out of the water in 1905, annexed Korea and
Taiwan in 1911, took over Manchuria in 1931, and sat at the top of the
Greater East Asia Co-Prosperity Sphere by 1940. This from a country
previously regarded as barbarian by the rest of the world.
    The economic results were no less impressive. Japan quickly became
the world's largest shipbuilder, replaced England as the world's 
leading textile manufacturer, and knocked off Germany as the premier 
producer of heavy industrial machinery and equipment. This from a 
country previously regarded as barbarian by the rest of the world.
    After World War II, the Ministry of Munitions was defrocked and 
renamed the Ministry of International Trade and Industry (MITI), but 
the process of strategy formulation remained the same.
    Only the postwar rendition was value-added, and you know what 
happened. Japan is now the world's No. 1 automaker, produces more 
steel than anyone else, manufactures over half the TV sets in the 
world, is the only meaningful producer of VTRs, dominates the 64K 
computer chip market, and leads the way in one branch of computer 
technology known as artificial intelligence (AI). All this from a 
country previously regarded as barbarian by the rest of the world.
    What next for Japan? Ed Feigenbaum, who teaches computer science
at Stanford and pioneered the development of AI in this country, and 
Pamela McCorduck, a New York-based science writer, write that Japan is
trying to dominate AI research and development.
    AI, the fifth generation of computer technology, is to your
personal computer as your personal computer is to pencil and paper. It
is based on processing logic, rather than arithmetic, deals in 
inferences, understands language and recognizes pictures. Or will. It 
is still in its infancy. But not for long; last year, MITI established
the Institute for New Generation Computer Technology, funded it
aggressively, and put some of the country's best brains to work on AI.
    AI systems consist of three subsystems: a knowledge base needed
for problem solving and understanding, an inference subsystem that 
determines what knowledge is relevant for solving the problem at hand,
and an interaction subsystem that facilitates communication between
the overall system and its user - between man and machine.
    Now America does not have a MITI, does not like industrial policy,
has not created an institute to work on AI, and is not even convinced 
that AI is the way to go. But Feigenbaum and McCorduck argue that even
if the Japanese are not successful in developing the fifth generation,
the spin-off from this 10-year project will be enormous, with
potentially wide applications in computer technology, 
telecommunications, industrial robotics, and national defense.
    ''The Fifth Generation'' walks you through AI, how and why Japan 
puts so much emphasis on the project, and how and why the Western 
nations have failed to respond to the challenge. National defense 
implications alone, the authors argue, are sufficient to justify our 
taking AI seriously.
    Smart bombs and laser weapons are but advanced wind-up toys
compared with the AI arsenal of the future. The Pentagon has a little
project called ARPA - Advanced Research Projects Agency - that has
been supporting AI small-scale, but not with the people or funding the
authors feel is meaningful.
    Unfortunately, ''The Fifth Generation'' suffers from some 
organizational defects. You don't really get into AI and how its 
complicated systems operate until you're almost halfway through the 
book. And the chapter on industrial policy - from which all 
technological blessings flow - is only three pages long. It's also at 
the back of the book instead of up front, where it belongs.
    But the issues are highlighted well by experts who are not only 
knowledgeable about AI but who are concerned about our lack of 
response to yet another challenge from Japan. The author's depiction 
of the drivenness of the Japanese is especially poignant. It all boils
down to national survival.
    Japan no longer is in a position of apprenticeship to the West.
                       [Several lines of garbled text omitted.]
mount an effective response to the Japanese challenge? ''The
Fifth Generation'' doesn't think so, and for compelling reasons. Give
it a read.
    END

------------------------------

Date: Fri 26 Aug 83 15:40:16-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM>
Subject: Re: Fifth Generation (Book Review)

                 [Reprinted from the SCORE BBoard.]

Anybody who says the Japanese are *leading* in "one branch of computer
technology known as artificial intelligence" is out to lunch.  And by
what standards is DARPA describable as small?  And what is all this
BirdSong about other countries failing to "respond to the challenge"?
Hasn't this turkey read the Alvey report?  Hasn't he noticed France's
vigorous encouragement of their domestic computer industry?  Who in
America is not "convinced that AI is the way to go" (this was true of
the leadership in Britain until the Alvey report came out, I admit)
and what are they doing to hinder AI work?  Does he think 64k RAMs are
the only things that go into computers?  Does he, incidentally, know
that AI has had plenty of pioneers outside of the HPP?

More to the point, most of you know about the wildly over-optimistic
promises that were made in the 60's on behalf of AI, and what happened
in their wake.  Whipping up public hysteria is a dangerous game,
especially when neither John Q. Public nor Malcolm Forbes himself can
do very much about the 5GC project, except put pressure on the local
school board to teach the kids some math and science.
                                                        - Richard

------------------------------

End of AIList Digest
********************

∂31-Aug-83  1538	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #52
Received: from SRI-AI by SU-AI with TCP/SMTP; 31 Aug 83  15:34:47 PDT
Date: Wednesday, August 31, 1983 2:12PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #52
To: AIList@SRI-AI


AIList Digest           Wednesday, 31 Aug 1983     Volume 1 : Issue 52

Today's Topics:
  Bibliography - Vision
----------------------------------------------------------------------

Date: Tue, 30 Aug 83 15:26:12 EDT
From: Morton A. Hirschberg <mort@brl-bmd>
Subject: Vision Bibliography

I have two hundred references from DTIC and NTIS on vision.  The list 
is not complete by any means, since I am looking at scene analysis and 
algorithms.  References are more or less from the last ten years, with 
a few 1982-83 items.  Shown are title, authors, AD number, and 
publication date.  Hope this helps some.  Mort.

[I have reformatted the entries and sorted them by author.  For files
of this size (about 20K characters), I find that it hassles the fewest
people if I just send it out instead of sending FTP instructions.
-- KIL]


GJ Agin, Representation and Description of Curved Objects, AD755139, 
Oct 72.

N Ahuja & A Rosenfeld & RM Haralick, Neighbor Gray Levels as Features 
in Pixel Classification, , 80.

N Ahuja, Mosaic Models for Image Analysis and Synthesis, ADA050100, 
Nov 77.

JO Amoss, A Syntax-Directed Method of Extracting Topological Regions 
from a Silhouette, ADA045944, Jul 77.

HC Andrews (Project Director), Image Understanding Research, 
ADA054091, Mar 78.

HC Andrews (Project Director), Image Understanding Research, 
ADA046214, Sep 77.

Anonymous, Annual Report 1980, N81-27841, Jan 81.

Anonymous, Automatic Scene Analysis, N81-12776, Nov 79.

Anonymous, Optical Array Processor, ADA118371, Jul 82.

K Arbter, Erkennung und Vermessung von Konturen mit Hilfe der 
Fouriertransformation, ADB061321, Sep 81.

A Baldwin & R Greenblatt & J Holloway & T Knight & D Moon & D Weinreb,
LISP Machine Progress Report, ADA062178, Aug 77.

DH Ballard, Parameter Networks: Towards a Theory of Low-Level Vision, 
ADA101216, Apr 81.

AG Barto & RS Sutton, Goal Seeking Components for Adaptive 
Intelligence: An Initial Assessment, ADA101476, Apr 81.

LS Baumann ed., Image Understanding, ADA052900, Apr 77.

LS Baumann ed., Image Understanding, ADA084764, Apr 80.

LS Baumann ed., Image Understanding, ADA098261, Apr 81.

LS Baumann ed., Proceedings: Image Understanding Workshop, ADA052902, 
May 78.

LS Baumann ed., Proceedings: Image Understanding Workshop, ADA064765, 
Nov 78.

BL Bean & WL Flowers & WM Gutman & AV Jeliek & RL Spellicy, Laser, IR 
and NMMW Propagation Measurements and Analyses, ADB055523L, Feb 80.

B Bhanu, Shape Matching and Image Segmentation Using Stochastic 
Labeling, ADA110033, Aug 81.

GA Biecker & DS Paden & JL Potter, Feature Tagging, ADA091691, Apr 80.

HD Block & NJ Nilsson & RW Duda, Determination and Detection of 
Features in Patterns, AD427840, Dec 63.

M Brady, Computational Approaches to Image Understanding, ADA108191, 
Oct 81.

A Broder & A Rosenfeld, Gradient Magnitude as an Aid in Color Pixel 
Classification, ADA091995, Jun 80.

RA Brooks, Symbolic Reasoning Among 3-D Models and 2-D Images, 
ADA110316, Jun 81.

J Bryant & LF Guseman Jr., Basic Research in Mathematical Pattern 
Recognition and Image Analysis, N81-23561, Jan 81.

BL Bullock, Unstructured Control and Communication Processes in Real 
World Scene Analysis, ADA049458, Oct 77.

GJ Burton, Contrast Discrimination by the Human Visual System, 
ADA104181, May 81.

B Carrigan, Pattern Recognition and Image Processing: Citations from 
NTIS Aug 77 - Jul 79, PB80814221, Aug 80.

R Cederberg, Chain-Link Coding and Segmentation for Raster Scan 
Devices, N79-17129, Nov 78.

A Celmins, A Manual for General Least Squares Model Fitting, 
ADB040229L, Jun 79.

I Chakravarty, A Generalized Line and Junction Labelling Scheme with 
Applications to Scene Analysis, PB278073, Dec 77.

I Chakravarty, A Survey of Current Techniques for Computer Vision, 
PB268385, Jan 77.

R Chellappa, On an Estimation Scheme for Gauss Markov Random Field 
Models, ADA102057, Apr 81.

CH Chen, A Comparative Evaluation of Statistical Image Segmentation 
Techniques, ADA094237, Jan 81.

CH Chen, Image Processing, ADA095552, Feb 81.

CH Chen, Research Progress on Image Segmentation, ADA101827, Jul 81.

CH Chen, Some New Results on Image Processing and Recognition, 
ADA055862, Jun 78.

PW Cheng, A Psychophysical Approach to Form Perception:  
Incompatibility as an Explanation of Integrality, ADA087607, Jul 80.

LS Coles & B Raphael & RO Duda & CA Rosen & TD Garvey & RA Yates & JH 
Munson, Application of Intelligent Automata to Reconnaissance, 
AD868871, Nov 69.

SA Cook & TP Harrington & H Toffer, Digital-Image Processing Improves 
Man-Machine Communication at a Nuclear Reactor, UNI-SA-98, Aug 82.

JL Crowley, A Representation for Visual Information, ADA121443, Nov 
81.

S Cushing and L Vaina, Further Progress in Knowledge Representation 
for Image Understanding, ADA098416, Mar 81.

DARPA, Proceedings: Image Understanding Workshop, ADA052901, Oct 77.

SM Dunn, Generalized Blomqvist Correlation, ADA102058, Apr 81.

CR Dyer, Memory-Augmented Cellular Automata for Image Analysis, 
ADA065328, Nov 78.

JO Eklundh, Studies of Some Algorithms for Digital Picture Processing,
N81-14656, 81.

J Fain & D Gorlin & F Hayes-Roth & S Rosenschein & H Sowizral & D 
Waterman, The ROSIE Language Reference Manual, ADA111025, Dec 81.

JJ Fasano & TS Huang, Feature Dimensionality Reduction Through Use of 
the Karhunen-Loeve Transform in a Multisensor Pattern Recognition 
System, ADB057184, May 81.

CL Forgy, OPS5 User's Manual, ADA106558, Jul 81.

G Fowler & RM Haralick & FG Gray & C Feustel & C Grinstead, Efficient 
Graph Automorphism by Vertex Partitioning, , 83.

MS Fox, Reasoning with Incomplete Knowledge in a Resource-Limited 
Environment: Integrating Reasoning and Knowledge Acquisition, 
ADA102285, Mar 81.

H Freeman, Shape Description Via the Use of Critical Points, 
ADA040273, Jun 77.

BR Frieden, Image Processing, ADA095075, Feb 81.

DD Garber, Computational Models for Texture Analysis and Synthesis, 
ADA102470, May 81.

Geo-Centers, Inc., A Review of Three-Dimensional Vision for Robotics, 
ADA118055, May 82.

AP Ginsburg, Perceptual Capabilities, Ambiguities and Artifacts in Man
and Machine, ADA109864, 81.

RC Gonzalez, Evaluation of the Chitra Character Recognition System and
Development of Feature Extraction Algorithms, ADB059991L, May 80.

GD Hadden, A Cellular Automata Approach to Computer Vision and Image 
Processing, ADA096569, Sep 80.

SE Haehn & D Morris, OLPARS VI (On-Line Pattern Analysis and 
Recognition System), ADA118732, Jun 82.

SE Haehn & D Morris, OLPARS VI (On-Line Pattern Analysis and 
Recognition System), ADA118733, Jun 82.

EL Hall & RC Gonzalez, Multi-Sensor Scene Synthesis and Analysis, 
ADA110812, Sep 81.

EL Hall & W Frei & RY Wong, Scene Content Analysis Program - Phase II,
ADA045624, Jul 77.

RM Haralick & D Queeney, Understanding Engineering Drawings, , 82.

RM Haralick & GL Elliott, Increasing Tree Search Efficiency for 
Constraint Satisfaction Problems, , 80.

RM Haralick & LG Shapiro, Decomposition of Polygonal Shapes by 
Clustering, , .

RM Haralick & LG Shapiro, The Consistent Labeling Problem: Part I, , 
Apr 79.

RM Haralick & LG Shapiro, The Consistent Labeling Problem: Part II, , 
May 80.

RM Haralick & LT Watson, A Facet Model for Image Data, , 81.

RM Haralick & LT Watson & TJ Laffey, The Topographic Primal Sketch, , 
83.

RM Haralick, An Interpretation for Probabilistic Relaxation, , 83.

RM Haralick, Edge and Region Analysis for Digital Image Data, , 80.

RM Haralick, Ridges and Valleys on Digital Images, , 83.

RM Haralick, Scene Analysis, Homomorphism, and Consistent Labeling 
Problem Algorithms, ADA082058, Jan 80.

RM Haralick, Some Neighborhood Operators, , 81.

RM Haralick, Statistical and Structural Approaches to Texture, , May 
79.

RM Haralick, Structural Pattern Recognition, Arrangements and Theory 
of Covers, , .

RM Haralick, Using Perspective Transformations in Scene Analysis, , 
80.

F Hayes-Roth & D Gorlin & S Rosenschein & H Sowizral & D Waterman, 
Rationale and Motivation for ROSIE, ADA111018, Nov 81.

CA Hlavka & RM Haralick & SM Carlyle & R Yokoyama, The Discrimination 
of Winter Wheat Using a Growth-State Signature, , 80.

YC Ho and AK Agrawala, On Pattern Classification Algorithms - 
Introduction and Survey, AD667728, Mar 68.

JM Hollerbach, Hierarchical Shape Description of Objects by Selection 
and Modification of Prototypes, ADA024970, Nov 75.

BR Hunt, Automation of Image Processing, ADA111029, May 81.

NE Huston Jr., Shift and Scale Invariant Preprocessor, ADA114519, Dec 
81.

RA Jarvis, Computer Image Segmentation: First Partitions Using Shared 
Near Neighbor Clustering, PB277929, Dec 77.

RA Jarvis, Computer Image Segmentation: Structured Merge Strategies, 
PB277930, Dec 77.

HA Jenkinson, Image Processing Techniques for Automatic Target 
Detection, ADB055686L, Mar 81.

LN Kanal, Pattern Analysis & Modeling, ADA070961, Apr 79.

MD Kelly, Visual Identification of People by Computer, AD713252, Jul 
70.

CE Kim, On Cellular Straight Line Segments, ADA089511, Jul 80.

CE Kim, Three-Dimensional Digital Line Segments, ADA106813, Aug 81.

RL Kirby & A Rosenfeld, A Note on the Use of (Gray Level, Local 
Average Gray Level) Space as an Aid in Threshold Selection, ADA065695,
Jan 79.

L Kitchens & A Rosenfeld, Edge Evaluation Using Local Edge Coherence, 
ADA109564, Dec 80.

AH Klopf, Evolutionary Pattern Recognition Systems, AD637492, Nov 65.

WA Kornfeld, The Use of Parallelism to Implement a Heuristic Search, 
ADA099184, Mar 81.

E Kowler, Eye Movement and Visual Information Processing, ADA112399, 
Dec 81.

S Krusemark & RM Haralick, An Operating System Interface for 
Transportable Image Processing Software, , 83.

FP Kuhl & CR Giardina & OR Mitchell & DJ Charpentier, 
Three-Dimensional Object Recognition Using N-Dimensional Chain Codes, 
ADA119011, Mar 82.

R LaPado & C Reader & L Hubble, Image Processing Displays: A Report on
Commercially Available State-of-the-Art Features, ADA097226, Aug 78.

BA Lambird & D Lavine & LN Kanal, Interactive Knowledge-Based 
Cartographic Feature Extraction, ADB061479L, Oct 81.

BA Lambird & D Lavine & GC Stockman & KC Hayes & LN Kanal, Study of 
Digital Matching of Dissimilar Images, ADA102619, Nov 80.

M Lebowitz, Generalization and Memory in an Integrated Understanding 
System, ADA093083, Oct 80.

T Lozano-Perez, Spatial Planning: A Configuration Space Approach, 
ADA093934, Dec 80.

AV Luizov & NS Fedorova, Illumination and Visual Information, 
ADB056076L, Mar 81.

WI Lundgren, Scene Analysis, ADA115603, Dec 81.

D Marr and HK Nishihara, Representation and Recognition of the Spatial
Organization of Three Dimensional Shapes, ADA031882, Aug 76.

D Marr and S Ullman, Directional Selectivity and Its Use in Early 
Visual Processing, ADA078054, Jun 79.

D Marr, The Low-Level Symbolic Representation of Intensity Changes in 
an Image, ADA013669, Dec 74.

WN Martin and JK Aggarwal, Dynamic Scene Analysis: The Study of Moving
Images, ADA042124, Jan 77.

WN Martin and JK Aggarwal, Survey: Dynamic Scene Analysis, ADA060536, 
78.

J McCarthy & T Binford & C Green & D Luckham & Z Manna, ed. L Earnest, 
Recent Research in Artificial Intelligence and Foundations of 
Programming, ADA066562, Sep 78.

JL McClelland & DE Rumelhart, An Interactive Activation Model of the 
Effect of Context in Perception Part II, ADA090189, Jul 80.

C McCormick, Strategies for Knowledge-Based Image Interpretation, 
ADA115914, May 82.

KG Mehrotra, Some Observations in Pattern Recognition, ADA113382, Feb 
82.

DL Milgram & A Rosenfeld & T Willett & G Tisdale, Algorithms and 
Hardware Technology for Image Recognition, ADA057191, Mar 78.

DL Milgram & DJ Kahl, Recursive Region Extraction, ADA049591, Dec 77.

DL Milgram, Region Extraction Using Convergent Evidence, ADA061591, 
Jun 78.

M Minsky, K-Lines: A Theory of Memory, ADA078116, Jun 79.

OR Mitchell & FP Kuhl & TA Grogan & DJ Charpentier, A Shape Extraction
and Recognition System, , Mar 82.

CB Moler & GW Stewart, An Efficient Matrix Factorization for Digital 
Image Processing, LA-7637-MS, Jan 79.

MG Moran, Image Analysis, ADA066732, Mar 79.

JL Muerle, Project PARA: Perceiving and Recognition Automata, AD33137,
Dec 63.

GK Myers & RE Twogood, An Algorithm for Enhancing Low-Contrast Details
in Digital Images, UCID-18015, Nov 78.

NTIS, Pattern Recognition and Image Processing Aug 1980-Nov 1981, 
PB82803453, Jan 82.

PM Narendra & BL Westover, Advanced Pattern-Matching Techniques for 
Autonomous Acquisition, ADB059773L, Jan 81.

WP Nelson, Learning Game Evaluation Functions with a Compound Linear 
Machine, ADA085710, Mar 80.

NJ Nilsson & B Raphael & S Wahlstrom, Application of Intelligent 
Automata to Reconnaissance, AD841509, Jun 68.

NJ Nilsson & CA Rosen & B Raphael et al., Application of Intelligent 
Automata to Reconnaissance, AD849872, Feb 69.

NJ Nilsson, A Framework for Artificial Intelligence, ADA068188, Mar 
79.

S Nyberg, On Image Restoration and Noise Reduction with Respect to 
Subjective Criteria, N81-30847, 81.

JV Oldfield, A Special-Purpose Processor for an Automatic Feature 
Extraction System, ADA090789, Aug 80.

JS Ostrem & HD Crane, Automatic Handwriting Verification (AHV), 
ADA111329, Nov 81.

CC Parma & AR Hanson & EM Riseman, Experiments in Schema-Driven 
Interpretation of a Natural Scene, ADA085780, Apr 80.

WA Pearlman, A Visual System Model and a New Distortion Measure in the
Context of Image Processing, PB274534, Jul 77.

T Peli, An Algorithm for Recognition and Localization of Rotated and 
Scaled Objects, ADA102920, Jul 80.

M Pietikainen & A Rosenfeld, Edge-Based Texture Measures, ADA102060, 
May 81.

LJ Pinson & JP Lankford, Research on Image Enhancement Algorithms, 
ADA103216, May 81.

T Poggio & HK Nishihara & KRK Nielsen, Zero-Crossing and 
Spatiotemporal Interpolation in Vision: Aliasing and Electric Coupling
Between Sensors, ADA117608, May 82.

T Poggio, Marr's Approach to Vision, ADA104198, Aug 81.

JM Prager, Extracting and Labelling Boundary Segments in Natural 
Scenes (Revised and Updated), ADA060042, Sep 78.

RC Prather and LM Uhr, Discovery and Learning Techniques for Pattern 
Recognition, AD610725, Nov 64.

R Reddy and A Rosenfeld, Final Report on Workshop on Control 
Structures and Knowledge Representation for Image and Speech 
Understanding, ADA076563, Apr 79.

WC Rice & JS Shipman & RJ Spieler, Interactive Digital Image 
Processing Investigation Phase II, ADA087518, Apr 80.

W Richards & K Dismukes, Vision Research for Flight Simulation, 
ADA118721, Jul 82.

W Richards & KA Stevens, Efficient Computations and Representations of
Visual Surfaces, ADA089832, Dec 79.

CA Rosen and NJ Nilsson, Application of Intelligent Automata to 
Reconnaissance, AD820989, Sep 67.

S Rosenberg, Understanding in Incomplete Worlds, ADA062364, May 78.

A Rosenfeld & DL Milgram, Algorithms and Hardware Technology for Image
Recognition, ADA041906, Jul 77.

A Rosenfeld, Cellular Architectures for Pattern Recognition, 
ADA117049, Apr 82.

A Rosenfeld, Image Understanding Using Overlays, ADA086513, May 80.

A Rosenfeld, On Connectivity Properties of Grayscale Pictures, 
ADA108602, Sep 81.

A Rosenfeld, Pebble, Pushdown, and Parallel-Sequential Picture 
Acceptors, ADA051857, Feb 78.

JM Rubin & WA Richards, Color Vision and Image Intensities: When Are 
Changes Material?, ADA103926, May 81.

W Rutkowski, Shape Completion, ADA047682, Aug 77.

EC Seed & HJ Siegel, The Use of Database Techniques in the 
Implementation of a Syntactic Pattern Recognition Task on a Parallel 
Reconfigurable Machine, ADA113934, Dec 81.

S Seeman, FIPS Software for Fast Fourier Transform, Filtering and 
Image Rotation, N79-17594, Oct 78.

LG Shapiro & RM Haralick, A Spatial Data Structure, , 80.

LG Shapiro & RM Haralick, Organization of Relational Models for Scene 
Analysis, , Nov 82.

LG Shapiro & RM Haralick, Structural Descriptions and Inexact 
Matching, , Sep 81.

JE Shore & RM Gray, Minimum Cross-Entropy Pattern Classification and 
Cluster Analysis, ADA086158, Apr 80.

DW Small, Image Processing Program Completion Report, ADA061597, Aug 
78.

DA Smith, Using Enhanced Spherical Images for Object Representation, 
ADA078065, May 79.

DR Smith, On the Computational Complexity of Branch and Bound Search 
Strategies, ADA081608, Nov 79.

BE Soland & PM Narendra & RC Fitch & DV Serreyn & TG Kopet, Prototype 
Automatic Target Screener, ADA060849, Jun 78.

BE Soland & PM Narendra & RC Fitch & DV Serreyn & TG Kopet, Prototype 
Automatic Target Screener, ADA060850, Sep 78.

AJ Stenger & TA Zimmerlin & JP Thomas & M Braunstein, Advanced 
Computer Image Generation Techniques Exploiting Perceptual 
Characteristics, ADA103365, Aug 81.

KA Stevens, Surface Perception from Local Analysis of Texture and 
Contour, ADA084803, Feb 80.

GC Stockman & BA Lambird & D Lavine & LN Kanal, Knowledge-Based Image 
Analysis, ADA101319, Apr 81.

GC Stockman & SH Kopstein, The Use of Models in Image Analysis, 
ADA067166, Jan 79.

TM Strat, A Numerical Method for Shape-From-Shading from a Single 
Image, ADA063071, Jan 79.

LT Suminski Jr. & PH Hulin, Computer Generated Imagery (CGI) Current 
Technology and Cost Measures Feasibility Study, ADA091636, Sep 80.

P Szolovits & WA Martin, Brand X Manual, ADA093041, Nov 80.

J Taboada, Coherent Optical Methods for Applications in Robot Visual 
Sensing, ADA110107, 81.

JM Tenenbaum & MA Fischler & HC Wolf, A Scene Analysis Approach to 
Remote Sensing, N79-13438, Jun 78.

U Maryland, Algorithms and Hardware Technology for Image Recognition, 
ADA049590, Oct 77.

S Ullman, The Interpretation of Structure from Motion, ADA062814, Oct 
76.

SA Underwood et al., Visual Learning and Recognition by Computer, 
AD752238, Apr 72.

L Vaina & S Cushing, Foundation of a Knowledge Representation System 
for Image Understanding, ADA095992, Oct 80.

FMDA Vilnrotter, Structural Analysis of Natural Textures, ADA110032, 
Sep 81.

HF Walker, The Mean-Square Error Optimal Linear Discriminant Function 
and Its Application to Incomplete Data Vectors, N79-21827, Feb 79.

S Wang & AY Wu & A Rosenfeld, Image Approximation from Grayscale 
"Medial Axes", ADA091993, May 80.

S Wang & DB Elliott & JB Campbell & RW Erich & RM Haralick, Spatial 
Reasoning in Remotely Sensed Data, , Jan 83.

LT Watson & RM Haralick & OA Zuniga, Constrained Transform Coding and 
Surface Fitting, , May 83.

OA Wehmanen, Pure Pixel Classification Software, N81-11689, Jul 80.

D Weinreb & D Moon, Flavors: Message Passing in the LISP Machine, 
ADA095523, Nov 80.

R Weyhrauch, Prolegomena to a Theory of Formal Reasoning, ADA065698, 
Dec 78.

TD Williams, Computer Interpretation of a Dynamic Image from a Moving 
Vehicle, ADA107565, May 81.

PH Winston & RH Brown editors, Progress in Artificial Intelligence 
1978 Volume 1, ADA068838, 79.

PH Winston & RH Brown eds., Progress in Artificial Intelligence 1978 
Volume 2, ADA068839, 79.

JW Woods, Markov Image Modeling, ADA066078, Oct 78.

AY Wu & T Hong & A Rosenfeld, Threshold Selection Using Quadtrees, 
ADA090245, Mar 80.

VA Yakubovich, Machines That Can Learn to Recognize Patterns, 
AD618643, 63.

JK Yan & DJ Sakrison, Encoding of Images Based on a Two-Component 
Source Model, ADA051033, Nov 77.

Y Yasuoka & RM Haralick, Peak Noise Removal by a Facet Model, , 83.

C Yen, An Image Processing Software Package, ADA101072, Jun 81.

C Yen, On the Use of Fisher's Linear Discriminant for Image 
Segmentation, ADA091591, Nov 80.

R Yokoyama & RM Haralick, Texture Pattern Image Generation by Regular 
Markov Chain, , 79.

LA Zadeh, Theory of Fuzziness and Its Application to Information 
Processing and Decision-Making, ADA064598, Oct 76.

AL Zobrist and WB Thompson, Building a Distance Function for Gestalt 
Grouping, ADA015435, 75.

------------------------------

End of AIList Digest
********************

∂02-Sep-83  1043	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #53
Received: from SRI-AI by SU-AI with TCP/SMTP; 2 Sep 83  10:42:04 PDT
Date: Thursday, September 1, 1983 2:02PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #53
To: AIList@SRI-AI


AIList Digest             Friday, 2 Sep 1983       Volume 1 : Issue 53

Today's Topics:
  Conferences - AAAI-83 Attendance & Logic Programming,
  AI Publications - Artificial Intelligence Journal & Courseware,
  Artificial Languages - LOGLAN,
  Lisp Availability - PSL & T,
  Automatic Translation - Ada Request,
  NL & Scientific Method - Rebuttal,
  Intelligence - Definition
----------------------------------------------------------------------

Date: 31 Aug 83 0237 EDT
From: Dave.Touretzky@CMU-CS-A
Subject: AAAI-83 registration

The actual attendance at AAAI-83 was about 2000, plus an additional
1700 people who came only for the tutorials.  This gives a total of
3700.  While much less than the 7000 figure, it's quite a bit larger
than last year's attendance.  Interest in AI seems to be growing
rapidly, spurred partly by media coverage, partly by interest in
expert systems and partly by the 5th generation thing.  Another reason
for this year's high attendance was the Washington location.  We got
tons of government people.

Next year's AAAI conference will be hosted by the University of Texas
at Austin.  From a logistics standpoint, it's much easier to hold a
conference in a hotel than at a university.  Unfortunately, I'm told
there are no hotels in Austin big enough to hold us.  Such is the
price of growth.

-- Dave Touretzky, local arrangements committee member, AAAI-83 & 84

------------------------------
Date: Thu 1 Sep 83 09:15:17-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Logic Programming Symposium

This is a reminder that the September 1 deadline for submissions to
the IEEE Logic Programming Symposium, to be held in Atlantic City,
New Jersey, February 6-9, 1984, has now all but arrived.  If you are
planning to submit a paper, you are urged to do so without further
delay.  Send ten double-spaced copies to the Technical Chairman:

	Doug DeGroot, IBM Watson Research Center
	PO Box 218, Yorktown Heights, NY 10598

------------------------------

Date: Wed, 31 Aug 83 12:10 PDT
From: Bobrow.PA@PARC-MAXC.ARPA
Subject: Subscriptions to the Artificial Intelligence Journal

   Individuals (not institutions) belonging to the AAAI, to SIGART, or
to AISB can receive a reduced-rate personal subscription to the
Artificial Intelligence Journal.  To apply for a subscription, send a
copy of your membership form with a check for $50 (made out to
Elsevier) to:
        Elsevier Science Publishers
        Attn: John Tagler
        52 Vanderbilt Avenue
        New York, New York 10017
North Holland (Elsevier) will acknowledge receipt of the request for
subscription, and provide information about which issues will be
included in your subscription, and when they should arrive.  Back
issues are not available at the personal rate.

Artificial Intelligence, an International journal, has been the
journal of record for the field of Artificial Intelligence since
1970.  Articles for submission should be sent (three copies) to Dr.
Daniel G. Bobrow, Editor-in-chief, Xerox Palo Alto Research Center,
3333 Coyote Hill Road, Palo Alto, California 94304, or to Prof.
Patrick J. Hayes, Associate Editor, Computer Science Department,
University of Rochester, Rochester N.Y. 14627.


danny bobrow

------------------------------

Date: 31 Aug 1983 17:10:40 EDT (Wednesday)
From: Marshall Abrams <abrams at mitre>
Subject: College-level courseware publishing

I have learned that Addison-Wesley is setting up a new
courseware/software operation and are looking for microcomputer
software packages at the college level.  I think the idea is for a
student to be able to go to the bookstore and buy a disk and
instruction manual for a specific course.

Further details on request.

------------------------------

Date: 29 Aug 1983 2154-PDT
From: VANBUER@USC-ECL
Subject: Re: LOGLAN

[...]

The Loglan Institute is in the middle of a year-long "quiet spell."
After several years of experiments with sounds and of patching various
small logical details (e.g., providing unambiguous ways to say each of
the two interpretations of "pretty little girls"), the Institute is
busily preparing materials on the new version and getting ready to "go
public" again in a fairly big way.
        Darrel J. Van Buer

------------------------------

Date: 30 Aug 1983 0719-MDT
From: Robert R. Kessler <KESSLER@UTAH-20>
Subject: re: Lisps on 68000's


     Date: 24 Aug 83 19:47:17-PDT (Wed)
     From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
     Subject: Re: Lisps on 68000's - (nf)
     Article-I.D.: uiucdcs.2626

     ....

     I think PSL is definitely a superior lisp for the 68000, but I
     have no idea whether it will be available for non-HP machines...


     Jordan Pollack
     University of Illinois
     ...pur-ee!uiucdcs!uicsl!pollack

Yes, PSL is available for other 68000's, particularly the Apollo.  It
is also being released for the DecSystem-20 and Vax running 4.x Unix.
Send queries to

Cruse@Utah-20

Bob.

------------------------------

Date: Tue, 30 Aug 1983  14:32 EDT
From: MONTALVO@MIT-OZ
Subject: Lisps on 68000's


    From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
    Subject: Re: Lisps on 68000's - (nf)
    Article-I.D.: uiucdcs.2626

    I played with a version of PSL on an HP 9845 for several hours one
    day.  The environment was just like running FranzLisp under Emacs
    in ...

A minor correction so people don't get confused:  it was probably an 
HP 9836 not an HP 9845.  I've used both machines including PSL on the 
36, and doubt very much that PSL runs on a 45.

------------------------------

Date: Wed, 31 Aug 83 01:25:29 EDT
From: Jonathan Rees <Rees@YALE.ARPA>
Subject: Re: Lisps on 68000's


    Date: 19 Aug 83 10:52:11-PDT (Fri)
    From: harpo!eagle!allegra!jdd @ Ucb-Vax
    Subject: Lisps on 68000's
    Article-I.D.: allegra.1760

    ...  T sounds good, but the people who are saying it's
    great are the same ones trying to sell it to me for several
    thousand dollars, so I'd like to get some more disinterested
    opinions first.  The only person I've talked to said it was
    awful, but he admits he used an early version.

T is distributed by Yale for $75 to universities and other non-profit 
organizations.

Yale has not yet decided on the means by which it will distribute T to
for-profit institutions, but it has been negotiating with a few 
companies, including Cognitive Systems, Inc.  To my knowledge no final
agreements have been signed, so right now, no one can sell it.

"Supported" versions will be available from commercial outfits who are
willing to take on the extra responsibility (and reap the profits?),
but unsupported versions will presumably still be available directly
from Yale.

Regardless of the final outcome, no company or companies will have 
exclusive marketing rights.  We do not want a high price tag to
inhibit availability.

                        Jonathan Rees
                        T Project
                        Yale Computer Science Dept.

P.S. As a regular T user, I can say that it is a good system.  As its 
principal implementor, I won't claim to be disinterested.
Testimonials from satisfied users may be found in previous AILIST
digests; perhaps you can obtain back issues.

------------------------------

Date: 1 Sep 1983 11:58-EDT
From: Dan Hoey <hoey@NRL-AIC>
Subject: Translation into Ada:  Request for Info

It is estimated that the WWMCCS communications system will require
five years to translate into Ada.  Not man-years, but years; if the
staffing is assumed to exceed two hundred, then we are talking about a
man-millennium for this task.
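
The arithmetic behind that figure is worth spelling out; both numbers
below are the poster's own assumptions, not measured data:

```python
# Back-of-the-envelope check of the WWMCCS-to-Ada effort estimate.
# Both inputs are the poster's assumptions, not measured figures.
years = 5    # estimated calendar time for the translation
staff = 200  # assumed average staffing level

person_years = years * staff
person_millennia = person_years / 1000

print(person_years)      # 1000 person-years
print(person_millennia)  # 1.0 -- i.e., one "man-millennium"
```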

Has any work been done on mechanical aids for translating programs
into Ada?  I seek pointers to existing and past projects, or
assurances that no work has been done in this area.  Any pointers to
such information would be greatly appreciated.

To illustrate my lack of knowledge in this field, the only work I have
heard of for translating from one high-level language to another is 
UniLogic's translator for converting BLISS to PL/1.  As I understand 
it, their program only works on the Scribe document formatter but
could be extended to cover other programs.  I am interested in hearing
of other translators, especially those for translating into
strongly-typed languages.

Dan Hoey HOEY@NRL-AIC.ARPA

------------------------------

Date: Wed 31 Aug 83 18:42:08-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Solutions of the natural language analysis problem

Given the downhill trend of some contributions on natural language 
analysis in this group, this is my last comment on the topic, and is
essentially an answer to Stan the leprechaun hacker (STLH for short).

I didn't "admit" that grammars only reflect some aspects of language.
(Using loaded verbs such as "admit" is not conducive to the best 
quality of discussion.)  I just STATED THE OBVIOUS. The equations of 
motion only reflect SOME aspects of the material world, and yet no 
engineer goes without them. I presented this point at greater length 
in my earlier note, but the substantive presentation of method seems 
to have gone unanswered. Incidentally, I worked for several years in a
civil engineering laboratory where ACTUAL dams and bridges were 
designed, and I never saw there the preference for alchemy over 
chemistry that STLH suggests is the necessary result of practical 
concerns. Elegance and reproduciblity do not seem to be enemies of 
generality in other scientific or engineering disciplines.  Claiming 
for AI an immunity from normal scientific standards (however flawed 
...) is excellent support for our many detractors, who may just now be
on the deffensive because of media hype, but will surely come back to 
the fray, with that weapon plus a long list of unfulfilled promises 
and irreproducible "results."

Lack of rigor follows from lack of method. STLH tries to bludgeon us
with "generating *all* the possible meanings" of a sentence.  Does he
mean ALL of the INFINITY of meanings a sentence has in general? Even
leaving aside model-theoretic considerations, we are all familiar with

        he wanted me to believe P so he said P
        he wanted me to believe not P so he said P because he thought
           that I would think that he said P just for me to believe P
           and not believe it
        and so on ...

in spy stories.

The observation that "we need something that models human cognition 
closely enough..." begs the question of what human cognition looks 
like. (Silly me, it looks like STLH's program, of course.)  STLH also 
forgets that it is often better for a conversation partner (whether man 
or machine) to say "I don't understand" than to go on saying "yes, 
yes, yes ..." and get it all wrong, as people (and machines) that are 
trying to disguise their ignorance do.

It is indeed not surprising that "[his] problems are really concerned 
with the acquisition of linguistic knowledge." Once every grammatical 
framework is thrown out, it is extremely difficult to see how new 
linguistic knowledge can be assimilated, whether automatically or even
by programming it in. As to the notion that "everyone is an expert on 
the native language", it is similar to the claim that everyone with 
working ears is an expert in acoustics.

As to "pernicious behavior", it would be better if STLH would first 
put his own house in order: he seems to believe that to work at SRI 
one needs to swear eternal hate to the "Schank camp" (whatever that 
is); and useful criticism of other people's papers requires at least a
mention of the title and of the objections. A bit of that old battered
scientific protocol would help...

Fernando Pereira

------------------------------

Date: Tue, 30 Aug 1983  15:57 EDT
From: MONTALVO@MIT-OZ
Subject: intelligence is...

    Date: 25 Aug 1983 1448-PDT
    To: AIList at MIT-MC
    From: Jay <JAY@USC-ECLC>
    Subject: intelligence is...

      An intelligence must have at least three abilities; To act; To
    perceive, and classify (as one of: better, the same, worse) the
    results of its actions, or the environment after the action; and
    lastly To change its future actions in light of what it has
    perceived, in an attempt to maximize "goodness", and avoid "badness".
    My views are very obviously flavored by behaviorism.

Where do you suppose the evolutionary cutoff is for intelligence?  By
this definition a Planaria (flatworm) is intelligent.  It can learn a
simple Y maze.

I basically like this definition of intelligence but I think the 
learning part lends itself to many degrees of complexity, and 
therefore, the definition leads to many degrees of intelligence.  
Maybe that's ok.  I would like to see an analysis (probably NOT on 
AIList, although maybe some short speculation might be appropriate) 
of the levels of complexity that a learner could have.  For example, 
one with a representation of the agent's action would be more 
complicated (therefore, more intelligent) than one without.  Probably 
a Planaria has no representation of its actions, only of the results 
of its actions.
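Jay's three abilities (act, classify the outcome, adapt) can be made concrete in a toy sketch. This is purely illustrative, in a modern notation; every name in it is invented and nothing here comes from the discussion above:

```python
import random

class BehavioristAgent:
    """Toy learner with the three abilities from Jay's definition:
    it acts, it classifies the result of an action as better/same/
    worse, and it changes future actions to seek "goodness"."""

    def __init__(self, actions):
        # Start with no preference among the available actions.
        self.weights = {a: 1.0 for a in actions}

    def act(self):
        # Choose an action with probability proportional to its weight.
        actions = list(self.weights)
        r = random.uniform(0, sum(self.weights.values()))
        for a in actions:
            r -= self.weights[a]
            if r <= 0:
                return a
        return actions[-1]

    def learn(self, action, outcome):
        # Reinforce or suppress the action based on the classification.
        if outcome == "better":
            self.weights[action] *= 1.5
        elif outcome == "worse":
            self.weights[action] *= 0.5
        # "same" leaves the weights untouched.
```

Note that a Planaria-level learner in this sketch represents nothing about its actions beyond the weight table; the "more intelligent" learner suggested above would add a model of what each action does.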

------------------------------

End of AIList Digest
********************

∂09-Sep-83  1317	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #54
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Sep 83  13:16:53 PDT
Date: Friday, September 9, 1983 9:02AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #54
To: AIList@SRI-AI


AIList Digest             Friday, 9 Sep 1983       Volume 1 : Issue 54

Today's Topics:
  Robotics - Walking Robot,
  Fifth Generation - Book Review Discussion,
  Methodology - Rational Psychology,
  Lisp Availability - T,
  Prolog - Lisp Based Prolog, Foolog
----------------------------------------------------------------------

Date: Fri 2 Sep 83 19:24:59-PDT
From: John B. Nagle <NAGLE@SU-SCORE.ARPA>
Subject: Strong, agile robot

                 [Reprinted from the SCORE BBoard.]

     There is a nice article in the current Robotics Age about an
outfit down in Anaheim (not Disney) that has built a six-legged robot
with six legs spaced radially around a circular core.  Each leg has
three motors, and there are enough degrees of freedom in the system to
allow the robot to assume various postures such as a low, tucked one 
for tight spots; a tall one for looking around; and a wide one for
unstable surfaces.  As a demonstration, they had the robot climb into
the back of a pickup truck, climb out, and then lift up the truck by
the rear end and move the truck around by walking while lifting the
truck.

     It's not a heavy AI effort; this thing is a teleoperator
controlled by somebody with a joystick and some switches (although it
took considerable computer power to make it possible for one joystick
to control 18 motors in such a way that the robot can walk faster than
most people).  Still, it begins to look like walking machines are
finally getting to the point where they are good for something.  This
thing is about human sized and can lift 900 pounds; few people can do
that.

------------------------------

Date: 3 Sep 83 12:19:49-PDT (Sat)
From: harpo!eagle!mhuxt!mhuxh!mhuxr!mhuxv!akgua!emory!gatech!pwh@Ucb-Vax
Subject: Re: Fifth Generation (Book Review)
Article-I.D.: gatech.846

In response to Richard Treitel's comments about the Fifth Generation
book review recently posted:

        *This* turkey, for one, has not heard of the "Alvey report."
        Do tell...

I believe that part of your disagreement with the book reviewer stems
from the fact that you seem to be addressing different audiences: he,
a concerned but ignorant lay audience; you, the AI intelligentsia on
the net.

phil hutto


CSNET pwh@gatech
INTERNET pwh.gatech@udel-relay
UUCP ...!{allegra, sb1, ut-ngp, duke!mcnc!msdc}!gatech!pwh


p.s. - Please do elaborate on the Alvey Report. Sounds fascinating.

------------------------------

Date: Tue 6 Sep 83 14:24:28-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Re: Fifth Generation (Book Review)

Phil,

I wish I were in a position to elaborate on the Alvey Report.  Here's
all I know, as relayed by a friend of mine who is working back in
Britain:

As a response to either (i) the challenge/promise of the Information
Era or (ii) the announcement of a major Japanese effort to develop AI
systems, Mrs.  Thatcher's government commissioned a Commission,
chaired by some guy named Alvey about whom I don't know anything
(though I suspect he is an academic of some stature, else he wouldn't
have been given the job).  The mission of this Commission (or it may
have been a Committee) was to produce recommendations for national
policy, to be implemented probably by the Science and Engineering 
Research Council.  They found that while a few British universities
are doing quite good computer science, only one of them is doing AI
worth mentioning, namely Edinburgh, and even there, not too much of
it.  (The reason for this is that an earlier Government commissioned
another Report on AI, which was written by Professor Sir James
Lighthill, an academic of some stature.  Unfortunately he is a
mathematician specialising in fluid dynamics -- said to have designed 
Concorde's wings, or some such -- and he concluded that the only bit
of decent work that had been done in AI to date was Terry Winograd's
thesis (just out) and that the field showed very little promise.  As a
result of the Lighthill Report, AI was virtually a dirty word in
Britain for ten years.  Most people still think it means artificial
insemination.)  Alvey's group also found, what anyone could have told
the Government, that research on all sorts of advanced science and
technology was disgracefully stunted.  So they recommended that a few
hundred million pounds of state and industrial funds be pumped into 
research and education in AI, CS, and supporting fields.  This
happened about a year ago, and the Gov't basically bought the whole
thing, with the result that certain segments of the academic job
market over there went straight from famine to feast (the reverse
change will occur pretty soon, I doubt not).  It kind of remains to be
seen what industry will do, since we don't have a MITI.

I partly accept your criticism of my criticism of that review, but I
also believe that a journalist has an obligation not to publish
falsehoods, even if they are generally believed, and to do more than
re-hash the output of his colleagues into a form consistent with the
demands of the story he is "writing".

                                        - Richard

------------------------------

Date: Sat 3 Sep 83 13:28:36-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Rational Psychology

I've just read Jon Doyle's paper "Rational Psychology" in the latest 
AI Magazine. It's one of those papers you wish (I wish) you had 
written yourself. The paper shows implicitly what is wrong with many
of the arguments in discussions on intelligence and language analysis 
in this group. I am posting this as a starting shot in what I would 
like to be a rational discussion of methodology. Any takers?

Fernando Pereira

PS. I have been a long-time fan of Truesdell's rational mechanics and 
thermodynamics (being a victim of "black art" physics courses). Jon 
Doyle's emphasis on Truesdell's methodology is for me particularly 
welcome.


[The article in question is rather short, more of an inspirational
pep talk than a guide to the field.  Could someone submit one
"rational argument" or other exemplar of the approach?  Since I am
not familiar with the texts that Doyle cites, I am unable to discern
what he and Fernando would like us to discuss or how they would have
us go about it. -- KIL]

------------------------------

Date: 2 Sep 1983 11:26-PDT
From: Andy Cromarty <andy@aids-unix>
Subject: Availability of T


     Yale has not yet decided on the means by which it will distribute
     T to for-profit institutions, but it has been negotiating with a
     few companies, including Cognitive Systems, Inc.  To my knowledge
     no final agreements have been signed, so right now, no one can sell
     it.  ...We do not want a high price tag to inhibit availability.

        -- Jonathan Rees, T Project (REES@YALE) 31-Aug-83

About two days before you sent this to the digest, I received a
14-page T licensing agreement from Yale University's "Office of
Cooperative Research".

Prices ranged from $1K for an Apollo to $5K for a VAX 11/780 for
government contractors (e.g. us), with no software support or
technical assistance.  The agreement does not actually say that
sources are provided, although that is implied in several places. A
rather murky trade secret clause was included in the contract.

It thus appears that T is already being marketed.  These cost figures,
however, are approaching Scribe territory.  Considering (a) the cost
of $5K per VAX CPU, (b) the wide variety of alternative LISPs
available for the VAX, and (c) the relatively small base of existing T
(or Scheme) software, perhaps Yale does "want a high price tag to
inhibit availability" after all....
                                                        asc

------------------------------

Date: Thursday, 1 September 1983 12:14:59 EDT
From: Brad.Allen@CMU-RI-ISL1
Subject: Lisp Based Prolog

                 [Reprinted from the Prolog Digest.]

I would like to voice disagreement with Fernando Pereira's implication
that Lisp Based Prologs are good only for pedagogical purposes. The
flipside of efficiency is usability, and until there are Prolog
systems with exploratory programming environments which exhibit the
same features as, say, Interlisp-D or Symbolics machines, there will be
a place for Lisp Based Prologs which can use such features as, e.g., 
bitmap graphics and calls to packages in other languages.  Lisp Based
Prologs can fill the void between now and the point when software
accumulation in standard Prolog has caught up to that of Lisp ( if it
ever does ).

------------------------------

Date: Sat 3 Sep 83 10:51:22-PDT
From: Pereira@SRI-AI
Subject: Prolog in Lisp

                 [Reprinted from the Prolog Digest.]

Relying on ( inferior ) Prologs in Lisp is the best way of not 
contributing to Prolog software accumulation. The large number of 
tools that have been built at Edinburgh show the advantages for the 
whole Prolog community of sites 100% committed to building everything 
in Prolog.  By far the best debugging environment for Prolog programs 
in use today is the one on the DEC-10/20 system, and that is written 
entirely in Prolog. Its operation is very different from, and much
superior for Prolog purposes to, all Prolog debuggers built on top of Lisp 
debuggers that I have seen to date. Furthermore, integrating things 
like screen management into a Prolog environment in a graceful way is 
a challenging problem ( think of how long it took until flavors came 
up as the way of building the graphics facilities on the MIT Lisp 
machines ), which will also advance our understanding of computer 
graphics ( I have written a paper on the subject, "Can drawing be 
liberated from the von Neumann style?" ).

I am not saying that Prologs in Lisp are not to be used ( I use one 
myself on the Symbolics Lisp machines ), but that a large number of 
conceptual and language advances will be lost if we don't try to see 
environmental tools in the light of logic programming.

-- Fernando Pereira

------------------------------

Date: Mon, 5 Sep 1983  03:39 EDT
From: Ken%MIT-OZ@MIT-MC
Subject: Foolog

                 [Reprinted from the Prolog Digest.]

In Pereira's introduction to Foolog [a misunderstanding; see the next
article -- KIL] and my toy interpreter he says:

     However, such simple interpreters ( even the
     Abelson and Sussman one which is far better than
     PiL ) are not a sufficient basis for the claim
     that "it is easy to extend Lisp to do what Prolog
     does." What Prolog "does" is not just to make
     certain deductions in a certain order, but also
     make them very fast. Unfortunately, all Prologs in
     Lisp I know of fail in this crucial aspect ( by
     factors between 30 and 1000 ).

I never claimed that my little interpreter was more than a toy.
Its primary value is pedagogical, in that it makes the operational
semantics of the pure part of Prolog clear.  Regarding Foolog, I
would defend it in that it is relatively complete:

-- it contains cut, bagof, call, etc., and for I/O and arithmetic his
primitive called "lisp" is adequate.  In the introduction he claims
that it's 75% of the speed of the DEC 10/20 Prolog interpreter.  If
that makes it a toy then all but 2 or 3 Prolog implementations are
non-toy.

[Comment: I agree with Fernando Pereira and Ken that there are lots
and again lots of horribly slow Prologs floating around. But I do not
think that it is impossible to write a fast one in Lisp, even on a
standard computer. One of the latest versions of the Foolog
interpreters is actually slightly faster than Dec-10 Prolog when
measuring LIPS.  The Foolog compiler I am working on compiled
naive-reverse to half the speed of compiled Dec-10 Prolog ( including
mode declarations ).  The compiler opencodes unification, optimizes
tail recursion and uses determinism, and the code fits in about three
pages ( all of it is in Prolog, of course ).  -- Martin Nilsson]

I tend to agree that too many claims are made for "one day wonders".
Just because I can implement most of Prolog in one day in Lisp
doesn't mean that the implementation is any good.  I know because I
started almost two years ago with a very tiny implementation of
Prolog in Lisp.  As I started to use it for serious applications it
grew to the point where today it's up to hundreds of pages of code (
the entire source code for the system comes to 230 Tops20 pages ).
The Prolog runs on Lisp Machines ( so we call it LM-Prolog ).  Mats
Carlsson here in Uppsala wrote a compiler for it and it is a serious
implementation.  It runs naive reverse of a 30-element list on a CADR in
less than 80 milliseconds (about 6250 Lips).  Lambdas and 3600s
typically run from 2 to 5 times faster than Cadrs so you can guess
how fast it'll run.
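The arithmetic behind such LIPS figures is easy to check: naive reverse of an n-element list performs (n + 1)(n + 2)/2 logical inferences, which is 496 for the standard 30-element benchmark, so 496 inferences in 80 milliseconds is about 6200 LIPS, consistent with the "about 6250" quoted above. A sketch of the count, in modern notation and purely illustrative:

```python
def nrev_inferences(n):
    """Logical inferences for the naive-reverse benchmark on an
    n-element list: one nrev call per element plus the base case,
    and k + 1 append calls to re-attach each reversed tail of
    length k."""
    nrev_calls = n + 1
    append_calls = sum(k + 1 for k in range(n))
    return nrev_calls + append_calls   # equals (n + 1) * (n + 2) // 2

inferences = nrev_inferences(30)   # 496 for the standard benchmark
lips = inferences / 0.080          # 496 inferences / 80 ms = 6200 LIPS
```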

Not only is LM-Prolog fast but it incorporates many important
innovations.  It exploits the very rich programming environment of
Lisp Machines.  The following is a short list of its features:

User-Extensible Interpreter
  Extensible unification, for implementing,
  e.g., parallelism and constraints

Optimizing Compiler
  Open compilation; tail recursion removal and
  automatic detection of determinacy; compiled
  unification with microcoded runtime support;
  efficient bi-directional interface to Lisp

Database Features
  User-controlled indexing; multiple
  databases (worlds)

Control Features
  Efficient conditionals; demand-driven
  computation of sets and bags

Access to Lisp Machine Features
  Full programming environment: Zwei editor,
  menus, windows, processes, networks,
  arithmetic (arbitrary precision, floating,
  rational and complex numbers), strings,
  arrays, I/O streams

Language Features
  Optional occur check; handling of cyclic
  structures; arbitrary arity

Compatibility Package
  Automatic translation from DEC-10 Prolog
  to LM-Prolog

Performance
  Compiled code: up to 6250 LIPS on a CADR
  Interpreted code: up to 500 LIPS

Availability
  LM-Prolog currently runs on LMI CADRs
  and Symbolics LM-2s.  Soon to run on
  Lambdas.  Commercially available soon.
  For more information contact
  Kenneth M. Kahn or Mats Carlsson.

Inquiries can be directed to:

KEN@MIT-OZ   or

UPMAIL P. O. Box 2059
       S-75002
       Uppsala, Sweden

Phone  +46-18-111925

------------------------------

Date: Tue 6 Sep 83 15:22:25-PDT
From: Pereira@SRI-AI
Subject: Misunderstanding

                 [Reprinted from the PROLOG Digest.]

I'm sorry that my first note on Prologs in Lisp was construed as a 
comment on Foolog, which appeared in the same Digest.  In fact, my 
note was sent to the digest BEFORE I knew Ken was submitting Foolog.  
Therefore, it was not a comment on Foolog.  As to LM-Prolog, I have a 
few comments about its speed:

1. It depends essentially on the use of Lisp machine subprimitives and
a microcoded unification, which are beyond Lisp the language and the 
Lisp environment in all but the MIT Lisp machines.  If LM-Prolog can 
be considered as "a Prolog in Lisp," then DEC-10/20 Prolog is a Prolog
in Prolog ...

2. To achieve that speed in determinate computation requires mapping 
Prolog procedure calls into Lisp function calls, which leaves 
backtracking in the lurch. The version of LM-Prolog I know of used 
stack group switches for backtracking, which is orders of magnitude 
slower than backtracking on the DEC-20 system.

3. Code compactness is sacrificed by compiling from Prolog into Lisp 
with open-coded unification. This is important because it makes worse 
the paging behavior of large programs.

There are a lot of other issues in estimating the "real" efficiency of
Prolog systems, such as GC requirements and exact TRO discipline.  For
example, using CONS space for runtime Prolog data structures is a 
common technique that seems adequate when testing with naive reverse 
of a 30-element list, but appears hopeless for programs that build 
structure and backtrack a lot, because CONS space is not stack 
allocated ( unless you use certain nonportable tricks, and even 
then... ), and therefore is not reclaimed on backtracking ( one might 
argue that Lisp programs for the same task have the same problem, but 
efficient backtracking is precisely one of the major advantages of 
good Prolog implementations ).
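The stack-versus-CONS point can be illustrated with a toy binding trail, the standard Prolog technique alluded to here (a sketch in modern notation, not code from any of the systems discussed): bindings made since a choice point are recorded on a trail and popped on failure, so backtracking reclaims them immediately instead of leaving garbage behind.

```python
class BindingEnv:
    """Toy trail-based binding store.  On backtracking, bindings made
    since the saved mark are undone at once, rather than being
    abandoned for a garbage collector to find later."""

    def __init__(self):
        self.bindings = {}   # variable -> value
        self.trail = []      # variables bound since the start

    def bind(self, var, value):
        self.trail.append(var)
        self.bindings[var] = value

    def mark(self):
        # Record a choice point: the current trail depth.
        return len(self.trail)

    def undo_to(self, mark):
        # Fail back to the choice point, unbinding as we go.
        while len(self.trail) > mark:
            del self.bindings[self.trail.pop()]
```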

The current Lisp machines have exciting environment tools from which 
Prolog users would like to benefit.  I think that building Prolog 
systems in Lisp will hit artificial performance and language barriers 
much before the actual limits of the hardware employed are reached.  
The approach I favor is to take the latest developments in Prolog 
implementation and use them to build Prolog systems that coexist with 
Lisp on those machines, but use all the hardware resources.  I think 
this is possible with a bit of cooperation from manufacturers, and I 
have reasons to hope this will happen soon, and produce Prolog systems
with a performance far superior to DEC-20 Prolog.

Ken's approach may produce a tolerable system in the short term, but I
don't think it can ever reach the performance and functionality which
I think the new machines can deliver.  Furthermore, there are big
differences between the requirements of experimental systems, with all
sorts of new goodies, and day-to-day systems that do the standard
things, but just much better.  Ken's approach risks producing a system
that falls between these (conflicting) goals, leading to a much larger
implementation effort than is needed just for experimenting with
language extensions ( most of the time better done in Prolog ) or just
for a practical system.

-- Fernando Pereira

PS:  For what it is worth, the source of DEC-20 Prolog is 177 pages of 
Prolog and 139 of Macro-10 (at 1 instruction per line...).  The system
comprises a full compiler, interpreter, debugger and run time system, 
not using anything external besides operating system I/O calls.  We
estimate it incorporates between 5 and 6 man years of effort.

According to Ken, LM-Prolog is 230 pages of Lisp and Prolog ...

------------------------------

End of AIList Digest
********************

∂09-Sep-83  1628	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #55
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Sep 83  16:27:56 PDT
Date: Friday, September 9, 1983 12:29PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #55
To: AIList@SRI-AI


AIList Digest           Saturday, 10 Sep 1983      Volume 1 : Issue 55

Today's Topics:
  Intelligence - Turing Test & Definitions,
  AI Environments - Computing Power & Social Systems
----------------------------------------------------------------------

Date: Saturday,  3 Sep 1983 13:57-PDT
From: bankes@rand-unix
Subject: Turing Tests and Definitions of Intelligence


As much as I dislike adding one more opinion to an overworked topic, I
feel compelled to make a comment on the ongoing discussion of the
Turing test.  It seems to me quite clear that the Turing test serves
as a tool for philosophical argument and not as a defining criterion.
It serves the purpose of enlightening those who would assert the
impossibility of any machine ever being intelligent.  The point is, if
a machine which would pass the test could be produced, then a person
would have either to admit it to be intelligent or else accept that
his definition of intelligence is something which cannot be perceived
or tested.

However, when the Turing test is used as a tool with which to think
about "What is intelligence?" it leads primarily to insights into the
psychology and politics of what people will accept as intelligent.
(This is a consequence of the democratic definition - it's intelligent
if everybody agrees it is.)  Hence, we get all sorts of distractions:
Must an intelligent machine make mistakes, should an intelligent
machine have emotions, and most recently would an intelligent machine
be prejudiced?  All of this deals with a sociological viewpoint on
what is intelligent, and gets us no closer to a fundamental
understanding of the phenomenon.

Intelligence is an old word, like virtue and honor.  It may well be
that the progress of our understanding will make it obsolete, the word
may come to suggest the illusions of an earlier time.  Certainly, it
is much more complex than our language patterns allow.  The Turing
test suggests it to be a boolean: you've got it or you don't.  We
commonly use "smart" as a relational: you're smarter than me, but we're
both smarter than Rover.  This suggests intelligence is a scalar,
hence IQ tests.  But recent experience with IQ testing across cultures
together with the data from comparative psychology, would suggest that
intelligence is at least multi-dimensional.  Burrowing animals on the
whole do better at mazes than others.  Animals whose primary defense
is flight respond differently to aversive conditioning than do more
aggressive species.

We may have seen a recapitulation of this in the last twenty years'
experience with AI.  We have moved from looking for the philosopher's
stone, the single thing needed to make something intelligent, to
knowledge based systems.  No one would reasonably discuss (I think)
whether my program is smarter than yours.  But we might be able to say
that mine knows more about medicine than yours or that mine has more
capacity for discovering new relations of a specified type.

Thus I would suggest that the word intelligence (noun that it is,
suggesting a thing which might somehow be gotten hold of) should be
used with caution.  And that the Turing test, as influential as it has
been, may have outlived its usefulness, at least for discussions among
the faithful.


                                -Steve Bankes
                                 RAND

------------------------------

Date: Sat, 3 Sep 83 17:07:33 EDT
From: "John B. Black" <Black@YALE.ARPA>
Subject: Learning Complexity

     There was recently a query on AIList about how to characterize
learning complexity (and saying that may be the crucial issue in
intelligence).  Actually, I have been thinking about this recently, so
I thought I would comment.  One way to characterize the learning
complexity of procedural skills is in terms of what kind of production
system is needed to perform the skill.  For example, the kind of
things a slug or crayfish (currently popular species in biopsychology)
can do seem characterizable by production systems with minimal
internal memory, conditions that are simple external states of the
world, and actions that are direct physical actions (this is
stimulus-response psychology in a nutshell).  However, human skills
(programming computers, doing geometry, etc.) need much more complex
production systems with complex networks as internal memories,
conditions that include variables, and actions that are mental in
addition to direct physical actions.  Of course, what form productions
would have to be to exhibit human-level intelligence (if indeed, they
can) is an open question and a very active field of research.
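The contrast drawn above can be sketched with a minimal production system (an illustrative toy, not a model from the literature): each rule pairs a condition on the state with an action that rewrites it, and the stimulus-response case is simply the one where the state holds nothing but the external world.

```python
def run_production_system(rules, state, limit=100):
    """Repeatedly fire the first rule whose condition matches, until
    no rule fires (quiescence) or the cycle limit is reached.  Each
    rule is a (condition, action) pair of functions on the state."""
    for _ in range(limit):
        for condition, action in rules:
            if condition(state):
                state = action(state)
                break
        else:
            break  # quiescence: no condition matched
    return state

# A slug-level, stimulus-response rule set: no internal memory or
# variables, just a condition on the (external) state and a direct
# action.  Human-level skills would need variables in conditions and
# mental actions on an internal memory as well.
rules = [(lambda s: s["n"] < 3, lambda s: {**s, "n": s["n"] + 1})]
final = run_production_system(rules, {"n": 0})   # quiesces at n = 3
```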

------------------------------

Date: 5 Sep 83 09:42:44 PDT (Mon)
From: woodson%UCBERNIE@Berkeley (Chas Woodson)
Subject: AI and computing power

Can you direct me to some wise comments on the following question?
Is the progress of AI being held up by lack of computing power?


[Reply follows. -- KIL]

There was a discussion of this on Human-Nets a year ago.
I am reprinting some of the discussion below.

My own feeling is that we are not being held back.  If we had
infinite compute power tomorrow, we would not know how to use it.
Others take the opposite view: that intelligence may be brute force
search, massive theorem proving, or large rule bases and that we are
shying away from the true solutions because we want a quick finesse.
There is also a view that some problems (e.g. vision) may require
parallel solutions, as opposed to parallel speedup of iterative
solutions.

The AI principal investigators seem to feel (see the Fall AI Magazine)
that it would be enough if each AI investigator had a Lisp Machine
or equivalent funding.  I would extend that a little further.  I think
that the biggest bottleneck right now is the lack of support staff --
systems wizards, apprentice programmers, program librarians, software
editors (i.e., people who edit other people's code), evaluators,
integrators, documenters, etc.  Could Lucas have made Star Wars
without a team of subordinate experts?  We need to free our AI
gurus from the day-to-day trivia of coding and system building just
as we use secretaries and office machines to free our management
personnel from administrative trivia.  We need to move AI from the
lone inventor stage to the industrial laboratory stage.  This is a
matter of social systems rather than hardware.

                                        -- Ken Laws

------------------------------

Date: Tuesday, 12 October 1982  13:50-EDT
From: AGRE at MIT-MC
Subject: artificial intelligence and computer architecture

   [Reprinted from HUMAN-NETS Digest, 16 Oct 1982, Vol. 5, No. 96]

A couple of observations on the theory that AI is being held back by
the sorry state of computer architecture.

First, there are three projects that I know of in this country that
are explicitly trying to deal with the problem.  They are Danny
Hillis' Connection Machine project at MIT, Scott Fahlman's NETL
machine at CMU, and the NON-VON project at Columbia (I can't
remember who's doing that one right offhand).

Second, the associative memory fad came and went very many years
ago.  The problem, simply put, is that human memory is a more
complicated place than even the hairiest associative memory chip.
The projects I have just mentioned were all first meant as much more
sophisticated approaches to "memory architectures", though they have
become more than that since.

Third, it is quite important to distinguish between computer
architectures and computational concepts.  The former will always
lag ten years behind the latter.  In fact, although our computer
architectures are just now beginning to pull convincingly out of the
von Neumann trap, the virtual machines that our computer languages
run on haven't been in the von Neumann style for a long time.  Think
of object-oriented programming or semantic network models or
constraint languages or "streams" or "actors" or "simulation" ideas
as old as Simula and VDL.  True these are implemented on serial
machines, but they evoke conceptions of computation closer to
our ideas about how the physical world works, with notions of causal
locality and data flow and asynchronous communication quite
analogous to those of physics; one uses these languages properly not
by thinking of serial computers but by thinking in these more
general terms.  These are the stuff of everyday programming, at
least among the avant garde in the AI labs.

None of this is to say that AI's salvation isn't in computer
architecture.  But it is to say that the process of freeing
ourselves from the technology of the 40's is well under way.
(Yes, I know, hubris.)   - phiL

------------------------------

Date: 13 Oct 1982 08:34 PDT
From: DMRussell at PARC-MAXC
Subject: AI and alternative architectures

   [Reprinted from HUMAN-NETS Digest, 16 Oct 1982, Vol. 5, No. 96]

There is a whole subfield of AI growing up around parallel
processing models of computation.  It is characterized by the use of
massive compute engines (or models thereof) and a corresponding
disregard for efficiency concerns.  (Why not, when you've got n↑n
processors?)

"Parallel AI" is a result of a crossing of interests from neural
modelling,  parallel systems theory, and straightforward AI.
Currently, the most interesting work has been done in vision --
where the transformation from pixel data to more abstract
representations (e.g. edges, surfaces or 2.5-D data) via parallel
processing is pretty easy. There has been rather less success in
other, not-so-obviously parallel, fields.

Some work that is being done:

Jerry Feldman & Dana Ballard (University of Rochester)
        -- neural modelling, vision
Steve Small, Gary Cottrell, Lokendra Shastri (University of Rochester)
        -- parallel word sense and sentence parsing
Scott Fahlman (CMU) -- knowledge rep in a parallel world
??? (CMU) -- distributed sensor net people
Geoff Hinton (UC San Diego?) -- vision
Daniel Sabbah (IBM) -- vision
Rumelhart (UC San Diego) -- motor control
Carl Hewitt, Bill Kornfeld (MIT) -- problem solving

(not a complete list -- just a hint)

The major concern of these people has been controlling the parallel
beasts they've created.  Basically, each of the systems accepts data
at one end, and then munges the data and various hypotheses about
the data until the entire system settles down to a single
interpretation.  It is all very messy, and incredibly difficult to
prove anything.  (e.g. Under what conditions will this system
converge?)

The obvious question is this: What does all of this alternative
architecture business buy you?  So far, I think it's an open
question.  Suggestions?

-- DMR --

------------------------------

Date: 13 Oct 1982 1120-PDT
From: LAWS at SRI-AI
Subject: [LAWS at SRI-AI: AI Architecture]


   [Reprinted from HUMAN-NETS Digest, 16 Oct 1982, Vol. 5, No. 96]

In response to Glasser @LLL-MFE:

I doubt that new classes of computer architecture will be the
solution to building artificial intelligence.  Certainly we could
use more powerful CPUs, and the new generation of LISP machines makes
practical approaches that were merely feasibility demonstrations
before.  The fact remains that if we don't have the algorithms for
doing something with current hardware, we still won't be able to do
it with faster or more powerful hardware.

Associative memories have been built in both hardware and software.
See, for example, the LEAP language that was incorporated into the
SAIL language.  (MAINSAIL, an impressive offspring of SAIL, has
abandoned this approach in favor of subroutines for hash table
maintenance.)  Hardware is also being built for data flow languages,
applicative languages, parallel processing, etc.  To some extent
these efforts change our way of thinking about problems, but for the
most part they only speed up what we knew how to do already.
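
The associative-triple idea behind LEAP can be sketched in a few lines
of code; this is an illustration of the concept only, and the class
and method names are hypothetical rather than LEAP's or MAINSAIL's
actual interfaces:

```python
# Sketch of a LEAP-style associative store: (attribute, object, value)
# triples retrievable by any partial pattern.  All names here are
# hypothetical; this mirrors the idea, not either language's syntax.

class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, attribute, obj, value):
        self.triples.add((attribute, obj, value))

    def match(self, attribute=None, obj=None, value=None):
        """All triples consistent with the pattern; None is a wildcard."""
        return [t for t in self.triples
                if (attribute is None or t[0] == attribute)
                and (obj is None or t[1] == obj)
                and (value is None or t[2] == value)]

store = TripleStore()
store.add("color", "block1", "red")
store.add("color", "block2", "blue")
store.add("on", "block1", "table")
print(store.match(attribute="color"))  # the two color triples
print(store.match(obj="block1"))       # everything known about block1
```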

For further speculation about what we would do with "massively
parallel architectures" if we ever got them, I suggest the recent
papers by Dana Ballard and Geoffrey Hinton, e.g. in the Aug. ['82]
AAAI conference proceedings [...].  My own belief is that the "missing
link" to AI is a lot of deep thought and hard work, followed by VLSI
implementation of algorithms that have (probably) been tested using
conventional software running on conventional architectures.  To be
more specific we would have to choose a particular domain since
different areas of AI require different solutions.

Much recent work has focused on the representation of knowledge in
various domains: representation is a prerequisite to acquisition and
manipulation.  Dr. Lenat has done some very interesting work on a
program that modifies its own representations as it analyzes its own
behavior.  There are other examples of programs that learn from
experience.  If we can master knowledge representation and learning,
we can begin to get away from programming by full analysis of every
part of every algorithm needed for every task in a domain.  That
would speed up our progress more than new architectures.

[...]

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************

∂09-Sep-83  1728	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #56
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Sep 83  17:28:17 PDT
Date: Friday, September 9, 1983 3:36PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #56
To: AIList@SRI-AI


AIList Digest           Saturday, 10 Sep 1983      Volume 1 : Issue 56

Today's Topics:
  Professional Activities - JACM Referees & Inst. for Retraining in CS,
  Artificial Languages - Loglan,
  Knowledge Representation - Multiple Inheritance Query,
  Games - Puzzle & Go Tournament
----------------------------------------------------------------------

Date: 8 Sep 83 10:33:25 EDT
From: Sri <Sridharan@RUTGERS.ARPA>
Subject: referees for JACM (AI area)

Since the time I became the AI Area Editor for the JACM, I have found 
myself handicapped for lack of a current roster of referees.  This 
note is to ask you to volunteer to referee papers for the journal.

JACM is the major outlet for theoretical papers in computer science.  
In the area of AI most of the submissions in the past have ranged over
the topics of Automated Reasoning (Theorem Proving, Deduction, 
Induction, Default) and Automated Search (Search methods, state-space 
algorithms, And/Or reduction searches, analysis of efficiency and 
error and attendant tradeoffs).  Under my editorship I would like to 
broaden the scope to THEORETICAL papers in all areas of AI, including 
Knowledge Representation, Learning, Modeling (Space, Time, Causality),
Problem Formulation & Reformulation etc.

If you are willing to be on the roster of referees, please send me a 
note with your name, mailing address, net-address and telephone 
number.  Please also list your areas of interest and competence.

If you wish to submit a paper please follow the procedures described 
in the "instructions to authors" page of the journal.  Copies of mss 
can be sent to either me or to the Editor-in-Chief.

N.S. Sridharan [Sridharan@Rutgers] Area Editor, AI JACM

------------------------------

Date: Wed, 7 Sep 83 16:06 PDT
From: Jeff Ullman <ullman@Diablo>
Subject: Institute for Retraining in CS

                [Reprinted from the SU-SCORE BBoard.]

A summer institute for retraining college faculty to teach computer
science is being held at Clarkson College, Potsdam, NY, this summer,
under the auspices of a joint ACM/MAA committee.  They need lecturers
in all areas of computer science, to deliver 1-month courses.  People
at or close to the Ph.D. level are needed.  If interested, contact Ed
Dubinsky at 315-268-2382 (office) 315-265-2906 (home).

------------------------------

Date: 6 Sep 83 18:15:17-PDT (Tue)
From: harpo!gummo!whuxlb!pyuxll!abnjh!icu0 @ Ucb-Vax
Subject: Re: Loglan
Article-I.D.: abnjh.236

[Directed to Pourne@MIT-MC]


1. Rumor has it that SOMEONE at the Univ. of Washington (State of, NOT
D.C.)  was working on the [LOGLAN] grammar online (UN*X, as I recall).
I haven't yet had the temerity to post a general inquiry regarding
their locale. If they read your request and respond, please POST
it...some of us out here are also interested.

2. A friend of mine at Ohio State has typed in (by hand!) the glossary
from Vol 1 (the layman's grammar) which could be useful for writing a
"flashcard" program, but both of us are too busy.

                         Art Wieners
                         (who will only be at this addr for this week,
                          but keep your modems open for a resurfacing
                          at da Labs...)

------------------------------

Date: 7 Sep 83 16:43:58-PDT (Wed)
From: decvax!genrad!grkermit!chris @ Ucb-Vax
Subject: Re: Loglan 
Article-I.D.: grkermit.654

I just posted something relevant to net.nlang.  (I'm not sure which is
more appropriate, but I'm going to assume that "natural" language is
closer than all of Artificial Intelligence.)

I sent a request for information to the Loglan Institute, (Route 10,
Box 260 Gainesville, FL 32601 [a NEW address]) and they are just about
to go splashily public again.  I posted the first page of their reply
letter, see net.nlang for more details.  Later postings will cover
their short description of their Interactive Parser, which is among
their many new or improved offerings.

decvax!genrad!grkermit!chris
allegra!linus!genrad!grkermit!chris 
harpo!eagle!mit-vax!grkermit!chris

------------------------------

Date: 2-Sep-83 19:33 PDT
From: Kirk Kelley  <KIRK.TYM@OFFICE-2>
Subject: Multiple Inheritance query

Can you tell me where I can find a discussion of the anatomy and value
of multiple inheritance?  I wonder if it is worth adding this feature
to the design for a lay-person's language, called Players, for
specifying adventures.

 -- kirk

------------------------------

Date: 24 August 1983 1536-PDT (Wednesday)
From: Foonberg at AEROSPACE (Alan Foonberg)
Subject: Another Puzzle

                 [Reprinted from the Prolog Digest.]

I was glancing at an old copy of Games magazine and came across the 
following puzzle:

Can you find a ten digit number such that its left-most digit tells 
how many zeroes there are in the number, its second digit tells how 
many ones there are, etc.?

For example, 6210001000.  There are 6 zeroes, 2 ones, 1 two, no 
threes, etc. I'd be interested to see any efficient solutions to this
fairly simple problem. Can you derive all such numbers, not only
ten-digit numbers?  Feel free to make your own extensions to this
problem.
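
For readers who want to check their answers, here is one brute-force
sketch (in Python; function names are my own) that exploits the fact
that the digits of any ten-digit solution must sum to 10, since each
digit counts the occurrences of one value:

```python
# Brute-force search for ten-digit self-descriptive numbers.
# Key pruning fact: the digits of any solution must sum to 10,
# since each digit counts occurrences of one value.

def is_self_descriptive(digits):
    return all(digits.count(d) == digits[d] for d in range(len(digits)))

def solutions(n=10):
    """All n-digit base-10 candidates whose digits sum to n and
    that correctly describe their own digit counts."""
    results = []
    def extend(prefix, remaining):
        if len(prefix) == n:
            if remaining == 0 and is_self_descriptive(prefix):
                results.append("".join(map(str, prefix)))
            return
        for d in range(min(remaining, 9) + 1):
            prefix.append(d)
            extend(prefix, remaining - d)
            prefix.pop()
    extend([], n)
    return results

print(solutions())  # ['6210001000'] -- the given example is the only one
```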

Alan

------------------------------

Date: 5 Sep 83 20:11:04-PDT (Mon)
From: harpo!psl @ Ucb-Vax
Subject: Go Tournament
Article-I.D.: harpo.1840


                          ANNOUNCING
                        The First Ever
                            USENIX
                           COMPUTER

                         #####  #######
                        #     # #     #
                        #       #     #
                        #  #### #     #
                        #     # #     #
                        #     # #     #
                         #####  #######

##### ####  #    # #####  #    #   ##   #    # ###### #    # #####
  #  #    # #    # #    # ##   #  #  #  ##  ## #      ##   #   #
  #  #    # #    # #    # # #  # #    # # ## # #####  # #  #   #
  #  #    # #    # #####  #  # # ###### #    # #      #  # #   #
  #  #    # #    # #   #  #   ## #    # #    # #      #   ##   #
  #   ####   ####  #    # #    # #    # #    # ###### #    #   #


              A B C D E F G H J K L M N O P Q R S T

          19  + + + + + + + + + + + + + + + + + + +  19
          18  + + + + + + + + + + + + + + + + + + +  18
          17  + + + O @ + + + + + + + + + + + + + +  17
          16  + + + O + + + O + @ + + + + + @ + + +  16
          15  + + + + + + + + + + + + + + + + + + +  15
          14  + + O O + + + O + @ + + + + + + + + +  14
          13  + + @ + + + + + + + + + + + + + + + +  13
          12  + + + + + + + + + + + + + + + + + + +  12
          11  + + + + + + + + + + + + + + + + + + +  11
          10  + + + + + + + + + + + + + + + + + + +  10
           9  + + + + + + + + + + + + + + + + + + +  9
           8  + + + + + + + + + + + + + O O O O @ +  8
           7  + + O @ + + + + + + + + + O @ @ @ @ @  7
           6  + + @ O O + + + + + + + + + O O O @ +  6
           5  + + O + + + + + + + + + + + + O @ @ +  5
           4  + + + O + + + + + + + + + + + O @ + +  4
           3  + + @ @ + @ + + + + + + + + @ @ O @ +  3
           2  + + + + + + + + + + + + + + + + + + +  2
           1  + + + + + + + + + + + + + + + + + + +  1

              A B C D E F G H J K L M N O P Q R S T


To be held during the Summer 1984 Usenix conference in Salt Lake
City, Utah.


Probable Rules
-------- -----

1)  The board will be 19 x 19.
This size was chosen rather than one of the smaller boards because
there is a great deal of accumulated Go "wisdom" that would be
worthless on smaller boards.

2) The board positions will be numbered as in the diagram above.  The
columns will be labeled 'A' through 'T' (excluding 'I') left to
right.  The rows will be labeled '19' through '1', top to bottom.

3) Play will continue until both programs pass in sequence.  This may
be a trouble spot, but looks like the best approach available.
Several alternatives were considered: (1) have the referee decide
when the game is over by identifying "uncontested" versus "contested"
area; (2) limit the game to a certain number of moves.  All of them
had one or another unreasonable effect.

4) There will be a time limit for each program.  This will be in the
form of a limit on accumulated "user" time (60 minutes?).  If a
program goes over the time limit it will be allowed some minimum
amount of time for each move (15 seconds?).  If no move is generated
within the minimum time the game is forfeit.

5) The tournament will use a "referee" program to execute each
competing pair of programs; thus the programs must understand a
standard set of commands and generate output of a standard form.

    a) Input to the program.  All input commands to the program will
       be in the form of lines of text appearing on the standard
       input and terminated by a newline.
        1) The placement of a stone will be expressed as
           letter-number (e.g. "G7").  Note that the letter "I"
           is not included.
        2) A pass will be expressed as "pass".
        3) The command "time" means the time limit has been exceeded
           and all further moves must be generated within the shorter
           minimum time limit.
    b) Output from the program.  All output from the program will be
       in the form of lines of characters sent to the "standard
       output" (terminated by a newline) and had better be unbuffered.
        1) The placement of a stone will be expressed as
           letter-number, as in "G12".  Note that the letter "I"
           is not included.
        2) A pass will be expressed as "pass".
        3) Any other output lines will be considered garbage and
           ignored.
        4) Any syntactically correct but semantically illegal move
           (e.g. spot already occupied, ko violation, etc.) will be
           considered a forfeit.

The referee program will maintain a display of the board, the move
history, etc.
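
The move syntax in rule 5 is simple enough to pin down in code.  A
sketch of the parser a referee or competing program might use (in
Python, with hypothetical names; the actual tournament harness is of
course not specified here):

```python
# Parser sketch for the tournament's text protocol.  Column letters
# run A-T with 'I' omitted (rule 2); rows run 1-19.

COLUMNS = "ABCDEFGHJKLMNOPQRST"  # 19 letters, no 'I'

def parse_move(line):
    """Return ('pass', None, None) for a pass, ('play', col, row) for
    a stone placement (0-based indices), or ('garbage', None, None)
    for anything else -- which rule 5b3 says must be ignored."""
    text = line.strip().upper()
    if text == "PASS":
        return ("pass", None, None)
    if len(text) >= 2 and text[0] in COLUMNS and text[1:].isdigit():
        row = int(text[1:])
        if 1 <= row <= 19:
            return ("play", COLUMNS.index(text[0]), row - 1)
    return ("garbage", None, None)

print(parse_move("G7"))    # ('play', 6, 6)
print(parse_move("pass"))  # ('pass', None, None)
print(parse_move("I5"))    # ('garbage', None, None) -- 'I' is no column
```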

6) The general form of the tournament will depend on the number of
participants, the availability of computing power, etc.  If only a
few programs are entered each program will play every other program
twice.  If many are entered some form of Swiss system will be used.

7) These rules are not set in concrete ... yet; this one in
particular.


Comments, suggestions, contributions, etc. should be sent via uucp
to harpo!psl or via U.S. Mail to Peter Langston / Lucasfilm Ltd. /
P.O. Box 2009 / San Rafael, CA  94912.


For the record: I am neither "at Bell Labs" nor "at Usenix", but
rather "at" a company whose net address is a secret (cough, cough!).
Thus notices like this must be sent through helpful intermediaries
like Harpo.  I am, however, organizing this tournament "for" Usenix.

------------------------------

End of AIList Digest
********************

∂15-Sep-83  2007	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #57
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Sep 83  20:05:40 PDT
Date: Thursday, September 15, 1983 4:57PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #57
To: AIList@SRI-AI


AIList Digest            Friday, 16 Sep 1983       Volume 1 : Issue 57

Today's Topics:
  Artificial Intelligence - Public Recognition,
  Programming Languages - Multiple Inheritance & Micro LISPs,
  Query Systems - Talk by Michael Hess,
  AI Architectures & Prolog - Talk by Peter Borgwardt,
  AI Architectures - Human-Nets Reprints
----------------------------------------------------------------------

Date: 10 Sep 1983 21:44:16-PDT
From: Richard Tong <fuzzy1@aids-unix>
Subject: "some guy named Alvey"

John Alvey is Senior Director, Technology, at British Telecom.  The 
committee that he headed reported to the British Minister for 
Information Technology in September 1982 ("A Program for Advanced 
Information Technology", HMSO 1982).

The committee was formed in response to the announcement of the
Japanese 5th Generation Project at the behest of the British
Information Technology Industry.

The major recommendations were for increased collaboration within 
industry, and between industry and academia, in the areas of Software 
Engineering, VLSI, Man-Machine Interfaces and Intelligent 
Knowledge-Based Systems.  The recommended funding levels were
approximately $100M, $145M, $66M, and $40M respectively.

The British Government's response was entirely positive and resulted
in the setting up of a small Directorate within the Department of
Industry.  This is staffed by people from industry and supported by
the Government.

The most obvious results so far have been the creation of several 
Information Technology posts in various universities.  Whether the 
research money will appear as quickly remains to be seen.

Richard.

------------------------------

Date: Mon 12 Sep 83 22:35:21-PDT
From: Edward Feigenbaum <FEIGENBAUM@SUMEX-AIM>
Subject: The world turns; would you believe...

                [Reprinted from the SU-SCORE bboard.]

1. A thing called the Wall Street Computer Review, advertising
a conference on computers for Wall Street professionals, with
keynote speech by Isaac Asimov entitled "Artificial Intelligence
on Wall Street"

2. In the employment advertising section of last Sunday's NY Times,
Bell Labs (of all places!)  showing Expert Systems prominently
as one of their areas of work and need, and advertising for people
to do Expert Systems development using methods of Artificial
Intelligence research. Now I'm looking for a big IBM ad in
Scientific American...

3. In 2 September SCIENCE, an ad from New Mexico State's Computing
Research Laboratory. It says:

"To enhance further the technological capabilities of New Mexico, the
state has funded five centers of technical excellence including
Computing Research Laboratory (CRL) at New Mexico State University.
...The CRL is dedicated to interdisciplinary research on knowledge-
based systems"

------------------------------

Date: 15 Sep 1983 15:28-EST
From: David.Anderson@CMU-CS-G.ARPA
Subject: Re: Multiple Inheritance query

For a discussion of multiple inheritance see "Multiple Inheritance in 
Smalltalk-80" by Alan Borning and Dan Ingalls in the AAAI-82
proceedings.  The Lisp Machine Lisp manual also has some justification
for multiple inheritance schemes in the chapter on Flavors.

--david

[See also any discussion of the LOOPS language, e.g., in the
Fall issue of AI Magazine.  -- KIL]
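
For readers unfamiliar with the idea, here is a minimal
multiple-inheritance sketch (in Python, not in Smalltalk-80, Flavors,
or LOOPS; all names are hypothetical and merely suggest how the
feature might look in an adventure-specification language):

```python
# Minimal multiple-inheritance sketch: a Player combines two
# independently written behaviors.  The interesting design question
# is the lookup order when parents conflict.

class Movable:
    def move(self, place):
        return f"moved to {place}"

class Container:
    def __init__(self):
        self.contents = []
    def take(self, item):
        self.contents.append(item)
        return f"took {item}"

class Player(Movable, Container):
    """Inherits move() from Movable and take() from Container; the
    method-resolution order decides lookups when parents conflict."""
    pass

p = Player()
print(p.move("cave"))   # moved to cave
print(p.take("lamp"))   # took lamp
print([c.__name__ for c in Player.__mro__])
# ['Player', 'Movable', 'Container', 'object']
```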

------------------------------

Date: Wed 14 Sep 83 19:16:41-EDT
From: Ted Markowitz <TJM@COLUMBIA-20.ARPA>
Subject: Info on micro LISP dialects

Has anyone evaluated versions of LISP that run on micros? I'd like to
find out what's already out there and people's impressions of them.
The hardware would be something in the nature of an IBM PC or a DEC
Rainbow.

--ted

------------------------------

Date: 12 Sep 1983 1415-PDT
From: Ichiki
Subject: Talk by Michael Hess

      [This talk will be given at the SRI AI Center.  Visitors
      should come to E building on Ravenswood Avenue in Menlo
      Park and call Joani Ichiki, x4403.]


                Text Based Question Answering Systems
                -------------------------------------

                             Michael Hess
                     University of Texas, Austin

                  Friday, 16 September, 10:30, EK242

Question Answering Systems typically operate on Data Bases consisting 
of object level facts and rules. This, however, limits their 
usefulness quite substantially. Most scientific information is 
represented as Natural Language texts. These texts provide relatively 
few basic facts but do give detailed explanations of how they can be 
interpreted, i.e. how the facts can be linked with the general laws 
which either explain them, or which can be inferred from them. This 
type of information, however, does not lend itself to an immediate 
representation on the object level.

Since there are no known proof procedures for higher order logics we 
have to find makeshift solutions for a suitable text representation 
with appropriate interpretation procedures. One way is to use the 
subset of First Order Predicate Calculus as defined by Prolog as a 
representation language, and a General Purpose Planner (implemented in
Prolog) as an interpreter. Answering a question over a textual data 
base can then be reduced to proving the answer in a model of the world
as described in the text, i.e. to planning a sequence of actions 
leading from the state of affairs given in the text to the state of 
affairs given in the question. The meta-level information contained in
the text is used as control information during the proof, i.e. during 
the execution of the simulation in the model. Moreover, the format of 
the data as defined by the planner makes explicit some kinds of 
information particularly often addressed in questions.

The simulation of an experiment in the Blocks World, using the kind of
meta-level information important in real scientific experiments, can 
be used to generate data which, when generalised, could be used 
directly as DB for question answering about the experiment.  
Simultaneously, it serves as a pattern for the representation of 
possible texts describing the experiment.  The question of how to 
translate NL questions and NL texts, into this kind of format, 
however, has yet to be solved.

------------------------------

Date: 12 Sep 1983 1730-PDT
From: Ichiki
Subject: Talk by Peter Borgwardt

      [This talk will be given at the SRI AI Center.  Visitors
      should come to E building on Ravenswood Avenue in Menlo
      Park and call Joani Ichiki, x4403.]

There will be a talk given by Peter Borgwardt on Monday, 9/19 at 
10:30am in Conference Room EJ222.  Abstract follows:

              Parallel Prolog Using Stack Segments
                on Shared-memory Multiprocessors

                         Peter Borgwardt
                   Computer Science Department
                     University of Minnesota
                     Minneapolis, MN 55455

                            Abstract

A method of parallel evaluation for Prolog is presented for 
shared-memory multiprocessors that is a natural extension of the 
current methods of compiling Prolog for sequential execution.  In 
particular, the method exploits stack-based evaluation with stack 
segments spread across several processors to greatly reduce the need
for garbage collection in the distributed computation.  AND 
parallelism and stream parallelism are the most important sources of
concurrent execution in this method; these are implemented using local
process lists; idle processors may scan these and execute any process
as soon as its consumed (input) variables have been defined by the
goals that produce them.  OR parallelism is considered less important
but the method does implement it with process numbers and variable
binding lists when it is requested in the source program.

------------------------------

Date: Wed, 14 Sep 83 07:31 PDT
From: "Glasser Alan"@LLL-MFE.ARPA
Subject: human-nets discussion on AI and architecture

Ken,

   I see you have revived the Human-nets discussion about AI and
computer architecture.  I initiated that discussion and saved all
the replies.  I thought you might be interested.  I'm sending them
to you rather than AILIST so you can use your judgment about what
if anything you might like to forward to AILIST.
                                        Alan

[The following is the original message.  The remainder of this
digest consists of the collected replies.  I am not sure which,
if any, appeared in Human-Nets.  -- KIL]


---------------------------------------------------------------------

Date: 4 Oct 1982 (Monday) 0537-EDT
From: GLASSER at LLL-MFE
Subject: artificial intelligence and computer architecture

     I am a new member of the HUMAN-NETS interest group.  I am also
newly interested in Artificial Intelligence, partly as a result of
reading "Goedel, Escher, Bach" and similar recent books and articles
on AI.  While this interest group isn't really about AI, there isn't
any other group which is, and since this one covers any computer
topics not covered by others, this will do as a forum.
     From what I've read, it seems that most or all AI work now
being done involves using von Neumann computer programs to model
aspects of intelligent behavior.  Meanwhile, others like Backus
(IEEE Spectrum, August 1982, p.22) are challenging the dominance of
von Neumann computers and exploring alternative programming styles
and computer architectures. I believe there's a crucial missing link
in understanding intelligent behavior.  I think it's likely to
involve the nature of associative memory, and I think the key to it
is likely to involve novel concepts in computer architecture.
Discovery of the structure of associative memory could have an
effect on AI similar to that of the discovery of the structure of
DNA on genetics.  Does anyone out there have similar ideas?  Does
anyone know of any research and/or publications on this sort of
thing?

---------------------------------------------------------------------

Date: 15 Oct 1982 1406-PDT
From: Paul Martin <PMARTIN at SRI-AI>
Subject: Re: HUMAN-NETS Digest   V5 #96

Concerning the NON-VON project at Columbia: David Shaw, formerly of
the Stanford A.I. Lab, is using the development of some
non-von Neumann hardware designs to make an interesting class of
database access operations no longer require time that is
exponential in the size of the db.  He wouldn't call his project
AI, but rather an approach to "breaking the von Neumann bottleneck"
as it applies to a number of well-understood but poorly solved
problems in computing.

---------------------------------------------------------------------

Date: 28 Oct 1982 1515-EDT
From: David F. Bacon
Subject: Parallelism and AI
Reply-to: Columbia at CMU-20C

Parallel Architectures for Artificial Intelligence at Columbia

While the NON-VON supercomputer is expected to provide significant
performance improvements in other areas as well, one of the
principal goals of the project is the provision of highly efficient
support for large-scale artificial intelligence applications.  As
Dr. Martin indicated in his recent message, NON-VON is particularly
well suited to the execution of relational algebraic operations.  We
believe, however, that such functions, or operations very much like
them, are central to a wide range of artificial intelligence
applications.

In particular, we are currently developing a parallel version of the
PROLOG language for NON-VON (in addition to parallel versions of
Pascal, LISP and APL).  David Shaw, who is directing the NON-VON
project, wrote his Ph.D.  thesis at the Stanford A.I. Lab on a
subject related to large-scale parallel AI operations.  Many of the
ideas from his dissertation are being exploited in our current work.

The NON-VON machine will be constructed using custom VLSI chips,
connected according to a binary tree-structured topology.  NON-VON
will have a very "fine granularity" (that is, a large number of very
small processors).  A full-scale NON-VON machine might embody on the
order of 1 million processing elements.  A prototype version
incorporating 1000 PE's should be running by next August.

In addition to NON-VON, another machine called DADO is being
developed specifically for AI applications (for example, an optimal
running time algorithm for Production System programs has already
been implemented on a DADO simulator).  Professor Sal Stolfo is
principal architect of the DADO machine, and is working in close
collaboration with Professor Shaw.  The DADO machine will contain a
smaller number of more powerful processing elements than NON-VON,
and will thus have a "coarser" granularity.  DADO is being
constructed with off-the-shelf Intel 8751 chips; each processor will
have 4K of EPROM and 8K of RAM.

Like NON-VON, the DADO machine will be configured as a binary tree.
Since it is being constructed using "off-the-shelf" components, a
working DADO prototype should be operational at an earlier date than
the first NON-VON machine (a sixteen node prototype should be
operational in three weeks!).  While DADO will be of interest in its
own right, it will also be used to simulate the NON-VON machine,
providing a powerful testbed for the investigation of massive
parallelism.

As some people have legitimately pointed out, parallelism doesn't
magically solve all your problems ("we've got 2 million processors,
so who cares about efficiency?").  On the other hand, a lot of AI
problems simply haven't been practical on conventional machines, and
parallel machines should help in this area.  Existing problems are
also sped up substantially [ O(N) sort, O(1) search, O(n↑2) matrix
multiply ].  As someone already mentioned, vision algorithms seem
particularly well suited to parallelism -- this is being
investigated here at Columbia.

New architectures won't solve all of our problems -- it's painfully
obvious on our current machines that even fast expensive hardware
isn't worth a damn if you haven't got good software to run on it,
but even the best of software is limited by the hardware.  Parallel
machines will overcome one of the major limitations of computers.

David Bacon
NON-VON/DADO Research Group
Columbia University

------------------------------

Date: 7 Nov 82 13:43:44 EST  (Sun)
From: Mark Weiser <mark.umcp-cs@UDel-Relay>
Subject: Re:  Parallelism and AI

Just to mention another project, The CS department at the University
of Maryland has a parallel computing project called Zmob.  A Zmob
consists of 256 Z-80 processors called moblets, each with 64k
memory, connected by a 48 bit wide high speed shift register ring
network  (100ns/shift, 25.6us/revolution) called the "conveyer
belt".  The conveyer belt acts almost like a 256x256 cross-bar since
it rotates faster than a z-80 can do significant I/O, and it also
provides for broadcast messages and messages sent and received by
pattern match.  Each Z-80 has serial and parallel ports, and the
whole thing is served by a Vax which provides cross-compiling and
file access.

There are four projects funded and working on Zmob (other than the
basic hardware construction), sponsored by the Air Force.  One is
parallel numerical analysis, matrix calculations, and the like (the
Z-80's have hardware floating point).  The second is parallel image
processing and vision.  The third is distributed problem solving
using Prolog.  The fourth (mine) is operating systems and software,
developing remote-procedure-call and a distributed version of Unix
called Mobix.

A two-moblet prototype was working a year and a half ago, and we hope
to bring up a 128 processor version in the next few months.  (The
boards are all PC'ed and stuffed but timing problems on the bus are
temporarily holding things back).

------------------------------

End of AIList Digest
********************

∂16-Sep-83  1714	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #58
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Sep 83  17:12:40 PDT
Date: Friday, September 16, 1983 4:10PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #58
To: AIList@SRI-AI


AIList Digest           Saturday, 17 Sep 1983      Volume 1 : Issue 58

Today's Topics:
  Automatic Translation - Ada,
  Games - Go Programs & Foonberg's Number Problem,
  Artificial Intelligence - Turing Test & Creativity
----------------------------------------------------------------------

Date: 10 Sep 83 13:50:18-PDT (Sat)
From: decvax!wivax!linus!vaxine!wjh12!foxvax1!brunix!rayssd!sdl@Ucb-Vax
Subject: Re: Translation into Ada:  Request for Info
Article-I.D.: rayssd.142

There have been a number of translators from Pascal to Ada, the first
successful one I know of was developed at UC Berkeley by P. Albrecht,
S. Graham et al.  See the "Source-to-Source Translation" paper in the
1980 Proceedings of Sigplan Symp. on Ada, Dec. 1980.

At Univ. S. Calif. Info. Sci. Institute (USC-ISI), Steve Crocker (now
at the Aerospace Corp.) developed AUTOPSY, a translator from CMS-2 to
Ada.  (CMS-2 is the Navy standard language for embedded software.)

Steve Litvintchouk
Raytheon Company
Portsmouth, RI  02871

------------------------------

Date: 10 Sep 83 13:56:17-PDT (Sat)
From: decvax!wivax!linus!vaxine!wjh12!foxvax1!brunix!rayssd!sdl@Ucb-Vax
Subject: Re: Go Tournament
Article-I.D.: rayssd.143

ARE there any available Go programs which run on VAX/UNIX which I
could obtain?  (Either commercially sold, or available from
universities, or whatever.)

I find Go fascinating and would love to have a Go program to play
against.

Please reply via USENET, or to:

Steve Litvintchouk
Raytheon Company
Submarine Signal Division
Portsmouth, RI 02871

(401)847-8000 x4018

------------------------------

Date: 14 Sep 1983 16:18-EDT
From: Dan Hoey <hoey@NRL-AIC>
Subject: Alan Foonberg's number problem

I'm surprised you posted Alan Foonberg's number problem on AIList
since Vivek Sarkar's solution has already appeared (Prolog Digest V1
#28).  I enclose his solution below.  His solution unfortunately omits
the special cases 2020 and 21200; I have sent a correction to the
Prolog Digest.

Dan

------------------------------

Date: Wed 7 Sep 83 11:08:08-PDT
From: Vivek Sarkar <JLH.Vivek@SU-SIERRA>
Subject: Solution to Alan Foonberg's Number Puzzle

Here is a general solution to the puzzle posed by Alan Foonberg:

My generalisation is to consider n-digit numbers in base n.  The 
digits can therefore take on values in the range 0 .. n-1 .

A summary of the solution is:

n = 4:  1210

n >= 7:  (n-4) 2 1 0 0 ... 0 0 1 0 0 0
                   <--------->
                    (n-7) 0's

Further, these describe ALL possible solutions, i.e., radix values of
2, 3, 5, and 6 have no solutions, and every other radix has exactly
one solution.

Proof:

Case 2 <= n <= 6:  Consider these as singular cases.  It is simple to
show that there are no solutions for 2,3,5,6 and that 1210 is the only
solution for 4. You can do this by writing a program to generate all
solutions for a given radix.  ( I did that; unfortunately it works out
better in Pascal than Prolog ! )
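The exhaustive search mentioned above can be sketched in a few lines; the poster's program was in Pascal and is not shown here, so the Python below (with an illustrative function name) is only a modern equivalent of the idea, not the original code.

```python
from itertools import product

def self_descriptive(base):
    """All base-`base` numbers of `base` digits in which the digit at
    position i equals how many times digit i occurs in the number."""
    found = []
    for digits in product(range(base), repeat=base):
        if digits[0] == 0:      # skip candidates with a leading zero
            continue
        if all(digits[i] == digits.count(i) for i in range(base)):
            found.append(''.join(map(str, digits)))
    return found

# Exhaustive search is feasible only for small radix: base**base tuples.
```

For base 4 this returns ['1210', '2020'] and for base 5 it returns ['21200'], which matches Dan Hoey's note that 2020 and 21200 are special cases omitted from the proof; bases 2, 3, and 6 return nothing.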

CASE n >= 7:  It is easy to see that the given number is indeed a
solution. ( The rightmost 1 represents the single occurrence of (n-4)
at the beginning ).  For motivation, we can substitute n=10 and get 
6210001000, which was the decimal solution provided by Alan.

The tough part is to show that this represents the only solution, for
a given radix.  We do this by considering all possible values for the
first digit ( call it d0 ) and showing that d0=(n-4) is the only one
which can lead to a solution.

SUBCASE d0 < (n-4):  Let d0 = n-4-j, where j>=1.  Therefore the number
has (n-4-j) 0's, which leaves (j+3) non-zero digits apart from d0.
Further these (j+3) digits must add up to (j+4). ( The sum of the
digits of a solution must be n, as there are n digits in the number,
and the value of each digit contributes to a frequency count of digits
with its positional value).  The only way that (j+3) non-zero digits
can add up to (j+4) is by having (j+2) 1's and one 2.  If there are
(j+2) 1's, then the second digit from the left, which counts the
number of 1's (call it d1) must = (j+2).  Since j >= 1, d1=(j+2) is
neither a 1 nor a 2.  Contradiction !

SUBCASE d0 > (n-4):  This leads to 3 possible values for d0: (n-1),
(n-2) & (n-3).  It is simple to consider each value and see that it
can't possibly lead to a solution, by using an analysis similar to the
one above.

We therefore conclude that d0=(n-4), and it is straightforward to show
that the given solution is the only possible one, for this value of
d0.

-- Q.E.D.
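The closed form for n >= 7 is also easy to machine-check.  The sketch below (function names are illustrative, not from the original posting) builds the digit string (n-4) 2 1 0...0 1 0 0 0 and tests the self-descriptive property directly.

```python
def foonberg_solution(n):
    """Digits of the claimed unique solution for radix n >= 7:
    (n-4) 2 1, then (n-7) zeros, then 1 0 0 0."""
    assert n >= 7
    return [n - 4, 2, 1] + [0] * (n - 7) + [1, 0, 0, 0]

def is_self_descriptive(digits):
    """True iff digit i equals the count of digit i in the number."""
    return all(digits[i] == digits.count(i) for i in range(len(digits)))
```

Substituting n=10 gives [6, 2, 1, 0, 0, 0, 1, 0, 0, 0], i.e. the decimal solution 6210001000 cited in the text, and the property holds for every n >= 7 one cares to try.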

------------------------------

Date: Wed 14 Sep 83 17:25:38-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Re: Alan Foonberg's number problem

Thanks for the note and the correction.  I get the Prolog digest
a little delayed, so I hadn't seen the answer at the time I relayed
the problem.

My purpose in sending out the problem actually had nothing to do with
finding the answer.  The answer you forwarded is a nice mathematical
proof, but the question is whether and how AI techniques could solve
the problem.  Would an AI program have to reason in the same manner as
a mathematician?  Would different AI techniques lead to different
answers?  How does one represent the problem and the solution in 
machine-readable form?  Is this an interesting class of problems for
cognitive science to deal with?

I was expecting that someone would respond with a 10-line PROLOG
program that would solve the problem.  The discussion that followed
might contrast that with the LISP or ALGOL infrastructure needed to
solve the problem.  Now, of course, I don't expect anyone to present
algorithmic solutions.

                                        -- Ken Laws

------------------------------

Date: 9 Sep 83 13:15:56-PDT (Fri)
From: harpo!floyd!cmcl2!csd1!condict @ Ucb-Vax
Subject: Re: in defense of Turing - (nf)
Article-I.D.: csd1.116

A comment on the statement that it is easy to trip up an allegedly
intelligent machine that generates responses by using the input as an
index into an array of possible outputs:  Yes, but this machine has no
state and hence hardly qualifies as a machine at all!  The simple
tricks you described cannot be used if we augment it to use the entire
sequence of inputs so far as the index, instead of just the most
recent one, when generating its response. This allows it to take into
account sequences that contain runs of identical inputs and to 
understand inputs that refer to previous inputs (or even
Hofstadteresque self-referential inputs).  My point is not that this
new machine cannot be tripped up but that the one described is such a
straw man that fooling it gives no information about the real
difficulty of programming a computer to pass the Turing test.
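The augmented machine described above, one that indexes on the entire sequence of inputs rather than the most recent input, can be sketched in a few lines of Python.  Everything here (the class, the table, the canned replies) is a toy invented for illustration, not a serious conversational program.

```python
class HistoryResponder:
    """Toy lookup machine whose response table is indexed by the whole
    sequence of inputs so far, not just the latest one, so repeated or
    self-referential inputs can map to different responses."""

    def __init__(self, table, default="I see."):
        self.table = table      # maps tuples of inputs -> responses
        self.default = default
        self.history = ()       # the machine's state: all inputs so far

    def reply(self, message):
        self.history = self.history + (message,)
        return self.table.get(self.history, self.default)
```

A table entry keyed on ("hello", "hello") lets the machine answer a repeated "hello" differently from the first one, which is exactly the trick the stateless single-input version cannot pull off.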

------------------------------

Date: 10 Sep 83 22:20:39-PDT (Sat)
From: decvax!wivax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!speaker@Ucb-Vax
Subject: Re: in defense of Turing
Article-I.D.: umcp-cs.2538

It should be fairly obvious that the Turing test is not a precise
test to determine intelligence because the very meaning of the
word 'intelligence' cannot be precisely pinned down, despite what
your Oxford dictionary might say.

I think the idea here is that if a machine can perform such that
it is indistinguishable from the behavior of a human then it can
be said to display human intelligence.  Note that I said, "human
intelligence."

It is even debatable whether certain members of the executive branch
can be said to be intelligent.  If we can't apply the Turing test
there... then surely we're just spinning our wheels in an attempt
to apply it universally.

                                                - Speaker

--
Full-Name:      Speaker-To-Animals
Csnet:          speaker@umcp-cs
Arpa:           speaker.umcp-cs@UDel-Relay

This must be hell...all I can see are flames... towering flames!

------------------------------

Date: Wed 14 Sep 83 12:35:11-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: intelligence and genius

[This continues a discussion on Human-Nets.  My original statement, 
printed below, was shot down by several people.  Individuals certainly
derive satisfaction from hobbies at which they will never excel.  It 
would take much of the fun out of my life, however, if I could not
even imagine excelling at anything because cybernetic life had
surpassed humans in every way. -- KIL]

    From: Ken Laws <Laws@SRI-AI.ARPA>
    Life will get even worse if AI succeeds in automating true
    creativity.  What point would there be in learning to paint,
    write, etc., if your home computer could knock out more
    artistic creations than you could ever hope to master?

    I was rather surprised that this suggestion was taken so quickly
as it stands. Most people in AI believe that we will someday create an
"intelligent" machine, but Ken's claim seems to go beyond that;
"automating true creativity" seems to be saying that we can create not
just intelligent, but "genius" systems, at will. The automation of
genius is a more sticky claim in my mind.

    For example, if we create an intelligent system, do we make it a
genius system by just turning up the speed or increasing its memory?
That's like saying a painter could become Rembrandt if he/she just
painted 1000 times more. More likely is that the wrong (or uncreative)
ideas would simply pour out faster, or be remembered longer. Turning
up the speed of the early blind-search chess programs made them
marginally better players, but no more creative.

    Or let's say we stumble onto the creation of some genius system,
call it "Einstein". Do we get all of the new genius systems we need by
merely duplicating "Einstein", something impossible to do with human
systems? Again, we hit a dead end... "Einstein" will only be useful in
a small domain of creativity, and will never be a Bach or a Rembrandt
no matter how many we clone.  Even more discouraging, if we xerox off
1000 of our "Einstein" systems, do we get 1000 times the creative
ideas? Probably not; we will cover the range of "Einstein's" potential
creativity better, but that's it. Even a genius has only a range of
creativity.

    What is it about genius systems that makes them so intractable?  
If we will someday create intelligent systems consistently and
reliably, what stands in the way of creating genius systems on demand?
I would suggest that statistics get in our way here; that genius
systems cannot be created out of dust, but that every once in a while,
an intelligent system has the proper conditioning and evolves into a
genius system. In this light, the number of genius systems possible
depends on the pool of intelligent systems that are available as
substrate.

    In short, while I feel we will be able to create intelligent 
systems, we will not be able to directly construct superintelligent 
ones. While there will be advantages in duplicating, speeding up, or
otherwise manipulating a genius system once created, the process of
creating one will remain maddeningly elusive.

David Rogers DRogers@SUMEX-AIM.ARPA


[I would like to stake out a middle ground: creative systems.

We will certainly have intelligent systems, and we will certainly have
trouble devising genius systems.  (Genius in human terms: I don't want
to get into whether an AI program can be >>sui generis<< if we can
produce a thousand variations of it before breakfast.)  A [scientific]
genius is someone who develops an idea for which there is, or at least
seems to be, no precedent.

Creativity, however, can exist in a lesser being.  Forget Picasso,
just consider an ordinary artist who sees a new style of bold,
imaginative painting.  The artist has certain inborn or learned
measures of artistic merit: color harmony, representational accuracy,
vividness, brush technique, etc.  He evaluates the new painting and
finds that it exists in a part of his artistic "parameter space" that
he has never explored.  He is excited, and carefully studies the
painting for clues as to the techniques that were used.  He
hypothesizes rules for creating similar visual effects, tries them out,
modifies them, iterates, adds additional constraints (yes, but can I
do it with just rectangles ...), etc.  This is creativity.  Nothing
that I have said above precludes our artist from being a machine.

Another example, which I believe I heard from a recent Stanford Ph.D.
(sorry, can't remember who): consider Solomon's famous decision.
Everyone knows that a dispute over property can often be settled by
dividing the property, providing that the value of the property is not
destroyed by the act of division.  Solomon's creative decision
involved the realization (at least, we hope he realized it) that in a
particular case, if the rule was implemented in a particular
theatrical manner, the precondition could be ignored and the rule
would still achieve its goal.  We can then imagine Solomon to be a
rule-based system with a metasystem that is constantly checking for
generalizations, specializations, and heuristic shortcuts to the
normal rule sequences.  I think that Doug Lenat's EURISKO program has
something of this flavor, as do other learning programs.

In the limit, we can imagine a system with nearly infinite computing 
power that builds models of its environment in its memory.  It carries
out experiments on this model, and verifies the experiments by
carrying them out in the real world when it can.  It can solve
ordinary problems through various applicable rule invocations,
unifications, planning, etc.  Problems requiring creativity can often
be solved by applying inappropriate rules and techniques (i.e.,
violating their preconditions) just to see what will happen --
sometimes it will turn out that the preconditions were unnecessarily
strict.  [The system I have just described is a fair approximation to
a human -- or even to a monkey, dog, or elephant.]

True genius in such a system would require that it construct new 
paradigms of thought and problem solving.  This will be much more 
difficult, but I don't doubt that we and our cybernetic offspring will
even be able to construct such progeny someday.

                                        -- Ken Laws ]

------------------------------

End of AIList Digest
********************

∂19-Sep-83  1751	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #59
Received: from SRI-AI by SU-AI with TCP/SMTP; 19 Sep 83  17:48:20 PDT
Date: Monday, September 19, 1983 4:16PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #59
To: AIList@SRI-AI


AIList Digest            Tuesday, 20 Sep 1983      Volume 1 : Issue 59

Today's Topics:
  Programming Languages -  Micro LISP Reviews,
  Machine Translation - Ada & Dictionary Request & Grammar Translation,
  AI Journals - Addendum,
  Bibliography - SNePS Research Group
----------------------------------------------------------------------

Date: Mon, 19 Sep 1983  11:41 EDT
From: WELD%MIT-OZ@MIT-MC
Subject: Micro LISPs

For a survey of micro LISPs see the August and Sept issues of 
Microsystems magazine. The Aug issue reviews muLISP, Supersoft LISP 
and The Stiff Upper Lisp. I believe that the Sept issue will continue 
the survey with some more reviews.

Dan

------------------------------

Date: 14 Sep 83 1:44:58-PDT (Wed)
From: decvax!genrad!mit-eddie!barmar @ Ucb-Vax
Subject: Re: Translation into Ada:  Request for Info
Article-I.D.: mit-eddi.713

I think the reference to the WWMCS conversion effort is a bad example 
when talking about automatic programming language translation.  I would be
very surprised if WWMCS is written in a high-level language.  It runs
on Honeywell GCOS machines, I believe, and I think that GCOS system 
programming is traditionally done in GMAP (GCOS Macro Assembler 
Program), especially at the time that WWMCS was written.  Only a 
masochist would even think of writing an automatic "anticompiler" (I 
have heard of uncompilers, but those are usually restricted to
figuring out the code produced by a known compiler, not arbitrary
human coding); researchers have found it hard enough to teach
computers to "understand" programs in HLLs, and it is often pretty
difficult for humans to understand others' assembler code.
--
                        Barry Margolin
                        ARPA: barmar@MIT-Multics
                        UUCP: ..!genrad!mit-eddie!barmar

------------------------------

Date: Mon 19 Sep 83 14:56:49-CDT
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: Request for m/c-readable foreign language dictionary info

I am looking for foreign-language dictionaries in machine-readable
form.  Of particular interest would be a subset containing
EDP-terminology.  This would be used to help automate translation of
computer-related technical materials.

Of major interest are German, Spanish, French, but others might be
useful also.

Any pointers appreciated.

Werner (UUCP:  ut-ngp!werner or ut-ngp!utastro!werner
         via:  { decvax!eagle , ucbvax!nbires , gatech!allegra!eagle ,
                 ihnp4 }
        ARPA: werner@utexas-20 or werner@utexas-11 )

------------------------------

Date: 19 Sep 1983 0858-PDT
From: PAZZANI at USC-ECL
Subject: Parsifal

I have a question about PARSIFAL (Marcus's deterministic parser) that
I hope someone can answer:

Is it easy (or possible) to convert grammar rules to the kind of rules 
that Parsifal uses? Is there an algorithm to do so?
(i.e., by grammar rule, I mean things like:
S -> NP VP
VP -> VP2 NP PP
VP -> V3 INF
INF -> to VP
etc.
where by grammar rule Marcus means things like
{RULE MAJOR-DECL-S in SS-START
[=np][=verb]-->
Label c decl,major.
Deactivate ss-start. Activate parse-subj.}

{RULE UNMARKED-ORDER IN PARSE-SUBJ
[=np][=verb]-->
Attach 1st to c as np.
Deactivate Parse-subj. Activate parse-aux.}

Thanks in advance,
Mike Pazzani
Pazzani@usc-ecl

------------------------------

Date: 16 Sep 83 16:58:30-PDT (Fri)
From: ihnp4!cbosgd!cbscc!cbscd5!lvc @ Ucb-Vax
Subject: addendum to AI journal list
Article-I.D.: cbscd5.589

The following are journals that readers have sent me since the time I 
posted the list of AI journals.  As has been pointed out, individuals
can get subscriptions at a reduced rate.  Most of the prices I quoted
were the institutional price.

The American Journal of Computational Linguistics -- will now be called ->
Computational Linguistics
        Subscription $15
        Don Walker, ACL
        SRI International
        Menlo Park, CA 94025.
------------------------------
Cognition and Brain Theory
        Lawrence Erlbaum Associates, Inc.
        365 Broadway,
        Hillsdale, New Jersey 07642
        $18 Individual $50 Institutional
        Quarterly
        Basic cognition, proposed models and discussion of
        consciousness and mental process, epistemology - from frames to
        neurons, as related to human cognitive processes. A "fringe"
        publication for AI topics, and a good forum for issues in cognitive
        science/psychology.
------------------------------
New Generation Computing
        Springer-Verlag New York Inc.
        Journal Fulfillment Dept.
        44 Hartz Way
        Secaucus, NJ 07094
        A quarterly English-language journal devoted to international
        research on the fifth generation computer.  [It seems to be
        very strong on hardware and logic programming.]
        1983 - 2 issues - $52. (Sample copy free.)
        1984 - 4 issues - $104.

Larry Cipriani
cbosgd!cbscd5!lvc

------------------------------

Date: 16 Sep 1983 10:38:57-PDT
From: shapiro%buffalo-cs@UDel-Relay
Subject: Your request for bibliographies


                           Bibliography
                 SNeRG: The SNePS Research Group
                  Department of Computer Science
             State University of New York at Buffalo
                     Amherst, New York 14226



     Copies of Departmental Technical Reports (marked with an "*")
should be requested from The Library Committee, Dept. of Computer
Science, SUNY/Buffalo, 4226 Ridge Lea Road, Amherst, NY 14226.
Businesses are asked to enclose $3.00 per report requested with their
requests. Others are asked to enclose $1.00 per report.

     Copies of papers other than Departmental Technical Reports may be
requested directly from Prof. Stuart C. Shapiro at the above address.


 1.  Shapiro, S. C. [1971] A net structure for semantic
     information storage, deduction and retrieval. Proc. Second
     International Joint Conference on Artificial Intelligence,
     William Kaufman, Los Altos, CA, 212-223.

 2.  Shapiro, S. C. [1972] Generation as parsing from a network
     into a linear string. American Journal of Computational
     Linguistics, Microfiche 33, 42-62.

 3.  Shapiro, S. C. [1976] An introduction to SNePS (Semantic Net
     Processing System). Technical Report No. 31, Computer
     Science Department, Indiana University, Bloomington, IN, 21
     pp.

 4.  Shapiro, S. C. and Wand, M. [1976] The Relevance of
     Relevance. Technical Report No. 46, Computer Science
     Department, Indiana University, Bloomington, IN, 21pp.

 5.  Bechtel, R. and Shapiro, S. C. [1976] A logic for semantic
     networks. Technical Report No. 47, Computer Science
     Department, Indiana University, Bloomington, IN, 29pp.

 6.  Shapiro, S. C. [1977] Representing and locating deduction
     rules in a semantic network. Proc. Workshop on
     Pattern-Directed Inference Systems. SIGART Newsletter, 63
     14-18.

 7.  Shapiro, S. C. [1977] Representing numbers in semantic
     networks: prolegomena. Proc. Fifth International Joint
     Conference on Artificial Intelligence, William Kaufman, Los
     Altos, CA, 284.

 8.  Shapiro, S. C. [1977] Compiling deduction rules from a
     semantic network into a set of processes. Abstracts of
     Workshop on Automatic Deduction, MIT, Cambridge, MA.
     (Abstract only), 7pp.

 9.  Shapiro, S. C. [1978] Path-based and node-based inference in
     semantic networks. In D. Waltz, ed. TINLAP-2: Theoretical
     Issues in Natural Languages Processing. ACM, New York,
     219-222.

10.  Shapiro, S. C. [1979] The SNePS semantic network processing
     system. In N. V. Findler, ed. Associative Networks: The
     Representation and Use of Knowledge by Computers. Academic
     Press, New York, 179-203.

11.  Shapiro, S. C. [1979] Generalized augmented transition
     network grammars for generation from semantic networks.
     Proc. 17th Annual Meeting of the Association for
     Computational Linguistics. University of California at San
     Diego, 22-29.

12.  Shapiro, S. C. [1979] Numerical quantifiers and their use in
     reasoning with negative information. Proc. Sixth
     International Joint Conference on Artificial Intelligence,
     William Kaufman, Los Altos, CA, 791-796.

13.  Shapiro, S. C. [1979] Using non-standard connectives and
     quantifiers for representing deduction rules in a semantic
     network. Invited paper presented at Current Aspects of AI
     Research, a seminar held at the Electrotechnical Laboratory,
     Tokyo, 22pp.

14.  * McKay, D. P. and Shapiro, S. C. [1980] MULTI: A LISP Based
     Multiprocessing System. Technical Report No. 164, Department
     of Computer Science, SUNY at Buffalo, Amherst, NY, 20pp.
     (Contains appendices not in LISP conference version)

15.  McKay, D. P. and Shapiro, S. C. [1980] MULTI - A LISP based
     multiprocessing system. Proc. 1980 LISP Conference, Stanford
     University, Stanford, CA, 29-37.

16.  Shapiro, S. C. and McKay, D. P. [1980] Inference with
     recursive rules. Proc. First Annual National Conference on
     Artificial Intelligence, William Kaufman, Los Altos, CA,
     121-123.

17.  Shapiro, S. C. [1980] Review of Fahlman, Scott. NETL: A
     System for Representing and Using Real-World Knowledge. MIT
     Press, Cambridge, MA, 1979. American Journal of
     Computational Linguistics 6, 3, 183-186.

18.  McKay, D. P. [1980] Recursive Rules - An Outside Challenge.
     SNeRG Technical Note No. 1, Department of Computer Science,
     SUNY at Buffalo, Amherst, NY, 11pp.

19.  * Maida, A. S. and Shapiro, S. C. [1981] Intensional
     concepts in propositional semantic networks. Technical
     Report No. 171, Department of Computer Science, SUNY at
     Buffalo, Amherst, NY, 69pp.

20.  * Shapiro, S. C. [1981] COCCI: a deductive semantic network
     program for solving microbiology unknowns. Technical Report
     No. 173, Department of Computer Science, SUNY at Buffalo,
     Amherst, NY, 24pp.

21.  * Martins, J.; McKay, D. P.; and Shapiro, S. C. [1981]
     Bi-directional Inference. Technical Report No. 174,
     Department of Computer Science, SUNY at Buffalo, Amherst,
     NY, 32pp.

22.  * Martins, J., and Shapiro, S. C. [1981] A Belief Revision
     System Based on Relevance Logic and Heterarchical Contexts.
     Technical Report No. 172, Department of Computer Science,
     SUNY at Buffalo, Amherst, NY, 42pp.

23.  Shapiro, S. C. [1981] Summary of Scientific Progress. SNeRG
     Technical Note No. 3, Department of Computer Science, SUNY
     at Buffalo, Amherst, NY, 2pp.

24.  McKay, D. P. and Martins, J. SNePSLOG User's Manual. SNeRG
     Technical Note No. 4, Department of Computer Science, SUNY
     at Buffalo, Amherst, NY, 8pp.

25.  McKay, D. P.; Shubin, H.; and Martins, J. [1981] RIPOFF:
     Another Text Formatting Program. SNeRG Technical Note No. 2,
     Department of Computer Science, SUNY at Buffalo, Amherst,
     NY, 18pp.

26.  * Neal, J. [1981] A Knowledge Engineering Approach to
     Natural Language Understanding. Technical Report No. 179,
     Computer Science Department, SUNY at Buffalo, Amherst, NY,
     67pp.

27.  * Srihari, R. [1981] Combining Path-based and Node-based
     Reasoning in SNePS. Technical Report No. 183, Department of
     Computer Science, SUNY at Buffalo, Amherst, NY, 22pp.

28.  McKay, D. P.; Martins, J.; Morgado, E.; Almeida, M.; and
     Shapiro, S. C. [1981] An Assessment of SNePS for the Navy
     Domain. SNeRG Technical Note No. 6, Department of Computer
     Science, SUNY at Buffalo, Amherst, NY, 48pp.

29.  Shapiro, S. C. [1981] What do Semantic Network Nodes
     Represent? SNeRG Technical Note No. 7, Department of
     Computer Science, SUNY at Buffalo, Amherst, NY, 12pp.
     Presented at the workshop on Foundational Threads in Natural
     Language Processing, SUNY at Stony Brook.

30.  McKay, D. P., and Shapiro, S. C. [1981] Using active
     connection graphs for reasoning with recursive rules.
     Proceedings of the Seventh International Joint Conference on
     Artificial Intelligence, William Kaufman, Los Altos, CA,
     368-374.

31.  Shapiro, S. C. and The SNePS Implementation Group [1981]
     SNePS User's Manual. Department of Computer Science, SUNY at
     Buffalo, Amherst, NY, 44pp.

32.  Shapiro, S. C.; McKay, D. P.; Martins, J.; and Morgado, E.
     [1981] SNePSLOG: A "Higher Order" Logic Programming
     Language. SNeRG Technical Note No. 8, Department of Computer
     Science, SUNY at Buffalo, Amherst, NY, 16pp. Presented at
     the Workshop on Logic Programming for Intelligent Systems,
     R.M.S. Queen Mary, Long Beach, CA.

33.  * Shubin, H. [1981] Inference and Control in Multiprocessing
     Environments. Technical Report No. 186, Department of
     Computer Science, SUNY at Buffalo, Amherst, NY, 26pp.

34.  Shapiro, S. C. [1982] Generalized Augmented Transition
     Network Grammars for Generation from Semantic Networks. The
     American Journal of Computational Linguistics 8, 1 (January
     - March), 12-22.

35.  Almeida, M.J. [1982] NETP2 - A Parser for a Subset of
     English. SNeRG Technical Note No. 9, Department of Computer
     Science, SUNY at Buffalo, Amherst, NY, 32pp.

36.  * Tranchell, L.M. [1982] A SNePS Implementation of KL-ONE,
     Technical Report No. 198, Department of Computer Science,
     SUNY at Buffalo, Amherst, NY, 21pp.

37.  Shapiro, S.C. and Neal, J.G. [1982] A Knowledge Engineering
     Approach to Natural Language Understanding. Proceedings of
     the 20th Annual Meeting of the Association for Computational
     Linguistics, ACL, Menlo Park, CA, 136-144.

38.  Donlon, G. [1982] Using Resource Limited Inference in SNePS.
     SNeRG Technical Note No. 10, Department of Computer Science,
     SUNY at Buffalo, Amherst, NY, 10pp.

39.  Nutter, J. T. [1982] Defaults revisited or "Tell me if
     you're guessing". Proceedings of the Fourth Annual
     Conference of the Cognitive Science Society, Ann Arbor, MI,
     67-69.

40.  Shapiro, S. C.; Martins, J.; and McKay, D. [1982]
     Bi-directional inference. Proceedings of the Fourth Annual
     Meeting of the Cognitive Science Society, Ann Arbor, MI,
     90-93.

41.  Maida, A. S. and Shapiro, S. C. [1982] Intensional concepts
     in propositional semantic networks. Cognitive Science 6, 4
     (October-December), 291-330.

42.  Martins, J. P. [1983] Belief revision in MBR. Proceedings of
     the 1983 Conference on Artificial Intelligence, Rochester,
     MI.

43.  Nutter, J. T. [1983] What else is wrong with non-monotonic
     logics?: representational and informational shortcomings.
     Proceedings of the Fifth Annual Meeting of the Cognitive
     Science Society, Rochester, NY.

44.  Almeida, M. J. and Shapiro, S. C. [1983] Reasoning about the
     temporal structure of narrative texts. Proceedings of the
     Fifth Annual Meeting of the Cognitive Science Society,
     Rochester, NY.

45.  * Martins, J. P. [1983] Reasoning in Multiple Belief Spaces.
     Ph.D. Dissertation, Technical Report No. 203, Computer
     Science Department, SUNY at Buffalo, Amherst, NY, 381 pp.

46.  Martins, J. P. and Shapiro, S. C. [1983] Reasoning in
     multiple belief spaces. Proceedings of the Eighth
     International Joint Conference on Artificial Intelligence,
     William Kaufman, Los Altos, CA, 370-373.

47.  Nutter, J. T. [1983] Default reasoning using monotonic
     logic: a modest proposal. Proceedings of The National
     Conference on Artificial Intelligence, William Kaufman, Los
     Altos, CA, 297-300.

------------------------------

End of AIList Digest
********************

∂20-Sep-83  1121	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #60
Received: from SRI-AI by SU-AI with TCP/SMTP; 20 Sep 83  11:19:24 PDT
Date: Tuesday, September 20, 1983 9:41AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #60
To: AIList@SRI-AI


AIList Digest            Tuesday, 20 Sep 1983      Volume 1 : Issue 60

Today's Topics:
  AI Journals - AI Journal Changes,
  Applications - Cloud Data & AI and Music,
  Games - Go Tournament,
  Intelligence - Turing test & Definitions
----------------------------------------------------------------------

Date: Mon, 19 Sep 83 18:51 PDT
From: Bobrow.PA@PARC-MAXC.ARPA
Subject: News about the Artificial Intelligence Journal


Changes in the Artificial Intelligence Journal

Daniel G. Bobrow (Editor-in-chief)

There have been a number of changes in the Artificial Intelligence
Journal which are of interest to the AI community.

1) The size of the journal is increasing.  In 1982, the journal was
published in two volumes of three issues each (about 650 printed
pages per year).  In 1983, we increased the size to two volumes of
four issues each (about 900 printed pages per year).  In order to
accommodate the increasing number of high quality papers that are
being submitted to the journal, in 1984 the journal will be published
in three volumes of three issues each (about 1000 printed pages per
year).

2) Despite the journal size increase, North Holland will maintain the
current price of $50 per year for personal subscriptions for
individual (non-institutional) members of major AI organizations
(e.g. AAAI, SIGART).  To obtain such a subscription, members of such
organizations should send a copy of their membership acknowledgement,
and their check for $50 (made out to Artificial Intelligence) to:
        Elsevier Science Publishers
        Attn: John Tagler
        52 Vanderbilt Avenue
        New York, New York 10017
North Holland (Elsevier) will acknowledge receipt of the request for
subscription, provide information about which issues will be included
in your subscription, and when they should arrive.  Back issues are
not available at the personal rate.

3) The AIJ editorial board has recognized the need for good review
articles in subfields of AI.  To encourage the writing of such
articles, an honorarium of $1000 will be awarded the authors of any
review accepted by the journal.  Although review papers will go
through the usual review process, when accepted they will be given
priority in the publication queue.  Potential authors are reminded
that review articles are among the most cited articles in any field.

4) The publication process takes time.  To keep an even flow of
papers in the journal, we must maintain a queue of about six months
of articles.  To let people learn of important research results
before the articles themselves appear, we will publish lists of
accepted papers in earlier issues of the journal, and make such lists
available to other magazines (e.g. AAAI magazine, SIGART news).

5) New book review editor: Mark Stefik has taken the job of book
review editor for the Artificial Intelligence Journal.  The following
note from Mark describes his plans to make the book review section
much more active than it has been in the past.

                    ------------------

The Book Review Section of the Artificial Intelligence Journal

Mark Stefik - Book Review Editor

I am delighted for this opportunity to start an active review column
for AI, and invite your suggestions and participation.

        This is an especially good time to review work in artificial
intelligence.  Not only is there a surge of interest in AI, but there
are also many new results and publications in computer science, in
the cognitive sciences and in other related sciences.  Many new
projects are just beginning and finding new directions (e.g., machine
learning, computational linguistics), new areas of work are opening
up (e.g., new architectures), and others are reporting on long term
projects that are maturing (computer vision).  Some readers will want
to track progress in specialized areas; others will find inspiration
and direction from work breaking outside the field.  There is enough
new and good but unreviewed work that I would like to include two or
three book reviews in every issue of Artificial Intelligence.

        I would like this column of book reviews to become essential
reading for the scientific audience of this journal.  My goal is to
cover both scientific works and textbooks.  Reviews of scientific
work will not only provide an abstract of the material, but also show
how it fits into the body of existing work.  Reviews of textbooks
will discuss not only clarity and scope, but also how well the
textbook serves for teaching.  For controversial work of major
interest I will seek more than one reviewer.

        To get things started, I am seeking two things from the
community now.  First, suggestions of books for review.  Books
written in the past five years or so will be considered.  The scope
of the fields considered will be broad.  The main criterion will be
scientific interest to the readership.  For example, books from as
far afield as cultural anthropology or sociobiology will be
considered if they are sufficiently relevant and readable by an AI
audience.  Occasionally, important books intended for a popular
audience will also be considered.

        My second request is for reviewers.  I will be asking
colleagues for reviews of particular books, but will also be open
both to volunteers and to suggestions.  Although I will tend to solicit
reviews from researchers of breadth and maturity, I recognize that
graduate students preparing theses are some of the best-read people
in specialized areas.  For them, reviews in Artificial Intelligence
will be a good way to share the fruits of intensive reading in
thesis preparation, and also to achieve some visibility.  Reviewers
will receive a personal copy of the book reviewed.

        Suggestions will reach me at the following address.
Publishers should send two copies of works to be reviewed.


Mark Stefik
Knowledge Systems Area
Xerox Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, California  94304

ARPANET Address:  STEFIK@PARC

------------------------------

Date: Mon, 19 Sep 83 17:09:09 PDT
From: Alex Pang <v.pang@UCLA-LOCUS>
Subject: help on satellite image processing


        I'm planning to do some work on cloud formation prediction
based either purely on previous cloud formations or on them together
with some other information - e.g. pressure, humidity, and wind.  Does
anyone out there know of any existing system doing related work, and
if so, how and where can I get more information on it?  Also, do any
of you know where I can get satellite data with 3D cloud
information?
        Thank you very much.

                                        alex pang

------------------------------

Date: 16 Sep 83 22:26:21 EDT  (Fri)
From: Randy Trigg <randy%umcp-cs@UDel-Relay>
Subject: AI and music

Speaking of creativity and such, I've had an interest in AI and music
for some time.  What I'd like is any pointers to companies and/or
universities doing work in such areas as cognitive aspects of
appreciating and creating music, automated music analysis and 
synthesis, and "smart" aids for composers and students.

Assuming a reasonable response, I'll post results to the AIList.  
Thanks in advance.

Randy Trigg
...!seismo!umcp-cs!randy (Usenet)
randy.umcp-cs@udel-relay (Arpanet)

------------------------------

Date: 17 Sep 83 23:51:40-PDT (Sat)
From: harpo!utah-cs!utah-gr!thomas @ Ucb-Vax
Subject: Re: Go Tournament
Article-I.D.: utah-gr.908

I'm sure we could find some time on one of our Vaxen for a Go
tournament.  If you're writing it on some other machine, make sure it
is portable.

=Spencer

------------------------------

Date: Fri 16 Sep 83 20:07:31-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Turing test

It was once playfully proposed to permute the actors in the classical 
definition of the Turing test, and thus define an intelligent entity
as one that can tell the difference between a human and a (deceptively
programmed) computer.  This may have been prompted by the well-known
incident involving Eliza.  The result is that, as our AI systems get
better, the standard for intelligence will increase.  This definition
may even enable some latter-day Goedel to prove mathematically that
computers can never be intelligent!

                                - Richard :-)

------------------------------

Date: Fri, 16 Sep 83 19:36:53 PDT
From: harry at lbl-nmm
Subject: Psychology and Artificial Intelligence.

Members of this list might find it interesting to read an article ``In
Search of Unicorns'' by M. A. Boden (author of ``Artificial
Intelligence and Natural Man'') in The Sciences (published by the New
York Academy of Sciences).  It discusses the `computational style' in 
theoretical psychology.  It is not a technical article.

                                        Harry Weeks

------------------------------

Date: 15 Sep 83 17:10:04-PDT (Thu)
From: ihnp4!arizona!robert @ Ucb-Vax
Subject: Another Definition of Intelligence
Article-I.D.: arizona.4675


     A problem that bothers me about the Turing test is having to
provoke the machine with such specific questioning.  So jumping ahead
a couple of steps, I would accept a machine as an adequate
intelligence if it could listen to a conversation between other
intelligences, and be able to interject at appropriate points such
that these others would not be able to infer the mechanical aspect of
this new source.  Our experiences with human intelligence would make
us very suspicious of anyone or anything that sits quietly, without
new, original, or synthetic comments, while within an environment of
discussion.

     And then to fully qualify, upon overhearing these discussions
over net, I'd expect it to start conjecturing on the question of
intelligence, produce its own definition, and then start sending out
feelers to ascertain whether there is anything out there qualifying
under its definition.

------------------------------

Date: 16 Sep 83 23:11:08-PDT (Fri)
From: decvax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!speaker @ Ucb-Vax
Subject: Re: Another Definition of Intelligence
Article-I.D.: umcp-cs.2608

Finally, someone has come up with a fresh point of view in an
otherwise stale discussion!

Arizona!robert suggests that a machine could be classified as 
intelligent if it can discern intelligence within its environment, as
opposed to being prodded into displaying intelligence.  But how can we
tell if the machine really has a discerning mind?  Does it get
involved in an interesting conversation and respond with its own
ideas?  Perhaps it just sits back and says nothing, considering the
conversation too trivial to participate in.

And therein lies the problem with this idea.  What if the machine 
doesn't feel compelled to interact with its environment?  Is this a
sign of inability, or disinterest?  Possibly disinterest.  A machine
mind might not be interested in its environment, but in its own
thoughts.  Its own thoughts ARE its environment.  Perhaps it's a sign
of some mental aberration.  I'm sure that sufficiently intelligent
machines will be able to develop all sorts of wonderfully neurotic
patterns of behavior.

I know.  Let's build a machine with only a console for an output
device and wait for it to say, "Hey, anybody intelligent out there?"
"You got any VAXEN out there?"

                                                - Speaker
-- Full-Name:  Speaker-To-Animals
       Csnet:  speaker@umcp-cs
       Arpa:   speaker.umcp-cs@UDel-Relay

------------------------------

Date: 17 Sep 83 19:17:21-PDT (Sat)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!speaker @ Ucb-Vax
Subject: Life, don't talk to me about life....
Article-I.D.: umcp-cs.2628

        From:  jpj@mss
        Subject:  Re: Another Definition of Intelligence
        To:  citcsv!seismo!rlgvax!cvl!umcp-cs!speaker

        I find your notion of an artificial intelligence sitting
        back, taking in all that goes on around it, but not being
        motivated to comment (perhaps due to boredom) an amusing
        idea.  Have you read "The Restaurant at the End of the
        Universe?"  In that story is a most entertaining ai - a
        chronically depressed robot (whose name escapes me at the
        moment - I don't have my copy at hand) who thinks so much
        faster than all the mortals around it that it is always
        bored and *feels* unappreciated.  (Sounds like some of my
        students!)

Ah yes, Marvin the paranoid android.  "Here I am, brain the size of a
planet, and all they want me to do is pick up a piece of paper."

This is really interesting.  You might think that a robot with such a
huge intellect would also develop an oversized ego...  but just the
reverse could be true.  He thinks so fast and so well that he becomes
bored and disgusted with everything around himself... so he withdraws
and wishes his boredom and misery would end.

I doubt Adams had this in mind when he wrote the book, but it fits
together nicely anyway.
--
                                        - Speaker
                                        speaker@umcp-cs
                                        speaker.umcp-cs@UDel-Relay

------------------------------

End of AIList Digest
********************

∂22-Sep-83  1847	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #61
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Sep 83  18:47:28 PDT
Date: Thursday, September 22, 1983 5:15PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #61
To: AIList@SRI-AI


AIList Digest            Friday, 23 Sep 1983       Volume 1 : Issue 61

Today's Topics:
  AI Applications - Music,
  AI at Edinburgh - Request,
  Games - Prolog Puzzle Solution,
  Seminars - Talkware & Hofstadter,
  Architectures - Parallelism,
  Technical Reports - Rutgers
----------------------------------------------------------------------

Date: 20 Sep 1983 2120-PDT
From: FC01@USC-ECL
Subject: Re: Music in AI

Music in AI - look up Art Wink, formerly of the U. of Pittsburgh Dept.
of Information Science.  He had a very nice program to imitate Debussy
(experts could not tell its compositions from the originals).

------------------------------

Date: 18 Sep 83 12:01:27-PDT (Sun)
From: decvax!dartvax!lorien @ Ucb-Vax
Subject: U of Edinburgh, Scotland Inquiry
Article-I.D.: dartvax.224


Who knows anything about the current status of the Artificial
Intelligence school at the University of Edinburgh?  I've heard
they've been through hard times in recent years, what with the
Lighthill report and British funding shakeups, but what has been going
on within the past year or so?  I'd appreciate any gossip/rumors/facts,
and, if anyone knows that they're on the net, their address.

                               --decvax!dartvax!dartlib!lorien
                                 Lorien Y. Pratt

------------------------------

Date: Mon 19 Sep 83 02:25:41-PDT
From: Motoi Suwa <Suwa@Sumex-AIM>
Subject: Puzzle Solution

                 [Reprinted from the Prolog Digest.]

    Date: 14 Sep. 1983
    From: K.Handa  ETL Japan
    Subject: Another Puzzle Solution

This is a solution to Alan's puzzle introduced on 24 Aug.

  ?-go(10).

will display the ten-digit number as follows:

  -->6210001000

and

  ?-go(4).

will display:

  -->1210
  -->2020

I found the following numbers:

  6210001000
   521001000
    42101000
     3211000
       21200
        1210
        2020

The following is the complete program (DEC-10 Prolog Ver. 3):



/*** initial assertion ***/

init(D):- ass_xn(D),assert(rest(D)),!.

ass_xn(0):- !.
ass_xn(D):- D1 is D-1,asserta(x(D1,_)),asserta(n(D1)),ass_xn(D1).

/*** main program ***/

go(D):- init(D),guess(D,0).
go(_):- abolish(x,2),abolish(n,1),abolish(rest,1).

/* guess 'N'th digit */

guess(D,D):- result,!,fail.
guess(D,N):- x(N,X),var(X),!,n(Y),N=<Y,N*Y=<D,ass(N,Y),set(D,N,Y),
           N1 is N+1,guess(D,N1).
guess(D,N):- x(N,X),set(D,N,X),N1 is N+1,guess(D,N1).

/* let 'N'th digit be 'X' */

ass(N,X):- only(retract(x(N,_))),asserta(x(N,X)),only(update(1)).
ass(N,_):- retract(x(N,_)),asserta(x(N,_)),update(-1),!,fail.

only(X):- X,!.

/* 'X' 'N's appear in the sequence of digits */

set(D,N,X):- count(N,Y),rest(Z),!,Y=<X,X=<Y+Z,X1 is X-Y,set1(D,N,X1,0).

set1(_,N,0,_):- !.
set1(D,N,X,P):- n(M),P=<M,x(M,Y),var(Y),M*N=<D,ass(M,N),set(D,M,N),
              X1 is X-1,P1 is M,set1(D,N,X1,P1).

/* 'X' is the number of digits whose value is 'N' */

count(N,X):- bagof(M,M^(x(M,Z),nonvar(Z),Z=N),L),length(L,X).
count(_,0).

/* update the number of digits whose value is not yet assigned */

update(Z):- only(retract(rest(X))),Z1 is X-Z,assert(rest(Z1)).
update(Z):- retract(rest(X)),Z1 is X+Z,assert(rest(Z1)),!,fail.

/* display the result */

result:- print(-->),n(N),x(N,M),print(M),fail.
result:- nl.
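For readers without a DEC-10 Prolog at hand: the property being
searched for above appears to be the "self-descriptive number"
property, where digit i of the number equals the count of digit i in
the number.  A brute-force check (sketched here in Python for
illustration; exhaustive search is only practical for short lengths,
where the Prolog program instead prunes as it assigns digits):

```python
from itertools import product

def is_self_descriptive(s):
    # Digit at position i must equal the number of occurrences of digit i in s.
    return all(int(s[i]) == s.count(str(i)) for i in range(len(s)))

def solve(n):
    # Exhaustive search over all n-digit strings; fine for small n.
    return ["".join(map(str, t)) for t in product(range(10), repeat=n)
            if is_self_descriptive("".join(map(str, t)))]
```

solve(4) yields 1210 and 2020, matching the output of ?-go(4) above,
and is_self_descriptive("6210001000") confirms the ten-digit solution.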

------------------------------

Date: 21 Sep 83  1539 PDT
From: David Wilkins <DEW@SU-AI>
Subject: Talkware Seminars

                [Reprinted from the SU-SCORE bboard.]


           1127 TW Talkware seminar Weds. 2:15

I will be organizing a weekly seminar this fall on a new area I am 
currently developing as a research topic: the theory of "talkware".
This area deals with the design and analysis of languages that are
used in computing, but are not programming languages.  These include
specification languages, representation languages, command languages,
protocols, hardware description languages, data base query languages,
etc.  There is currently a lot of ad hoc but sophisticated practice
for which a more coherent and general framework needs to be developed.
The situation is analogous to the development of principles of
programming languages from the diversity of "coding" languages and
methods that existed in the early fifties.

The seminar will include outside speakers and student presentations of
relevant literature, emphasizing how the technical issues dealt with
in current projects fit into the development of talkware theory.  It will
meet at 2:15 every Wednesday in Jacks 301.  The first meeting will be
Wed.  Sept. 28.  For a more extensive description, see
{SCORE}<WINOGRAD>TALKWARE or {SAIL}TALKWA[1,TW].

------------------------------

Date: Thu 22 Sep 00:23
From: Jeff Shrager
Subject: Hofstadter seminar at MIT

                 [Reprinted from the CMU-AI bboard.]


Douglas Hofstadter is giving a course this semester at MIT.  I thought
that the abstract would interest some of you.  The first session takes
place today.
                          ------
"Perception, Semanticity, and Statistically Emergent Mentality"
A seminar to be given fall semester by Douglas Hofstadter

        In this seminar, I will present my viewpoint about the nature
of mind and the goals of AI.  I will try to explain (and thereby
develop) my vision of how we perceive the essence of things, filtering
out the details and getting at their conceptual core.  I call this
"deep perception", or "recognition".

        We will review some earlier projects that attacked some
related problems, but primarily we will be focussing on my own
research projects, specifically: Seek-Whence (perception of sequential
patterns), Letter Spirit (perception of the style of letters), Jumbo
(reshuffling of parts to make "well-chunked" wholes), and Deep Sea
(analogical perception).  These tightly related projects share a
central philosophy: that cognition (mentality) cannot be programmed
explicitly but must emerge "epiphenomenally", i.e., as a consequence
of the nondeterministic interaction of many independent "subcognitive"
pieces.  Thus the overall "mentality" of such a system is not directly
programmed; rather, it EMERGES as an observable (but unprogrammed)
phenomenon -- a statistical consequence of many tiny semi-cooperating
(and of course programmed) pieces.  My projects all involve certain
notions under development, such as:

-- "activation level": a measure of the estimated relevance of a given
   Platonic concept at a given time;
-- "happiness": a measure of how easy it is to accommodate a structure
   and its currently accepted Platonic class to each other;
-- "nondeterministic terraced scan": a method of homing in on the best
   category to which to assign something;
-- "semanticity": the measure of how abstractly rooted (intensional) a
   perception is;
-- "slippability": the ease of mutability of intensional
   representational structures into "semantically close" structures;
-- "system temperature": a number measuring how chaotically active the
   whole system is.

        This strategy for AI is permeated by probabilistic or
statistical ideas.  The main idea is that things need not happen in
any fixed order; in fact, that chaos is often the best path to follow
in building up order.  One puts faith in the reliability of
statistics: a sensible, coherent total behavior will emerge when there
are enough small independent events being influenced by high-level
parameters such as temperature, activation levels, happinesses.  A
challenge is to develop ways such a system can watch its own
activities and use those observations to evaluate its own progress, to
detect and pull itself out of ruts it chances to fall into, and to
guide itself toward a satisfying outcome.
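The role that "temperature" plays in such a scheme can be illustrated
with a toy stochastic chooser (a hypothetical sketch for this digest,
not code from the seminar): high temperature makes the choice nearly
random, low temperature makes the system commit to the most activated
option.

```python
import math
import random

def choose(activations, temperature):
    # Boltzmann-weighted choice: option i is picked with probability
    # proportional to exp(activations[i] / temperature).  High temperature
    # flattens the distribution (chaotic exploration); low temperature
    # sharpens it toward the most activated option (commitment).
    weights = [math.exp(a / temperature) for a in activations]
    total = sum(weights)
    r = random.uniform(0.0, total)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0.0:
            return i
    return len(weights) - 1
```

With activations [1.0, 5.0], temperature 0.1 picks option 1 almost
always, while temperature 1000 picks the two options nearly equally
often.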

        ... Prerequisites: an ability to program well, preferably in
Lisp, and an interest in philosophy of mind and artificial
intelligence.

------------------------------

Date: 18 Sep 83 22:48:56-PDT (Sun)
From: decvax!dartvax!lorien @ Ucb-Vax
Subject: Parallelism et al.
Article-I.D.: dartvax.229

The Parallelism and AI projects at the University of Maryland sound
very interesting.  I agree with an article posted a few days back that
parallel hardware won't necessarily produce any significantly new
methods of computing, as we've been running parallel virtual machines
all along.  Parallel hardware is another milestone along the road to
"thinking in parallel", however, getting away from the purely von
Neumann thinking that's done in the DP world these days.  It's always
seemed silly to me that our computers are so serial when our brains,
the primary analogy we have for "thinking machines", are so obviously
parallel mechanisms.  Finally we have the technology (software AND
hardware) to follow in our machine architectures the cognitive
concepts that evolution has already found most powerful.

I feel that the sector of the Artificial Intelligence community that
pays close attention to psychology and the workings of the human brain
deserves more attention these days, as we move from writing AI
programs that "work" (and don't get me wrong, they work very well!) to
those that have a generalizable theoretical basis.  One of these years,
and better sooner than later, we'll make a quantum leap in AI research
and articulate some of the fundamental structures and methods that are
used for thinking.  These may or may not be isomorphic to human
thinking, but in either case we'll do well to look to the human brain
for inspiration.

I'd like to hear more about the work at the University of Maryland; in
particular the prolog and the parallel-vision projects.

What do you think of the debate between what I'll call the Hofstadter 
viewpoint: that we should think long term about the future of
artificial intelligence, and the Feigenbaum credo: that we should stop
philosophizing and build something that works?  (Apologies to you both
if I've misquoted)

                            --Lorien Y. Pratt
                              decvax!dartvax!lorien
                              (Dartmouth College)

------------------------------

Date: 18 Sep 83 23:30:54-PDT (Sun)
From: pur-ee!uiucdcs!uiuccsb!cytron @ Ucb-Vax
Subject: AI and architectures - (nf)
Article-I.D.: uiucdcs.2883


Forwarded at the request of speaker:  /***** uiuccsb:net.arch /
umcp-cs!speaker / 12:20 am Sep 17, 1983 */

        The fact remains that if we don't have the algorithms for
        doing something with current hardware, we still won't be
        able to do it with faster or more powerful hardware.

The fact remains that if we don't have any algorithms to start with 
then we shouldn't even be talking implementation.  This sounds like a
software engineer's solution anyway, "design the software and then 
find a CPU to run it on."

New architectures, while not providing a direct solution to a lot of
AI problems, provide the test-bed necessary for advanced AI research.
That's why everyone wants to build these "amazingly massive" parallel
architectures.  Without them, AI research could grind to a standstill.

        To some extent these efforts change our way of thinking
        about problems, but for the most part they only speed up
        what we knew how to do already.

Parallel computation is more than just "speeding things up."  Some
problems are better solved concurrently.

        My own belief is that the "missing link" to AI is a lot of
        deep thought and hard work, followed by VLSI implementation
        of algorithms that have (probably) been tested using
        conventional software running on conventional architectures.

Gad...that's really provincial: "deep thought, hard work, followed by
VLSI implementation."  Are you willing to wait a millennium or two while
your VAX grinds through the development and testing of a truly 
high-velocity AI system?

        If we can master knowledge representation and learning, we
        can begin to get away from programming by full analysis of
        every part of every algorithm needed for every task in a
        domain.  That would speed up our progress more than new
        architectures.

I agree.  I also agree with you that hardware is not in itself a
solution and that we need more thought put to the problems of building
intelligent systems.  What I am trying to point out, however, is that
we need integrated hardware/software solutions.  Highly parallel
computer systems will become a necessity, not only for research but
for implementation.

                                                        - Speaker
-- Full-Name:  Speaker-To-Animals
Csnet:  speaker@umcp-cs
Arpa:   speaker.umcp-cs@UDel-Relay

This must be hell...all I can see are flames... towering flames!

------------------------------

Date: 19 Sep 83 9:36:35-PDT (Mon)
From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: AI and Architecture
Article-I.D.: ncsu.2338


    Sheesh.  Everyone seems so excited about whether a parallel machine
leads, or will lead, to fundamentally new things.  I agree with
someone's comment that time-sharing and multi-programming have been
conceptually quite parallel "virtual" machines for some time.
Just more and cheaper of the same.  Perhaps the added availability
will lead someone to a good idea or two about how to do
something better -- in that sense it seems certain that something
good will come of the proliferation and popularization of parallelism.
But for my money, there is nothing really, fundamentally different.

    Unless it is non-determinism.  Parallel systems tend to be less
deterministic than their simplex brethren, though vast efforts are
usually expended in an effort to stamp out this property.  Take me
for example: I am VERY non-deterministic (just ask my wife) and yet I
am also smarter than a lot of AI programs.  The breakthrough in AI/Arch
will, in my non-determined opinion, come when people stop trying to
squeeze parallel systems into the more restricted modes of simplex
systems and develop new paradigms for how to let such a system spread
its wings in a dimension OTHER THAN performance.  From a pragmatic
view, I think this will not happen until people take error recovery
and exception processing more seriously, since there is a fine line
between an error and a new thought ....
    ----GaryFostel----

------------------------------

Date: 20 Sep 83 18:12:15 PDT (Tuesday)
From: Bruce Hamilton <Hamilton.ES@PARC-MAXC.ARPA>
Reply-to: Hamilton.ES@PARC-MAXC.ARPA
Subject: Rutgers technical reports

This is probably of general interest.  --Bruce

    From: PETTY@RUTGERS.ARPA
    Subject: 1983 abstract mailing

Below is a list of our newest technical reports.

The abstracts for these are available for access via FTP with user 
account <anonymous> with any password.  The file name is:

        <library>tecrpts-online.doc

If you wish to order copies of any of these reports please send mail 
via the ARPANET to LOUNGO@RUTGERS or PETTY@RUTGERS.  Thank you!!


CBM-TR-128 EVOLUTION OF A PLAN GENERATION SYSTEM, N.S.  Sridharan,
J.L.  Bresina and C.F. Schmidt.

CBM-TR-133 KNOWLEDGE STRUCTURES FOR A MODULAR PLANNING SYSTEM, 
N.S.  Sridharan and J.L. Bresina.

CBM-TR-134 A MECHANISM FOR THE MANAGEMENT OF PARTIAL AND 
INDEFINITE DESCRIPTIONS, N.S. Sridharan and J.L. Bresina.

DCS-TR-126 HEURISTICS FOR FINDING A MAXIMUM NUMBER OF DISJOINT
BOUNDED PATHS, D. Ronen and Y. Perl.

DCS-TR-127 THE BALANCED SORTING NETWORK, M. Dowd, Y. Perl, L.
Rudolph and M. Saks.

DCS-TR-128 SOLVING THE GENERAL CONSISTENT LABELING (OR CONSTRAINT 
SATISFACTION) PROBLEM: TWO ALGORITHMS AND THEIR EXPECTED COMPLEXITIES,
B. Nudel.

DCS-TR-129 FOURIER METHODS IN COMPUTATIONAL FLUID AND FIELD 
DYNAMICS, R. Vichnevetsky.

DCS-TR-130 DESIGN AND ANALYSIS OF PROTECTION SCHEMES BASED ON THE 
SEND-RECEIVE TRANSPORT MECHANISM, (Thesis) R.S.  Sandhu.  (If you wish
to order this thesis, a pre-payment of $15.00 is required.)

DCS-TR-131 INCREMENTAL DATA FLOW ANALYSIS ALGORITHMS, M.C.  Paull 
and B.G.  Ryder.

DCS-TR-132 HIGH ORDER NUMERICAL SOMMERFELD BOUNDARY CONDITIONS:  
THEORY AND EXPERIMENTS, R. Vichnevetsky and E.C. Pariser.

LCSR-TR-43 NUMERICAL METHODS FOR BASIC SOLUTIONS OF GENERALIZED 
FLOW NETWORKS, M. Grigoriadis and T. Hsu.

LCSR-TR-44 LEARNING BY RE-EXPRESSING CONCEPTS FOR EFFICIENT 
RECOGNITION, R. Keller.

LCSR-TR-45 LEARNING AND PROBLEM SOLVING, T.M. Mitchell.

LRP-TR-15 CONCEPT LEARNING BY BUILDING AND APPLYING 
TRANSFORMATIONS BETWEEN OBJECT DESCRIPTIONS, D. Nagel.

------------------------------

End of AIList Digest
********************

∂25-Sep-83  1736	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #62
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Sep 83  17:35:28 PDT
Date: Sunday, September 25, 1983 4:27PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #62
To: AIList@SRI-AI


AIList Digest            Sunday, 25 Sep 1983       Volume 1 : Issue 62

Today's Topics:
  Language Understanding & Scientific Method,
  Conferences - COLING 84
----------------------------------------------------------------------

Date: 19 Sep 83 17:50:32-PDT (Mon)
From: harpo!utah-cs!shebs @ Ucb-Vax
Subject: Re: Natural Language Understanding
Article-I.D.: utah-cs.1914

Lest usenet readers think things had gotten silent all at once, here's
an article by Fernando Pereira that (apparently and inexplicably) was
*not* sent to usenet, and my reply (fortunately, I now have read-only
access to Arpanet, so I was able to find out about this).
                        ←←←←←←←←←←←←←←←←←←←←←

    Date: Wed 31 Aug 83 18:42:08-PDT
    From: PEREIRA@SRI-AI.ARPA
    Subject: Solutions of the natural language analysis problem

[I will abbreviate the following since it was distributed in V1 #53
on Sep. 1.  -- KIL]

Given the downhill trend of some contributions on natural language 
analysis in this group, this is my last comment on the topic, and is 
essentially an answer to Stan the leprechaun hacker (STLH for short).

[...]

Lack of rigor follows from lack of method. STLH tries to bludgeon us 
with "generating *all* the possible meanings" of a sentence.  Does he 
mean ALL of the INFINITY of meanings a sentence has in general? Even 
leaving aside model-theoretic considerations, we are all familiar with

        he wanted me to believe P so he said P
        he wanted me to believe not P so he said P because he thought
           that I would think that he said P just for me to believe P
           and not believe it
        and so on ...

in spy stories.

[...]

Fernando Pereira
                         ←←←←←←←←←←←←←←←←←←←

The level of discussion *has* degenerated somewhat, so let me try to
bring it back up again.  I was originally hoping to stimulate some
debate about certain assumptions involved in NLP, but instead I seem
to see a lot of dogma, which is *very* dismaying.  Young idealistic me
thought that AI would be the field where the most original thought was
taking place, but instead everyone seems to be divided into warring
factions, each of whom refuses to accept the validity of anybody
else's approach.  Hardly seems scientific to me, and certainly other
sciences don't evidence this problem (perhaps there's some fundamental
truth here - that the nature of epistemology and other AI activities
are such that it's very difficult to prevent one's thought from being
trapped into certain patterns - I know I've been caught a couple
times, and it was hard to break out of the habit - more on that later).

As a colleague of mine put it, we seem to be suffering from a 
"difference in context".  So let me describe the assumptions 
underpinning my theory (yes I do have one):

1. Language is a very fuzzy thing.  More precisely, the set of sound
strings meaningful to a human is almost (if not exactly) the set of
all possible sound strings.  Now, before you flame, consider:  Humans
can get at least *some* understanding out of a nonsense sequence,
especially if they have any expectations about what they're hearing
(this has been demonstrated experimentally) although it will likely be
wrong.  Also, they can understand mispronounced or misspelled words,
sentences with missing words, sentences with repeated words, sentences
with scrambled word order, sentences with mixed languages (I used to
have fun by speaking English using German syntax, and you can
sometimes see signs using English syntax with "German" words), and so
forth.  Language is also used creatively (especially netters!).  Words
are continually invented, metaphors are created and mixed in novel
ways. I claim that there is no rule of grammar that cannot be
violated.  Note that I have said *nothing* about changes of meaning,
nor have I claimed that one could get much of anything out of a random
sequence of words strung together.  I have only claimed that the set
of linguistically valid utterances is actually a large fuzzy set (in
the technical sense of "fuzzy").  If you accept this, the implications
for grammar are far-reaching
- in fact, it may be that classical grammar is a curious but basically
irrelevant description of language (however, I'm not completely
convinced of that).
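One way to render the "large fuzzy set" claim concretely (a toy model
of my own, not a standard one): give each utterance a graded
membership in [0,1] that decays with its distance from known
well-formed sentences, rather than a binary grammatical/ungrammatical
verdict.

```python
def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def membership(utterance, known_forms):
    # Graded linguistic validity: 1.0 for an exact known form, decaying
    # toward 0.0 as the utterance drifts from every known form.
    d = min(edit_distance(utterance, k) for k in known_forms)
    return max(0.0, 1.0 - d / max(len(utterance), 1))
```

On this model a misspelled sentence keeps most of its membership:
against the known form "john kicked the bucket", the garbled "john
kiked teh bucket" still scores well above zero, while a random string
scores at or near zero.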

2. Meaning and interpretation are distinct.  Perhaps I should follow
convention and say "s-meaning" and "s-interpretation", to avoid
terminology trouble.  I think it's noncontroversial that the "true
meaning" of an utterance can be defined as the totality of response to
that utterance.  In that case, s-meaning is the individual-independent
portion of meaning (I know, that's pretty vague.  But would saying
that 51% of all humans must agree on a meaning make it any more
precise?  Or that there must be a predicate to represent that meaning?
Who decides which predicate is appropriate?).  Then s-interpretation
is the component that depends primarily on the individual and his
knowledge, etc.

Let's consider an example - "John kicked the bucket."  For most 
people, this has two s-meanings - the usual one derived directly from
the words and an idiomatic way of saying "John died".  Of course,
someone may not know the idiom, so they can assign only one s-meaning.
But as Mr. Pereira correctly points out, there are an infinitude of
s-interpretations, which will completely vary from individual to
individual.  Most can be derived from the s-meaning, for instance the
convoluted inferences about belief and intention that Mr. Pereira
gave.  On the other hand, I don't normally make those
s-interpretations, and a "naive" person might *never* do so.  Other
parts of the s-interpretation could be (if the second s-meaning above
was intended) that the speaker tends to be rather blunt; certainly a
part of the response to the utterance, but less clearly part of a
"meaning".  Even s-meanings are pretty volatile though - to use
another spy story example, the sentence might actually be a code
phrase with a completely arbitrary meaning!

3. Cognitive science is relevant to NLP.  Let me be the first to say
that all of its results are at best suspect.  However, the apparent
inclination of many AI people to regard the study of human cognition
as "unscientific" is inexplicable.  I won't claim that my program
defines human cognition, since that degree of hubris requires at least
a PhD :-) .  But cognitive science does have useful results, like the
aforementioned result about making sense out of nonsense.  Also, a lot
of common-sense results can be described more accurately by doing
experiments.  "Don't think of a zebra for the next ten minutes" - my
informal experimentation indicates that *nobody* is capable - that
seems to say a lot about how humans operate.  Perhaps cognitive
science gets a bad review because much of it is Gedanken experiments;
I don't need tests on a thousand subjects to know that most kinds of 
ungrammaticality (such as number agreement) are noticeable, but rarely
affect my understanding of a sentence.  That's why I say that humans
are experts at their own languages - we all (at least intuitively)
understand the different parts of speech and how sentences are put
together, even though we have difficulty expressing that knowledge
(sounds like the knowledge engineer's problems in dealing with
experts!).  BTW, we *have* had a non-expert (a CS undergrad) add
knowledge to our NLP system, and the folks at Berkeley have reported
similar results [Wilensky81].

4.  Theories should reflect reality.  This is especially important
because the reverse is quite pernicious - one ignores or discounts
information not conforming to one's theories.  The equations of motion
are fine for slow-speed behavior, but fail as one approaches c (the
language or the velocity? :-) ).  Does this mean that Lorentz
contractions are experimental anomalies?  The grammar theory of
language is fine for very restricted subsets of language, but is less
satisfactory at explaining the phenomena mentioned in 1., and it does
not suggest how organisms *learn* language.  Mr. Pereira's suggestion that
I do not have any kind of theoretical basis makes me wonder if he
knows what Phrase Analysis *is*, let alone its justification.
Wilensky and Arens of UCB have IJCAI-81 papers (and tech reports) that
justify the method much better than I possibly could.  My own
improvement was to make it follow multiple lines of parsing (have to
be contrite on this; I read Winograd's new book recently and what I
have is really a sort of active chart parser; also noticed that he
gives nary a mention to Phrase Analysis, which is inexcusable - that's
the sort of thing I mean by "warring factions").

4a.  Reflecting reality means "all of it" or (less preferable) "as
much as possible".  Most of the "soft sciences" get their bad 
reputation by disregarding this principle, and AI seems to have a 
problem with that also.  What good is a language theory that cannot
account for language learning, creative use of language, and the
incredible robustness of language understanding?  The definition of
language by grammar cannot properly explain these - the first because
of results (again mentioned by Winograd) that children receive almost
no negative examples, and a grammar cannot be learned from positive
examples alone; the third because the grammar must be extended and
extended until it recognizes all strings as valid.  So
perhaps the classical notion of grammar is like classical mechanics -
useful for simple things, but not so good for photon drives or
complete NLP systems.  The basic notions in NLP have been thoroughly
investigated;

IT'S TIME TO DEVELOP THEORIES THAT CAN EXPLAIN *ALL* ASPECTS OF 
LANGUAGE BEHAVIOR!


5. The existence of "infinite garden-pathing".  To steal an example
from [Wilensky80],

        John gave Mary a piece of his.........................mind.

Only the last word disambiguates the sentence.  So now, what did *you*
fill in, before you read that last word?  There are even more
interesting situations.  Part of my secret research agenda (don't tell
Boeing!) has been the understanding of jokes, particularly word plays.
Many jokes are multi-sentence versions of garden-pathing, where only
the punch line disambiguates.  A surprising number of crummy sitcoms
can get a whole half-hour because an ambiguous sentence is interpreted
differently by two people (a random thought - where *did* this notion
of sentence as fundamental structure come from?  Why don't speeches
and discourses have a "grammar" precisely defining *their* 
structure?).  In general, language is LR(lazy eight).
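Since my parser follows multiple lines of parsing, the garden-path
example can be sketched like this (Python; the two candidate readings
are invented for illustration):

```python
# Sketch of "infinite garden-pathing": keep every reading alive and let
# each incoming word prune the set.  Only the final word decides.
# The candidate readings are invented for illustration.

READINGS = {
    "possession": "john gave mary a piece of his cake".split(),
    "idiom":      "john gave mary a piece of his mind".split(),
}

def prune(words):
    """Return the set of readings still consistent with the words so far."""
    live = set(READINGS)
    for i, w in enumerate(words):
        live = {r for r in live if i < len(READINGS[r]) and READINGS[r][i] == w}
    return live

print(prune("john gave mary a piece of his".split()))       # both readings still live
print(prune("john gave mary a piece of his mind".split()))  # only the idiom survives
```

Until the last word arrives, the parser simply has no basis for
choosing - which is exactly the experience of filling in the blank
above.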

Miscellaneous comments:

This has gotten pretty long (a lot of accusations to respond to!), so
I'll save the discussion of AI dogma, fads, etc for another article.

When I said that "problems are really concerned with the acquisition
of linguistic knowledge", that was actually an awkward way to say
that, having solved the parsing problem, my research interests
switched to the implementation of full-scale error correction and
language learning (notice that Mr. Pereira did not say "this is
ambiguous - what did you mean?", he just assumed one of the meanings
and went on from there.  Typical human language behavior, and
inadequately explained by most existing theories...).  In fact, I have
a detailed plan for implementation, but grad school has interrupted
that and it may be a while before it gets done.  So far as I can tell,
the implementation of learning will not be unusually difficult.  It 
will involve inductive learning, manipulation of analogical 
representations to acquire meanings ("an mtrans is like a ptrans, but
with abstract objects"....), and other good things.  The 
nonrestrictive nature of Phrase Analysis seems to be particularly 
well-suited to language knowledge acquisition.

Thanks to Winograd (really quite a good book, but biased) I now know
what DCG's are (the paper I referred to before was [Pereira80]).  One
of the first paragraphs in that paper was revealing.  It said that
language was *defined* by a grammar, then proceeded from there.
(Different assumptions....) Since DCG's were compared only to ATN's,
it was of course easy to show that they were better (almost any
formalism is better than one from ten years before, so that wasn't
quite fair).  However, I fail to see any important distinction between
a DCG and a production rule system with backtracking.  In that case, a
DCG is really a special case of a Phrase Analysis parser (I did at one
time tinker with the notion of compiling phrase rules into OPS5 rules,
but OPS5 couldn't manage it very well - no capacity for the
parallelism that my parser needed).  I am of course interested in
being contradicted on any of this.
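For the record, here is the sort of resemblance I mean, sketched in
Python (the grammar is a made-up toy, and this is a plain backtracking
recognizer, not the DCG formalism itself):

```python
# A DCG rule like  s --> np, vp.  behaves, operationally, like a
# production rule tried with backtracking.  This toy recognizer makes
# the analogy concrete; the grammar is invented for illustration.

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["john"], ["mary"]],
    "VP": [["runs"], ["sees", "NP"]],
}

def parse(symbol, words, pos=0):
    """Yield every input position reachable by matching `symbol` at `pos`."""
    if symbol not in GRAMMAR:                    # terminal symbol
        if pos < len(words) and words[pos] == symbol:
            yield pos + 1
        return
    for production in GRAMMAR[symbol]:           # try each rule in turn...
        positions = [pos]                        # ...keeping all paths (backtracking)
        for part in production:
            positions = [q for p in positions for q in parse(part, words, p)]
        yield from positions

def accepts(sentence):
    words = sentence.split()
    return len(words) in parse("S", words)

print(accepts("john sees mary"))   # True
print(accepts("sees john"))        # False
```

Each grammar symbol is a production left-hand side, and failure just
falls through to the next alternative - which is all the backtracking
a DCG does, as far as I can tell.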

Mr. Pereira says he doesn't know what the "Schank camp" is.  If that's
so then he's the only one in NLP who doesn't.  I have heard some
highly uncomplimentary comments about Schank and his students.  But
then that's the price for going against conventional wisdom...

Sorry for the length, but it *was* time for some light rather than
heat!  I have refrained from saying much of anything about my theories
of language understanding, but will post details if accusations
warrant :-)

                                Theoretically yours*,
                                Stan (the leprechaun hacker) Shebs
                                utah-cs!shebs

* love those double meanings!

[Pereira80] Pereira, F.C.N., and Warren, D.H.D. "Definite Clause
    Grammars for Language Analysis - A Survey of the Formalism and
    a Comparison with Augmented Transition Networks", Artificial
    Intelligence 13 (1980), pp 231-278.

[Wilensky80] Wilensky, R. and Arens, Y.  PHRAN: A Knowledge-based
    Approach to Natural Language Analysis (Memorandum No.
    UCB/ERL M80/34).  University of California, Berkeley, 1980.

[Wilensky81] Wilensky, R. and Morgan, M.  One Analyzer for Three
    Languages (Memorandum No. UCB/ERL M81/67). University of
    California, Berkeley, 1981.

[Winograd83] Winograd, T.  Language as a Cognitive Process, vol. 1:
    Syntax.  Addison-Wesley, 1983.

------------------------------

Date: Fri 23 Sep 83 14:34:44-CDT
From: Lauri Karttunen <Cgs.Lauri@UTEXAS-20.ARPA>
Subject: COLING 84 -- Call for papers

               [Reprinted from the UTexas-20 bboard.]


                              CALL FOR PAPERS

   COLING 84, TENTH INTERNATIONAL CONFERENCE ON COMPUTATIONAL LINGUISTICS

COLING 84 is scheduled for 2-6 July 1984 at Stanford University,
Stanford, California.  It will also constitute the 22nd Annual Meeting
of the Association for Computational Linguistics, which will host the
conference.

Papers for the meeting are solicited on linguistically and
computationally significant topics, including but not limited to the
following:

   o Machine translation and machine-aided translation.

   o Computational applications in syntax, semantics, anaphora, and
       discourse.

   o Knowledge representation.

   o Speech analysis, synthesis, recognition, and understanding.

   o Phonological and morpho-syntactic analysis.

   o Algorithms.

   o Computational models of linguistic theories.

   o Parsing and generation.

   o Lexicology and lexicography.

Authors wishing to present a paper should submit five copies of a
summary not more than eight double-spaced pages long, by 9 January
1984 to: Prof.  Yorick Wilks, Languages and Linguistics, University of
Essex, Colchester, Essex, CO4 3SQ, ENGLAND [phone: 44-(206)862 286;
telex 98440 (UNILIB G)].

It is important that the summary contain sufficient information,
including references to relevant literature, to convey the new ideas
and allow the program committee to determine the scope of the work.
Authors should clearly indicate to what extent the work is complete
and, if relevant, to what extent it has been implemented.  A summary
exceeding eight double-spaced pages in length may not receive the
attention it deserves.

Authors will be notified of the acceptance of their papers by 2 April
1984.  Full length versions of accepted papers should be sent by 14
May 1984 to Dr. Donald Walker, COLING 84, SRI International, Menlo
Park, California, 94025, USA [phone: 1-(415)859-3071; arpanet:
walker@sri-ai].

Other requests for information should be addressed to Dr. Martin Kay,
Xerox PARC, 3333 Coyote Hill Road, Palo Alto, California 94304, USA 
[phone: 1-(415)494-4428; arpanet: kay@parc].


------------------------------

End of AIList Digest
********************

∂25-Sep-83  2055	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #63
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Sep 83  20:54:48 PDT
Date: Sunday, September 25, 1983 7:47PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #63
To: AIList@SRI-AI


AIList Digest            Monday, 26 Sep 1983       Volume 1 : Issue 63

Today's Topics:
  Robotics - Physical Strength,
  Parallelism & Physiology,
  Intelligence - Turing Test,
  Learning & Knowledge Representation,
  Rational Psychology
----------------------------------------------------------------------

Date: 21 Sep 83 11:50:31-PDT (Wed)
From: ihnp4!mtplx1!washu!eric @ Ucb-Vax
Subject: Re: Strong, agile robot
Article-I.D.: washu.132

I just glanced at that article for a moment, noting the leg mechanism 
detail drawing.  It did not seem to me that the beastie could move
very fast.  Very strong IS nice, tho...  Anyway, the local supplier of
that mag sold them all.  Anyone remember if it said how fast it could
move, and with what payload?

eric ..!ihnp4!washu!eric

------------------------------

Date: 23 Sep 1983 0043-PDT
From: FC01@USC-ECL
Subject: Parallelism

I thought I might point out that virtually no machine built in the
last 20 years is actually lacking in parallelism. In reality, just as
the brain has many neurons firing at any given time, computers have
many transistors switching at any given time. Just as the cerebellum
is able to maintain balance without the higher brain functions in the
cerebrum explicitly controlling the IO, most current computers have IO
controllers capable of handling IO while the CPU does other things.
Just as people have faster short term memory than long term memory but
less of it, computers have faster short term memory than long term 
memory and use less of it. These are all results of cost/benefit
tradeoffs for each implementation, just as I presume our brains and
bodies are. Don't be so fast to think that real computer designers are
ignorant of physiology. The trend towards parallelism now is more like
the human social system of having a company work on a problem. Many
brains, each talking to each other when they have questions or
results, each working on different aspects of a problem. Some people
have breakdowns, but the organization keeps going.  Eventually it comes
up with a product; although it may not really solve the problem posed
at the beginning, it may have solved a related problem or found a
better problem to solve.

        Another copyrighted excerpt from my not yet finished book on
computer engineering modified for the network bboards, I am ever
yours,
                                        Fred

------------------------------

Date: 14 Sep 83 22:46:10-PDT (Wed)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: in defense of Turing - (nf)
Article-I.D.: uiucdcs.2822



Two points where Martin Taylor's response reveals that I was not
emphatic enough [you see, it is possible to underflame, and thus be 
misunderstood!] in my comments on the Turing test.

1. One of Dennett's main points (which I did not mention, since David
Rogers had already posted it in the original note of this string) is
that the unrestricted Turing-like test of which he spoke is a
SUFFICIENT, but not a NECESSARY test for intelligence comparable to
that possessed and displayed by most humans in good working order.  [I
myself would add that it tests as much for mastery of human
communication skills (which are indeed highly dependent on particular
cultures) as it does for intelligence.] That is to say, if a program
passes such a rigorous test, then the practitioners of AI may
congratulate themselves for having built such a clever beast.
However, a program which fails such a test need not be considered
unintelligent.  Indeed, a human who fails such a test need not be
considered unintelligent -- although one would probably consider
him/her to be of substandard intelligence, or of impaired
intelligence, or dyslexic, or incoherent, or unconscious, or amnesic,
or aphasic, or drunk (i.e. disabled in some fashion).

2. I did not post "a set of criteria which an AI system should pass to
be accepted as human-like at a variety of levels."  I posted a set of
tests by which to gauge progress in the field of AI.  I don't imagine
that these tests have anything to do with human-ness.  I also don't
imagine that many people who discuss and discourse upon "intelligence"
have any coherent definition for what it might be.


Other comments that seem relevant (but might not be)
----- -------- ---- ---- -------- ---- ----- --- ---

Neither Dennett's test, nor my tests are intended to discern whether
or not the entity in question possesses a human brain.

In addition to flagrant use of hindsight, my tests also reveal my bias
that science is an endeavor which requires intelligence on the part of
its human practitioners.  I don't mean to imply that it is the only
such domain.  Other domains which require that the people who live in 
them have "smarts" are puzzle solving, language using, language
learning (both first and second), etc.  Other tasks not large enough
to qualify as domains that require intelligence (of a degree) from
people who do them include: figuring out how to use a paper clip or a
stapler (without being told or shown), figuring out that someone was 
showing you how to use a stapler (without being told that such
instruction was being given), improvising a new tool or method for a
routine task that one is accustomed to doing with an old tool or
method, realizing that an old method needs improvement, etc.

The interdependence of intelligence and culture is much more important
than we usually give it credit for.  Margaret Mead must have been
quite a curiosity to the peoples she studied.  Imagine that a person
of such a different and strange (to us) culture could be made to
understand enough about machines and the Turing test so that he/she 
could be convinced to serve as an interlocutor...  On second thought,
that opens up such a can of worms that I'd rather deny having proposed
it in the first place.

------------------------------

Date: 19 Sep 83 17:43:53-PDT (Mon)
From: harpo!utah-cs!shebs @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: utah-cs.1913

I just read Jon Doyle's article about Rational Psychology in the
latest AI Magazine (Fall '83), and am also very interested in the
ideas therein.  The notion of trying to find out what is *possible* 
for intelligences is very intriguing, not to mention the idea of
developing some really sound theories for a change.

Perhaps I could mention something I worked on a while back that
appears to be related.  Empirical work in machine learning suggests
that there are different levels of learning - learning by being
programmed, learning by being told, learning by example, and so forth,
with the levels being ordered by their "power" or "complexity",
whatever that means.  My question: is there something fundamental
about this classification?  Are there other levels?  Is there a "most
powerful" form of learning, and if so, what is it?

I took the approach of defining "learning" as "behavior modification",
even though that includes forgetting (!), since I wasn't really
concerned with whether the learning resulted in an "improvement" in
behavior or not.  The model of behavior was somewhat interesting.
It's kind of a dualistic thing, consisting of two entities:  the
organism and the environment.  The environment is everything outside,
including the organism's own physical body, while the organism is
more or less equivalent to a mind.  Each of these has a state, and
behavior can be defined as functions mapping the set of all states to
itself.  Both the environment and the organism have behaviors that can
be treated in the same way (that is, they are like mirror images of
each other).  The whole development is too elaborate for an ASCII
terminal, but it boiled down to this:  that since learning is a part
of behavior, but it also *modifies* behavior, then there is a part of
the behavior function that is self-modifying.  One can then define
"1st order learning" as that which modifies ordinary behavior.  2nd
order learning would be "learning how to learn", 3rd order would be
"learning how to learn how to learn" (whatever *that* means!).  The
definition of these is more precise than my Anglicization here, and
seems to indicate a whole infinite hierarchy of learning types, each
supposedly more powerful than the last.  It doesn't do much for my
original questions, because the usual types of learning are all 1st
order - although they don't have to be.  Lenat's work on learning
heuristics might be considered 2nd order, and if you look at it in the
right way, it may be that EURISKO actually implements all
orders of learning at the same time, so the above discussion is 
garbage (sigh).
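To make the orders a bit more concrete, here's a toy Python rendering
(everything about it is illustrative - real learning is obviously not
a one-line adjustment):

```python
# Behavior maps state to state.  1st-order learning returns a modified
# behavior; 2nd-order learning returns a modified learner.  Each level
# is a function that rewrites the level below it.

def behavior(state):
    return state + 1                       # ordinary behavior: take a step

def learn(beh, feedback):
    """1st-order learning: produce a new behavior from the old one."""
    return lambda s: beh(s) + feedback     # adjust by observed feedback

def learn_to_learn(learner, scale):
    """2nd-order learning: produce a new 1st-order learner."""
    return lambda beh, fb: learner(beh, fb * scale)

improved = learn(behavior, 2)
print(improved(0))                         # 3

cautious_learn = learn_to_learn(learn, 0.5)
print(cautious_learn(behavior, 2)(0))      # 2.0
```

Nothing stops you from applying learn_to_learn to its own results,
which is where the infinite hierarchy comes from.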

Another question that has concerned me greatly (particularly since
building my parser) is the relation of the Halting Problem to AI.  My
program was basically a production system, and had an annoying
tendency to get caught in infinite loops of various sorts.  More
misfeatures than bugs, though, since the theory did not expressly
forbid such loops!  To take a more general example, why don't circular
definitions cause humans to go catatonic?  What is the mechanism that
seems to cut off looping?  Do humans really beat the Halting Problem?
One possible mechanism is that repetition is boring, and so all loops
are cut off at some point or else pushed so far down on the agenda of
activities that they are effectively terminated.  What kind of theory
could explain this?
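The "repetition is boring" mechanism is easy to sketch (Python; note
this is plain cycle detection over states, not a solution to the
Halting Problem - it only catches exact repeats):

```python
# Remember every state visited; cut the computation off the moment a
# state repeats.  This mimics "boredom" for any process whose looping
# shows up as exact state repetition.

def run_until_bored(step, state, limit=1000):
    """Iterate `step` from `state` until a repeat (or `limit` steps)."""
    seen = set()
    while state not in seen and limit > 0:
        seen.add(state)
        state = step(state)
        limit -= 1
    return state

# A computation that loops: 0 -> 1 -> 2 -> 0 -> ...
print(run_until_bored(lambda s: (s + 1) % 3, 0))   # 0 -- cut off on the revisit
```

A subtle loop that never reproduces a state exactly slips right past
this, which matches the observation that people do get stuck on
sufficiently disguised circularities.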

Yet another (last one folks!) question is one that I raised a while
back, about all representations reducing down to attribute-value
pairs.  Yes, they used to be fashionable but are now out of style, but
I'm talking about a very deep underlying representation, in the same
way that the syntax of s-expressions underlies Lisp.  Counterexamples 
to my conjecture about AV-pairs being universal were algebraic 
expressions (which can be turned into s-expressions, which can be 
turned into AV-pairs) and continuous values, but they must have *some*
closed form representation, which can then be reduced to AV-pairs.  So
I remained unconvinced that the notion of objects with AV-pairs 
attached is *not* universal (of course, for some things, the
representation is so primitive as to be as bad as Fortran, but then
this is an issue of possibility, not of goodness or efficiency).
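For the algebraic-expression counterexample, the reduction goes like
this (Python; the op/arg1/arg2 attribute names are one arbitrary
encoding among many):

```python
# Flatten a nested s-expression into a table of objects, each carrying
# only attribute-value pairs.  Atoms stand for themselves; every
# compound term becomes a generated object name.

def to_av(sexpr, table, counter):
    """Reduce ('+', 'x', ('*', 'y', 2)) to AV-pairs, returning the root."""
    if not isinstance(sexpr, tuple):
        return sexpr                       # atoms need no object
    counter[0] += 1
    name = "node%d" % counter[0]
    table[name] = {"op":   sexpr[0],
                   "arg1": to_av(sexpr[1], table, counter),
                   "arg2": to_av(sexpr[2], table, counter)}
    return name

table = {}
root = to_av(("+", "x", ("*", "y", 2)), table, [0])
print(root)          # node1
print(table[root])
```

The expression's structure survives intact as attribute values that
point at other objects, which is all the universality claim needs.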

Looking forward to comments on all of these questions...

                                        stan the l.h.
                                        utah-cs!shebs

------------------------------

Date: 22 Sep 83 11:26:47-PDT (Thu)
From: ihnp4!drux3!drufl!samir @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: drufl.663

        To me personally, Rational Psychology is a misnomer.
"Rational" negates what "Psychology" wants to understand.

Flames to /dev/null.
Interesting discussions welcome.


                                    Samir Shah
                                    drufl!samir
                                    AT&T Information Systems, Denver.

------------------------------

Date: 22 Sep 83 17:12:11-PDT (Thu)
From: ihnp4!houxm!hogpc!houti!ariel!norm @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: ariel.456

         Samir's view:  "To me personally, Rational Psychology
                         is a misnomer. "Rational" negates
                         what "Psychology" wants to understand."


How so?

Can you support your claim? What does psychology want to understand
that Rationality negates?  Psychology is the Logos of the Psyche or
the logic of the psyche.  How does one understand without logic?  How
does one understand without rationality?  What is understand?  Isn't
language itself dependent upon the rational faculty, or more
specifically, upon the ability to form concepts, as opposed to
percepts?  Can you understand without language?  To be totally without
rationality (lacking the functional capacity for rationality
- the CONCEPTUAL faculty) would leave you without language, and
therefore without understanding.  In what TERMS is something said to
be understood?  How can terms have meaning without rationality?

Or perhaps you might claim that because men are not always rational
that man does not possess a rational faculty, or that it is defective,
or inadequate?  How about telling us WHY you think Rational negates
Psychology?

These issues are important to AI, psychology and philosophy
students...  The day may not be far off when AI research yields
methods of feature abstraction and integration that approximate
percept-formation in humans.  The next step, concept formation, will
be much harder.  How does an epistemology come about?  What are the
sequential steps necessary to form an epistemology of any kind?  By
what method does the mind (what's that?) integrate percepts into
concepts, make identifications on a conceptual level ("It is an X"),
justify its identifications ("and I know it is an X because..."), and
then decide (what's that?) what to do about it ("...so therefore I
should do Y")?

Do you seriously think that understanding these things won't take
Rationality?

Norm Andrews, AT&T Information Systems, Holmdel, N.J. ariel!norm

------------------------------

Date: 22 Sep 83 12:02:28-PDT (Thu)
From: decvax!genrad!mit-eddie!mit-vax!eagle!mhuxi!mhuxj!mhuxl!achilles
      !ulysses!princeton!leei@Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: princeto.77

I really think that the ability that we humans have that allows us to
avoid looping is the simple ability to recognize a loop in our logic
when it happens.  This comes as a direct result of our tendency for
constant self-inspection and self-evaluation.  A machine with this
ability, and the ability to inspect its own self-inspections . . .,
would probably also be able to "solve" the halting problem.

Of course, if the loop is too subtle or deep, then even we cannot see
it.  This may explain the continued presence of various belief systems
that rely on inherently circular logic to get past their fundamental
problems.


                                        -Lee Iverson
                                        ..!princeton!leei

------------------------------

End of AIList Digest
********************

∂26-Sep-83  2348	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #64
Received: from SRI-AI by SU-AI with TCP/SMTP; 26 Sep 83  23:47:27 PDT
Date: Monday, September 26, 1983 9:28PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #64
To: AIList@SRI-AI


AIList Digest            Tuesday, 27 Sep 1983      Volume 1 : Issue 64

Today's Topics:
  Database Systems - DBMS Software Available,
  Symbolic Algebra - Request for PRESS,
  Humor - New Expert Systems,
  AI at Edinburgh - Michie & Turing Institute,
  Rational Psychology - Definition,
  Halting Problem & Learning,
  Knowledge Representation - Course Announcement
----------------------------------------------------------------------

Date: 21 Sep 83 16:17:08-PDT (Wed)
From: decvax!wivax!linus!philabs!seismo!hao!csu-cs!denelcor!pocha@Ucb-Vax
Subject: DBMS Software Available
Article-I.D.: denelcor.150

Here are 48 vendors of the most popular DBMS packages, which will be
presented at the National Database & 4th Generation Language Symposium,
Boston, Dec. 5-8, 1983, Radisson-Ferncroft Hotel, 50 Ferncroft Rd.,
Danvers, MA.  For information write: Software Institute of America,
339 Salem St., Wakefield, MA 01880, (617)246-4280.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Applied Data Research   DATACOM, IDEAL |Mathematica Products   RAMIS II
Battelle - - - - - - -  BASIS          |Manager Software Prod.  DATAMANAGER
Britton-Lee             IDM            |                        DESIGNMANAGER
Cincom Systems          TIS, TOTAL,    |                        SOURCEMANAGER
                        MANTIS         |National CSS, Inc.      NOMAD2
Computer Associates     CA-UNIVERSE    |Oracle Corp.            ORACLE
Computer Co. of America MODEL 204      |Perkin-Elmer            RELIANCE
                        PRODUCT LINE   |Prime Computer          PRIME DBMS
Computer Techniques     QUEO-IV        |                        INFORMATION
Contel - - - - - - - -  RTFILE         |Quasar Systems          POWERHOUSE
Cullinet Software       IDMS, ADS      |                        POWERPLAN
Database Design, Inc.   DATA DESIGNER  |Relational Tech. Inc.   INGRES
Data General            DG/DBMS        |Rexcom Corp.            REXCOM
                        PRESENT        |Scientific Information  SIR/DBMS
Digital Equipment Co.   VAX INFO. ARCH |Seed Software           SEED
Exact Systems & Prog.   DNA-4          |Sensor Based System     METAFILE
Henco Inc.              INFO           |Software AG of N.A.     ADABAS
Hewlett Packard         IMAGE          |Software House          SYSTEM 1022
IBM Corp.               SQL/DS, DB2    |Sydney Development Co.  CONQUER
Infodata Systems        INQUIRE        |Tandem Computers        ENCOMPASS
Information Builders    FOCUS          |Tech. Info. Products    IP/3
Intel Systems Corp.     SYSTEM 2000    |Tominy, Inc.            DATA BASE-PLUS
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
                                     John Pocha
                                    Denelcor, Inc.
                                    17000 E. Ohio Place
                                    Aurora, Colorado 80017
                                    work (303)337-7900 x379
                                    home (303)794-5190
                                 {csu-cs|nbires|brl-bmd}!denelcor!pocha

------------------------------

Date: 23 Sep 83 19:04:12-PDT (Fri)
From: decvax!tektronix!tekchips!wm @ Ucb-Vax
Subject: Request for PRESS
Article-I.D.: tekchips.317

Does anyone know where I can get the PRESS algebra system, by Alan
Bundy, written in Prolog?

                        Wm Leler
                        tektronix!tekchips!wm
                        wm.Tektronix@Rand-relay

------------------------------

Date: 23 Sep 83 1910 EDT (Friday)
From: Jeff.Shrager@CMU-CS-A
Subject: New expert systems announced:

Dentrol: A dental expert system based upon tooth maintenance
      principles.
Faust: A black magic advisor with mixed initiative goal generation.
Doug: A system which will convert any given domain into set theory.
Cray: An expert arithmetic advisor.  Heuristics exist for any sort of
      real number computation involving arithmetic functions (+, -,
      and several others) within a finite (but large) range around 0.0.
      The heuristics are shown to be correct for typical cases.
Meta: An expert at thinking up new domains in which there should be
      expert systems.
Flamer: An expert at seeming to be an expert in any domain in which it
      is not an expert.
IT: (The Illogic Theorist) An expert at fitting any theory to any quantity
    of protocol data.  Theories must be specified in "ITLisp" but IT can
    construct the protocols if need be.

------------------------------

Date: 22 Sep 83 23:25:15-PDT (Thu)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: U of Edinburgh, Scotland Inquiry - (nf)
Article-I.D.: uiucdcs.2935


I can't tell you about the Dept of AI at Edinburgh, but I do know
about the Machine Intelligence Research Unit chaired by Prof. Donald
Michie.

The MIRU will fold in the future, because Prof Michie intends to set up a
new research institute in the UK. He's been planning this and fighting
for it for quite a while now. It will be called the "Turing
Institute", and is intended to become one of the prime centers of AI
research in the UK. In fact, it will be one of the very few centers at
which research is the top priority, rather than teaching. Michie has
recently been approached by the University of Strathclyde near
Glasgow, which is interested in functioning as the associated teaching
institution (cp SRI and Stanford). If that works out, the Turing 
Institute may be operational by September 1984.

------------------------------

Date: 23 Sep 83 5:04:46-PDT (Fri)
From: decvax!microsoft!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: ssc-vax.538

(should be posting from utah, but I saw it here first and just 
couldn't resist...)

I think we've got a terminology problem here.  The word "rational" is
so heavily loaded that it can hardly move! (as net.philosophy readers
well know).  The term "rational psychology" does seem to exclude
non-rational behavior (whatever that is) from consideration, which is
not the case at all.  Rather, the idea is to explore the entire universe
of possibilities for intelligent behavior, rather than restricting
oneself to observing the average college sophomore or the AI programs
small enough to fit on present-day machines.

Let me propose the term "universal psychology" as a substitute, 
analogous to the mathematical study of universal algebras.  Fewer
connotations, and it better suggests the real thrust of this field -
the study of *possible* intelligent behavior.

                                stan the r.h. (of lightness)
                                ssc-vax!sts
                                (but mail to harpo!utah-cs!shebs)

------------------------------

Date: 26 Sep 1983 0012-PDT
From: Jay <JAY@USC-ECLC>
Subject: re: the halting problem, orders of learning

Certain representations of calculations lead to easy
detection of looping.  Consider the function...
	f(x) = x
This could lead to ...
	f(f(x)) = x 
Or to ...
	f(f(f(f( ... )))) = x
But why bother!  Or for another example, consider the Life blinker...
                   +
 + + +   becomes   +   becomes  + + +   becomes (etc.)
                   +
Why bother calculating all the generations for this arrangement?  The
same information lies in ...
for any integer i                         +
Blinker(2i) = + + +  and Blinker(2i+1) =  +
                                          +
There really is no halting problem, or infinite looping.  The
information for the blinker need not be fully decoded, it can be just
the above "formulas".  So humans could choose a representation of
circular or "infinite looping" ideas, so that the circularity is
expressed in a finite number of bits.
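Jay's point about representing a loop finitely instead of unrolling it can be sketched in code.  The following toy Life stepper (my own illustration, not anything from the message) detects the blinker's cycle by remembering every state it has already seen:

```python
from collections import Counter

# Toy Game of Life stepper; a state is a frozenset of live (x, y) cells.

def step(cells):
    # Count live neighbors of every candidate cell, then apply the
    # standard rules: survive on 2-3 neighbors, be born on exactly 3.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return frozenset(c for c, n in counts.items()
                     if n == 3 or (n == 2 and c in cells))

def find_cycle(cells, limit=100):
    # Instead of simulating forever, remember each state seen and
    # report (start, period) as soon as one recurs.
    seen = {cells: 0}
    for t in range(1, limit):
        cells = step(cells)
        if cells in seen:
            return seen[cells], t - seen[cells]
        seen[cells] = t
    return None

blinker = frozenset([(0, 0), (1, 0), (2, 0)])
print(find_cycle(blinker))   # (0, 2): repeats from generation 0, period 2
```

Once (start, period) is known, Blinker(2i) and Blinker(2i+1) answer any generation query without further simulation, which is exactly the finite encoding of the "infinite loop".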

As for the orders of learning: learning(1) is a behavior.  That is,
modifying behavior is a behavior.  It can be observed in schools,
concentration camps, or even in the laboratory.  So learning(2) is
modifying a certain behavior, and thus nothing more (in one view)
than learning(1).  Indeed it is just learning(1) applied to itself!
So learning(i) is just
                              i
(the way an organism modifies)  its behavior 

But since behavior is just the way an organism modifies the
environment,
                                            i+1
Learning(i) = (the way an organism modifies)    the environment.

and learning(0) is just behavior.  So depending on your view, there
are either an infinite number of ways to learn, or there are an
infinite number of organisms (most of whose environments are just other
organisms).
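The learning(i) tower reads naturally as iterated function application.  A throwaway sketch (all names here are mine, not Jay's) in which behavior maps an environment to an environment and learning(i) is i applications of a modifier:

```python
# learning(0) is behavior: a map from environment to environment.
# learning(i) is "the way an organism modifies" applied i times to it.

def behavior(env):
    # learning(0): the organism acts, here by appending a mark.
    return env + ["acted"]

def modify(f):
    # One application of "the way an organism modifies": wrap a
    # function into a new, modified one.
    def modified(env):
        return f(env) + ["modified"]
    return modified

def learning(i):
    f = behavior
    for _ in range(i):
        f = modify(f)      # learning(i) = learning(1) applied to itself
    return f

print(learning(2)([]))     # ['acted', 'modified', 'modified']
```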

j'

------------------------------

Date: Mon 26 Sep 83 11:48:33-MDT
From: Jed Krohnfeldt <KROHNFELDT@UTAH-20.ARPA>
Subject: Re: learning levels, etc.

Some thoughts about Stan Shebs' questions:

I think that your continuum of 1st order learning, 2nd order learning,
etc. can really be collapsed to just two levels - the basic learning
level, and what has been popularly called the "meta level".  Learning
about learning about learning is really no different from learning
about learning, is it?  It is simply a capability to introspect (and
possibly intervene) into basic learning processes.

This also suggests an answer to your second question - why don't
humans go catatonic when presented with circular definitions - the
answer may be that we do have heuristics, or meta-level knowledge,
that prevents us from endlessly looping on circular concepts.

                                        Jed Krohnfeldt
                                         utah-cs!jed
                                       krohnfeldt@utah-20

------------------------------

Date: Mon 26 Sep 83 10:44:34-PDT
From: Bob Moore <BMOORE@SRI-AI.ARPA>
Subject: course announcement

                         COURSE ANNOUNCEMENT

                         COMPUTER SCIENCE 400

                REPRESENTATION, MEANING, AND INFERENCE


Instructor: Robert Moore
            Artificial Intelligence Center
            SRI International

Time:       MW @ 11:00-12:15 (first meeting Wed. 9/28)

Place:      Margaret Jacks Hall, Rm. 301


The problem of the formal representation of knowledge in intelligent
systems is subject to two important constraints.  First, a general
knowledge-representation formalism must be sufficiently expressive to
represent a wide variety of information about the world.  A long-term
goal here is the ability to represent anything that can be expressed
in natural language.  Second, the system must be able to draw
inferences from the knowledge represented.  In this course we will
examine the knowledge representation problem from the perspective of
these constraints.  We will survey techniques for automatically
drawing inferences from formalizations of commonsense knowledge; we
will look at some of the aspects of the meaning of natural-language
expressions that seem difficult to formalize (e.g., tense and aspect,
collective reference, propositional attitudes); and we will consider
some ways of bridging the gap between formalisms for which the
inference problem is fairly well understood (first-order predicate
logic) and the richer formalisms that have been proposed as meaning
representations for natural language (higher-order logics, intensional
and modal logics).

------------------------------

End of AIList Digest
********************

∂29-Sep-83  1120	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #65
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Sep 83  11:19:49 PDT
Date: Thursday, September 29, 1983 9:46AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #65
To: AIList@SRI-AI


AIList Digest           Thursday, 29 Sep 1983      Volume 1 : Issue 65

Today's Topics:
  Automatic Translation - French-to-English Request,
  Music and AI - Request,
  Publications - CSLI Newsletter & Apollo User's Mailing List,
  Seminar - Parallel Algorithms: Cook at UTexas Oct. 6,
  Lab Reports - UM Expansion,
  Software Distributions - Maryland Franz Lisp Code,
  Conferences - Intelligent Sys. and Machines, CSCSI,
----------------------------------------------------------------------

Date: Wed 28 Sep 83 11:37:27-PDT
From: David E.T. Foulser <FOULSER@SU-SCORE.ARPA>
Subject: Re: Automatic Translation


  I'm looking for a program to perform automatic translation from
French to English.  The output doesn't have to be perfect (I hardly
expect it).  I'll appreciate any leads you can give me.

                                Dave Foulser

------------------------------

Date: Wed 28 Sep 83 18:46:09-EDT
From: Ted Markowitz <TJM@COLUMBIA-20.ARPA>
Subject: Music & AI, pointers wanted

I'd like to hear from anyone doing work that somehow relates AI and
music in some fashion. Particularly, are folks using AI programs and
techniques in composition (perhaps as a composer's assistant)? Any
responses will be passed on to those interested in the results.

--ted

------------------------------

Date: Mon 26 Sep 83 12:08:44-CDT
From: Lauri Karttunen <Cgs.Lauri@UTEXAS-20.ARPA>
Subject: CSLI newsletter

                [Reprinted from the UTexas-20 bboard.]


A copy of the first newsletter from the Center for the Study of
Language and Information (CSLI) at Stanford is in
PS:<CGS.PUB>CSLI.NEWS.  The section on "Remote Affiliates" is of some
interest to many people here.

------------------------------

Date: Thu, 22 Sep 83 14:29:56 EDT
From: Nathaniel Mishkin <Mishkin@YALE.ARPA>
Subject: Apollo Users Mailing List

This message is to announce the creation of a new mailing list:

        Apollo@YALE

in which I would like to include all users of Apollo computers who are
interested in sharing their experiences about Apollos.  I think all
people could benefit from finding out what other people are doing on
their Apollos.

Mail to the list will be archived in some public place that I will 
announce at a later date.  At least initially, the list will not be 
moderated or digested.  If the volume is too great, this may change.  
If you are interested in getting on this mailing list, send mail to:

        Apollo-Request@YALE

If several people at your site are interested in being members and 
your mail system supports local redistribution, please tell me so I
can add a single entry (e.g. "Apollo-Podunk@PODUNK") instead of one
for each person.

------------------------------

Date: Mon 26 Sep 83 16:44:31-CDT
From: CS.GLORIA@UTEXAS-20.ARPA
Subject: Cook Colloquium, Oct 6

               [Reprinted from the UTexas-20 bboard.]


Stephen A. Cook, University of Toronto, will present a talk entitled
"Which Problems are Subject to Exponential Speed-up by Parallel Computers?"
on Thursday, Oct. 6 at 3:30 p.m. in Painter Hall 4.42.
Abstract:
      In the future we expect large parallel computers to exist with
thousands or millions of processors able to work together on a single
problem. There is already a significant literature of published algorithms
for such machines in which the number of processors available is treated
as a resource (generally polynomial in the input size) and the computation
time is extremely fast (polynomial in the logarithm of the input size).
We shall give many examples of problems for which such algorithms exist
and classify them according to the kind of algorithm that can be used.
On the other hand, we will give examples of problems with feasible sequential
algorithms which appear not to be amenable to such fast parallel algorithms.
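The kind of algorithm the abstract alludes to, polynomially many processors finishing in polylogarithmic time, can be illustrated by a parallel prefix sum.  This generic textbook sketch (not from the talk) simulates the log2(n) parallel rounds sequentially:

```python
def parallel_prefix_sum(a):
    # Hillis-Steele scan: ceil(log2 n) rounds; within one round every
    # element could be updated by its own processor simultaneously.
    a = list(a)
    n, d = len(a), 1
    while d < n:
        # one "parallel" round, simulated here sequentially
        a = [a[i] + (a[i - d] if i >= d else 0) for i in range(n)]
        d *= 2
    return a

print(parallel_prefix_sum([1] * 8))   # [1, 2, 3, 4, 5, 6, 7, 8]
```

Each round updates all n elements independently, so with n processors the whole scan takes O(log n) time instead of the sequential O(n): the exponential speed-up in the talk's title.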

------------------------------

Date: 21 Sep 83 16:33:08 EDT  (Wed)
From: Mark Weiser <mark%umcp-cs@UDel-Relay>
Subject: UM Expansion

[Due to a complaint that even academic job ads constitute an
"egregious violation" of Arpanet standards, and following failure of
anyone to reply to my subsequent queries, I have decided to publish
general notices of lab expansions but not specific positions.  The
following solicitation has been edited accordingly.  -- KIL]


The University of Maryland was recently awarded 4.2 million dollars
by the National Science Foundation to develop the hardware and
software for a parallel processing laboratory.  More than half of
the award amount is going directly for hardware acquisition, and
this money is also being leveraged through substantial vendor
discounts and joint research programs now being negotiated.  We
will be buying things like lots of Vaxes, Suns, Lisp Machines,
etc., to augment our current system of two 780's, Ethernet, etc.
Several new permanent positions are being created in the Computer
Science Department for this laboratory.

[...]

Anyone interested should make initial inquiries, send resumes, etc.
to Mark Weiser at one of the addresses below:

        Mark Weiser
        Computer Science Department
        University of Maryland
        College Park, MD 20742
        (301) 454-6790/4251/6291 (in that order).
        UUCP:   {seismo,allegra,brl-bmd}!umcp-cs!mark
        CSNet:  mark@umcp-cs
        ARPA:   mark.umcp-cs@UDel-Relay

------------------------------

Date: 26 Sep 83 17:32:04-PDT (Mon)
From: decvax!mcvax!philabs!seismo!rlgvax!cvl!umcp-cs!liz @ Ucb-Vax
Subject: Maryland software distribution
Article-I.D.: umcp-cs.2755

This is to announce the availability of the Univ of Maryland software
distribution.  This includes source code for the following:

1.  The flavors package written in Franz Lisp.  This package has
    been used successfully in a number of large systems at Maryland,
    and while it does not implement all the features of Lisp Machine
    Flavors, the features present are as close to the Lisp Machine
    version as possible within the constraints of Franz Lisp.
    (Note that Maryland flavors code *can* be compiled.)
2.  Other Maryland Franz hacks including the INTERLISP-like top
    level, the lispbreak error handling package, the for macro and
    the new loader package.
3.  The YAPS production system written in Franz Lisp.  This is
    similar to OPS5 but more flexible in the kinds of lisp expressions
    that may appear as facts and patterns (sublists are allowed
    and flavor objects are treated atomically), the variety of
    tests that may appear in the left-hand sides of rules, and the
    kinds of actions that may appear in the right-hand sides of rules.
    In addition, YAPS allows multiple data bases which are flavor
    objects and may be sent messages such as "fact" and "goal".
4.  The windows package in the form of a C loadable library.  This
    flexible package allows convenient management of multiple
    contexts on the screen and runs on ordinary character display
    terminals as well as bit-mapped displays.  Included is a Franz
    lisp interface to the window library, a window shell for
    executing shell processes in windows, and a menu package (also
    a C loadable library).

You should be aware that the Lisp software is based on
Franz Opus 38.26 and that we will be switching to the newer version
of lisp that comes with Berkeley 4.2 whenever that comes out.
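For readers who have not met an OPS5-style system, the match-fire cycle that YAPS generalizes looks roughly like the following toy forward-chainer (my own sketch in Python, not the Maryland Franz Lisp code, and without YAPS's flavor-object data bases):

```python
# Toy forward-chaining production system: facts are tuples, and a rule
# fires when all its condition patterns match, adding a new fact.
# "?x"-style strings are variables; everything else matches literally.

def match(pattern, fact, bindings):
    # Return extended bindings if pattern unifies with fact, else None.
    if len(pattern) != len(fact):
        return None
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if isinstance(p, str) and p.startswith("?"):
            if p in b and b[p] != f:
                return None
            b[p] = f
        elif p != f:
            return None
    return b

def all_matches(conditions, facts, bindings):
    # Yield every way of matching all conditions against the fact base.
    if not conditions:
        yield bindings
        return
    for fact in facts:
        b = match(conditions[0], fact, bindings)
        if b is not None:
            yield from all_matches(conditions[1:], facts, b)

def run(facts, rules):
    # Repeatedly fire all rules until no new facts appear.
    facts = set(facts)
    while True:
        new = {action(b)
               for conditions, action in rules
               for b in all_matches(conditions, facts, {})}
        new -= facts
        if not new:
            return facts
        facts |= new

# Example rule: parent(?x,?y) and parent(?y,?z) imply grandparent(?x,?z).
rules = [([("parent", "?x", "?y"), ("parent", "?y", "?z")],
          lambda b: ("grandparent", b["?x"], b["?z"]))]
facts = {("parent", "ann", "bob"), ("parent", "bob", "cal")}
print(("grandparent", "ann", "cal") in run(facts, rules))   # True
```

A real system like OPS5 or YAPS avoids re-matching everything each cycle (the Rete network does incremental matching), but the fact/pattern/rule vocabulary is the same.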

---------------------------------------------------------------------

To obtain the Univ of Maryland distribution tape:

1.  Fill in the form below, make a hard copy of it and sign it.
2.  Make out a check to University of Maryland Foundation for $100,
    mail it and the form to:

                Liz Allen
                Univ of Maryland
                Dept of Computer Science
                College Park MD 20742

3.  If you need an invoice, send me mail, and I will get one to you.
    Don't forget to include your US Mail address.

Upon receipt of the money, we will mail you a tape containing our
software and the technical reports describing the software.  We
will also keep you informed of bug fixes via electronic mail.

---------------------------------------------------------------------

The form to mail to us is:


In exchange for the Maryland software tape, I certify to the
following:

a.  I will not use any of the Maryland software distribution in a
    commercial product without obtaining permission from Maryland
    first.
b.  I will keep the Maryland copyright notices in the source code,
    and acknowledge the source of the software in any use I make of
    it.
c.  I will not redistribute this software to anyone without permission
    from Maryland first.
d.  I will keep Maryland informed of any bug fixes.
e.  I am the appropriate person at my site who can make guarantees a-d.

                                Your signature, name, position,
                                phone number, U.S. and electronic
                                mail addresses.

---------------------------------------------------------------------

If you have any questions, etc, send mail to me.

--
                                -Liz Allen, U of Maryland, College Park MD
                                 Usenet:   ...!seismo!umcp-cs!liz
                                 Arpanet:  liz%umcp-cs@Udel-Relay

------------------------------

Date: Tue, 27 Sep 83 14:57:00 EDT
From: Morton A. Hirschberg <mort@brl-bmd>
Subject: Conference Announcement


              ****************  CONFERENCE  ****************

                     "Intelligent Systems and Machines"

                    Oakland University, Rochester Michigan

                                April 24-25, 1984

              *********************************************

A call-for-papers notice should also appear through SIGART soon.

Conference Chairmen:  Dr. Donald Falkenburg (313-377-2218)
                      Dr. Nan Loh           (313-377-2222)
                      Center for Robotics and Advanced Automation
                      School of Engineering
                      Oakland University
                      Rochester, MI 48063
            ***************************************************

AUTHORS PLEASE NOTE:  A Public Release/Sensitivity Approval is necessary.
Authors from DOD, DOD contractors, and individuals whose work is government
funded must have their papers reviewed for public release and more
importantly sensitivity (i.e. an operations security review for sensitive
unclassified material) by the security office of their sponsoring agency.

In addition, I will try to answer questions for those on the net.
Queries can be sent to mort@brl.  -- Mort

------------------------------

Date: Mon 26 Sep 83 11:08:58-PDT
From: Ray Perrault <RPERRAULT@SRI-AI.ARPA>
Subject: CSCSI call for papers

                         CALL FOR PAPERS

                         C S C S I - 8 4

                      Canadian Society for
              Computational Studies of Intelligence

                  University of Western Ontario
                         London, Ontario
                         May 18-20, 1984

     The Fifth National Conference of the CSCSI will be  held  at
the  University of Western Ontario in London, Canada.  Papers are
requested in all areas of AI research, particularly those  listed
below.  The Program Committee members responsible for these areas
are included.

Knowledge Representation :
   Ron Brachman (Fairchild R & D), John Mylopoulos (U of Toronto)
Learning :
   Tom Mitchell (Rutgers U), Jaime Carbonell (CMU)
Natural Language :
   Bonnie Webber (U of Pennsylvania), Ray Perrault (SRI)
Computer Vision :
   Bob Woodham (U of British Columbia), Allen Hanson (U Mass)
Robotics :
   Takeo Kanade (CMU), John Hollerbach (MIT)
Expert Systems and Applications :
   Harry Pople (U of Pittsburgh),  Victor  Lesser  (U  Mass)
Logic Programming :
   Randy Goebel (U of Waterloo), Veronica Dahl (Simon Fraser U)
Cognitive Modelling :
   Zenon Pylyshyn,  Ed  Stabler  (U  of Western Ontario)
Problem Solving and Planning :
   Stan Rosenschein (SRI), Drew McDermott (Yale)

     Authors are requested to prepare Full  papers,  of  no  more
than  4000  words in length, or Short papers of no more than 2000
words in length.  A full page of clear diagrams  counts  as  1000
words.   When  submitting,  authors must supply the word count as
well as the area in which they wish their paper reviewed.   (Com-
binations  of  the  above  areas are acceptable).  The Full paper
classification is intended for well-developed ideas, with  signi-
ficant demonstration of validity, while the Short paper classifi-
cation is intended for descriptions of research in progress.  Au-
thors  must  ensure that their papers describe original contribu-
tions to or novel applications of  Artificial  Intelligence,  re-
gardless of length classification, and that the research is prop-
erly compared and contrasted with relevant literature.
     Three copies of each submitted paper must be in the hands of
the  Program Chairman by December 7, 1983.  Papers arriving after
that date will be returned  unopened,  and  papers  lacking  word
count  and classifications will also be returned.  Papers will be
fully reviewed by appropriate members of the  program  committee.
Notice of acceptance will be sent on February 28, 1984, and final
camera ready versions are due on March 31,  1984.   All  accepted
papers will appear in the conference proceedings.

     Correspondence should be addressed  to  either  the  General
Chairman or the Program Chairman, as appropriate.

General Chairman                    Program Chairman

Ted Elcock,                         John K. Tsotsos
Dept. of Computer Science,          Dept. of Computer Science,
Engineering and Mathematical        10 King's College Rd.,
     Sciences Bldg.,                University of Toronto,
University of Western Ontario       Toronto, Ontario, Canada,
London, Ontario, Canada             M5S 1A4
N6A 5B9                             (416)-978-3619
(519)-679-3567

------------------------------

End of AIList Digest
********************

∂29-Sep-83  1438	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #66
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Sep 83  14:37:21 PDT
Date: Thursday, September 29, 1983 12:50PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #66
To: AIList@SRI-AI


AIList Digest            Friday, 30 Sep 1983       Volume 1 : Issue 66

Today's Topics:
  Rational Psychology - Definition,
  Halting Problem
  Natural Language Understanding
----------------------------------------------------------------------

Date: Tue 27 Sep 83 22:39:35-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Rational X

Oh dear! "Rational psychology" is no more about rational people than 
"rational mechanics" is about rational rocks or "rational 
thermodynamics" about rational hot air. "Rational X" is the 
traditional name for the mathematical, axiomatic study of systems 
inspired by and intuitively related to the systems studied by the
empirical science "X." Got it?

Fernando Pereira

------------------------------

Date: 27 Sep 83 11:57:24-PDT (Tue)
From: ihnp4!houxm!hogpc!houti!ariel!norm @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: ariel.463

Actually, the word "rational" in "rational psychology" is merely
redundant.  One would hope that psychology would be, as other
sciences, rational.  This would in no way detract from its ability to
investigate the causes of human irrationality.  No science really
should have to be prefaced with the word "rational", since we should
be able to assume that science is not "irrational".  Anyone for
"Rational Chemistry"?

Please note that the scientist's "flash of insight", "intuition",
"creative leap" is heavily dependent upon the rational faculty, the
faculty of CONCEPT-FORMATION.  We also rely upon the rational faculty
for verifying and for evaluating such insights and leaps.

--Norm Andrews, AT&T Information Systems, Holmdel, New Jersey

------------------------------

Date: 26 Sep 83 13:01:56-PDT (Mon)
From: ihnp4!drux3!drufl!samir @ Ucb-Vax
Subject: Rational Psychology
Article-I.D.: drufl.670

Norm,

        Let me elaborate. Psychology, or logic of mind, involves BOTH 
rational and emotional processes. To consider one exclusively defeats 
the purpose of understanding.

        I have not read the article we are talking about so I cannot 
comment on that article, but an example of what I consider a "Rational
Psychology" theory is "Personal Construct Theory" by Kelly. It is an 
attractive theory but, in my opinion, it falls far short of describing
"logic of mind" as it fails to integrate emotional aspects.

        I consider learning-concept formation-creativity to have BOTH 
rational and emotional attributes, hence it would be better if we 
studied them as such.

        I may be creating a dichotomy where there is none. (Rational
vs. Emotional).  I want to point you to an interesting book, "Metaphors
We Live By" (I forget the authors' names), which in addition to
discussing many other AI-related (without mentioning AI) concepts
discusses the question of Objective vs. Subjective, which is similar
to what we are talking about here, Rational vs. Emotional.

        Thanks.

                                Samir Shah
                                AT&T Information Systems, Denver.
                                drufl!samir

------------------------------

Date: Tue, 27 Sep 1983  13:30 EDT
From: MINSKY@MIT-OZ
Subject: Re: Halting Problem

About learning:  There is a lot about how to get out of loops in my
paper "Jokes and the Cognitive Unconscious".  I can send it to whoever
wants, either over this net or by U.S. Snail.
 -- minsky

------------------------------

Date: 26 Sep 83 10:31:31-PDT (Mon)
From: ihnp4!clyde!floyd!whuxlb!pyuxll!eisx!pd @ Ucb-Vax
Subject: the Halting problem.
Article-I.D.: eisx.607

There are two AI problems that I know about: the computing power 
problem (combinatorial explosions, etc) and the "nature of thought"
problem (knowledge representation, reasoning process etc).  This
article concerns the latter.

AI's method (call it "m") seems to be to model human information-processing
mechanisms, say legal reasoning methods, and once a mechanism is understood
clearly and a calculus exists for it, to program it.  This idea can
be transferred to various problem domains, and voila, we have programs
for "thinking" about various little cubbyholes of knowledge.

The next thing to tackle is, how do we model AI's method "m" that was 
used to create all these cubbyhole programs ?  How did whoever thought
of Predicate Calculus, semantic networks, Ad nauseum block world 
theories come up with them ? Let's understand that ("m"), formalize
it, and program it. This process (let's call it "m'") gives us a
program that creates cubbyhole programs. Yeah, it runs on a zillion 
acres of CMOS, but who cares.

Since a human can do more than just "m", or "m'", we try to make 
"m''", "m'''" et al. When does this stop ? Evidently it cannot.  The
problem is, the thought process that yields a model or simulation of a
thought process is necessarily distinct from the latter (This is true
of all scientific investigation of any kind of phenomenon, not just
thought processes). This distinction is one of the primary paradigms
of western Science.

Rather naively, thinking "about" the mind is also done "with" the
mind.  This identity of subject and object that ensues in the
scientific (dualistic) pursuit of more intelligent machine behavior - 
do you folks see it too ? Since scientific thought relies on the clear
separation of a theory/model and reality, is a
mathematical/scientific/engineering discipline inadequate for said 
pursuit ? Is there a system of thought that is self-describing ? Is 
there a non-dualistic calculus ?

What we are talking about here is the ability to separate oneself from
the object/concept/process under study, understand it, model it,
program it... it being anything, including the ability itself.  The
ability to recognize that a model is a representation within one's
mind of a reality outside of one's mind.  Trying to model this ability
leads one to infinite regress.  What is this ability?  Let's call it
consciousness.  What we seem to be coming up with here is the
INABILITY of math/sci etc. to deal with this phenomenon, to codify it,
and to boldly program a computer that has consciousness.  Does this mean
that the statement:

"CONSCIOUSNESS CAN, MUST, AND WILL ONLY COME TO EXISTENCE OF ITS OWN
ACCORD"

is true?  "Consciousness" was used for lack of a better word.  Replace
it by X, and you still have a significant statement.  Consciousness
has already come to existence and, according to the line of reasoning
above, cannot be brought into existence by methods available.

If so, how can we "help" machines to achieve consciousness, as
benevolent if rather impotent observers ?  Should we just
mechanistically build larger and larger neural network simulators
until one says "ouch" when we shut a portion of it off, and better,
tries to deliberately modify(sic) its environment so that that doesn't
happen again?  And maybe even can split infinitives?

As a parting shot, it's clear that such neural networks must have
tremendous power to come close to a fraction of our level of
abstraction ability.

Baffled, but still thinking...  References, suggestions, discussions, 
pointers avidly sought.

Prem Devanbu

ATTIS Labs , South Plainfield.

------------------------------

Date: 27 Sep 83 05:20:08 EDT (Tue)
From: rlgvax!cal-unix!wise@SEISMO
Subject: Natural Language Analysis and looping


A side light to the discussions of the halting problem is "what then?"
What do we do when a loop is detected?  Ignore the information?
Arbitrarily select some level as the *true* meaning?

In some cases, meaning is drawn from outside the language.  As an
example, consider a person who tells you, "I don't know a secret".
The person may really know a secret but doesn't want you to know, or
may not know a secret and reason that you'll assume that nobody with a
secret would say something so suspicious ...

A reasonable assumption would be that if the person said nothing,
you'd have no reason to think he knows a secret; so if that was the
assumption he wanted you to make, he would just have kept
quiet, and you may conclude that the person knows no secret.

This rather simplistic example demonstrates one response to the loop,
i.e., when confronted with circular logic, we disregard it.  Another
possibility is that we may use external information to attempt to help
dis-ambiguate by selecting a level of the loop. (e.g. this is a
three-year-old, who is sufficiently unsophisticated that he may say
the above when he does, in fact, know a secret.)

This may support the study of cognition as an underpinning for NLP.  
Certainly we can never expect a machine to react as we (who is 'we'?)
do unless we know how we react.

------------------------------

Date: 28 Sep 1983 1723-PDT
From: Jay <JAY@USC-ECLC>
Subject: NLP, Learning, and knowledge rep.

As an undergraduate student here at USC, I am required to pass a 
Freshman Writing class.  I have noticed in this class that one field
of the NL Problem is UNSOLVED even in humans.  I am speaking of the 
generation of prose.

In AI terms the problems are...

The selection of a small area of the knowledge base which is small 
enough to be written about in a few pages, and large enough that a 
paper can be generated at all.

One of the solutions to this problem is called 'clustering.'  In the 
middle of a page one draws a circle about the topic.  Then a directed 
graph is built by connecting associated ideas to nodes in the graph.  
Just free association does not seem to work very well, so it is 
suggested to ask a number of questions about the main idea, or any
other node.  Some of the questions are What, Where, When, Why (and the
rest of the "Journalistic" q's), can you RELATE an incident about it, 
can you name its PARTS, can you describe a process to MAKE or do it.  
Finally this smaller data base is reduced to a few interesting areas.
This solution is then a process of Q and A on the data base to 
construct a smaller data base.
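The clustering procedure is mechanical enough to sketch.  Here is a hypothetical version in which an association oracle answers the journalistic questions and the answers grow a directed graph around the topic (the question list and the bicycle data are invented for illustration):

```python
# Grow a directed association graph around a topic by asking the
# "journalistic" questions of each node, breadth-first, for a couple
# of levels.  The oracle below stands in for free association.

QUESTIONS = ["what", "where", "when", "why", "parts", "make"]

def cluster(topic, associate, depth=2):
    graph = {}                          # node -> list of (question, idea)
    frontier = [topic]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            if node in graph:
                continue                # already expanded
            edges = [(q, idea) for q in QUESTIONS
                     for idea in associate(node, q)]
            graph[node] = edges
            next_frontier += [idea for _, idea in edges]
        frontier = next_frontier
    return graph

# A hypothetical association oracle for the topic "bicycle".
def associate(node, question):
    table = {("bicycle", "parts"): ["wheel", "frame"],
             ("bicycle", "why"):   ["transport"],
             ("wheel", "make"):    ["spoke lacing"]}
    return table.get((node, question), [])

g = cluster("bicycle", associate)
print(sorted(g))   # ['bicycle', 'frame', 'transport', 'wheel']
```

Reducing the graph to "a few interesting areas" is then a matter of keeping the most richly connected subgraph; the harder steps Jay lists (linearization and coding into prose) have no such easy sketch.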

Once a small data base has been selected, it needs to be given a 
linear representation.  That is, it must be organized into a new data 
base that is suitable for prose.  There are no solutions offered for
this step.

Finally the data base is coded into English prose.  There are no 
solutions offered for this step.

This prose is read back in, and compared to the original data base.  
Ambiguities need to be removed, some areas elaborated on, and others 
rewritten in a clearer style.  There are no solutions offered for this
step, but there are some rules - Things to do, and things not to do.

j'

------------------------------

Date: Tuesday, 27 September 1983 15:25:35 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: Re: NL argument between STLH and Pereira

Several points in the last message in this exchange seemed worthy of
comment.  I think my basic sympathies lie with STLH, although he
overstates his case a bit.

While language is indeed a "fuzzy thing", there are different shades
of correctness, with some sentences being completely right, some with
one obvious *error*, which is noticed by the hearer and corrected,
while others are just a mess, with the hearer guessing the right
answer.  This is similar in some ways to error-correcting codes, where
after enough errors, you can't be sure anymore which interpretation is
correct.  This doesn't say much about whether the underlying ideal is
best expressed by a grammar.  I don't think it is, for NL, but the
reason has more to do with the fact that the categories people use in
language seem to include semantics in a rather pervasive way, so that
making a major distinction between grammatical (language-specific,
arbitrary) and other knowledge (semantics) might not be the best
approach.  I could go on at length about this (in fact I'm currently
working on a Tech Report discussing this idea), but I won't, unless
pressed.

As for ignoring human cognition, some AI people do ignore it, but
others (especially here at C-MU) take it very seriously.  This seems
to be a major division in the field -- between those who think the
best search path is to go for what the machine seems best suited for,
and those who want to use the human set-up as a guide.  It seems to me
that the best solution is to let both groups do their thing --
eventually we'll find out which path (or maybe both) was right.

I read with interest your description of your system -- I am currently
working on a semantic chart parser that sounds fairly similar to your
brief description, except that it is written in OPS5.  Thus I was
surprised at the statement that OPS5 has "no capacity for the
parallelism" needed.  OPS5 users suffer from the fact that there are
some fairly non-obvious but simple ways to build powerful data
structures in it, and these have not been documented.  Fortunately, a
production system primer is currently being written by a group headed
by Elaine Kant.  Anyway, I have an as-yet-unaccepted paper describing
my OPS5 parser available, if anyone is interested.

As for scientific "camps" in AI, part of the reason for this seems to
be the fact that AI is a very new science, and often none of the
warring factions have proved their points.  The same thing happens in
other sciences, when a new theory comes out, until it is proven or
disproven.  In AI, *all* the theories are unproven, and everyone gets
quite excited.  We could probably use a little more of the "both
schools of thought are probably partially correct" way of thinking,
but AI is not alone in this.  We just don't have a solid base of
proven theory to anchor us (yet).

In regard to the call for a theory which explains all aspects of
language behavior, one could answer "any Turing-equivalent computer".
The real question is, how *specifically* do you get it to work?  Any
claim like "my parser can easily be extended to do X" is more or less
moot, unless you've actually done it.  My OPS5 parser is embedded in a
Turing-equivalent production system language.  I can therefore
guarantee that if any computer can do language learning, so can my
program.  The question is, how?  The way linguists have often wanted
to answer "how" is to define grammars that are less than
Turing-equivalent which can do the job, which I suspect is futile when
you want to include semantics.  In any event, un-implemented
extensions of current programs are probably always much harder than
they appear to be.

(As an aside about sentences as fundamental structures, there is a
two-prong answer: (1) Sentences exist in all human languages.  They
appear to be the basic "frame" [I can hear nerves jarring all over the
place] or unit for human communication of packets of information.  (2)
Some folks have actually tried to define grammars for dialogue
structures.  I'll withhold comment.)

In short, I think warring factions aren't that bad, as long as they
all admit that no one has proven anything yet (which is definitely not
always the case), semantic chart parsing is the way to go for NL,
theories that explain all of cognitive science will be a long time in
coming, and that no one should accept a claim about AI that hasn't
been implemented.

------------------------------

End of AIList Digest
********************

∂29-Sep-83  1610	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #67
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Sep 83  16:09:36 PDT
Date: Thursday, September 29, 1983 12:56PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #67
To: AIList@SRI-AI


AIList Digest            Friday, 30 Sep 1983       Volume 1 : Issue 67

Today's Topics:
  Alvey Report & Fifth Generation,
  AI at Edinburgh - Reply,
  Machine Organisms - Desirability,
  Humor - Famous Flamer's School
----------------------------------------------------------------------

Date: 23 Sep 83 13:17:41-PDT (Fri)
From: decvax!genrad!security!linus!utzoo!watmath!watdaisy!rggoebel@Ucb-Vax
Subject: Re: Alvey Report and Fifth Generation
Article-I.D.: watdaisy.298

The ``Alvey Report'' is the popular name for the following booklet:

  A Programme for Advanced Information Technology
  The Report of the Alvey Committee

  published by the British Department of Industry, and available from
  Her Majesty's Stationery Office.  One London address is

    49 High Holborn
    London WC1V 6HB

The report is indeed interesting because it is a kind of response to
the Japanese Fifth Generation Project, but it is also interesting in
that it is not nearly so much the genesis of a new project as the
organization of existing potential for research and development.  The
quickest way to explain the point is that of the proposed 352 million
pounds that the report suggests be spent, only 42 million is for
AI (actually it's not for AI, but for IKBS -- Intelligent Knowledge-Based
Systems; seniors will understand the reluctance to use the word AI after
the Lighthill report).

The areas of proposed development include 1) Software engineering,
2) Man/Machine Interfaces, 3) IKBS, and 4) VLSI.  I have heard that
the most recent national budget in Britain has not committed the
funds expected for the project, but this is only rumor.  I would appreciate
further information (Can you help D.H.D.W.?).

On another related topic, I think it displays a bit of AI chauvinism
to believe that anyone, including the Japanese and the British,
is so naive as to put all their eggs in one basket.

Incidentally, I believe Feigenbaum and McCorduck's book revealed
at least two things: a disguised plea for more funding, and a not so
disguised expose of American engineering chauvinism.  Much of the American
reaction to the Japanese project sounds like the old cliches of
male chauvinism, like ``...how could a woman ever do the work of a real man?''
It just may be that American Lispers end up ``eating quiche.'' 8-)

Randy Goebel
Logic Programming Group
University of Waterloo
UUCP: watmath!rggoebel

------------------------------

Date: Tue 27 Sep 83 22:31:28-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Re: U of Edinburgh, Scotland Inquiry

Since the Lighthill Report, a lot has changed for AI in Britain. The 
Alvey Report (British Department of Industry) and the Science and 
Engineering Research Council (SERC) initiative on Intelligent 
Knowledge-Based Systems (IKBS) have released a lot of money for 
Information Technology in general, and AI in particular (It remains to
be seen whether that huge amount of money -- 100s of millions -- is 
going to be spent wisely). The Edinburgh Department of AI has managed 
to get a substantial slice of that money. They have been actively 
looking for people both at lecturer and research associate/fellow 
level [a good opportunity for young AIers from the US to get to know 
Scotland, her great people and unforgettable Highlands].

The AI Dept. have recently added 3 (4?) new people to their teaching 
staff, and have more machines, research staff, and students than ever.
The main areas they work on are: Natural Language (Henry Thompson, 
Mark Steedman, Graeme Ritchie), controlled deduction and problem 
solving (Alan Bundy and his research assistant and students), Robotics
(Robin Popplestone, Pat Ambler and a number of others), LOGO-style 
stuff (Jim Howe [head of department] and Peter Ross) and AI languages 
(Robert Rae, Dave Bowen and others).  There are probably others I 
don't remember. The AI Dept.  is both on UUCP and on a network 
connected to ARPANET:

        <username>%edxa%ucl-cs@isid (ARPANET)
        ...!vax135!edcaad!edee!edai!<username> (UUCP)

I have partial lists of user names for both connections which I will
mail directly to interested persons.

Fernando Pereira SRI AI Center [an old Edinburgh hand]

pereira@sri-ai (ARPA) ...!ucbvax!pereira@sri-ai (UUCP)

------------------------------

Date: 24 Sep 83 3:54:20-PDT (Sat)
From: hplabs!hp-pcd!orstcs!hakanson @ Ucb-Vax
Subject: Machine Organisms? - (nf)
Article-I.D.: hp-pcd.1920


I was reading a novel recently, and ran across the following passage re-
lating to "intelligent" machines, robots, etc.  In case anyone is interested,
the book is Satan's World, by Poul Anderson, Doubleday 1969 (p. 132).
(I hope this article doesn't seem more appropriate to sf-lovers than to ai.)

        ... They had electronic speed and precision, yes, but not
        full decision-making capacity.  ... This is not for lack
        of mystic vital forces.  Rather, the biological creature
        has available to him so much more physical organization.
        Besides sensor-computer-effector systems comparable to
        those of the machine, he has feed-in from glands, fluids,
        chemistry reaching down to the molecular level -- the
        integrated ultracomplexity, the entire battery of
        *instincts* -- that a billion-odd years of ruthlessly
        selective evolution have brought forth.  He perceives and
        thinks with a wholeness transcending any possible symbolism;
        his purposes arise from within, and therefore are infinitely
        flexible.  The robot can only do what it was designed to
        do.  Self-programming has [can] extended these limits, to the
        point where actual consciousness may occur if desired.  But
        they remain narrower than the limits of those who made
        the machines.

Later in the book, the author describes a view that if a robot "were so
highly developed as to be equivalent to a biological organism, there
would be no point in building it."  This is explained as being true
because "nature has already provided us means for making new biological
organisms, a lot cheaper and more fun than producing robots."

I won't go on with the discussion in the book, as it degenerates into the
usual debate about the theoretical, fully motivated computer that is
superior in every way..., and how such a computer would rule the world, etc.
My point in posting the above passage was to ask the experts of netland
to give their opinions of the aforementioned views.

More specifically, how do we feel about the possibilities of building
machines that are "equivalent" to intelligent biological organisms?
Or even non-intelligent ones?  Is it possible?  And if so, why bother?

It's probably obvious that we don't need to disagree with the views given
by the author in order to want to continue with our studies in Artificial
Intelligence.  But how many of us do agree?  Disagree?

Marion Hakanson         {hp-pcd,teklabs}!orstcs!hakanson        (Usenet)
                        hakanson.oregon-state@rand-relay        (CSnet)
                        hakanson@{oregon-state,orstcs}          (also CSnet)

------------------------------

Date: Wed 28 Sep 83 17:18:53-PDT
From: Peter Karp <KARP@SUMEX-AIM>
Subject: Amusement from CMU's opinion bboard

   [Reprinted from the CMU opinion board via the SU-SCORE bboard.]


Ever dreamed of flaming with the Big Boys?  ...  Had that desire to
write an immense diatribe, berating de facto all your peers who hold
contrary opinions?  ...  Felt the urge to have your fingers moving
without being connected to your brain?  Well, by simply sending in the
form on the back of this bboard post, you could begin climbing into
your pulpit alongside greats from all walks of life such as Chomsky,
Weizenbaum, Reagan, Von Daniken, Ellison, Abzug, Arafat and many many
more.  You don't even have to leave the comfort of your armchair!

Here's how it works:  Each week we send you a new lesson.  You read
the notes and then simply write one essay each week on the assigned
topic.  Your essays will be read by our expert pool of professional
flamers and graded on Sparsity, Style, Overtness, Incoherence, and a
host of other important aspects.  You will receive a long letter from
your specially selected advisor indicating in great detail why you
obviously have the intellectual depth of a soap dish.  This
apprenticeship is all there is to it.

Here are some examples of the courses offered by The School:

        Classical Flames:  You will study the flamers who started it 
all.  For example, Descartes's much-quoted demonstration that reality 
isn't.  Special attention is paid, in this course, to the old and new 
testaments and how western flaming was influenced by their structure.
(The Bible plays a particularly important role in our program and most
courses will spend at least some time tracing biblical origins or 
associations of their special topic.  See, particularly, the special 
seminar on Space Cadetism, which concentrates on ESP and UFO
phenomena.)

        Contemporary Flame Technique:  Attention is paid to the detail
of flame form in this course.  The student will practice the subtle
and overt ad hominem argument; fact avoidance maneuvers; "at length" 
writing style; overgeneralization; and other important factors which 
make the modern flame inaccessible to the general populace.  Readings 
from Russell ("Now I will admit that some unusually stupid children of
ten may find this material a bit difficult to fathom..."), Skinner 
(primarily concentrating on his Verbal Learning), Sagan (on abstract 
overestimation) and many others.  This course is most concerned with 
politicians (sometimes, redundantly, referred to as "political
flamers") since their speech writers are particularly adept at the
technique that we wish to foster.

        Appearing Brilliant (thanks to the Harvard Lampoon): Nobel
laureates lecture on topics of world import but which are very much
outside their field of expertise.  There is a large representation of
Nobels in physics:  the discoverer of the UnCharmed Pi Mesa Beta Quark
explains how the population explosion can be averted through proper
reculterization of mothers; and professor Nikervator, first person to
properly develop the theory of faster-than-sound "Whizon" docking
choreography, tells us how mind is the sole theological entity.

        Special seminar in terminology:  The name that you give 
something is clearly more important than its semantics.  Experts in 
nomenclature demonstrate their skills.  Pulitzer Prize winner Douglas 
Hofstadter makes up 15,000 new words whose definitions, when read 
sideways, prove the existence of themselves and constitute fifteen
months of columns in Scientific American.  A special round table of
drug company and computer corporation representatives discuss how to
construct catchy names for new products and never give the slightest
hint to the public about what they mean.

        Writing the Scientific Journal Flame: Our graduates will be
able to compete in the modern world of academic and industrial
research flaming, where the call is high for trained pontificators.
The student reads short sections from several fields and then may
select a field of concentration for detailed study.

Here is an example description of a detailed scientific flaming
seminar:

        Computer Science: This very new field deals directly with the 
very metal of the flamer's tools: information and communication.  The
student selecting computer science will study several areas including,
but not exclusively:

    Artificial Intelligence: Roger Schank explains the design of
    his flame understanding and generation engine (RUSHIN) and
    will explain how the techniques that it employs constitute a
    complete model of mind, brain, intelligence, and quantum
    electrodynamics.  For contrast, Marvin Minsky does the same.
    Weizenbaum tells us, with absolutely no data or alternative
    model, why AI is logically impossible, and moreover,
    immoral.

    Programming Languages: A round table is held between Wirth,
    Hoare, Dijkstra, Iverson, Perlis, and Jean Sammet, in order
    to keep them from killing each other.

    Machines and systems: Fred Brooks and Gordon Bell lead a
    field of experts over the visual cliff of hardware
    considerations.

The list of authoritative lectures goes on and on.  In addition, an 
inspiring introduction by Feigenbaum explains how important it is that
flame superiority be maintained by the United States in the face of
the recent challenges from Namibia and the Panama Canal zone.

But there's more.  Not only will you read famous flamers in abundance,
but you will actually have the opportunity to "run with the pack".
The Famous Flamer's School has arranged to provide access for all
computer science track students, to the famous ARPANet where students
will be able to actually participate in discussions of earthshaking
current importance, along with the other brilliant young flamers using
this nationwide resource.  You'll read and write about whether
keyboards should have a space bar across the whole bottom or split
under the thumbs; whether or not Emacs is God, and which deity is the
one true editor; whether the brain actually cools the body or not;
whether the earth revolves around the sun or vice versa -- and much
more.  Your contributions will be whisked across the nation, faster
than throwing a 2400 foot magtape across the room, into the minds of
thousands of other electrolusers whose brain cells will merge with
yours for the moment that they read your personal opinion of matters
of true science!  What importance!

We believe that the program we've constructed is very special and will
provide, for the motivated student, an atmosphere almost completely 
content free in which his or her ideas can flow in vacuity.  So, take 
the moment to indicate your name, address, age, and hat size by
filling out the rear of this post and mailing it to:

        FAMOUS FLAMER'S SCHOOL
        c/o Locker number 6E
        Grand Central Station North
        New York, NY.

Act now or forever hold your peace.

------------------------------

End of AIList Digest
********************

∂03-Oct-83  1104	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #68
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Oct 83  11:03:40 PDT
Date: Monday, October 3, 1983 9:33AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #68
To: AIList@SRI-AI


AIList Digest             Monday, 3 Oct 1983       Volume 1 : Issue 68

Today's Topics:
  Humor - Famous Flamer's School Credit,
  Technology Transfer & Research Ownership,
  AI Reports - IRD & NASA,
  TV Coverage - Computer Chronicles,
  Seminars - Ullman, Karp, Wirth, Mason,
  Conferences - UTexas Symposium & IFIP Workshop
----------------------------------------------------------------------

Date: Mon 3 Oct 83 09:29:16-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Famous Flamer's School -- Credit

The Famous Flamer's School was created by Jeff.Shrager@CMU-CS-A; my
apologies for not crediting him in the original article.  If you
saved or distributed a copy, please add a note crediting Jeff.

					-- Ken Laws

------------------------------

Date: Thu 29 Sep 83 17:58:29-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Alas, I must flame...

[ I hate to flame, but here's an issue that really got to me...]

From the call for papers for "Artificial Intelligence and Machines":

    AUTHORS PLEASE NOTE:  A Public Release/Sensitivity Approval is necessary.
    Authors from DOD, DOD contractors, and individuals whose work is government
    funded must have their papers reviewed for public release and more
    importantly sensitivity (i.e. an operations security review for sensitive
    unclassified material) by the security office of their sponsoring agency.

  How much AI work does *NOT* fall under one of the categories "Authors from
DOD, DOD contractors, and individuals whose work is government funded" ?
I read this to mean that essentially any government involvement with
research now leaves one open to government "protection".

  At issue here is not the government's duty to safeguard classified materials;
it is the intent of the government to limit distribution of non-military
basic research (alias "sensitive unclassified material"). This "we paid for
it, it's OURS (and the Russians can't have it)" mentality seems the rule now.

  But isn't science supposed to be for the benefit of all mankind,
and not just another economic bargaining chip? I cannot help but
be chilled by this divorce of science from a higher moral outlook.
Does it sound old fashioned to believe that scientific thought is
part of a common heritage, to be used to improve the lives of all? As
far as I can see, if all countries in the world follow the lead of
the US and USSR toward scientific protectionism, we scientists will
have allowed science to abandon its primary role of learning
about ourselves and become a mere intellectual commodity.

David Rogers
DRogers@SUMEX-AIM.ARPA

------------------------------

Date: Fri 30 Sep 83 10:09:08-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: IRD Report


         [Reprinted from IEEE Computer, Sep. 1983, p. 116.]


             Rapid Growth Predicted for AI-Based System

Expert systems are now moving out of the research laboratory and into
the commercial marketplace, according to "Artificial Intelligence,"
a 167-page research report from International Resource Development.
Revenue from all AI hardware, software, and services will amount to
only $70 million this year but is expected to reach $8 billion
in the next 10 years.

Biomedical applications promise to be among the fastest growing
uses of AI, reducing the time and cost of diagnosing illnesses and
adding to the accuracy of diagnoses.  AI-based systems can range
from "electronic encyclopedias," which physicians can use as
reference sources, to full-fledged "electronic consultants"
capable of taking a patient through an extensive series of diagnostic
tests and determining the patient's ailments with great precision.

"Two immediate results of better diagnostic procedures may be a
reduction in the number of unnecessary surgical procedures performed
on patients and a decrease in the average number of expensive tests
performed on patients," predicts Dave Ledecky of the IRD research
staff.  He also notes that the AI technology may leave hospitals
half-empty, since some operations turn out to be unnecessary.
However, he expects no such dramatic result anytime soon, since
widespread medical application of AI technology isn't expected for
about five years.

The IRD report also describes the activities of several new companies
that are applying AI technology to medical systems.  Helena Laboratories
in Beaumont, Texas, is shipping a densitometer/analyzer, which
includes a serum protein diagnostic program developed by Rutgers
University using AI technology.  Still in the development stage
are the AI-based products of IntelliGenetics in Palo Alto,
California, which are based on work conducted at Stanford University
over the last 15 years.

Some larger, more established companies are also investing in AI
research and development.  IBM is reported to have more than five
separate programs underway, while Schlumberger, Ltd., is
spending more than $5 million per year on AI research, much of
which is centered on the use of AI in oil exploration.

AI software may dominate the future computer industry, according to
the report, with an increasing percentage of applications
programming being performed in Lisp or other AI-based "natural"
languages.

Further details on the $1650 report are available from IRD,
30 High Street, Norwalk, CT 06851; (800) 243-5008,
Telex: 64 3452.

------------------------------

Date: Fri 30 Sep 83 10:16:43-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: NASA Report


[Reprinted from IEEE Spectrum, Oct. 1983, p. 78]


Overview Explains AI

A technical memorandum from the National Aeronautics and
Space Administration offers an overview of the core ingredients
of artificial intelligence.  The volume is the first in a series
that is intended to cover both artificial intelligence and
robotics for interested engineers and managers.

The initial volume gives definitions and a short history entitled
"The rise, fall, and rebirth of AI" and then lists applications,
principal participants in current AI work, examples of the
state of the art, and future directions.  Future volumes in AI
will cover application areas in more depth and will also cover
basic topics such as search-oriented problem-solving and
planning, knowledge representation, and computational logic.

The report is available from the National Technical Information
Service, Springfield, Va. 22161.  Please ask for NASA Technical
Memorandum Number 85836.

------------------------------

Date: Thu 29 Sep 83 20:13:09-PDT
From: Ellie Engelmore <EENGELMORE@SUMEX-AIM>
Subject: TV documentary

                [Reprinted from the SU-SCORE bboard.]


KCSM-TV Channel 60 is producing a series entitled "The Computer
Chronicles".  This is a series of 30-minute programs intended to be a
serious look at the world of computers, a potential college-level
teaching device, and a genuine historical document.  The first episode
in the series (with Don Parker discussing computer security) will be
broadcast this evening...Thursday, September 29...9pm.

The second portion of the series, to be broadcast 9 pm Thursday,
October 6, will be on the subject of Artificial Intelligence (with Ed
Feigenbaum).

------------------------------

Date: Thu 29 Sep 83 19:03:27-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA@SU-Score>
Subject: AFLB

                [Reprinted from the SU-SCORE bboard.]


The "Algorithms  for  Lunch  Bunch"  (AFLB) is  a  weekly  seminar  in
analysis  of  algorithms  held   by  the  Stanford  Computer   Science
Department, every Thursday, at 12:30 p.m., in Margaret Jacks Hall, rm.
352.

At the first meeting this year, (Thursday, October 6) Prof. Jeffrey D.
Ullman, from Stanford,  will talk on  "A time-communication  tradeoff"
Abstract follows.

Further  information  about   the  AFLB  schedule   is  in  the   file
[SCORE]<broder>aflb.bboard .

If you want to  get abstracts of  the future talks,  please send me  a
message to put you on the AFLB mailing list.  If you just want to know
the title of the  next talk and  the name of the  speaker look at  the
weekly Stanford CSD  schedule that  is (or  should be)  sent to  every
bboard.
                      ------------------------

10/6/83 - Prof. Jeffrey D. Ullman (Stanford):

                       "A time-communication  tradeoff"

We examine how multiple  processors could share  the computation of  a
collection of values  whose dependencies  are in  the form  of a  grid,
e.g., the estimation of nth derivatives.  Two figures of merit are the
time t the shared computation takes and the amount of communication c,
i.e., the number of values that  are either inputs or are computed  by
one processor and  used by another.   We prove that  no matter how  we
share the responsibility for computing  an n by n  grid, the law ct  =
OMEGA(n↑3) must hold.

******** Time and place: Oct. 6, 12:30 pm in MJ352 (Bldg. 460) *******

------------------------------

Date: Thu 29 Sep 83 09:33:24-CDT
From: CS.GLORIA@UTEXAS-20.ARPA
Subject: Karp Colloquium, Oct. 13, 1983

               [Reprinted from the UTexas-20 bboard.]


Richard M. Karp, University of California at Berkeley, will present a talk
entitled, "A Fast Parallel Algorithm for the Maximal Independent Set Problem"
on Thursday, October 13, 1983 at 3:30 p.m. in Painter Hall 4.42.  Coffee
at 3 p.m. in PAI 3.24.
Abstract:
     One approach to understanding the limits of parallel computation is to
search for problems for which the best parallel algorithm is not much faster
than the best sequential algorithm.  We survey what is known about this
phenomenon and show that--contrary to a popular conjecture--the problem of
finding a maximal independent set of vertices in a graph is highly amenable
to speed-up through parallel computation.  We close by suggesting some new
candidates for non-parallelizable problems.
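
For readers who haven't met the problem: an independent set is a set of
vertices no two of which are adjacent, and it is maximal when no further
vertex can be added.  As a hypothetical illustration only (a sequential
toy in the spirit of randomized parallel MIS rounds, not the algorithm
of the talk):

```python
import random

def mis_by_rounds(adj, seed=0):
    """Toy sequential simulation of randomized 'parallel' MIS rounds.
    adj maps each vertex to the set of its neighbours.  Each round,
    every live vertex draws a random priority; local minima join the
    independent set, then winners and their neighbours drop out."""
    rng = random.Random(seed)
    alive, chosen = set(adj), set()
    while alive:
        prio = {v: rng.random() for v in alive}
        winners = {v for v in alive
                   if all(prio[v] < prio[u]
                          for u in adj[v] if u in alive)}
        chosen |= winners
        alive -= winners | {u for v in winners for u in adj[v]}
    return chosen
```

In a genuinely parallel setting each round is constant work per vertex,
and few rounds are needed; here the rounds are merely simulated in
sequence.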

------------------------------

Date: Fri 30 Sep 83 21:39:45-PDT
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: N. Wirth, Colloquium 10/4/83

                [Reprinted from the SU-SCORE bboard.]


CS COLLOQUIUM:  Niklaus Wirth will be giving the
opening colloquium of this quarter on Tuesday (Oct. 4),
at 4:15 in Terman Auditorium.  His talk is titled
"Reminiscences and Reflections".  Although there is
no official abstract, in discussing this talk with him
I learned that Reminiscences refer to his days here at
Stanford one generation ago, and Reflections are on
the current state of both software and hardware, including
his views on what's particularly good and bad in the
current research in each area.  I am looking forward to
this talk, and invite all members of our department,
and all interested colleagues, to attend.

Professor Wirth's talk will be preceded by refreshments
served in the 3rd floor lounge (in Margaret Jacks Hall)
at 3:45.  Those wishing to schedule an appointment with
Professor Wirth should contact ELYSE@SCORE.

------------------------------

Date: 30 Sep 83  1049 PDT
From: Carolyn Talcott <CLT@SU-AI>
Subject: SEMINAR IN LOGIC AND FOUNDATIONS

                [Reprinted from the SU-SCORE bboard.]


Organizational and First Meeting

Time: Wednesday, Oct. 5, 4:15-5:30 PM

Place:  Mathematics Dept. Faculty Lounge, 383N Stanford

Speaker: Ian Mason

Title: Undecidability of the metatheory of the propositional calculus.

   Before the talk there will be a discussion of plans for the seminar
this fall.
                       S. Feferman


[PS - If you read this notice on a bboard and would like to be on the
distribution list send me a message.  - CLT@SU-AI]

------------------------------

Date: Thu 29 Sep 83 14:24:36-CDT
From: Clive Dawson <CC.Clive@UTEXAS-20.ARPA>
Subject: Schedule for C.S. Dept. Centennial Symposium

               [Reprinted from the UTexas-20 bboard.]


                        COMPUTING AND THE INFORMATION AGE

                             October 20 & 21, 1983

                        Joe C. Thompson Conference Center

Thursday, Oct. 20
-----------------

8:30    Welcoming address - A. G. Dale (UT Austin)
                            G. J. Fonken, VP for Acad. Affairs and Research

9:00    Justin Rattner (Intel)
        "Directions in VLSI Architecture and Technology"

10:00   J. C. Browne (UT Austin)

10:15   Coffee Break

10:45   Mischa Schwartz (Columbia)
        "Computer Communications Networks: Past, Present and Future"

11:45   Simon S. Lam (UT Austin)

12:00   Lunch

2:00    Herb Schwetman (Purdue)
        "Computer Performance: Evaluation, Improvement, and Prediction"

3:00    K. Mani Chandy (UT Austin)

3:15    Coffee Break

3:45    William Wulf (Tartan Labs)
        "The Evolution of Programming Languages"

4:45    Don Good (UT Austin)

Friday, October 21
------------------

8:30    Raj Reddy (CMU)
        "Supercomputers for AI"

9:30    Woody Bledsoe (UT Austin)

9:45    Coffee Break

10:15   John McCarthy (Stanford)
        "Some Expert Systems Require Common Sense"

11:15   Robert S. Boyer and J Strother Moore (UT Austin)

11:30   Lunch

1:30    Jeff Ullman (Stanford)
        "A Brief History of Achievements in Theoretical Computer Science"

2:30    James Bitner (UT Austin)

2:45    Coffee Break

3:15    Cleve Moler (U. of New Mexico)
        "Mathematical Software -- The First of the Computer Sciences"

4:15    Alan Cline (UT Austin)

4:30    Summary - K. Mani Chandy, Chairman, Dept. of Computer Sciences

------------------------------

Date: Sunday, 2 October 1983 17:49:13 EDT
From: Mario.Barbacci@CMU-CS-SPICE
Subject: Call For Participation -- IFIP Workshop

                            CALL FOR PARTICIPATION
                IFIP Workshop on Hardware Supported Implementation of
                    Concurrent Languages in Distributed Systems
                        March 26-28, 1984, Bristol, U.K.

TOPICS:
- the impact of distributed computing languages and compilers on the
                architecture of distributed systems.
- operating systems; centralized/decentralized control, process
                communications and synchronization, security
- hardware design and interconnections
- hardware/software interrelation and trade offs
- modelling, measurements, and performance

Participation is by INVITATION ONLY.  If you are interested in attending this
workshop, write to the workshop chairman and include an abstract (approx.
1000 words) of your proposed contribution.

Deadline for Abstracts: November 15, 1983
Workshop Chairman:      Professor G.L. Reijns
                        Chairman, IFIP Working Group 10.3
                        Delft University of Technology
                        P.O. Box 5031
                        2600 GA Delft
                        The Netherlands

------------------------------

End of AIList Digest
********************

∂03-Oct-83  1255	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #69
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Oct 83  12:51:38 PDT
Date: Monday, October 3, 1983 9:50AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #69
To: AIList@SRI-AI


AIList Digest             Monday, 3 Oct 1983       Volume 1 : Issue 69

Today's Topics:
  Rational Psychology - Examples,
  Organization - Reflexive Reasoning & Consciousness & Learning & Parallelism
----------------------------------------------------------------------

Date: Thu, 29 Sep 83 18:29:39 EDT
From: "John B. Black" <Black@YALE.ARPA>
Subject: "Rational Psychology"


     Recently on this list, Pereira held up as a model for us all, Doyle's
"Rational Psychology" article in AI Magazine.  Actually, I think what Pereira
is really requesting is a reduction of overblown claims and assertions with no
justification (e.g., "solutions" to the natural language problem).  However,
since he raised the "rational psychology" issue I thought I would comment on it.

     I too read Doyle's article with interest (although it seemed essentially
the same as Don Norman's numerous calls for a theoretical psychology in the
early 1970s), but (like the editor of this list) I was wondering what the
referents were of the vague descriptions of "rational psychology."  However,
Doyle does give some examples of what he means: mathematical logic and
decision theory, mathematical linguistics, and mathematical theories of
perception.  Unfortunately, this list is rather disappointing because --
with the exception of the mathematical theories of perception -- they have
all proved to be misleading when actually applied to people's behavior.

     Having a theoretical (or "rational" -- terrible name with all the wrong
connotations) psychology is certainly desirable, but it does have to make some
contact with the field it is a theory of.  One of the problems here is that
the "calculus" of psychology has yet to be invented, so we don't have the tools
we need for the "Newtonian mechanics" of psychology.  The latest mathematical
candidate was catastrophe theory, but it turned out to be a catastrophe when
applied to human behavior.  Perhaps Pereira and Doyle have a "calculus"
to offer.

     Lacking such an appropriate mathematics, however, does not stop a
theoretical psychology from existing.  In fact, I offer three recent examples
of what a theoretical psychology ought to be doing at this time:

 Tversky, A.  Features of similarity.  PSYCHOLOGICAL REVIEW, 1977, 327-352.

 Schank, R.C.  DYNAMIC MEMORY.  Cambridge University Press, 1982.

 Anderson, J.R.  THE ARCHITECTURE OF COGNITION.  Harvard University Press, 1983.

------------------------------

Date: Thu 29 Sep 83 19:03:40-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Self-description, multiple levels, etc.

For a brilliant if tentative attack on the questions noted by
Prem Devanbu, see Brian Smith's thesis "Reflection and Semantics
in a Procedural Language," MIT/LCS/TR-272.

Fernando Pereira

------------------------------

Date: 27 Sep 83 22:25:33-PDT (Tue)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: reflexive reasoning ? - (nf)
Article-I.D.: uiucdcs.3004


I believe the pursuit of "consciousness" to be complicated by the difficulty
of defining what we mean by it (to state the obvious). I prefer to think in
less "spiritual" terms, say starting with the ability of the human memory to
retain impressions for varying periods of time. For example, students cramming
for an exam can remember long lists of things for a couple of hours -- just
long enough -- and forget them by the end of the same day. Some thoughts are
almost instantaneously lost, others last a lifetime.

Here's my suggestion: let's start thinking in terms of self-observation, i.e.
the construction of models to explain the traces that are left behind by things
we have already thought (and felt?). These models will be models of what goes
on in the thought processes, can be incorrect and incomplete (like any other
model), and even reflexive (the thoughts dedicated to this analysis leave
their own traces, and are therefore subject to modelling, creating notions
of self-awareness).

To give a concrete (if standard) example: it's quite reasonable for someone
to say to us, "I didn't know that." Or again, "Oh, I just said it, what was
his name again ... How can I be so forgetful!"

This leads us into an interesting "problem": the fading of human memory with
time. I would not be surprised if this were actually desirable, and had to be
emulated by computer. After all, if you're going to retain all those traces
of where a thought process has gone, traces of the analysis of those traces,
and so on, then memory would fill up very quickly.

I have been thinking in this direction for some time now, and am working on
a programming language which operates on several of the principles stated
above. At present the language is capable of responding dynamically to any
changes in problem state produced by other parts of the program, and rules
can even respond to changes induced by themselves. Well, that's the start;
the process of model construction seems to me to be by far the harder part
of the task.
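The rule-response mechanism described above can be caricatured in a few lines.
The following is a hypothetical sketch of a forward-chaining production
system, not the actual METALOG implementation: rules keep firing as long as
the working state changes, including on facts that earlier firings added.

```python
# Hypothetical sketch of a forward-chaining production system (not the
# actual METALOG implementation).  Facts are tuples in a working set;
# rules are (condition, action) pairs that fire until nothing changes.
def run(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            new = {action(f) for f in facts if condition(f)} - facts
            if new:
                facts |= new
                changed = True
    return facts

# A rule that responds to facts it produced itself: analysing a trace
# leaves a trace of the analysis, up to a fixed depth.
rules = [
    (lambda f: f[0] == "trace" and f[1] < 3,
     lambda f: ("trace", f[1] + 1)),
]
result = run({("trace", 0)}, rules)
# result holds traces at depths 0 through 3: the regress stops because
# the rule's condition bounds it, not because the mechanism is special.
```

Note that the rule analysing its own output is exactly the "traces of
self-analysis" situation described above, and no infinite regress arises.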

It becomes especially interesting when you think about modelling what look
like "levels" of self-awareness, but could actually be manifestations of just
one mechanism: traces of some work, which are analyzed, thus leaving traces
of self-analysis; which are analyzed ... How are we to decide that the traces
being analyzed are somehow different from the traces of the analysis? Even
"self-awareness" (as opposed to full-blown "consciousness") will be difficult
to understand. However, at this point I am convinced that we are not dealing
with a potential for infinite regress, but with a fairly simple mechanism
whose results are hard to interpret. If I am right, we may have some thinking
to do about subject-object distinctions.

In case you're interested in my programming language, look for some papers due
to appear shortly:

        Logic-Programming Production Systems with METALOG.  Software Practice
           and Experience, to appear shortly.

        METALOG: a Language for Knowledge Representation and Manipulation.
           Conf on AI (April '83).

Of course, I don't say that I'm thinking about "self-awareness" as a long-term
goal (my co-author isn't) ! If/when such a goal becomes acceptable to the AI
community it will probably be called something else. Doesn't "reflexive
reasoning" sound more scientific?

                                Marcel Schoppers,
                                Dept of Comp Sci,
                                U of Illinois @ Urbana-Champaign
                                uiucdcs!marcel

------------------------------

Date: 27 Sep 83 19:24:19-PDT (Tue)
From: decvax!genrad!security!linus!philabs!cmcl2!floyd!vax135!ariel!ho
      u5f!hou5e!hou5d!mat@Ucb-Vax
Subject: Re: the Halting problem.
Article-I.D.: hou5d.674

I may be naive, but it seems to me that any attempt to produce a system that
will exhibit consciousness-like behaviour will require emotions and the
underlying base that they need and supply.  Reasoning did not evolve
independently of emotions; human reason does not, in my opinion, exist
independently of them.

Any comments?  I don't recall seeing this topic discussed.  Has it been?  If
not, is it about time to kick it around?
                                                Mark Terribile
                                                hou5d!mat

------------------------------

Date: 28 Sep 83 12:44:39-PDT (Wed)
From: ihnp4!drux3!drufl!samir @ Ucb-Vax
Subject: Re: the Halting problem.
Article-I.D.: drufl.674

I agree with Mark. An interesting book to read regarding consciousness is
"The Origin of Consciousness in the Breakdown of the Bicameral Mind" by
Julian Jaynes. Although I may not agree fully with his thesis, it did
get me thinking and questioning the usual ideas regarding
consciousness.

An analogy regarding consciousness: "emotions are like the roots of a
plant, while consciousness is the fruit."

                                Samir Shah
                                AT&T Information Systems, Denver.
                                drufl!samir

------------------------------

Date: 30 Sep 83 13:42:32 EDT
From: BIESEL@RUTGERS.ARPA
Subject: Recursion of representations.


Some of the more recent messages have questioned the possibility of
producing programs which can "understand" and "create" human discourse,
because this kind of "understanding" seems to be based upon an infinite
kind of recursion. Stated very simply, the question is "how can the human
mind understand itself, given that it is finite in capacity?", which
implies that humans cannot create a machine equivalent of a human mind,
since (one assumes) understanding is required before construction
becomes possible.

There are two rather simple objections to this notion:
        1) Humans create minds every day, without understanding
           anything about it. Just some automatic biochemical
           machinery, some time, and exposure to other minds
           does the trick for human infants.

        2) John von Neumann, and more recently E.F. Codd,
           demonstrated in a very general way the existence
           of universal constructors in cellular automata.
           These are configurations in cellular space which
           are able to construct any configuration, including
           copies of themselves, in finite time (for finite
           configurations).

No infinite recursion is involved in either case, nor is "full"
understanding required.
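The von Neumann and Codd constructions are far too large to reproduce here,
but the weaker point — that self-reproduction in cellular automata needs no
infinite regress — shows up even in a toy example. Under the one-dimensional
"parity" rule (each cell becomes the XOR of its two neighbours), any seed
pattern of width at most 2^k reproduces two exact copies of itself after 2^k
steps, because the rule is additive:

```python
# One-dimensional "parity" automaton: each cell becomes the XOR of its
# two neighbours, on a circular array of cells.
def step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

width = 64
cells = [0] * width
seed = [1, 1, 0, 1]            # an arbitrary 4-cell pattern
for i, b in enumerate(seed):
    cells[30 + i] = b

for _ in range(8):             # 8 = 2^3 steps, and 2^3 >= pattern width
    cells = step(cells)

# Because the rule is additive, after 2^k steps each cell is the XOR of
# the two cells 2^k away on either side, so the seed reappears as two
# disjoint copies while the original site is erased.
left, right = cells[22:26], cells[38:42]
```

A pattern thus "constructs" two copies of itself with no understanding of its
own structure at all, which is the spirit of the objection above.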

I suspect that at some point in the game we will have learned enough about
what works (in a primarily empirical sense) to produce machine intelligence.
In the process we will no doubt learn a lot about mind in general, and our
own minds in particular, but we will still not have a complete understanding
of either.

People will continue to produce AI programs; they will gradually get better
at various tasks; others will combine various approaches and/or programs to
create systems that play chess and can talk about the geography of South
America; occasionally someone will come up with an insight and a better way
to solve a sub-problem ("subjunctive reference shift in frame-demon
instantiation shown to be optimal for linearization of semantic analysis
of noun phrases" IJCAI 1993); lay persons will come to take machine intelligence
for granted; AI people will keep searching for a better definition of
intelligence; nobody will really believe that machines have that indefinable
something (call it soul, or whatever) that is essential for a "real" mind.

                        Pete Biesel@Rutgers.arpa

------------------------------

Date: 29 Sep 83 14:14:29 EDT
From: SOO@RUTGERS.ARPA
Subject: Top-Down? Bottom-Up?

                [Reprinted from the Rutgers bboard.]


 I happened to read a paper by Michael A. Arbib about brain theory.
 Its first section, "Brain Theory: 'Bottom-up' and
 'Top-Down'", should, I think, shed some light on our issue of
 top-down and bottom-up approaches in the machine learning seminar.
 I would like to quote several remarks from the brain theorist's
 viewpoint to share with those interested:

"    I want to suggest that brain theory should confront the 'bottom-up'
analyses of neural modelling not only with biological control theory but
also with the 'top-down' analyses of artificial intelligence and cognitive
psychology. In bottom-up analyses, we take components of known function, and
explore ways of putting them together to synthesize more and more complex
systems. In top-down analyses, we start from some complex functional behavior
that interests us, and try to determine what are natural subsystems into which
we can decompose a system that performs in the specified way.  I would argue
that progress in brain theory will depend on the cyclic interaction of these
two methodologies. ..."


"  The top-down approach complements bottom-up studies, for one cannot simply
wait until one knows what all the neurons are and how they are connected
before simulating the complete system. ..."

I believe that the similar philosophy applies to the machine learning study
too.

For those interested, the paper can be found in COINS technical report 81-31
by M. A. Arbib, "A View of Brain Theory".


Von-Wun,

------------------------------

Date: Fri, 30 Sep 83 14:45:55 PDT
From: Rik Verstraete <rik@UCLA-CS>
Subject: Parallelism and Physiology

I would like to comment on your message that was printed in AIList Digest
V1#63, and I hope you don't mind if I send a copy to the discussion list
"self-organization" as well.

        Date: 23 Sep 1983 0043-PDT
        From: FC01@USC-ECL
        Subject: Parallelism

        I thought I might point out that virtually no machine built in the
        last 20 years is actually lacking in parallelism. In reality, just as
        the brain has many neurons firing at any given time, computers have
        many transistors switching at any given time. Just as the cerebellum
        is able to maintain balance without the higher brain functions in the
        cerebrum explicitly controlling the IO, most current computers have IO
        controllers capable of handling IO while the CPU does other things.

The issue here is granularity, as discussed in general terms by E. Harth
("On the Spontaneous Emergence of Neuronal Schemata," pp. 286-294 in
"Competition and Cooperation in Neural Nets," S. Amari and M.A. Arbib
(eds), Springer-Verlag, 1982, Lecture Notes in Biomathematics # 45).  I
certainly recommend his paper.  I quote:

One distinguishing characteristic of the nervous system is
thus the virtually continuous range of scales of tightly
intermeshed mechanisms reaching from the macroscopic to the
molecular level and beyond.  There are no meaningless gaps
of just matter.

I think Harth has a point, and applying his ideas to the issue of parallel
versus sequential clarifies some aspects.

The human brain seems to be parallel at ALL levels.  Not only is a large
number of neurons firing at the same time, but also groups of neurons,
groups of groups of neurons, etc. are active in parallel at any time.  The
whole neural network is a totally parallel structure, at all levels.

You pointed out (correctly) that in modern electronic computers a large
number of gates are "working" in parallel on a tiny piece of the problem,
and that also I/O and CPU run in parallel (some systems even have more than
one CPU).  However, the CPU itself is a finite state machine, meaning it
operates as a time-sequence of small steps.  This level is inherently
sequential.  It therefore looks like there's a discontinuity between the
gate level and the CPU/IO level.

I would even extend this idea to machine learning, although I'm largely
speculating now.  I have the impression that brains not only WORK in
parallel at all levels of granularity, but also LEARN in that way.  Some
computers have implemented a form of learning, but it is almost exclusively
at a very high level (most current AI work on learning is at this level),
or only at a very low level (cf. Perceptron).  A spectrum of adaptation is
needed.

Maybe the distinction between the words learning and self-organization is
only a matter of granularity too. (??)

        Just as people have faster short term memory than long term memory but
        less of it, computers have faster short term memory than long term
        memory and use less of it. These are all results of cost/benefit
        tradeoffs for each implementation, just as I presume our brains and
        bodies are.

I'm sure most people will agree that brains do not have separate memory
neurons and processing neurons or modules (or even groups of neurons).
Memory and processing are completely integrated in a human brain.
Certainly, there are not physically two types of memories, LTM and STM.
The concept of LTM/STM is only a paradigm (no doubt a very useful one), but
when it comes to implementing the concept, there is a large discrepancy
between brains and machines.

        Don't be so fast to think that real computer designers are
        ignorant of physiology.

Indeed, a lot of people I know in Computer Science do have some idea of
physiology.  (I am a CS major with some background in neurophysiology.)
Furthermore, much of the early CS emerged from neurophysiology, and was an
explicit attempt to build artificial brains (at a hardware/gate level).
However, although "real computer designers" may not be ignorant of
physiology, it doesn't mean that they actually manage to implement all the
concepts they know.  We still have a long way to go before we have
artificial brains...

        The trend towards parallelism now is more like
        the human social system of having a company work on a problem. Many
        brains, each talking to each other when they have questions or
        results, each working on different aspects of a problem. Some people
        have breakdowns, but the organization keeps going. Eventually it comes
        up with a product, although it may not really solve the problem posed
        at the beginning, it may have solved a related problem or found a
        better problem to solve.

Again, working in parallel at this level doesn't mean everything is
parallel.

                Another copyrighted excerpt from my not yet finished book on
        computer engineering modified for the network bboards, I am ever
        yours,
                                                Fred


All comments welcome.

        Rik Verstraete <rik@UCLA-CS>

PS: It may sound like I am convinced that parallelism is the only way to
go.  Parallelism is indeed very important, but still, I believe sequential
processing plays an important role too, even in brains.  But that's a
different issue...

------------------------------

End of AIList Digest
********************

∂03-Oct-83  1907	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #70
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Oct 83  19:06:31 PDT
Date: Monday, October 3, 1983 5:38PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #70
To: AIList@SRI-AI


AIList Digest            Tuesday, 4 Oct 1983       Volume 1 : Issue 70

Today's Topics:
  Technology Transfer & Research Ownership - Clarification,
  AI at Edinburgh - Description
----------------------------------------------------------------------

Date: Mon 3 Oct 83 11:55:41-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: recent flame

    I would like to clarify my recent comments on the disclaimer published
with the conference announcement for the "Intelligent Systems and Machines"
conference to be given at Oakland University. I did not mean to suggest
that the organizers of this particular conference are the targets of my
criticism; indeed, I congratulate them for informing potential attendees
of their obligations under the law. I sincerely apologize for not making
this obvious in my original note.

    I also realize that most conferences will have to deal with this issue
in the future, and meant my message not as a "call to action", but rather,
as a "call to discussion" of the proper role of government in AI and science
in general. I believe that we should follow these rules, but should
also participate in informed discussion of their long-range effect and
direction.

Apologies and regards,

David Rogers
DRogers@SUMEX-AIM.ARPA

------------------------------

Date: Friday, 30-Sep-83  14:17:58-BST
From: BUNDY HPS (on ERCC DEC-10) <bundy@edxa>
Reply-to: bundy@rutgers.arpa
Subject: Does Edinburgh AI exist?


        A while back someone in your digest asked whether the AI
dept at Edinburgh still exists. The short answer is yes it flourishes.
The long answer is contained in the departmental description that follows.
                Alan Bundy

------------------------------

Date: Friday, 30-Sep-83  14:20:00-BST
From: BUNDY HPS (on ERCC DEC-10) <bundy@edxa>
Reply-to: bundy@rutgers.arpa
Subject: Edinburgh AI Dept - A Description


THE DEPARTMENT OF ARTIFICIAL INTELLIGENCE AT EDINBURGH UNIVERSITY

Artificial Intelligence was recognised as a separate discipline by Edinburgh
University in 1966.  The Department in its present form was created in 1974.
During its existence it has steadily built up a programme of undergraduate and
post-graduate teaching and engaged in a vigorous research programme.  As the
only Department of Artificial Intelligence in any university, and as an
organisation which has made a major contribution to the development of the
subject, it is poised to play a unique role in the advance of Information
Technology which is seen to be a national necessity.

The Department collaborates closely with other departments within the
University in two distinct groupings.  Departments concerned with Cognitive
Science, namely A.I., Linguistics, Philosophy and Psychology all participate
in the School of Epistemics, which dates from the early 70's.  A new
development is an active involvement with Computer Science and Electrical
Engineering.  The 3 departments form the basis of the School of Information
Technology.  A joint MSc in Information Technology began in 1983.

A.I. are involved in collaborative activities with other institutions
which are significant in that they involve the transfer of people,
ideas and software.  In particular this involves MIT (robotics),
Stanford (natural language), Carnegie-Mellon (the PERQ machine) and
Grenoble (robotics).

Relationships with industry are progressing.  As well as a number of
development contracts, A.I. have recently had a teaching post funded by the
software house Systems Designers Ltd.  There is, however, a natural limit to
the extent to which a University Department can provide a service to industry:
consequently a proposal to create an Artificial Intelligence Applications
Institute has been put forward and is at an advanced stage of planning.  This
will operate as a revenue earning laboratory, performing a technology transfer
function on the model of organisations like the Stanford Research Institute or
Bolt Beranek and Newman.

Research in A.I.

A.I. is a new subject, so there is a very close relationship between teaching
at all levels and research.  Artificial Intelligence is about making
machines behave in ways which exhibit some of the characteristics of
intelligence, and about how to integrate such capabilities into larger
coherent systems.  The vehicle for such studies has been the digital computer,
chosen for its flexibility.

A.I. Languages and Systems.

The development of high level programming languages has been crucial to all
aspects of computing because of the consequent easing of the task of
communicating with these machines.  Artificial Intelligence has given birth to
a distinctive series of languages which satisfy different design constraints
to those developed by Computer Scientists whose primary concern has been to
develop languages in which to write reliable and efficient programming systems
to perform standard computing tasks.  Languages developed in the Artificial
Intelligence field have been intended to allow people readily to try out ideas
about how a particular cognitive process can be mechanised.  Consequently they
have provided symbolic computation as well as numeric, and have allowed
program code and data to be equally manipulable.  They are also highly
interactive, and often integrated with a sophisticated text editor, so that
the iteration time for trying out a new idea can be rapid.

Edinburgh has made a substantial contribution to A.I. programming languages
(with significant cross fertilisation to the Computer Science world) and will
continue to do so.  POP-2 was designed and developed in the A.I. Department
by Popplestone and Burstall.  The development of Prolog has been more complex.
Kowalski first formulated the crucial idea of predicate logic as a programming
language during his period in the A.I. Department.  Prolog itself was designed
and first implemented in Marseille, as a result of Kowalski's interaction with
a research group there.  This was followed by a re-implementation at
Edinburgh, which demonstrated its potential as a practical tool.

To date the A.I. Department have supplied implementations of A.I. languages
to over 200 laboratories around the world, and are involved in an active
programme of Prolog systems development.

The current development in languages is being undertaken by a group supported
by the SERC, led by Robert Rae, and supervised by Dr Howe.  The concern of the
group is to provide language support for A.I. research nationwide, and to
develop A.I. software for a single user machine, the ICL PERQ.  The major goal
of this project is to provide the superior symbolic programming capability of
Prolog, in a user environment of the quality to be found in modern personal
computers with improved interactive capabilities.

Mathematical Reasoning.

If Artificial Intelligence is about mechanising reasoning, it has a close
relationship with logic which is about formalising mathematical reasoning, and
with the work of those philosophers who are concerned with formalising
every-day reasoning.  The development of Mathematical Logic during the 20th
century has provided a part of the theoretical basis for A.I.  Logic provides a
rigorous specification of what may in principle be deduced - it says little
about what may usefully be deduced.  And while it may superficially appear
straightforward to render ordinary language into logic, on closer examination
it can be seen to be anything but easy.

Nevertheless, logic has played a central role in the development of A.I. in
Edinburgh and elsewhere.  An early attempt to provide some control over the
direction of deduction was the resolution principle, which introduced a sort
of matching procedure called unification between parts of the axioms and parts
of a theorem to be proved.  While this principle was inadequate as a means of
guiding a machine in the proof of significant theorems, it survives in Prolog
whose equivalent of procedure call is a restricted form of resolution.
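The unification step mentioned above can be sketched in a few lines. This is
a minimal rendering of Robinson's algorithm (with the occurs check omitted
for brevity), writing variables as strings beginning with '?' and compound
terms as tuples headed by the functor name:

```python
# A minimal unification sketch (Robinson's algorithm, occurs check
# omitted).  Variables are strings starting with '?'; compound terms
# are tuples whose first element is the functor name.
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, s):
    # Follow variable bindings in substitution s to the final value.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(x, y, s=None):
    s = {} if s is None else s
    x, y = walk(x, s), walk(y, s)
    if x == y:
        return s
    if is_var(x):
        return {**s, x: y}
    if is_var(y):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            s = unify(a, b, s)
            if s is None:
                return None          # subterms clash: no unifier
        return s
    return None                      # functor or arity mismatch

# likes(?X, mary) unifies with likes(john, ?Y) under {?X/john, ?Y/mary}.
subst = unify(("likes", "?X", "mary"), ("likes", "john", "?Y"))
```

In Prolog this matching of a goal against a clause head is precisely the
restricted resolution step that plays the role of a procedure call.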

A.I. practitioners still regard the automation of mathematical reasoning as a
crucial area in A.I., but have moved from earlier attempts to find uniform
procedures for an efficient search of the space of possible deductions to the
creation of systems which embody expert knowledge about specific domains.  For
example, if such a system is trying to solve a (non-linear) equation, it may
adopt a strategy of using the axioms of algebra to bring two instances of the
unknown closer together with the "intention" of getting them to coalesce.
Work in mathematical reasoning is under the direction of Dr Bundy.
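The strategy of making two occurrences of the unknown coalesce can be
illustrated with a toy rewriter over expression trees. The single rule below
(u*a + u*b -> u*(a+b)) is a hypothetical stand-in for the algebraic axiom set
such a system would draw on; it is chosen because applying it strictly
reduces the number of occurrences of the unknown:

```python
# Toy illustration of making occurrences of an unknown coalesce.
# Expressions are nested tuples headed by an operator symbol; the one
# rewrite rule here (u*a + u*b -> u*(a+b)) stands in for a real axiom
# set and merges two occurrences of the unknown into one.
def count_x(e):
    # Count occurrences of the unknown "x" in an expression tree.
    if e == "x":
        return 1
    if isinstance(e, tuple):
        return sum(count_x(a) for a in e[1:])
    return 0

def collect(e):
    if isinstance(e, tuple) and e[0] == "+":
        _, l, r = e
        if (isinstance(l, tuple) and isinstance(r, tuple)
                and l[0] == r[0] == "*" and l[1] == r[1]):
            return ("*", l[1], ("+", l[2], r[2]))
    if isinstance(e, tuple):
        return (e[0],) + tuple(collect(a) for a in e[1:])
    return e

before = ("+", ("*", "x", "a"), ("*", "x", "b"))   # x*a + x*b
after = collect(before)                            # x*(a+b)
```

Once only one occurrence of the unknown remains, the equation can be solved
by straightforward isolation, which is why rules of this kind are worth
applying first.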

Robotics.

The Department has always had a lively interest in robotics, in particular in
the use of robots for assembly.  This includes the use of vision and force
sensing, and the design of languages for programming assembly robots.  Because
of the potential usefulness of fast moving robots, the Department has
undertaken a study of their dynamic behaviour, design and control.  The work
of the robot group is directed by Mr Popplestone.

A robot command language RAPT is under development:  this is intended to make
it easy for non-computer experts to program an assembly robot.  The idea is
that the assembly task should be programmed in terms of the job that is to be
done and how the objects are to be fitted together, rather than in terms of
how the manipulator should be moved.  This SERC funded work is steered by a
Robot Language Working Party which consists of industrialists and academics;
the recently formed Tripartite Study Group on Robot Languages extends the
interest to France and Germany.

An intelligent robot needs to have an internal representation of its world
which is sufficiently accurate to allow it to predict the results of planned
actions.  This means that, among other things, it needs a good representation
of the shapes of bodies.  While conventional shape modelling techniques permit
a hypothetical world to be represented in a computer they are not ideal for
robot applications, and the aim at Edinburgh is to combine techniques of shape
modelling with techniques used in A.I. so that the advantages of both may be
used.  This will include the ability to deal effectively with uncertainty.

Recently, in collaboration with GEC, the robotics group have begun to consider
how the techniques of spatial inference which have been developed can be
extended into the area of mechanical design, based on the observation that the
essence of any design is the relationship between part features, rather than
the specific quantitative details.  A proposal is being pursued for a
demonstrator project to produce a small scale, but highly integrated "Design
and Make" system on these lines.

Work on robot dynamics, also funded by the SERC, has resulted in the
development of highly efficient algorithms for simulating standard serial
robots, and in a novel representation of spatial quantities, which greatly
simplifies the mathematics.

Vision and Remote Sensing.

The interpretation of data derived from sensors depends on expectations about
the structure of the world which may be of a general nature, for example that
continuous surfaces occupy much of the scene, or specific.  In manufacture the
prior expectations will be highly specific: one will know what objects are
likely to be present and how they are likely to be related to each other.  One
vision project in the A.I. Department is taking advantage of this in
integrating vision with the RAPT development in robotics - the prior
expectations are expressed by defining body geometry in RAPT, and by defining
the expected inter-body relationships in the same medium.

A robot operating in a natural environment will have much less specific
expectations, and the A.I. Department collaborate with Heriot-Watt
University to study the sonar-based control of a submersible.  This involves
building a world representation by integrating stable echo patterns, which are
interpreted as objects.

Natural Language.

A group working in the Department of A.I. and related departments in the School
of Epistemics is studying the development of computational models of language
production, the process whereby communicative intent is transformed into
speech.  The most difficult problems to be faced when pursuing this goal cover
the fundamental issues of computation:  structure and process.  In the domain
of linguistic modelling, these are the questions of representation of
linguistic and real world knowledge, and the understanding of the planning
process which underlies speaking.

Many sorts of knowledge are employed in speaking - linguistic knowledge of how
words sound, of how to order the parts of a sentence to communicate who did
what to whom, of the meaning of words and phrases, and common sense knowledge
of the world.  Representing all of these is prerequisite to using them in a
model of language production.

On the other hand, planning provides the basis for approaching the issue of
organizing and controlling the production process, for the mind seems to
produce utterances as the synthetic, simultaneous resolution of numerous
partially conflicting goals - communicative goals, social goals, purely
linguistic goals - all variously determined and related.

The injection of dynamic concerns into what has heretofore been an
essentially static enterprise opens vast potential for dramatic change in the
study of human language, and the A.I. Department sees its work as attempting
to realise some of that potential.  The study of natural
language processing in the department is under the direction of Dr Thompson.

Planning Systems.

General purpose planning systems for automatically producing plans of action
for execution by robots have been a long standing theme of A.I. research.  The
A.I. Department at Edinburgh had a very active programme of planning research
in the mid 1970s and was one of the leading international centres in this
area.  The Edinburgh planners were applied to the generation of project plans
for large industrial activities (such as electricity turbine overhaul
procedures).  These planners have continued to provide an important source of
ideas for later research and development in the field.  A prototype planner in
use at NASA's Jet Propulsion Laboratory which can schedule the activities of a
Voyager-type planetary probe is based on Edinburgh work.

New work on planning has recently begun in the Department and is mainly
concerned with the interrelationships between planning, plan execution and
monitoring.  The commercial exploitation of the techniques is also being
discussed.  The Department's planning work is under the direction of Dr Tate.

Knowledge Based and Expert Systems.

Much of the A.I. Department's work uses techniques often referred to as
Intelligent Knowledge Based Systems (IKBS) - this includes robotics, natural
language, planning and other activities.  However, researchers in the
Department of A.I. are also directly concerned with the creation of Expert
Systems in Ecological Modelling, User Aids for Operating Systems, Sonar Data
Interpretation, etc.

Computers in Education.

The Department has pioneered in this country an approach to the use of
computers in schools in which children can engage in an active and creative
interaction with the computer without needing to acquire abstract concepts and
manipulative skills for which they are not yet ready.  The vehicle for this
work has been the LOGO language, which has a simple syntax making few demands
on the typing skills of children.  While LOGO is in fact equivalent to a
substantial subset of LISP, a child can get moving with a very small subset of
the language, and one which makes the actions of the computer immediately
concrete in the movements of a "turtle", which can either be steered around
a VDU screen or take the form of a small mobile robot.

This approach has a significant value in Special Education.  For example in
one study an autistic boy found he was able to communicate with a "turtle",
which apparently acted as a metaphor for communicating with people, resulting
in his being able to use language spontaneously for the first time.  In
another study involving mildly mentally and physically handicapped youngsters
a touch screen device invoked procedures for manipulating pictorial materials
designed to teach word attack skills to non-readers.  More recent projects
include a diagnostic spelling program for dyslexic children, and a suite of
programs which deaf children can use to manipulate text to improve their
ability to use language expressively.  Much of the Department's Computers in
Education work is under the direction of Dr Howe.

Teaching in the Department of A.I.

The Department is involved in an active teaching programme at undergraduate
and postgraduate level.  At undergraduate level, there are A.I.  first, second
and third year courses.  There is a joint honours degree with the Department
of Linguistics.  A large number of students are registered with the Department
for postgraduate degrees.  An MSc/PhD in Cognitive Science is provided in
collaboration with the departments of Linguistics, Philosophy and Psychology
under the aegis of the School of Epistemics.  The Department contributes two
modules on this:  Symbolic Computation and Computational Linguistics.  This
course has been accepted as a SERC supported conversion course.  In October
1983 a new MSc programme in IT started.  This is a joint activity with the
Departments of Computer Science and Electrical Engineering.  It has a large
IKBS content which is supported by SERC.

Computing Facilities in the Department of A.I.

Computing requirements of researchers are being met largely through the
SERC DEC-10 situated at the Edinburgh Regional Computing Centre or residually
through use of UGC facilities.  Undergraduate computing for A.I. courses is
supported by the EMAS facilities at ERCC.  Postgraduate computing on courses
is mainly provided through a VAX 11/750 Berkeley 4.1BSD UNIX system within the
Department.  Several groups in the Department use the ICL PERQ single user
machine.  A growth in the use of this and other single user machines is
envisaged over the next few years.  The provision of shared resources to these
systems in a way which allows for this growth in an orderly fashion is a
problem the Department wishes to solve.

It is anticipated that several further multi-user computers will soon be
installed - one at each site of the Department - to act as the hub of future
computing provision for the research pursued in Artificial Intelligence.

------------------------------

End of AIList Digest
********************

∂06-Oct-83  1525	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #71
Received: from SRI-AI by SU-AI with TCP/SMTP; 6 Oct 83  15:25:33 PDT
Date: Thursday, October 6, 1983 9:55AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #71
To: AIList@SRI-AI


AIList Digest            Thursday, 6 Oct 1983      Volume 1 : Issue 71

Today's Topics:
  Humor - The Lightbulb Issue in AI,
  Reports - Edinburgh AI Memos,
  Rational Psychology,
  Halting Problem,
  Artificial Organisms,
  Technology Transfer,
  Seminar - NL Database Updates
----------------------------------------------------------------------

Date: 6 Oct 83 0053 EDT (Thursday)
From: Jeff.Shrager@CMU-CS-A
Subject: The lightbulb issue in AI.

How many AI people does it take to change a lightbulb?

At least 55:

   The problem space group (5):
        One to define the goal state.
        One to define the operators.
        One to describe the universal problem solver.
        One to hack the production system.
        One to indicate about how it is a model of human lightbulb
         changing behavior.

   The logical formalism group (16):
        One to figure out how to describe lightbulb changing in
         first order logic.
        One to figure out how to describe lightbulb changing in
         second order logic.
        One to show the adequacy of FOL.
        One to show the inadequacy of FOL.
        One to show that lightbulb logic is non-monotonic.
        One to show that it isn't non-monotonic.
        One to show how non-monotonic logic is incorporated in FOL.
        One to determine the bindings for the variables.
        One to show the completeness of the solution.
        One to show the consistency of the solution.
        One to show that the two just above are incoherent.
        One to hack a theorem prover for lightbulb resolution.
        One to suggest a parallel theory of lightbulb logic theorem
         proving.
        One to show that the parallel theory isn't complete.
        ...ad infinitum (or absurdum as you will)...
        One to indicate how it is a description of human lightbulb
         changing behavior.
        One to call the electrician.

   The robotics group (10):
        One to build a vision system to recognize the dead bulb.
        One to build a vision system to locate a new bulb.
        One to figure out how to grasp the lightbulb without breaking it.
        One to figure out how to make a universal joint that will permit
         the hand to rotate 360+ degrees.
        One to figure out how to make the universal joint go the other way.
        One to figure out the arm solutions that will get the arm to the
         socket.
        One to organize the construction teams.
        One to hack the planning system.
        One to get Westinghouse to sponsor the research.
        One to indicate about how the robot mimics human motor behavior
         in lightbulb changing.

   The knowledge engineering group (6):
        One to study electricians' changing lightbulbs.
        One to arrange for the purchase of the lisp machines.
        One to assure the customer that this is a hard problem and
         that great accomplishments in theory will come from his support
         of this effort. (The same one can arrange for the fleecing.)
        One to study related research.
        One to indicate about how it is a description of human lightbulb
         changing behavior.
        One to call the lisp hackers.

   The Lisp hackers (13):
        One to bring up the chaos net.
        One to adjust the microcode to properly reflect the group's
         political beliefs.
        One to fix the compiler.
        One to make incompatible changes to the primitives.
        One to provide the Coke.
        One to rehack the Lisp editor/debugger.
        One to rehack the window package.
        Another to fix the compiler.
        One to convert code to the non-upward compatible Lisp dialect.
        Another to rehack the window package properly.
        One to flame on BUG-LISPM.
        Another to fix the microcode.
        One to write the fifteen lines of code required to change the
         lightbulb.

   The Psychological group (5):
        One to build an apparatus which will time lightbulb
         changing performance.
        One to gather and run subjects.
        One to mathematically model the behavior.
        One to call the expert systems group.
        One to adjust the resulting system so that it drops the
         right number of bulbs.

[My apologies to groups I may have neglected.  Pages to code before
 I sleep.]

------------------------------

Date: Saturday, 1-Oct-83  15:13:42-BST
From: BUNDY HPS (on ERCC DEC-10) <bundy@edxa>
Reply-to: bundy@rutgers.arpa
Subject: Edinburgh AI Memos


        If you want to receive a regular abstracts list and order form
for Edinburgh AI technical reports then write (steam mail I'm afraid)
to Margaret Pithie, Department of Artificial Intelligence, Forrest
Hill, Edinburgh, Scotland.  Give your name and address and ask to be put
on the mailing list for abstracts.

                        Alan Bundy

------------------------------

Date: 29 Sep 83 22:49:18-PDT (Thu)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: Rational Psychology - (nf)
Article-I.D.: uiucdcs.3046


The book mentioned, Metaphors We Live By, was written by George Lakoff
and Mark Johnson.  It contains some excellent ideas and is written in a
style that makes for fast, enjoyable reading.

--Rick Dinitz
uicsl!dinitz

------------------------------

Date: 28 Sep 83 10:32:35-PDT (Wed)
From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Rational Psychology [and Reply]


I must say it's been exciting listening to the analysis of what "Rational
Psychology" might mean or should not mean.  Should I go read the actual
article that started it all?  Perish the thought.  Is psychology rational?
Someone said that all sciences are rational, a moot point, but not that
relevant unless one wishes to consider Psychology a science.  I do not.
This does not mean that psychologists are in any way inferior to chemists
or to REAL scientists like those who study physics.  But I do think there
is a difference IN KIND between these fields and psychology.  Very few of
us have any close intimate relationships with carbon compounds or
interstellar gas clouds.  (At least not since the waning of the LSD era.)
But with psychology, anyone NOT in this category has no business in the
field.  (I presume we are talking Human psychology.)

The way this difference might exert itself is quite hard to predict, tho
in my brief foray into psychology it was not so hard to spot.  The great
danger is a highly amplified form of anthropomorphism which leads one to
form technical opinions quite possibly unrelated to technical or theoretical
analysis.  In physics, there is a superficially similar process in which
the scientist develops a theory which seems to be a "pet theory" and then
sets about trying to show it true or false.  The difference is that the
physicist developed his pet theory from technical origins rather than from
personal experience.  There is no other origin for his ideas unless you
speculate that people have an inborn understanding of psi-mesons or spin
orbitals.  Such theories MUST have developed from these ideas.  In
psychology, the theory may well have been developed from a big scary dog
when the psychologist was two.  THAT is a difference in kind, and I think
that is why I will always be suspicious of psychologists.
----GaryFostel----

[I think that is precisely the point of the call for rational psychology.
It is an attempt to provide a solid theoretical underpinning based on
the nature of mind, intelligence, emotions, etc., without regard to
carbon-based implementations or the necessity of explaining human psychoses.
As such, rational psychology is clearly an appropriate subject for
AIList and net.ai.  Traditional psychology, and subjective attacks or
defenses of it, are less appropriate for this forum.  -- KIL]

------------------------------

Date: 2 Oct 83 1:42:26-PDT (Sun)
From: ihnp4!ihuxv!portegys @ Ucb-Vax
Subject: Re: the Halting problem
Article-I.D.: ihuxv.565

I think that the answer to the halting problem in intelligent
entities is that there must exist a mechanism for telling it
whether its efforts are getting it anywhere, i.e. something that
senses its internal state and says if things are getting better,
worse, or whatever.  Normally for humans, if a "loop" were to
begin, it should soon be broken by concerns like "I'm hungry
now, let's eat".  No amount of cogitation makes that feeling
go away.

I would rather call this mechanism need than emotion, since I
think that some emotions are learned.

So then, needs supply two uses to intelligence: (1) they supply
a direction for the learning which is a necessary part of
intelligence, and (2) they keep the intelligence from getting
bogged down in fruitless cogitation.
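The monitoring mechanism described here (an internal check that lets a pressing need break a fruitless loop) can be sketched in a few lines. This is a toy illustration of the idea, not anything proposed in the message; the function names and the effort bound are all invented.

```python
# Toy sketch: cogitation bounded by "needs" that sense internal state
# and interrupt fruitless loops.  All names here are invented.

def cogitate(step, satisfied, needs, max_effort=1000):
    """Repeat a reasoning step, letting needs interrupt the loop.

    step()          one unit of cogitation
    satisfied()     True once the goal state is reached
    needs           list of (name, pressing) pairs; pressing(effort)
                    is True when the need can no longer be ignored
    """
    for effort in range(max_effort):
        if satisfied():
            return "done"
        for name, pressing in needs:
            if pressing(effort):
                return "interrupted: " + name  # "I'm hungry now, let's eat"
        step()
    return "gave up"  # hard effort bound as a last resort

# A loop that never converges is broken by a mounting need:
result = cogitate(step=lambda: None,
                  satisfied=lambda: False,
                  needs=[("hunger", lambda effort: effort >= 50)])
# result == "interrupted: hunger"
```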

             Tom Portegys
             Bell Labs, IH
             ihuxv!portegys

------------------------------

Date: 3 Oct 83 20:22:47 EDT  (Mon)
From: Speaker-To-Animals <speaker%umcp-cs@UDel-Relay>
Subject: Re:  Artificial Organisms

Why would we want to create machines equivalent to people when
organisms already have a means to reproduce themselves?

Because then we might be able to make them SMARTER than humans
of course!  We might also learn something about ourselves along
the way too.

                                                        - Speaker

------------------------------

Date: 30 Sep 83 1:16:31-PDT (Fri)
From: decvax!genrad!mit-eddie!barmar @ Ucb-Vax
Subject: November F&SF
Article-I.D.: mit-eddi.774

Some of you may be interested in reading Isaac Asimov's article in the
latest (November, I think) Magazine of Fantasy and Science Fiction.  The
article is entitled "More Thinking about Thinking", and is the Good
Doctor's views on artificial intelligence.  He makes a very good case
for the idea that non-human thinking (e.g. in computers and
dolphins) is likely to be very different from, and perhaps superior to, human
thinking.  He uses an effective analogy to locomotion: artificial
locomotion, namely the wheel, is completely unlike anything found in
nature.
--
                        Barry Margolin
                        ARPA: barmar@MIT-Multics
                        UUCP: ..!genrad!mit-eddie!barmar

------------------------------

Date: Mon, 3 Oct 83 23:17:18 EDT
From: Brint Cooper (CTAB) <abc@brl-bmd>
Subject: Re:  Alas, I must flame...

I don't believe, as you assert, that the motive for clearing
papers produced under DOD sponsorship is 'economic' but, alas,
military.  You may then justly argue the merits of non-export
of things militarily important vs. the benefits which accrue
to all of us from a free and open exchange.

I'm not taking sides--yet--but am trying to see the issue
clearly defined.

Brint

------------------------------

Date: Tue, 4 Oct 83 8:16:20 EDT
From: Earl Weaver (VLD/VMB) <earl@brl-vat>
Subject: Flame on DoD

No matter what David Rogers @ sumex-aim thinks, the DoD "review" of all papers
before publishing is not to keep information private, but to make sure no
classified stuff gets out where it shouldn't be and to identify any areas
of personal opinion or thinking that could be construed to be official DoD
policy or position.  I think it will have very little effect on actually
restricting information.

As with most research organizations, the DoD researchers are not immune to the
powers of the bean counters and must publish.

------------------------------

Date: Mon 3 Oct 83 16:44:24-PDT
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Ph.D. oral

                      [Reprinted from the SU-SCORE bboard.]



                          Computer Science Department

                           Ph.D. Oral, Jim Davidson

                         October 18, 1983 at 2:30 p.m.

                            Rm. 303, Building 200

                Interpreting Natural Language Database Updates

Although the problems of querying databases in natural language are well
understood, the performance of database updates via natural language introduces
additional difficulties.  This talk discusses the problems encountered in
interpreting natural language updates, and describes an implemented system that
performs simple updates.

The difficulties associated with natural language updates result from the fact
that the user will naturally phrase requests with respect to his conception of
the domain, which may be a considerable simplification of the actual underlying
database structure.  Updates that are meaningful and unambiguous from the
user's standpoint may not translate into reasonable changes to the underlying
database.

The PIQUE system (Program for Interpretation of Queries and Updates in English)
operates by maintaining a simple model of the user, and interpreting update
requests with respect to that model.  For a given request, a limited set of
"candidate updates"--alternative ways of fulfilling the request--are
considered, and ranked according to a set of domain-independent heuristics that
reflect general properties of "reasonable" updates.  The leading candidate may
be performed, or the highest ranking alternatives presented to the user for
selection.  The resultant action may also include a warning to the user about
unanticipated side effects, or an explanation for the failure to fulfill a
request.
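The candidate-update ranking described above can be caricatured with a toy scorer: each alternative way of fulfilling the request is scored by domain-independent heuristics and the leading candidate chosen. The heuristic names and weights below are invented for illustration; they are not PIQUE's actual criteria.

```python
# Toy scorer for "candidate updates": alternative ways of fulfilling a
# natural language update request, ranked by domain-independent
# heuristics.  The heuristics and weights are invented, not PIQUE's.

def rank_candidates(candidates):
    """Return candidates best-first; each is a dict describing one way
    of changing the database to fulfil the user's request."""
    def score(c):
        s = 0.0
        s -= 2 * c["tuples_changed"]              # prefer minimal change
        s -= 5 * c["side_effects"]                # penalize side effects
        s += 3 if c["matches_user_view"] else 0   # favour the user's model
        return s
    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"name": "update base relation", "tuples_changed": 1,
     "side_effects": 0, "matches_user_view": True},
    {"name": "rewrite join result", "tuples_changed": 4,
     "side_effects": 2, "matches_user_view": False},
]
best = rank_candidates(candidates)[0]["name"]
# best == "update base relation"
```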

This talk describes the PIQUE system in detail, presents examples of its
operation, and discusses the effectiveness of the system with respect to
coverage, accuracy, efficiency, and portability.  The range of behaviors
required for natural language update systems in general is discussed, and
implications of updates on the design of data models are briefly considered.

------------------------------

End of AIList Digest
********************

∂10-Oct-83  1623	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #72
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Oct 83  16:22:34 PDT
Date: Monday, October 10, 1983 10:16AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #72
To: AIList@SRI-AI


AIList Digest            Monday, 10 Oct 1983       Volume 1 : Issue 72

Today's Topics:
  Administrivia - AIList Archives,
  Music & AI - Request,
  NL - Semantic Chart Parsing & Simple English Grammar,
  AI Journals - Address of "Artificial Intelligence",
  Alert - IEEE Computer Issue,
  Seminars - Stanfill at Univ. of Maryland, Zadeh at Stanford,
  Commonsense Reasoning
----------------------------------------------------------------------

Date: Sun 9 Oct 83 18:03:24-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: AIList Archives

The archives have grown to the point that I can no longer
keep them available online.  I will keep the last three month's
issues available in <ailist>archive.txt on SRI-AI.  Preceding
issues will be backed up on tape, and will require about a
day's notice to recover.  The tape archive will consist of
quarterly composites (or smaller groupings, if digest activity
gets any higher than it has been).  The file names will be of
the form AIL1N1.TXT, AIL1N19.TXT, etc.  All archives will be in
the MMAILR mailer format.

The online archive may be obtained via FTP using anonymous login.
Since a quarterly archive can be very large (up to 300 disk pages)
it will usually be better to ask me for particular issues than to
FTP the whole file.

                                        -- Ken Laws

------------------------------

Date: Thu, 25 Aug 83 00:07:53 PDT
From: uw-beaver!utcsrgv!nixon@LBL-CSAM
Subject: AIList Archive- Univ. of Toronto

[I previously put out a request for online archives that could
be obtained by anonymous FTP.  There were very few responses.
Perhaps this one will be of use.  -- KIL]


Dear Ken,
  Copies of the AIList Digest are kept in directory /u5/nixon/AIList
with file names V1.5, V1.40, etc.  Our uucp site name is "utcsrgv".
This is subject to change in the very near future as the AI group at the
University of Toronto will be moving to a new computer.
  Brian Nixon.

------------------------------

Date: 4 Oct 83 9:23:38-PDT (Tue)
From: hplabs!hao!cires!nbires!ut-sally!riddle @ Ucb-Vax
Subject: Re: Music & AI, pointers wanted
Article-I.D.: ut-sally.86

How about posting the results of the music/ai poll to the net?  There
have been at least two similar queries in recent memory, indicating at
least a bit of general interest.

[...]

                                 -- Prentiss Riddle
                                    {ihnp4,kpno,ctvax}!ut-sally!riddle
                                    riddle@ut-sally.UUCP

------------------------------

Date: 5 Oct 83 19:54:32-PDT (Wed)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: Re: NL argument between STLH and Per - (nf)
Article-I.D.: uiucdcs.3132

I've heard of "syntactic chart parsing," but what is "semantic chart
parsing?"  It sounds interesting, and I'd like to hear about it.

I'm also interested in seeing your paper.  Please make arrangements with me
via net mail.

Rick Dinitz
U. of Illinois
...!uicsl!dinitz

------------------------------

Date: 3 Oct 83 18:39:00-PDT (Mon)
From: pur-ee!ecn-ec.davy @ Ucb-Vax
Subject: WANTED: Simple English Grammar - (nf)
Article-I.D.: ecn-ec.1173


Hello,

        I am looking for a SIMPLE set of grammar rules for English.  To
be specific, I'm looking for something of the form:

                SENT = NP + VP ...
                  NP = DET + ADJ + N ...
                  VP = ADV + V + DOBJ ...

                      etc.

I would prefer a short set of rules, something on the order of one or two
hundred lines.  I realize that this isn't enough to cover the whole English
language; I don't want it to.  I just want something which could handle
"simple" sentences, such as "The cat chased the mouse", etc.  I would like
to have rules for questions included, so that something like "What does a
hen weigh?" can be covered.
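Rules written in exactly this SENT = NP + VP style can be made directly executable with a short recognizer. The sketch below is illustrative only: the rule set and five-word lexicon are invented, and the parser just tries each right-hand side top-down.

```python
# Minimal recognizer for rules written in the "SENT = NP + VP" style
# above.  The rule set and five-word lexicon are toy inventions, not a
# serious grammar of English.

GRAMMAR = {
    "SENT": [["NP", "VP"]],
    "NP":   [["DET", "N"], ["DET", "ADJ", "N"]],
    "VP":   [["V", "NP"], ["V"]],
}
LEXICON = {
    "the": "DET", "a": "DET", "big": "ADJ",
    "cat": "N", "mouse": "N", "chased": "V",
}

def parse(symbol, words, i):
    """Yield every word position reachable after matching symbol at i."""
    if symbol in GRAMMAR:                # non-terminal: try each rule
        for rhs in GRAMMAR[symbol]:
            positions = [i]
            for part in rhs:             # thread positions through the rule
                positions = [k for j in positions
                               for k in parse(part, words, j)]
            yield from positions
    elif i < len(words) and LEXICON.get(words[i]) == symbol:
        yield i + 1                      # terminal consumed one word

def accepts(sentence):
    words = sentence.lower().split()
    return len(words) in parse("SENT", words, 0)

# accepts("The cat chased the mouse") -> True
```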

        I've scoured our libraries here, and have only found one book with
a grammar for English in it, and it's much more complex than what I want.
Any pointers to books/magazines or grammars themselves would be greatly
appreciated.

Thanks in advance (as the saying goes)
--Dave Curry
decvax!pur-ee!davy
eevax.davy@purdue

------------------------------

Date: 6 Oct 83 17:21:29-PDT (Thu)
From: ihnp4!cbosgd!cbscd5!lvc @ Ucb-Vax
Subject: Address of "Artificial Intelligence"
Article-I.D.: cbscd5.739

Here is the address of "Artificial Intelligence" if anyone is interested:

    Artificial Intelligence  (bi-monthly $136 -- Ouch !)
    North-Holland Publishing Co.,
    Box 211, 1000 AE
    Amsterdam, Netherlands.

    Editors D.G. Bobrow, P.J. Hayes

    Advertising, book reviews, circulation 1,100

    Also avail. in microform from

    Microforms International Marketing Co.
    Maxwell House
    Fairview Park
    Elmsford NY 10523

    Indexed: Curr. Cont.

Larry Cipriani
cbosgd!cbscd5!lvc

[There is a reduced rate for members of AAAI. -- KIL]

------------------------------

Date: Sun 9 Oct 83 17:45:52-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: IEEE Computer Issue

Don't miss the October 1983 issue of IEEE Computer.  It is a
special issue on knowledge representation, and includes articles
on learning, logic, and other related topics.  There is also a
short list of 30 expert systems on p. 141.

------------------------------

Date: 8 Oct 83 04:18:04 EDT  (Sat)
From: Bruce Israel <israel%umcp-cs@UDel-Relay>
Subject: University of Maryland AI talk

        [Reprinted from the University of Maryland BBoard]

The University of Maryland Computer Science Dept. is starting an
informal AI seminar, meeting every other Thursday in Room 2330,
Computer Science Bldg, at 5pm.

The first meeting will be held Thursday, October 13.  All are welcome
to attend.  The abstract for the talk follows.

                              MAL: My AI Language

                                Craig Stanfill
                        Department of Computer Science
                            University of Maryland
                            College Park, MD 20742

     In the course of writing my thesis, I implemented an AI language, called
MAL, for manipulating symbolic expressions.  MAL runs in the University of
Maryland Franz Lisp Environment on a VAX 11/780 under Berkeley Unix (tm) 4.1.
MAL is of potential benefit in knowledge representation research, where it
allows the development and testing of knowledge representations without
building an inference engine from scratch, and in AI education, where it
should allow students to experiment with a simple AI programming language.
MAL provides for:

1.   The representation of objects and queries as symbolic expressions.
     Objects are recursively constructed from sets, lists, and bags of atoms
     (as in QLISP).  A powerful and efficient pattern matcher is provided.

2.   The rule-directed simplification of expressions.  Limited facilities for
     depth-first search are provided.

3.   Access to a database.  Rules can assert and fetch simplifications of
     expressions.  The database also employs a truth maintenance system.

4.   The construction of large AI systems by the combination of simpler
     modules called domains.  For each domain, there is a database, a set of
     rules, and a set of links to other domains.

5.   A set of domains which are generally useful, especially for spatial
     reasoning.  This includes domains for solid and linear geometry, and for
     algebra.

6.   Facilities which allow the user to customize MAL (to a degree).  Calls to
     arbitrary LISP functions are supported, allowing the language to be
     easily extended.
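The rule-directed simplification in point 2 can be illustrated with a toy rewrite engine: rules pair a pattern with a replacement, and matching rules are applied until no more fire. This is a sketch in the spirit of the abstract, not MAL itself; the pattern notation and the algebra rules are invented.

```python
# Toy rule-directed simplifier in the spirit of point 2 above (not MAL
# itself).  Expressions are nested tuples; pattern variables start "?".

RULES = [
    (("+", "?x", 0), "?x"),          # x + 0  ->  x
    (("*", "?x", 1), "?x"),          # x * 1  ->  x
    (("*", "?x", 0), 0),             # x * 0  ->  0
]

def match(pattern, expr, env):
    """Return bindings if pattern matches expr, else None."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        env[pattern] = expr
        return env
    if isinstance(pattern, tuple) and isinstance(expr, tuple) \
            and len(pattern) == len(expr):
        for p, e in zip(pattern, expr):
            if match(p, e, env) is None:
                return None
        return env
    return env if pattern == expr else None

def substitute(template, env):
    """Instantiate a rule's right-hand side with the bindings found."""
    if isinstance(template, str) and template.startswith("?"):
        return env[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, env) for t in template)
    return template

def simplify(expr):
    if isinstance(expr, tuple):          # simplify subexpressions first
        expr = tuple(simplify(e) for e in expr)
    for pattern, template in RULES:      # then try each rule in order
        env = match(pattern, expr, {})
        if env is not None:
            return simplify(substitute(template, env))
    return expr

# simplify(("+", ("*", "a", 1), 0)) -> "a"
```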

------------------------------

Date: Thu 6 Oct 83 20:18:09-PDT
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: Colloquium Oct 11: ZADEH

                [Reprinted from the SU-SCORE bboard.]


Professor Lotfi Zadeh, of UCB,  will be giving the CS colloquium this
Tuesday (10/11).  As usual, it  will be in Terman Auditorium, at 4:15
(preceded at 3:45 by refreshments in the 3rd floor lounge of Margaret
Jacks Hall).

The title and abstract for the colloquium are as follows:

Reasoning With Commonsense Knowledge

Commonsense knowledge is exemplified by "Glass is brittle," "Cold is
infectious," "The rich are conservative," "If a car is old, it is
unlikely to be in good shape," etc.  Such knowledge forms the basis
for most of human reasoning in everyday situations.

Given the pervasiveness of commonsense reasoning, a question which
begs for answer is: Why is commonsense reasoning a neglected area in
classical logic?  Because, almost by definition, commonsense
knowledge is that knowledge which is not representable as a
collection of well-formed formulae in predicate logic or other
logical systems which have the same basic conceptual structure as
predicate logic.

The approach to commonsense reasoning which is described in the talk
is based on the use of fuzzy logic -- a logic which allows the use of
fuzzy predicates, fuzzy quantifiers and fuzzy truth-values.  In this
logic, commonsense knowledge is defined to be a collection of
dispositions, that is propositions with suppressed fuzzy quantifiers.
To infer from such knowledge, three basic syllogisms are developed:
(1) the intersection/product syllogism; (2) the consequent
conjunction syllogism; and (3) the antecedent conjunction syllogism.
The use of these syllogisms in commonsense reasoning and their
application to the combination of evidence in expert systems is
discussed and illustrated by examples.
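The intersection/product syllogism can be caricatured numerically by treating each fuzzy quantifier as a single proportion in [0, 1] rather than a true fuzzy number. This is a crude scalar stand-in for the machinery the abstract describes, and the example proportions are invented.

```python
# Scalar caricature of the intersection/product syllogism:
#     Q1 A's are B's,  Q2 (A and B)'s are C's
#     =>  (Q1 * Q2) A's are (B and C)'s
# A real fuzzy quantifier is a fuzzy number, not a point value; the
# proportions below are invented for illustration.

MOST = 0.8          # pretend "most" means a proportion of about 0.8

def product_syllogism(q1, q2):
    """Chain two fuzzily quantified propositions."""
    return q1 * q2

# "Most students are young" and "most young students are single"
# yield roughly: "most * most" (= 0.64) students are young and single.
q = product_syllogism(MOST, MOST)
```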

------------------------------

Date: Fri 7 Oct 83 09:42:30-PDT
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM>
Subject: "rich" = "conservative" ?

                [Reprinted from the SU-SCORE bboard.]


        Subject: Colloquium Oct 11: ZADEH
        The title and abstract for the colloquium are as follows:
        Reasoning With Commonsense Knowledge

I don't think I've seen flames in response to abstracts before, but I get
so sick of hearing "rich," "conservative," and "evil" used as synonyms.

    Commonsense knowledge is exemplified by [...] "The rich are
    conservative," [...].

In fact, in the U.S., 81% of people with incomes over $50,000 are
registered Democrats.  Only 47% with incomes under $50,000 are.  (The
remaining 53% are made up of "independents," &c..)  The Democratic
Party gets the majority of its funding from contributions of over
$1000 apiece.  The Republican Party is mostly funded by contributions
of $10 and under.  (Note: I'd be the last to equate Conservatism and
the Republican Party.  I am a Tory and a Democrat.  However, more
"commonsense knowledge" suggests that I can use the word "Republican"
in place of "conservative" for the purpose of refuting the equation
of "rich" and "conservative.")

    Such knowledge forms the basis for most of human reasoning in everyday
    situations.

This statement is so true that it is the reason I gave up political writing.

    Given  the pervasiveness  of commonsense reasoning,  a question which
    begs for answer is: Why  is commonsense reasoning a neglected area in
    classical  logic? [...]

Perhaps because false premises tend to give rise to false conclusions?  Just
what we need--"ignorant systems."  (:-)
--Christopher

------------------------------

Date: Fri 7 Oct 83 10:22:37-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM>
Subject: Re: "rich" = "conservative" ?

                [Reprinted from the SU-SCORE bboard.]


Why is logic a neglected area in commonsense reasoning (to say nothing of
political writing)?

More seriously, or at least more historically, a survey was once taken of
ecological and other pressure groups in England, asking them which had been the
most and least effective methods they had used to convince governmental bodies.
Right at the bottom of the list of "least effective" was Reasoned Argument.

                                        - Richard

------------------------------

Date: Fri, 7 Oct 83 10:36 PDT
From: Vaughan Pratt <pratt@Navajo>
Subject: Reasoned Argument

                [Reprinted from the SU-SCORE bboard.]

[...]

I think if "Breathing" had been on the list along with "Reasoned
Argument" then the latter would only have come in second last.
It is not that reasoned argument is ineffective but that it is on
a par with breathing, namely something we do subconsciously.  Consciously
performed reasoning is only marginally reliable in mathematical circles,
and quite unreliable in most other areas.  It makes most people dizzy,
much as consciously performed breathing does.

-v

------------------------------

End of AIList Digest
********************

∂10-Oct-83  2157	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #73
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Oct 83  21:55:56 PDT
Date: Monday, October 10, 1983 4:17PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #73
To: AIList@SRI-AI


AIList Digest            Tuesday, 11 Oct 1983      Volume 1 : Issue 73

Today's Topics:
  Halting Problem,
  Consciousness,
  Rational Psychology
----------------------------------------------------------------------

Date: Thu 6 Oct 83 18:57:04-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Halting problem discussion

This discussion assumes that "human minds" are at least equivalent
to Universal Turing Machines. If they are restricted to computing
smaller classes of recursive functions, the question dissolves.

Sequential computers are idealized as having infinite memory because
that makes it easier to study asymptotic behavior mathematically.  Of
course, we all know that a more accurate idealization of sequential
computers is the finite automaton (for which there is no halting
problem, of course!).

The discussion on this issue seemed to presuppose that "minds" are the
same kind of object as existing (finite!) computing devices. Accepting
this presupposition for a moment (I am agnostic on the matter), the
above argument applies and the discussion is shown to be vacuous.

Thus fall undecidability arguments in psychology and linguistics...

Fernando Pereira

PS. Any silliness about unlimited amounts of external memory
will be profitably avoided.

------------------------------

Date: 7 Oct 83 1317 EDT (Friday)
From: Robert.Frederking@CMU-CS-A (C410RF60)
Subject: AI halting problem

        Actually, this isn't a problem, as far as I can see.  The Halting
Problem's problem is: there cannot be a program for a Turing-equivalent
machine that can tell whether *any* arbitrary program for that machine will
halt.  The easiest proof that a Halts(x) procedure can't exist is the
following program:  (due to Jon Bentley, I believe)
        if halts(x) then
                while true do print("rats")
What happens when you start this program up, with itself as x?  If
halts(x) returns true, it won't halt, and if halts(x) returns false, it
will halt.  This is a contradiction, so halts(x) can't exist.
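
        [Moderator's illustration: the construction can be carried out
directly. The sketch below (illustrative Python, not from the original
post) builds, for any candidate decider, the diagonal program that
refutes it; each of the two naive deciders shown is caught out by its
own diagonal program:]

```python
def make_diagonal(halts):
    """Given any candidate halting decider, build the program it must get wrong."""
    def diagonal():
        if halts(diagonal):
            while True:     # decider said "halts" -- so loop forever
                pass
        # decider said "loops" -- so halt immediately
    return diagonal

# Two naive deciders, each refuted by its own diagonal program:
says_halts = lambda prog: True    # claims every program halts
says_loops = lambda prog: False   # claims every program loops

d1 = make_diagonal(says_halts)    # says_halts(d1) is True, yet d1 would loop forever
d2 = make_diagonal(says_loops)    # says_loops(d2) is False, yet d2() halts at once
```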

        My question is, what does this have to do with AI?  Answer, not
much.  There are lots of programs which always halt.  You just can't
have a program which can tell you *for* *any* *program* whether it will
halt.  Furthermore, human beings don't want to halt, i.e., die (this
isn't really a problem, since the question is whether their mental
subroutines halt).

        So as long as the mind constructs only programs which will
definitely halt, it's safe.  Beings which aren't careful about this
fail to breed, and are weeded out by evolution.  (Serves them right.)
All of this seems to assume that people are Turing-equivalent (without
pencil and paper), which probably isn't true, and certainly hasn't been
proved.  At least I can't simulate a PDP-10 in my head, can you?  So
let's get back to real discussions.

------------------------------

Date: Fri,  7 Oct 83 13:05:16 CDT
From: Paul.Milazzo <milazzo.rice@Rand-Relay>
Subject: Looping in humans

Anyone who believes the human mind incapable of looping has probably
never watched anyone play Rogue :-).  The success of Rogomatic (the
automatic Rogue-playing program by Mauldin, et al.) demonstrates that
the game can be played by deriving one's next move from a simple
*fixed* set of operations on the current game state.
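
[Moderator's illustration: a fixed move-selection policy of that kind is
just an ordered rule list applied to the current state. A toy sketch; the
rules here are invented, not Rog-O-Matic's actual heuristics:]

```python
# Toy fixed-policy player: the next move is a pure function of the
# current game state, chosen by the first matching rule.  Rule names
# are illustrative only.

RULES = [
    (lambda s: s.get("hp_low"), "quaff healing potion"),
    (lambda s: s.get("monster_adjacent"), "attack"),
    (lambda s: s.get("item_here"), "pick up item"),
]

def next_move(state):
    for condition, action in RULES:
        if condition(state):
            return action
    return "explore"
```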

Even in the light of this demonstration, Rogue addicts sit hour after
hour mechanically striking keys, all thoughts of work, food, and sleep
forgotten, until forcibly removed by a girl- or boy-friend or system
crash.  I claim that such behavior constitutes looping.

:-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-)

                                Paul Milazzo <milazzo.rice@Rand-Relay>
                                Dept. of Mathematical Sciences
                                Rice University, Houston, TX

P.S.    A note to Rogue fans:  I have played a few games myself, and
        understand the appeal.  One of the Rogomatic developers is a
        former roommate of mine interested in part in overcoming the
        addiction of rogue players everywhere.  He, also, has played
        a few games...

------------------------------

Date: 5 Oct 83 9:55:56-PDT (Wed)
From: hplabs!hao!seismo!philabs!cmcl2!floyd!clyde!akgua!emory!gatech!owens
      @ Ucb-Vax
Subject: Re: a definition of consciousness?
Article-I.D.: gatech.1379

     I was doing required reading for a linguistics class when I
came across an interesting view of consciousness in "Foundations
of the Theory of Signs", by Charles Morris, section VI, subsection
12, about the 6th paragraph (It's also in the International
Encyclopedia of Unified Science, Otto Neurath, ed.).
     To say that Y experiences X is to define a relation E of which
Y is the domain and X is the range.  Thus, yEx says that it is true
that y experiences x.  E does not follow normal relational rules
(not transitive or symmetric.  I can experience joe, and joe can
experience fred, but it's not necessarily so that I thus experience
fred.)  Morris goes on to state that yEx is a "conscious experience"
if yE(yEx) ALSO holds, otherwise it's an "unconscious experience".
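
     [Moderator's illustration: Morris's definition is concrete enough to
transcribe. In the sketch below (illustrative only; experiencers and
experiences are hashable values, and a nested pair stands for the
experience yEx itself), a conscious experience is exactly one whose
second-order pair is also present:]

```python
# Minimal model of Morris's relation E.  A pair (y, x) in E means
# "y experiences x"; the experience itself can in turn be experienced.
E = {
    ("me", "sunset"),           # I experience the sunset...
    ("me", ("me", "sunset")),   # ...and experience that experiencing
    ("me", "joe"),
    ("joe", "fred"),            # first-order only: an unconscious experience
}

def experiences(y, x):
    return (y, x) in E

def conscious_experience(y, x):
    # yEx is a "conscious experience" iff yE(yEx) also holds
    return experiences(y, x) and experiences(y, (y, x))
```

[Note that E is not transitive: ("me","joe") and ("joe","fred") are
present without ("me","fred"), matching the example in the text.]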
     Interesting.  Note that there is no infinite regress of
yE(yE(yE....)) that is usually postulated as being a consequence of
computer consciousness.  However the function that defines E is defined,
it only needs to have the POTENTIAL of being able to fit yEx as an x in
another yEx, where y is itself.  Could the fact that the postulated
computer has the option  of NOT doing the insertion be some basis for
free will???  Would the required infinite regress of yE(yE(yE....
manifest some sort of compulsiveness that rules out free will?? (not to
say that an addict of some sort has no free will, although it's worth
thinking about).
     Question:  Am I trivializing the problem by reducing the question
of whether consciousness exists to the ability to define the relation
E?  Are there OTHER questions that I haven't considered that would
strengthen or weaken that supposition?  No flames, please, since this
ain't a flame.

                                        G. Owens
                                        at gatech  CSNET.

------------------------------

Date: 6 Oct 83 9:38:19-PDT (Thu)
From: ihnp4!ihuxr!lew @ Ucb-Vax
Subject: towards a calculus of the subjective
Article-I.D.: ihuxr.685

I posted some articles to net.philosophy a while back on this topic
but I didn't get much of a rise out of anybody. Maybe this is a better
forum. (Then again, ...) I'm induced to try here by G. Owens's article,
"Re: definition of consciousness".

Instead of trying to formulate a general characteristic of conscious
experience, what about trying to characterize different types of subjective
experience in terms of their physical correlates? In particular, what's
the difference between seeing a color (say) and hearing a sound? Even
more particularly, what's the difference between seeing red, and seeing blue?

I think the last question provides a potential experimental test of
dualism. If it could be shown that the subjective experience of a red
image was constituted by an internal set of "red" image cells, and similarly
for a blue image, I would regard this as a proof of dualism. This is
assuming the "red" and "blue" cells to be physically equivalent. The
choice between which were "red" and which were "blue" would have no
physical basis.

On the other hand, suppose there were some qualitative difference in
the firing patterns associated with seeing red versus seeing blue.
We would have a physical difference to hang our hat on, but we would
still be left with the problem of forming a calculus of the subjective.
That is, we would have to figure out a way to deduce the type of subjective
experience from its physical correlates.

A successful effort might show how to experience completely new colors,
for example. Maybe our restriction to a 3-d color space is due to
the restricted stimulation of subjective color space by three inputs.
Any acid heads care to comment?

These thoughts were inspired by Thomas Nagel's "What is it like to be a bat?"
in "The Mind's I". I think the whole subjective-objective problem is
given short shrift by radical AI advocates. Hofstadter's critique of
Nagel's article was interesting, but I don't think it addressed Nagel's
main point.

        Lew Mammel, Jr. ihuxr!lew

------------------------------

Date: 6 Oct 83 10:06:54-PDT (Thu)
From: ihnp4!zehntel!tektronix!tekecs!orca!brucec @ Ucb-Vax
Subject: Re: Parallelism and Physiology
Article-I.D.: orca.179

                               -------
Re the article posted by Rik Verstraete <rik@UCLA-CS>:

In general, I agree with your statements, and I like the direction of
your thinking.  If we conclude that each level of organization in a
system (e.g. a conscious mind) is based in some way on the next lower
level, it seems reasonable to suppose that there is in some sense a
measure of detail, a density of organization if you will, which has a
lower limit for a given level before it can support the next level.
Thus there would be, in the same sense, a median density for the
levels of the system (mind), and a standard deviation, which I
conjecture would be bounded in any successful system (only the top
level is likely to be wildly different in density, and that lower than
the median).

        Maybe the distinction between the words learning and
        self-organization is only a matter of granularity too. (??)

I agree.  I think that learning is simply a sophisticated form of
optimization of a self-organizing system in a *very* large state
space.  Maybe I shouldn't have said "simply."  Learning at the level of
human beings is hardly trivial.

        Certainly, there are not physically two types of memories, LTM
        and STM.  The concept of LTM/STM is only a paradigm (no doubt a
        very useful one), but when it comes to implementing the concept,
        there is a large discrepancy between brains and machines.

Don't rush to decide that there aren't two mechanisms.  The concepts of
LTM and STM were developed as a result of observation, not from theory.
There are fundamental functional differences between the two.  They
*may* be manifestations of the same physical mechanism, but I don't
believe there is strong evidence to support that claim.  I must admit
that my connection to neurophysiology is some years in the past
so I may be unaware of recent research.  Does anyone out there have
references that would help in this discussion?

------------------------------

Date: 7 Oct 83 15:38:14-PDT (Fri)
From: harpo!floyd!vax135!ariel!norm @ Ucb-Vax
Subject: Re: life is but a dream
Article-I.D.: ariel.482

re Michael Massimilla's idea (not original, of course) that consciousness
and self-awareness are ILLUSIONS.  Where did he get the concept of ILLUSION?
The stolen concept fallacy strikes again!  This fallacy is that of using
a concept while denying its genetic roots... See back issues of the Objectivist
for a discussion of this fallacy.... --Norm on ariel, Holmdel, N.J.

------------------------------

Date: 7 Oct 83 11:17:36-PDT (Fri)
From: ihnp4!ihuxr!lew @ Ucb-Vax
Subject: life is but a dream
Article-I.D.: ihuxr.690

Michael Massimilla informs us that consciousness and self-awareness are
ILLUSIONS. This is like saying "It's all in your mind." As Nietzsche said,
"One sometimes remains faithful to a cause simply because its opponents
do not cease to be insipid."

        Lew Mammel, Jr. ihuxr!lew

------------------------------

Date: 5 Oct 83 1:07:31-PDT (Wed)
From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Rational Psychology
Article-I.D.: ncsu.2357


Someone's recent attempt to make the meaning of "Rational Psychology" seem
trivial misses the point a number of people have made in commenting on the
odd nature of the name.  The reasoning was something like this:
 1) rational "X" means the same thing in spite of what "X" is.
 2) => rational psychology is a clear and simple thing
 3) wake up guys, you're being dumb.

Well, I think this line misses at least one point.  The argument above
is probably sound provided one accepts the initial premise, which I do not
neccessarily accept.  Another example of the logic may help.
 1) Brute Force elaboration solves problems of set membership.  E.g. just
    look at the item and compare it with every member of the set.  This
    is a true statement for a wide range of possible sets.
 2) Real Numbers are a kind of set.
 3) Wake up Cantor, you're wasting (or have wasted) your time.
It seems quite clear that in the latter example, the premise is naive and
simply fails to apply to sets of infinite proportions. (Or more properly
one must go to some effort to justify such use.)
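
[Moderator's illustration: the premise in (1) is easy to state as code,
and stating it shows exactly where it breaks. The scan below terminates
for any finite set; for an infinite set, such as the reals of premise
(2), the loop and the premise both fail. A sketch:]

```python
# Brute-force set membership: compare the item against every member.
# Sound for finite sets; non-terminating for infinite ones, which is
# the point of the Cantor analogy.

def member(item, finite_set):
    for element in finite_set:
        if element == item:
            return True
    return False
```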

The same issue applies to the notion of Rational Psychology.  Does it make
sense to attempt to apply techniques which may be completely inadequate?
Rational analysis may fail completely to explain the workings of the mind,
esp when we are looking at the "non-analytic" capabilities that are
implied by psychology.  We are on the edge of a philosophical debate, with
terms like "dualism" and "physicalism" etc. marking out party lines.

It may be just as ridiculous to some people to propose a rational study
of psychology as it seems to most of us that one use finite analysis
to deal with trans-finite cardinalities [or] as it seems to some people to
propose to explain the mind via physics alone.  Clearly, the people who
expect rational analytic method to be fruitful in the field of psychology
are welcome to coin a new name for themselves.  But if they, or anyone else
has really "Got it now" please write a dissertation on the subject and
enter history alongside Kant, St. Thomas Aquinas, Kierkegaard ....
----GaryFostel----

------------------------------

Date: 4 Oct 83 8:54:09-PDT (Tue)
From: decvax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!velu @ Ucb-Vax
Subject: Rational Psychology - Gary Fostel's message
Article-I.D.: umcp-cs.2953

Unfortunately, however, many pet theories in Physics have come about as
inspirations, and not from the "technical origins" as you have stated!
(What is a "technical origin", anyway????)

As I see it, in any science a pet theory is a combination of insight,
inspiration, and a knowledge of the laws governing that field. If we
just went by known facts, and did not dream on, we would not have
gotten anywhere!

                                        - Velu
                                -----
Velu Sinha, U of MD, College Park
UUCP:   {seismo,allegra,brl-bmd}!umcp-cs!velu
CSNet:  velu@umcp-cs            ARPA:   velu.umcp-cs@UDel-Relay

------------------------------

Date: 6 Oct 83 12:00:15-PDT (Thu)
From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Intuition in Physics
Article-I.D.: ncsu.2360


Some few days ago I suggested that there was something "different"
about psychology and tried to draw a distinction between the flash
of insight or the pet theory in physics as compared to psychology.

Well, someone else commented on the original in a way that suggested
I missed the mark in my original effort to make it clear. One more time:

I presume that at birth, one's mind is not predisposed to one or another
of several possible theories of heavy molecule collision (for example.)
Further, I think it unlikely that personal or emotional interaction in
one's "pre-analytic" stage (see anything about developmental psych.)
is likely to bear upon one's opinions about those molecules. In fact I
find it hard to believe that anything BUT technical learning is likely
to bear on one's intuition about the molecules. One might want to argue
that one's personality might force one to lean towards "aggressive" or
overly complex theories, but I doubt that such effects will lead to
the creation of a theory.  Only a rather mild predisposition at best.

In psychology it is entirely different.  A person who is aggressive has
lots of reasons to assume everyone else is as well. Or paranoid, or
that rote learning is esp good or bad, or that large dogs are dangerous
or a number of other things that bear directly on one's theories of the
mind.  And these biases are acquired from the process of living and are
quite un-avoidable.  This is not technical learning.  The effect is
that even in the face of considerable technical learning, one's intuition
or "pet theories" in psychology might be heavily influenced in creation
of the theory as well as selection, by one's life experiences, possibly
to the exclusion of one's technical opinions. (Who knows what goes on in
the sub-conscious.)  While one does not encounter heavy molecules often
in one's everyday life or one's childhood, one DOES encounter other people
and more significantly one's own mind.

It seems clear that intuition in physics is based upon a different sort
of knowledge than intuition about psychology.  The latter is a combination
of technical AND everyday intuition while the former is not.
----GaryFostel----

------------------------------

End of AIList Digest
********************

∂11-Oct-83  1950	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #74
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Oct 83  19:49:59 PDT
Date: Tuesday, October 11, 1983 11:25AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #74
To: AIList@SRI-AI


AIList Digest           Wednesday, 12 Oct 1983     Volume 1 : Issue 74

Today's Topics:
  Journals - AI Journal,
  Query - Miller's "Living Systems",
  Technology Transfer - DoD Reviews,
  Consciousness
----------------------------------------------------------------------

Date: Tue, 11 Oct 83 07:54 PDT
From: Bobrow.PA@PARC-MAXC.ARPA
Subject: AI Journal

The information provided by Larry Cipriani about the AI Journal in the
last issue of AIList is WRONG in a number of important particulars.
Institutional subscriptions to the Artificial Intelligence Journal are
$176 this year (not $136).  Personal subscriptions are available
for $50 per year for members of the AAAI, SIGART  and AISB.  The
circulation is about 2,000 (not 1,100).  Finally, the AI journal
consists of eight issues this year, and nine issues next year (not
bimonthly).
Thanks
Dan Bobrow (Editor-in-Chief)
Bobrow@PARC

------------------------------

Date: Mon, 10 Oct 83 15:41 EDT
From: David Axler <Axler.UPenn@Rand-Relay>
Subject: Bibliographic Query

     Just wondering if anybody out there has read the book 'Living Systems'
by James G. Miller (McGraw-Hill, 1977) and, if so, whether they feel that
Miller's theories have any relevance to present-day AI research.  I won't
even attempt to summarize the book's content here, as it's over 1K pages in
length, but some of the reviews of it that I've run across seem to imply that
it might well be useful in some AI work.

     Any comments?

   Dave Axler (Axler.Upenn-1100@UPenn@Udel-Relay)

------------------------------

Date: 7 Oct 1983 08:11-EDT
From: TAYLOR@RADC-TOPS20
Subject: DoD "reviews"


I must agree with Earl Weaver's comments on the DoD review of DoD
sponsored publications with one additional comment...since I have
"lived and worked" in that environment for more than six years.
DoD has learned (through experience) that given enough
unclassified material, much classified information can be
deduced.  I have seen documents whose individual paragraphs were
unclassified but which, when grouped together as a single document,
provided too much sensitive information to leave unclassified.
      Roz (RTaylor@RADC-MULTICS)

------------------------------

Date: 4 Oct 83 19:25:13-PDT (Tue)
From: ihnp4!zehntel!tektronix!tekcad!ricks @ Ucb-Vax
Subject: Re: Conference Announcement - (nf)
Article-I.D.: tekcad.66


>              ****************  CONFERENCE  ****************
>
>                     "Intelligent Systems and Machines"
>
>                    Oakland University, Rochester Michigan
>
>                                April 24-25, 1984
>
>              *********************************************
>
>AUTHORS PLEASE NOTE:  A Public Release/Sensitivity Approval is necessary.
>Authors from DOD, DOD contractors, and individuals whose work is government
>funded must have their papers reviewed for public release and more
>importantly sensitivity (i.e. an operations security review for sensitive
>unclassified material) by the security office of their sponsoring agency.


        Another example of so-called "scientists" bowing to governmental
pressure to let them decide if the paper you want to publish is OK to
publish. I think that this type of activity is reprehensible, and as
concerned scientists we should do everything in our power to stop this
censorship of research. I urge everyone to boycott this conference and any
others like it which REQUIRE a Public Release/Sensitivity Approval (funny
how the government tries to make censorship palatable with different words,
isn't it). If we don't stop this now, we may be passing every bit of research
we do under the nose of bureaucrats who don't know an expert system from
an accounting package and who have the power to stop publication of anything
they consider dangerous.
                                        I'm mad as hell and I'm not going to
                                                take it anymore!!!!
                                                Frank Adrian
                                                (teklabs!tekcad!franka)

------------------------------

Date: 6 Oct 83 6:13:46-PDT (Thu)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!aplvax!eric @ Ucb-Vax
Subject: Re: Alas, I must flame...
Article-I.D.: aplvax.358

        The "sensitivity" issue is not limited to government - most
companies also limit the distribution of information that they
consider "company private". I find very little wrong with the
idea of "we paid for it, we should benefit from it". The simple
truth is that they did underwrite the cost of the research. No one
is forced to work under these conditions, but if you want to take
the bucks, you have to realize that there are conditions attached
to them. On the whole, DoD has been amazingly open with the disclosure
of its CS research - one big example is ARPANET. True, they are now
wanting to split it up, but they are still leaving half of it to
research facilities who did not foot the bill for its development.
Perhaps it can be carried to extremes (I have never seen that happen,
but let's assume that it can); still, they contracted for the work
to be done, and it is theirs to do with as they wish.

--
                                        eric
                                        ...!seismo!umcp-cs!aplvax!eric

------------------------------

Date: 7 Oct 83 18:56:18-PDT (Fri)
From: npois!hogpc!houti!ariel!vax135!floyd!cmcl2!csd1!condict@Ucb-Vax
Subject: Re: the Halting problem.
Article-I.D.: csd1.124

                     [Very long article.]


Self-awareness is an illusion?  I've heard this curious statement
before and never understood it.  YOUR self-awareness may be an
illusion that is fooling me, and you may think that MY self-awareness
is an illusion, but one thing that you cannot deny (the very, only
thing that you know for sure) is that you, yourself, in there looking
out at the world through your eyeballs, are aware of yourself doing
that.  At least you cannot deny it if it is true.  The point is, I
know that I have self-awareness -- by the very act of experiencing
it.  You cannot take this away from me by telling me that my
experience is an illusion.  That is a patently ludicrous statement,
sillier even than when your mother (no offense -- okay, my mother,
then) used to tell you that the pain was all in your head.  Of course
it is!  That is exactly what the problem is!

Let me try to say this another way, since I have never been able to
get this across to someone who doesn't already believe it.  There are
some statements that are true by definition, for instance, the
statement, "I pronounce you man and wife".  The pronouncement happens
by the very saying of it and cannot be denied by anyone who has heard
it, although the legitimacy of the marriage can be questioned, of
course.  The self-awareness thing is completely internal, so you may
sensibly question the statement "I have self-awareness" when it comes
from someone else.  What you cannot rationally say is "Gee, I wonder
if I really am aware of being in this body and looking down at my
hands with these two eyes and making my fingers wiggle at will?"  To
ask this question seriously of yourself is an indication that you
need immediate psychiatric help.  Go directly to Bellevue and commit
yourself.  It is as lunatic a question as asking yourself "Gee, am I
really feeling this pain or is it only an illusion that I hurt so bad
that I would happily throw myself in the trash masher to extinguish
it?"

For those of you who misunderstand what I mean by self-awareness,
here is the best I can do at an explanation.  There is an obvious
sense in which my body is not me.  You can cut off any piece of it
that leaves the rest functioning (alive and able to think) and the
piece that is cut off will not take part in any of my experiences,
while the rest of the body will still contain (be the center for?) my
self-awareness.  You may think that this is just because my brain is
in the big piece.  No, there is something more to it than that.  With
a little imagination you can picture an android being constructed
someday that has an AI brain that can be programmed with all the
memories you have now and all the same mental faculties.  Now picture
yourself observing the android and noting that it is an exact copy of
you.  You can then imagine actually BEING that android, seeing what
it sees, feeling what it feels.  What is the difference between
observing the android and being the android?  It is just this -- in
the latter case your self-awareness is centered in the android, while
in the former it is not.  That is what self-awareness, also called a
soul, is.  It is the one true meaning of the word "I", which does not
refer to any particular collection of atoms, but rather to the "you"
that is occupying the body.  This is not a religious issue either, so
back off, all you atheist and Christian fanatics.  I'm just calling
it a soul because it is the real "me", and I can imagine it residing
in various different bodies and machines, although I would, of
course, prefer some to others.

This, then, is the reason I would never step into one of those
teleporters that functions by ripping apart your atoms, then
reconstructing an exact copy at a distant site.  My self-awareness,
while it doesn't need a biological body to exist, needs something!
What guarantee do I have that "I", the "me" that sees and hears the
door of the transporter chamber clang shut, will actually be able to
find the new copy of my body when it is reconstructed three million
parsecs away?  Some of you are laughing at my lack of modernism here,
but I can have the last laugh if you're stupid enough to get into the
teleporter with me at the controls.  Suppose it functions like this
(from a real sci-fi story that I read): It scans your body, transmits
the copying information, then when it is certain that the copy got
through it zaps the old copy, to avoid the inconvenience of there
being two of you (a real mess at tax time!).  Now this doesn't bother
you a bit since it all happens in micro-seconds and your
self-awareness, being an illusion, is not to be consulted in the
matter.  But suppose I put your beliefs to the test by setting the
controls so that the copy is made but the original is not destroyed.
You get out of the teleporter at both ends, with the original you
thinking that something went wrong.  I greet you with:

"Hi there!  Don't worry, you got transported okay.  Here, you can
talk to your copy on the telephone to make sure.  The reason that I
didn't destroy this copy of you is because I thought you would enjoy
doing it yourself.  Not many people get to commit suicide and still
be around to talk about it at cocktail parties, eh?  Now, would you
like the hara-kiri knife, the laser death ray, or the nice little red
pills?"

You, of course, would see no problem whatsoever with doing yourself
in on the spot, and would thank me for adding a little excitement to
your otherwise mundane trip.  Right?  What, you have a problem with
this scenario?  Oh, it doesn't bother you if only one copy of you
exists at a time, but if there are ever two, by some error, your
spouse is stuck with both of you?  What does the timing have to do
with your belief in self-awareness?  Relativity theory says that the
order of the two events is indeterminate anyway.

People who won't admit the reality of their own self-awareness have
always bothered me.  I'm not sure I want to go out for a beer with,
much less date or marry someone who doesn't at least claim to have
self-awareness (even if they're only faking).  I get this image of me
riding in a car with this non-self-aware person, when suddenly, as we
reach a curve with a huge semi coming in the other direction, they
fail to move the wheel to stay in the right lane, not seeing any
particular reason to attempt to extend their own unimportant
existence.  After all, if their awareness is just an illusion, the
implication is that they are really just a biological automaton and
it don't make no never mind what happens to it (or the one in the
next seat, for that matter, emitting the strange sounds and clutching
the dashboard).

The Big Unanswered Question then (which belongs in net.philosophy,
where I will expect to see the answer) is this:

                "Why do I have self-awareness?"

By this I do not mean, why does my body emit sounds that your body
interprets to be statements that my body is making about itself.  I
mean why am *I* here, and not just my body and brain?  You can't tell
me that I'm not, because I have a better vantage point than you do,
being me and not you.  I am the only one qualified to rule on the
issue, and I'll thank you to keep your opinion to yourself.  This
doesn't alter the fact that I find my existence (that is, the
existence of my awareness, not my physical support system), to be
rather arbitrary.  I feel that my body/brain combination could get
along just fine without it, and would not waste so much time reading
and writing windy news articles.

Enough of this, already, but I want to close by describing what
happened when I had this conversation with two good friends.  They
were refusing to agree to any of it, and I was starting to get a
little suspicious.  Only half in jest, I tried explaining things
this way.  I said:

"Look, I know I'm in here, I can see myself seeing and hear myself
hearing, but I'm willing to admit that maybe you two aren't really
self-aware.  Maybe, in fact, you're robots, everybody is robots
except me.  There really is no Cornell University, or U.S.A. for that
matter.  It's all an elaborate production by some insidious showman
who constructs fake buildings and offices wherever I go and rips them
down behind me when I leave."

Whereupon a strange, unreadable look came over Dean's face, and he
called to someone I couldn't see, "Okay, jig's up! Cut! He figured it
out." (Hands motioning, now) "Get, those props out of here, tear down
those building fronts, ... "

Scared the pants off me.

Michael Condict   ...!cmcl2!csd1!condict
New York U.

------------------------------

End of AIList Digest
********************

∂12-Oct-83  1827	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #75
Received: from SRI-AI by SU-AI with TCP/SMTP; 12 Oct 83  18:26:51 PDT
Date: Wednesday, October 12, 1983 10:41AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #75
To: AIList@SRI-AI


AIList Digest           Thursday, 13 Oct 1983      Volume 1 : Issue 75

Today's Topics:
  Music & AI - Poll Results,
  Alert - September CACM,
  Fuzzy Logic - Zadeh Syllogism,
  Administrivia - Usenet Submissions & Seminar Notices,
  Seminars - HP 10/13/83 & Rutgers Colloquium
----------------------------------------------------------------------

Date: 11 Oct 83 16:16:12 EDT  (Tue)
From: Randy Trigg <randy%umcp-cs@UDel-Relay>
Subject: music poll results

Here are the results of my request for info on AI and music.
(I apologize for losing the header to the first mail below.)

                        - Randy
                   ______________________________

Music in AI - find Art Wink formerly of U. of Pgh. Dept of info sci.
He had a real nice program to imitate Debussy (experts could not tell
its compositions from originals).

                   ------------------------------

Date:     22 Sep 83 01:55-EST (Thu)
From:     Michael Aramini <aramini@umass-cs>
Subject:  RE: AI and music

At the AAAI conference, I was talking to someone from Atari (from Atari
Cambridge Labs, I think) who was doing work with AI and music.  I can't
remember his name, however.  He was working (with others) on automating
transforming music of one genre into another.  This involved trying to
quasi-formally define what the characteristics of each genre of music are.
It sounded like they were doing a lot of work on defining ragtime and
converting ragtime to other genres.  He said there were other people at Atari
that are working on modeling the emotional state various characteristics of
music evoke in the listener.

I am sorry that I don't have more info as to the names of these people or how
to get in touch with them.  All that I know is that this work is being done
at Atari Labs either in Cambridge, MA or Palo Alto, CA.

                   ------------------------------

Date: Thu 22 Sep 83 11:04:22-EDT
From: Ted Markowitz <TJM@COLUMBIA-20>
Subject: Music and AI
Cc: TJM@COLUMBIA-20

Having an undergrad degree in music and working toward a graduate
degree in CS, I'm very interested in any results you get from your
posting. I've been toying with the idea of working on a music-AI
interface, but haven't pinned down anything specific yet. What
is your research concerned with?

--ted
                   ------------------------------

Date: 24 Sep 1983 20:27:57-PDT
From: Andy Cromarty <andy@aids-unix>
Subject: Music analysis/generation & AI

  There are 3 places that immediately come to mind:

1. There is a huge and well-developed (indeed, venerable) computer
music group at Stanford.  They currently occupy what used to be
the old AI Lab.  I'm sure someone else will mention them, but if
not, call Stanford (or send me another note and I'll find a net address
you can send mail to for details).

2. Atari Research is doing a lot of this sort of work -- generation,
analysis, etc., both in Cambridge (Mass) and Sunnyvale (Calif.), I
believe.

3. Some very good work has come out of MIT in the past few years.
David Levitt is working on his PhD in this area there, having completed
his masters in AI approaches to Jazz improvisation, if my memory serves,
and I think William Paseman also wrote his masters on a related topic
there.  Send mail to LEVITT@MIT-MC for info -- I'm sure he'd be happy
to tell you more about his work.
                                                asc

------------------------------

Date: Wed 12 Oct 83 09:40:48-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Alert - September CACM

The September CACM contains the following interesting items:

A clever cover graphically illustrating the U.S. and Japanese
approaches to the Fifth Generation.

A Harper and Row ad (without prices) including Touretzky's
LISP: A Gentle Introduction to Symbolic Computation and
Eisenstadt and O'Shea's Artificial Intelligence: Tools,
Techniques and Applications.  [AIList would welcome reviews.]

An editorial by Peter J. Denning on the manifest destiny of
AI to succeed because the concept is easily grasped, credible,
expected to succeed, and seen as an improvement.

An introduction and three articles about the Fifth Generation,
Japanese management, the Japanese effort, and MCC.

A report on BELLE's slim victory in the 13th N.A. Computer Chess
Championship.

A note on the sublanguages (i.e., natural restricted languages)
conference at NYU next January.

A note on DOD's wholesale adoption of ADA.

                                        -- Ken Laws

------------------------------

Date: Wed 12 Oct 83 09:24:34-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Zadeh Syllogism

Lotfi Zadeh used a syllogism yesterday that was new to me.  To
paraphrase slightly:


    Cheap apartments are rare and highly sought.
    Rare and highly sought objects are expensive.
    ---------------------------------------------
    Cheap apartments are expensive.


I suppose any reasonable system will conclude that cheap apartments
cannot exist, which may in fact be the case.
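One way to make the inconsistency concrete is to give the fuzzy
predicates explicit membership functions and combine them with a min
t-norm.  The functions below are invented purely for illustration
(Zadeh's own treatment uses fuzzy quantifiers, not a bare min):

```python
# Sketch only: invented membership functions over monthly rent,
# combined with the min t-norm.  Not Zadeh's actual formalism.
def cheap(rent):      # degree to which a rent counts as "cheap"
    return max(0.0, min(1.0, (600 - rent) / 300))

def expensive(rent):  # degree to which a rent counts as "expensive"
    return max(0.0, min(1.0, (rent - 600) / 300))

# The syllogism forces any cheap apartment to be expensive as well.
# The joint degree min(cheap, expensive) is 0 at every rent, i.e.
# these axioms admit no cheap apartment -- which may be the point.
worst = max(min(cheap(r), expensive(r)) for r in range(0, 1201))
print(worst)  # 0.0
```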

                                        -- Ken Laws

------------------------------

Date: Wed 12 Oct 83 10:20:57-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Usenet Submissions

It has come to my attention that I may be failing to distribute
some Usenet-originated submissions back to Usenet readers.  If
this is true, I apologize.  I have not been simply ignoring
submissions; if you haven't heard from me, the item was distributed
to the Arpanet.

The problem involves the Article-I.D. field in Usenet-
originated messages.  The gateway software (maintained by
Knutsen@SRI-UNIX) ignores digest items containing this keyword
so that messages originating from net.ai will not be posted
back to net.ai.

Unfortunately, messages sent directly to AIList instead of to
net.ai also contain this keyword.  I have not been stripping it
out, and so the submissions have not been making it back to Usenet.

I will try to be more careful in the future.  Direct AIList
contributors who want to be sure I don't slip should begin
their submissions with a "strip ID field" comment.  Even a
"Dear Moderator," might trigger my editing instincts.  I hope
to handle direct submissions correctly even without prompting,
but the visible distinction between the two message types is
slight.
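The stripping itself is mechanical.  As a purely hypothetical sketch
(the real gateway software at SRI-UNIX is not shown here, and surely
differs), removing the field before digesting might look like:

```python
# Hypothetical sketch of dropping the Article-I.D. header field from
# a message before it is digested; the actual gateway code differs.
def strip_article_id(message: str) -> str:
    header, sep, body = message.partition("\n\n")
    kept = [line for line in header.splitlines()
            if not line.lower().startswith("article-i.d.")]
    return "\n".join(kept) + sep + body

msg = "From: someone\nArticle-I.D.: foo.123\nSubject: test\n\nHello."
print(strip_article_id(msg))
```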

                                        -- Ken Laws

------------------------------

Date: Wed 12 Oct 83 10:04:03-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Seminar Notices

There have been a couple of net.ai requests lately that seminar
notices be dropped, plus a strong request that they be
continued.  I would like to make a clear policy statement
on this matter.  Anyone who wishes to discuss it further
may write to AIList-Request@SRI-AI; I will attempt to
compile opinions or moderate the discussion in a reasonable
manner.

Strictly speaking, AIList seldom prints "seminar notices".
Rather, it prints abstracts of AI-related talks.  The abstract
is the primary item; the fact that the speaker is graduating
or out "selling" is secondary; and the possibility that AIList
readers might attend is tertiary.  I try to distribute the
notices in a timely fashion, but responses to my original
query were two-to-one in favor of the abstracts even when the
talk had already been given.

The abstracts have been heavily weighted in favor of the
Bay Area; some readers have taken this to be provincialism.
Instead, it is simply the case that Stanford, Hewlett-Packard,
and occasionally SRI are the only sources available to me
that provide abstracts.  Other sources would be welcome.

In the event that too many abstracts become available, I will
institute rigorous screening criteria.  I do not feel the need
to do so at this time.  I have passed up database, math, and CS
abstracts because they are outside the general AI and data
analysis domain of AIList; others might disagree.  I have
included some borderline seminars because they were the first
of a series; I felt that the series itself was worth publicizing.

I can't please all of the people all of the time, but your feedback
is welcome to help me keep on course.  At present, I regard the
abstracts as one of AIList's strengths.

                                        -- Ken Laws

------------------------------

Date: 11 Oct 83 16:30:27 PDT (Tuesday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium 10/13/83


                Piero P. Bonissone

                Corporate Research and Development
                General Electric Corporation

        DELTA: An Expert System for Troubleshooting
                Diesel Electric Locomotives


The a priori information available to the repair crew is a list of
"symptoms" reported by the engine crew.  More information can be
gathered in the "running repair" shop, by taking measurements and
performing tests, provided that the two-hour time limit is not exceeded.

A rule based expert system, DELTA (Diesel Electric Locomotive
Troubleshooting Aid) has been developed at the General Electric
Corporate Research and Development Laboratories to guide in the repair
of partially disabled electric locomotives.  The system enforces a
disciplined troubleshooting procedure which minimizes the cost and time
of the corrective maintenance, allowing detection and repair of
malfunctions in the two-hour window allotted to the service personnel in
charge of those tasks.

A prototype system has been implemented in FORTH, running on a Digital
Equipment VAX 11/780 under VMS, on a PDP 11/70 under RSX-11M, and on a
PDP 11/23 under RSX-11M.  This system contains approximately 550 rules,
partially representing the knowledge of a Senior Field Service Engineer.
The system is provided with graphical/video capabilities which can help
the user in locating and identifying locomotive components, as well as
illustrating repair procedures.

Although the system only contains a limited number of rules (550), it
covers, in a shallow manner, a wide breadth of the problem space.  The
number of rules will soon be raised to approximately 1200 to cover, with
increased depth, a larger portion of the problem space.
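In outline, a rule-based troubleshooting aid of this kind amounts to
forward chaining over symptom facts.  The toy below is only a sketch
with invented rules; it is not DELTA's FORTH implementation or any
part of its actual rule base:

```python
# Toy forward-chaining diagnoser in the spirit of a rule-based
# troubleshooting aid.  Rules and symptoms are invented, not DELTA's.
RULES = [
    ({"engine cranks", "no fuel pressure"}, "check fuel pump"),
    ({"check fuel pump", "pump ok"}, "check fuel filter"),
]

def diagnose(facts):
    facts = set(facts)
    changed = True
    while changed:            # fire rules until no new fact is added
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(diagnose({"engine cranks", "no fuel pressure", "pump ok"}))
```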

        Thursday, October 13, 1983  4:00 PM

        Hewlett Packard
        Stanford Division Labs
        5M Conference room
        1501 Page Mill Rd
        Palo Alto, CA  9430

        ** Be sure to arrive at the building's lobby ON TIME, so that you may
be escorted to the meeting room.

------------------------------

Date: 11 Oct 83 13:47:44 EDT
From: LOUNGO@RUTGERS.ARPA
Subject: colloquium

              [Reprinted from the RUTGERS bboard.  Long message.]



                  Computer Science Faculty Research Colloquia

                       Date: Thursday, October 13, 1983

                                Time: 2:00-4:15

                  Place: Room 705, Hill Center, Busch Campus

Schedule:

2:00-2:15       Prof. Saul Amarel, Chairman, Department of Computer Science
                Introductory Remarks

2:15-2:30       Prof. Casimir Kulikowski
                Title:   Expert Systems and their Applications
                Area(s): Artificial intelligence


2:30-2:45       Prof. Natesa Sridharan
                Title:   TAXMAN
                Area(s): Artificial intelligence (knowledge representation),
                         legal reasoning

2:45-3:00       Prof. Natesa Sridharan
                Title:   Artificial Intelligence and Parallelism
                Area(s): Artificial intelligence, parallelism

3:00-3:15       Prof. Saul Amarel
                Title:   Problem Reformulations and Expertise Acquisition;
                         Theory Formation
                Area(s): Artificial intelligence

3:15-3:30       Prof. Michael Grigoriadis
                Title:   Large Scale Mathematical Programming;
                         Network Optimization; Design of Computer Networks
                Area(s): Computer networks

3:30-3:45       Prof. Robert Vichnevetsky
                Title:   Numerical Solutions of Hyperbolic Equations
                Area(s): Numerical analysis

3:45-4:00       Prof. Martin Dowd
                Title:   P~=NP
                Area(s): Computational complexity

4:00-4:15       Prof. Ann Yasuhara
                Title:   Notions of Complexity for Trees, DAGs,
                         and subsets of {0,1}*
                Area(s): Computational complexity


                COFFEE AND DONUTS AT 1:30

-------
Mail-From: LAWS created at 12-Oct-83 09:11:56
Mail-From: LOUNGO created at 11-Oct-83 13:48:35
Date: 11 Oct 83 13:48:35 EDT
From: LOUNGO@RUTGERS.ARPA
Subject: colloquium
To: BBOARD@RUTGERS.ARPA
cc: pettY@RUTGERS.ARPA, lounGO@RUTGERS.ARPA
ReSent-date: Wed 12 Oct 83 09:11:56-PDT
ReSent-from: Ken Laws <Laws@SRI-AI.ARPA>
ReSent-to: ailist@SRI-AI.ARPA


                  Computer Science Faculty Research Colloquia

                        Date: Friday, October 14, 1983

                                Time: 2:00-4:15

                  Place: Room 705, Hill Center, Busch Campus

Schedule:

2:00-2:15       Prof. Tom Mitchell
                Title:   Machine Learning and Artificial Intelligence
                Area(s): Artificial intelligence

2:15-2:30       Prof. Louis Steinberg
                Title:   An Artificial Intelligence Approach to Computer-Aided
                         Design for VLSI
                Area(s): Artificial intelligence, computer-aided design, VLSI

2:30-2:45       Prof. Donald Smith
                Title:   Debugging VLSI Designs
                Area(s): Artificial intelligence, computer-aided design, VLSI

2:45-3:00       Prof. Apostolos Gerasoulis
                Title:   Numerical Solutions to Integral Equations
                Area(s): Numerical analysis

3:00-3:15       Prof. Alexander Borgida
                Title:   Applications of AI to Information Systems Development
                Area(s): Artificial intelligence, databases,
                         software engineering

3:15-3:30       Prof. Naftaly Minsky
                Title:   Programming Environments for Evolving Systems
                Area(s): Software engineering, databases, artificial
                         intelligence

3:30-3:45       Prof. William Steiger
                Title:   Random Algorithms
                Area(s): Analysis of algorithms, numerical methods,
                         non-numerical methods

3:45-4:00

4:00-4:15


                  Computer Science Faculty Research Colloquia

                       Date: Thursday, October 20, 1983

                                Time: 2:00-4:15

                  Place: Room 705, Hill Center, Busch Campus

Schedule:

2:00-2:15       Prof. Thomaz Imielinski
                Title:   Relational Databases and AI; Logic Programming
                Area(s): Databases, artificial intelligence

2:15-2:30       Prof. David Rozenshtein
                Title:   Nice Relational Databases
                Area(s): Databases, data models

2:30-2:45       Prof. Chitoor Srinivasan
                Title:   Expert Systems that Reason About Action with Time
                Area(s): Artificial intelligence, knowledge-based systems

2:45-3:00       Prof. Gerald Richter
                Title:   Numerical Solutions to Partial Differential Equations
                Area(s): Numerical analysis

3:00-3:15       Prof. Irving Rabinowitz
                Title:   - To be announced -
                Area(s): Programming languages

3:15-3:30       Prof. Saul Levy
                Title:   Distributed Computing
                Area(s): Computing, computer architecture

3:30-3:45       Prof. Yehoshua Perl
                Title:   Sorting Networks, Probabilistic Parallel Algorithms,
                         String Matching
                Area(s): Design and analysis of algorithms

3:45-4:00       Prof. Marvin Paull
                Title:   Algorithm Design
                Area(s): Design and analysis of algorithms

4:00-4:15       Prof. Barbara Ryder
                Title:   Incremental Data Flow Analysis
                Area(s): Design and analysis of algorithms,
                         compiler optimization

                COFFEE AND DONUTS AT 1:30

------------------------------

End of AIList Digest
********************

∂13-Oct-83  1804	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #76
Received: from SRI-AI by SU-AI with TCP/SMTP; 13 Oct 83  18:04:03 PDT
Date: Thursday, October 13, 1983 10:13AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #76
To: AIList@SRI-AI


AIList Digest           Thursday, 13 Oct 1983      Volume 1 : Issue 76

Today's Topics:
  Intelligent Front Ends - Request,
  Finance - IntelliGenetics,
  Fuzzy Logic - Zadeh's Paradox,
  Publication - Government Reviews
----------------------------------------------------------------------

Date: Thursday, 13-Oct-83  12:04:24-BST
From: BUNDY HPS (on ERCC DEC-10) <bundy@edxa>
Reply-to: bundy@rutgers.arpa
Subject: Request for Information on Intelligent Front Ends


        The UK government has set up the Alvey Programme as the UK
answer to the Japanese 5th Generation Programme.  One part of that
Programme has been to identify and promote research in a number of
'themes'.  I am the manager of one such theme - on 'Intelligent Front
Ends' (IFE).  An IFE is defined as follows:

"A front end to an existing software package, for example a finite
element package, a mathematical modelling system, which provides a
user-friendly interface (a "human window") to packages which without
it, are too complex and/or technically incomprehensible to be
accessible to many potential users.  An intelligent front end builds a
model of the user's problem through user-oriented dialogue mechanisms
based on menus or quasi-natural language, which is then used to
generate suitably coded instructions for the package."

        One of the theme activities is to gather information about
IFEs, for instance:  useful references and short descriptions of
available tools.  If you can supply such information then please send it
to BUNDY@RUTGERS.  Thanks in advance.

                Alan Bundy

------------------------------

Date: 12 Oct 83  0313 PDT
From: Arthur Keller <ARK@SU-AI>
Subject: IntelliGenetics

                [Reprinted from the SU-SCORE bboard.]


From Tuesday's SF Chronicle (page 56):

"IntelliGenetics Inc., Palo Alto, has filed with the Securities and
Exchange Commission to sell 1.6 million common shares in late November.

The issue, co-managed by Ladenburg, Thalmann & Co. Inc. of New York
and Freehling & Co. of Chicago, will be priced between $6 and $7 a share.

IntelliGenetics provides artificial intelligence based software for use
in genetic engineering and other fields."

------------------------------

Date: Thursday, 13-Oct-83  16:00:01-BST
From: RICHARD HPS (on ERCC DEC-10) <okeefe.r.a.@edxa>
Reply-to: okeefe.r.a. <okeefe.r.a.%edxa@ucl-cs>
Subject: Zadeh's apartment paradox


The resolution of the paradox lies in realising that
        "cheap apartments are expensive"
is not contradictory.  "cheap" refers to the cost of
maintaining the apartment (rent, bus fares, repairs),
and "expensive" refers to the cost of procuring it.
The fully stated theorem is
        \/x apartment(x) & low(upkeep(x)) =>
            difficult_to_procure(x)
        \/x difficult_to_procure(x) =>
            high(cost_of_procuring(x))
hence   \/x apartment(x) & low(upkeep(x)) =>
            high(cost_of_procuring(x))
where "low" and "high" can be as fuzzy as you please.

A reasoning system should not conclude that cheap
flats don't exist, but rather that the axioms it has
been given are inconsistent with the assumption that
they do.  Sooner or later you are going to tell it
"Jones has a cheap flat", and then it will spot the
flawed axioms.
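The split can be checked mechanically once "cheap" and "expensive" are
read as predicates on different attributes of the same flat.  A toy
rendering (invented values, executable code rather than logic):

```python
# Sketch of the resolution: "cheap" (upkeep) and "expensive"
# (procurement) apply to different attributes, so both can hold of
# the same flat without inconsistency.  All values are invented.
flat = {"upkeep": 200, "procurement_cost": 50_000}

def low_upkeep(x):            return x["upkeep"] < 300
def difficult_to_procure(x):  return low_upkeep(x)            # axiom 1
def high_procurement(x):      return difficult_to_procure(x)  # axiom 2

# Jones's cheap flat can exist: both conclusions hold together.
print(low_upkeep(flat), high_procurement(flat))  # True True
```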


[I can see your point that one might pay a high price
to procure an apartment with a low rental.  There is
an alternate interpretation which I had in mind, however.
The paradox could have been stated in terms of any
bargain, specifically one in which upkeep is not a
factor.  One could conclude, for instance, that a cheap
meal is expensive.  My own resolution is that the term
"rare" (or "rare and highly sought") must be split into
subconcepts corresponding to the cause of rarity.  When
discussing economics, one must always reason separately
about economic rarities such as rare bargains.  The second
assertion in the syllogism then becomes "rare and highly
sought objects other than rare bargains are (Zadeh might
add 'usually') expensive", or "rare and highly sought
objects are either expensive or are bargains".

                                        -- Ken Laws ]

------------------------------

Date: Thu 13 Oct 83 03:38:21-CDT
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: Re: Zadeh Syllogism

        Expensive apartments are not highly sought.
        Items not in demand are cheap.
                -> expensive apartments are cheap.

or      The higher the price, the lower the demand.
        The lower the demand, the lower the price.
                -> the higher the price, the lower the price.

ergo ??         garbage in, garbage out!

Why am I thinking of Reaganomics right now ????

Werner (UUCP:   { ut-sally , ut-ngp }   !utastro!werner
        ARPA:   werner@utexas-20)

PS:     at this time of the day, one gets the urge to voice "weird" stuff ...
                               -------

[The first form is as persuasive as the original syllogism.
The second seems to be no more than a statement of negative
feedback.  Whether the system is stable depends on the nature
of the implied driving forces.  It seems we are now dealing
with a temporal logic.

An example of an unstable system is:

    The fewer items sold, the higher the unit price must be.
    The higher the price, the fewer the items sold.
    --------------------------------------------------------
    Bankruptcy.

-- KIL]
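Whether such a feedback loop settles or diverges comes down to the
gain on each pass.  A one-line linear iteration (all numbers invented)
makes the point:

```python
# Toy linear feedback: each step pulls (or pushes) price relative to
# an equilibrium.  Gain < 1 converges; gain > 1 diverges.  Invented
# numbers, purely to illustrate the stability remark above.
def iterate(price, gain, steps=20):
    target = 100.0
    for _ in range(steps):
        price = target + gain * (price - target)  # linear feedback step
    return price

print(round(iterate(150.0, 0.5), 3))   # settles near 100
print(iterate(150.0, 1.5) > 1e4)       # diverges (bankruptcy)
```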

------------------------------

Date: Wed, 12 Oct 83 13:16 PDT
From: GMEREDITH.ES@PARC-MAXC.ARPA
Subject: Sensitivity Issue and Self-Awareness


I can understand the concern of researchers about censorship.

However, having worked with an agency which spent time extracting
information of a classified nature from unclassified or semi-secure
sources, I have to say that people not trained in such pursuits are
usually very poor judges of the difference between necessary efforts to
curb flow of classified information and "censorship".

I can also guarantee that this country's government is not alone in
knowing how to misuse the results of research carried out with the most
noble of intents.



Next, to the subject of self-awareness.  The tendency of an individual
to see his/her corporal self as distinct from the *I* experience or to
see others as robots or a kind of illusion is sufficient to win a tag of
'schizophrenic' from any psychiatrist and various other negative
reactions from those involved in other schools of the psychological
community.

Beyond that, the above tendencies make relating to 'real' world
phenomena very difficult.   That semi coming around the curve will
continue to follow through on the illusion of having smashed those just
recently discontinued illusions in the on-coming car.

Guy

------------------------------

Date: Wed 12 Oct 83 00:07:15-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Government Reviews of Basic Research

    I must disagree with Frank Adrian who commented in a previous digest
that "I urge everyone to boycott this conference" and other conferences with
this requirement. The progress of science should not be halted due to some
government ruling, especially since an attempted boycott would have little
positive and (probably) much negative effect. Assuming that all of the
'upstanding' scientists participated, is there any reason to think that
the government couldn't find less discerning researchers more than happy to
accept grant money?

    Eric (sorry, no last name) is preoccupied with the fact that government
'paid' for the research; aren't "we" the people the real owners, in that case?
Or can there be real owners of basic knowledge?  As I recall, the patent office
has ruled that algorithms are unpatentable and thus inherently public domain.
The control of ideas has been an elusive goal for many governments, but even so,
it is rare for a government to try to claim ownership of an idea as a
justification for restriction; outside of the military domain, this is seems
to be a new one...

        As a scientist, I believe that the world and humanity will gain wisdom
and insight through research, which will eventually enable us to end war, hunger,
ignorance, whatever. Other forces in the world have different, more short-term
goals for our work; this is fine, as long as the long-term reasons for
scientific research are not sacrificed. Sure, they 'paid' for the results of
our short-term goals, but we should never allow that to blind us to the real
reason for working in AI, and *NO-ONE* can own that.

   So I'll take government money (if they offer me any after this diatribe!)
and work on various systems and schemes, but I'll fight any attempt to
nullify the long term goals I'm really working for. I feel these new
restrictions are detrimental to the long-term goals of scientific research,
but currently, I'm going with things here... we're the best in the world (sigh)
and I plan on fighting to keep it that way.

David Rogers
DRogers@SUMEX-AIM.ARPA

------------------------------

Date: Wed, 12 Oct 83 10:26:28 EDT
From: Morton A. Hirschberg <mort@brl-bmd>
Subject: Flaming Mad

  I have refrained  from  reflaming  since  I  sent  the  initial
conference  announcement  on  "Intelligent Systems and Machines."
First,  the  conference  is  not  being  sponsored  by   the   US
Government.   Second,  many  papers  may  be  submitted  by those
affected by the security  release  and  it  seemed  necessary  to
include  this as part of the announcement.  Third, I attended the
conference at Oakland earlier  this  year  and  it  was  a  super
conference.  Fourth, you may cut off your nose to spite your face if
you as an individual do not want to submit a paper or attend  but
you are not doing much service to those sponsoring the conference
who are true scientists by urging boycotts.  Finally, below is  a
little of my own philosophy.

  I have rarely  seen  science  or  the  application  of  science
(engineering)  benefit anyone anywhere without an associated cost
(often called an investment).  The costs are usually borne by the
investors  and  if  the  end  product is a success then costs are
passed  on  to  consumers.   I  can  find  few   examples   where
discoveries  in  science  or  in  the  name  of  science have not
benefited the discoverer and/or  his  heirs,  or  the  investors.
Many  of  our  early discoveries were made by men of considerable
wealth who could dally with theory and experimentation  (and  the
arts)  and science using their own resources.  We may have gained
a heritage but they gained a profit.

  What seems to constitute a common heritage is either  something
that  has been around for so long that it is either in the public
domain or is a  romanticized  fiction  (e.g.  Paul  Muni  playing
Pasteur).   Simultaneous  discovery has been responsible for many
theories being in  the  public  domain  as  well  as  leading  to
products  which  were  hotly  contested  in  lawsuits (e.g., did Bell
really invent the telephone, or Edison the movie camera?).

  Watson in his book "The Double Helix" gives a clear picture  of
what  a typical scientist may really be and it is not Arrowsmith.
I did not see Watson refuse his Nobel because the radiologist did
not get a prize.

  Government, and here for historical reasons we must also include
state  and  church, has  always had a role in the sciences.  That
role is one that governments can not always be proud of (Galileo,
Rachel Carson, Sakharov).

  The manner in  which  the  United  States  Government  conducts
business  gives  great  latitude  to scientists and to investors.
When the US Government buys something it should be theirs just as
when  you as an individual buy something.  As such it is then the
purview of the US Government as to what to do with  the  product.
Note  the  US  Government  often  buys  with  limited  rights  of
ownership and distribution.

  It has been my observation having worked in  private  industry,
for a university, and now for the government that relations among
the three have not been optimal and in  many  cases  not  mutually
rewarding.   This  is  a  great  concern  of  mine and many of my
colleagues.  I would like a role in changing relations among  the
three  and do work toward that as a personal goal.  This includes
not  referring  to  academicians  as  eggheads   or   charlatans;
industrialists  as grubby profiteers; and government employees as
empty-headed bureaucrats.

  I recommend that young flamers try to maintain a little naivete
as they mature but not so much that they are ignorant of reality.

  Every institution has its structure and by and large  one  works
within  the structure to earn a living, or is free to move on, or
can work to change that structure.  One possible  change  is  for
the US Government to conduct business the way the Japanese do
(at least in certain cases).  Maybe AI is the place to start.

  I also notice that mail on the net comes  across  much  harsher
than  it  is  intended  to  be.  This can be overcome by being as
polite as possible and being more verbose.  In addition, one  can
read one's mail more than once before flaming.

                                Mort

------------------------------

End of AIList Digest
********************

∂14-Oct-83  1545	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #77
Received: from SRI-AI by SU-AI with TCP/SMTP; 14 Oct 83  15:44:18 PDT
Date: Friday, October 14, 1983 9:36AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #77
To: AIList@SRI-AI


AIList Digest            Friday, 14 Oct 1983       Volume 1 : Issue 77

Today's Topics:
  Natural Language - Semantic Chart Parsing & Macaroni & Grammars,
  Games - Rog-O-Matic,
  Seminar - Nau at UMaryland, Diagnostic Problem Solving
----------------------------------------------------------------------

Date: Wednesday, 12 October 1983 14:01:50 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: "Semantic chart parsing"

        I should have made it clear in my previous note on the subject that
the phrase "semantic chart parsing" is a name I've coined to describe a
parser which uses the technique of syntactic chart parsing, but includes
semantic information right from the start.  In a way, it's an attempt to
reconcile Schank-style immediate semantic interpretation with syntactically
oriented parsing, since both sources of information seem worthwhile.
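
A minimal sketch of that idea, with an invented two-rule grammar and toy
"animacy" features (all names here are illustrative, not any published
system's machinery): each chart edge carries a feature bundle, and a
rule's semantic test can veto an edge the moment it is proposed, rather
than after a full syntactic parse.

```python
# Semantic chart parsing, toy version: CKY-style chart construction
# where every rule carries a semantic test that filters edges as they
# are built.  Grammar, lexicon, and features are invented examples.

RULES = [
    # (parent, (left child, right child), semantic test on feature dicts)
    ("NP", ("Det", "N"),  lambda det, n: True),
    # an object that gets frightened must be animate
    ("VP", ("V", "NP"),   lambda v, np: np["animate"] or not v["needs_animate"]),
    ("S",  ("NP", "VP"),  lambda np, vp: True),
]

LEXICON = {
    "the":        ("Det", {}),
    "dog":        ("N",   {"animate": True}),
    "stone":      ("N",   {"animate": False}),
    "frightened": ("V",   {"needs_animate": True}),
}

def parse(words):
    """Return the categories spanning the whole input (S if a sentence)."""
    n = len(words)
    chart = {}                         # (i, j) -> [(category, features)]
    for i, word in enumerate(words):
        cat, feats = LEXICON[word]
        chart.setdefault((i, i + 1), []).append((cat, feats))
    for span in range(2, n + 1):       # combine smaller edges bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for lcat, lf in chart.get((i, k), []):
                    for rcat, rf in chart.get((k, j), []):
                        for parent, kids, test in RULES:
                            # semantic veto happens here, during parsing
                            if kids == (lcat, rcat) and test(lf, rf):
                                feats = {**lf, **rf}
                                chart.setdefault((i, j), []).append((parent, feats))
    return [cat for cat, _ in chart.get((0, n), [])]
```

With this grammar, "the dog frightened the stone" is pruned semantically
during parsing (stones are not frightenable), while "the stone
frightened the dog" yields an S.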

------------------------------

Date: Wednesday, 12-Oct-83  17:52:33-BST
From: RICHARD HPS (on ERCC DEC-10) <okeefe.r.a.@edxa>
Reply-to: okeefe.r.a. <okeefe.r.a.%edxa@ucl-cs>
Subject: Natural Language


There was rather more inflammation than information in the
exchanges between Dr Pereira and Whats-His-Name-Who-Butchers-
Leprechauns.  Possibly it's because I've only read one or two
[well, to be perfectly honest, three] papers on PHRAN and the
others in that PHamily, but I still can't see why it is that
their data structures aren't a grammar.  Admittedly they don't
look much like rules in an XG, but then rules in an XG don't
look much like an ATN either, and no-one has qualms about
calling ATNs grammars.  Can someone please explain in words
suitable for a 16-year-old child what makes phrasal analysis
so different from
        XGs (Extraposition grammars, include DCGs in this)
        ATNs
        Marcus-style parsers
        template-matching
that it is hailed as "solving" the parsing problem?
I have written grammars for tiny fragments of English in DCG,
ATN, and PIDGIN styles [the adverbs get me every time].  I am not
a linguist, and the coverage of these grammars was ludicrously
small.  So my claim that I found it vastly easier to extend and
debug the DCG version [DCGs are very like EAGs] will probably be
dismissed with the contempt it deserves.  Dr Pereira has published
his parser, and in other papers has published an XG interpreter.
I believe a micro-PHRAN has been published, and I would be grateful
for a pointer to it.  Has anyone published a phrasal-analysis
grimoire (if the term "grammar" doesn't suit) with say >100 "things"
(I forget the right name for the data structures), and how can I
get a copy?

     People certainly can accept ill-formed sentences.  But they DO
have quite definite notions of what is a well-formed sentence and
what is not.  I was recently in a London Underground station, and
saw a Telecom poster.  It was perfectly obvious that it was written
by an Englishman trying to write in American.  It finally dawned on
me that he was using American vocabulary and English syntax.  At
first sight the poster read easily enough, and the meaning came through.
But it was sufficiently strange to retain my attention until I saw what
was odd about it.  Our judgements of grammaticality are as sensitive as
that.  [I repeat, I am no linguist.  I once came away from a talk by
Gazdar saying to one of my fellow students, who was writing a parser:
"This extraposition, I don't believe people do that."]  I suggest that
people DO learn grammars, and what is more, they learn them in a form
that is not wholly unlike [note the caution] DCGs or ATNs.  We know that
DCGs are learnable, given positive and negative instances.  [Oh yes,
before someone jumps up and down and says that children don't get
negative instances, that is utter rubbish.  When a child says something
and is corrected by an adult, is that not a negative instance?  Of course
it is!]  However, when people APPLY grammars for parsing, I suggest that
they use repair methods to match what they hear against what they
expect.  [This is probably frames again.]  These repair methods range
all the way from subconscious signal cleaning [coping with say a lisp]
to fully conscious attempts to handle "Colourless green ideas sleep
furiously".  [Maybe parentheses like this are handled by a repair
mechanism?]  If this is granted, some of the complexity required to
handle say ellipsis would move out of the grammar and into the repair
mechanisms.  But if there is anything we know about human psychology,
it is that people DO have repair mechanisms.  There is a lot of work
on how children learn mathematics [not just Brown & co], and it turns
out that children will go to extraordinary lengths to patch a buggy
hack rather than admit they don't know.  So the fact that people can
cope with ungrammatical sentences is not evidence against grammars.

     As evidence FOR grammars, I would like to offer Macaroni.  Not
the comestible, the verse form.  Strictly speaking, Macaroni is a
mixture of the vernacular and Latin, but since it is no longer
popular we can allow any mixture of languages.  The odd thing about
Macaroni is that people can judge it grammatical or ungrammatical,
and what is more, can agree about their judgements as well as they
can agree about the vernacular or Latin taken separately.  My Latin
is so rusty there is no iron left, so here is something else.

        [Prolog is] [ho protos logos] [en programmation logiciel]
        English     Greek               French

This of course is (NP copula NP) PP, which is admissible in all
three languages, and the individual chunks are well-formed in their
several languages.  The main thing about Macaroni is that when
two languages have a very similar syntactic class, such as NP,
a sentence which starts off in one language may rewrite that
category in the other language, and someone who speaks both languages
will judge it acceptable.  Other ways of dividing up the sentence are
not judged acceptable, e.g.

        Prolog estin ho protos mot en logic programmation

is just silly.  S is very similar in most languages, which would account
for the acceptability of complete sentences in another language.  N is
pretty similar too, and we feel no real difficulty with single isolated
words from other languages like "chutzpa" or "pyjama" or "mana".  When
the syntactic classes are not such a good match, we feel rather more
uneasy about the mixture.  For example, "[ka ora] [teenei tangata]"
and "[these men] [are well]" both say much the same thing, but because
the Maaori nominal phrase and the English noun phrase aren't all that
similar, "[teenei tangata] [are well]" seems strained.

     The fact that bilingual people have little or no difficulty with
Macaroni is just as much a fact as the fact that people in general have
little difficulty with mildly malformed sentences.  Maybe they're the
same fact.  But I think the former deserves as much attention as the
latter.
     Does anyone have a parser with a grammar for English and a grammar
for [UK -> French or German; Canada -> French; USA -> Spanish] which use
the same categories as far as possible?  Have a go at putting the two
together, and try it on some Macaroni.  I suspect that if you have some
genuinely bilingual speakers to assist you, you will find it easier to
develop/correct the grammars together than separately.  [This does not
hold for non-related languages.  I would not expect English and Japanese
to mix well, but then I don't know any Japanese.  Maybe it's worth trying.]
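
As a toy version of that experiment, here is a sketch of two invented
two-word grammar fragments (English and French) that share the category
names S, NP, VP, Det, N, V.  Because the categories are shared, a
derivation may switch language at any constituent boundary, which is
exactly the Macaroni behaviour described above.  The rules and lexicon
are illustrative fragments, not serious grammars.

```python
# Two toy grammars merged on shared categories; a derivation can
# rewrite any shared category in either language.

RULES = [
    ("S",  ["NP", "VP"]),      # skeleton shared by both languages
    ("NP", ["Det", "N"]),
    ("VP", ["V", "NP"]),
]

LEXICON = {
    # English            # French
    "the": "Det",        "le": "Det",
    "cat": "N",          "chat": "N",
    "sees": "V",         "voit": "V",
}

def derives(cat, words):
    """True if `cat` can rewrite to exactly `words`, mixing languages."""
    if len(words) == 1 and LEXICON.get(words[0]) == cat:
        return True
    for parent, kids in RULES:
        if parent != cat:
            continue
        left, right = kids
        for split in range(1, len(words)):
            if derives(left, words[:split]) and derives(right, words[split:]):
                return True
    return False

# "the chat voit le cat" switches language inside NP and VP,
# yet still parses:
# derives("S", "the chat voit le cat".split()) -> True
```

Dividing the sentence so a switch falls mid-constituent (e.g. starting
with the verb) fails, matching the judgement that only splits at shared
category boundaries sound acceptable.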

------------------------------

Date: Thu 13 Oct 83 11:07:26-PDT
From: WYLAND@SRI-KL.ARPA
Subject: Dave Curry's request for a Simple English Grammar

        I think the book "Natural Language Information
Processing" by Naomi Sager (Addison-Wesley, 1981) may be useful.
This book represents the results of the Linguistic String project
at New York University, and Dr. Sager is its director.  The book
contains a BNF grammar of 400 or so rules for parsing English
sentences.  It has been applied to medical text, such as
radiology reports and narrative documents in patient records.

Dave Wyland
WYLAND@SRI

------------------------------

Date: 11 Oct 83 19:41:39-PDT (Tue)
From: harpo!utah-cs!shebs @ Ucb-Vax
Subject: Re: WANTED: Simple English Grammar - (nf)
Article-I.D.: utah-cs.1994

(Oh no, here he goes again! and with his water-cooled keyboard too!)

Yes, analysis of syntax alone cannot possibly work - as near as I can
tell, syntax-based parsers need an enormous amount of semantic processing,
which seems to be dismissed as "just pragmatics" or whatever.  I'm
not an "in" member of the NLP community, so I haven't been able to
find out the facts, but I have a bad feeling that some of the well-known
NLP systems are gigantic hacks, whose syntactic analyzer is just a bag
hanging off the side, but about which all the papers are written.  Mind
you, this is just a suspicion, and I welcome any disproof...

                                                stan the l.h.
                                                utah-cs!shebs

------------------------------

Date: 7 Oct 83 9:54:21-PDT (Fri)
From: decvax!linus!vaxine!wjh12!foxvax1!brunix!rayssd!asa @ Ucb-Vax
Subject: Re: WANTED: Simple English Grammar - (nf)
Article-I.D.: rayssd.187

        Yesterday I sent a suggestion that you look at Winograd's
new book on syntax.  Upon reflection, I realized that there are
several aspects of syntax not clearly stated therein. In particular,
there is one aspect which you might wish to think about, if you
are interested in building models and using the 'expectations'
approach. This aspect has to do with the synergism of syntax and
semantics. The particular case which occurred to me is an example
of the specific ways that Latin grammar terminology is inappropriate
for English. In English, there is no 'present' tense in the intuitive
sense of that word. The stem of the verb (which Winograd calls the
'infinitive' form, in contrast to the traditional use of this term to
signify the 'to+stem' form) actually encodes the semantic concept
of 'indefinite habitual'. Thus, to say only 'I eat.' sounds
peculiar. When the stem is used alone, we expect a qualifier, as in
'I eat regularly', or 'I eat very little', or 'I eat every day'. In
this framework, there is a connection with the present, in the sense
that the process described is continuous, has existed in the past,
and is expected to continue in the future. Thus, what we call the
'present' is really a 'modal' form, and might better be described
as the 'present state of a continuing habitual process'. If we wish
to describe something related to our actual state at this time,
we use what I think of as the 'actual present', which is 'I am eating'.
Winograd hints at this, especially in Appendix B, in discussing verb
forms. However, he does not go into it in detail, so it might help
you understand better what's happening if you keep in mind the fact
that there exist specific underlying semantic functions being
implemented, which are in turn based on the type of information
to be conveyed and the subtlety of the distinctions desired. Knowing
this at the outset may help you decide the elements you wish to
model in a simplified program. It will certainly help if you
want to try the expectations technique. This is an ideal situation
in which to try a 'blackboard' type of expert system, where the
sensing, semantics, and parsing/generation engines operate in
parallel. Good luck!
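
The habitual/actual-present distinction above can be made concrete in a
tiny generator sketch (first person only; the function name and feature
labels are invented for this illustration, not from Winograd):

```python
# Toy realization rule for the point above: the bare English stem
# carries an indefinite-habitual reading, so a generator should demand
# a qualifier with the simple "present" and use be+V-ing for the
# actual present.  First person singular only, regular stems only.

def realize(subject, stem, aspect, qualifier=None):
    if aspect == "actual_present":        # "I am eating."
        return f"{subject} am {stem}ing."
    if aspect == "habitual":              # bare stem needs "every day" etc.
        if qualifier is None:
            raise ValueError("bare stem needs a habitual qualifier")
        return f"{subject} {stem} {qualifier}."
    raise ValueError("unknown aspect")

# realize("I", "eat", "actual_present")        -> "I am eating."
# realize("I", "eat", "habitual", "every day") -> "I eat every day."
# realize("I", "eat", "habitual")              -> ValueError
```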

        A final note: if you would like to explore further a view
of grammar which totally dispenses with the terms and concepts of
Latin grammar, you might read "The Languages of Africa" (I think
that's the title), by William Welmer.

        By the way! Does anyone out there know if Welmer ever published
his fascinating work on the memory of colors as a function of time?
Did it at least get stored in the archives at Berkeley?

Asa Simmons
rayssd!asa

------------------------------

Date: Thursday, 13 October 1983 22:24:18 EDT
From: Michael.Mauldin@CMU-CS-CAD
Subject: Total Winner


        @   @          @   @           @          @@@  @     @
        @   @          @@ @@           @           @   @     @
        @   @  @@@     @ @ @  @@@   @@@@  @@@      @  @@@    @
        @@@@@ @   @    @   @     @ @   @ @   @     @   @     @
        @   @ @@@@@    @   @  @@@@ @   @ @@@@@     @   @     @
        @   @ @        @   @ @   @ @   @ @         @   @  @
        @   @  @@@     @   @  @@@@  @@@@  @@@     @@@   @@   @


Well, thanks to the modern miracles of parallel processing (i.e. using
the UUCPNet as one giant distributed processor)  Rog-O-Matic became an
honest member of the Fighter's guild on October 10, 1983.  This is the
fourth total victory for our Heuristic Hero, but the first time he has
done so without using a "Magic Arrow".  This comes only a year and two
weeks  after  his  first  total  victory.  He will be two years old on
October 19.  Happy Birthday!

Damon Permezel of Waterloo was the lucky user. Here is his announcement:

    - - - - - - - -
    Date: Mon, 10 Oct 83 20:35:22 PDT
    From: allegra!watmath!dapermezel@Berkeley
    Subject: total winner
    To: mauldin@cmu-cs-a

    It won!  The  lucky  SOB started out with armour class of 1 and a (-1,0)
    two handed sword (found right next to it on level 1).  Numerous 'enchant
    armour' scrolls  were found,  as well as a +2 ring of dexterity,  +1 add
    strength, and slow digestion, not to mention +1 protection.  Luck had an
    important part to play,  as  initial  confrontations  with 'U's  got him
    confused and almost killed, but for the timely stumbling onto the stairs
    (while still confused). A scroll of teleportation was seen to be used to
    advantage once, while it was pinned between 2 'X's in a corridor.
    - - - - - - - -
    Date: Thu, 13 Oct 83 10:58:26 PDT
    From: allegra!watmath!dapermezel@Berkeley
    To: mlm@cmu-cs-cad.ARPA
    Subject: log

    Unfortunately, I was not logging it. I did make sure that there
    were several witnesses to the game, who could verify that it (It?)
    was a total winner.
    - - - - - - - -

The paper is still available; for a copy of "Rog-O-Matic: A Belligerent
Expert System", please send your physical address to "Mauldin@CMU-CS-A"
and include the phrase "paper request" in the subject line.

Michael Mauldin (Fuzzy)
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, PA  15213
(412) 578-3065,  mauldin@cmu-cs-a.

------------------------------

Date: 13 Oct 83 21:35:12 EDT  (Thu)
From: Dana S. Nau <dsn%umcp-cs@CSNet-Relay>
Subject: University of Maryland Colloquium

University of Maryland
Department of Computer Science
Colloquium

Monday, October 24 -- 4:00 PM
Room 2324 - Computer Science Building


             A Formal Model of Diagnostic Problem Solving


                             Dana S. Nau
                        Computer Science Dept.
                        University of Maryland
                          College Park, Md.


      Most expert computer systems are based on production rules, and to
some readers the terms "expert computer system" and "production rule
system" may seem almost synonymous.  However, there are problem domains
for which the usual production rule techniques appear to be inadequate.

      This talk presents a useful alternative to rule-based problem
solving:  a formal model of diagnostic problem solving based on a
generalization of the set covering problem, and formalized algorithms
for diagnostic problem solving based on this model.  The model and the
resulting algorithms have the following features:
(1) they capture several intuitively plausible features of human
    diagnostic inference;
(2) they directly address the issue of multiple simultaneous causative
    disorders;
(3) they can serve as a basis for expert systems for diagnostic problem
    solving; and
(4) they provide a conceptual framework within which to view recent
    work on diagnostic problem solving in general.
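
The set-covering idea in the abstract can be sketched in a few lines
(the disorders and manifestations below are invented for illustration,
not from Nau's model): each disorder explains a set of manifestations,
and a diagnosis is a minimum-cardinality set of disorders that jointly
covers everything observed.

```python
# Minimal set-covering diagnosis sketch: find all smallest sets of
# disorders whose explained manifestations cover the observed ones.
from itertools import combinations

CAUSES = {
    "flu":     {"fever", "aches"},
    "cold":    {"cough", "sneezing"},
    "allergy": {"sneezing", "itchy_eyes"},
}

def diagnoses(observed):
    """All minimum-cardinality sets of disorders covering `observed`."""
    names = list(CAUSES)
    for size in range(1, len(names) + 1):
        found = [set(combo) for combo in combinations(names, size)
                 if observed <= set().union(*(CAUSES[d] for d in combo))]
        if found:
            return found      # smallest covers; there may be several
    return []

# Multiple simultaneous disorders (feature 2 above) fall out naturally:
# diagnoses({"fever", "cough"}) -> [{"flu", "cold"}]
```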

Coffee and refreshments - Rm. 3316 - 3:30
------------------------------

End of AIList Digest
********************

∂14-Oct-83  2049	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #78
Received: from SRI-AI by SU-AI with TCP/SMTP; 14 Oct 83  20:49:25 PDT
Date: Friday, October 14, 1983 2:25PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #78
To: AIList@SRI-AI


AIList Digest           Saturday, 15 Oct 1983      Volume 1 : Issue 78

Today's Topics:
  Philosophy - Dedekind & Introspection,
  Rational Psychology - Connectionist Models,
  Creativity - Intuition in Physics,
  Conference - Forth,
  Seminar - IUS Presentation
----------------------------------------------------------------------

Date: 10 Oct 83 11:54:07-PDT (Mon)
From: decvax!duke!unc!mcnc!ncsu!uvacs!mac @ Ucb-Vax
Subject: consciousness, loops, halting problem
Article-I.D.: uvacs.983


With regard to loops and consciousness, consider Theorem 66 of Dedekind's
book on the foundations of mathematics, "Essays on the Theory of Numbers",
translated 1901.  This is the book where the Dedekind Cut is invented to
characterize irrational numbers.

        64.  Definition.  A system S is said to be infinite when it
        is similar to a proper part of itself; in the contrary case
        S is said to be a finite system.


        66.  Theorem.  There exist infinite systems.  Proof.  My own
        realm of thoughts, i.e. the totality S of all things, which
        can be objects of my thought, is infinite.  For if s
        signifies an element of S, then is the thought s', that s
        can be object of my thought, itself an element of S.  If we
        regard this as transform phi(s) of the element s then has
        the transformation phi of S, thus determined, the property
        that the transform S' is part of S; and S' is certainly
        proper part of S, because there are elements of S (e.g. my
        own ego) which are different from such thought s' and
        therefore are not contained in S'.  Finally it is clear that
        if a, b are different elements of S, their transformation
        phi is a distinct (similar) transformation.  Hence S is
        infinite, which was to be proved.
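
In modern notation, the proof amounts to exhibiting an injective but
non-surjective self-map of S, which is Dedekind-infiniteness by
Definition 64:

```latex
% Dedekind's Theorem 66 restated: let S be the set of all possible
% objects of my thought, and let \varphi map each s to the thought
% about s.
\varphi : S \to S, \qquad
  \varphi(s) = \text{``$s$ can be an object of my thought''}
% \varphi is injective (distinct s, s' yield distinct meta-thoughts),
% so S is similar to its image \varphi(S).  But \varphi(S) \subsetneq S,
% since e.g. the ego is not of the form \varphi(s).  A set similar to
% a proper part of itself is infinite (Def. 64), so S is infinite.
```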

For that matter, net.math seems to be in a loop.  They were discussing the
Banach-Tarski paradox about a year ago.

Alex Colvin

ARPA: mac.uvacs@UDel-Relay CS: mac@virginia USE: ...uvacs!mac

------------------------------

Date: 8 Oct 83 13:53:38-PDT (Sat)
From: hplabs!hao!seismo!rochester!blenko @ Ucb-Vax
Subject: Re: life is but a dream
Article-I.D.: rocheste.3318

The statement that consciousness is an illusion does not mean it does
not or cannot have a concrete realization. I took the remarks to mean
simply that the entire mental machinery is not available for
introspection, and in its place some top-level "picture" of the process
is made available. The picture need not reflect the details of internal
processing, in the same way that most people's view of a car does not
bear much resemblance to its actual mechanistic internals.

For those who may not already be aware, the proposal is not a new one.
I find it rather attractive, admitting my own favorable
predisposition towards the proposition that mental processing is
computational.

I still think this newsgroup would be more worthwhile if readers
adopted a more tolerant attitude. It seems to be the case that there is
nearly always a silly interpretation of someone's contribution;
discovering that interpretation doesn't seem to be a very challenging
task.

        Tom Blenko
        blenko@rochester
        decvax!seismo!rochester!blenko
        allegra!rochester!blenko

------------------------------

Date: 11 Oct 83 9:37:52-PDT (Tue)
From: hplabs!hao!seismo!rochester!gary @ Ucb-Vax
Subject: Re: "Rational Psychology"
Article-I.D.: rocheste.3352

This is in response to John Black's comments, to wit:

>     Having a theoretical (or "rational" -- terrible name with all the wrong
> connotations) psychology is certainly desirable, but it does have to make
> some contact with the field it is a theory of.  One of the problems here is
> that the "calculus" of psychology has yet to be invented, so we don't have
> the tools we need for the "Newtonian mechanics" of psychology.  The latest
> mathematical candidate was catastrophe theory, but it turned out to be a
> catastrophe when applied to human behavior.  Perhaps Pereira and Doyle have
> a "calculus" to offer.

This is an issue I (and I think many AI'ers) am particularly interested in,
that is, the correspondence between our programs and the actual workings of
the mind. I believe that an *explanatory* theory of behavior will not be at
the functional level of correspondence with human behavior. Theories which are
at the functional level are important for pinpointing *what* it is that people
do, but they don't get a handle on *how* they do it. And, I think there are
side-effects of the architecture of the brain on behavior that do not show up
in functional level models.

This is why I favor (my favorite model!) connectionist models as being a
possible "calculus of Psychology". Connectionist models, for those unfamiliar
with the term, are a version of neural network models developed here at
Rochester (with related models at UCSD and CMU) that attempt to bring the
basic model unit into line with our current understanding of the information
processing capabilities of neurons. The units themselves are relatively stupid
and slow, but have state, and can compute simple functions (not restricted to
linear). The simplicity of the functions is limited only by "gentleman's
agreement", as we still really have no idea of the upper limit of neuronal
capabilities, and we are guided by what we seem to need in order to accomplish
whatever task we set them to. The payoff is that they are highly connected to
one another, and can compute in parallel. They are not allowed to pass symbol
structures around, and have their output restricted to values in the range
1..10. Thus we feel that they are most likely to match the brain in power.

The problem is how to compute with the things! We regard the outcome of a
computation to be a "stable coalition", a set of units which mutually
reinforce one another. We use units themselves to represent values of
parameters of interest, so that mutually compatible values reinforce one
another, and mutually exclusive values inhibit one another. These could
be the senses of the words in a sentence, the color of a patch in the
visual field, or the direction of intended eye movement. The result is
something that looks a lot like constraint relaxation.
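
The "stable coalition" dynamics can be sketched as a tiny relaxation
network (all units, weights, and the word-sense example are invented
for illustration; this is not the Rochester group's actual formalism):
compatible units excite one another, rival units inhibit one another,
and iterating the local updates settles into a mutually reinforcing set
of winners.

```python
# Toy stable-coalition relaxation: two candidate senses each for
# "bank" and "deposit"; compatible senses excite, rivals inhibit.

UNITS = ["bank/river", "bank/money", "deposit/sediment", "deposit/cash"]

WEIGHTS = {
    ("bank/river", "deposit/sediment"): 1.0,   # compatible senses excite
    ("bank/money", "deposit/cash"): 1.0,
    ("bank/river", "bank/money"): -1.0,        # rival senses inhibit
    ("deposit/sediment", "deposit/cash"): -1.0,
}

def w(a, b):
    return WEIGHTS.get((a, b)) or WEIGHTS.get((b, a)) or 0.0

def relax(ext, steps=10):
    """Iterate a = clamp(ext + W.a); return units in the winning coalition."""
    act = dict(ext)
    for _ in range(steps):                     # synchronous local updates
        act = {u: min(1.0, max(0.0,
                   ext[u] + sum(w(u, v) * act[v] for v in UNITS if v != u)))
               for u in UNITS}
    return {u for u, a in act.items() if a > 0.5}

# Weak evidence everywhere, slightly stronger for the money sense:
# the whole compatible coalition wins, its rivals are suppressed.
ext = {u: 0.3 for u in UNITS}
ext["bank/money"] = 0.6
# relax(ext) -> {"bank/money", "deposit/cash"}
```

The result behaves like the constraint relaxation mentioned above: a
small push toward one value selects every value compatible with it.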

Anyway, I don't want to go on forever. If this sparks discussion or interest
references are available from the U. of R. CS Dept. Rochester, NY 14627.
(the biblio. is a TR called "the Rochester Connectionist Papers").

gary cottrell   (allegra or seismo)!rochester!gary or gary@rochester

------------------------------

Date: 10 Oct 83 8:00:59-PDT (Mon)
From: harpo!eagle!mhuxi!mhuxj!mhuxl!mhuxm!pyuxi!pyuxn!rlr @ Ucb-Vax
Subject: Re: RE: Intuition in Physics
Article-I.D.: pyuxn.289

>    I presume that at birth, ones mind is not predisposed to one or another
>    of several possible theories of heavy molecule collision (for example.)
>    Further, I think it unlikely that personal or emotional interaction in
>    one "pre-analytic" stage (see anything about developmental psych.) is
>    is likely to bear upon ones opinions about those molecules. In fact I
>    find it hard to believe that anything BUT technical learning is likely
>    to bear on ones intuition about the molecules. One might want to argue
>    that ones personality might force you to lean towards "aggressive" or
>    overly complex theories, but I doubt that such effects will lead to
>    the creation of a theory.  Only a rather mild predisposition at best.

>    In psychology it is entirely different.  A person who is aggressive has
>    lots of reasons to assume everyone else is as well. Or paranoid, or
>    that rote learning is esp good or bad, or that large dogs are dangerous
>    or a number of other things that bear directly on ones theories of the
>    mind.  And these biases are acquired from the process of living and are
>    quite unavoidable.

The author believes that, though behavior patterns and experiences in a
person's life may affect their viewpoint in psychological studies, this
does not apply in "technical sciences" (not the author's phrasing, and not
mine either---I just can't think of another term) like physics.  It would
seem that flashes of "insight" obtained by anyone in a field involving
discovery have to be based on both the technical knowledge that the person
already has AND the entire life experience up to that point.  To oversimplify,
if one has never seen a specific living entity (a flower, a specific animal)
or witnessed a physical event, or participated in a particular human
interaction, one cannot base a proposed scientific model on these things, and
these flashes are often based on such analogies to reality.

------------------------------

Date: 9 Oct 83 14:38:45-PDT (Sun)
From: decvax!genrad!security!linus!utzoo!utcsrgv!utcsstat!laura @
      Ucb-Vax
Subject: Re: RE: Intuition in Physics
Article-I.D.: utcsstat.1251

Gary,
I don't know why you think about physics, but I know something about
why *I* think about physics. You see, I have this deep fondness for
"continuous creation" as opposed to "the big bang". This is too bad for me,
since "big bang" appears to be correct, or at any rate, "continuous
creation" appears to be *wrong*. Perhaps what it more correct is
"bang! sproiinngg.... bang!" or a series of bangs, but this is not
the issue.

These days, if you ask me to explain the origins of the universe, from
a physical point of view I am going to discuss "big bang". I can do this.
It just does not have the same emotional satisfaction to me as "c c",
but that is too bad for me; I do not go around spreading antiquated
theories to people who ask me in good faith for information.

But what if the evidence were not all in yet? What if there were an
equal number of reasons to believe one or the other? What would I be
doing? Talking about continuous creation. I might add a footnote that
there was "this other theory ... the big bang theory" but I would not
discuss it much. I have that strong an emotional attachment to
"continuous creation".

You can also read that other great issues in physics and astronomy had
their great believers -- there were the great "wave versus particle"
theories of light, and The Tycho Brahe cosmology versus the Kepler
cosmology, and these days you get similar arguments ...

In 50 years, we may all look back and say, well, how silly, everyone
should have seen that X, since X is now patently obvious. This will
explain why people believe X now, but not why people believed X then,
or why people DIDN'T believe X then.

Why didn't Tycho Brahe come up with Kepler's theories? It wasn't
that Kepler was a better experimenter, for Kepler himself admits
that he was a lousy experimenter and Brahe was renowned for having
the best instruments in the world, and being the most painstaking
in measurements. It wasn't that they did not know each other, for
Kepler worked with Brahe, and replaced him as Royal Astronomer, and
was familiar with his work before he ever met Brahe...

It wasn't that Brahe was religious and Kepler was not, for it was
Kepler that was almost made a minister and studied very hard in Church
schools (which literally brought him out of peasantry into the middle
class) while Brahe, the rich nobleman, could get away with acts that
the church frowned upon (to put it mildly).

Yet Kepler was able to think in heliocentric terms, while Brahe,
who came so...so...close, balked at the idea and put the sun circling
the earth while all the other planets circled the sun. Absolutely
astonishing!

I do not know where these differences came from. However, I have a
pretty good idea why continuous creation is more emotionally satisfying
for me than "big bang" (though these days I am getting to like
"bang! sproing! bang!" as well.) As a child, I ran across the "c c"
theory at the same time as I ran across all sorts of the things that
interest me to this day. In particular, I recall reading it at the
same time that I was doing a long study of myths, or creation myths
in particular. Certain myths appealed to me, and certain ones did not.

In particular, the myths that centred around the Judaeo-Christian
tradition (the one god created the world -- boom!) had almost no
appeal to me those days, since I had utter and extreme loathing for
the god in question. (this in turn was based on the discovery that
this same wonderful god was the one that tortured and burned millions
in his name for the great sin of heresy.) And thus, "big bang"
which smacked of "poof! god created" was much less favoured by me
at age 8 than continuous creation (no creator necessary).

Now that I am older, I have a lot more tolerance for Yahweh, and
I do not find it intolerable to believe in the Big Bang. However,
it is not as satisfying.  Thus I know that some of my beliefs
which in another time could have been essential to my scientific
theories and inspirations, are based on an 8-year-old me reading
about the witchcraft trials.

It seems likely that somebody out there is furthering science by
discovering new theories based on ideas which are equally unscientific.

Laura Creighton
utzoo!utcsstat!laura

------------------------------

Date: Fri 14 Oct 83 10:50:52-PDT
From: WYLAND@SRI-KL.ARPA
Subject: FORTH CONVENTION ANNOUNCEMENT

              5TH ANNUAL FORTH NATIONAL CONVENTION

                       October 14-15, 1983


                         Hyatt Palo Alto

                       4920 El Camino Real
                       Palo Alto, CA 94306



        Friday   10/14: 12:00-5:00  Conference and Exhibits
        Saturday 10/15:  9:00-5:00  Conference and Exhibits
                              7:00  Banquet and Speakers


        This FORTH convention includes sessions on:

        Relational Data Base Software - an implementation
        FORTH Based Instruments - implementations
        FORTH Based Expert Systems - GE DELTA system
        FORTH Based CAD system - an implementation
        FORTH Machines - hardware implementations of FORTH
        Pattern Recognition Based Programming System - implementation
        Robotics Uses - Androbot

        There are also introductory sessions and sessions on
various standards.  Entry fee is $5.00 for the sessions and
exhibits.  The banquet features Tom Frisna, president of
Androbot, as the speaker (fee is $25.00).

------------------------------

Date: 13 Oct 1983 1441:02-EDT
From: Sylvia Brahm <BRAHM@CMU-CS-C.ARPA>
Subject: IUS Presentation

                 [Reprinted from the CMU-C bboard.]

George Sperling from NYU and Bell Laboratories will give a talk
on Monday, October 17, 3:30 to 5:00 in Wean Hall 5409.

Title will be Image Processing and the Logic of Perception.
This talk is not a unification but merely the temporal
juxtaposition of two lines of research.  The logic of perception
involves using unreliable, ambiguous information to arrive at
a categorical decision.  Critical phenomena are multiple stable
states (in response to the same external stimulus) and path
dependence (hysteresis): the description is potential theory.
Neural models with local inhibitory interaction are the
antecedents of contemporary relaxation methods.  New (and old)
examples are provided from binocular vision and depth perception,
including a polemical demonstration of how the perceptual decision
of 3D structure in a 2D display can be dominated by an irrelevant
brightness cue.

Image processing will deal with the practical problem of squeezing
American Sign Language (ASL) through the telephone network.
Historically, an image (e.g., TV at 4 MHz) has been valued at more
than 10^3 speech tokens (e.g., telephone at 3 kHz).  With
image-processed ASL, the ratio is shown to be approaching unity.
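[As a quick sanity check on the figures in the abstract: 4 MHz and 3 kHz are the nominal bandwidths quoted above, not measurements, and the "more than 10^3" claim follows directly from their ratio:]

```python
# Back-of-envelope check of the image/speech bandwidth ratio cited above.
# The figures are the nominal ones from the abstract: broadcast TV at
# roughly 4 MHz of analog bandwidth, a telephone voice channel at 3 kHz.

tv_bandwidth_hz = 4_000_000      # ~4 MHz analog TV channel
phone_bandwidth_hz = 3_000       # ~3 kHz telephone voice channel

ratio = tv_bandwidth_hz / phone_bandwidth_hz
print(round(ratio))              # 1333, i.e. just over 10^3
```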

Movies to illustrate both themes will be shown.  Appointments to
speak with Dr. Sperling can be made by calling x3802.

------------------------------

End of AIList Digest
********************

∂17-Oct-83  0120	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #79
Received: from SRI-AI by SU-AI with TCP/SMTP; 17 Oct 83  01:19:42 PDT
Date: Sunday, October 16, 1983 10:13PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #79
To: AIList@SRI-AI


AIList Digest            Monday, 17 Oct 1983       Volume 1 : Issue 79

Today's Topics:
  AI Societies - Bledsoe Election,
  AI Education - Videotapes & Rutgers Mini-Talks,
  Psychology - Intuition & Consciousness
----------------------------------------------------------------------

Date: Fri 14 Oct 83 08:41:39-CDT
From: Robert L. Causey <Cgs.Causey@UTEXAS-20.ARPA>
Subject: Congratulations Woody!

               [Reprinted from the UTexas-20 bboard.]


Woody Bledsoe has been named president-elect of the American
Association for Artificial Intelligence.  He will become
president in August 1984.

According to the U.T. press release, Woody said, "You can't
replace the human, but you can greatly augment his abilities."
Woody has greatly augmented the computer's abilities. Congratulations!

------------------------------

Date: 12 Oct 83 12:59:24-PDT (Wed)
From: ihnp4!hlexa!pcl @ Ucb-Vax
Subject: AI (and other) videotapes to be produced by AT&T Bell
         Laboratories
Article-I.D.: hlexa.287

[I'm posting this for someone who does not have access to netnews.
Send comments to the address below; electronic mail to me will be
forwarded. - PCL]

AT&T Bell Laboratories is planning to produce a
videotape on artificial intelligence that concentrates
on "knowledge representation" and "search strategies"
in expert systems.  The program will feature a Bell
Labs prototype expert system called ACE.

Interviews of Bell Labs developers will provide the
content.  Technical explanations will be made graphic
with computer generated animation.

The tape will be sold to colleges and industry by
Hayden Book Company as part of a software series.
Other tapes will cover Software Quality, Software
Project Management and Software Design Methodologies.

Your comments are welcome.  Write to W. L. Gaddis,
Senior Producer, Bell Laboratories, 150 John F. Kennedy
Parkway, Room 3L-528, Short Hills, NJ 07078

------------------------------

Date: 16 Oct 83 22:42:42 EDT
From: Sri <Sridharan@RUTGERS.ARPA>
Subject: Mini-talks

Recently two notices were copied from the Rutgers bboard to AIList.
They listed a number of "talks" by various faculty back to back.
Those who wondered how a talk could be given in 10 minutes and
those who wondered why a talk would be given in 10 minutes may
be glad to know the purpose of the series.  It is an innovative
method designed by the CS graduate student society to introduce
new graduate students and new faculty members to the research
interests of the CS faculty.  Each talk typically outlined
the area of CS and AI of interest to the faculty member, discussed
research opportunities, and described the background (readings,
courses) necessary for doing research in that area.

I have participated in this mini-talk series for several years and
have found it valuable as a speaker.  Being given about 10 minutes
to say what I am interested in forces me to distill my thoughts and
to say them simply.  The feedback from students is also positive.
Perhaps you will hear from some of the students too.

------------------------------

Date: 11 Oct 83 2:44:12-PDT (Tue)
From: harpo!utah-cs!shebs @ Ucb-Vax
Subject: Re: the Halting problem.
Article-I.D.: utah-cs.1985

I share your notion (that human ability is limited, and that machines
might actually go beyond man in "consciousness"), but not your confidence.
How do you intend to prove your ideas?  You can't just wait for a fantastic
AI program to come along - you'll end up right back in the Turing Test
muddle.  What *is* consciousness?  How can it be characterized abstractly?
Think in terms of universal psychology - given a being X, is there an
effective procedure (used in the technical sense) to determine whether
that being is conscious?  If so, what is that procedure?

                                        AI is applied philosophy,
                                        stan the l.h.
                                        utah-cs!shebs

ps Re rational or universal psychology: a professor here observed that
it might end up with the status of category theory - mildly interesting
and all true, but basically worthless in practice... Any comments?

------------------------------

Date: 12 Oct 83 11:43:39-PDT (Wed)
From: decvax!cca!milla @ Ucb-Vax
Subject: Re: the Halting problem.
Article-I.D.: cca.5880

Of course self-awareness is real.   The  point  is  that  self-awareness
comes  about  BECAUSE  of  the  illusion  of consciousness.  If you were
capable of only very primitive thought, you would  be  less  self-aware.
The  greater  your  capacity  for complex thought, the more you perceive
that your actions are the result of an active,  thinking  entity.   Man,
because  of  his  capacity  to form a model of the world in his mind, is
able to form a model of himself.  This all makes  sense  from  a  purely
physical  viewpoint;  there  is  no  need  for  a supernatural "soul" to
complement the brain.  Animals appear to have some  self-awareness;  the
quantity  depends  on  their intelligence.  Conceivably, a very advanced
computer system could have a high degree  of  self-awareness.   As  with
consciousness,  it is lack of information -- how the brain works, random
factors, etc. which makes self-awareness  seem  to  be  a  very  special
quality.  In fact, it is a very simple, unremarkable characteristic.

                                                M. Massimilla

------------------------------

Date: 12 Oct 83 7:16:26-PDT (Wed)
From: harpo!eagle!mhuxi!mhuxl!ulysses!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Physics and Intuition
Article-I.D.: ncsu.2367


I intend this to be my final word on the matter.  I intend it to be
brief: as someone said, a bit more tolerance on this group would help.
From Laura we have a wonderful story of the intermeshing of physics and
religion.  Well, I picked molecular physics for its avoidance of any
normal life experiences.  Cosmology and creation are not in that category
quite so strongly because religion is an everyday thing and will lead to
biases in cosmological theories.  Clearly there is a continuum from
things which are divorced from everyday experience to those that are
very tightly connected to it.  My point is that most "hard" sciences
are at one end of the continuum while psychology is clearly way over
at the other end, by definition.  It is my position that the rather
big difference between the way one can think about the two ends of the
spectrum suggests that what works well at one end may well be quite
inappropriate at the other.  Or it may work fine.  But there is a burden
of proof that I hand off to the rational psychologists before I will
take them more seriously than I take most psychologists.  I have the same
attitude towards cosmology. I find it patently ludicrous that so many
people push our limited theories so far outside the range of applicability
and expect the extrapolation to be accurate.  Such extrapolation is
an interesting way to understand the failings of the theories, but to
believe that DOES require faith without substantiation.

I dislike being personal, but Laura is trying to make it seem black and
white.  The big bang has hardly been proved. But she seems to be saying
it has.  It is of course not so simple. Current theories and data
seem to be tipping the scales, but the scales move quite slowly and will
no doubt be straightened out by "new" work 30 years hence.

The same is true of my point about technical reasoning.  Clearly no
thought can be entirely divorced from life experiences without 10
years on a mountain-top.  It's not that simple.  That doesn't mean that
there are not definable differences between different ways of thinking
and that some may be more suitable to some fields.  Most psychologists
are quite aware of this problem (I didn't make it up) and as a result
purely experimental psychology has always been "trusted" more than
theorizing without data.  Hard numbers give one some hope that it is
the world, not your relationship with a pet turtle speaking in your
work.

If anyone has any more to say to me about this, send me mail, please.
I suspect this is getting tiresome for most readers.  (It's getting
tiresome for me...)  If you quote me or use my name, I will always
respond.  This network with its delays is a bad debate forum.  Stick to
ideas in abstraction from the proponent of the idea.  And please look
for what someone is trying to say before assuming that they are blathering.
----GaryFostel----

------------------------------

Date: 14 Oct 83 13:43:56 EDT  (Fri)
From: Paul Torek <flink%umcp-cs@CSNet-Relay>
Subject: consciousness and the teleporter

    From Michael Condict   ...!cmcl2!csd1!condict

        This, then, is the reason I would never step into one of those
        teleporters that functions by ripping apart your atoms, then
        reconstructing an exact copy at a distant site.  [...]

In spite of the fact that consciousness (I agree with the growing chorus) is
NOT an illusion, I see nothing wrong with using such a teleporter.  Let's
take the case as presented in the sci-fi story (before Michael Condict rigs
the controls).  A person disappears from (say) Earth and a person appears at
(say) Tau Ceti IV.  The one appearing at Tau Ceti is exactly like the one
who left Earth as far as anyone can tell: she looks the same, acts the same,
says the same sort of things, displays the same sort of emotions.  Note that
I did NOT say she is the SAME person -- although I would warn you not to
conclude too hastily whether she is or not.  In my opinion, *it doesn't
matter* whether she is or not.

To get to the point:  although I agree that consciousness needs something to
exist, there *IS* something there for it -- the person at Tau Ceti.  On
what grounds can anyone believe that the person at Tau Ceti lacks a
consciousness?  That is absurd -- consciousness is a necessary concomitant
of a normal human brain.  Now there IS a question as to whether the
conscious person at Tau Ceti is *you*, and thus as to whether his mind
is *your* mind.  There is a considerable philosophical literature on this
and very similar issues -- see *A Dialogue on Personal Identity and
Immortality* by John Perry, and "Splitting Self-Concern" by Michael B. Green
in *Pacific Philosophical Quarterly*, vol. 62 (1981).

But in my opinion, there is a real question whether you can say whether
the person at Tau Ceti is you or not.  Nor, in my opinion, is that
question really important.  Take the modified case in which Michael Condict
rigs the controls so that you are transported, yet remain also at Earth.
Michael Condict calls the one at Earth the "original", and the one at Tau
Ceti the "copy".  But how do you know it isn't the other way around -- how
do you know you (your consciousness) weren't teleported to Tau Ceti, while
a copy (someone else, with his own consciousness) was produced at Earth?

"Easy -- when I walk out of the transporter room at Earth, I know I'm still
me; I can remember everything I've done and can see that I'm still the same
person."  WRONGO -- the person at Tau Ceti has the same memories, etc.  I
could just as easily say "I'll know I was transported when I walk out of the
transporter room at Tau Ceti and realize that I'm still the same person."

So in fairness, we can't say "You walk out of the transporter room at both
ends, with the original you realizing that something went wrong."  We have
to say "You walk out of the transporter at both ends, with *the one at
Earth* realizing something is wrong."  But wait -- they can't BOTH be you --
or can they?  Maybe neither is you!  Maybe there's a continuous flow of
"souls" through a person's body, with each one (like the "copy" at Tau Ceti
(or is it at Earth)) *seeming* to remember doing the things that that body
did before ...

If you acknowledge that consciousness is rooted in the physical human brain,
rather than some mysterious metaphysical "soul" that can't be seen or
touched or detected in any way at all, you don't have to worry about whether
there's a continuous flow of consciousnesses through your body.  You don't
have to be a dualist to recognize the reality of consciousness; in fact,
physicalism has the advantage that it *supports* the commonsense belief that
you are the same person (consciousness) you were yesterday.

                                --Paul Torek, U of MD, College Park
                                ..umcp-cs!flink

------------------------------

End of AIList Digest
********************

∂20-Oct-83  1541	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #80
Received: from SRI-AI by SU-AI with TCP/SMTP; 20 Oct 83  15:40:51 PDT
Date: Thursday, October 20, 1983 9:23AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #80
To: AIList@SRI-AI


AIList Digest           Thursday, 20 Oct 1983      Volume 1 : Issue 80

Today's Topics:
  Administrivia - Complaints &  Seminar Abstracts,
  Implementations - Parallel Production System,
  Natural Language - Phrasal Analysis & Macaroni,
  Psychology - Awareness,
  Programming Languages - Elegance and Purity,
  Conferences - Reviewers needed for 1984 NCC,
  Fellowships - Texas
----------------------------------------------------------------------

Date: Tue 18 Oct 83 20:33:15-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Complaints

I have received copies of two complaints sent to the author
of a course announcement that I published.  The complaints
alleged that the announcement should not have been put out on
the net.  I have three comments:

First, such complaints should come to me, not to the original
authors.  The author is responsible for the content, but it is
my decision whether or not to distribute the material.  In this
case, I felt that the abstract of a new and unique AI course
was of interest to the academic half of the AIList readership.

Second, there is a possibility that the complainants received
the article in undigested form, and did not know that it was
part of an AIList digest.  If anyone is currently distributing
AIList in this manner, I want to know about it.  Undigested
material is being posted to net.ai and to some bboards, but it
should not be showing up in personal mailboxes.

Third, this course announcement was never formally submitted
to AIList.  I picked the item up from a limited distribution,
and failed to add a "reprinted from" or disclaimer line to
note that fact.  I apologize to Dr. Moore for not getting in
touch with him before sending the item out.

                                        -- Ken Laws

------------------------------

Date: Tue 18 Oct 83 09:01:29-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Seminar Abstracts

It has been suggested to me that seminar abstracts would be more
useful if they contained the home address (or net address, phone
number, etc.) of the speaker.  I have little control over the
content of these messages, but I encourage those who compose them
to include such information.  Your notices will then be of greater
use to the scientific community beyond just those who can attend
the seminars.

                                        -- Ken Laws

------------------------------

Date: Mon 17 Oct 83 15:44:52-EDT
From: Mark D. Lerner <LERNER@COLUMBIA-20.ARPA>
Subject: Parallel production systems.


The parallel production  system interpreter is  running
on the 15 node DADO prototype. We can presently run  up
to 32 productions, with 12 clauses in each  production.
The prototype has been operational since April 1983.

------------------------------

Date: 18 Oct 1983 0711-PDT
From: MEYERS.UCI-20A@Rand-Relay
Subject: phrasal analysis


Recently someone asked why PHRAN was not based on a grammar.
It just so happens ....

I have written a parser which uses many of the ideas of PHRAN
but which organizes the phrasal patterns into several interlocking
grammars, some 'semantic' and some syntactic.

The program is called VOX (Vocabulary Extension System) and attempts
a 'complete' analysis of English text.

I am submitting a paper about the concepts underlying the system
to COLING, the conference on Computational Linguistics.
Whether or not it is accepted, I will make a UCI Technical Report
out of it.

To obtain a copy of the paper, write:

                Amnon Meyers
                AI Project
                Dept. of Computer Science
                University of California,
                Irvine, CA   92717

------------------------------

Date: Wednesday, 19 October 1983 10:48:46 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: Grammars; Greek; invective

One comment and two meta-comments:

Re: the validity of grammars: almost no one claims that grammatical
        phenomena don't exist (even Schank doesn't go that far).  What the
        argument generally is about is whether one should, as the first step
        in understanding an input, build a grammatical tree, without any (or
        much) information from either semantics or the current
        conversational context.  One side wants to do grammar first, by
        itself, and then the other stuff, whereas the other side wants to try
        to use all available knowledge right from the start.  Of course, there
        are folks taking extreme positions on both sides, and people
        sometimes get a bit carried away in the heat of an argument.
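[The two positions sketched above can be caricatured in a few lines of code. Everything here — the toy lexicon, the slot-filling "semantics" — is invented for illustration and stands in for no real parser:]

```python
# Toy contrast between the two parsing strategies described above.
# The lexicon and the "semantics" are deliberately trivial.

LEXICON = {"the": "DET", "dog": "NOUN", "barks": "VERB"}

def syntax_first(words):
    """Pipeline: build a grammatical structure first, interpret it after."""
    tree = [(w, LEXICON[w]) for w in words]          # step 1: grammar only
    nouns = [w for w, tag in tree if tag == "NOUN"]  # step 2: semantics
    return {"actor": nouns[0]}

def integrated(words):
    """Interleaved: apply grammatical and semantic knowledge word by word."""
    meaning = {}
    for w in words:
        # category and meaning are consulted together at each word
        if LEXICON[w] == "NOUN" and "actor" not in meaning:
            meaning["actor"] = w
    return meaning
```

On an input this trivial the two strategies agree; the argument in the message is about which approach holds up when grammar, semantics, and conversational context pull in different directions.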

Re: Greek: As a general rule, it would be helpful if people who send in
        messages containing non-English phrases included translations.  I
        cannot judge the validity of the Macaroni argument, since I don't
        completely understand either example.  One might argue that I should
        learn Greek, but I think expecting me to know Maori grammatical
        classes is stretching things a bit.

Re: invective: Even if the reference to Yahweh was meant as a childhood
        opinion which has mellowed with age, I object to statements of the
        form "this same wonderful god... tortured and burned..." etc.
        Perhaps it was a typo.  As we all know, people have tortured and
        burnt other people for all sorts of reasons (including what sort of
        political/economic systems small Asian countries should have), and I
        found the statement offens