perm filename AI.V2[BB,DOC] blob sn#781782 filedate 1985-01-13 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00186 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00023 00002	This is Volume 2 of the AI-List digests.
C00024 00003	∂03-Jan-84  1823	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #1 
C00041 00004	∂04-Jan-84  2049	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #2 
C00072 00005	∂05-Jan-84  1502	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #3 
C00101 00006	∂05-Jan-84  1939	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #4 
C00128 00007	∂09-Jan-84  1641	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #5 
C00142 00008	∂10-Jan-84  1336	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #6 
C00164 00009	∂16-Jan-84  2244	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #7 
C00190 00010	∂17-Jan-84  2348	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #8 
C00214 00011	∂22-Jan-84  1625	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #9 
C00238 00012	∂30-Jan-84  2209	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #10
C00266 00013	∂02-Feb-84  0229	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #11
C00291 00014	∂03-Feb-84  2358	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #12
C00311 00015	∂05-Feb-84  0007	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #13
C00328 00016	∂11-Feb-84  0005	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #14
C00357 00017	∂11-Feb-84  0121	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #15
C00379 00018	∂11-Feb-84  0215	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #16
C00414 00019	∂11-Feb-84  2236	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #17
C00445 00020	∂11-Feb-84  2320	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #18
C00466 00021	∂15-Feb-84  2052	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #19
C00486 00022	∂22-Feb-84  1137	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #20
C00515 00023	∂22-Feb-84  1758	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #21
C00539 00024	∂29-Feb-84  1547	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #22
C00565 00025	∂29-Feb-84  1645	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #23
C00586 00026	∂06-Mar-84  1159	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #24
C00614 00027	∂06-Mar-84  1305	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #25
C00639 00028	∂06-Mar-84  1615	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #26
C00672 00029	∂07-Mar-84  1632	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #27
C00698 00030	∂09-Mar-84  2228	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #28
C00720 00031	∂09-Mar-84  2324	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #29
C00742 00032	∂12-Mar-84  1023	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #30
C00767 00033	∂13-Jan-85  1624	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #31
C00793 00034	∂16-Mar-84  1247	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #32
C00818 00035	∂18-Mar-84  2328	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #33
C00852 00036	∂22-Mar-84  1127	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #34
C00881 00037	∂26-Mar-84  1241	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #35
C00905 00038	∂29-Mar-84  0017	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #36
C00929 00039	∂29-Mar-84  1401	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #37
C00953 00040	∂29-Mar-84  2317	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #38
C00970 00041	∂31-Mar-84  1655	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #39
C00990 00042	∂03-Apr-84  2054	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #40
C01018 00043	∂03-Apr-84  2141	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #41
C01040 00044	∂04-Apr-84  1707	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #42
C01060 00045	∂05-Apr-84  2050	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #43
C01085 00046	∂07-Apr-84  2324	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #44
C01106 00047	∂13-Jan-85  1603	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #45
C01129 00048	∂13-Apr-84  1129	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #46
C01152 00049	∂15-Apr-84  1824	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #47
C01171 00050	∂16-Apr-84  1106	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #48
C01191 00051	∂19-Apr-84  1810	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #49
C01220 00052	∂21-Apr-84  1143	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #50
C01247 00053	∂22-Apr-84  1629	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #51
C01267 00054	∂24-Apr-84  2250	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #52
C01290 00055	∂28-Apr-84  1704	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #53
C01315 00056	∂03-May-84  1104	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #54
C01333 00057	∂04-May-84  2111	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #55
C01362 00058	∂07-May-84  1032	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #56
C01381 00059	∂08-May-84  2210	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #57
C01404 00060	∂14-May-84  1803	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #58
C01428 00061	∂20-May-84  2349	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #59
C01448 00062	∂21-May-84  0044	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #60
C01478 00063	∂21-May-84  1047	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #61
C01504 00064	∂22-May-84  2158	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #62
C01533 00065	∂25-May-84  0016	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #63
C01560 00066	∂25-May-84  1045	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #64
C01583 00067	∂27-May-84  2229	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #65
C01607 00068	∂29-May-84  1148	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #66
C01634 00069	∂31-May-84  2333	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #67
C01661 00070	∂01-Jun-84  1743	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #68
C01687 00071	∂13-Jan-85  1603	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #69
C01714 00072	∂05-Jun-84  2249	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #70
C01742 00073	∂06-Jun-84  2238	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #71
C01762 00074	∂10-Jun-84  1607	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #72
C01787 00075	∂15-Jun-84  1345	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #73
C01816 00076	∂17-Jun-84  1531	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #74
C01838 00077	∂20-Jun-84  1154	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #75
C01864 00078	∂21-Jun-84  2327	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #76
C01885 00079	∂22-Jun-84  0657	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #77
C01911 00080	∂24-Jun-84  1136	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #78
C01938 00081	∂25-Jun-84  0021	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #79
C01972 00082	∂26-Jun-84  0054	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #80
C01998 00083	∂28-Jun-84  1319	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #81
C02024 00084	∂28-Jun-84  1428	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #82
C02042 00085	∂05-Jul-84  2304	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #83
C02062 00086	∂05-Jul-84  2203	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #84
C02085 00087	∂06-Jul-84  1220	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #85
C02114 00088	∂07-Jul-84  1252	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #86
C02138 00089	∂10-Jul-84  2221	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #87
C02166 00090	∂11-Jul-84  1558	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #88
C02188 00091	∂12-Jul-84  1604	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #89
C02216 00092	∂13-Jul-84  2352	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #90
C02242 00093	∂16-Jul-84  0015	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #91
C02271 00094	∂17-Jul-84  2244	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #92
C02299 00095	∂18-Jul-84  1916	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #93
C02324 00096	∂21-Jul-84  1638	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #94
C02349 00097	∂25-Jul-84  0101	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #95
C02378 00098	∂26-Jul-84  1439	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #96
C02410 00099	∂27-Jul-84  2351	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #97
C02431 00100	∂01-Aug-84  1020	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #98
C02445 00101	∂02-Aug-84  1213	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #99
C02465 00102	∂04-Aug-84  0512	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #100    
C02494 00103	∂04-Aug-84  2220	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #101    
C02517 00104	∂08-Aug-84  1054	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #102    
C02532 00105	∂10-Aug-84  0045	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #103    
C02552 00106	∂12-Aug-84  1928	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #104    
C02572 00107	∂14-Aug-84  2357	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #105    
C02590 00108	∂19-Aug-84  1854	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #106    
C02610 00109	∂19-Aug-84  1951	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #107    
C02629 00110	∂21-Aug-84  1735	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #108    
C02655 00111	∂25-Aug-84  1857	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #109    
C02675 00112	∂24-Aug-84  1514	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #110    
C02700 00113	∂28-Aug-84  2259	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #111    
C02721 00114	∂31-Aug-84  1217	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #112    
C02742 00115	∂02-Sep-84  2241	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #113    
C02765 00116	∂05-Sep-84  1121	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #114    
C02792 00117	∂12-Sep-84  1416	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #115    
C02814 00118	∂12-Sep-84  1525	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #116    
C02831 00119	∂12-Sep-84  1650	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #117    
C02856 00120	∂13-Sep-84  2330	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #118    
C02879 00121	∂16-Sep-84  1655	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #119    
C02903 00122	∂19-Sep-84  1045	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #120    
C02925 00123	∂19-Sep-84  2307	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #121    
C02946 00124	∂21-Sep-84  0034	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #122    
C02966 00125	∂23-Sep-84  1304	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #123    
C02991 00126	∂23-Sep-84  2339	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #124    
C03014 00127	∂26-Sep-84  0102	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #125    
C03035 00128	∂27-Sep-84  0258	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #126    
C03051 00129	∂28-Sep-84  0103	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #127    
C03069 00130	∂01-Oct-84  1132	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #128    
C03088 00131	∂02-Oct-84  1108	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #129    
C03112 00132	∂03-Oct-84  1218	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #130    
C03133 00133	∂06-Oct-84  1720	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #131    
C03166 00134	∂07-Oct-84  1054	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #132    
C03186 00135	∂08-Oct-84  1204	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #133    
C03212 00136	∂09-Oct-84  0024	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #134    
C03243 00137	∂10-Oct-84  1509	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #135    
C03265 00138	∂11-Oct-84  1148	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #136    
C03289 00139	∂13-Oct-84  0045	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #137    
C03313 00140	∂14-Oct-84  2048	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #138    
C03340 00141	∂17-Oct-84  1249	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #139    
C03370 00142	∂18-Oct-84  0000	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #140    
C03396 00143	∂18-Oct-84  1240	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #141    
C03421 00144	∂19-Oct-84  1148	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #142    
C03446 00145	∂20-Oct-84  2331	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #143    
C03469 00146	∂24-Oct-84  1337	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #144    
C03497 00147	∂27-Oct-84  2326	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #145    
C03523 00148	∂28-Oct-84  0029	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #146    
C03549 00149	∂31-Oct-84  0030	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #147    
C03575 00150	∂01-Nov-84  1138	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #148    
C03600 00151	∂05-Nov-84  1145	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #149    
C03618 00152	∂07-Nov-84  1810	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #150    
C03648 00153	∂09-Nov-84  1308	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #151    
C03680 00154	∂11-Nov-84  0004	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #152    
C03705 00155	∂11-Nov-84  2334	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #153    
C03738 00156	∂15-Nov-84  0022	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #154    
C03762 00157	∂15-Nov-84  0125	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #155    
C03785 00158	∂15-Nov-84  2000	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #156    
C03814 00159	∂18-Nov-84  1358	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #157    
C03844 00160	∂21-Nov-84  1306	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #158    
C03867 00161	∂21-Nov-84  2341	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #159    
C03893 00162	∂24-Nov-84  1543	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #160    
C03915 00163	∂25-Nov-84  1736	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #161    
C03947 00164	∂28-Nov-84  1620	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #162    
C03978 00165	∂29-Nov-84  1254	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #163    
C04008 00166	∂30-Nov-84  0005	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #164    
C04037 00167	∂01-Dec-84  2350	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #165    
C04062 00168	∂06-Dec-84  1139	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #166    
C04091 00169	∂02-Dec-84  1843	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #167    
C04126 00170	∂02-Dec-84  2016	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #168    
C04146 00171	∂02-Dec-84  2145	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #169    
C04164 00172	∂04-Dec-84  0104	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #170    
C04189 00173	∂06-Dec-84  1355	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #171    
C04212 00174	∂06-Dec-84  1853	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #172    
C04238 00175	∂08-Dec-84  0032	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #173    
C04268 00176	∂08-Dec-84  2332	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #174    
C04297 00177	∂11-Dec-84  1203	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #175    
C04322 00178	∂13-Dec-84  1448	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #176    
C04351 00179	∂13-Dec-84  1927	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #177    
C04376 00180	∂16-Dec-84  1507	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #178    
C04405 00181	∂19-Dec-84  1435	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #179    
C04430 00182	∂21-Dec-84  1303	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #180    
C04461 00183	∂21-Dec-84  1814	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #181    
C04488 00184	∂26-Dec-84  0122	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #182    
C04513 00185	∂31-Dec-84  1338	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #183    
C04534 00186	∂04-Jan-85  2250	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #184    
C04555 ENDMK
C⊗;
This is Volume 2 of the AI-List digests.

The digests are edited by Ken Laws from SRI.
To be added to the list, send mail to AIList-REQUEST@SRI-AI or, better yet,
read the current digests in the file AI.TXT[2,2].
Mail your submissions to AIList@SRI-AI.
Vol. 1 is in file AI.V1[2,2].
∂03-Jan-84  1823	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #1 
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Jan 84  18:23:22 PST
Date: Tue  3 Jan 1984 15:33-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #1
To: AIList@SRI-AI


AIList Digest           Wednesday, 4 Jan 1984       Volume 2 : Issue 1

Today's Topics:
  Administrivia - Host List & VISION-LIST,
  Cognitive Psychology - Looping Problem,
  Programming Languages - Questions,
  Logic Programming - Disjunctions,
  Vision - Fiber Optic Camera
----------------------------------------------------------------------

Date: Tue 3 Jan 84 15:07:27-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Host List

The AIList readership has continued to grow throughout the year, and only
a few individuals have asked to be dropped from the distribution network.
I cannot estimate the number of readers receiving AIList through bboards
and remailing nodes, but the existence of such services has obviously
reduced the outgoing net traffic.  For those interested in such things,
I present the following approximate list of host machines on my direct
distribution list.  Numbers in parentheses indicate individual subscribers;
all other hosts (and those marked with "bb") have redistribution systems.
A few of the individual subscribers are undoubtedly redistributing
AIList to their sites, and a few redistribution nodes receive the list
from other such nodes (e.g., PARC-MAXC from RAND-UNIX).  AIList is
also available to USENET through the net.ai distribution system.

    AEROSPACE(8), AIDS-UNIX, BBNA(2), BBNG(1), BBN-UNIX(8), BBN-VAX(3),
    BERKELEY(3), BITNET@BERKELEY(2), ONYX@BERKELEY(1), UCBCAD@BERKELEY(2),
    BRANDEIS(1), BRL(bb+1), BRL-VOC(1), BROWN(1), BUFFALO-CS(1),
    cal-unix@SEISMO(1), CIT-20, CMU-CS-A(bb+11), CMU-CS-G(3),
    CMU-CS-SPICE(1), CMU-RI-ISL1(1), COLUMBIA-20, CORNELL,
    DEC-MARLBORO(7), EDXA@UCL-CS(1), GATECH, HI-MULTICS(bb+1),
    CSCKNP@HI-MULTICS(2), SRC@HI-MULTICS(1), houxa@UCLA-LOCUS(1),
    HP-HULK(1), IBM-SJ(1), JPL-VAX(1), KESTREL(1), LANL, LLL-MFE(2),
    MIT-MC, NADC(2), NOSC(4), NOSC-CC(1), CCVAX@NOSC(3), NPRDC(2),
    NRL-AIC, NRL-CSS, NSF-CS, NSWC-WO(2), NYU, TYM@OFFICE(bb+2),
    RADC-Multics(1), RADC-TOPS20, RAND-UNIX, RICE, ROCHESTER(2),
    RUTGERS(bb+2), S1-C(1), SAIL, SANDIA(bb+1), SCAROLINA(1),
    sdcrdcf@UCBVAX(1), SRI-AI(bb+6), SRI-CSL(1), SRI-KL(12), SRI-TSC(3),
    SRI-UNIX, SU-AI(2), SUMEX, SUMEX-AIM(2), SU-DSN, SU-SIERRA@SU-DSN(1),
    SUNY-SBCS(1), SU-SCORE(11), SU-PSYCH@SU-SCORE(1), TEKTRONIX(1), UBC,
    UCBKIM, UCF-CS, UCI, UCL-CS, UCLA-ATS(1), UCLA-LOCUS(bb+1),
    UDel-Relay(1), UIUC, UMASS-CS, UMASS-ECE(1), UMCP-CS, UMN-CS(bb+1),
    UNC, UPENN, USC-ECL(7), USC-CSE@USC-ECL(2), USC-ECLD@USC-ECL(1),
    SU-AI@USC-ECL(4), USC-ECLA(1), USC-ECLB(2), USC-ECLC(2), USC-ISI(5),
    USC-ISIB(bb+6), USC-ISID(1), USC-ISIE(2), USC-ISIF(10), UTAH-20(bb+2),
    utcsrgv@CCA-UNIX(1), UTEXAS-20, TI@UTEXAS-20(1), WISC-CRYS(3),
    WASHINGTON(4), YALE

                                        -- Ken Laws

------------------------------

Date: Fri, 30 Dec 83 15:20:41 PST
From: Philip Kahn <kahn@UCLA-CS>
Subject: Are you interested in a more specialized "VISION-LIST"?

        I've been feeling frustrated (again).  I really like AIList,
since it provides a nice forum for general AI topics.  Yet, like
many of you out there, I am primarily a vision researcher looking into
ways to facilitate machine vision and trying to decipher the strange,
all-too-often unknown mechanisms of sight.  What we need is a
specialized VISION-LIST to provide a more specific forum that will
foster a greater exchange of ideas among researchers in our field.
So...one question and one request:  1) is there such a list in the
works?, and  2) if you are interested in such a list PLEASE SPEAK UP!!

                        Thanks!
                        Philip Kahn
                        UCLA

------------------------------

Date: Fri 30 Dec 83 11:04:17-PST
From: Rene Bach <BACH@SUMEX-AIM.ARPA>
Subject: Loop detection

Mike,
        It seems to me that we have a built-in mechanism which remembers
what is done (thought) at all times; i.e., we know and remember (more or
less) our train of thought.  When we get into a loop, the mind is
immediately triggered: at the first element, we think it could be a
coincidence, but as more elements are found matching the loop, the more
convinced we become that there is a repeat.  The reading example is quite
good: even when just one word appears in the same sentence context
(meaning rather than syntactic context), my mind is triggered and I go
back and check whether there is actually a loop or not.  Thus, to
implement this property in a computer, we would need a mechanism able to
remember the path and to check, at each step, whether it has been
followed already (and how far).  Detection of repeats of logical rather
than word-for-word sentences (or sets of ideas) is still an open problem.
        I think that the loop detection mechanism is part of the
memorization process, which is an integral part of the reasoning engine;
it is not sitting "on top" of and monitoring the reasoning process from
above.

Rene
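
The path-remembering mechanism described above can be sketched in a few
lines of Python (an illustrative sketch only; the example states are
invented):

```python
def detect_loop(states):
    """Walk a sequence of states, remembering the path taken so far.

    Returns the index at which a previously seen state recurs
    (the first sign of a loop), or None if no state repeats.
    """
    seen = {}  # state -> index where it was first encountered
    for i, state in enumerate(states):
        if state in seen:
            return i  # repeat detected: we have been here before
        seen[state] = i
    return None

# A train of thought that wanders back to an earlier idea:
path = ["coffee", "deadline", "email", "deadline"]
```

Here detect_loop(path) flags the repeat as soon as "deadline" recurs;
deciding whether the repeat is a coincidence or a genuine loop (Bach's
growing-confidence step) would require matching more of the path.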

------------------------------

Date: 2 January 1984 14:40 EST
From: Herb Lin <LIN @ MIT-ML>
Subject: stupid questions....

Speaking as an interested outsider to AI, I have a few questions that
I hope someone can answer in non-jargon.  Any help is greatly appreciated:

1. Just why is a language like LISP better for doing AI stuff than a
language like PASCAL or ADA?  In what sense is LISP "more natural" for
simulating cognitive processes?  Why can't you do this in more tightly
structured languages like PASCAL?

2. What is the significance of not distinguishing between data and
program in LISP?  How does this help?

3. What is the difference between decisions made in a production
system (as I understand it, a production is a construct of the form IF
X is true, then do Y, where X is a condition and Y is a procedure),
and decisions made in a PASCAL program (in which IF statements also
have the same (superficial) form)?


many thanks.

------------------------------

Date: 1 Jan 84 1:01:50-PST (Sun)
From: hplabs!hpda!fortune!rpw3 @ Ucb-Vax
Subject: Re: Re: a trivial reasoning problem? - (nf)
Article-I.D.: fortune.2135

Gee, and to a non-Prolog person (me) your problem seemed so simple
(even given the no-exhaustive-search rule). Let's see,

        1. At least one of A or B is on = (A v B)
        2. If A is on, B is not         = (A -> ~B) = (~A v (~B)) [def'n of ->]
        3. A and B are binary conditions.

From #3, we are allowed to use first-order Boolean algebra (WFF'n'PROOF game).
(That is, #3 is a meta-condition.)

So, #1 and #2 together is just (#1) ↑ (#2) [using caret ↑ for conjunction]

or,             #1 ↑ #2 = (A v B) ↑ (~A v ~B)
(distributivity)        = (A ↑ ~A) v (A ↑ ~B) v (B ↑ ~A) v (B ↑ ~B)
(from #3 and ↑-axiom)   = (A ↑ ~B) v (B ↑ ~A)
(def'n of xor)          = A xor B
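
The reduction above is easy to check mechanically; a small Python sketch
(added for illustration, not part of the original derivation) enumerates
the four states and confirms that rules #1 and #2 together hold exactly
where A xor B holds:

```python
from itertools import product

def constraint(a, b):
    """Conjunction of rule 1 (A or B) and rule 2 (not both on)."""
    return (a or b) and ((not a) or (not b))

# Enumerate the whole two-variable state space.
table = {(a, b): constraint(a, b) for a, b in product([False, True], repeat=2)}

# The constraint is equivalent to exclusive-or: a != b.
assert all(table[(a, b)] == (a != b) for a, b in table)
```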

Hmmm... Maybe I am missing your original question altogether. Is your real
question "How does one enumerate the elements of a state-space (powerset)
for which a certain logical proposition is true without enumerating (examining)
elements of the state-space for which the proposition is false?"?

To me (an ignorant "non-ai" person), this seems excluded by a version of the
First Law of Thermodynamics, namely, the Law of the Excluded Miraculous Sort
(i.e. to tell which of two elements is bigger, you have to look at both).

It seems to me that you must at least look at SOME of the states for which the
proposition is false, or equivalently, you must use the structure of the
formula itself to do the selection (say, while doing a tree-walk). The problem
of the former approach is that the number of "bad" states should be kept
small (for efficiency), leading to all kinds of pruning heuristics; while
for the latter method the problem of elimination of duplicates (assuming
parallel processing) leads to the former method!

In either case, however, reasoning about the variables does not seem to
solve the problem; one must reason about the formulae. If Prolog admits
of constructing such meta-rules, you may have a chance. (I.e., "For all
true formulas 'X xor Y', only X need be considered when ~Y, and vice versa.")

In any event, I think your problem can be simplified to:

        1'. A xor B
        2'. A, B are binary variables.


Rob Warnock

UUCP:   {sri-unix,amd70,hpda,harpo,ihnp4,allegra}!fortune!rpw3
DDD:    (415)595-8444
USPS:   Fortune Systems Corp, 101 Twin Dolphins Drive, Redwood City, CA 94065

------------------------------

Date: 28 Dec 83 4:01:48-PST (Wed)
From: hplabs!hpda!fortune!rpw3 @ Ucb-Vax
Subject: Re: REFERENCES FOR SPECIALIZED CAMERA DE - (nf)
Article-I.D.: fortune.2114

Please clarify what you mean by "get close to the focal point of the
optical system". For any lens system I've used (both cameras and TVs),
the imaging surface (the film or the sensor) already IS at the focal point.
As I recall, the formula (for convex lenses) is:

         1     1     1
        --- = --- + ---
         f    obj   img

where "f" is the focal length of the lens, "obj" the distance to the "object",
and "img" the distance to the (real) image. Solving for minimum "obj + img",
the closest you can get a focused image to the object (using a lens) is 4*f,
with the lens midway between the object and the image (1/f = 1/2f + 1/2f).
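
A quick numeric check of that claim (an illustrative Python sketch, not
part of the original post; the focal length is an arbitrary example):

```python
def image_distance(f, obj):
    """Thin-lens equation 1/f = 1/obj + 1/img, solved for img.

    Valid for obj > f (a real image forms); as obj approaches f,
    the image recedes to infinity.
    """
    return 1.0 / (1.0 / f - 1.0 / obj)

f = 50.0  # focal length in arbitrary units, e.g. millimeters

# Scan object distances and find the minimum total separation obj + img.
best = min((obj + image_distance(f, obj), obj)
           for obj in [x * 0.5 for x in range(101, 1000)])
# The minimum separation comes out to 4*f, reached at obj = img = 2*f,
# i.e. with the lens midway between object and image.
```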

Not sure what a bundle of fibers would do for you, since without a lens each
fiber picks up all the light around it within a cone of its numerical
aperture (NA). Some imaging systems DO use fiber bundles directly in contact
with film, but that's generally going the other way (from a CRT to film).
I think Tektronix has a graphics output device like that. I suppose you
could use it if the object were self-luminous...

Rob Warnock

UUCP:   {sri-unix,amd70,hpda,harpo,ihnp4,allegra}!fortune!rpw3
DDD:    (415)595-8444
USPS:   Fortune Systems Corp, 101 Twin Dolphins Drive, Redwood City, CA 94065

------------------------------

End of AIList Digest
********************

∂04-Jan-84  2049	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #2 
Received: from SRI-AI by SU-AI with TCP/SMTP; 4 Jan 84  20:47:43 PST
Date: Wed  4 Jan 1984 16:31-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #2
To: AIList@SRI-AI


AIList Digest            Thursday, 5 Jan 1984       Volume 2 : Issue 2

Today's Topics:
  Hardware - High Resolution Video Projection,
  Programming Languages - LISP vs. Pascal,
  Net Course - AI and Mysticism
----------------------------------------------------------------------

Date: 04 Jan 84  1553 PST
From: Fred Lakin <FRD@SU-AI>
Subject: High resolution video projection

I want to buy a hi-resolution monochrome video projector suitable for use with
generic LISP machine or Star-type terminals (i.e., approx. 1000 x 1000 pixels).
It would be nice if it cost less than $15K and didn't require expensive
replacement parts (like light valves).

Does anybody know of such currently on the market?

I know, chances seem dim, so on to my second point: I have heard it would be
possible to make a portable video projector that would cost $5K, weigh 25lb,
and project using monochrome green phosphor.  The problem is that industry
does not feel the market demand would justify production at such a price ...
Any ideas on how to find out the demand for such an item?  Of course if
all of you who might be interested in this kind of projector let me know
your suggestions, that would be a good start.

Thanks in advance for replies and/or notions,
Fred Lakin

------------------------------

Date: Wed 4 Jan 84 10:25:56-PST
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM.ARPA>
Subject: Re: stupid questions (i.e. Why Lisp?)

        You might want to read an article by Beau Sheil (Xerox PARC)
in the February '83 issue of Datamation called "Power tools for
programmers."  It is mostly about the Interlisp-D programming
environment, but might give you some insights about LISP in general.
        I'll offer three other reasons, though.
        Algol family languages lack the datatypes to conveniently
implement a large number of knowledge representation schemes.  Ditto
wrt. rules.  Try to imagine setting up a pascal record structure to
embody the rules "If I have less than half of a tank of gas then I
have as a goal stopping at a gas station" & "If I am carrying valuable
goods, then I should avoid highway bandits."  You could write pascal
CODE that sort of implemented the above, but DATA would be extremely
difficult.  You would almost have to write a lisp interpreter in
pascal to deal with it.  And then, when you've done that, try writing
a compiler that will take your pascal data structures and generate
native code for the machine in question!  Now, do it on the fly, as a
knowledge engineer is augmenting the knowledge base!
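
The rules-as-data point can be made concrete.  A minimal sketch (in
Python rather than Lisp or Pascal; the rule predicates and world fields
are invented for illustration) represents the two rules above as plain
data that a generic interpreter walks, so new rules can be added at run
time without recompiling anything:

```python
# Each rule is data: a (condition, goal) pair, where the condition is a
# predicate over a "world" dictionary.  Rules can be appended at run
# time -- nothing is recompiled.
rules = [
    (lambda w: w["fuel_fraction"] < 0.5, "stop at a gas station"),
    (lambda w: w["carrying_valuables"],  "avoid highway bandits"),
]

def active_goals(world, rules):
    """Return the goal of every rule whose condition holds in world."""
    return [goal for condition, goal in rules if condition(world)]

world = {"fuel_fraction": 0.25, "carrying_valuables": True}
```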
        Algol languages have a tedious development cycle because they
typically do not let a user load/link the same module many times as he
debugs it.  He typically has to relink the entire system after every
edit.  This prevents much in the way of incremental compilation, and
makes such languages tedious to debug in.  This is an argument against
the languages in general, and doesn't apply to AI explicitly.  The AI
community feels this as a pressure more, though, perhaps because it
tends to build such large systems.
        Furthermore, consider that most bugs in non-AI systems show up
at compile time.  If a flaw is in the KNOWLEDGE itself in an AI
system, however, the flaws will only show up in the form of incorrect
(unintelligent?) behavior.  Typically only lisp-like languages provide
the run-time tools to diagnose such problems.  In Pascal, etc, the
programmer would have to go back and explicitly put all sorts of
debugging hooks into the system, which is both time consuming, and is
not very clean.  --Christopher

------------------------------

Date: 4 Jan 84 13:59:07 EST
From: STEINBERG@RUTGERS.ARPA
Subject: Re: Herb Lin's questions on LISP etc.

Herb:
Those are hardly stupid questions.  Let me try to answer:

        1. Just why is a language like LISP better for doing AI stuff than a
        language like PASCAL or ADA?

There are two kinds of reasons.  You could argue that LISP is more
oriented towards "symbolic" processing than PASCAL.  However, probably
more important is the fact that LISP provides a truly outstanding
environment for exploratory programming, that is, programming where
you do not completely understand the problem or its solutions before
you start programming.  This is normally the case in AI programming -
even if you think you understand things you normally find out there
was at least something you were wrong about or had forgotten.  That's
one major reason for actually writing the programs.

Note that I refer to the LISP environment, not just the language.  The
existence of good editors, debuggers, cross reference aids, etc. is at
least as important as the language itself.  A number of features of LISP
make a good environment easy to provide for LISP.  These include the
compatible interpreter/compiler, the centrality of function calls, and the
simplicity and accessibility of the internal representation of programs.

For a very good introduction to the flavor of programming in LISP
environments, see "Programming in an Interactive Environment, the LISP
Experience", by Erik Sandewall, Computing Surveys, V. 10 #1, March 1978.

        2. What is the significance of not distinguishing between data
        and program in LISP?  How does this help?

Actually, in ANY language, the program is also data for the interpreter
or compiler.  What is important about LISP is that the internal form used
by the interpreter is simple and accessible.  It is simple in that the
internal form is a structure of nested lists that captures most of
both the syntactic and the semantic structure of the code.  It is accessible
in that this structure of nested lists is in fact a basic built in data
structure supported by all the facilities of the system, and in that a
program can access or set the definition of a function.

Together these make it easy to write programs which operate on other programs.
E.g.  to add a trace feature to PASCAL you have to modify the compiler or
interpreter.  To add a trace feature to LISP you need not modify the
interpreter at all.
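
Python functions are likewise first-class data, so the trace example can
be sketched there (an illustrative sketch, not Sandewall's): a trace is
installed by rebinding the function's name at run time, with no change
to the interpreter.

```python
def traced(fn):
    """Wrap fn so each call and its result are reported."""
    def wrapper(*args):
        print(f"-> {fn.__name__}{args}")
        result = fn(*args)
        print(f"<- {fn.__name__} returns {result!r}")
        return result
    return wrapper

def square(x):
    return x * x

square = traced(square)  # the definition itself is reassigned, as data
```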

Furthermore, it turns out to be easy to use LISP to write interpreters
for other languages, as long as the other languages use a similar
internal form and have a similarly simple relation between form and
semantics.  Thus, a common way to solve a problem in LISP is to
implement a language in which it is easy to express solutions to
problems in a general class, and then use this language to solve your
particular problem.  See the Sandewall article mentioned above.

        3. What is the difference between decisions made in a production
        system and decisions made in a PASCAL program (in which IF statements
        also have the same (superficial) form)?

Production Systems gain some advantages by restricting the languages
for the IF and THEN parts.  Also, in many production systems, all
the IF parts are evaluated first, to see which are true, before any
THEN part is done.  If more than one IF part is true, some other
mechanism decides which THEN part (or parts) to do.  Finally, some
production systems such as EMYCIN do "backward chaining", that is, one
starts with a goal and asks which THEN parts, if they were done, would
be useful in achieving the goal.  One then looks to see if their
corresponding IF parts are true, or can be made true by treating them
as sub-goals and doing the same kind of reasoning on them.

A very good introduction to production systems is "An Overview of Production
Systems" by Randy Davis and Jonathan King, October 1975, Stanford AI Lab
Memo AIM-271 and Stanford CS Dept. Report STAN-CS-75-524.  It's probably
available from the National Technical Information Service.

------------------------------

Date: 1 Jan 84 8:42:34-PST (Sun)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: Netwide Course -- AI and Mysticism!!
Article-I.D.: psuvax.395

*************************************************************************
*                                                                       *
*            An Experiment in Teaching, an Experiment in AI             *
*       Spring Term Artificial Intelligence Seminar Announcement        *
*                                                                       *
*************************************************************************

This Spring term Penn State inaugurates a new experimental course:

        "THE HUMAN CONDITION: PROBLEMS AND CREATIVE SOLUTIONS".

This course explores all that makes the human condition so joyous and
delightful: learning, creative expression, art, music, inspiration,
consciousness, awareness, insight, sensation, planning, action, community.
Where others study these DESCRIPTIVELY, we will do so CONSTRUCTIVELY.  We
will gain familiarity by direct human experience and by building artificial
entities which manifest these wonders!!

We will formulate and study models of the human condition -- an organism of
bounded rationality confronting a bewilderingly complex environment.  The
human organism must fend for survival, but it is aided by some marvelous
mechanisms: perception (vision, hearing), cognition (understanding, learning,
language), and expression (motor skill, music, art).  We can view these
respectively as the input, processing, and output of symbolic information.
These mechanisms somehow encode all that is uniquely human in our experience
-- or do they??  Are these mechanisms universal among ALL sentient beings, be
they built from doped silicon or neural jelly?  Are these mechanisms really
NECESSARY and SUFFICIENT for sentience?

Not content with armchair philosophizing, we will push these models toward
the concreteness needed for physical implementation.  We will build the tools
that will help us to understand and use the necessary representations and
processes, and we will use these tools to explore the space of possible
realizations of "artificial sentience".

This will be no ordinary course.  For one thing, it has no teacher.  The
course will consist of a group of highly energetic individuals engaged in
seeking the secrets of life, motivated solely by the joy of the search
itself.  I will function as a "resource person" to the extent my background
allows, but the real responsibility for the success of the expedition rests
upon ALL of its members.

My role is that of "encounter group facilitator":  I jab when things lag.
I provide a sheltered environment where the shy can "come out" without
fear.  I manipulate and connive to keep the discussions going at a fever
pitch.  I pick and poke, question and debunk, defend and propose, all to
incite people to THINK and to EXPRESS.

Several people who can't be at Penn State this Spring told me they wish
they could participate -- so: I propose opening this course to the entire
world, via the miracles of modern networks!  We have arranged a local
mailing list for sharing discussions, source code, class-session summaries,
and general flammage (with the chaff there will surely be SOME wheat).  I'm aware
of three fora for sharing this: USENET's net.ai, Ken Laws' AIList, and
MIT's SELF-ORG mailing list.  PLEASE MAIL ME YOUR REACTIONS to using these
resources: would YOU like to participate? would it be a productive use of
the phone lines? would it be more appropriate to go to /dev/null?

The goals of this course are deliberately ambitious.  I seek participants
who are DRIVEN to partake in this journey -- the best, brightest, most
imaginative and highly motivated people the world has to offer.

Course starts Monday, January 16.  If response is positive, I'll post the
network arrangements about that time.

This course is dedicated to the proposition that the best way to secure
for ourselves the blessings of life, liberty, and the pursuit of happiness
is reverence for all that makes the human condition beautiful, and the
best way to build that reverence is the scientific study and construction
of the marvels that make us truly human.

--
Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
Arpa:   bobgian%psuvax1.bitnet@Berkeley    Bitnet: bobgian@PSUVAX1.BITNET
CSnet:  bobgian@penn-state.csnet           UUCP:   allegra!psuvax!bobgian
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

Date: 1 Jan 84 8:46:31-PST (Sun)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: Netwide AI Course -- Part 2
Article-I.D.: psuvax.396

*************************************************************************
*                                                                       *
*         Spring Term Artificial Intelligence Seminar Syllabus          *
*                                                                       *
*************************************************************************


  MODELS OF SENTIENCE
    Learning, Cognitive Model Formation, Insight, Discovery, Expression;
    "Subcognition as Computation", "Cognition as Subcomputation";
    Physical, Cultural, and Intellectual Evolution.

      SYMBOLIC INPUT CHANNELS: PERCEPTION
        Vision, hearing, signal processing, the "signal/symbol interface".

      SYMBOLIC PROCESSING: COGNITION
        Language, Understanding, Goals, Knowledge, Reasoning.

      SYMBOLIC OUTPUT CHANNELS: EXPRESSION
        Motor skills, Artistic and Musical Creativity, Story Creation,
        Prose, Poetry, Persuasion, Beauty.

  CONSEQUENCES OF THESE MODELS
    Physical Symbol Systems and Godel's Incompleteness Theorems;
    The "Aha!!!" Phenomenon, Divine Inspiration, Extra-Sensory Perception,
    The Conscious/Unconscious Mind, The "Right-Brain/Left-Brain" Dichotomy;
    "Who Am I?", "On Having No Head"; The Nature and Texture of Reality;
    The Nature and Role of Humor; The Direct Experience of the Mystical.

  TECHNIQUES FOR DEVELOPING THESE ABILITIES IN HUMANS
    Meditation, Musical and Artistic Experience, Problem Solving,
    Games, Yoga, Zen, Haiku, Koans, "Calculus for Peak Experiences".

  TECHNIQUES FOR DEVELOPING THESE ABILITIES IN MACHINES

    REVIEW OF LISP PROGRAMMING AND FORMAL SYMBOL MANIPULATION:
      Construction and access of symbolic expressions, Evaluation and
      Quotation, Predicates, Function definition; Functional arguments
      and returned values; Binding strategies -- Local versus Global,
      Dynamic versus Lexical, Shallow versus Deep; Compilation of LISP.

    IMPLEMENTATION OF LISP:  Storage Mapping and the Free List;
      The representation of Data: Typed Pointers, Dynamic Allocation;
      Symbols and the Symbol Table (Obarray); Garbage Collection
      (Sequential and Concurrent algorithms).

    REPRESENTATION OF PROCEDURE:  Meta-circular definition of the
      evaluation process.

    "VALUES" AND THE OBJECT-ORIENTED VIEW OF PROGRAMMING: Data-Driven
      Programming, Message-Passing, Information Hiding; the MIT Lisp Machine
      "Flavor" system; Functional and Object-Oriented systems -- comparison
      with SMALLTALK.

    SPECIALIZED AI PROGRAMMING TECHNIQUES:  Frames and other Knowledge
      Representation Languages, Discrimination Nets, Augmented Transition
      Networks; Pattern-Directed Inference Systems, Agendas, Chronological
      Backtracking, Dependency-Directed Backtracking, Data Dependencies,
      Non-Monotonic Logic, and Truth-Maintenance Systems.

    LISP AS THE "SYSTEMS SUBSTRATE" FOR HIGHER LEVEL ABSTRACTIONS:
      Frames and other Knowledge Representation Languages, Discrimination
      Nets, "Higher" High-Level Languages:  PLANNER, CONNIVER, PROLOG.

  SCIENTIFIC AND ETHICAL CONSEQUENCES OF THESE ABILITIES IN HUMANS
  AND IN MACHINES
    The Search for Extra-Terrestrial Intelligence.
      (Would we recognize it if we found it?  Would they recognize us?)
    The Search for Terrestrial Intelligence.
    Are We Unique?  Are we worth saving?  Can we save ourselves?
    Why are we here?  Why is ANYTHING here?  WHAT is here?
    Where ARE we?  ARE we?  Is ANYTHING?


These topics form a cluster of related ideas which we will pursue more-or-
less concurrently; the listing is not meant to imply a particular sequence.

Various course members have expressed interest in the following software
engineering projects.  These (and possibly others yet to be suggested)
will run concurrently throughout the course:

    LISP Implementations:
      For CMS, in PL/I and/or FORTRAN
      In PASCAL, optimized for personal computers (esp HP 9816)
      In Assembly, optimized for Z80 and MC68000
      In 370 BAL, modifications of LISP 1.5

    New "High-Level" Systems Languages:
      Flavor System (based on the MIT Zetalisp system)
      Prolog Interpreter (plus compiler?)
      Full Programming Environment (Enhancements to LISP):
        Compiler, Editor, Workspace Manager, File System, Debug Tools

    Architectures and Languages for Parallel {Sub-}Cognition:
      Software and Hardware Alternatives to the von Neumann Computer
      Concurrent Processing and Message Passing systems

    Machine Learning and Discovery Systems:
      Representation Language for Machine Learning
      Strategy Learning for various Games (GO, CHECKERS, CHESS, BACKGAMMON)

    Perception and Motor Control Systems:
      Vision (implementations of David Marr's theories)
      Robotic Welder control system

    Creativity Systems:
      Poetry Generators (Haiku)
      Short-Story Generators

    Expert Systems (traditional topic, but including novel features):
      Euclidean Plane Geometry Teaching and Theorem-Proving system
      Welding Advisor
      Meteorological Analysis Teaching system


READINGS -- the following books will be very helpful:

    1.  ARTIFICIAL INTELLIGENCE, Patrick H. Winston; Addison Wesley, 1984.

    2.  THE HANDBOOK OF ARTIFICIAL INTELLIGENCE, Avron Barr, Paul Cohen, and
    Edward Feigenbaum; William Kaufman Press, 1981 and 1982.  Vols 1, 2, 3.

    3.  MACHINE LEARNING, Michalski, Carbonell, and Mitchell; Tioga, 1983.

    4.  GODEL, ESCHER, BACH: AN ETERNAL GOLDEN BRAID, Douglas R. Hofstadter;
    Basic Books, 1979.

    5.  THE MIND'S I, Douglas R. Hofstadter and Daniel C. Dennett;
    Basic Books, 1981.

    6.  LISP, Patrick Winston and Berthold K. P. Horn; Addison Wesley, 1981.

    7.  ANATOMY OF LISP, John Allen; McGraw-Hill, 1978.

    8.  ARTIFICIAL INTELLIGENCE PROGRAMMING, Eugene Charniak, Christopher K.
    Riesbeck, and Drew V. McDermott; Lawrence Erlbaum Associates, 1980.

--
Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
Arpa:   bobgian%psuvax1.bitnet@Berkeley    Bitnet: bobgian@PSUVAX1.BITNET
CSnet:  bobgian@penn-state.csnet           UUCP:   allegra!psuvax!bobgian
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

End of AIList Digest
********************

∂05-Jan-84  1502	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #3 
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Jan 84  14:59:11 PST
Date: Wed  4 Jan 1984 17:23-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #3
To: AIList@SRI-AI


AIList Digest            Thursday, 5 Jan 1984       Volume 2 : Issue 3

Today's Topics:
  Course - Penn State's First Undergrad AI Course
----------------------------------------------------------------------

Date: 31 Dec 83 15:18:20-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: Penn State's First Undergrad AI Course
Article-I.D.: psuvax.380

Last fall I taught Penn State's first ever undergrad AI course.  It
attracted 150 students, including about 20 faculty auditors.  I've gotten
requests from several people initiating AI courses elsewhere, and I'm
posting this and the next 6 items in hopes they may help others.

  1.  General Information
  2.  Syllabus (slightly more detailed topic outline)
  3.  First exam
  4.  Second exam
  5.  Third exam
  6.  Overview of how it went.

I'll be giving this course again, and I hate to do anything exactly the
same twice.  I welcome comments and suggestions from all net buddies!

        -- Bob

  [Due to the length of Bob's submission, I will send the three
  exams as a separate digest.  Bob's proposal for a network AI course
  associated with his spring semester curriculum was published in
  the previous AIList issue; that was entirely separate from the
  following material.  -- Ken Laws]

--
Spoken: Bob Giansiracusa
Bell:   814-865-9507
Bitnet: bobgian@PSUVAX1.BITNET
Arpa:   bobgian%psuvax1.bitnet@Berkeley
CSnet:  bobgian@penn-state.csnet
UUCP:   allegra!psuvax!bobgian
USnail: Dept of Comp Sci, Penn State Univ, University Park, PA 16802

------------------------------

Date: 31 Dec 83 15:19:52-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course, Part 1/6
Article-I.D.: psuvax.381

CMPSC 481:  INTRODUCTION TO ARTIFICIAL INTELLIGENCE

An introduction to the theory, research paradigms, implementation techniques,
and philosophies of Artificial Intelligence considered both as a science of
natural intelligence and as the engineering of mechanical intelligence.


OBJECTIVES  --  To provide:

   1.  An understanding of the principles of Artificial Intelligence;
   2.  An appreciation for the power and complexity of Natural Intelligence;
   3.  A viewpoint on programming different from and complementary to the
       viewpoints engendered by other languages in common use;
   4.  The motivation and tools for developing good programming style;
   5.  An appreciation for the power of abstraction at all levels of program
       design, especially via embedded compilers and interpreters;
   6.  A sense of the excitement at the forefront of AI research; and
   7.  An appreciation for the tremendous impact the field has had and will
       continue to have on our perception of our place in the Universe.


TOPIC SUMMARY:

  INTRODUCTION:  What is "Intelligence"?
    Computer modeling of "intelligent" human performance.  The Turing Test.
    Brief history of AI.  Relation of AI to psychology, computer science,
    management, engineering, mathematics.

  PRELUDE AND FUGUE ON THE "SECRET OF INTELLIGENCE":
    "What is a Brain that it may possess Intelligence, and Intelligence that
    it may inhabit a Brain?"  Introduction to Formal Systems, Physical Symbol
    Systems, and Multilevel Interpreters.  Necessity and Sufficiency of
    Physical Symbol Systems as the basis for intelligence.

  REPRESENTATION OF PROBLEMS, GOALS, ACTIONS, AND KNOWLEDGE:
    State Space, Predicate Calculus, Production Systems, Procedural
    Representations, Semantic Networks, Frames and Scripts.

  THE "PROBLEM-SOLVING" PARADIGM AND TECHNIQUES:
    Generate and Test, Heuristic Search (Search WITH Heuristics,
    Search FOR Heuristics), Game Trees, Minimax, Problem Decomposition,
    Means-Ends Analysis, The General Problem Solver (GPS).

  LISP PROGRAMMING:
    Symbolic Expressions and Symbol Manipulation, Data Structures,
    Evaluation and Quotation, Predicates, Input/Output, Recursion.
    Declarative and Procedural knowledge representation in LISP.

  LISP DETAILS:
    Storage Mapping, the Free List, and Garbage Collection,
    Binding strategies and the concept of the "Environment", Data-Driven
    Programming, Message-Passing, The MIT Lisp Machine "Flavor" system.

  LISP AS THE "SYSTEMS SUBSTRATE" FOR HIGHER LEVEL ABSTRACTIONS:
    Frames and other Knowledge Representation Languages, Discrimination
    Nets, "Higher" High-Level Languages:  PLANNER, CONNIVER, PROLOG.

  LOGIC, RULE-BASED SYSTEMS, AND INFERENCE:
    Logic: Axioms, Rules of Inference, Theorems, Truth, Provability.
    Production Systems: Rule Interpreters, Forward/Backward Chaining.
    Expert Systems: Applied Knowledge Representation and Inference.
    Data Dependencies, Non-Monotonic Logic, and Truth-Maintenance Systems,
    Theorem Proving, Question Answering, and Planning systems.

  THE UNDERSTANDING OF NATURAL LANGUAGE:
    Formal Linguistics: Grammars and Machines, the Chomsky Hierarchy.
    Syntactic Representation: Augmented Transition Networks (ATNs).
    Semantic Representation: Conceptual Dependency, Story Understanding.
    Spoken Language Understanding.

  ROBOTICS: Machine Vision, Manipulator and Locomotion Control.

  MACHINE LEARNING:
    The Spectrum of Learning: Learning by Adaptation, Learning by Being
      Told, Learning from Examples, Learning by Analogy, Learning by
      Experimentation, Learning by Observation and Discovery.
    Model Induction via Generate-and-Test, Automatic Theory Formation.
    A Model for Intellectual Evolution.

  RECAPITULATION AND CODA:
    The knowledge representation and problem-solving paradigms of AI.
    The key ideas and viewpoints in the modeling and creation of intelligence.
    Is there more (or less) to Intelligence, Consciousness, the Soul?
    Prospectus for the future.


Handouts for the course include:

1.  Computer Science as Empirical Inquiry: Symbols and Search.  1975 Turing
Award Lecture by Allen Newell and Herb Simon; Communications of the ACM,
Vol. 19, No. 3, March 1976.

2.  Steps Toward Artificial Intelligence.  Marvin Minsky; Proceedings of the
IRE, Jan. 1961.

3.  Computing Machinery and Intelligence.  Alan Turing; Mind (Turing's
original proposal for the "Turing Test").

4.  Exploring the Labyrinth of the Mind.  James Gleick; New York Times
Magazine, August 21, 1983 (article about Doug Hofstadter's recent work).


TEXTBOOKS:

1.  ARTIFICIAL INTELLIGENCE, Patrick H. Winston; Addison Wesley, 1983.
Will be available from publisher in early 1984.  I will distribute a
copy printed from Patrick's computer-typeset manuscript.

2.  LISP, Patrick Winston and Berthold K. P. Horn; Addison Wesley, 1981.
Excellent introductory programming text, illustrating many AI implementation
techniques at a level accessible to novice programmers.

3.  GODEL, ESCHER, BACH: AN ETERNAL GOLDEN BRAID, Douglas R. Hofstadter;
Basic Books, 1979.  One of the most entertaining books on the subject of AI,
formal systems, and symbolic modeling of intelligence.

4.  THE HANDBOOK OF ARTIFICIAL INTELLIGENCE, Avron Barr, Paul Cohen, and
Edward Feigenbaum; William Kaufman Press, 1981 and 1982.  Comes as a three
volume set.  Excellent (the best available), but the full set costs over $100.

5.  ANATOMY OF LISP, John Allen; McGraw-Hill, 1978.  Excellent text on the
definition and implementation of LISP, sufficient to enable one to write a
complete LISP interpreter.

------------------------------

Date: 31 Dec 83 15:21:46-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 2/6  (Topic Outline)
Article-I.D.: psuvax.382

CMPSC 481:  INTRODUCTION TO ARTIFICIAL INTELLIGENCE


TOPIC OUTLINE:

   INTRODUCTION:  What is "Intelligence"?

   Computer modeling of "intelligent" human performance.  Turing Test.
   Brief history of AI.  Examples of "intelligent" programs:  Evans's Geometric
   Analogies, the Logic Theorist, General Problem Solver, Winograd's English
   language conversing blocks world program (SHRDLU), MACSYMA, MYCIN, DENDRAL.

   PRELUDE AND FUGUE ON THE "SECRET OF INTELLIGENCE":

   "What is a Brain that it may possess Intelligence, and Intelligence that
   it may inhabit a Brain?"  Introduction to Formal Systems, Physical Symbol
   Systems, and Multilevel Interpreters.

   REPRESENTATION OF PROBLEMS, GOALS, ACTIONS, AND KNOWLEDGE:

   State Space problem formulations.  Predicate Calculus.  Semantic Networks.
   Production Systems.  Frames and Scripts.

   SEARCH:

   Representation of problem-solving as graph search.
   "Blind" graph search:
      Depth-first, Breadth-first.
   Heuristic graph search:
      Best-first, Branch and Bound, Hill-Climbing.
   Representation of game-playing as tree search:
      Static Evaluation, Minimax, Alpha-Beta.
   Heuristic Search as a General Paradigm:
      Search WITH Heuristics, Search FOR Heuristics

   THE GENERAL PROBLEM SOLVER (GPS) AS A MODEL OF INTELLIGENCE:

   Goals and Subgoals -- problem decomposition
   Difference-Operator Tables -- the solution to subproblems
   Does the model fit?  Does GPS work?

   EXPERT SYSTEMS AND KNOWLEDGE ENGINEERING:

   Representation of Knowledge:  The "Production System" Movement
   The components:
      Knowledge Base
      Inference Engine
   Examples of famous systems:
      MYCIN, TEIRESIAS, DENDRAL, MACSYMA, PROSPECTOR

   INTRODUCTION TO LISP PROGRAMMING:

   Symbolic expressions and symbol manipulation:
      Basic data types
         Symbols
            The special symbols T and NIL
         Numbers
         Functions
      Assignment of Values to Symbols (SETQ)
      Objects constructed from basic types
         Constructor functions:  CONS, LIST, and APPEND
         Accessor functions:  CAR, CDR
   Evaluation and Quotation
   Predicates
   Definition of Functions (DEFUN)
   Flow of Control (COND, PROG, DO)
   Input and Output (READ, PRINT, TYI, TYO, and friends)

   REPRESENTATION OF DECLARATIVE KNOWLEDGE IN LISP:

   Built-in representation mechanisms
      Property lists
      Arrays
   User-definable data structures
      Data-structure generating macros (DEFSTRUCT)
   Manipulation of List Structure
      "Pure" operations (CONS, LIST, APPEND, REVERSE)
      "Impure" operations (RPLACA and RPLACD, NCONC, NREVERSE)
   Storage Mapping, the Free List, and Garbage Collection

   REPRESENTATION OF PROCEDURAL KNOWLEDGE IN LISP:

   Types of Functions
      Expr:  Call by Value
      Fexpr:  Call by Name
      Macros and macro-expansion
   Functions as Values
      APPLY, FUNCALL, LAMBDA expressions
      Mapping operators (MAPCAR and friends)
      Functional Arguments (FUNARGS)
      Functional Returned Values (FUNVALS)

   THE MEANING OF "VALUE":

   Assignment of values to symbols
   Binding of values to symbols
      "Local" vs "Global" variables
      "Dynamic" vs "Lexical" binding
      "Shallow" vs "Deep" binding
   The concept of the "Environment"

   "VALUES" AND THE OBJECT-CENTERED VIEW OF PROGRAMMING:

   Data-Driven programming
   Message-passing
   Information Hiding
   Safety through Modularity
   The MIT Lisp Machine "Flavor" system

   LISP'S TALENTS IN REPRESENTATION AND SEARCH:

   Representation of symbolic structures in LISP
      Predicate Calculus
      Rule-Based Expert Systems (the Knowledge Base examined)
      Frames
   Search Strategies in LISP
      Breadth-first, Depth-first, Best-first search
      Tree search and the simplicity of recursion
   Interpretation of symbolic structures in LISP
      Rule-Based Expert Systems (the Inference Engine examined)
      Symbolic Mathematical Manipulation
         Differentiation and Integration
      Symbolic Pattern Matching
         The DOCTOR program (ELIZA)

   LISP AS THE "SYSTEMS SUBSTRATE" FOR HIGHER LEVEL ABSTRACTIONS

   Frames and other Knowledge Representation Languages
   Discrimination Nets
   Augmented Transition Networks (ATNs) as a specification of English syntax
   Interpretation of ATNs
   Compilation of ATNs
   Alternative Control Structures
      Pattern-Directed Inference Systems (production system interpreters)
      Agendas (best-first search)
      Chronological Backtracking (depth-first search)
      Dependency-Directed Backtracking
   Data Dependencies, Non-Monotonic Logic, and Truth-Maintenance Systems
   "Higher" High-Level Languages:  PLANNER, CONNIVER

   PROBLEM SOLVING AND PLANNING:

   Hierarchical models of planning
      GPS, STRIPS, ABSTRIPS

   Non-Hierarchical models of planning
      NOAH, MOLGEN

   THE UNDERSTANDING OF NATURAL LANGUAGE:

   The History of "Machine Translation" -- a seemingly simple task
   The Failure of "Machine Translation" -- the need for deeper understanding
   The Syntactic Approach
      Grammars and Machines -- the Chomsky Hierarchy
      RTNs, ATNs, and the work of Terry Winograd
   The Semantic Approach
      Conceptual Dependency and the work of Roger Schank
   Spoken Language Understanding
      HEARSAY
      HARPY

   ROBOTICS:

   Machine Vision
      Early visual processing (a signal processing approach)
      Scene Analysis and Image Understanding (a symbolic processing approach)
   Manipulator and Locomotion Control
      Statics, Dynamics, and Control issues
      Symbolic planning of movements

   MACHINE LEARNING:

   Rote Learning and Learning by Adaptation
      Samuel's Checker player
   Learning from Examples
      Winston's ARCH system
      Mitchell's Version Space approach
   Learning by Planning and Experimentation
      Samuel's program revisited
      Sussman's HACKER
      Mitchell's LEX
   Learning by Heuristically Guided Discovery
      Lenat's AM (Automated Mathematician)
      Extending the Heuristics:  EURISKO
   Model Induction via Generate-and-Test
      The META-DENDRAL project
   Automatic Formation of Scientific Theories
      Langley's BACON project
   A Model for Intellectual Evolution (my own work)

   RECAP ON THE PRELUDE AND FUGUE:

   Formal Systems, Physical Symbol Systems, and Multilevel Interpreters
   revisited -- are they NECESSARY?  are they SUFFICIENT?  Is there more
   (or less) to Intelligence, Consciousness, the Soul?

   SUMMARY, CONCLUSIONS, AND FORECASTS:

   The representation of knowledge in Artificial Intelligence
   The problem-solving paradigms of Artificial Intelligence
   The key ideas and viewpoints in the modeling and creation of intelligence
   The results to date of the noble effort
   Prospectus for the future


------------------------------

Date: 31 Dec 83 15:28:32-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 6/6  (Overview)
Article-I.D.: psuvax.386

A couple of notes about how the course went.  Interest was high, but the
main problem I found is that Penn State students are VERY strongly
conditioned to work for grades and little else.  Most teachers bore them,
expect them to memorize lectures and regurgitate on exams, and students
then get drunk (over 50 frats here) and promptly forget all.  Initially
I tried to teach, but I soon realized that PEOPLE CAN LEARN (if they
really want to) BUT NOBODY CAN TEACH (students who don't want to learn).
As the course evolved my role became less "information courier" and more
"imagination provoker".  I designed exams NOT to measure learning but to
provoke thinking (and thereby learning).  The first exam (on semantic
nets) was given just BEFORE covering that topic in lecture -- students
had a hell of a hard time on the exam, but they sure sat up and paid
attention to the next week's lectures!

For the second exam I announced that TWO exams were being given: an easy
one (if they sat on one side of the room) and a hard one (on other side).
Actually the exams were identical.  (This explains the first question.)
The winning question submitted from the audience related to the chapter
in GODEL, ESCHER, BACH on the MU system: I gave a few axioms and inference
rules and then asked whether a given wff was a theorem.
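For those who don't have the book handy, the MIU-system's four rewrite
rules fit in a few lines, and a bounded search settles derivability
questions of the kind asked on the exam (a Python sketch; this is not
the code or question handed out in class):

```python
from collections import deque

def miu_successors(s):
    """The four rewrite rules of Hofstadter's MIU-system."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                       # xI  -> xIU
    if s.startswith("M"):
        out.add(s + s[1:])                     # Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i+3] == "III":
            out.add(s[:i] + "U" + s[i+3:])     # III -> U
    for i in range(len(s) - 1):
        if s[i:i+2] == "UU":
            out.add(s[:i] + s[i+2:])           # UU  -> (nothing)
    return out

def is_theorem(target, limit=10):
    """Bounded breadth-first search from the axiom MI."""
    seen, frontier = {"MI"}, deque(["MI"])
    while frontier:
        s = frontier.popleft()
        if s == target:
            return True
        for nxt in miu_successors(s):
            if nxt not in seen and len(nxt) <= limit:
                seen.add(nxt)
                frontier.append(nxt)
    return False   # not derivable within the length bound
```

(MU itself is famously not a theorem, though the search above can only
report that it found no derivation within the length bound.)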

The third exam was intended ENTIRELY to provoke discussion and NOT AT ALL
to measure anything.  It started with deadly seriousness, then (about 20
minutes into the exam) a few "audience plants" started acting out a
prearranged script which included discussing some of the questions and
writing some answers on the blackboard.  The attempt was to puncture the
"exam mentality" and generate some hot-blooded debate (you'll see what I
mean when you see the questions).  Even the Teaching Assistants were kept
in the dark about this "script"!  Overall, the attempt failed, but many
people did at least tell me that taking the exams was the most fun part
of the course!

With this lead-in, you probably have a clearer picture of some of the
motivations behind the spring term course.  To put it bluntly: I CANNOT
TEACH AI.  I CAN ONLY HOPE TO INSPIRE INTERESTED STUDENTS TO WANT TO LEARN
AI.  I'LL DO ANYTHING I CAN THINK OF WHICH INCREASES THAT INSPIRATION.

The motivational factors also explain my somewhat unusual grading system.
I graded on creativity, imagination, inspiration, desire, energy, enthusiasm,
and gusto.  These were partly measured by the exams, partly by the energy
expended on several optional projects (and term paper topics), and partly
by my seat-of-the-pants estimate of how determined a student was to DO real
AI.  This school prefers strict objective measures of student performance.
Tough.

This may all be of absolutely no relevance to others teaching AI.  Maybe
I'm just weird.  I try to cultivate that image, for it seems to attract
the best and brightest students!

					-- Bob Giansiracusa

------------------------------

End of AIList Digest
********************

∂05-Jan-84  1939	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #4 
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Jan 84  19:37:47 PST
Date: Thu  5 Jan 1984 11:16-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #4
To: AIList@SRI-AI


AIList Digest            Thursday, 5 Jan 1984       Volume 2 : Issue 4

Today's Topics:
  Course - PSU's First AI Course (continued)
----------------------------------------------------------------------

Date: 31 Dec 83 15:23:38-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 3/6  (First Exam)
Article-I.D.: psuvax.383

[The intent and application of the following three exams was described
in the previous digest issue.  The exams were intended to look difficult
but to be fun to take.  -- KIL]


********        ARTIFICIAL INTELLIGENCE  --  First Exam        ********

The field of Artificial Intelligence studies the modeling of human
intelligence in the hope of constructing artificial devices that display
similar behavior.  This exam is designed to study your ability to model
artificial intelligence in the hope of improving natural devices that
display similar behavior.  Please read ALL the questions first, introspect
on how an AI system might solve these problems, then simulate that system.
(Please do all work on separate sheets of paper.)


EASY PROBLEM:

The rules for differentiating polynomials can be expressed as follows:

IF the input is:  (A * X ↑ 3) + (B * X ↑ 2) + (C * X ↑ 1) + (D * X ↑ 0)

THEN the output is:
 (3 * A * X ↑ 2) + (2 * B * X ↑ 1) + (1 * C * X ↑ 0) + (0 * D * X ↑ -1)

(where "*" indicates multiplication and "↑" indicates exponentiation).

Note that all letters here indicate SYMBOLIC VARIABLES (as in algebra),
not NUMERICAL VALUES (as in FORTRAN).


1.  Can you induce from this sample the general rule for polynomial
differentiation?  Express that rule in English or Mathematical notation.
(The mathematicians in the group may have some difficulty here.)

2.  Can you translate your "informal" specification of the differentiation
rule into a precise statement of an inference rule in a Physical Symbol
System?  That is, define a set of objects and relations, a notation for
expressing them (hint: it doesn't hurt for the notation to look somewhat
like a familiar programming language which was invented to do mathematical
notation), and a symbolic transformation rule that encodes the rule of
inference representing differentiation.

3.  Can you now IMPLEMENT your Physical Symbol System using some familiar
programming language?  That is, write a program which takes as input a
data structure encoding your symbolic representation of a polynomial and
returns a data structure encoding the representation of its derivative.
(Hint as a check on infinite loops:  this program can be done in six
or fewer lines of code.  Don't be afraid to define a utility function
or two if it helps.)
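By way of illustration for question 3, a sketch in Python rather than the
Lisp the exam expects; the encoding of a polynomial as a list of
(coefficient, exponent) pairs is my assumption, not part of the exam:

```python
# Differentiate a polynomial encoded as (coefficient, exponent) pairs,
# e.g. A*x^3 + C*x is [("A", 3), ("C", 1)].  Coefficients may be
# symbolic (strings) or numeric; the rule is purely mechanical:
# d/dx (c * x^n) = (n * c) * x^(n - 1), and constant terms vanish.
def differentiate(poly):
    # the new coefficient n*c is kept symbolic as the pair (n, c)
    return [((n, c), n - 1) for (c, n) in poly if n != 0]
```

Well under the six-line bound, as the hint promises.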


SLIGHTLY HARDER PROBLEM:

Consider a world consisting of one block (a small wooden cubical block)
standing on the floor in the middle of a room.  A fly is perched on the
South wall, looking North at the block.  We want to represent the world
as seen by the fly.  In the fly's world the only thing that matters is
the position of that block.  Let's represent the world by a graph
consisting of a single node and no links to any other nodes.  Easy enough.

4.  Now consider a more complicated world.  There are TWO blocks, placed
apart from each other along an East/West line.  From the fly's point of
view, Block A (the western block) is TO-THE-LEFT-OF Block B (the eastern
block), and Block B has a similar relationship (TO-THE-RIGHT-OF) to
Block A.  Draw your symbolic representation of the situation as a graph
with nodes for the blocks and labeled links for the two relationships
which hold between the blocks.  (Believe it or not, you have just invented
the representation mechanism called a "semantic network".)

5.  Now the fly moves to the northern wall, looking south.  Draw the new
semantic network which represents the way the blocks look to him from his
new vantage point.

6.  What you have diagrammed in the above two steps is a Physical Symbol
System: a symbolic representation of a situation coupled with a process
for making changes in the representation which correspond homomorphically
with changes in the real world represented by the symbol system.
Unfortunately, your symbol system does not yet have a concrete
representation for this changing process.  To make things more concrete,
let's transform to another Physical Symbol System which can encode
EXPLICITLY the representation both of the WORLD (as seen by the fly)
and of HOW THE WORLD CHANGES when the fly moves.

Invent a representation for your semantic network using some familiar
programming language.  Remember that you are modeling OBJECTS (the
blocks) and RELATIONS between the objects.  Hint: you might like to
use property lists, but please feel no obligations to do so.

7.  Now the clincher which demonstrates the power of the idea that a
physical symbol system can represent PROCESSES as well as OBJECTS and
RELATIONS.  Write a program which transforms the WORLD-DESCRIPTION for
FLY-ON-SOUTH-WALL to WORLD-DESCRIPTION for FLY-ON-NORTH-WALL.  The
program should be a single function (with auxiliaries if you like)
which takes two arguments, the symbol SOUTH for the initial wall and
NORTH for the target wall, uses a global symbol whose value is your semantic
network representing the world seen from the south wall, and returns
T if successful and NIL if not.  As a side effect, the function should
CHANGE the symbolic structure representing the world so that afterward
it represents the blocks as seen by the fly from the north wall.
You might care to do this in two steps: first describing in English or
diagrams what is going on and then writing code to do it.
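For concreteness, one possible shape for answers to questions 6 and 7,
sketched in Python (dictionaries standing in for Lisp property lists; the
encoding and names are my assumptions, not the exam's required answer):

```python
# The two-block world as seen by the fly from the south wall, stored as
# a global: each block maps relation names to the related block.
WORLD = {"A": {"TO-THE-LEFT-OF": "B"},
         "B": {"TO-THE-RIGHT-OF": "A"}}

# Moving from the south wall to the north wall mirrors left and right.
FLIP = {"TO-THE-LEFT-OF": "TO-THE-RIGHT-OF",
        "TO-THE-RIGHT-OF": "TO-THE-LEFT-OF"}

def move_fly(start, target):
    """Return True on success, False otherwise; as a side effect,
    rewrite WORLD so it describes the view from the target wall."""
    if start != "SOUTH" or target != "NORTH":
        return False
    for block in WORLD:
        WORLD[block] = {FLIP.get(rel, rel): other
                        for rel, other in WORLD[block].items()}
    return True
```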

8.  The world is getting slightly more complex.  Now there are four
blocks, A and B as before (spread apart along an East/West line), C
which is ON-TOP-OF B, and D which is just to the north of (ie, in back
of when seen from the south) B.  Let's see your semantic network in
both graphical and Lisp forms.  The fly is on the South wall, looking North.
(Note that we mean "directly left-of" and so on.  A is LEFT-OF B but has
NO relation to D.)

9.  Generalize the code you wrote for question 7 (if you haven't already)
so that it correctly transforms the world seen by the fly from ANY of
the four walls (NORTH, EAST, SOUTH, and WEST) to that seen from any other
(including the same) wall.  What I mean by "generalize" is don't write
code that works only for the two-block or four-block worlds; code it so
it will work for ANY semantic network representing a world consisting of
ANY number of blocks with arbitrary relations between them chosen from
the set {LEFT-OF, RIGHT-OF, IN-FRONT-OF, IN-BACK-OF, ON-TOP-OF, UNDER}.
(Hint: if you are into group theory you might find a way to do this with
only ONE canonical transformation; otherwise just try a few examples
until you catch on.)
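In the spirit of the group-theory hint, a sketch of one canonical
transformation in Python (the counterclockwise wall ordering, and the
assumption that the fly always faces the middle of the room, are mine):

```python
# One quarter-turn of the viewpoint cyclically permutes the four
# horizontal relations; ON-TOP-OF and UNDER are unaffected.
ROTATE = {"LEFT-OF": "IN-FRONT-OF", "IN-FRONT-OF": "RIGHT-OF",
          "RIGHT-OF": "IN-BACK-OF", "IN-BACK-OF": "LEFT-OF"}
WALLS = ["SOUTH", "WEST", "NORTH", "EAST"]   # counterclockwise (assumed)

def transform(world, start, target):
    # Any wall-to-wall move is some number of quarter turns, so the one
    # ROTATE table handles all sixteen start/target combinations.
    turns = (WALLS.index(target) - WALLS.index(start)) % 4
    for _ in range(turns):
        for block in world:
            world[block] = {ROTATE.get(rel, rel): other
                            for rel, other in world[block].items()}
    return world
```

Moving SOUTH to NORTH is two quarter turns, which composes to the
left/right swap of the two-block example.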

10.  Up to now we have been assuming the fly is always right-side-up.
Can you do question 9 under the assumption that the fly sometimes perches
on the wall upside-down?  Have your function take two extra arguments
(whose values are RIGHT-SIDE-UP or UPSIDE-DOWN) to specify the fly's
vertical orientation on the initial and final walls.

11.  Up to now we have been modeling the WORLD AS SEEN BY THE FLY.  If
the fly moves, the world changes.  Why is this approach no good when
we allow more flies into the room and wish to model the situation from
ANY of their perspectives?

12.  What can be done to fix the problem you pointed out above?  That is,
redefine the "axioms" of your representation so it works in the "multiple
conscious agent" case.  (Hint: new axioms might include new names for
the relations.)

13.  In your new representation, the WORLD is a static object, while we
have functions called "projectors" which given the WORLD and a vantage
point (a symbol from the set {NORTH, EAST, SOUTH, WEST} and another from
the set {RIGHT-SIDE-UP, UPSIDE-DOWN}) return a symbolic description (a
"projection") of the world as seen from that vantage point.  For the
reasons you gave in answer to question 11, the projectors CANNOT HAVE
SIDE EFFECTS.  Write the projector function.
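A possible shape for such a projector, sketched in Python under the
renaming that question 12 hints at (the compass-based relation names are
my invention, and only two walls are shown for brevity):

```python
# Side-effect-free projector: the static WORLD uses vantage-independent
# compass relations; projection renames them into the fly's view-relative
# vocabulary and never mutates its input.
VIEW = {"SOUTH": {"WEST-OF": "LEFT-OF", "EAST-OF": "RIGHT-OF",
                  "SOUTH-OF": "IN-FRONT-OF", "NORTH-OF": "IN-BACK-OF"},
        "NORTH": {"WEST-OF": "RIGHT-OF", "EAST-OF": "LEFT-OF",
                  "SOUTH-OF": "IN-BACK-OF", "NORTH-OF": "IN-FRONT-OF"}}

def project(world, wall, orientation="RIGHT-SIDE-UP"):
    table = dict(VIEW[wall])
    if orientation == "UPSIDE-DOWN":
        # An inverted fly sees left/right and top/bottom exchanged.
        table["WEST-OF"], table["EAST-OF"] = table["EAST-OF"], table["WEST-OF"]
        table["ON-TOP-OF"], table["UNDER"] = "UNDER", "ON-TOP-OF"
    return {block: {table.get(rel, rel): other
                    for rel, other in relations.items()}
            for block, relations in world.items()}
```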

14.  Now let's implement a perceptual cognitive model builder, a program
that takes as input a sensory description (a symbolic structure which
represents the world as seen from a particular vantage point) and a
description of the vantage point and returns a "static world descriptor"
which is invariant with respect to vantage point.  Code up such a model
builder, using for input a semantic network of the type you used in
questions 6 through 10 and for output a semantic network of the type
used in questions 12 and 13.  (Note that this function is nothing more
than the inverse of the projector from question 13.)


********    THAT'S IT !!!    THAT'S IT !!!    THAT'S IT !!!    ********


SOME HELPFUL LISP FUNCTIONS
You may use these plus anything else discussed in class.

Function      Argument description          Return value     Side effect

PUTPROP <symbol> <value> <property-name> ==>  <value>       adds property
GET <symbol> <property-name>             ==>  <value>
REMPROP <symbol> <property-name>         ==>  <value>    removes property
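For readers without a Lisp at hand, the three primitives behave roughly
as in this Python emulation (REMPROP's exact return value varied across
Lisp dialects, so treat that detail as an assumption):

```python
# Minimal emulation of Maclisp-style property lists: one global table
# mapping each symbol to its property/value pairs.
PROPS = {}

def putprop(symbol, value, prop):
    # adds the property as a side effect, returns the value
    PROPS.setdefault(symbol, {})[prop] = value
    return value

def get(symbol, prop):
    # returns the stored value, or None if absent
    return PROPS.get(symbol, {}).get(prop)

def remprop(symbol, prop):
    # removes the property, returning the old value (or None)
    return PROPS.get(symbol, {}).pop(prop, None)
```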


***********************************************************************

					-- Bob Giansiracusa

------------------------------

Date: 31 Dec 83 15:25:34-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 4/6  (Second Exam)
Article-I.D.: psuvax.384

1.  (20) Why are you now sitting on this side of the room?  Can you cite
an AI system which used a similar strategy in deciding what to do?

2.  (10) Explain the difference between CHRONOLOGICAL and DEPENDENCY-
DIRECTED backtracking.

3.  (10) Compare and contrast PRODUCTION SYSTEMS and SEMANTIC NETWORKS:
how they work, what they can represent, and what types of problems are
well suited to solution using each type of knowledge representation.

4.  (20) Describe the following searches in detail.  In detail means:
 1) How do they work??           2) How are they related to each other??
 3) What are their advantages??  4) What are their disadvantages??
      Candidate methods:
         1) Depth-first                 2) Breadth-first
         3) Hill-climbing               4) Beam search
         5) Best-first                  6) Branch-and-bound
         7) Dynamic Programming         8) A*
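As a study aid for the first two candidate methods (my addition, not part
of the exam): depth-first and breadth-first search share one skeleton and
differ only in whether the frontier is a stack or a queue, which this
Python sketch makes explicit.

```python
from collections import deque

def search(start, goal, neighbors, breadth_first=True):
    # Breadth-first pops the oldest frontier node (queue discipline);
    # depth-first pops the newest (stack discipline).  Everything else
    # -- the visited set, the expansion loop -- is identical.
    frontier, seen = deque([start]), {start}
    while frontier:
        node = frontier.popleft() if breadth_first else frontier.pop()
        if node == goal:
            return True
        for n in neighbors(node):
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return False
```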

5.  (10) What are the characteristics of good generators for
the GENERATE and TEST problem-solving method?

6.  (10) Describe the ideas behind Mini-Max.  Describe the ideas behind
Alpha-Beta.  How do you use the two of them together and why would you
want to??
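One way the two fit together, as a sketch (game trees encoded as nested
lists is my assumption): alpha-beta is minimax plus a cutoff, returning
the same value while skipping branches that cannot affect the result.

```python
def alphabeta(node, maximizing=True, alpha=float("-inf"), beta=float("inf")):
    # Leaves are numbers; interior nodes are lists of children.  With the
    # default alpha/beta window this is plain minimax; the pruning test
    # abandons subtrees that cannot change the value backed up so far.
    if not isinstance(node, list):
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        value = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best, alpha = max(best, value), max(alpha, value)
        else:
            best, beta = min(best, value), min(beta, value)
        if beta <= alpha:        # remaining siblings are irrelevant
            break
    return best
```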

7.  (50) Godel's Incompleteness Theorem states that any consistent and
sufficiently complex formal system MUST contain true statements which
cannot be proved within the formal system.  Assume that THIS theorem is true.
  1.  If UNPROVABLE, how did Godel prove it?
  2.  If PROVABLE, provide an example of a true but unprovable statement.

8.  (40) Prove that this exam cannot be finished correctly; that is,
prove that this question is unsolvable.

9.  (50) Is human behavior governed by PREDESTINATION or FREE-WILL?  How
could you design a formal system to solve problems like that (that is, to
reason about "non-logical" concepts)?

10.  (40) Assume only ONE question on this exam were to be graded -- the
question that is answered by the FEWEST people.  How would you
decide what to do?  Show the productions such a system might use.

11.  (100) You will be given extra credit (up to 100 points) if by 12:10
pm today you bring to the staff a question.  If YOUR question is chosen,
it will be asked and everybody else given 10 points for a correct answer.
YOU will be given 100 points for a correct answer MINUS ONE POINT FOR EACH
CORRECT ANSWER GIVEN BY ANOTHER CLASS MEMBER.  What is your question?

					-- Bob Giansiracusa

------------------------------

Date: 31 Dec 83 15:27:19-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 5/6  (Third Exam)
Article-I.D.: psuvax.385

1.  What is the sum of the first N positive integers?  That is, what is:

         [put here the sigma-sign notation for the sum]

2.  Prove that your answer works for any N > 0.

3.  What is the sum of the squares of the first N positive integers:

         [put here the sigma-sign notation for the sum]

4.  Again, prove it.
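For self-study readers who want to check their answers to #1-#4: the
closed forms below are the standard ones, and the check is empirical,
not the inductive proof the exam asks for.

```python
def sum_first(n):
    # 1 + 2 + ... + n, by the usual closed form
    return n * (n + 1) // 2

def sum_squares(n):
    # 1^2 + 2^2 + ... + n^2, by the usual closed form
    return n * (n + 1) * (2 * n + 1) // 6

# Spot-check both formulas against the defining sums (not a proof!).
assert all(sum_first(n) == sum(range(1, n + 1)) for n in range(1, 50))
assert all(sum_squares(n) == sum(k * k for k in range(1, n + 1))
           for n in range(1, 50))
```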

5.  The proofs you gave (at least, if you are utilizing a "traditional"
mathematical background) are based on "mathematical induction".
Briefly state this principle and explain why it works.

6.  If you are like most people, your definition will work only over the
domain of NATURAL NUMBERS (positive integers).  Can this definition be
extended to work over ANY countable domain?

7.  Consider the lattice of points in N-dimensional space having integer
valued coordinates.  Is this space countable?

8.  Write a program (or express an algorithm in pseudocode) which returns
the number of points in this space (the one in #7) inside an N-sphere of
radius R (R is a real number > 0).
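One possible shape for an answer to #8, as a sketch only (it recurses
over dimensions, and floating-point square roots near the sphere's
boundary are glossed over):

```python
import math

def lattice_points(dim, radius):
    # Count integer points (x1, ..., x_dim) with sum of squares <= radius^2,
    # recursing one coordinate at a time over its feasible integer range.
    if dim == 0:
        return 1 if radius >= 0 else 0
    bound = int(math.floor(radius))
    total = 0
    for x in range(-bound, bound + 1):
        remaining = radius * radius - x * x
        if remaining >= 0:
            total += lattice_points(dim - 1, math.sqrt(remaining))
    return total
```

For example, in two dimensions with R = 2 this counts the 13 integer
points on or inside the circle of radius 2.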

9.  The domains you have considered so far are all countable.  The problem
solving methods you have used (if you're "normal") are based on
mathematical induction.  Is it possible to extend the principle of
mathematical induction (and recursive programming) to NON-COUNTABLE
domains?

10.  If you answered #9 NO, why not?  If you answered it YES, how?

11.  Problems #1 and #3 require you to perform INDUCTIVE REASONING
(a related but different use of the term "induction").  Discuss some of
the issues involved in getting a computer to perform this process
automatically.  (I mean the process of generating a finite symbolic
representation which when evaluated will return the partial sum for
an infinite sequence.)

12.  Consider the "sequence extrapolation" task: given a finite sequence
of symbols, predict the next few terms of the sequence or give a rule
which can generate ALL the terms of the sequence.  Is this problem
uniquely solvable?  Why or why not?

13.  If you answered #12 YES, how would you build a computer program to
do so?

14.  If you answered #12 NO, how could you constrain the problem to make
it uniquely solvable?  How would you build a program to solve the
constrained problem?

15.  Mankind is faced with the threat of nuclear annihilation.  Is there
anything the field of AI has to offer which might help avert that threat?
(Don't just say "yes" or "no"; come up with something real.)

16.  Assuming mankind survives the nuclear age, it is very likely that
ethical issues relating to AI and the use of computers will have very
much to do with the view the "person on the street" has of the human
purpose and role in the Universe.  In what way can AI researchers plan
NOW so that these ethical issues are resolved to the benefit of the
greatest number of people?

17.  Could it be that our (humankind's) purpose on earth is to invent
and build the species which will be the next in the evolutionary path?
Should we do so?  How?  Why?  Why not?

18.  Suppose you have just discovered the "secret" of Artificial
Intelligence; that is, you (working alone and in secret) have figured
out a way (new hardware, new programming methodology, whatever) to build
an artificial device which is MORE INTELLIGENT, BY ANY DEFINITION, BY
ANY TEST WHATSOEVER, than any human being.  What do you do with this
knowledge?  Explain the pros and cons of several choices.

19.  Question #9 indicates that SO FAR all physical symbol systems have
dealt ONLY with discrete domains.  Is it possible to generalize the
idea to continuous domains?  Since many aspects of the human nervous
system function on a continuous (as opposed to discrete) basis, is it
possible that the invention of CONTINUOUS PHYSICAL SYMBOL SYSTEMS might
provide part of the key to the "secret of intelligence"?

20.  What grade do you feel you DESERVE in this course?  Why?  What
grade do you WANT?  Why?  If the two differ, is there anything you
want to do to reduce the difference?  Why or Why Not?  What is it?
Why is it (or is it not) worth doing?

--
Spoken: Bob Giansiracusa
Bell:   814-865-9507
Bitnet: bobgian@PSUVAX1.BITNET
Arpa:   bobgian%psuvax1.bitnet@Berkeley
CSnet:  bobgian@penn-state.csnet
UUCP:   allegra!psuvax!bobgian
USnail: Dept of Comp Sci, Penn State Univ, University Park, PA 16802

------------------------------

End of AIList Digest
********************

∂09-Jan-84  1641	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #5 
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Jan 84  16:41:38 PST
Date: Mon  9 Jan 1984 14:53-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #5
To: AIList@SRI-AI


AIList Digest            Tuesday, 10 Jan 1984       Volume 2 : Issue 5

Today's Topics:
  AI and Weather Forecasting - Request,
  Expert Systems - Request,
  Pattern Recognition & Cognition,
  Courses - Reaction to PSU's AI Course,
  Programming Languages - LISP Advantages
----------------------------------------------------------------------

Date: Mon 9 Jan 84 14:15:13-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: AI and Weather Forecasting

I have been talking with people interested in AI techniques for
weather prediction and meteorological analysis.  I would appreciate
pointers to any literature or current work on this subject, especially

    * knowledge representations for spatial/temporal reasoning;
    * symbolic description of weather patterns;
    * capture of forecasting expertise;
    * inference methods for estimating meteorological variables
      from (spatially and temporally) sparse data;
    * methods of interfacing symbolic knowledge and heuristic
      reasoning with numerical simulation models;
    * any weather-related expert systems.

I am aware of some recent work by Gaffney and Racer (NBS Trends and
Applications, 1983) and by Taniguchi et al. (6th Pat. Rec., 1982),
but I have not been following this field.  A bibliography or guide
to relevant literature would be welcome.

                                        -- Ken Laws

------------------------------

Date: 5 January 1984 13:47 est
From: RTaylor.5581i27TK at RADC-MULTICS
Subject: Expert Systems Info Request


Hi, y'all...I have the names (hopefully, correct) of four expert
systems/tools/environments (?).  I am interested in the "usual":  that
is, general info, who to contact, feedback from users, how to acquire
(if we want it), etc.  The four names I have are:  RUS, ALX, FRL, and
FRED.

Thanks.  Also, thanks to those who provided info previously...I have
info (similar to that requested above) on about 15 other
systems/tools/environments...some of the info is a little sketchy!

             Roz  (aka:  rtaylor at radc-multics)

------------------------------

Date: 3 Jan 84 20:38:52-PST (Tue)
From: decvax!genrad!mit-eddie!rh @ Ucb-Vax
Subject: Re: Loop detection and classical psychology
Article-I.D.: mit-eddi.1114

One of the truly amazing things about the human brain is that its pattern
recognition capabilities seem limitless (in extreme cases).  We don't even
have a satisfactory way to describe pattern recognition as it occurs in
our brains.  (Well, maybe we have something acceptable at a minimum level.
I'm always impressed by how well dollar-bill changers seem to work.)  As
a friend of mine put it, "the brain immediately rejects an infinite number
of wrong answers," when working on a problem.

Randwulf  (Randy Haskins);  Path= genrad!mit-eddie!rh

------------------------------

Date: Fri 6 Jan 84 10:11:01-PST
From: Ron Brachman <Brachman at SRI-KL>
Subject: PSU's First AI Course

Wow!  I actually think it's kind of neat (but, of course, very wacko).  I
particularly like making people think about the ethical and philosophical
considerations at the same time as they're thinking about minimax, etc.

------------------------------

Date: Wed 4 Jan 84 17:23:38-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Re: AIList Digest   V2 #1

[in response to Herb Lin's questions]

Well, 2 more or less answers 1.   One of the main reasons why Lisp and not C
is the language of many people's choice for AI work is that you can easily cons
up at run time a piece of data which "is" the next action you are going to
take.   In most languages you are restricted to choosing from pre-written
actions, unless you include some kind of interpreter right there in your AI
program.   Another reason is that Lisp has all sorts of extensibility.

As for 3, the obvious response is that in Pascal control has to be routed to an
IF statement before it can do any good, whereas in a production system, control
automatically "goes" to any production that is applicable.   This is highly
over-simplified and may not be the answer you were looking for.

                                                - Richard

------------------------------

Date: Friday,  6 Jan 1984 13:10-PST
From: narain@rand-unix
Subject: Reply to Herb Lin: Why is Lisp good for AI?


A central issue in AI is knowledge representation.  Experimentation with  a
new  KR  scheme  often involves defining a new language. Often, definitions
and meanings of new  languages  are  conceived  of  naturally  in  terms of
recursive (hierarchical) structures.  For instance, many grammars of English-
like frontends are recursive, so  are  production  system  definitions,  so
are theorem provers.

The abstract machinery  underlying  Lisp,  the  Lambda  Calculus,  is  also
inherently recursive, yet very simple and powerful.  It involves the notion
of function application to symbolic expressions.  Functions can  themselves
be  symbolic  expressions.  Symbolic expressions provide a basis for SIMPLE
implementation   and   manipulation   of   complex   data/knowledge/program
structures.

It is therefore possible to easily interpret  new  language  primitives  in
terms of Lisp's already very high level primitives.  Thus, Lisp is a  great
"machine language" for AI.

The usefulness of a well understood, powerful, abstract  machinery  of  the
implementation language is probably more obvious when we  consider  Prolog.
The  logical  interpretation of Prolog programs helps considerably in their
development and verification.  Logic is a convenient specification language
for  a  lot  of  AI, and it is far easier to 'compile' those specifications
into a logic language like Prolog than  into  Pascal.  For  instance,  take
natural  language  front ends implemented in DCGs or database/expert-system
integrity and redundancy constraints.

The fact that programs can be considered as data is not true only of  Lisp.
Even in Pascal you can analyze a Pascal program.  The nice thing  in  Lisp,
however,  is  that  because  of  its  few  (but  very powerful) primitives,
programs tend to be simply structured and concise  (cf.  claims  in  recent
issues  of  this  bulletin that Lisp programs were much shorter than Pascal
programs).  So naturally it is simpler to analyze  Lisp  programs  in  Lisp
than it is to analyze Pascal programs in Pascal.

Of course,  Lisp  environments  have  evolved  for  over  two  decades  and
contribute  no  less to its desirability for AI.  Some of the nice features
include screen-oriented editors, interactiveness, debugging facilities, and
an extremely simple syntax.

I would greatly appreciate any comments on the above.

Sanjai Narain
Rand.

------------------------------

Date: 6 Jan 84 13:20:29-PST (Fri)
From: ihnp4!mit-eddie!rh @ Ucb-Vax
Subject: Re: Herb Lin's questons on LISP etc.
Article-I.D.: mit-eddi.1129

One of the problems with LISP, however, is that it does not force one
to subscribe to a code of good programming practice.  I've found
that the things I have written for my bridge-playing program (over
the last 18 months or so) have gotten incredibly crufty, with
some real brain-damaged patches.  Yeah, I realize it's my fault;
I'm not complaining about it because I love LISP, I just wanted
to mention some of the pitfalls for people to think about.  Right
now, I'm in the process of weeding out the cruft, trying to make
it more clearly modular, decrease the number of similar functions
and so on.  Sigh.

Randwulf  (Randy Haskins);  Path= genrad!mit-eddie!rh

------------------------------

Date: 7 January 1984 15:08 EST
From: Herb Lin <LIN @ MIT-ML>
Subject: my questions of last Digest on differences between PASCAL
         and LISP

So many people replied that I send my thanks to all via the list.  I
very much appreciate the time and effort people put into their
comments.

------------------------------

End of AIList Digest
********************

∂10-Jan-84  1336	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #6 
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Jan 84  13:34:22 PST
Date: Tue 10 Jan 1984 09:48-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #6
To: AIList@SRI-AI


AIList Digest            Tuesday, 10 Jan 1984       Volume 2 : Issue 6

Today's Topics:
  Humor,
  Seminars - Programming Styles & ALICE & 5th Generation,
  Courses - Geometric Data Structures & Programming Techniques & Linguistics
----------------------------------------------------------------------

Date: Mon, 9 Jan 84 08:45 EST
From: MJackson.Wbst@PARC-MAXC.ARPA
Subject: An AI Joke

Last week a cartoon appeared in our local (Rochester NY) paper.  It was
by a fellow named Toles, a really excellent editorial cartoonist who
works out of, of all places, Buffalo:

Panel 1:

[medium view of the Duckburg Computer School building.  A word balloon
extends from one of the windows]
"A lot of you wonder why we have to spend so much time studying these
things."

Panel 2:

[same as panel 1]
"It so happens that they represent a lot of power.  And if we want to
understand and control that power, we have to study them."

Panel 3:

[interior view of a classroom full of personal computers.  At right,
several persons are entering.  At left, a PC speaks]
". . .so work hard and no talking.  Here they come."

Tickler (a mini-cartoon down in the corner):

[a lone PC speaks to the cartoonist]
"But I just HATE it when they touch me like that. . ."


Mark

------------------------------

Date: Sat, 7 Jan 84 20:02 PST
From: Vaughan Pratt <pratt@navajo>
Subject: Imminent garbage collection of Peter Coutts.  :=)

  [Here's another one, reprinted from the SU-SCORE bboard.  -- KIL]

Les Goldschlager is visiting us on sabbatical from Sydney University, and
stayed with us while looking for a place to stay.  We belatedly pointed him
at Peter Coutts, which he immediately investigated and found a place to
stay right away.  His comment was that no pointer to Peter Coutts existed
in any of the housing assistance services provided by Stanford, and that
therefore it seemed likely that it would be garbage collected soon.
-v

------------------------------

Date: 6 January 1984 23:48 EST
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Seminar on Programming Styles in AI

                     DATE:      Thursday, January 12, 1984
                     TIME:      3.45 p.m.  Refreshments
                                4.00 p.m.  Lecture
                     PLACE:     NE43-8th Floor, AI Playroom


               PROGRAMMING STYLES IN ARTIFICIAL INTELLIGENCE

                              Herbert Stoyan
                   University of Erlangen, West Germany

                               ABSTRACT

Not much is clear about the scientific methods used in AI research.
Scientific methods are sets of rules used to collect knowledge about the
subject being researched.  AI is an experimental branch of computer science
which does not seem to use established programming methods.  In several
works on AI we can find the following method:

    1.  develop a new convenient programming style

    2.  invent a new programming language which supports the new style
        (or embed some appropriate elements into an existing AI language,
        such as LISP)

    3.  implement the language (interpretation as a first step is
        typically less efficient than compilation)

    4.  use the new programming style to make things easier.

A programming style is a way of programming guided by a speculative view of
a machine which works according to the programs.  A programming style is
not a programming method.  It may be detected by analyzing the text of a
completed program.  In general, it is possible to program in one
programming language according to the principles of various styles.  This
is true in spite of the fact that programming languages are usually
designed with some machine model (and therefore with some programming
style) in mind.  We discuss some of the AI programming styles.  These
include operator-oriented, logic-oriented, function-oriented, rule-
oriented, goal-oriented, event-oriented, state-oriented, constraint-
oriented, and object-oriented. (We shall not however discuss the common
instruction-oriented programming style).  We shall also give a more detailed
discussion of how an object-oriented programming style may be used in
conventional programming languages.

HOST:  Professor Ramesh Patil

------------------------------

Date: Mon 9 Jan 84 14:09:07-PST
From: Laws@SRI-AI
Subject: SRI Talk on ALICE, 1/23, 4:30pm, EK242


ALICE:  A parallel graph-reduction machine for declarative and other
languages.

SPEAKER -  John Darlington, Department of Computing, Imperial College,
           London
WHEN    -  Monday, January 23, 4:30pm
WHERE   -  AIC Conference Room, EK242

     [This is an SRI AI Center talk.  Contact Margaret Olender at
     MOLENDER@SRI-AI or 859-5923 if you would like to attend.  -- KIL]

                           ABSTRACT

Alice is a highly parallel graph-reduction machine being designed and
built at Imperial College.  Although designed for the efficient
execution of declarative languages, such as functional or logic
languages, ALICE is general purpose and can execute sequential
languages also.

This talk will describe the general model of computation, extended
graph reduction, that ALICE executes, outline how different languages
can be supported by this model, and describe the concrete architecture
being constructed.  A 24-processor prototype is planned for early
1985.  This will give a two-orders-of-magnitude improvement over a VAX
11/750 for declarative languages. ALICE is being constructed out of
two building blocks, a custom-designed switching chip and the INMOS
transputer. So far, compilers for a functional language, several logic
languages, and LISP have been constructed.

------------------------------

Date: 9 Jan 1984 1556-PST
From: OAKLEY at SRI-CSL
Subject: SRI 5th Generation Talk


  Japan's 5th Generation Computer Project: Past, Present, and Future
      -- personal observations by a researcher of
         ETL (ElectroTechnical Laboratory)

                          Kokichi FUTATSUGI
                    Senior Research Scientist, ETL
                    International Fellow, SRI-CSL


    Talk on January 24, l984, in conference room EL369 at 10:00am.
    [This is an SRI Computer Science Laboratory talk.  Contact Mary Oakley
    at OAKLEY@SRI-AI or 859-5924 if you would like to attend.  -- KIL]


1 Introduction
  * general overview of Japan's research activities in
    computer science and technology
  * a personal view

2 Past -- pre-history of ICOT (the Institute of New Generation
  Computer Technology)
  * ETL's PIPS project
  * preliminary research and study activities
  * the establishment of ICOT

3 Present -- present activities
  * the organization of ICOT
  * research activities inside ICOT
  * research activities outside ICOT

4 Future -- ICOT's plans and general overview
  * ICOT's plans
  * relations to other research activities
  * some comments

------------------------------

Date: Thu 5 Jan 84 16:41:57-PST
From: Martti Mantyla <MANTYLA@SU-SIERRA.ARPA>
Subject: Data Structures & Algorithms for Geometric Problems

                    [Reprinted from the SU-SCORE bboard.]

                                  NEW COURSE:
                     EE392 DATA STRUCTURES AND ALGORITHMS
                            FOR GEOMETRIC PROBLEMS


Many   problems   arising  in  science  and  engineering  deal  with  geometric
information.  Engineering design  is  most  often  spatial  activity,  where  a
physical  shape  with  certain desired properties must be created.  Engineering
analysis also makes heavy use of information on the geometric form of the object.

The seminar Data Structures and Algorithms for Geometric  Problems  deals  with
problems  related to representing and processing data on the geometric shape of
an object in a computer.    It  will  concentrate  on  practically  interesting
solutions to tasks such as

   - representation of digital images,
   - representation of line figures,
   - representation of three-dimensional solid objects, and
   - representation of VLSI circuits.

The  point  of  view  taken  is  hence  slightly  different  from a "hard-core"
Computational Geometry view that  puts  emphasis  on  asymptotic  computational
complexity.    In  practice,  one  needs solutions that can be implemented in a
reasonable  time,  are  efficient  and  robust  enough,  and  can  support   an
interesting   scope  of  applications.    Of  growing  importance  is  to  find
representations  and  algorithms  for  geometry  that   are   appropriate   for
implementation in special hardware and VLSI in particular.

The seminar will be headed by

    Dr. Martti Mantyla (MaM)
    Visiting Scholar
    CSL/ERL 405
    7-9310
    MANTYLA@SU-SIERRA.ARPA

who  will  give  introductory  talks.    Guest  speakers of the seminar include
well-known scientists and practitioners of the field such as Dr. Leo Guibas and
Dr. John Ousterhout.  Classes are held on

                             Tuesdays, 2:30 - 3:30
                                      in
                                    ERL 126

First class will be on 1/10.

The seminar should be of interest to  CS/EE  graduate  students  with  research
interests   in   computer   graphics,   computational   geometry,  or  computer
applications in engineering.

------------------------------

Date: 6 Jan 1984 1350-EST
From: KANT at CMU-CS-C.ARPA
Subject: AI Programming Techniques Course

                  [Reprinted from the CMUC bboard.]


           Announcing another action-packed AI mini-course!
                 Starting soon in the 5409 near you.

This course covers a variety of AI programming techniques and languages.
The lectures will assume a background equivalent to an introductory AI course
(such as the undergraduate course 15-380/381 or the graduate core course
15-780.)  They also assume that you have had at least a brief introduction to
LISP and a production-system language such as OPS5.

       15-880 A,  Artificial Intelligence Programming Techniques
                         MW 2:30-3:50, WeH 5409


T Jan 10        (Brief organizational meeting only)
W Jan 11        LISP: Basic Pattern Matching (Carbonell)
M Jan 16        LISP: Deductive Data Bases (Steele)
W Jan 18        LISP: Basic Control: backtracking, demons (Steele)
M Jan 23        LISP: Non-Standard Control Mechanisms (Carbonell)
W Jan 25        LISP: Semantic Grammar Interpreter (Carbonell)
M Jan 30        LISP: Case-Frame interpreter (Hayes)
W Feb 1         PROLOG I (Steele)
M Feb 6         PROLOG II (Steele)
W Feb 8         Reason Maintenance and Comparison with PROLOG (Steele)
M Feb 13        AI Programming Environments and Hardware I (Fahlman)
W Feb 15        AI Programming Environments and Hardware II (Fahlman)
M Feb 20        Schema Representation Languages I (Fox)
W Feb 22        Schema Representation Languages II (Fox)
W Feb 29        User-Interface Issues in AI (Hayes)
M Mar 5         Efficient Game Playing and Searching (Berliner)
W Mar 7         Production Systems: Basic Programming Techniques (Kant)
M Mar 12        Production Systems: OPS5 Programming (Kant)
W Mar 14        Efficiency and Measurement in Production Systems (Forgy)
M Mar 16        Implementing Diagnostic Systems as Production Systems (Kahn)
M Mar 26        Intelligent Tutoring Systems: GRAPES and ACT Implementations
                     (Anderson)
W Mar 28        Explanation and Knowledge Acquisition in Expert Systems
                     (McDermott)
M Apr 2         A Production System for Problem Solving: SOAR2 (Laird)
W Apr 4         Integrating Expert-System Tools with SRL (KAS, PSRL, PDS)
                     (Rychener)
M Apr 9         Additional Expert System Tools: EMYCIN, HEARSAY-III, ROSIE,
                   LOOPS, KEE (Rosenbloom)
W Apr 11        A Modifiable Production-System Architecture: PRISM (Langley)
M Apr 16        (additional topics open to negotiation)

------------------------------

Date: 9 Jan 1984 1238:48-EST
From: Lori Levin <LEVIN@CMU-CS-C.ARPA>
Subject: Linguistics Course

                  [Reprinted from the CMUC bboard.]

NATURAL LANGUAGE SYNTAX FOR COMPUTER SCIENTISTS

FRIDAYS  10:00 AM - 12:00
4605 Wean Hall

Lori Levin
Richmond Thomason
Department of Linguistics
University of Pittsburgh

This is an introduction to recent work in generative syntax.  The
course will deal with the formalism of some of the leading syntactic
theories as well as with methodological issues.  Computer scientists
find the formalism used by syntacticians easy to learn, and so the
course will begin at a fairly advanced level, though no special
knowledge of syntax will be presupposed.

We will begin with a sketch of the "Standard Theory," Chomsky's
approach of the mid-60's from which most of the current theories have
evolved.  Then we will examine Government-Binding Theory, the
transformational approach now favored at M.I.T.  Finally, we will
discuss in more detail two nontransformational theories that are more
computationally tractable and have figured in joint research projects
involving linguists, psychologists, and computer scientists:
Lexical-Functional Grammar and Generalized Context-Free Phrase
Structure Grammar.

------------------------------

End of AIList Digest
********************

∂16-Jan-84  2244	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #7 
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Jan 84  22:44:15 PST
Date: Mon 16 Jan 1984 21:55-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #7
To: AIList@SRI-AI


AIList Digest            Tuesday, 17 Jan 1984       Volume 2 : Issue 7

Today's Topics:
  Production Systems - Requests,
  Expert Systems - Software Debugging Aid,
  Logic Programming - Prolog Textbooks & Disjunction Problem,
  Alert - Fermat's Last Theorem Proven?,
  Seminars - Multiprocessing Lisp & Lisp History,
  Conferences - Logic Programming Discount & POPL'84,
  Courses - PSU's First AI Course & Net AI Course
----------------------------------------------------------------------

Date: 11 Jan 1984 1151-PST
From: Jay <JAY@USC-ECLC>
Subject: Request for production systems

  I would like pointers  to free or  public domain production  systems
(running on Tops-20, Vax-Unix, or Vax-Vms) both interpreters (such  as
ross) and systems built up on them (such as emycin).  I am  especially
interested in Rosie, Ross, Ops5, and Emycin.  Please reply directly to
me.
j'

ARPA: jay@eclc

------------------------------

Date: Thu 12 Jan 84 12:13:20-MST
From: Stanley T. Shebs <SHEBS@UTAH-20.ARPA>
Subject: Taxonomy of Production Systems

I'm looking for info on a formal taxonomy of production rule systems,
sufficiently precise that it can distinguish OPS5 from YAPS, but also say
that they're more similar than either of them is to Prolog.  The only
relevant material I've seen is the paper by Davis & King in MI 8, which
characterizes PSs in terms of syntax, complexity of LHS and RHS, control
structure, and "programmability" (seems to mean meta-rules).  This is
a start, but too vague to be implemented.  A formal taxonomy should
indicate where "holes" exist, that is, strange designs that nobody has
built.  Also, how would Georgeff's (Stanford STAN-CS-79-716) notion of
"controlled production systems" fit in?  He showed that CPSs are more
general than PSs, but then one can also show that any CPS can be represented
by some ordinary PS.  I'm particularly interested in formalization of
the different control strategies - are text order selection (as in Prolog)
and conflict resolution (as in OPS5) mutually exclusive, or can they be
intermixed (perhaps using text order to find 5 potential rules, then
conflict resolution to choose among the 5).  Presumably a sufficiently
precise taxonomy could answer these sorts of questions.  Has anyone
looked at these questions?
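
One concrete reading of the last question can be sketched in a few lines of
Python.  Everything in the sketch (the rule format, the window of five,
specificity as the conflict-resolution policy) is an illustrative assumption,
not a feature of OPS5, YAPS, or any other existing system:

```python
# Hypothetical hybrid control strategy: text order narrows the field,
# then conflict resolution picks the winner.  All names are made up.

def select_rule(rules, working_memory, window=5):
    # Text-order phase: scan rules in their listed order and keep the
    # first `window` rules whose conditions all hold in working memory.
    candidates = []
    for name, conditions, action in rules:
        if all(c in working_memory for c in conditions):
            candidates.append((name, conditions, action))
            if len(candidates) == window:
                break
    if not candidates:
        return None
    # Conflict-resolution phase: prefer the most specific candidate,
    # i.e. the one with the most conditions (an OPS5-like policy).
    return max(candidates, key=lambda rule: len(rule[1]))

rules = [
    ("r1", ["a"], "act1"),
    ("r2", ["a", "b"], "act2"),
    ("r3", ["c"], "act3"),
]
print(select_rule(rules, {"a", "b"}))
```

The two phases here are plainly composable rather than mutually exclusive,
which suggests a taxonomy would need to treat control strategy as a sequence
of filters rather than a single attribute.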

                                                        stan shebs

------------------------------

Date: 16 Jan 84 19:13:21 PST (Monday)
From: Ron Newman <Newman.es@PARC-MAXC.ARPA>
Subject: Expert systems for software debugging?

Debugging is a black art, not at all algorithmic, but almost totally
heuristic.  There is a lot of expert knowledge around about how to debug
faulty programs, but it is rarely written down or systematized.  Usually
it seems to reside solely in the minds of a few "debugging whizzes".

Does anyone know of an expert system that assists in software debugging?
Or any attempts (now or in the past) to produce such an expert?

/Ron

------------------------------

Date: 12 Jan 84 20:43:31-PST (Thu)
From: harpo!floyd!clyde!akgua!sb1!mb2c!uofm-cv!lah @ Ucb-Vax
Subject: prolog reference
Article-I.D.: uofm-cv.457

Could anybody give some references to a good introductory book
on Prolog?

------------------------------

Date: 14 Jan 84 14:50:57-PST (Sat)
From: decvax!duke!mcnc!unc!bts @ Ucb-Vax
Subject: Re: prolog reference
Article-I.D.: unc.6594

There's only one introductory book I know of: Clocksin
and Mellish's "Programming in Prolog", Springer-Verlag, 1981.
It's a silver paperback, probably still under $20.00.

For more information on the language, try Clark and Tarnlund's
"Logic Programming", Academic Press, 1982.  It's a white hard-
back, with an elephant on the cover.  The papers by Bruynooghe
and by Mellish tell a lot about Prolog implementation.

Bruce Smith, UNC-Chapel Hill
decvax!duke!unc!bts     (USENET)
bts.unc@CSnet-Relay (lesser NETworks)

------------------------------

Date: 13 Jan 84 8:11:49-PST (Fri)
From: hplabs!hao!seismo!philabs!sbcs!debray @ Ucb-Vax
Subject: re: trivial reasoning problem?
Article-I.D.: sbcs.572

Re: Marcel Schoppers' problem: given two lamps A and B, such that:

        condition 1) at least one of them is on at any time; and
        condition 2) if A is on then B is off,

        we are to enumerate the possible configurations without an exhaustive
        generate-and-test strategy.

The following "pure" Prolog program will generate the various
configurations without exhaustively generating all possible combinations:


  config(A, B) :- cond1(A, B), cond2(A, B).   /* both conditions must hold */

  cond1(1, _).    /* at least one is on at any time ... condition 1 above */
  cond1(_, 1).

  cond2(1, 0).    /* if A is on then B is off */
  cond2(0, _).    /* if A is off, B's value is a don't care */

Executing the program in Prolog gives:

| ?- config(A, B).

A = 1
B = 0 ;

A = 0
B = 1 ;

no
| ?- halt.
[ Prolog execution halted ]

Tracing the program shows that the configuration "A=0, B=0" is not generated.
This satisfies the "no-exhaustive-listing" criterion.  Note that encoding
the second condition above using "not" would both (1) fall outside pure
Horn clauses and (2) amount to exhaustive generation and filtering.
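
The same constrained enumeration can be sketched outside Prolog.  The Python
fragment below is purely illustrative (not from the original message): it
mirrors the clauses above in that cond1 proposes only pairs with at least one
lamp on, and cond2 prunes the rest, so A=0, B=0 is never generated.

```python
# Illustrative re-rendering of the two-lamp Prolog program: conditions
# constrain generation instead of filtering a full enumeration.

def config():
    # cond1: propose only pairs with at least one lamp on;
    # the pair (0, 0) is never generated at all.
    for a, b in ((1, 0), (1, 1), (0, 1)):
        # cond2: if A is on then B must be off.
        if a == 1 and b != 0:
            continue
        yield (a, b)

print(list(config()))  # the two legal configurations
```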

Saumya Debray
Dept. of Computer Science
SUNY at Stony Brook

                {floyd, bunker, cbosgd, mcvax, cmcl2}!philabs!
                                                              \
        Usenet:                                                sbcs!debray
                                                              /
                   {allegra, teklabs, hp-pcd, metheus}!ogcvax!
        CSNet: debray@suny-sbcs@CSNet-Relay


[Several other messages discussing this problem and suggesting Prolog
code were printed in the Prolog Digest.  Different writers suggested
very different ways of structuring the problem.  -- KIL]


------------------------------

Date: Fri 13 Jan 84 11:16:21-CST
From: Clive Dawson <CC.Clive@UTEXAS-20.ARPA>
Subject: Fermat's Last Theorem Proven?

                [Reprinted from the UTEXAS-20 bboard.]

There was a report last night on National Public Radio's All Things Considered
about a British mathematician named Arnold Arnold who claims to have
developed a new technique for dealing with multi-variable, high-dimensional
spaces.  The method apparently makes generation of large prime numbers
very easy, and has applications in genetics, the many-body problem, orbital
mechanics, etc.  Oh yeah, the proof of Fermat's Last Theorem falls out of
this as well!  The guy apparently has no academic credentials, and refuses
to publish in the journals because he's interested in selling his technique.
There was another mathematician named Jeffrey Colby who had been allowed
to examine Arnold's work on the condition he didn't disclose anything.
He claims the technique is all it's claimed to be, and shows what can
be done when somebody starts from pure ignorance not clouded with some
of the preconceptions of a formal mathematical education.

If anybody hears more about this, please pass it along.

Clive

------------------------------

Date: 12 Jan 84  2350 PST
From: Rod Brooks <ROD@SU-AI>
Subject: Next week's CSD Colloquium.

                [Reprinted from the SU-SCORE bboard.]

  Dr. Richard P. Gabriel, Stanford CSD
  ``Queue-based Multi-processing Lisp''
  4:30pm Terman Auditorium, Jan 17th.

As the need for high-speed computers increases, the need for
multi-processors will be become more apparent. One of the major stumbling
blocks to the development of useful multi-processors has been the lack of
a good multi-processing language---one which is both powerful and
understandable to programmers.

Among the most compute-intensive programs are artificial intelligence (AI)
programs, and researchers hope that the potential degree of parallelism in
AI programs is higher than in many other applications.  In this talk I
will propose a version of Lisp which is multi-processed.  Unlike other
proposed multi-processing Lisps, this one will provide only a few very
powerful and intuitive primitives rather than a number of parallel
variants of familiar constructs.

The talk will introduce the language informally, and many examples along
with performance results will be shown.

------------------------------

Date: 13 January 1984 07:36 EST
From: Kent M Pitman <KMP @ MIT-MC>
Subject: What is Lisp today and how did it get that way?

                 [Reprinted from the MIT-MC bboard.]

                        Modern Day Lisp

        Time:   3:00pm
        Date:   Wednesdays and Fridays, 18-27 January
        Place:  8th Floor Playroom

The Lisp language has changed significantly in the past 5 years. Modern
Lisp dialects bear only a superficial resemblance to each other and to
their common parent dialects.

Why did these changes come about? Has progress been made? What have we
learned in 5 hectic years of rapid change? Where is Lisp going?

In a series of four lectures, we'll be surveying a number of the key
features that characterize modern day Lisps. The current plan is to touch
on at least the following topics:


        Scoping. The move away from dynamic scoping.
        Namespaces. Closures, Locales, Obarrays, Packages.
        Objects. Actors, Capsules, Flavors, and Structures.
        Signals. Errors and other unusual conditions.
        Input/Output. From streams to window systems.


The discussions will be more philosophical than technical. We'll be
looking at several Lisp dialects, not just one. These lectures are not
just something for hackers. They're aimed at just about anyone who uses
Lisp and wants an enhanced appreciation of the issues that have shaped
its design and evolution.

As it stands now, I'll be giving all of these talks, though there
is some chance there will be some guest lecturers on selected
topics. If you have questions or suggestions about the topics to be
discussed, feel free to contact me about them.

                        Kent Pitman (KMP@MC)
                        NE43-826, x5953

------------------------------

Date: Wed 11 Jan 84 16:55:02-PST
From: PEREIRA@SRI-AI.ARPA
Subject: IEEE Logic Programming Symposium (update)

              1984 International Symposium on
                      Logic Programming

                 Student Registration Rates


In our original symposium announcements, we failed to offer a student
registration rate. We would like to correct that situation now.
Officially enrolled students may attend the symposium for the reduced
rate of $75.00.

This rate includes the symposium itself (all three days) and one copy
of the symposium proceedings. It does not include the tutorial, the
banquet, or cocktail parties.  It does, however, include the Casino
entertainment show.

Questions and requests for registration forms by US mail to:

   Doug DeGroot                           Fernando Pereira
   Program Chairman                       SRI International
   IBM Research                    or     333 Ravenswood Ave.
   P.O. Box 218                           Menlo Park, CA 94025
   Yorktown Heights, NY 10598             (415) 859-5494
   (914) 945-3497

or by net mail to:

                  PEREIRA@SRI-AI (ARPANET)
                  ...!ucbvax!PEREIRA@SRI-AI (UUCP)

------------------------------

Date: Tue 10 Jan 84 15:54:09-MST
From: Subra <Subrahmanyam@UTAH-20.ARPA>
Subject: *** P O P L 1984 --- Announcement ***

*******************************  POPL 1984 *********************************

                              ELEVENTH ANNUAL

                            ACM SIGACT/SIGPLAN

                               SYMPOSIUM ON

                               PRINCIPLES OF

                           PROGRAMMING LANGUAGES


    *** POPL 1984 will be held in Salt Lake City, Utah January 15-18. ****
  (The skiing is excellent, and the technical program threatens to match it!)

For additional details, please contact

        Prof. P. A. Subrahmanyam
        Department of Computer Science
        University of Utah
        Salt Lake City, Utah 84112.

        Phone: (801)-581-8224

ARPANET: Subrahmanyam@UTAH-20 (or Subra@UTAH-20)


------------------------------

Date: 12 Jan 84 4:51:51-PST (Thu)
From: 
Subject: Re: PSU's First AI Course - Comment
Article-I.D.: sjuvax.108

I would rather NOT get into social issues of AI: there are millions of
forums for that (and I myself have all kinds of feelings and reservations
on the issue, including Vedantic interpretations), so let us keep this
one technical, please.

------------------------------

Date: 13 Jan 84 11:42:21-PST (Fri)
From: 
Subject: Net AI course -- the communications channel
Article-I.D.: psuvax.413

Responses so far have strongly favored my creating a moderated newsgroup
as a sub to net.ai for this course.  Most were along these lines:

    From: ukc!srlm (S.R.L.Meira)

    I think you should act as the moderator, otherwise there would be too
    much noise - in the sense of unordered information and discussions -
    and it could finish looking like just another AI newsgroup argument.
    Anybody is of course free to post whatever they want if they feel
    the thing is not coming out like they want.

Also, if the course leads to large volume, many net.ai readers (busy AI
professionals rather than students) might drop out of net.ai.

For a contrasting position:

    From: cornell!nbires!stcvax!lat

    I think the course should be kept as a newsgroup.  I don't think
    it will increase the nation-wide phone bills appreciably beyond
    what already occurs due to net.politics, net.flame, net.religion
    and net.jokes.

So HERE's how I'll try to keep EVERYBODY happy ...    :-)

... a "three-level" communication channel.  1: a "free-for-all" via mail
(or possibly another newsgroup), 2: a moderated newsgroup sub to net.ai,
3: occasional abstracts, summaries, pointers posted to net.ai and AIList.

People can then choose the extent of their involvement and set their own
"bull-rejection threshold".  (1) allows extensive involvement and flaming,
(2) would be the equivalent of attending a class, and (3) makes whatever
"good stuff" evolves from the course available to all others.

The only remaining question: should (1) be done via a newsgroup or mail?

Please send in your votes -- I'll make the final decision next week.

Now down to the REALLY BIG decisions: names.  I suggest "net.ai.cse"
for level (2).  The "cse" can EITHER mean "Computer Science Education"
or abbreviate "course".  For level (1), how about "net.ai.ffa" for
"free-for-all", or .raw, or .disc, or .bull, or whatever.

Whatever I create gets zapped at end of course (June), unless by then it
has taken on a life of its own.

        -- Bob

[PS to those NOT ON USENET: please mail me your address for private
mailings -- and indicate which of the three "participation levels"
best suits your tastes.]

Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
UUCP:   bobgian@psuvax.UUCP       -or-    allegra!psuvax!bobgian
Arpa:   bobgian@PSUVAX1           -or-    bobgian%psuvax1.bitnet@Berkeley
Bitnet: bobgian@PSUVAX1.BITNET    CSnet:  bobgian@penn-state.csnet
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

End of AIList Digest
********************

∂17-Jan-84  2348	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #8 
Received: from SRI-AI by SU-AI with TCP/SMTP; 17 Jan 84  23:46:18 PST
Date: Tue 17 Jan 1984 22:43-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #8
To: AIList@SRI-AI


AIList Digest           Wednesday, 18 Jan 1984      Volume 2 : Issue 8

Today's Topics:
  Programming Languages - Lisp for IBM,
  Intelligence - Subcognition,
  Seminar - Knowledge-Based Design Environment
----------------------------------------------------------------------

Date: Thu 12 Jan 84 15:07:55-PST
From: Jeffrey Mogul <MOGUL@SU-SCORE.ARPA>
Subject: Re: lisp for IBM

                [Reprinted from the SU-SCORE bboard.]

        Does anyone know of LISP implementations for IBM 370--3033--308x?

Reminds me of an old joke:
        How many IBM machines does it take to run LISP?

        Answer: two -- one to send the input to the PDP-10, one
                to get the output back.

------------------------------

Date: Thursday, 12 Jan 1984 21:28-PST
From: Steven Tepper <greep@SU-DSN>
Subject: Re: lisp for IBM

                [Reprinted from the SU-SCORE bboard.]

Well, I used Lisp on a 360 once, but I certainly wouldn't recommend
that version (I don't remember where it came from anyway -- the authors
were probably so embarrassed they wanted to remain anonymous).  It
was, of course, a batch system, and its only output mode was "uglyprint" --
no matter what the input looked like, the output would just be printed
120 columns to a line.

------------------------------

Date: Fri 13 Jan 84 06:55:00-PST
From: Ethan Bradford <JLH.BRADFORD@SU-SIERRA.ARPA>
Subject: LISP (INTERLISP) for IBM

                [Reprinted from the SU-SCORE bboard.]

Chris Ryland (CPR@MIT-XX) sent out a query on this before and he got back
many good responses (he gave me copies).  The main thing most people said
is that a version was developed at Uppsala in Sweden in the 70's.  One
person gave an address to write to, which I transcribe here with no
guarantees that it is current:
    Klaus Appel
    UDAC
    Box 2103
    750 02 Uppsala
    Sweden
    Phone: 018-11 13 30

------------------------------

Date: 13 Jan 84  0922 PST
From: Jussi Ketonen <JK@SU-AI>
Subject: Lisp for IBM machines

                [Reprinted from the SU-SCORE bboard.]

Standard Lisp runs quite well on the IBM machines.
The folks over at IMSSS on campus know all about it --
they have written several large theorem proving/CAI programs for
that environment.

------------------------------

Date: 11 January 1984 06:27 EST
From: Jerry E. Pournelle <POURNE @ MIT-MC>
Subject: intelligence and genius

I should have thought that if you can make a machine more or
less intelligent; and make another machine ABLE TO RECOGNIZE
GENIUS (it need not itself be able to "be" or "have" genius)
then the "genius machine " problem is probably solved: have the
somewhat intelligent one generate lots of ideas, with random
factors thrown in, and have the second "recognizing" machine
judge the products.
        Obviously they could be combined into one machine.
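
A toy rendering of the two-machine scheme just described may make the shape of
the argument concrete.  Every detail below (the fragment vocabulary, the random
recombination, the scoring function) is my own illustrative assumption; in
particular, the recognizer is a stand-in for the genuinely hard part, a machine
able to recognize genius.

```python
# Generate-and-judge: a "somewhat intelligent" generator produces many
# candidate ideas with random variation; a separate recognizer scores
# them; combined, the pair acts as a single machine.

import random

def generate(fragments, rng, n=100):
    # Random recombination of known fragments into candidate "ideas".
    return [tuple(rng.choice(fragments) for _ in range(3)) for _ in range(n)]

def recognize(idea):
    # Stand-in scoring function; here it merely rewards diversity.
    return len(set(idea))

def genius_machine(fragments, rng):
    # The combined machine: generate candidates, keep the best-judged one.
    return max(generate(fragments, rng), key=recognize)

best = genius_machine(["form", "motion", "number"], random.Random(0))
print(best)
```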

------------------------------

Date: Sunday, 15 January 1984, 00:18-EST
From: Marek W. Lugowski <MAREK%MIT-OZ@MIT-MC.ARPA>
Subject: Addressing DRogers' questions (at last) + on subcognition

    DROGERS (c. November '84):
      I have a few questions I would like to ask, some (perhaps most)
    essentially unanswerable at this time.

Apologies in advance for rashly attempting to answer at this time.

      - Should the initially constructed subcognitive systems be
    "learning" systems, or should they be "knowledge-rich" systems? That
    is, are the subcognitive structures implanted with their knowledge
    of the domain by the programmer, or is the domain presented to the
    system in some "pure" initial state?  Is the approach to
    subcognitive systems without learning advisable, or even possible?

I would go out on a limb and claim that attempting wholesale "learning"
first (whatever that means these days) is silly.  I would think one
would first want to spike the system with a hell of a lot of knowledge
(e.g., Dughof's "Slipnet" of related concepts whose links are subject to
cumulative, partial activation which eventually makes the nodes so
connected highly relevant and therefore taken into consideration by the
system).  To repeat Minsky (and probably, most of the AI folk): one can
only learn if one already almost knows it.
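
The "Slipnet" mechanism mentioned above can be caricatured in a few lines.
The network, link weights, step count, and threshold below are all
illustrative assumptions, not Hofstadter's actual design; the point is only
that partial activation accumulates over links until some nodes cross a
relevance threshold.

```python
# Cumulative, partial spreading activation over a tiny concept network.
# Nodes whose accumulated activation crosses THRESHOLD become "relevant".

THRESHOLD = 1.2

links = {            # concept -> [(neighbor, link weight)]
    "opposite": [("successor", 0.4), ("predecessor", 0.4)],
    "successor": [("predecessor", 0.6)],
    "predecessor": [],
}

def spread(activation, steps=2):
    # Each step, every node passes a weighted share of its activation to
    # its neighbors; contributions accumulate rather than replace.
    act = dict(activation)
    for _ in range(steps):
        new = dict(act)
        for node, a in act.items():
            for neighbor, w in links.get(node, ()):
                new[neighbor] = new.get(neighbor, 0.0) + a * w
        act = new
    return act

act = spread({"opposite": 1.0, "successor": 0.5})
relevant = {node for node, a in act.items() if a >= THRESHOLD}
print(sorted(relevant))
```

Note that "predecessor" starts with no activation at all yet ends up the most
relevant node, purely through accumulation over links.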

      - Assuming human brains are embodiments of subcognitive systems,
    then we know how they were constructed: a very specific DNA
    blueprint controlling the paths of development possible at various
    times, with large assumptions as to the state of the intellectual
    environment.  This grand process was created by trial-and-error
    through the process of evolution, that is, essentially random
    chance. How much (if any) of the subcognitive system must be created
    essentially by random processes? If essentially all, then there are
    strict limits as to how the problem should be approached.

This is an empirical question.  If my now-attempted implementation of
the Copycat Project (which uses the Slipnet described above)
[forthcoming MIT AIM #755 by Doug Hofstadter] converges nicely, with
trivial tweaking, I'll be inclined to hold that random processes can
indeed do most of the work.  Such is my current, unfounded, belief.  On
the other hand, a failure will not debunk my position--I could always
have messed up implementationally and made bad guesses which "threw"
the system out of its potential convergence.

      - Which processes of the human brain are essentially subcognitive
    in construction, and which use other techniques? Is this balance
    optimal?  Which structures in a computational intelligence would be
    best approached subcognitively, and which by other methods?

Won't even touch the "optimal" question.  I would guess any process
involving a great deal of fan-in would need to be subcognitive in
nature.  This is argued from efficiency.  For now, and for want of
better theories, I'd approach ALL brain functions using subcognitive
models.  The alternative to this at present means von Neumannizing the
brain, an altogether quaint thing to do...

      - How are we to judge the success of a subcognitive system? The
    problems inherent in judging the "ability" of the so-called expert
    systems will be many times worse in this area. Without specific goal
    criteria, any results will be unsatisfying and potentially illusory
    to the watching world.

Performance and plausibility (in that order) ought to be our criteria.
Judging performance accurately, however, will continue to be difficult
as long as we are forced to use current computer architectures.
Still, if a subcognitive system converges at all on a LispM, there's no
reason to damn its performance.  Plausibility is easier to demonstrate;
one needs to keep in touch with the neurosciences to do that.

      - Where will thinking systems REALLY be more useful than (much
   refined) expert systems? I would guess that for many (most?)
   applications, expertise might be preferable to intelligence. Any
   suggestions about fields for which intelligent systems would have a
   real edge over (much improved) expert systems?

It's too early (or, too late?!) to draw such clean lines.  Perhaps REAL
thinking and expertise are much more intertwined than is currently
thought.  Anyway, there is nothing to be gained by pursuing that line of
questioning before WE learn how to explicitly organize knowledge better.


Over all, I defend pursuing things subcognitively for these reasons:

  -- Not expecting thinking to be a cleanly organized, top-down driven
  activity is minimizing one's expectations.  Compare thinking with such
  activities as cellular automata (e.g., The Game of Life) or The Iterated
  Pairwise Prisoner's Dilemma Game to convince yourself of the futility of
  top-down modeling where local rules and their iterated interactions are
  very successful at concisely describing the problem at hand.  No reason
  to expect the brain's top-level behavior to be any easier to explain
  away.

  -- AI has been spending a lot of itself on forcing a von Neumannian
  interpretation on the mind.  At CMU they have it down to an art, with
  Simon's "symbolic information processing" the nowadays proverbial Holy
  Grail.  With all due respect, I'd like to see more research devoted to
  modeling various alleged brain activities with a high degree of
  parallelism and probabilistic interaction, systems where "symbols" are
  not givens but intricately involved intermediates of computation.

  -- It has not been done carefully before and I want at least a thesis
  out of it.

                                -- Marek

------------------------------

Date: Mon, 16 Jan 1984  12:40 EST
From: GLD%MIT-OZ@MIT-MC.ARPA
Subject: minority report


     From: MAREK
     To repeat Minsky (and probably, most of the AI folk): one can
     only learn if one already almost knows it.

By "can only learn if..." do you mean "can't >soon< learn unless...", or
do you mean "can't >ever< learn unless..."?

If you mean "can't ever learn unless...", then the statement has the Platonic
implication that a person at infancy must "already almost know" everything she
is ever to learn.  This can't be true for any reasonable sense of "almost
know".

If you mean "can't soon learn unless...", then by "almost knows X", do you
intend:

 o a narrow interpretation, by which a person almost knows X only if she
   already has knowledge which is a good approximation to understanding X--
   eg, she can already answer simpler questions about X, or can answer
   questions about X, but with some confusion and error; or
 o a broader interpretation, which, in addition to the above, counts as
   "almost knowing X" a situation where a person might be completely in the
   dark about X-- say, unable to answer any questions about X-- but is on the
   verge of becoming an instant expert on X, say by discovering (or by being
   told of) some easy-to-perform mapping which reduces X to some other,
   already-well-understood domain.

If you intend the narrow interpretation, then the claim is false, since people
can (sometimes) soon learn X in the manner described in the broad-
interpretation example.  But if you intend the broad interpretation, then the
statement expands to "one can't soon learn X unless one's current knowledge
state is quickly transformable to include X"-- which is just a tautology.

So, if this analysis is right, the statement is either false, or empty.

------------------------------

Date: Mon, 16 Jan 1984  20:09 EST
From: MAREK%MIT-OZ@MIT-MC.ARPA
Subject: minority report

         From: MAREK
         To repeat Minsky (and probably, most of the AI folk): one can
         only learn if one already almost knows it.

    From: GLD
    By "can only learn if..." do you mean..."can't >ever< learn unless..."?

    If you mean "can't ever learn unless...", then the statement has
    the Platonic implication that a person at infancy must "already almost
    know" everything she is ever to learn.  This can't be true for any
    reasonable sense of "almost know".

I suppose I DO mean "can't ever learn unless".  However, I disagree
with your analysis.  The "Platonic implication" need not be what you
stated it to be if one cares to observe that some of the things an
entity can learn are...how to learn better and how to learn more.  My
original statement presupposes an existence of a category system--a
capacity to pigeonhole, if you will.  Surely you won't take issue with
the hypothesis that an infant's category system is lesser than that of
an adult.  Yet, faced with the fact that many infants do become
adults, we have to explain how the category system manages to grow
up as well.

In order to do so, I propose to think that human learning
is a process where, say, in order to assimilate a chunk of information
one has to have a hundred-, nay, a thousand-fold store of SIMILAR
chunks.  This is by direct analogy with physical growing up--it
happens very slowly, gradually, incrementally--and yet it happens.

If you recall, my original statement was made against attempting
"wholesale learning" as opposed to "knowledge-rich" systems when
building subcognitive systems.  Admittedly, the complexity of a human
being is many orders of magnitude beyond what AI will attempt
for decades to come, yet by observing the physical development of a
child we can arrive at some sobering tips for how to successfully
build complex systems.  Abandoning the utopia of having complex
systems just "self-organize" and pop out of simple interactions of a
few even simpler pieces is one such tip.

                                -- Marek

------------------------------

Date: Tue 17 Jan 84 11:56:01-PST
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: SIGLUNCH ANNOUNCEMENT - JANUARY 20, 1984

         [Reprinted from the Stanford SIGLUNCH distribution.]

Friday,   January 20, 1984   12:05

LOCATION: Chemistry Gazebo, between Physical & Organic Chemistry

SPEAKER:  Harold Brown
          Stanford University

TOPIC:    Palladio:  An Exploratory Environment for Circuit Design

Palladio is an environment for experimenting with design methodologies
and  knowledge-based  design   aids.   It  provides   the  means   for
constructing, testing  and incrementally  modifying design  tools  and
languages.  Palladio  is  a  testbed  for  investigating  elements  of
design including  specification,  simulation, refinement  and  use  of
previous designs.

For  the  designer,   Palladio  supports  the   construction  of   new
specification languages  particular to  the design  task at  hand  and
augmentation of  the  system's  expert knowledge  to  reflect  current
design goals  and constraints.   For the  design environment  builder,
Palladio provides several  programming paradigms:  rule based,  object
oriented,  data   oriented  and   logical  reasoning   based.    These
capabilities are largely provided by two of the programming systems in
which Palladio is implemented: LOOPS and MRS.

In this talk,  we will  describe the  basic design  concepts on  which
Palladio is  based,  give  examples  of  knowledge-based  design  aids
developed   within   the   environment,   and   describe    Palladio's
implementation.

------------------------------

End of AIList Digest
********************

∂22-Jan-84  1625	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #9 
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Jan 84  16:25:11 PST
Date: Sun 22 Jan 1984 15:15-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #9
To: AIList@SRI-AI


AIList Digest            Monday, 23 Jan 1984        Volume 2 : Issue 9

Today's Topics:
  AI Culture - Survey Results Available,
  Digests - Vision-List Request,
  Expert Systems - Software Debugging,
  Seminars - Logic Programming & Bagel Architecture,
  Conferences - Principles of Distributed Computing
----------------------------------------------------------------------

Date: 18 Jan 84 14:50:21 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: How AI People Think - Cultural Premises of the AI Community...

                 [Reprinted from the Rutgers bboard.]

How AI People Think - Cultural Premises of the AI Community...
is the name of a report by sociologists at the University of Genoa, Italy,
based on a survey of AI researchers attending the International AI conference
(IJCAI-83) this past summer.  [...]

Smadar.

------------------------------

Date: Wed, 18 Jan 84 13:08:34 PST
From: Philip Kahn <kahn@UCLA-CS>
Subject: TO THOSE INTERESTED IN COMPUTER VISION, IMAGE PROCESSING, ETC

        This is the second notice directed to all of those interested
in IMAGE PROCESSING, COMPUTER VISION, etc.  There has been a great need,
and interest, in compiling a VISION list that caters to the specialized
needs and interests of those involved in image/vision processing/theory/
implementation.  I broadcast a message to this effect over this BBOARD
about three weeks ago asking for all those that are interested to
respond.  Again, I reiterate the substance of that message:

        1)  If you are interested in participating in a VISION list,
            and have not already expressed your interest to me,
            please do so!  NOW is the time to express that interest,
            since NOW is when the need for such a list is being
            evaluated.
        2)  I cannot moderate the list (due to a lack of the proper type
            of resources to deal with the increased mail traffic).  A
            moderator is DESPERATELY NEEDED!  I will assist you in
            establishing the list, and I am presently in contact with
            the moderator of AILIST (Ken LAWS@SRI-AI) to establish what
            needs to be done.  The job of moderator involves the
            following:
                i)   All mail for the list is sent to you
                ii)  You screen (perhaps, format or edit, depending upon
                     the time and effort you wish to expend) all
                     incoming messages, then redistribute them to the
                     participants on the list at regular intervals.
                iii) You maintain/update the distribution list.
           Needless to say, the job of moderator is extremely rewarding
           and involves a great deal of high visibility.  In addition,
           you get to GREATLY AID in the dissemination and sharing of
           ideas and information in this growing field.  Enough said...
        3) If you know of ANYONE that might be interested in such a
           list, PLEASE LET THEM KNOW and have them express that interest
           to me by sending mail to KAHN@UCLA-CS.ARPA

                                Now's the time to let me know!
                                Philip Kahn

                        send mail to:  KAHN@UCLA-CS.ARPA

------------------------------

Date: 19 Jan 84 15:14:04 EST
From: Lou <STEINBERG@RUTGERS.ARPA>
Subject: Re: Expert systems for software debugging

I don't know of any serious work in AI on software debugging since
HACKER.  HACKER was a part of the planning work done at MIT some years
ago - it was an approach to planning/automatic programming where
planning was done with a simple planner that, e.g., ignored
interactions between plan steps.  Then HACKER ran the plan/program and
had a bunch of mini-experts that detected various kinds of bugs.  See
Sussman, A Computer Model of Skill Acquisition, MIT Press, 1975.

Also, there is some related work in hardware debugging.  Are you aware
of the work by Randy Davis at MIT and by Mike Genesereth at Stanford on
hardware trouble shooting?  This is the problem where you have a piece
of hardware (e.g. a VAX) that used to work but is now broken, and you
want to isolate the component (board, chip, etc.) that needs to be
replaced.  Of course this is a bit different from program debugging,
since you are looking for a broken component rather than a mis-design.
E.g. for trouble shooting you can usually assume a single thing is
broken, but you often have multiple bugs in a program.

Here at Rutgers, we're working on an aid for design debugging for
VLSI.  Design debugging is much more like software debugging.  Our
basic approach is to use a signal constraint propagation method to
generate a set of possible places where the bug might be, and then use
various sorts of heuristics to prune the set (e.g.  a sub-circuit
that's been used often before is less likely to have a bug than a
brand new one).
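The generate-then-prune approach described above can be sketched in a few
lines of Python.  This is only an illustration of the idea, not the Rutgers
system: the circuit, the reuse counts, and the scoring heuristic are all
invented for the example.

```python
# Toy circuit as a dict mapping each gate to the gates feeding it.
circuit = {"out": ["add1", "mux1"], "add1": ["reg1"], "mux1": ["reg1", "new1"]}

def suspects(bad_output, circuit):
    """Propagate the constraint violation backwards: every gate in the
    cone of influence of the bad output is a candidate bug site."""
    seen, stack = set(), [bad_output]
    while stack:
        g = stack.pop()
        if g not in seen:
            seen.add(g)
            stack.extend(circuit.get(g, []))
    return seen

# Heuristic pruning in the spirit of the example in the text: a
# sub-circuit reused many times before is a less likely bug site than
# a brand-new one.  (times_reused is assumed bookkeeping.)
times_reused = {"out": 3, "add1": 12, "mux1": 7, "reg1": 30, "new1": 0}
ranked = sorted(suspects("out", circuit), key=lambda g: times_reused[g])
# ranked[0] == "new1": the brand-new block is the prime suspect
```

The same two-phase shape (sound candidate generation, then heuristic
ranking) carries over to software debugging, where the "cone of influence"
becomes a dynamic slice of the program.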

------------------------------

Date: Fri, 20 Jan 84 8:39:38 EST
From: Paul Broome <broome@brl-bmd>
Subject: Re:  Expert systems for software debugging?


        Debugging is a black art, not at all algorithmic, but almost totally
        heuristic.  There is a lot of expert knowledge around about how
        to debug faulty programs, but it is rarely written down or
        systematized.  Usually it seems to reside solely in the minds of
        a few "debugging whizzes".

        Does anyone know of an expert system that assists in software
        debugging? Or any attempts (now or in the past) to produce such
        an expert?

There are some good ideas and a Prolog implementation in Ehud Shapiro's
Algorithmic Program Debugging, which is published as an ACM distinguished
dissertation by MIT Press, 1983.  One of his ideas is "divide-and-query:
a query-optimal diagnosis algorithm," which is essentially a simple binary
bug search.  If the program is incorrect on some input, the computation
tree is divided into two roughly equal subtrees and the intermediate
result at the midpoint is checked.  If this intermediate result is
correct then the first subtree is exonerated and the bug search is
repeated on the second subtree.  If the intermediate result is
incorrect then the search continues instead on the first subtree.
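Shapiro's algorithm works over computation trees with an oracle (usually
the programmer) answering queries; a minimal sketch over a linearized
trace conveys the binary-search flavor.  The factorial trace and the
oracle here are invented for illustration.

```python
def divide_and_query(n, oracle):
    """Binary search for the first incorrect step among n intermediate
    results, in the spirit of Shapiro's divide-and-query.  oracle(i)
    says whether step i is correct; steps before the bug are assumed
    correct and steps from the bug onward incorrect."""
    lo, hi = 0, n - 1                 # invariant: first bad step in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(mid):
            lo = mid + 1              # midpoint correct: bug lies later
        else:
            hi = mid                  # midpoint wrong: bug here or earlier
    return lo

# A buggy factorial trace that goes wrong at step 4:
expected = [1, 1, 2, 6, 24, 120]
observed = [1, 1, 2, 6, 48, 240]
bug_step = divide_and_query(len(observed), lambda i: observed[i] == expected[i])
# bug_step == 4, located with about log2(6), i.e. 3, oracle queries
```

The "query-optimal" claim is exactly the binary-search bound: the number
of oracle queries grows with the logarithm of the computation length
rather than linearly.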

------------------------------

Date: 20 Jan 84 19:25:30-PST (Fri)
From: pur-ee!uiucdcs!nielsen @ Ucb-Vax
Subject: Re: Expert systems for software debugging - (nf)
Article-I.D.: uiucdcs.4980

The Knowledge Based Programming Assistant Project here at the University of
Illinois was founded as a result of a very similar proposal.
A thesis you may be interested in which explains some of our work is
"GPSI : An Expert System to Aid in Program Debugging" by Andrew Laursen
which should be available through the university.

I would be very interested in corresponding with anyone who is considering
the use of expert systems in program debugging.

                                        Paul Nielsen
                                        {pur-ee, ihnp4}!uiucdcs!nielsen
                                        nielsen@uiucdcs

------------------------------

Date: 01/19/84 22:25:55
From: PLUKEL
Subject: January Monthly Meeting, Greater Boston Chapter/ACM

                 [Forwarded from MIT by SASW@MIT-MC.]


        On behalf of GBC/ACM,  J. Elliott Smith, the Lecture Chairman, is
        pleased to present a discussion on the topic of

                                LOGIC PROGRAMMING

                              Henryk Jan Komorowski
                          Division of Applied Sciences
                               Harvard University
                            Cambridge, Massachusetts

             Dr. Komorowski is an Assistant Professor of Computer Science,
        who  received  his MS from  Warsaw University  and  his PhD  from
        Linkoeping University, Linkoeping, Sweden, in 1981.   His current
        research interests include applications of logic programming  to:
        rapid  prototyping,  programming/specification development envir-
        onments, expert systems, and databases.

             Dr.  Komorowski's  articles have appeared in proceedings  of
        the  IXth  POPL,  the 1980 Logic Programming Workshop  (Debrecen,
        Hungary),  and the book "Logic Programming",  edited by Clark and
        Taernlund.   He  acted  as Program Chairman for the  recent  IEEE
        Prolog tutorial at Brandeis University, is serving on the Program
        Committee  of  the  1984 Logic  Programming  Symposium  (Atlantic
        City),  and is a member of the Editorial Board of THE JOURNAL  OF
        LOGIC PROGRAMMING.

             Prolog  has been selected as the programming language of the
        Japanese  Fifth  Generation Computer Project.   It is  the  first
        realization of logic programming ideas,  and implements a theorem
        prover  based  on a design attributed  to  J.A.  Robinson,  which
        limits resolution to a Horn clause subset of assertions.

             A  Prolog program is a collection of true statements in  the
        form  of RULES.   A computation is a proof from these assertions.
        Numerous   implementations  of  Prolog  have   elaborated   Alain
        Colmerauer's original, including Dr. Komorowski's own Qlog, which
        operates in LISP environments.

             Dr.  Komorowski  will present an introduction to  elementary
        logic  programming  concepts  and an overview  of  more  advanced
        topics,    including   metalevel   inference,    expert   systems
        programming, databases, and natural language processing.

                                 DATE:     Thursday, 26 January 1984
                                 TIME:     8:00 PM
                                 PLACE:    Intermetrics Atrium
                                           733 Concord Avenue
                                           Cambridge, MA
                                         (near Fresh Pond Circle)

                COMPUTER MOVIE and REFRESHMENTS before the talk.
                 Lecture dinner at 6pm open to all GBC members.
                   Call (617) 444-5222 for additional details.

------------------------------

Date: 20 Jan 84  1006 PST
From: Rod Brooks <ROD@SU-AI>
Subject: Shapiro Seminars at Stanford and Berkeley

      [Adapted from the SU-SCORE bboard and the Prolog Digest.]


  Ehud Shapiro, The Weizmann Institute of Science
  The Bagel: A Systolic Concurrent Prolog Machine

  4:30pm, Terman Auditorium, Tues, Jan 24th, Stanford CSD Colloq.
  1:30pm, Evans 597, Wed., Jan 25th, Berkeley Prolog Seminar



It is argued that explicit mapping of processes to processors is
essential to effectively program a general-purpose parallel computer,
and, as a consequence, that the kernel language of such a computer
should include a process-to-processor mapping notation.

The Bagel is a parallel architecture that combines concepts of
dataflow, graph-reduction and systolic arrays. The Bagel's kernel
language is Concurrent Prolog, augmented with Turtle programs as a
mapping notation.

Concurrent Prolog, combined with Turtle programs, can easily implement
systolic systems on the Bagel. Several systolic process structures are
explored via programming examples, including linear pipes (sieve of
Eratosthenes, merge sort, natural-language interface to a database),
rectangular arrays (rectangular matrix multiplication, band-matrix
multiplication, dynamic programming, array relaxation), static and
dynamic H-trees (divide-and-conquer, distributed database), and
chaotic structures (a herd of Turtles).

All programs shown have been debugged using the Turtle graphics Bagel
simulator, which is implemented in Prolog.
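The linear-pipe sieve mentioned above has a direct sequential analogue.
In this sketch, Python generators stand in for the chain of concurrent
filter processes; it illustrates the pipeline structure only, not the
Concurrent Prolog original or its Turtle-program mapping.

```python
import itertools

def integers_from(n):
    """Source process: the stream n, n+1, n+2, ..."""
    while True:
        yield n
        n += 1

def filter_multiples(p, stream):
    """One pipeline stage: pass along numbers not divisible by p."""
    for x in stream:
        if x % p != 0:
            yield x

def primes():
    """Sieve of Eratosthenes as a growing linear pipe of filters."""
    stream = integers_from(2)
    while True:
        p = next(stream)      # head of the stream is the next prime
        yield p
        stream = filter_multiples(p, stream)  # append a new filter stage

print(list(itertools.islice(primes(), 8)))   # [2, 3, 5, 7, 11, 13, 17, 19]
```

On the Bagel, each filter stage would be a process mapped to its own
processor, so candidates flow through the pipe concurrently; here the
stages run lazily in one thread, but the process structure is the same.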

------------------------------

Date: Fri 20 Jan 84 14:56:58-PST
From: Jayadev Misra <MISRA@SU-SIERRA.ARPA>
Subject: call for Papers- Principles of Distributed Computing


                         CALL FOR PAPERS
3rd ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing (PODC)

                        Vancouver, Canada
                      August 27 - 29, 1984

This conference will address fundamental issues in the theory  and
practice   of   concurrent  and  distributed  systems.   Original
research papers describing theoretical or  practical  aspects  of
specification,  design,  or  implementation  of  such  systems  are
sought.  Topics of interest include, but are not limited to,  the
following aspects of concurrent and distributed systems.

  . Algorithms
  . Formal models of computations
  . Methodologies for program development
  . Issues in specifications, semantics and verifications
  . Complexity results
  . Languages
  . Fundamental results in application areas such as
                distributed databases, communication protocols, distributed
                operating systems, distributed transaction processing systems,
                real time systems.

Please send eleven copies of a detailed abstract (not a  complete
paper) not exceeding 10 double spaced typewritten pages, by MARCH
8, 1984, to the Program Chairman:

  Prof. J. Misra
  Computer Science Department
  University of Texas
  Austin, Texas 78712

The abstract must include a clear description of the problem  be-
ing  addressed, comparisons with extant work and a section on ma-
jor original contributions of this work.  The abstract must  pro-
vide  sufficient detail for the program committee to make a deci-
sion.  Papers will be chosen on the basis  of  scientific  merit,
originality, clarity and appropriateness for this conference.

Authors will be notified of acceptance by April  30,  1984.   Ac-
cepted  papers,  typed on special forms, are due at the above ad-
dress by June 1, 1984.  Authors of accepted papers will be  asked
to sign ACM Copyright forms.

The Conference Chairman is Professor  Tiko  Kameda  (Simon  Fraser
University).   The Publicity Chairman is Professor Nicola Santoro
(Carleton University).  The Local Arrangements Chairman is Profes-
sor Joseph Peters (Simon Fraser University).  The Program Commit-
tee consists of Ed Clarke (C.M.U.), Greg  N.  Frederickson  (Pur-
due),  Simon Lam (U of Texas, Austin), Leslie Lamport (SRI Inter-
national), Michael Malcom (U  of  Waterloo),  J.  Misra,  Program
Chairman  (U of Texas, Austin), Hector G. Molina (Princeton), Su-
san Owicki (Stanford), Fred Schneider (Cornell),  H.  Ray  Strong
(I.B.M. San Jose), and Howard Sturgis (Xerox Parc).

------------------------------

End of AIList Digest
********************

∂30-Jan-84  2209	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #10
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Jan 84  22:08:55 PST
Date: Thu 26 Jan 1984 14:23-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #10
To: AIList@SRI-AI


AIList Digest            Friday, 27 Jan 1984       Volume 2 : Issue 10

Today's Topics:
  AI Culture - IJCAI Survey,
  Cognition - Parallel Processing Query,
  Programming Languages - Symbolics Support & PROLOG/ZOG Request,
  AI Software - KEE Knowledge Representation System,
  Review - Rivest Forsythe Lecture on Learning,
  Seminars - Learning with Constraints & Semantics of PROLOG,
  Courses - CMU Graduate Program in Human-Computer Interaction
----------------------------------------------------------------------

Date: 24 Jan 84 12:19:21 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: Report on "How AI People Think..."

I received a free copy because I attended IJCAI.  I have an address
here, but I don't know if it is the appropriate one for ordering this
report:

Re: the report "How AI People Think - Cultural Premises of the AI community"
Commission of the European Communities
Rue de la Loi, 200
B-1049 Brussels, Belgium

(The report was compiled by Massimo Negrotti, Chair of Sociology of
 Knowledge, University of Genoa, Italy)

Smadar (KEDAR-CABELLI@RUTGERS).

------------------------------

Date: Wed 18 Jan 84 11:05:26-PST
From: Rene Bach <BACH@SUMEX-AIM.ARPA>
Subject: brain, a parallel processor ?

What evidence is there that the brain is a parallel processor?  My own
introspection seems to indicate that mine is doing time-sharing.  That is,
I can follow only one idea at a time, but with a lot of switching
between reasoning paths (often more undirected than controlled
switching).  Do different people have different processors?  Or is the brain
able to function in more than one way (parallel, serial, time-sharing)?

Rene (bach@sumex)

------------------------------

Date: Wed, 25 Jan 84 15:37:39 CST
From: Mike Caplinger <mike@rice>
Subject: Symbolics support for non-Lisp languages

[This is neither an AI nor a graphics question per se, but I thought
these lists had the best chance of reaching Symbolics users...]

What kind of support do the Symbolics machines provide for languages
other than Lisp?  Specifically, are there interactive debugging
facilities for Fortran, Pascal, etc.?  It's my understanding that the
compilers generate Lisp output.  Is this true, and if so, is the
interactive nature of Lisp exploited, or are the languages just
provided as batch compilers?  Finally, does anyone have anything to say
about efficiency?

Answers to me, and I'll summarize if there's any interest.  Thanks.

------------------------------

Date: Wed 25 Jan 84 09:38:25-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: KEE Representation System

The Jan. issue of IEEE Computer Graphics reports the following:

IntelliGenetics has introduced the Knowledge Engineering Environment
AI software development system for AI professionals, computer
scientists, and domain specialists.  The database management program
development system is graphics oriented and interactive, permitting
use of a mouse, keyboard, command-option menus, display-screen
windows, and graphic symbols.

KEE is a frame-based representation system that provides support
for descriptive and procedural knowledge representation, and a
declarative, extendable formalism for controlling inheritance of
attributes and attribute values between related units of
knowledge.  The system provides support for multiple inheritance
hierarchies; the use of user-extendable data types to promote
knowledge-base integrity; object-oriented programming; multiple-
inference engines/rule systems; and a modular system design through
multiple knowledge bases.

The first copy of KEE sells for $60,000; the second for $20,000.
Twenty copies cost $5000 each.

------------------------------

Date: 01/24/84 12:08:36
From: JAWS@MIT-MC
Subject: PROLOG and/or ZOG for TOPS-10

Does anyone out there know where I can get a version of PROLOG and/or
ZOG that will run on a DEC-10 (7.01)?  The installation is owned by the
US government, albeit a benign part of it (DOT).

                                THANX JAWS@MC

------------------------------

Date: Tue 24 Jan 84 11:26:14-PST
From: Armar Archbold <ARCHBOLD@SRI-AI.ARPA>
Subject: Rivest Forsythe Lecture on Learning

[The following is a review of a Stanford talk, "Reflections on AI", by
Dr. Ron Rivest of MIT.  I have edited the original slightly after getting
Armar's permission to pass it along.  -- KIL]

Dr. Rivest's  talk  emphasized  the interest of small-scale studies of
learning through experience (a "critter"  with  a  few  sensing  and
effecting operations building up a world model of a blocks environment).
He stressed such familiar themes as

   - "the evolutionary function and value of world  models  is  predicting
     the  future,  and  consequently  knowledge is composed principally of
     expectations, possibilities, hypotheses -  testable  action-sensation
     sequences, at the lowest level of sophistication",

   - "the  field  of  AI  has  focussed  more  on 'backdoor AI', where you
     directly  program  in   data   structures   representing   high-level
     knowledge,  than  on  'front-door' AI, which studies how knowledge is
     built up from non-verbal experience, or 'side door AI', which studies
     how knowledge might be gained through teaching and instruction  using
     language;

   - such a study of simple learning systems in a simple environment -- in
     which an agent with a given  vocabulary  but  little  or  no  initial
     knowledge  ("tabula  rasa")  investigates  the  world (either through
     active experimentation  or through changes  imposed  by  perturbations
     in  the  surroundings)  and  attempts  to  construct a useful body of
     knowledge   through   recognition   of   identities,    equivalences,
     symmetries,  homomorphisms,  etc.,  and  eventually  metapatterns, in
     action-sensation chains (represented perhaps in dynamic logic) --  is
     of considerable interest.

Such concepts are not new. There have been many mathematical studies,
psychological simulations, and AI explorations along these lines since the
50s.  At SRI, Stan Rosenschein was playing around with a simplified learning
critter about a year ago; Peter Cheeseman shares Rivest's interest in
Jaynes' use of entropy calculations to induce safe hypotheses in an
overwhelmingly profuse space of possibilities.  Even so, these concerns
were worth having reactivated by a talk.  The issues raised by some of the
questions from the audience were also interesting, albeit familiar:

   - The critter which starts out with a tabula rasa  will  only  make  it
     through  the  enormous  space  of  possible  patterns induceable from
     experience if it initially "knows" an awful lot about how  to  learn,
     at  whatever  level  of  procedural  abstraction  and/or  "primitive"
     feature selection (such as that done at the level of the eye itself).

   - Do we call intelligence the procedures that permit one to gain useful
     knowledge (rapidly), or the knowledge thus gained, or what mixture of
     both?

   - In addition, there is the question  of  what  motivational  structure
     best furthers the critter's education.  If the critter attaches value
     to  minimum  surprise (various statistical/entropy measures thereof),
     it can sit in a corner and do nothing, in which case it may  one  day
     suddenly  be very surprised and very dead.  If it attaches tremendous
     value to surprise, it could just flip a coin and always  be  somewhat
     surprised.    The  mix  between repetition (non-surprise/confirmatory
     testing) and exploration which produces the best cognitive system  is
     a  fundamental  problem.   And there is the notion of "best" - "best"
     given the critter's values other than curiosity, or "best"  in  terms
     of  survivability,  or  "best"  in  a  kind  of  Occam's  razor sense
     vis-a-vis truth (here it was commented you could rank Carnapian world
     models based on the  simple  primitive  predicates  using  Kolmogorov
     complexity measures, if one could only calculate the latter...)

   - The  success  or  failure  of the critter to acquire useful knowledge
     depends very much on the particular world it is placed in.    Certain
     sequences  of  stimuli will produce learning and others won't, with a
     reasonable, simple learning procedure.  In simple artificial  worlds,
     it  is possible to form some kind of measure of the complexity of the
     environment by seeing what the minimum length action-sensation chains
     are which are true regularities.  Here there is  another  traditional
     but  fascinating question: what are the best worlds for learning with
     respect to  critters  of  a  given  type  -  if  the  world  is  very
     stochastic,  nothing  can  be learned in time; if the world is almost
     unchanging, there is little motivation to learn and  precious  little
     data about regular covariances to learn from.

     Indeed,  in  psychological studies, there are certain sequences which
     will bolster reliance on certain conclusions to such an  extent  that
     those    conclusions    become    (illegitimately)   protected   from
     disconfirmation.  Could one recreate this phenomenon  with  a  simple
     learning  critter  with a certain motivational structure in a certain
     kind of world?

Although these issues seemed familiar, the talk certainly could stimulate
the general public.

                                                                 Cheers - Armar

------------------------------

Date: Tue 24 Jan 84 15:45:06-PST
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: SIGLUNCH ANNOUNCEMENT - FRIDAY, January 27, 1984

           [Reprinted from the Stanford SIGLUNCH distribution.]

Friday,   January 27, 1984
Chemistry Gazebo, between Physical & Organic Chemistry
12:05

SPEAKER:  Tom Dietterich, HPP
          Stanford University

TOPIC:    Learning with Constraints

In attempting to construct a program  that can learn the semantics  of
UNIX commands, several shortcomings of existing AI learning techniques
have been  uncovered.  Virtually  all  existing learning  systems  are
unable to (a)  perform data  interpretation in a  principled way,  (b)
form theories about systems that contain substantial amounts of  state
information, (c) learn from  partial data, and (d)  learn in a  highly
incremental fashion.  This talk  will describe these shortcomings  and
present techniques  for overcoming  them.  The  basic approach  is  to
employ a vocabulary of constraints to represent partial knowledge  and
to apply  constraint-propagation techniques  to draw  inferences  from
this partial knowledge.  These techniques  are being implemented in  a
system called EG,  whose task is  to learn the  semantics of 13  UNIX
commands (ls, cp,  mv, ln, rm,  cd, pwd, chmod,  umask, type,  create,
mkdir, rmdir) by watching "over the shoulder" of a teacher.

------------------------------

Date: 01/25/84 17:07:14
From: AH
Subject: Theory of Computation Seminar

                       [Forwarded from MIT-MC by SASW.]


                           DATE:  February 2nd, 1984
                           TIME:  3:45PM  Refreshments
                                  4:00PM  Lecture
                          PLACE:  NE43-512A

           "OPERATIONAL AND DENOTATIONAL SEMANTICS FOR P R O L O G"

                                      by

                                 Neil D. Jones
                              Datalogisk Institut
                             Copenhagen University

                                   Abstract

  A PROLOG program can go into an infinite loop even when there exists a
refutation of its clauses by resolution theorem-proving methods.  Consequently
one cannot identify resolution of Horn clauses in first-order logic with
PROLOG  as it is actually used, namely, as a deterministic programming
language.  In this talk two "computational" semantics of PROLOG will be given.
One is operational and is expressed as an SECD-style interpreter which is
suitable for computer implementation.  The other is a Scott-Strachey style
denotational semantics.  Both were developed from the SLD-refutation procedure
of Kowalski, Apt, and van Emden, and both handle "cut".
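The gap between refutation-completeness and Prolog's deterministic
strategy shows up already in the propositional case.  The sketch below is
an invented toy, not the semantics of the talk: it does left-to-right,
depth-first search over propositional Horn clauses, with a depth bound
standing in for the branches on which real Prolog would loop forever.

```python
def solve(goals, program, depth):
    """SLD-style search: try clauses in textual order, expand the
    leftmost goal first, and cut off branches deeper than `depth`."""
    if not goals:
        return True               # empty goal list: refutation found
    if depth == 0:
        return False              # bound exhausted on this branch
    goal, rest = goals[0], goals[1:]
    for head, body in program:    # clauses tried in textual order
        if head == goal and solve(body + rest, program, depth - 1):
            return True
    return False

# Both programs entail p, so a refutation of ?- p. exists for each,
# but unbounded depth-first search diverges on the first ordering:
looping  = [("p", ["p"]), ("p", [])]    # p :- p.   p.
friendly = [("p", []), ("p", ["p"])]    # p.        p :- p.

print(solve(["p"], friendly, 5), solve(["p"], looping, 5))  # True True
```

With the bound, the interpreter eventually backtracks from the
`p :- p.` branch and finds the fact `p.`; drop the bound and the
`looping` program never terminates, even though resolution can refute
the goal in one step.  That is exactly why a semantics for Prolog "as
it is actually used" cannot be plain Horn-clause resolution.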

HOST:  Professor Albert R. Meyer

------------------------------

Date:     Wednesday, 25 Jan 84 23:47:29 EST
From:     reiser (brian reiser) @ cmu-psy-a
Reply-to: <Reiser%CMU-PSY-A@CMU-CS-PT>
Subject:  Human-Computer Interaction Program at CMU

                         ***** ANNOUNCEMENT *****

              Graduate Program in Human-Computer Interaction
                       at Carnegie-Mellon University

The  field  of  human-computer  interaction  brings  to  bear  theories and
methodologies from cognitive psychology and computer science to the  design
of   computer   systems,   to   instruction   about   computers,   and   to
computer-assisted instruction.  The new Human-Computer Interaction  program
at  CMU is geared toward the development of cognitive models of the complex
interaction between learning, memory, and language mechanisms  involved  in
using  computers.    Students  in  the  program  apply their psychology and
computer science  training  to  research  in  both  academic  and  industry
settings.

Students in the Human-Computer Interaction program design their educational
curricula  with  the  advice  of  three  faculty  members  who serve as the
student's committee.  The intent  of  the  program  is  to  guarantee  that
students   have  the  right  combination  of  basic  and  applied  research
experience and coursework so that they  can  do  leading  research  in  the
rapidly developing field of human-computer interaction.  Students typically
take  one  psychology  course and one computer science course each semester
for the first two years.  In addition, students participate in a seminar on
human-computer interaction held during the summer  of  the  first  year  in
which  leading  industry  researchers are invited to describe their current
projects.

Students are also actively involved in research throughout  their  graduate
career.    Research  training  begins  with  a collaborative and apprentice
relationship with a faculty member in laboratory research for the first one
or two years of the program.  Such involvement allows the  student  several
repeated   exposures  to  the  whole  sequence  of  research  in  cognitive
psychology and computer science, including conceptualization of a  problem,
design   and   execution   of   experiments,  analyzing  data,  design  and
implementation of computer systems, and writing scientific reports.

In the second half  of  their  graduate  career,  students  participate  in
seminars,  teaching,  and  an  extensive  research project culminating in a
dissertation.  In addition, an important component  of  students'  training
involves  an  internship working on an applied project outside the academic
setting.  Students and faculty in the  Human-Computer  Interaction  program
are  currently studying many different cognitive tasks involving computers,
including: construction of algorithms, design of instruction  for  computer
users,  design of user-friendly systems, and the application of theories of
learning and problem solving to the design of systems for computer-assisted
instruction.

Carnegie-Mellon University is exceptionally well suited for  a  program  in
human-computer   interaction.    It  combines  a  strong  computer  science
department with a strong  psychology  department  and  has  many  lines  of
communication  between  them.   There are many shared seminars and research
projects.  They also share in a computational community defined by a  large
network  of  computers.  In addition, CMU and IBM have committed to a major
effort to integrate personal computers into college education.    By  1986,
every  student  on  campus  will  have a powerful state-of-the-art personal
computer.  It is anticipated that members of the Human-Computer Interaction
program will be involved in various aspects of this effort.

The  following  faculty  from  the  CMU  Psychology  and  Computer  Science
departments  are  participating  in the Human-Computer Interaction Program:
John R. Anderson, Jaime G. Carbonell, John  R. Hayes,  Elaine  Kant,  David
Klahr, Jill H. Larkin, Philip L. Miller, Allen Newell, Lynne M. Reder, and
Brian J. Reiser.

Our   deadline   for   receiving   applications,   including   letters   of
recommendation,  is  March  1st.  Further information about our program and
application materials may be obtained from:

     John R. Anderson
     Department of Psychology
     Carnegie-Mellon University
     Pittsburgh, PA  15213

------------------------------

End of AIList Digest
********************

∂02-Feb-84  0229	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #11
Received: from SRI-AI by SU-AI with TCP/SMTP; 2 Feb 84  02:28:51 PST
Date: Tue 31 Jan 1984 10:05-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #11
To: AIList@SRI-AI


AIList Digest            Tuesday, 31 Jan 1984      Volume 2 : Issue 11

Today's Topics:
  Techniques - Beam Search Request,
  Expert Systems - Expert Debuggers,
  Mathematics - Arnold Arnold Story,
  Courses - PSU Spring AI Mailing Lists,
  Awards - Fredkin Prize for Computer Math Discovery,
  Brain Theory - Parallel Processing,
  Intelligence - Psychological Definition,
  Seminars - Self-Organizing Knowledge Base, Learning, Task Models
----------------------------------------------------------------------

Date: 26 Jan 1984 21:44:11-EST
From: Peng.Si.Ow@CMU-RI-ISL1
Subject: Beam Search

I would be most grateful for any information/references to studies and/or
applications of Beam Search, the search procedure used in HARPY.

                                                        Peng Si Ow
                                                      pso@CMU-RI-ISL1
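For readers unfamiliar with the technique: beam search is a breadth-first search that keeps only the best few partial paths at each step, trading completeness for bounded time and memory.  A minimal sketch follows; the `expand`, `score`, and `is_goal` functions are illustrative stand-ins, and this is not HARPY's actual implementation:

```python
# Minimal beam search sketch: at each step, expand every path in the
# current beam, then keep only the `beam_width` highest-scoring
# candidate paths for the next step.

def beam_search(start, expand, score, is_goal, beam_width=3, max_steps=10):
    beam = [[start]]
    for _ in range(max_steps):
        candidates = []
        for path in beam:
            if is_goal(path[-1]):       # goal test on the path frontier
                return path
            for nxt in expand(path[-1]):
                candidates.append(path + [nxt])
        if not candidates:              # search space exhausted
            break
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]  # prune to the k best paths
    return beam[0] if beam else None
```

Widening `beam_width` recovers more of breadth-first search's completeness at proportionally higher cost; a beam that is too narrow can prune away every path to the goal.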

------------------------------

Date: 25 Jan 84 7:51:06-PST (Wed)
From: harpo!eagle!mhuxl!ulysses!unc!mcnc!ncsu!uvacs!erh @ Ucb-Vax
Subject: Expert debuggers
Article-I.D.: uvacs.1148

        See also "Sniffer: a system that understands bugs", Daniel G. Shapiro,
MIT AI Lab Memo AIM-638, June 1981
        (The debugging knowledge of Sniffer is organized as a bunch of tiny
experts, each understanding a specific type of error.  The program has an in-
depth understanding of a (very) limited class of errors.  It consists of
a cliche-finder and a "time rover".  Master's thesis.)

------------------------------

Date: Thursday, 26-Jan-84  19:11:37-GMT
From: BILL (on ERCC DEC-10) <Clocksin%edxa@ucl-cs.arpa>
Reply-to: Clocksin <Clocksin%edxa@ucl-cs.arpa>
Subject: AIList entry

In reference to a previous AIList correspondent wishing to know more about
Arnold Arnold's "proof" of Fermat's Last Theorem, last week's issue of
New Scientist explains all.  The "proof" is faulty, as expected.
Mr Arnold is a self-styled "cybernetician" who has a history of grabbing
headlines with announcements of revolutionary results which are later
proven faulty on trivial grounds.  I suppose A.I. has to put up with
its share of circle squarers and angle trisectors.

------------------------------

Date: 28 Jan 84 18:23:09-PST (Sat)
From: ihnp4!houxm!hocda!hou3c!burl!clyde!akgua!sb1!sb6!bpa!burdvax!psuvax!bobgian@Ucb-Vax
Subject: PSU Spring AI mailing lists
Article-I.D.: psuvax.433

I will be using net.ai for occasionally reporting "interesting" items
relating to the PSU Spring AI course.

If anybody would also like "administrivia" mailings (which could get
humorous at times!), please let me know.

Also, if you want to be included on the "free-for-all" discussion list,
which will include flames and other assorted idiocies, let me know that
too.  Otherwise you'll get only "important" items.

The "official Netwide course" (ie, net.ai.cse) will start up in a month
or so.  Meanwhile, you are welcome to join the fun via mail!

Bob

Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
UUCP:   bobgian@psuvax.UUCP       -or-    allegra!psuvax!bobgian
Arpa:   bobgian@PSUVAX1           -or-    bobgian%psuvax1.bitnet@Berkeley
Bitnet: bobgian@PSUVAX1.BITNET    CSnet:  bobgian@penn-state.csnet
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

Date: 26 Jan 84 19:39:53 EST
From: AMAREL@RUTGERS.ARPA
Subject: Fredkin Prize for Computer Math Discovery

                 [Reprinted from the RUTGERS bboard.]

Fredkin Prize to be Awarded for Computer Math Discovery

LOUISVILLE,  Ky.--The  Fredkin  Foundation  will award a $100,000 prize for the
first computer to make a major mathematical discovery, it was  announced  today
(Jan. 26).

Carnegie-Mellon  University  has  been  named trustee of the "Fredkin Prize for
Computer Discovery in Mathematics", according to Raj  Reddy,  director  of  the
university's Robotics Institute, and a trustee of IJCAI (International Joint
Conference on Artificial Intelligence) responsible for AI prizes.  Reddy said the
prize  will be awarded "for a mathematical work of distinction in which some of
the pivotal ideas have been found automatically by a computer program in  which
they were not initially implicit."

"The criteria for awarding this prize will be widely publicized and reviewed by
the  artificial  intelligence  and  mathematics  communities to determine their
adequacy," Reddy said.

Dr. Woody Bledsoe of the University of Texas at Austin will head a committee of
experts  who  will  define  the  rules  of  the  competition.      Bledsoe   is
president-elect of the American Association for Artificial Intelligence.

"It  is  hoped,"  said  Bledsoe,  "that  this  prize  will stimulate the use of
computers in mathematical research and have a good long-range effect on all  of
science."

The  committee  of mathematicians and computer scientists which will define the
rules of the competition includes:  William Eaton of the University of Texas at
Austin, Daniel  Gorenstein  of  Rutgers  University,  Paul  Halmos  of  Indiana
University,  Ken  Kunen  of  the  University of Wisconsin, Dan Mauldin of North
Texas State University and John McCarthy of Stanford University.

Also, Hugh Montgomery of the University of Michigan, Jack Schwartz of New  York
University,  Michael  Starbird  of  the  University  of  Texas  at  Austin, Ken
Stolarsky of  the  University  of  Illinois  and  Francois  Treves  of  Rutgers
University.

The  Fredkin Foundation has a similar prize for a world champion computer chess
system.  Recently, $5,000 was awarded to Ken Thompson and Joseph  Condon,  Bell
Laboratories  researchers  who developed the first computer system to achieve a
Master rating in tournament chess.

------------------------------

Date: 26 Jan 84 15:34:50 PST (Thu)
From: Mike Brzustowicz <mab@aids-unix>
Subject: Re: Rene Bach's query on parallel processing in the brain

What happens when something is "on the tip of your tongue" but is beyond
recall?  Often (for me at least) if the effort to recall is displaced
by some other cognitive activity, the searched-for information "pops up"
at a later time.  To me, this suggests at least one background process.

                                -Mike (mab@AIDS-UNIX)

------------------------------

Date: Thu, 26 Jan 84 17:19:30 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: How my brain works

I find that most of what my brain does is pattern interpretation.  I receive
various sensory input in the form of various kinds of vibrations (i.e.,
electromagnetic and acoustic) and my brain perceives patterns in this muck.
Then it attaches meanings to the patterns.  Within limits, I can attach these
meanings at will.  The process of logical deduction a la Socrates takes up
a negligible time-slice in the CPU.

  --Charlie

------------------------------

Date: Fri, 27 Jan 84 15:35:21 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Re: How my brain works

I see what you mean about the question of whether the brain is a parallel
processor in conscious reasoning or not.  I also feel like a little daemon
that sits and pays attention to different lines of thought at different times.

An interesting counterexample is the aha! phenomenon.  The mathematician
Henri Poincare, among others, has written an essay about his experience of
being interrupted from his conscious attention somehow and becoming instantly
aware of the solution to a problem he had "given up" on some days before.
It was as though some part of his brain had been working on the problem all
along even though he had not been aware of it.  When it had gotten the solution
an interrupt occurred and his conscious mind was triggered into the awareness
of  the solution.

  --Charlie

------------------------------

Date: Mon 30 Jan 84 09:47:49-EST
From: Alexander Sen Yeh <AY@MIT-XX.ARPA>
Subject: Request for Information

I am getting started on a project which combines symbolic artificial
intelligence and image enhancement techniques.  Any leads on past and
present attempts at doing this (or at combining symbolic a.i. with
signal processing or even numerical methods in general) would be
greatly appreciated.  I will send a summary of replies to AILIST and
VISION LIST in the future.  Thanks.

--Alex Yeh
--electronic mail: AY@MIT-XX.ARPA
--US mail: Rm. 222, 545 Technology Square, Cambridge, MA 02139

------------------------------

Date: 30 January 1984 1554-est
From: RTaylor.5581i27TK @ RADC-MULTICS
Subject: RE:  brain, a parallel processor ?

I agree: based on my own observations, my brain appears to be
working more like a time-sharing unit...complete with slowdowns,
crashes, etc., due to overloading of the inputs by fatigue, poor maintenance,
and numerous inputs coming too fast to be covered by the
time-sharing/switching mechanism!
                              Roz

------------------------------

Date: Monday, 30 Jan 84 14:33:07 EST
From: shrager (jeff shrager) @ cmu-psy-a
Subject: Psychological Definition of (human) Intelligence

Recommended reading for persons interested in a psychological view of
(human) intelligence:

Sternberg, R.J. (1983) "What should intelligence tests test?  Implications
 of a triarchic theory of intelligence for intelligence testing."  in
 Educational Researcher, Jan 1984.  Vol. 13 #1.

This easily read article (written for educational researchers) reviews
Sternberg's current view of what makes intelligent persons intelligent:

"The triarchic theory accounts for why IQ tests work as well as they do
 and suggests ways in which they might be improved...."

Although the readership of this list is probably not interested in IQ tests
per se, Sternberg is the foremost cognitive psychologist concerned directly
with intelligence, so his view of "What is intelligence?" will be of interest.
This is reviewed quite nicely in the cited paper:

"The triarchic theory of human intelligence comprises three subtheories.  The
first relates intelligence to the internal world of the individual,
specifying the mental mechanisms that lead to more and less intelligent
behavior.  This subtheory specifies three kinds of information processing
components that are instrumental in (a) learning how to do things, (b)
planning what to do and how to do them, and in (c) actually doing them. ...
The second subtheory specifies those points along the continuum of one's
experience with tasks or situations that most critically involve the use of
intelligence.  In particular, the account emphasizes the roles of novelty
(...) and of automatization (...) in intelligence.  The third subtheory
relates intelligence to the external world of the individual, specifying
three classes of acts -- environmental adaptation, selection, and shaping --
that characterize intelligent behavior in the everyday world."

There is more detail in the cited article.

(Robert J. Sternberg is professor of Psychology at Yale University.  See
also, his paper in Behavioral and Brain Sciences (1980, 3, 573-584): "Sketch of
a componential subtheory of human intelligence." and his book (in press with
Cambridge Univ. Press): "Beyond IQ: A triarchic theory of human
intelligence.")

------------------------------

Date: Thu 26 Jan 84 14:11:55-CST
From: CS.BUCKLEY@UTEXAS-20.ARPA
Subject: Database Seminar

                [Reprinted from the UTEXAS-20 bboard.]

    4-5 Wed afternoon in Pai 5.60 [...]

    Mail-From: CS.LEVINSON created at 23-Jan-84 15:47:25

    I am developing a system which will serve as a self-organizing
    knowledge base for an expert system. The knowledge base is currently
    being developed to store and retrieve Organic Chemical reactions. As
    the fundamental structures of the system are merely graphs and sets,
    I am interested in finding other domains in which the system could be used.

    Expert systems require a large amount of knowledge in order to perform
    their tasks successfully. In order for knowledge to be useful for the
    expert task it must be characterized accurately. Data characterization
    is usually the responsibility of the system designer and the
    consulting experts. It is my belief that the computer itself can be
    used to help characterize and classify its knowledge. The system's
    design is based on the assumption that the key to knowledge
    characterization is pattern recognition.

------------------------------

Date: 28 Jan 84 21:25:17 EST
From: MSIMS@RUTGERS.ARPA
Subject: Machine Learning Seminar Talk by R. Banerji

                 [Reprinted from the RUTGERS bboard.]

                MACHINE LEARNING SEMINAR

Speaker:        Ranan Banerji
                St. Joseph's University, Philadelphia, Pa. 19130

Subject:        An explanation of 'The Induction of Theories from
                Facts' and its relation to LEX and MARVIN


In Ehud Shapiro's Yale thesis work he presented a framework for
inductive inference in logic, called the incremental inductive
inference algorithm.  His Model Inference System was able to infer
axiomatizations of concrete models from a small number of facts in a
practical amount of time.  Dr. Banerji will relate Shapiro's work to
the kind of inductive work going on with the LEX project using the
version space concept of Tom Mitchell, and the positive focusing work
represented by Claude Sammut's MARVIN.

Date:           Monday, January 30, 1984
Time:           2:00-3:30
Place:          Hill 7th floor lounge (alcove)
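The incremental style of inference described above can be caricatured in a few lines: keep a current hypothesis, and when a newly arriving fact contradicts it, fall back to the next candidate consistent with everything seen so far.  The sketch below is a toy under strong assumptions (a small, explicitly enumerated hypothesis space ordered from general to specific); Shapiro's Model Inference System searched a space of Horn-clause theories, not numeric thresholds:

```python
# Toy incremental inductive inference: refine the current hypothesis
# only when a new fact contradicts it, always choosing the first
# remaining hypothesis consistent with all facts observed so far.

def infer(facts, hypotheses):
    """facts: (value, label) pairs arriving one at a time.
    hypotheses: candidate predicates, most general first.
    Returns a hypothesis consistent with every fact seen."""
    seen = []
    current = hypotheses[0]
    for value, label in facts:
        seen.append((value, label))
        if current(value) != label:  # counterexample: refine
            current = next(h for h in hypotheses
                           if all(h(v) == l for v, l in seen))
    return current
```

The `next(...)` call raises `StopIteration` if no candidate explains the data, the toy analogue of a theory space that cannot cover the facts.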

------------------------------

Date: 30 Jan 84  1653 PST
From: Terry Winograd <TW@SU-AI>
Subject: Talkware seminar Mon Feb 6, Tom Moran (PARC)

                [Reprinted from the SU-SCORE bboard.]

Talkware Seminar (CS 377)

Date: Feb 6
Speaker: Thomas P. Moran, Xerox PARC
Topic: Command Language Systems, Conceptual Models, and Tasks
Time: 2:15-4
Place: 200-205

Perhaps the most important property for the usability of command language
systems is consistency.  This notion usually refers to the internal
(self-) consistency of the language.  But I would like to reorient the
notion of consistency to focus on the task domain for which the system
is designed.  I will introduce a task analysis technique, called
External-Internal Task (ETIT) analysis.  It is based on the idea that
tasks in the external world must be reformulated into the internal
concepts of a computer system before the system can be used.  The
analysis is in the form of a mapping between sets of external tasks and
internal tasks.  The mapping can be either direct (in the form of rules)
or "mediated" by a conceptual model of how the system works.  The direct
mapping shows how a user can appear to understand a system, yet have no
idea how it "really" works.  Example analyses of several text editing
systems and, for contrast, copiers will be presented; and various
properties of the systems will be derived from the analysis.  Further,
it is shown how this analysis can be used to assess the potential
transfer of knowledge from one system to another, i.e., how much knowing
one system helps with learning another.  Exploration of this kind of
analysis is preliminary, and several issues will be raised for
discussion.

------------------------------

End of AIList Digest
********************

∂03-Feb-84  2358	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #12
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Feb 84  23:57:46 PST
Date: Fri  3 Feb 1984 22:50-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #12
To: AIList@SRI-AI


AIList Digest            Saturday, 4 Feb 1984      Volume 2 : Issue 12

Today's Topics:
  Hardware - Lisp Machine Benchmark Request,
  Machine Translation - Request,
  Mathematics - Fermat's Last Theorem & Four Color Request,
  Alert - AI Handbooks & Constraint Theory Book,
  Expert Systems - Software Debugging Correction,
  Course - PSU's Netwide AI Course,
  Conferences -  LISP Conference Deadline & Cybernetics Congress
----------------------------------------------------------------------

Date: Wed, 1 Feb 84 16:37:00 cst
From: dyer@wisc-ai (Chuck Dyer)
Subject: Lisp Machines

Does anyone have any reliable benchmarks comparing Lisp
machines, including Symbolics, Dandelion, Dolphin, Dorado,
LMI, VAX 780, etc?

Other features for comparison are also of interest.  In particular,
what capabilities are available for integrating a color display
(at least 8 bits/pixel)?

------------------------------

Date: Thu 2 Feb 84 01:54:07-EST
From: Andrew Y. Chu <AYCHU@MIT-XX.ARPA>
Subject: language translator

                     [Forwarded by SASW@MIT-ML.]

Hi, I am looking for some information on language translation
(No, not fortran->pascal, like english->french).
Does anyone at MIT work in this field?  If not, anyone at other
schools?  Someone from industry?  A commercial product?
Pointers to articles, magazines, journals, etc. will be greatly appreciated.

Please reply to aychu@mit-xx. I want this message to reach as
many people as possible, are there other bboards I can send to?
Thanx.

------------------------------

Date: Thu, 2 Feb 84 09:48:48 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Fermat's Last Theorem

Fortunately (or unfortunately) puzzles like Fermat's Last Theorem, Goldbach's
conjecture, the 4-color theorem, and others are not in the same class as
the geometric trisection of an angle or the squaring of a circle.  The former
class may be undecidable propositions (a la Goedel) and the latter are merely
impossible.  Since one of the annoying things about undecidable propositions
is that it cannot be decided whether or not they are decidable, (Where are
you, Doug Hofstadter, now that we need you?) people seriously interested in
these candidates for undecidability should not dismiss so-called theorem
provers like A. Arnold without looking at their work.

I have heard that the ugly computer proof(?) of the 4-color theorem that
appeared in Scientific American is incorrect, i.e. not a proof.  I also
have heard that one G. Spencer-Brown has proved the 4-color theorem.  I
do not know whether either of these things is true and it's bugging me!
Is the 4-color theorem undecidable or not?

  --Charlie

------------------------------

Date: 30 Jan 84 19:48:36-PST (Mon)
From: pur-ee!uiucdcs!uicsl!keller @ Ucb-Vax
Subject: AI Handbooks only $3.95
Article-I.D.: uiucdcs.5251

        Several people here have joined "The Library of Computer and
Information Sciences Book Club" because they have an offer of the complete
AI Handbook set (3 vols) for $3.95 instead of the normal $100.00.  I got mine
and they are the same production as the non-book-club versions.  You must buy
three more books during the coming year, and it will probably be easy to
find ones that you want.  Here are the details:

Send to: The Library of Computer and Information Sciences
         Riverside NJ 08075

Copy of Ad:
Please accept my application for trial membership in the Library of Computer
and Information Sciences and send me the 3-volume HANDBOOK OF ARTIFICIAL
INTELLIGENCE (10079) billing me only $3.95. I agree to purchase at least
three additional Selections or Alternates over the next 12 months. Savings
may range up to 30% and occasionally even more. My membership is cancelable
any time after I buy these three books. A shipping and handling charge is
added to all shipments.

No-Risk Guarantee: If you are not satisfied--for any reason--you may return
the HANDBOOK OF ARTIFICIAL INTELLIGENCE within 10 days and your membership
will be canceled and you will owe nothing.

Name ________
Name of Firm ____ (if you want subscription to your office)
Address _____________
City ________
State _______ Zip ______

(Offer good in Continental U.S. and Canada only. Prices slightly higher in
Canada.)

Scientific American 8/83    7-BV8

-Shaun ...uiucdcs!uicsl!keller

[I have been a member for several years, and have found this club's
service satisfactory (and improving).  The selection leans towards
data processing and networking, but there have been a fair number
of books on AI, graphics and vision, robotics, etc.  After buying
several books you get enough bonus points for a very substantial
discount on a selection of books that you passed up when they were
first offered.  I do get tired, though, of the monthly brochures that
use the phrase "For every computer professional, ..." in the blurb for
nearly every book.  If you aren't interested in the AI Handbook,
find a current club member for a list of other books you can get
when you enroll.  The current member will also get a book for signing
you up.  -- KIL]

------------------------------

Date: 31 Jan 84 19:55:24-PST (Tue)
From: pur-ee!uiucdcs!ccvaxa!lipp @ Ucb-Vax
Subject: Constraint Theory - (nf)
Article-I.D.: uiucdcs.5285


*********************BOOK ANNOUNCEMENT*******************************

                     CONSTRAINT THEORY
                 An Approach to Policy-Level
                         Modelling
                             by
                     Laurence D. Richards

The cybernetic concepts of variety, constraint, circularity, and
process provide the foundations for a theoretical framework for the
design of policy support systems.  The theoretical framework consists
of a modelling language and a modelling mathematics.  An approach to
building models for policy support systems is detailed; two case
studies that demonstrate the approach are described.  The modelling
approach focuses on the structure of mental models and the subjectivity
of knowledge.  Consideration is given to ideas immanent in
second-order cybernetics, including paradox, self-reference, and
autonomy. Central themes of the book are "complexity", "negative
reasoning", and "robust" or "value-rich" policy.

424 pages; 23 tables; 56 illustrations
Hardback: ISBN 0-8191-3512-7 $28.75
Paperback:ISBN 0-8191-3513-5 $16.75

order from:
                          University Press of America
                                4720 Boston Way
                           Lanham, Maryland 20706 USA

------------------------------

Date: 28 Jan 84 0:25:20-PST (Sat)
From: pur-ee!uiucdcs!renner @ Ucb-Vax
Subject: Re: Expert systems for software debugging
Article-I.D.: uiucdcs.5217

Ehud Shapiro's error diagnosis system is not an expert system.  It doesn't
depend on a heuristic approach at all.  Shapiro tries to find the faulty part
of a bad program by executing part of the program, then asking an "oracle" to
decide if that part worked correctly.  I am very impressed with Shapiro's
work, but it doesn't have anything to do with "expert knowledge."

Scott Renner
{ihnp4,pur-ee}!uiucdcs!renner
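The oracle-guided strategy described above can be sketched concretely.  This toy version assumes the computation is a linear chain of steps whose intermediate results remain wrong after the first fault, so the first incorrect step can be located by binary search with O(log n) oracle queries; Shapiro's actual system applied the same divide-and-conquer idea to Prolog proof trees rather than linear traces:

```python
# Toy oracle-guided fault localization: binary-search a trace of
# (step_name, intermediate_result) pairs for the first result the
# oracle rejects, querying the oracle as little as possible.

def find_faulty_step(steps, oracle):
    """steps: list of (name, result) pairs in execution order.
    oracle(result) -> True iff the intermediate result is correct.
    Returns the name of the first step with an incorrect result."""
    lo, hi = 0, len(steps) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(steps[mid][1]):
            lo = mid + 1   # everything up to mid is correct
        else:
            hi = mid       # fault lies at mid or earlier
    return steps[lo][0]
```

In practice the oracle is the programmer answering yes/no questions, which is why minimizing the number of queries matters.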

------------------------------

Date: 28 Jan 84 12:25:56-PST (Sat)
From: ihnp4!houxm!hocda!hou3c!burl!clyde!akgua!sb1!sb6!bpa!burdvax!psuvax!bobgian @ Ucb-Vax
Subject: PSU's Netwide AI course
Article-I.D.: psuvax.432

The PSU ("in person") component of the course has started up, but things
are a bit slow and confused regarding the "netwide" component.

For one thing, I am too busy finishing a thesis and teaching full-time to
handle the administrative duties, and we don't (yet, at least) have the
resources to hire others to do it.

For another, my plans presupposed a level of intellectual maturity and
drive that is VERY rare in Penn State students.  I believe the BEST that
PSU can offer are in my course right now, but only 30 percent of them are
ready for what I wanted to do (and most of THEM are FACULTY!!).

I'm forced to backtrack and run a slightly more traditional "mini" course
to build a common foundation.  That course essentially will read STRUCTURE
AND INTERPRETATION OF COMPUTER PROGRAMS by Hal Abelson and Gerry Sussman.
[This book was developed for the freshman CS course (6.001) at MIT and will
be published in April.  It is now available as an MIT LCS tech report by
writing Abelson at 545 Technology Square, Cambridge, MA 02139.]

The "netwide" version of the course WILL continue in SOME (albeit perhaps
delayed) form.  My "mini" course should take about 6 weeks.  After that
the "AI and Mysticism" course can be restarted.

For now, I won't create net.ai.cse but rather will use net.ai for
occasional announcements.  I'll also keep addresses of all who wrote
expressing interest (and lack of a USENET connection).  Course
distributions will go (low volume) to that list and to net.ai until
things start to pick up.  When it becomes necessary we will "fork off"
into a net.ai subgroup.

So keep the faith, all you excited people!  This course is yet to be!!

        Bob

Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
UUCP:   bobgian@psuvax.UUCP       -or-    allegra!psuvax!bobgian
Arpa:   bobgian@PSUVAX1           -or-    bobgian%psuvax1.bitnet@Berkeley
Bitnet: bobgian@PSUVAX1.BITNET    CSnet:  bobgian@penn-state.csnet
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

Date: Fri 3 Feb 84 00:24:28-EST
From: STEELE%TARTAN@CMU-CS-C.ARPA
Subject: 1984 LISP Conference submissions deadline moved back

Because of delays that occurred in getting out the call for papers,
the deadline for submissions to the 1984 ACM Symposium on LISP and
Functional Programming (to be held August 5-8, 1984) has been moved
back from February 6 to February 15.  The date for notification of
acceptance or rejection of papers is now March 20 (was March 12).
The date for return of camera-ready copy is now May 20 (was May 15).

Please forward this message to anyone who may find it of interest.
--Thanks,
        Guy L. Steele Jr.
        Program Chairman, 1984 ACM S. on L. and F.P.
        Tartan Laboratories Incorporated
        477 Melwood Avenue
        Pittsburgh, Pennsylvania 15213
        (412)621-2210

------------------------------

Date: 31 Jan 84 19:54:56-PST (Tue)
From: pur-ee!uiucdcs!ccvaxa!lipp @ Ucb-Vax
Subject: Cybernetics Congress - (nf)
Article-I.D.: uiucdcs.5284

6th International Congress of the World Organisation
        of General Systems and Cybernetics
        10--14 September 1984
        Paris, France
This transdisciplinary congress will present contemporary aspects
of cybernetics and systems, and examine their different currents.
The proposed topics include both methods and domains of cybernetics
and systems:
  1) foundations, epistemology, analogy, modelling, general methods
     of systems, history of cybernetics and systems science ideas.
  2) information, organisation, morphogenesis, self-reference, autonomy.
  3) dynamic systems, complex systems, fuzzy systems.
  4) physico-chemical systems.
  5) technical systems: automatics, simulation, robotics, artificial
     intelligence, learning.
  6) biological systems: ontogenesis, physiology, systemic therapy,
     neurocybernetics, ethology, ecology.
  7) human and social systems: economics, development, anthropology,
     management, education, planning.

For further information:
                                     WOGSC
                               Comite de lecture
                                     AFCET
                               156, Bld. Pereire
                             F 75017 Paris, France
Those who want to attend the congress are urged to register by writing
to AFCET, at the above address, as soon as possible.

------------------------------

End of AIList Digest
********************

∂05-Feb-84  0007	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #13
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Feb 84  00:07:15 PST
Date: Sat  4 Feb 1984 23:06-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #13
To: AIList@SRI-AI


AIList Digest             Sunday, 5 Feb 1984       Volume 2 : Issue 13

Today's Topics:
  Brain Theory - Parallelism,
  Seminars - Neural Networks & Automatic Programming
----------------------------------------------------------------------

Date: 31 Jan 84 09:15:02 EST  (Tue)
From: Dana S. Nau <dsn%umcp-cs@CSNet-Relay>
Subject: parallel processing in the brain

       From: Rene Bach <BACH@SUMEX-AIM.ARPA>
       What are the evidences that the brain is a parallel processor?  My own
       introspection seem to indicate that mine is doing time-sharing.  That is
       I can follow only one idea at a time, but with a lot of switching
       between reasoning paths (often more non directed than controlled
       switching).

Does that mean you hold your breath and stop thinking while you're
walking, and stop walking in order to breathe or think?

More pointedly, I think it's incorrect to consider only
consciously-controlled processes when we talk about whether or not
the brain is doing parallel processing.  Perhaps the conscious part
of your mind can keep track of only one thing at a time, but most
(probably >90%) of the processing done by the brain is subconscious.

For example, most of us have to think a LOT about what we're doing
when we're first learning to drive.  But after a while, it becomes
largely automatic, and the conscious part of our mind is freed to
think about other things while we're driving.

As another example, have you ever had the experience of trying
unsuccessfully to remember something, and later remembering
whatever-it-was while you were thinking about something else?
SOME kind of processing was going on in the interim, or you
wouldn't have remembered whatever-it-was.

------------------------------

Date: 30 Jan 84 20:18:33-PST (Mon)
From: pur-ee!uiucdcs!parsec!ctvax!uokvax!andree @ Ucb-Vax
Subject: Re: intelligence and genius - (nf)
Article-I.D.: uiucdcs.5259

Sorry, js@psuvax, but I DO know something about what I spoke, even if I do
have trouble typing.

I am aware that theorem-proving machines are impossible. It's also fairly
obvious that they would use lots of time and space.

However, I didn't even MENTION them. I talked about two flavors of machine.
One generated well-formed strings, and the other said whether they were
true or not. I didn't say either machine proved them. My point was that the
second of these machines is also impossible, and is closely related to
Jerry's genius finding machines. [I assume that any statement containing
genius is true.]

        Down with replying without reading!
        <mike

------------------------------

Date: Wed, 1 Feb 84 13:54:21 PST
From: Richard Foy <foy@AEROSPACE>
Subject: Brain Processing

The February Scientific American has an article entitled "The
Skill of Typing" which can help one form insights into the
mechanisms of the brain's processing.
richard

------------------------------

Date: Thu, 2 Feb 84 08:24:35 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: AIList Digest   V2 #10

Re: Parallel Processing in the Brain

  There are several instances of people experiencing what can most easily
be explained as "tasking" in the brain (see the essay by Henri Poincare in "The
World of Mathematics" and "The Seamless Web" by Stanley Burnshaw).  It appears
that the conscious mind is rather clumsy at creative work and in large measure
assigns tasks (in parallel) to the subconscious mind which operates in the
background.  When the background task is finished, an interrupt is generated
and the conscious mind becomes aware of the solution without knowing how the
problem was solved.

  --Charlie

------------------------------

Date: Thu 2 Feb 84 10:17:08-PST
From: Kenji Sugiyama <SUGIYAMA@SRI-AI.ARPA>
Subject: Re: Parallel brain?

I had a strange experience when I had practiced abacus in Japan.
An abacus is used for adding, subtracting, multiplying, and dividing
numbers.  The practice consisted of a set of calculations in a definite
amount of time, say, 15 minutes.  During that time, I began to think
of something other than the problem at hand.  Then I noticed that
fact ("Aha, I thought of this and that!"), and grinned at myself in
my mind.  In spite of these detours, I continued my calculations without
interruption.  This kind of experience was repeated several times.

It seems to me that my brain might be parallel, at least, in simple tasks.

------------------------------

Date: 2 Feb 1984 8:16-PST
From: fc%USC-CSE@ECLA.ECLnet
Subject: Re: AIList Digest   V2 #10

parallelism in the brain:
        Can you walk and chew gum at the same time?
                        Fred

------------------------------

Date: Sat, 4 Feb 84 15:06:09 PST
From: Philip Kahn <kahn@UCLA-CS>
Subject: The brain is parallel, yet data flow can be serial...

        In response to Rene Bach's question of whether "the brain is a parallel
processor": there is no response other than an emphatic YES!  The
brain is comprised of about 10E9 neurons.  Each one of those neurons is
making locally autonomous calculations; it's hard to get more parallel than
that!  The lower brain functions (e.g., sensory preprocessing, lower motor
control, etc.) are highly distributed and locally autonomous processors (i.e.,
pure parallel data flow).  At the higher thought processing levels, however,
it has been shown (can't cite anything, but I can get sources if someone
wants me to dig them out) that logic tends to run in a serial fashion.
That is, the brain is parallel (a hardware structure), yet higher logic
processes apply the timing of thought in a serial nature (a "software"
structure).
        It is generally agreed that the brain is an associational
machine; it processes based upon the timing of diffuse stimuli and the
resulting changes in the "action potential" of its member neurons.
"Context" helps to define the strength and structure of those associational
links.  Higher thinking is generally a cognitive process where the context
of situations is manipulated.  Changing context (and some associational
links) will often result in a "conclusion" significantly different from the
one previously arrived at.  Higher thought may be viewed as a three-process
cycle:  decision (evaluation of an associational network), reasonability
testing (i.e., is the present decision using a new "context" no different
from the decision arrived upon utilizing the previous "context"?), and
context alteration (i.e., "if my 'decision' is not 'reasonable' what
'contextual association' may be omitted or in error?").  This cycle is
continued until the second step -- 'reasonability testing' -- has concluded
that the result of this 'thinking' process is at least plausible.  Although the
implementation (assuming the trichotomy is correct) in the brain is
via parallel neural structures, the movement of information through those
structures is serial in nature.  An interesting note on the above trichotomy;
note what occurs when the input to the associational network is changed.
If the new input is not consistent with the previously existing 'context'
then the 'reasonability tester' will cause an automatic readjustment of
the 'context'.
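The cycle just described can be sketched in a few lines of code.  This is
only a loose caricature; the function names and the dictionary standing in
for an "associational network" are illustrative assumptions, not part of
the model itself:

```python
# Very loose sketch of the decision / reasonability-testing /
# context-alteration cycle.  The dictionary-based "associational net"
# and all names here are illustrative assumptions.

def think(associations, context, reasonable, revise_context, max_cycles=10):
    """Iterate: decide, test reasonability, alter context, repeat."""
    for _ in range(max_cycles):
        decision = associations.get(context)   # evaluate associational net
        if reasonable(decision, context):      # reasonability testing
            return decision, context
        context = revise_context(context)      # drop or repair a bad link
    return None, context                       # gave up: nothing plausible
```

For instance, starting from an ill-fitting context ("indoors") with a net
that only associates "outdoors", the loop revises the context once and then
settles on a plausible conclusion.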
        Needless to say, this is not a rigorously proven theory of mine,
but I feel it is quite plausible and that there are profuse psychophysical
and psychological studies that reinforce the above model.  As of now, I
use it as a general guiding light in my work with vision systems, but it
seems equally applicable to general AI.

                        Philip Kahn
                        KAHN@UCLA-CS.ARPA

------------------------------

Date: 02/01/84 16:09:21
From: STORY at MIT-MC
Re:   Neural networks

                     [Forwarded by SASW@MIT-ML.]

DATE:   Friday, February 3, 1984
TITLE:  "NEURAL NETWORKS: A DISCUSSION OF VARIOUS MATHEMATICAL MODELS"
SPEAKER:        Margaret Lepley, MIT

Neural networks are of interest to researchers in artificial intelligence,
neurobiology, and even statistical mechanics.  Because of their random parallel
structure it is difficult to study the transient behavior of the networks.  We
will discuss various mathematical models for neural networks and show how the
behaviors of these models differ.  In particular we will investigate
asynchronous vs. synchronous models with undirected vs. directed edges of
various weights.

HOST:   Professor Silvio Micali

------------------------------

Date: 01 Feb 84  1832 PST
From: Rod Brooks <ROD@SU-AI>
Subject: Feb 7th CSD Colloquium - Stanford

                  [Reprinted from the SU-SCORE bboard.]

                  A Perspective on Automatic Programming
                             David R. Barstow
                        Schlumberger-Doll Research
                    4:30pm, Terman Aud., Tues Feb 7th

Most work in automatic programming has focused primarily on the roles of
deduction and programming knowledge. However, the role played by knowledge
of the task domain seems to be at least as important, both for the usability
of an automatic programming system and for the feasibility of building one
which works on non-trivial problems. This perspective has evolved during
the course of a variety of studies over the last several years, including
detailed examination of existing software for a particular domain
(quantitative interpretation of oil well logs) and the implementation
of an experimental automatic programming system for that domain. The
importance of domain knowledge has two important implications: a primary goal
of automatic programming research should be to characterize the programming
process for specific domains; and a crucial issue to be addressed
in these characterizations is the interaction of domain and programming
knowledge during program synthesis.

------------------------------

End of AIList Digest
********************

∂11-Feb-84  0005	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #14
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Feb 84  00:03:42 PST
Date: Fri 10 Feb 1984 22:16-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #14
To: AIList@SRI-AI


AIList Digest           Saturday, 11 Feb 1984      Volume 2 : Issue 14

Today's Topics:
  Requests - SHRDLU & Spencer-Brown & Programming Tests & UNITS,
  Replies - R1/XCON & AI Text & Lisp Machine Comparisons,
  Seminars - Symbolic Supercomputer & Expert Systems & Multiagent Planning
----------------------------------------------------------------------

Date: Sun, 29 Jan 84 16:30:36 PST
From: Rutenberg.pa@PARC-MAXC.ARPA
Reply-to: Rutenberg.pa@PARC-MAXC.ARPA
Subject: does anyone have SHRDLU?

I'm looking for a copy of SHRDLU, ideally in
machine readable form although a listing
would also be fine.

If you have a copy or know of somebody
who does, please send me a message!

Thanks,
        Mike

------------------------------

Date: Mon, 6 Feb 84 14:48:37 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Re: AIList Digest   V2 #12

I would dearly like to get in contact with G. Spencer-Brown.  Can anyone
give me any kind of lead?  I have tried his publisher, Bantam, and got
no results.

Thanks.

  --Charlie

------------------------------

Date: Wed,  8 Feb 84 19:26:38 CST
From: Stan Barber <sob@rice>
Subject: Testing Programming Aptitude or Competence

I am interested in information on the following tests that have been or are
currently administered to determine Programming Aptitude or Competence.

1. Aptitude Assessment  Battery:Programming (AABP) created by Jack M. Wolfe
and made available to employers only from Programming Specialists, Inc.
Brooklyn NY.

2. Programmer Aptitude/Competence Test System sold by Haverly Systems,
Inc. (Introduced in 1970)

3. Computer Programmer Aptitude Battery by SRA (Science Research Associates),
Inc. (Examined by F. L. Schmidt et al. in Journal of Applied Psychology,
Volume 65 [1980] p 643-661)

4. CLEP Exam on Computers and Data Processing. The College Board and the
Educational Testing Service.

5. Graduate Record Exam Advanced Test in Computer Science by the Educational
Testing Service.

Please send the answers to the following questions if you have taken or
had experience with any of these tests:

1. How many scores and what titles did they use for the version of the
exam that you took?

2. Did you feel the test actually measured your ability to learn to
program or your current programming competence (that is, did you feel it
asked relevant questions)?

3. What are your general impressions about testing and more specifically
about testing special abilities or skills (like programming, writing, etc.)

I will package up the results and send them to Human-nets.

My thanks.


                        Stan Barber
                        Department of Psychology
                        Rice University
                        Houston TX 77251

                        sob@rice                        (arpanet,csnet)
                        sob.rice@rand-relay             (broken arpa mailers)
                        ...!{parsec,lbl-csam}!rice!sob  (uucp)
                        (713) 660-9252                  (bulletin board)

------------------------------

Date: 6 Feb 84 8:10:41-PST (Mon)
From: decvax!linus!vaxine!chb @ Ucb-Vax
Subject: UNITS request: Second Posting
Article-I.D.: vaxine.182

Good morning!

   I am looking for a pointer to someone (or something) who is knowledgeable
about the features and the workings of the UNITS package, developed at
Stanford HPP.  If you know something, or someone, and could drop me a note
(through mail) I would greatly appreciate it.

   Thanks in advance.


                                Charlie Berg
                             ...allegra!linus!vaxine!chb

------------------------------

Date: 5 Feb 84 20:28:09-PST (Sun)
From: hplabs!hpda!fortune!amd70!decwrl!daemon @ Ucb-Vax
Subject: DEC's expert system for configuring VAXen
Article-I.D.: decwrl.5447

[This is in response to an unpublished request about R1. -- KIL]

Just for the record - we changed the name from "R1" to "XCON" about a year
ago I think.   It's a very useful system and is part of a family of expert
systems which assist us in the operation of various corporate divisions
(sales, service, manufacturing, installation).

Mark Palmer
Digital

        (UUCP)  {decvax, ucbvax, allegra}!decwrl!rhea!nacho!mpalmer

        (ARPA)  decwrl!rhea!nacho!mpalmer@Berkeley
                decwrl!rhea!nacho!mpalmer@SU-Shasta

------------------------------

Date: 6 Feb 84 7:15:33-PST (Mon)
From: harpo!utah-cs!hansen @ Ucb-Vax
Subject: Re: AI made easy??
Article-I.D.: utah-cs.2473

I'd try Artificial Intelligence by Elaine Rich (McGraw-Hill).  It's easy
reading, not too technical but gives a good overview to the novice.

Chuck Hansen {...!utah-cs}

------------------------------

Date: 5 Feb 84 8:48:26-PST (Sun)
From: hplabs!sdcrdcf!darrelj @ Ucb-Vax
Subject: Re: Lisp Machines
Article-I.D.: sdcrdcf.813

There really are no such things as reasonable benchmarks for systems as
different as the various Lisp machines and VAXen.  Each machine has different
strengths and weaknesses.  Here is a rough ranking of machines:
VAX 780 running Fortran/C standalone
Dorado (5 to 10X dolphin)
LMI Lambda, Symbolics 3600, KL-10 Maclisp (2 to 3X dolphin)
Dolphin, dandelion, 780 VAX Interlisp, KL-10 Interlisp

Relative speeds are very rough, and dependent on application.

Notes:  The Dandelion and Dolphin have 16-bit ALUs; as a result most arithmetic
is pretty slow (and things like transcendental functions are even worse
because there's no way to do floating-point arithmetic without boxing each
intermediate result).  There is quite a wide range of I/O bandwidth among
these machines -- up to 530 Mbits/sec on a Dorado, 130 Mbits/sec on a Dolphin.

Strong points of various systems:
Xerox: a family of machines fully compatible at the core-image level,
spanning a wide range of price and performance (as low as $26k for a minimum
dandelion, to $150k for a heavily expanded Dorado).  Further, with the
exception of some of the networking and all the graphics, it is very highly
compatible with both Interlisp-10 and Interlisp-VAX (it's reasonable to have
a single set of sources with just a bit of conditional compilation).
Because of the use of a relatively old dialect, they have a large and well
debugged manual as well.

LMI and Symbolics (these are really fairly similar, as both are licensed from
the MIT lisp machine work, and the principals are rival factions of the MIT
group that developed it): these have fairly large microcode stores, and as
a result more things are fast (e.g., many of the graphics primitives are
microcoded), so these are probably the machines for moby amounts of image
processing and graphics.  There are also tools for compiling directly to
microcode for extra speed.  These machines also contain a secondary bus, such
as Unibus or Multibus, so there is considerable flexibility in attaching
exotic hardware.

Weak points:  Xerox machines have a proprietary bus, so there are very few
options (the philosophy is to hook anything else to the Ethernet).  MIT
machines speak a new dialect of lisp that is only partially compatible with
MACLISP (though this did allow adding many nice features), and their cost is
too high to give everyone a machine.

The news item to which this is a response also asked about color displays.
Dolphin:  480x640x4 bits.  The 4 bits go thru a color map to 24 bits.
Dorado:  480x640x(4 or 8 or 24 bits).  The 4 or 8 bits go thru a color map to
         24 bits.  Lisp software does not currently support the 24 bit mode.
3600:  they have one or two (the LM-2 had 512x512x?) around 1Kx1Kx(8 or 16
or 24) with a color map to 30 bits.
Dandelion:  probably too little I/O bandwidth
Lambda:  current brochure makes passing mention of optional standard and
         high-res color displays.

Disclaimer:  I probably have some bias toward Xerox, as SDC has several of
their machines (in part because we already had an application in Interlisp).

Darrel J. Van Buer, PhD
System Development Corp.
2500 Colorado Ave
Santa Monica, CA 90406
(213)820-4111 x5449
...{allegra,burdvax,cbosgd,hplabs,ihnp4,sdccsu3,trw-unix}!sdcrdcf!darrelj
VANBUER@USC-ECL.ARPA

------------------------------

Date: 6 Feb 84 16:40 PDT
From: Kandt.pasa@PARC-MAXC.ARPA
Subject: Lisp Machines

I have seen several benchmarks as a former Symbolics and current Xerox
employee.  These benchmarks have typically compared the LM-2 with the
1100; they have even included actual or estimated(?) 3600, 1108, or 1132
performances.  These benchmarks, however, have seldom been very
informative, because neither the actual test code nor a detailed
discussion of the implementation is provided.  For example, is the test on the
Symbolics machine coded in Zetalisp or with the Interlisp compatibility
package?  Or, in Interlisp, were fast functions used (FRPLACA vs.
RPLACA)?  (Zetalisp's RPLACA is equivalent to Interlisp's FRPLACA, so
that if this transformation was not performed the benchmark would favor
the Symbolics machine.)  What about efficiency issues such as block
compiling, compiler optimizers, or explicitly declaring variables?
There are also many other issues, such as what happens when the data set
gets very large in a real application instead of a toy benchmark, or, in
Zetalisp, whether you should turn the garbage collector on (it's not normally
on) and, when you do, what impact it has on performance.  In summary, be
cautious about claims without thorough supportive evidence.  Also
realize that each machine has its own strengths and weaknesses; there is
no definitive answer.  Caveat emptor!

------------------------------

Date: Sat, 4 Feb 84 19:24 EST
From: Thomas Knight <tk@MIT-MC.ARPA>
Subject: Concurrent Symbolic Supercomputer

                      [Forwarded by SASW@MIT-MC]


                                FAIM-1

                       Fairchild AI Machine #1

              An Ultra-Concurrent Symbolic Supercomputer

                                  by


                           Dr. A. L. Davis
      Fairchild Laboratory for Artificial Intelligence Research

                       Friday, February 10, 1984


Presently AI researchers are hampered in the development of large-scale
symbolic applications, such as expert systems, by the lack of machine
horsepower sufficient to execute the application programs rapidly enough to
make the applications viable.  The intent of the FAIM-1 machine is to provide
a machine capable of 3 or 4 orders of magnitude performance improvement over
that currently available on today's large main-frame machines.  The
main source of performance increase is in the exploitation of concurrency at
the program, system, and architectural levels.

In addition to the normal ancillary support activities, the work is being
carried on in 3 areas:

        1.  Language Design - a frame based, object oriented language is being
            designed which allows the programmer to express highly concurrent
            symbolic algorithms.  The mechanism permits both logical and
            procedural programming styles within a unified message-based
            semantics.  In addition, the programmer may provide strategic
            information which aids the system in managing the concurrency
            structure on the physical resource components of the machine.

        2.  Machine Architecture - the machine derives its power from the
            homogeneous replication of a medium grain processor element.
            The element consists of a processor, message delivery subsystem,
            and a parallel pattern-based memory subsystem known as the CxAM
            (Context Addressable Memory).  Two variants of a CxAM design are
            being done at this time and are targeted for fabrication on a
            sub 2 micron CMOS line.  The connection topology for the
            replicated elements is a 3 axis, single twist, Hex plane which
            has the advantages of planar wiring, easy extensibility, variable
            off surface bandwidth, and permits a variety of fault tolerant
            designs.  The Hex plane topology also permits nice hierarchical
            process growth without creating excess communication congestion
            which would cause false synchronization in otherwise concurrent
            activities.  In addition the machine is being designed in hopes
            of an eventual wafer-scale integrated implementation.

        3.  Resource Allocation - with any concurrent system which does not
            require machine dependent programming styles, there is a generic
            problem in mapping the concurrent activities extant in the program
            efficiently onto the multi-resource ensemble.  The strategy
            employed in the FAIM-1 system is to analyze the static structure of
            the source program, transform it into a graph, and then via a
            series of function preserving graph transforms produce a loadable
            version of the program which attempts to minimize communication
            cost while preserving the inherent concurrency structure.
            A certain level of dynamic compensation is guided by programmer
            supplied strategy information.
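The mapping problem in item 3 can be loosely illustrated in code.  The
greedy placement below is an assumed toy for exposition only, not the
FAIM-1 allocator (which works by function-preserving graph transforms);
it merely shows the objective of minimizing communication cost across a
multi-resource ensemble:

```python
# Toy illustration of mapping a program's communication graph onto
# processors (assumed example, not the FAIM-1 allocator): place each
# task on the processor that adds the fewest cut (inter-processor)
# edges among the tasks placed so far.

def place(tasks, edges, n_procs):
    """tasks: iterable of names; edges: set of 2-element frozensets."""
    assignment = {}
    for task in tasks:
        best_proc, best_cut = 0, None
        for proc in range(n_procs):
            # count already-placed neighbors that would sit elsewhere
            cut = sum(1 for e in edges if task in e
                        for other in e
                        if other != task and assignment.get(other, proc) != proc)
            if best_cut is None or cut < best_cut:
                best_proc, best_cut = proc, cut
        assignment[task] = best_proc    # greedy communication-minimizing choice
    return assignment
```

Tasks that communicate heavily thus tend to land on the same processor,
preserving concurrency elsewhere.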

The talk will present an overview of the work we have done in these areas.

Host: Prof. Thomas Knight

------------------------------

Date: 8 Feb 84 15:59:49 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: III Seminar on Expert Systems this coming Tuesday...

                    [Reprinted from the Rutgers bboard.]

                                 I I I SEMINAR


          Title:    Automation of Modeling, Simulation and Experimental
                    Design - An Expert System in Enzyme Kinetics

          Speaker:  Von-Wun Soo

          Date:     Tuesday, February 14, 1984, 1:30-2:30 PM

          Location: Hill Center, Seventh floor lounge


  Von-Wun Soo, a Ph.D. student in our department, will give an informal talk on
the thesis research he is proposing.  This is his abstract:

       We  are proposing to develop a general knowledge engineering tool to
    aid biomedical researchers in developing biological models and  running
    simulation experiments. Without such powerful tools, these tasks can be
    tedious  and  costly.  Our aim is to integrate these techniques used in
    modeling, simulation, optimization, and experimental design by using an
    expert system approach. In addition we propose to carry out experiments
    on the processes of theory formation used by the scientists.

    Enzyme kinetics is the domain where we are concentrating  our  efforts.
    However, our research goal is not restricted to this particular domain.
    We  will attempt to demonstrate with this special case, how several new
    ideas  in  expert  problem  solving  including  automation  of   theory
    formation,  scientific  discovery,  experimental  design, and knowledge
    acquisition can be further developed.

    Four modules have been designed in parallel:  PROKINAL, EPX, CED, DISC.

    PROKINAL is a model generator which simulates the qualitative reasoning
    of the kineticists who conceptualize and postulate a reaction mechanism
    for a set of experimental data. By using a general procedure  known  as
    the  King-Altman  procedure to convert a mechanism topology into a rate
    law function, and  symbolic  manipulation  techniques  to  factor  rate
    constant  terms  to  kinetic  constant  terms,  PROKINAL  yields  a
    corresponding FORTRAN function which computes the reaction rate.

    EPX is a model simulation aid which is designed by combining EXPERT and
    PENNZYME. It is supposed to guide the novice user in  using  simulation
    tools  and  interpreting  the  results.  It  will take the data and the
    candidate model that has been generated from PROKINAL and estimate  the
    parameters by a nonlinear least square fit.

    CED is an experimental design consultant which uses EXPERT to guide the
    computation of experimental conditions.  Knowledge  of  optimal  design
    from  the  statistical  analysis  has  been taken into consideration by
    EXPERT in order to give advice  on  the  appropriate  measurements  and
    reduce the cost of experimentation.

    DISC  is  a  discovery  module which is now at the stage of theoretical
    development. We wish to explore and simulate the behavior of scientific
    discovery in enzyme kinetics research and use the results in automating
    theory formation tasks.

------------------------------

Date: 09 Feb 84  2146 PST
From: Rod Brooks <ROD@SU-AI>
Subject: CSD Colloquium

                [Reprinted from the Stanford bboard.]

CSD Colloquium
Tuesday 14th, 4:30pm Terman Aud
Michael P. Georgeff, SRI International
"Synthesizing Plans for Co-operating Agents"

Intelligent agents need to be able to plan their activities so that
they can assist one another with some tasks and avoid harmful
interactions on others.  In most cases, this is best achieved by
communication between agents at execution time. This talk will discuss
a method for synthesizing a synchronized multi-agent plan to achieve
such cooperation between agents.  The idea is first to form
independent plans for each individual agent, and then to insert
communication acts into these plans to synchronize the activities of
the agents.  Conditions for freedom from interference and cooperative
behaviour are established.  An efficient method of interaction and
safety analysis is then developed and used to identify critical
regions and points of synchronization in the plans.  Finally,
communication primitives are inserted into the plans and a supervisor
process created to handle synchronization.
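The final step, inserting communication primitives, can be caricatured in a
few lines.  This is not Georgeff's algorithm (which first performs
interaction and safety analysis to find the critical regions); the plan
representation and wait/signal primitives below are assumptions made for
illustration:

```python
# Caricature of the last step only (not Georgeff's actual method):
# given independently formed plans, bracket every step that touches a
# shared resource with wait/signal communication acts so that critical
# regions of different agents cannot interleave.

def synchronize(plans, shared):
    """plans: {agent: [(action, resource), ...]}; shared: resource names."""
    synced = {}
    for agent, plan in plans.items():
        steps = []
        for action, resource in plan:
            if resource in shared:                  # critical region
                steps.append(("wait", resource))    # synchronize on entry
                steps.append((action, resource))
                steps.append(("signal", resource))  # release on exit
            else:
                steps.append((action, resource))
        synced[agent] = steps
    return synced
```

A supervisor process would then pair up the wait/signal acts across agents
at execution time.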

------------------------------

End of AIList Digest
********************

∂11-Feb-84  0121	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #15
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Feb 84  01:21:08 PST
Date: Fri 10 Feb 1984 22:49-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #15
To: AIList@SRI-AI


AIList Digest           Saturday, 11 Feb 1984      Volume 2 : Issue 15

Today's Topics:
  Proofs - Fermat's Theorem & 4-Color Theorem,
  Brain Theory - Parallelism
----------------------------------------------------------------------

Date: 04 Feb 84  0927 PST
From: Jussi Ketonen <JK@SU-AI>
Subject: Fermat and decidability

From the logical point of view, Fermat's last theorem is a Pi-1
statement. It follows that it is decidable. Whether it is valid
or not is another matter.

------------------------------

Date: Sat 4 Feb 84 13:13:14-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Re: Spencer-Brown's Proof

I don't know anything about the current status of the computer proof of the
4-colour theorem, though the last I heard (five years ago) was that it was
"probably OK".   That's why I use the word "theorem".   However, I can shed
some light on Spencer-Brown's alleged proof -- I was present at a lecture in
Cambridge where he supposedly gave the outline of the proof, and  I applauded
politely, but was later fairly authoritatively informed that it disintegrated
under closer scrutiny.   This doesn't *necessarily* mean that the man is a
total flake, since other such proofs by highly reputable mathematicians have
done the same (we are told that one proof was believed for twelve whole years,
late in the 19th century, before its flaw was discovered).
                                                                - Richard

------------------------------

Date: Mon, 6 Feb 84 14:46:43 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Scientific Method

Isn't it interesting that most of what we think about proofs is belief!
I guess until one actually retraces the steps of a proof and their
justifications, one can only express belief in its truth or falsity.

  --Charlie

------------------------------

Date: 3 Feb 84 8:48:01-PST (Fri)
From: harpo!eagle!allegra!alan @ Ucb-Vax
Subject: Re: brain, a parallel processor ?
Article-I.D.: allegra.2254

I've been reading things like:

        My own introspection seem to indicate that ...
        I find, upon introspection, that ...
        I find that most of what my brain does is ...
        I also feel like ...
        I agree that based on my own observations, my brain appears to
          be ...

Is this what passes for scientific method in AI these days?

        Alan S. Driscoll
        AT&T Bell Laboratories

------------------------------

Date: 2 Feb 84 14:40:23-PST (Thu)
From: decvax!genrad!grkermit!masscomp!clyde!floyd!cmcl2!rocky2!cucard!
      aecom!alex @ Ucb-Vax
Subject: Re: brain, a parallel processor ?
Article-I.D.: aecom.358

        If the brain were a serial processor, the limiting factor on
processing speed would be the speed at which neurons conduct signals.
Humans, however, do very complex processing in real time!  The other
possibility is that the data structures of the brain are HIGHLY optimized.
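The arithmetic behind this argument can be made explicit.  The figures
below are rough order-of-magnitude estimates assumed for illustration,
not measurements:

```python
# Back-of-envelope version of the serial-processing argument: how many
# strictly sequential neural steps fit into the time a human needs for
# a complex recognition task?  Both figures are assumed rough estimates.

neuron_step = 5e-3     # seconds per neural "operation" (assumed)
reaction_time = 0.5    # seconds for, say, recognizing a face (assumed)

serial_steps = reaction_time / neuron_step
print(serial_steps)    # on the order of 100 sequential steps -- far too
                       # few for a purely serial program of such complexity
```

A budget of roughly a hundred sequential steps is what makes massive
parallelism (or extreme optimization) hard to avoid as an explanation.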


                                Alex S. Fuss
                        {philabs, esquire, cucard}!aecom!alex

------------------------------

Date: Tue, 7 Feb 84 13:09:25 PST
From: Adolfo Di-Mare <v.dimare@UCLA-LOCUS>
Subject: I can think in parale||,

but most of time I'm ---sequential. For example, a lot of * I can
talk with (:-{) and at the same time I can be thinking on s.m.t.i.g
else. I also do this when ai-list gets too boring: I keep browsing
until I find something intere sting, and then I do read, with a better
level of under-standing. In the u-time, I can daydream...

However, If I really want to get s.m.t.i.g done, then I cannot think
on anything else! In this cases, I just have one main-stream idea in
my mind. When I'm looking for a solution, I seldom use depth first,
or bread first search. Most of the time I use a convynatium of all
these tricks I know to search, until one 'works'.

To + up, I think we @|-< can do lots of things in lots of ways. And
until we furnish computers with all this tools, they won't be able to
be as intelligent as us. Just parale|| is not the ?↑-1.

        Adolfo
              ///

------------------------------

Date: 7 Feb 1984 1433-PST
From: EISELT%UCI-20A@Rand-Relay
Subject: More on Philip Kahn's reply to Rene Bach

I recently asked Philip Kahn (via personal net mail) to elaborate on his three
cycle model of thought, which he described briefly in his reply to Rene Bach's
question.  Here is my request, and his reply:

                      -------------------------

  In your recent submission to AIList, you describe a three-process cycle
model of higher-level brain function.  Your model has some similarities to
a model of text understanding we are working on here at UC Irvine.  You say,
though, that there are "profuse psychophysical and psychological studies that
reinforce the ... model."  I haven't seen any of these studies and would
be very interested in reading them.  Could you possibly send me references
to these studies?  Thank you very much.

Kurt Eiselt
eiselt@uci-20a


                       ------------------------

Kurt,

        I said "profuse" because I have come across many psychological
and physiological studies that have reinforced my belief.  Unfortunately,
I have very few specific references on this, but I'll tell you as much as
I can....

        I claim there are three stages: associational, reasonability, and
context.  I'll tell you what I've found to support each.  Associational
nets, also called "computational" or "parameter" nets, have been getting
a lot of attention lately.  Especially interesting are the papers coming out
of Rochester (in New York state).  I suggest the paper by Feldman called
"Parameter Nets."  Also, McCullough in "Embodiments of Mind" introduced a
logical calculus that he proposes neural mechanisms use to form assocational
networks.  Since then, a considerable amount of work has been done on
logical calculus, and these works are directly applicable to the analysis
of associational networks.  One definitive "associational network" found
in nature that has been exhaustively defined by Ratliff is the lateral
inhibition that occurs in the linear image sensor of the Limulus crab.
Each element of the network inhibits its neighbors based upon its value,
and the result is the second spatial derivative of the image brightness.
Most of the works you will find to support associational nets are directly
culled from neurophysiological studies.  Yet, classical conditioning
psychology defines the effects of association in its studies on forward and
backward conditioning.  Personally, I feel the biological proof of
associational nets is more concrete.
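
The Limulus result Kahn cites can be sketched numerically: if each element
subtracts a fraction of its neighbors' values from its own, the output is
proportional to the (negative) second spatial difference of the input
brightness.  The weights and sample data below are illustrative only, not
Ratliff's measured coefficients:

```python
# Lateral inhibition on a 1-D "image": each element is reduced by half
# the value of each neighbor.  Algebraically,
#   x[i] - 0.5*(x[i-1] + x[i+1]) = -0.5 * (x[i-1] - 2*x[i] + x[i+1]),
# i.e. minus one half of the discrete second spatial difference.
def lateral_inhibition(x):
    out = []
    for i in range(len(x)):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < len(x) - 1 else 0.0
        out.append(x[i] - 0.5 * (left + right))
    return out

# A linear brightness ramp has zero second difference, so the interior
# outputs vanish; only the boundaries, where the ramp "bends", respond.
print(lateral_inhibition([0.0, 1.0, 2.0, 3.0, 4.0]))
```

This is why lateral inhibition acts as an edge enhancer: uniform or linearly
varying regions cancel out, and only changes in gradient survive.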
        The support for a "reasonability" level of processing has more
psychological support, because it is generally a cognitive process.
For example, learning is facilitated by subject matter that is most
consistent with past knowledge; that is, learning is most facilitated by
a subject that is most "reasonable" in light of past knowledge.
Some studies have shown, though I can't cite them, that the less
"reasonable" a learning task, the lesser is the learned performance.
I remember having seen at least one paper (I believe it was by a natural
language processing researcher) that claimed that the facility of language
is a metaphorical process.  By definition, a metaphor is the comparison
of like traits in dissimilar things; it seems to me this is a very good
way to look at the question of reasonability.  Again, though, no specific
references.  In neurophysiology there are found "feedback loops" that
may be considered "reasonability" testers in so far that they take action
only when certain conditions are not met.  You might want to look at work
done on the cerebellum to document this.
        "Context" has been getting a lot of attention lately.  Again,
psychology is the major source of supporting evidence, yet neurophysiology
has its examples also.  Hormones are a prime example of "contextual"
determinants.  Their presence or absence affects the processing that
occurs in the neurons that are exposed to them.  But on a more AI level,
the importance of context has been repeatedly demonstrated by psychologists.
I believe that context is a learned phenomenon.  Children have no construct
of context, and thus, they are often able to make conclusions that may be
associationally feasible, yet clearly contrary to the context of presentation.
Context in developmental psychology has been approached from a more
motivational point of view.  Maslow's hierarchy and the extensive work
into "values" are all defining different levels of context.  Whereas an
associational network may (at least in my book) involve excitatory
nodal influences, context involves inhibitory control over the nodes in
the associational network.  In my view, associational networks only know
(always associated), (often associated), and (weak association).
(Never associated) dictates that no association exists by default.  A
contextual network knows only that the following states can occur between
concepts: (never can occur) and (rarely occurs).  These can be defined using
logical calculus and learning theory.  The associational links are solely
determined by event pairing and are thus more dynamic.  Contextual
networks are more stable and can be the result of learning as well as
by introspective analysis of the associational links.
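
The split Kahn proposes (excitatory associational links, inhibitory
contextual control) can be illustrated with a toy sketch.  All link labels,
strengths, and the veto rule here are hypothetical stand-ins for his informal
description, not an implementation of any published model:

```python
# Toy spreading activation: associational links add excitatory evidence
# for a concept; contextual links inhibit, and (never can occur) vetoes.
ASSOC = {"always": 1.0, "often": 0.6, "weak": 0.2}   # excitatory strengths
CONTEXT = {"never": None, "rarely": 0.8}             # None = hard veto

def activation(assoc_links, context_links):
    """assoc_links, context_links: lists of labels from ASSOC / CONTEXT."""
    act = sum(ASSOC[label] for label in assoc_links)
    for label in context_links:
        penalty = CONTEXT[label]
        if penalty is None:        # (never can occur): suppress entirely
            return 0.0
        act *= (1.0 - penalty)     # (rarely occurs): damp the evidence
    return act

print(activation(["always", "weak"], []))         # unconstrained evidence
print(activation(["always", "weak"], ["never"]))  # vetoed by context: 0.0
```

The design point the sketch makes concrete: associational evidence only
accumulates, so something else (context) must supply the negative knowledge
that rules combinations out.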
        As you can see, I have few specific references on "context," and rely
upon my own theory of context.  I hope I've been of some help, and I would
like to be kept apprised of your work.  I suggest that if you want research
evidence of some of the above, that you examine indices on the subjects I
mentioned.  Again,

                Good luck,
                Philip Kahn

------------------------------

Date: 6 Feb 84 7:18:25-PST (Mon)
From: harpo!ulysses!mhuxl!eagle!hou5h!hou5a!hou5d!mat @ Ucb-Vax
Subject: Re: brain, a parallel processor ?
Article-I.D.: hou5d.809

See the Feb. Scientific American for an article on typists and speed.  There
is indeed evidence for a high degree of parallelism even in SIMILAR tasks.

                                                Mark Terribile

------------------------------

Date: Wed,  8 Feb 84 18:19:09 CST
From: Doug Monk <bro@rice>
Subject: Re: AIList Digest   V2 #11

Subject : Mike Brzustowicz's 'tip of the tongue' as parallel process

Rather than being an example of parallel processing, the 'tip of the
tongue' phenomenon is probably more an example of context switch, where
the attempt to recall the information displaces it temporarily, due to
too much pressure being brought to bear. ( Perhaps a form of performance
anxiety ? ) Later, when the pressure is off, and the processor has a spare
moment, a smaller recall routine can be used without displacing the
information. This model assumes that concentrating on the problem causes
more of the physical brain to be involved in the effort, thus perhaps
'overlaying' the data desired. Once a smaller recall routine is used,
the recall can actually be performed.

        Doug Monk       ( bro.rice@RAND-RELAY )

------------------------------

Date: 6 Feb 84 19:58:33-PST (Mon)
From: ihnp4!ihopa!dap @ Ucb-Vax
Subject: Re: parallel processing in the brain
Article-I.D.: ihopa.153

If you consider pattern recognition in humans when constrained to strictly
sequential processing, I think we are MUCH slower than computers.

In other words, how long do you think it would take a person to recognize
a letter if he could only inquire as to the grayness levels in different
pixels?  Of course, he would not be allowed to "fill in" a grid and then
recognize the letter on the grid.  Only a strictly algorithmic process
would be allowed.

The difference here, as I see it, is that the human mind DOES work in
parallel.  If we were forced to think sequentially about each pixel in our
field of vision, we would become hopelessly bogged down.  It seems to me
that the most likely way to simulate such a process is to have a HUGE
number of VERY dumb processors in a hierarchy of "meshes" such that some
small number of processors in common localities in a low level mesh would
report their findings to a single processor in the next higher level mesh.
This processor would do some very quick, very simple calculations and pass
its findings on to the next higher level mesh.  At the top level, the
accumulated information would serve to recognize the pattern.  I'm really
speaking off the top of my head since I'm no AI expert.  Does anybody know if
such a thing exists or am I way off?

Darrell Plank
BTL-IH
ihopa!dap

[Researchers at the University of Maryland and at the University of
Massachusetts, among others, have done considerable work on "pyramid"
and "processing cone" vision models.  The multilayer approach was
also common in perceptron-based pattern recognition, although very
little could be proven about multilayer networks.  -- KIL]
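
The mesh hierarchy described above can be sketched sequentially: each cell
of a level summarizes a small neighborhood of the level below, halving the
grid until a single apex value summarizes the whole field.  The 2x2
averaging rule is a hypothetical stand-in for the "very quick, very simple
calculations" each processor would perform:

```python
# Pyramid reduction: each level's cell summarizes a 2x2 block of the
# level below, so an n-by-n field (n a power of two) collapses to a
# single apex value in log2(n) reduction steps.
def reduce_level(grid):
    n = len(grid)
    return [[(grid[2*r][2*c] + grid[2*r][2*c+1] +
              grid[2*r+1][2*c] + grid[2*r+1][2*c+1]) / 4.0
             for c in range(n // 2)]
            for r in range(n // 2)]

def pyramid(grid):
    levels = [grid]
    while len(levels[-1]) > 1:
        levels.append(reduce_level(levels[-1]))
    return levels

# A 4x4 field reduces through a 2x2 level to a single apex summary.
field = [[1.0, 1.0, 0.0, 0.0],
         [1.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0]]
print(pyramid(field)[-1])
```

In a real pyramid machine all cells of a level would run in parallel, so the
time to the apex grows with the logarithm of the image width rather than
with the number of pixels.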

------------------------------

End of AIList Digest
********************

∂11-Feb-84  0215	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #16
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Feb 84  02:14:33 PST
Date: Fri 10 Feb 1984 23:05-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #16
To: AIList@SRI-AI


AIList Digest           Saturday, 11 Feb 1984      Volume 2 : Issue 16

Today's Topics:
  Lab Description - New UCLA AI Lab,
  Report - National Computing Environment for Academic Research,
  AI Journal - New Developments in the Assoc. for Computational Linguistics,
  Course - Organization Design,
  Conference - Natural Language and Logic Programming & Systems Science
----------------------------------------------------------------------

Date: Fri, 3 Feb 84 22:57:55 PST
From: Michael Dyer <dyer@UCLA-CS>
Subject: New UCLA AI Lab

              Announcing the creation of a new Lab for
              Artificial Intelligence Research at UCLA.


Just recently, the UCLA CS  department  received  a  private  foundation
grant  of  $450,000  with  $250,000  matching  funds  from the School of
Engineering and Applied Sciences to create a Laboratory  for  Artificial
Intelligence  Research.   The  departmental chairman as well as the dean
strongly support this effort and are both committed to the growth of  AI
at UCLA.

In  addition, UCLA has been chosen as the site of the next International
Joint Conference on Artificial Intelligence (IJCAI-85) in August, 1985.

UCLA is second in the nation among public research universities  and  in
the  top  six overall in quality of faculty, according to a new national
survey of 5,000 faculty and 228  universities.   In  a  two  year  study
(conducted  by the Conference Board of the Associated Research Councils,
consisting of the American Council of Learned  Societies,  the  American
Council  on  Education,  the  National  Research  Council and the Social
Science Research Council) the UCLA  Computer  Science  Dept.   tied  for
sixth place with U.  of Ill., after Stanford, MIT, CMU, UC Berkeley, and
Cornell.

The UCLA CS department is the recipient (in  1982)  of  a  $3.6  million
five-year  NSF  Coordinated  Experimental Research grant, augmented by a
$1.5 million award from DARPA.

Right  now  the  AI lab includes a dozen Apollo DN300 workstations on an
Apollo Domain ring network.  This ring is attached via an ethernet  gate
to the CS department LOCUS network of 20 Vax 750s and a 780.  UCLA is on
the Arpanet and CSNet.  Languages include Prolog and T  (a  Scheme-based
dialect of lisp).  A number of DN320s, DN460s and a color Apollo (DN660)
are on order and will be  housed  in  a  new  area  being  reserved  for
graduate  AI research.  One Vax 750 on the LOCUS net and 10 Apollos will
be  reserved for graduate AI instruction.  Robotics and vision equipment
is also being acquired.  The  CS  dept  is  seeking  an  assist.   prof.
(tenure  track) in the area of AI, with preference for vision, robotics,
problem-solving, expert systems, learning, and simulation  of  cognitive
processes.  The new AI faculty member will be able to direct expenditure
of a portion of available funds.  (Interested AI PhDs, reply to  Michael
Dyer, CS dept, UCLA, Los Angeles, CA 90024.  Arpanet:  dyer@ucla-cs).

Our AI effort is new, but growing, and includes the following faculty:

     Michael Dyer: natural language processing, cognitive modeling.
     Margot Flowers: reasoning, argumentation, belief systems.
     Judea Pearl: theory of heuristics, search, expert systems.
     Alan Klinger: signal processing, pattern recognition, vision.
     Michel Melkanoff: CAD/CAM, robotics.
     Stott Parker: logic programming, databases.

------------------------------

Date: 26 Jan 84 14:22:30-EDT (Thu)
From: Kent Curtis <curtis%nsf-cs@CSNet-Relay>
Subject: A National Computing Environment for Academic Research

The National Science Foundation has released a report entitled "A National
Computing Environment for Academic Research" prepared by an NSF Working Group
on Computers for Research, Kent Curtis, Chairman. The table of contents is:

Executive Summary

I. The Role of Modern Computing in Scientific and Engineering Research
        with Special Concern for Large Scale Computation

        Background

        A. Summary of Current Uses and Support of Large Scale Computing for
           Research

        B. Critique of Current Facilities and Support Programs

        C. Unfilled Needs for Computer Support of Research

II. The Role and Responsibilities of NSF with Respect to Modern Scientific
    Computing

III. A Plan of Action for the NSF: Recommendations

IV. A Plan of Action for the NSF: Funding Implications

Bibliography

Appendix
        Large-scale Computing Facilities

If you are interested in receiving a copy of this report contact
Kent Curtis, (202) 357-9747; curtis.nsf-cs@csnet-relay;
or write Kent K. Curtis
         Div. of Computer Research
         NSF
         Washington, D.C.  20550

------------------------------

Date: 10 Feb 84 09:35:51 EST (Fri)
From: Journal Duties  <acl@Rochester.ARPA>
Subject: New Developments in the Assoc. for Computational Linguistics


The AMERICAN JOURNAL OF COMPUTATIONAL LINGUISTICS -- Some New Developments

    The AMERICAN JOURNAL OF COMPUTATIONAL LINGUISTICS is the major
international journal devoted entirely to computational approaches to
natural language research.  With the 1984 volume, its name is being changed
to COMPUTATIONAL LINGUISTICS to reflect its growing international coverage.
There is now a European chapter of the ASSOCIATION FOR COMPUTATIONAL
LINGUISTICS and a growing interest in forming one in Asia.

The journal also has many new people on its Editorial Staff.  James Allen,
of the University of Rochester, has taken over as Editor.  The FINITE STRING
Editor is now Ralph Weischedel of the University of Delaware.  Lyn Bates of
Bolt Beranek and Newman is the Book Review Editor.  Michael McCord, now at
IBM, remains as Associate Editor.

With these major changes in editorial staffing, the journal has fallen
behind schedule.  In order to catch up this year, we will be publishing
close to double the regular number of issues.  The first issue for 1983,
which was just mailed out, contains papers on "Paraphrasing Questions Using
Given and New Information" by Kathleen McKeown and "Denotational Semantics
for 'Natural' Language Question-Answering Programs" by Michael Main and
David Benson.  There is a lengthy review of Winograd's new book by Sergei
Nirenburg and a comprehensive description of the new Center for the Study
of Language and Information at Stanford University.

Highlights of the forthcoming 1983 AJCL issues:

   - Volume 9, No. 2 (expected March '84), in addition to papers on
"Natural Language Access to Databases: Interpreting Update Requests" by
Jim Davidson and Jerry Kaplan and "Treating Coordination in Logic Grammars"
by Veronica Dahl and Michael McCord, will be accompanied by a supplement:
a Directory of Graduate Programs in Computational Linguistics.
The directory is the result of two years of surveys, and provides a fairly
complete listing of programs available internationally.

   - Volume 9, Nos. 3 and 4 (expected June '84) will be a special double
issue on Ill-Formed Input.  The issue will cover many aspects of processing
ill-formed sentences from syntactic ungrammaticality to dealing with inaccurate
reference.  It will contain papers from many of the research groups that
are working on such problems.

    We will begin publishing Volume 10 later in the summer.  In addition
to the regular contributions, we are planning a special issue on the
mathematical properties of grammatical formalisms.  Ray Perrault (now at
SRI) will be guest editor for the issue, which will contain papers addressing
most of the recent developments in grammatical formalisms (e.g., GPSG,
Lexical-Function Grammars, etc).  Also in the planning stage is a special
issue on Machine Translation that Jonathan Slocum is guest editing.

    With its increased publication activity in 1984, COMPUTATIONAL
LINGUISTICS can provide authors with an unusual opportunity to have their
results published in the international community with very little delay.
A paper submitted now (early spring '84) could actually be in print by the
end of the year, provided that major revisions need not be made.  Five
copies of submissions should be sent to:

                 James Allen, CL Editor
                 Dept. of Computer Science
                 The University of Rochester
                 Rochester, NY 14627, USA

    Subscriptions to COMPUTATIONAL LINGUISTICS come with membership in the
ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, which still is only $15 per year.
As a special bonus to new members, those who join the ACL for 1984 before
August will receive the special issue on Ill-Formed Input, even though it is
formally part of the volume for 1983.

To become a member, simply send your name, address and a check made out to
the Association for Computational Linguistics to:

                  Don Walker, ACL membership
                  SRI International
                  333 Ravenswood Avenue
                  Menlo Park, CA 94025, USA

People in Europe or with Swiss accounts can pay an equivalent value in Swiss
francs, by personal check in their own currency, or by a banker's draft that
credits account number 141.880.LAV at the Union Bank of Switzerland, 8 rue
de Rhone, CH-1211 Geneva 11, SWITZERLAND; send the statement with payment or
with a copy of the bank draft to:

                  Mike Rosner, ACL
                  ISSCO
                  54, route des Acacias
                  CH-1227 Geneva, SWITZERLAND

------------------------------

Date: Wednesday, 8 February 1984, 14:28-EST
From: Gerald R. Barber <JERRYB at MIT-OZ>
Subject: Course Announcement: Organization Design

                     [Forwarded by SASW@MIT-MC.]

The following is an announcement for a course that Tom Malone and I are
organizing for this spring term.  Anyone who is interested can come to
the course or contact:

        Tom Malone
        Malone@XX
        E53-307, x6843,
        or
        Jerry Barber
        Jerryb@OZ
        NE43-809, x5871



                          Course Announcement
                       15.963 Organization Design

                  Wednesdays, 2:30 - 5:30 p.m, E51-016
                          Prof. Thomas Malone

In this graduate seminar we will review research from a number of
fields, identifying general principles of organization design that apply
to many kinds of information processing systems, including human
organizations and computer systems.  This novel approach will integrate
examples and theories from computer science, artificial intelligence,
organization theory and economics.  The seminar will also include
discussion of several special issues that arise when these general
principles are applied to designing organizations that include both
people and computers.

A partial list of topics includes:

I.  Introduction
        A. What is an organization?
                Scott, March & Simon, Etzioni, etc
        B. What is design?
                Simon: Science of Design

II. Alternative Organizational Designs
        A. Markets
                Computer Systems: Contract Nets, Enterprise
                Organizational Theories: Simon, Arrow, Hurwicz
        B.  Hierarchies
                Computer Systems: Structured programming, inheritance
                  hierarchies
                Organizational Theories: Simon, March, Cyert, Galbraith,
                  Williamson
        C. Cooperating experts (or teams)
                Computer Systems: Hearsay, Ether, Actors, Smalltalk, Omega
                Organizational Theories: Marschak & Radner, Minsky & Papert

III. Integrating Computer Systems and Human Organizations
        A. Techniques for analyzing organizational needs
                Office Analysis Methodology, Critical Success Factors,
                Information Control Networks, Sociotechnical systems
        B. Possible technologies for supporting organizational problem-solving
                Computer conferencing, Knowledge-based systems

------------------------------

Date: Thu 2 Feb 84 20:35:47-PST
From: Pereira@SRI-AI
Subject: Natural Language and Logic Programming


                           Call for Papers

                      International Workshop On
                     Natural Language Understanding
                        and Logic Programming

                Rennes, France - September 18-20, 1984

The workshop will consider fundamental principles and important
innovations in the design, definition, uses and extensions of logic
programming for natural language understanding and, conversely, the
adequacy of logic programming to express natural language grammar
formalisms. The topics of interest are:

* Formal representations of natural language
* Logic grammar formalisms
* Linguistic aspects (anaphora, coordination,...)
* Analysis methods
* Natural language generation
* Uses of techniques for logic grammars (unification)
  in other grammar formalisms
* Compilers and interpreters for grammar formalisms
* Text comprehension
* Applications: natural-language front ends (database
  interrogation, dialogues with expert systems...)

Conference Chairperson

Veronica Dahl  Simon Fraser University,
               Burnaby B.C. V5A 1S6
               Canada

Program Committee

H. Abrahamson (UBC, Canada)        F. Pereira (SRI, USA)
A. Colmerauer (GIA, France)        L. Pereira (UNL, Portugal)
V. Dahl (Simon Fraser U., Canada)  P. Sabatier (CNRS, France)
P. Deransart (INRIA, France)       P. Saint-Dizier (IRISA, France)
M. Gross (LADL, France)            C. Sedogbo (Bull, France)
M. McCord (IBM, USA)

Sponsored by: IRISA, Groupe BULL, INRIA

Deadlines:

        April 15:       Submission of papers in final form
        June 10:        Notification of acceptance to authors
        July 10:        Registration in the Workshop

Submission of papers:

Papers should contain the following items: abstract and title of
paper, author name, country, affiliation, mailing address and
phone (or telex) number, one program area and the following
signed statement: ``The paper will be presented at the Workshop
by one of the authors''.

Summaries should explain what is new or interesting about
the work and what has been accomplished. Papers must report
recent and not yet published work.

Please send 7 copies of a 5 to 10 page single spaced manuscript,
including a 150 to 200 word abstract to:

-- Patrick Saint-Dizier
   Local Organizing Committee
   IRISA - Campus de Beaulieu
   F-35042 Rennes CEDEX - France
   Tel: (99)362000 Telex: 950473 F

------------------------------

Date: Sat, 4 Feb 84 10:18 cst
From: Bruce Shriver <ShriverBD.usl@Rand-Relay>
Subject: call for papers announcement

                              Eighteenth Annual
                       HAWAII INTERNATIONAL CONFERENCE
                                      ON
                               SYSTEM SCIENCES
                     JANUARY 2-4, 1985 / HONOLULU, HAWAII

This is the eighteenth in a series  of  conferences  devoted  to  advances  in
information  and  system sciences.  The conference will encompass developments
in theory or practice in the areas of  COMPUTER  HARDWARE  and  SOFTWARE,  and
advanced  computer  systems  applications in selected areas.  Special emphasis
will be devoted to MEDICAL  INFORMATION  PROCESSING,  computer-based  DECISION
SUPPORT SYSTEMS for upper-level managers in organizations, and KNOWLEDGE-BASED
SYSTEMS.

                               CALL FOR PAPERS

Papers are invited in the preceding and related areas and may be theoretical,
conceptual,  tutorial  or descriptive in nature.  The papers submitted will be
refereed and those selected for conference presentation will be printed in the
CONFERENCE PROCEEDINGS; therefore, papers submitted for presentation must  not
have  been  previously presented or published.  Authors of selected papers are
expected to attend the conference to  present  and  discuss  the  papers  with
attendees.

Relevant topics include:
                                  Deadlines
HARDWARE                          * Abstracts may be submitted to track
* Distributed Processing            chairpersons for guidance and indication
* Mini-Micro Systems                of appropriate content by MAY 1, 1984.
* Interactive Systems               (Abstract is required for Medical
* Personal Computing                Information Processing Track.)
* Data Communication              * Full papers must be mailed to appropriate
* Graphics                          track chairperson by JULY 6, 1984.
* User-Interface Technologies     * Notification of Accepted papers will be
                                    mailed to the author on or before
SOFTWARE                            SEPTEMBER 7, 1984.
* Software Design Tools &         * Final papers in camera-ready form will
  Techniques                        be due by OCTOBER 19, 1984.
* Specification Techniques
* Testing and Validation
* Performance Measurement &       Instructions for Submitting Papers
  Modeling                        1. Submit three copies of the full paper,
* Formal Verification                not to exceed 20 double-spaced pages,
* Management of Software             including diagrams, directly to the
  Development                        appropriate track chairperson listed
                                     below, or if in doubt, to the conference
APPLICATIONS                         co-chairpersons.
* Medical Information             2. Each paper should have a title page
  Processing Systems                 which includes the title of the paper,
* Computer-Based Decision            full name of its author(s), affili-
  Support Systems                    ation(s), complete address(es), and
* Management Information Systems     telephone number(s).
* Data-Base Systems for           3. The first page should include the
  Decision Support                   title and a 200-word abstract of the
* Knowledge-Based Systems            paper.

                                   SPONSORS
The  Eighteenth  Annual  Hawaii  International Conference on System Science is
sponsored by the University of  Hawaii  and  the  University  of  Southwestern
Louisiana, in cooperation with the ACM and the IEEE Computer Society.

HARDWARE                            All Other Papers
Edmond L. Gallizzi                  Papers not clearly within one of the
HICSS-18 Track Chairperson          aforementioned tracks should be mailed
Eckerd College                      to:
St. Petersburg, FL 33733            Ralph H. Sprague, Jr.
(813) 867-1166                      HICSS-18 Conference Co-chairperson
                                    College of Business Administration
SOFTWARE                            University of Hawaii
Bruce D. Shriver                    2404 Maile Way, E-303
HICSS-18 Track Chairperson          Honolulu, HI 96822
Computer Science Dept.              (808)948-7430
U. of Southwestern Louisiana
P. O. Box 44330
Lafayette, LA 70504                 Conference Co-Chairpersons
(318) 231-6284                      RALPH H. SPRAGUE, JR.
                                    BRUCE D. SHRIVER
DECISION SUPPORT SYSTEM &
KNOWLEDGE-BASED SYSTEMS             Contributing Sponsor Coordinator
Joyce Elam                          RALPH R. GRAMS
HICSS-18 Track Chairperson          College of Medicine
Dept. of General Business           Department of Pathology
BEB 600                             University of Florida
U. of Texas at Austin               Box J-275
Austin, TX 78712                    Gainesville, FL 32610
(512) 471-3322                      (904) 392-4571

MEDICAL INFORMATION PROCESSING      FOR FURTHER INFORMATION
Terry M. Walker                     Concerning Conference Logistics
HICSS-18 Track Chairperson          NEM B. LAU
Computer Science Dept.              HICSS-18 Conference Coordinator
U. of Southwestern Louisiana        Center for Executive Development
P. O. Box 44330                     College of Business Administration
Lafayette, LA 70504                 University of Hawaii
(318) 231-6284                      2404 Maile Way, C-202
                                    Honolulu, HI 96822
                                    (808) 948-7396
                                    Telex: RCA 8216 UHCED    Cable: UNIHAW

The HICSS conference is a non-profit activity organized to provide a forum for
the  interchange of ideas, techniques, and applications among practitioners of
the system sciences.  It maintains objectivity to the systems sciences without
obligation to any commercial  enterprise.   All  attendees  and  speakers  are
expected  to  have  their  respective companies, organizations or universities
bear the costs of their expenses and registration fees.

------------------------------

End of AIList Digest
********************

∂11-Feb-84  2236	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #17
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Feb 84  22:34:28 PST
Date: Sat 11 Feb 1984 20:58-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #17
To: AIList@SRI-AI


AIList Digest            Sunday, 12 Feb 1984       Volume 2 : Issue 17

Today's Topics:
  Jargon - Glossary of NASA Terminology,
  Humor - Programming Languages
----------------------------------------------------------------------

Date: 23 Jan 84 7:41:17-PST (Mon)
From: hplabs!hao!seismo!flinn @ Ucb-Vax
Subject: Glossary of NASA Terminology

[Reprinted from the Space Digest by permission of the author.
This strikes me as an interesting example of a "natural sublanguage."
It does not reflect the growth and change of NASA jargon, however:
subsequent discussion on the Space Digest indicates that many of the
terms date back eight years and many newer terms are missing.  The
author and others are continuing to add to the list. -- KIL]


        I've been collecting examples of the jargon in common use by
people at NASA Headquarters.  Here is the collection so far:
I have not made any of these up.  I'd be glad to hear of worthy
additions to the collection.

        The 'standard NASA noun modifiers' are nouns used as
adjectives in phrases like 'science community' or 'planetary area.'
Definitions have been omitted for entries whose meaning ought to be
clear.

        -- Ted Flinn

Action Item
Actors in the Program
Ancillary
Ankle: 'Get your ankles bitten' = running into unexpected trouble.
Ant: 'Which ant is steering this log?' = which office is in charge
        of a project.
Appendice (pronounced ap-pen-di-see):  some people, never having
        seen a document with only one appendix, think that this
        is the singular of 'appendices.'
Area:  Always as 'X Area,' where X is one of the standard NASA
        noun modifiers.
Asterick:  pronounced this way more often than not.
Back Burner
Bag It: 'It's in the bag' = it's finished.
Ball of Wax
Baseline: verb or noun.
Basis:  Always as 'X Basis,' where X is one of the standard NASA
         noun modifiers.
Bean Counters:  financial management people.
Bed: 'Completely out of bed' = said of people whose opinions
        are probably incorrect.
Belly Buttons: employees.
Bench Scientists
Bend Metal:  verb, to construct hardware.
Bending Your Pick:  unrewarding activity.
Bent Out of Shape:  disturbed or upset, of a person.
Big Picture
Big-Picture Purposes
Bite the Bullet
Big-Ticket Item: one of the expensive parts.
Black-belt Bureaucrat:  an experienced and knowledgeable government
        employee.
Bless: verb, to approve at a high level of management.
Blow One's Skirts Up:  usually negative: 'that didn't blow
        their skirts up' = that didn't upset them.
Blow Smoke:  verb, to obfuscate.
Blown Out of the Water
Bottom Line
Bounce Off: to discuss an idea with someone else.
Brassboard (see Breadboard).
Breadboard (see Brassboard).
Bullet: one of the paragraphs or lines on a viewgraph, which are
         *never* numbered, but always labelled with a bullet.
Bulletize:  to make an outline suitable for a viewgraph.
Bureaucratic Hurdles
Burn:  verb, to score points off a competitor.
Burning Factor:  one of the critical elements.
Calibrate:  verb, to judge the capabilities of people or
              organizations.
Camel's Nose in the Tent
Can of Worms
Canned:  finished, as 'it's in the can.'
Can't Get There From Here.
Capture a Mission:  verb, to construct a launch vehicle for
                        a space flight.
Carve Up the Turkey
Caveat:  usually a noun.
Centers:  'on N-week centers' = at N-week intervals.
Choir, Preaching to the
Clock is Ticking = time is getting short.
Code:  Every section at NASA centers or Headquarters has a label
        consisting of one or more letters or numbers, and in
        conversations or less formal memos, sections are always
        referred to by the code rather than the name:
        Code LI, Code 931, Code EE, etc.
Commonality
Community:  'X Community,' where X is one of the standard NASA
                noun modifiers.
Concept:  'X Concept,' where X is one of the standard NASA
                noun modifiers.
Concur: verb, to agree.
Configure:  verb.
Constant Dollars:  cost without taking inflation into account
        (see Real-Year Dollars).
Contract Out
Core X:  The more important parts of X, where X is one of the
          nouns used as modifiers.
Correlative
Cost-Benefit Tradeoff
Cross-Cut:  verb, to look at something a different way.
Crump:  transitive verb, to cause to collapse.
Crutch: flimsy argument.
Cut Orders:  to fill out a travel order form; left over from the
                days when this was done with mimeograph stencils.
Cutting Edge
Data Base
Data Dump:  a report made to others, usually one's own group.
Data Point:  an item of information.
Debrief:  transitive verb, to report to one's own staff after
            an outside meeting.
Deep Yoghurt:  bad trouble.
Definitize:  verb, to make precise or definite.
De-integrate:  verb, to take apart (not dis-).
De-lid:  verb, to take the top off an instrument.
Delta:  an increment to cost or content.
Descope:  verb, to redesign a project as a result of budget
           cuts (not the opposite of scope, q.v.).
Development Concept
Dialog:  transitive verb.
Disadvantage:  transitive verb.
Disgruntee:  non-NASA person unhappy with program decisions.
Dog's Breakfast
Dollar-Limited
Driver:  an item making up a significant part of cost or
           schedule: 'X is the cost driver.'
Drop-Dead Date:  the real deadline; see 'hard deadline.'
Ducks in a Row
Egg on One's Face
End Item:  product.
End-Run the System
End to End
Extent to Which
Extramural
Facilitize:  verb, to make a facility out of something.
Factor in:  verb.
Feedback:  reaction of another section or organization to
             a proposition.
Fill This Square
Finalize
Finesse The System
First Cut:  preliminary estimate.
Fiscal Constraints
Flag:  verb, to make note of something for future reference.
Flagship Program
Flex the Parameters
Flux and Change
What Will Fly:  'see if it will fly.'
Folded In:  taken into account.
Forest: miss the f. for the trees.
Forgiving, unforgiving:  of a physical system.
Front Office
Full-Up:  at peak level.
Future:  promise or potential, as, 'a lot of potential future.'
Futuristic
Gangbusters
Glitch
Grease the Skids
Green Door:  'behind the green door' = in the Administrator's offices.
Go to Bat For
Goal:  contrasted to 'objective,' q.v.
Grabber
Gross Outline:  approximation.
Ground Floor
Group Shoot = brainstorming session.
Guidelines:  always desirable to have.
Guy:  an inanimate object such as a data point.
Hack:  'get a hack on X' = make some kind of estimate.
Hard Copy:  paper, as contrasted to viewgraphs.
Hard Deadline:  supposed deadline; never met.
Hard Over:  intransigent.
Head Counters:  personnel office staff.
Hit X Hard:  concentrate on X.
Hoop:  a step in realizing a program:  'yet to go through this hoop.'
Humanoid
Hypergolic:  of a person: intransigent or upset in general.
Impact:  verb.
Implement:  verb.
In-House
Initialize
Innovative
Intensive:  always as X-intensive.
Intercompare:  always used instead of 'compare.'
Issue:  always used instead of 'problem.'
Key:  adj., of issues:  'key issue; not particularly key'.
Knickers:  'get into their knickers' = to interfere with them.
Laicize: verb, to describe in terms comprehensible to lay people.
Lashup = rackup.
Lay Track:  to make an impression on management ('we laid a lot
                of track with the Administrator').
Learning Curve
Liaise:  verb.
Limited:  always as X-limited.
Line Item
Link Calculation
Liberate Resources:  to divert funds from something else.
Looked At:  'the X area is being looked at' = being studied.
Loop:  to be in the loop = to be informed.
Love It!   exclamation of approval.
Low-Cost
Machine = spacecraft.
Man-Attended Experiment
Marching Orders
Matrix
Micromanagement = a tendency to get involved in management of
                        affairs two or more levels down from
                        one's own area of responsibility.
Milestone
Mission Definition
Mode:  'in an X mode.'
Model-Dependent
Muscle:  'get all the muscle into X'
Music:  'let's all read from the same sheet of music.'
Necessitate
Nominal:  according to expectation.
Nominative:  adj., meaning unknown.
Nonconcur:  verb, to disagree.
Numb Nut:  unskilled or incapable person.
Objective:  as contrasted with 'goal' (q.v.)
Overarching Objective
Oblectation
Off-Load:  verb.
On Board:  'Y is on board' = the participation of Y is assured.
On-Boards:  employees or participants.
On Leave:  on vacation.
On the Part Of
On Travel:  out of town.
Open Loop
Out-of-House
Over Guidelines
Ox:  'depends on whose ox is gored.'
Package
Paradigm
Parking Orbit:  temporary assignment or employment.
Pathfinder Studies
Pedigree:  history of accumulation of non-NASA support for a mission.
Peg to Hang X On
Pie:  'another slice through this same pie is...'
Piece of the Action
Ping On:  verb, to remind someone of something they were
           supposed to do.
Pitch:  a presentation to management.
Placekeeper
Planning Exercise
Pony in This Pile of Manure Somewhere = some part of this mess
        may be salvageable.
Posture
Pre-Posthumous
Prioritize
Priority Listing
Problem Being Worked:  'we're working that problem.'
Problem Areas
Product = end item.
Programmatic
Pucker Factor:  degree of apprehension.
Pull One's Tongue Through One's Nose:  give someone a hard time.
Pulse:  verb, as, 'pulse the system.'
Quick Look
Rackup = lashup.
Rainmaker:  an employee able to get approval for budget increases
                or new missions.
Rapee: a person on the receiving end of an unfavorable decision.
Rattle the Cage:  'that will rattle their cage.'
Real-Year Dollars: cost taking inflation into account, as
        contrasted with 'constant dollars.'
Reclama
Refugee:  a person transferred from another program.
Report Out:  verb, used for 'report.'
Resources = money.
Resource-Intensive = expensive.
ROM: 'rough order of magnitude,' of estimates.
Rubric
Runout
Sales Pitch
Scenario
Scope:  verb, to attempt to understand something.
Scoped Out:  pp., understood.
Secular = non-scientific or non-technological.
Self-Serving
Sense:  noun, used instead of 'consensus.'
Shopping List
Show Stopper
Sign Off On something = approve.
Space Cadets:  NASA employees.
Space Winnies or Wieners:  ditto, but even more derogatory.
X-Specific
Speak to X:  to comment on X, where X is a subject, not a person.
Specificity
Speed, Up To
Spinning One's Wheels
Spooks:  DOD or similar people from other agencies.
Staff:  verb.
Standpoint:  'from an X standpoint'
Statussed:  adj., as, 'that has been statussed.'
Strap On:  verb, to try out:  'strap on this idea...'
Strawman
String to One's Bow
Street, On The:  distributed outside one's own office.
Stroking
Structure: verb.
Subsume
Success-Oriented:  no provision for possible trouble.
Surface:  verb, to bring up a problem.
Surveille: verb.
Suspense Date:  the mildest form of imaginary deadline.
Tail:  to have one's tail in a crack = to be upset or in trouble.
Tall Pole in the Tent:  data anomaly.
Tar With the Same Brush
On Target
Task Force
Team All Set Up
Tickler = reminder.
Tiger Team
Time-Critical:  something likely to cause schedule trouble.
Time Frame
Torque the System
Total X, where X is one of the standard NASA noun modifiers.
Total X Picture
Truth Model
Unique
Update:  noun or verb.
Up-Front:  adj.
Upscale
Upper Management
Vector:  verb.
Vector a Program:  to direct it toward some objective.
Ventilate the Issues:  to discuss problems.
Versatilify:  verb, to make something more versatile.
Viable: adj., something that might work or might be acceptable.
Viewgraph:  always mandatory in any presentation.
Viz-a-Viz
WAG = wild-assed guess.
Wall to Wall:  adj., pervasive.
Watch:  'didn't happen on my watch...'
Water Off a Duck's Back
Waterfall Chart:  one way of presenting costs vs. time.
I'm Not Waving, I'm Drowning
Wedge; Planning Wedge:  available future-year money.
Been to the Well
Where Coming From
Whole Nine Yards
X-Wide
X-wise
Workaround:  way to overcome a problem.
Wrapped Around the Axle:  disturbed or upset.

------------------------------

Date: Wed 8 Feb 84 07:14:34-CST
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: The Best Languages in Town!!! (forwarded from USENET)

                [Reprinted from the UTexas-20 bboard.]

From: bradley!brad    Feb  6 16:56:00 1984

                               Laidback with (a) Fifth
                               By  John Unger Zussman
                            From Info World, Oct 4, 1982


              Basic, Fortran, Cobol... These programming languages are well
          known  and (more or less)  well loved throughout the computer in-
          dustry.  There are numerous other languages,  however,  that  are
          less  well  known yet still have ardent devotees.  In fact, these
          little-known languages generally have the most fanatic  admirers.
          For  those  who wish to know more about these obscure languages -
          and why they are obscure - I present the following catalog.

              SIMPLE ... SIMPLE is an acronym for Sheer Idiot's  Mono  Pur-
          pose   Programming   Linguistic   Environment.    This  language,
          developed at the Hanover College for Technological  Misfits,  was
          designed  to  make it impossible to write code with errors in it.
          The statements are, therefore, confined to BEGIN, END, and  STOP.
          No matter how you arrange the statements, you can't make a syntax
          error.

              Programs written in  SIMPLE  do  nothing  useful.  Thus  they
          achieve  the  results  of  programs  written  in  other languages
          without the tedious, frustrating process of  testing  and  debug-
          ging.

              SLOBOL ... SLOBOL is best known for the speed, or lack of it,
          of  its  compiler.   Although  many compilers allow you to take a
          coffee break while they compile, SLOBOL compilers  allow  you  to
          take  a  trip to Bolivia to pick up the coffee.  Forty-three pro-
          grammers are known to have died of boredom sitting at their  ter-
          minals while waiting for a SLOBOL program to compile.  Weary SLO-
          BOL programmers often turn to a related (but  infinitely  faster)
          language, COCAINE.

              VALGOL ... (With special thanks to Dan and Betsy "Moon  Unit"
          Pfau)  -  From its modest beginnings in southern California's San
          Fernando Valley, VALGOL is enjoying a dramatic surge of populari-
          ty across the industry.

              VALGOL commands include REALLY, LIKE, WELL and Y$KNOW.  Vari-
          ables are assigned with the  =LIKE and =TOTALLY operators.  Other
          operators include the "CALIFORNIA BOOLEANS", FERSURE, and  NOWAY.
          Repetitions of code are handled in FOR-SURE loops. Here is a sam-
          ple VALGOL program:
                    14 LIKE, Y$KNOW (I MEAN) START
                    %% IF
                    PI A =LIKE BITCHEN AND
                    01 B =LIKE TUBULAR AND
                    9  C =LIKE GRODY**MAX
                    4K (FERSURE)**2
                    18 THEN
                    4I FOR I=LIKE 1 TO OH MAYBE 100
                    86 DO WAH + (DITTY**2)
                    9  BARF(I) =TOTALLY GROSS(OUT)
                    -17 SURE
                    1F LIKE BAG THIS PROGRAM
                    ?  REALLY
                    $$ LIKE TOTALLY (Y*KNOW)

              VALGOL is characterized by  its  unfriendly  error  messages.
          For  example, when the user makes a syntax error, the interpreter
          displays the message, GAG ME WITH A SPOON!

              LAIDBACK ... Historically, VALGOL is a  derivative  of  LAID-
          BACK,  which  was  developed  at  the  (now defunct) Marin County
          Center for T'ai Chi, Mellowness, and Computer Programming, as  an
          alternative to the more intense atmosphere in nearby Silicon
          Valley.

              The center was ideal for programmers who liked to soak in hot
          tubs  while  they  worked.   Unfortunately, few programmers could
          survive there for long, since the center outlawed  pizza  and  RC
          Cola in favor of bean curd and Perrier.

              Many mourn the demise of LAIDBACK because of  its  reputation
          as  a  gentle  and nonthreatening language.  For example, LAIDBACK
          responded to syntax errors with the message, SORRY MAN,  I  CAN'T
          DEAL WITH THAT.

              SARTRE ... Named  after  the  late  existential  philosopher.
          SARTRE  is an extremely unstructured language. Statements in SAR-
          TRE have no purpose; they just are there. Thus,  SARTRE  programs
          are  left to define their own functions.  SARTRE programmers tend
          to be boring and depressed and are no fun at parties.

              FIFTH ... FIFTH is a precision mathematical language in which
          the  data types refer to quantity.  The data types range from CC,
          OUNCE,  SHOT,  and  JIGGER  to  FIFTH  (hence  the  name  of  the
          language),  LITER,  MAGNUM,  and  BLOTTO.   Commands refer to in-
          gredients such as CHABLIS, CHARDONNAY, CABERNET,  GIN,  VERMOUTH,
          VODKA, SCOTCH and WHATEVERSAROUND.

              The many versions of the FIFTH language reflect the sophisti-
          cation  and financial status of its users.  Commands in the ELITE
          dialect include VSOP and LAFITE, while commands in the GUTTER di-
          alect  include  HOOTCH  and  RIPPLE.  The latter is a favorite of
          frustrated FORTH programmers who end up using the language.

              C- ... This language was named for the grade received by  its
          creator  when  he  submitted  it as a class project in a graduate
          programming class.  C- is best described as  a  "Low-Level"  pro-
          gramming language.  In fact, the language generally requires more
          C- statements than machine-code statements  to  execute  a  given
          task.  In this respect, it is very similar to COBOL.

              LITHP ... This otherwise unremarkable language is distin-
          guished by the absence of an "s" in its character set.  Pro-
          grammers and users must substitute "TH".  LITHP is said to be
          useful in prothething lithtth.

              DOGO ... Developed at the Massachusetts Institute of Obedi-
          ence Training.  DOGO heralds a new era of computer-literate pets.
          DOGO commands include SIT, STAY, HEEL and ROLL OVER.  An  innova-
          tive feature of DOGO is "PUPPY GRAPHICS", in which a small cocker
          spaniel occasionally leaves a deposit as he  travels  across  the
          screen.

                              Submitted By Ian and Tony Goldsmith

------------------------------

End of AIList Digest
********************

∂11-Feb-84  2320	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #18
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Feb 84  23:19:50 PST
Date: Sat 11 Feb 1984 21:32-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #18
To: AIList@SRI-AI


AIList Digest            Sunday, 12 Feb 1984       Volume 2 : Issue 18

Today's Topics:
  AI and Meteorology -  Summary of Responses
----------------------------------------------------------------------

Date: 11 Jan 84 16:07:00-PST (Wed)
From: ihnp4!fortune!rpw3 @ Ucb-Vax
Subject: Re: AI and Weather Forecasting - (nf)
Article-I.D.: fortune.2249

As for the desirability of using AI on the weather, it seems a bit
out of place, when there is rumoured to be a fairly straightforward
(if INCREDIBLY cpu-hungry) thermodynamic relaxation calculation that
gives very good results for 24 hr prediction. It uses as input the
various temperature, wind, and pressure readings from all of the U.S.
weather stations, including the ones cleverly hidden away aboard most
domestic DC-10's and L-1011's. Starting with those values as boundary
conditions, an iterative relaxation is done to fill in the cells of
the continental atmospheric model.

The joke, of course (no joke!), is that it takes 26 hrs to run on an
Illiac IV (somebody from Ames or NOAA or somewhere correct me, please).
The accuracy
goes up as the cell size in the model goes down, but the runtime goes up as
the cube! So you can look out the window, wait 2 hours, and say, "Yup,
the model was right."
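
[The relaxation step described above can be illustrated with a toy
example.  This is a generic Jacobi relaxation on a small 2-D grid, not
the actual atmospheric model: boundary cells play the role of fixed
station readings, and interior cells are repeatedly averaged until the
field settles. -- KIL]

```python
# Toy Jacobi relaxation: boundary cells are held fixed (the "station
# readings"); each interior cell is repeatedly replaced by the average
# of its four neighbors until the largest change falls below a tolerance.
def relax(grid, tol=1e-6, max_iters=10000):
    n, m = len(grid), len(grid[0])
    for _ in range(max_iters):
        new = [row[:] for row in grid]
        delta = 0.0
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                new[i][j] = 0.25 * (grid[i-1][j] + grid[i+1][j] +
                                    grid[i][j-1] + grid[i][j+1])
                delta = max(delta, abs(new[i][j] - grid[i][j]))
        grid = new
        if delta < tol:
            break
    return grid

# 4x4 grid: one edge reads 100, the rest of the boundary reads 0.
field = relax([[100.0] * 4] + [[0.0] * 4 for _ in range(3)])
```

Shrinking the cell size multiplies both the number of cells and the
iterations needed to converge, which is the kind of scaling behind the
runtime complaint above.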

My cynical prediction is that either (1) by the time we develop an
AI system that does as well, the deterministic systems will have
obsoleted it, or more likely (2) by the time we get an AI model with
the same accuracy, it will take 72 hours to run a 24 hour forecast!

Rob Warnock

UUCP:   {sri-unix,amd70,hpda,harpo,ihnp4,allegra}!fortune!rpw3
DDD:    (415)595-8444
USPS:   Fortune Systems Corp, 101 Twin Dolphins Drive, Redwood City, CA 94065

------------------------------

Date: 19 Jan 84 21:52:42-EST (Thu)
From: ucbtopaz!finnca1 @ Ucb-Vax
Subject: Re: "You cant go home again"
Article-I.D.: ucbtopaz.370

It seems to me (a phrase that is always a copout for the ill-informed;
nonetheless, I proceed) that the real payoff in expert systems for weather
forecasting would be to capture the knowledge of those pre-computer experts who,
with limited data and even fewer dollars, managed to develop their
pattern-recognition facilities to the point that they could FEEL what was
happening and forecast accordingly.

I was privileged to take some meteorology courses from such an oldster many
years ago, and it was, alas,  my short-sightedness about the computer revolution
in meteorology that prevented me from capturing some of his expertise, to
buzz a word or two.

Surely not ALL of these veterans have retired yet...what a service to science
someone would perform if only this expertise could be captured before it dies
off.

        ...ucbvax!lbl-csam!ra!daven    or
        whatever is on the header THIS time.

------------------------------

Date: 15 Jan 84 5:06:29-PST (Sun)
From: hplabs!zehntel!tektronix!ucbcad!ucbesvax.turner @ Ucb-Vax
Subject: Re: Re: You cant go home again - (nf)
Article-I.D.: ucbcad.1315

Re: finnca1@topaz's comments on weather forecasting

Replacing expertise with raw computer power has its shortcomings--the
"joke" of predicting the weather 24 hours from now in 26 hours of cpu
time is a case in point.  Less accurate but more timely forecasts used
to be made by people with slide-rules--and where are these people now?

It wouldn't surprise me if the 20th century had its share of "lost arts".
Archaeologists still dig up things that we don't know quite how to make,
and the technological historians of the next century might well be faced
with the same sorts of puzzles when reading about how people got by
without computers.

Michael Turner (ucbvax!ucbesvax.turner)

------------------------------

Date: Wed 8 Feb 84 15:29:01-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Summary of Responses

The following is a summary of the responses to my AIList request for
information on AI and meteorology, spatial and temporal reasoning, and
related matters.  I have tried to summarize the net messages accurately,
but I may have made some unwarranted inferences about affiliations,
gender, or other matters that were not explicit in the messages.

The citations below should certainly not be considered comprehensive,
either for the scientific literature as a whole or for the AI literature.
There has been relevant work in pattern recognition and image understanding
(e.g., the work at SRI on tracking clouds in satellite images), mapping,
database systems, etc.  I have not had time to scan even my own collection
of literature (PRIP, CVPR, PR, PAMI, IJCAI, AAAI, etc.) for relevant
articles, and I have not sought out bibliographies or done online searches
in the traditional meteorological literature.  Still, I hope these
comments will be of use.

                        ------------------

Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
reports that he and Alistair Frazer (Penn State Meteo Dept.) are advising
two meteorology/CS students who want to do senior/masters theses in AI.
They have submitted a proposal and expect to hear from NSF in a few months.


Capt. Roslyn (Roz) J. Taylor, Applied AI Project Officer, USAF, @RADC,
has read two of the Gaffney/Racer papers entitled "A Learning Interpretive
Decision Algorithm for Severe Storm Forecasting."  She found the algorithm
to be a "fuzzy math"-based fine-tuning algorithm in much the same spirit
as a Kalman filter.  The algorithm might be useful as the numerical
predictor in an expert system.


Jay Glicksman of the Texas Instruments Computer Science Lab suggests
that we check out

  Kawaguchi, E. et al. (1979)
  An Understanding System of Natural Language and Pictorial Pattern in
  the World of Weather Reports
  IJCAI-6 Tokyo, pp. 469-474

It does not provide many details and he has not seen a follow-up, but
the paper may give some leads.  This paper is evidently related to the
Taniguchi et al. paper in the 6th Pat. Rec. proceedings that I mentioned
in my query.

Dr. John Tsotsos and his students at the Univ. of Toronto Laboratory for
Computational Medicine have been working for several years on the ALVEN
system to interpret heart images in X-ray films.  Dr. Tsotsos feels that the
spatial and temporal reasoning capabilities of the system would be of use in
meteorology.  The temporal reasoning includes intervals, points,
hierarchies, and temporal sampling considerations.  He has sent me the
following reports:

  R. Gershon, Y. Ali, and M. Jenkin, An Explanation System for Frame-based
  Knowledge Organized Along Multiple Dimensions, LCM-TR83-2, Dec. 1983.

  J.K. Tsotsos, Knowledge Organization: Its Role in Representation,
  Decision-making and Explanation Schemes for Expert Systems, LCM-TR83-3,
  Dec. 1983.

  J.K. Tsotsos, Representational Axes and Temporal Cooperative Processes,
  Preliminary Draft.

I regret that I have found time for only a cursory examination of these papers,
and so cannot say whether they will be useful in themselves for meteorology
or only as a source of further references in spatial and temporal reasoning.
Someone else in my group is now taking a look at them.  Other papers from
Dr. Tsotsos' group may be found in IJCAI 77, 79, and 81; PRIP 81; ICPR 82;
PAMI Nov. 80; and IEEE Computer Oct. 83.


Stuart C. Shapiro at the Univ. of Buffalo (SUNY) CS Dept. added the
following reference on temporal reasoning:

  Almeida, M. J., and Shapiro, S. C., Reasoning about the temporal
  structure of narrative texts.  Proceedings of the Fifth Annual Meeting
  of the Cognitive Science Society, Rochester, NY, 1983.


Fanya S. Montalvo at MIT echoed my interest in

  * knowledge representations for spatial/temporal reasoning;
  * inference methods for estimating meteorological variables
    from (spatially and temporally) sparse data;
  * methods of interfacing symbolic knowledge and heuristic
    reasoning with numerical simulation models;
  * a bibliography or guide to relevant literature.

She reports that good research along these lines is very scarce, but
suggests the following:

  As far as interfacing symbolic knowledge and heuristic reasoning with
  numerical simulation, Weyhrauch's FOL system is the best formalism I've
  seen/worked-with to do that.  Unfortunately there are few references to it.
  One is Filman, Lamping, & Montalvo in IJCAI'83.  Unfortunately it was too
  short.  There's a reference to Weyhrauch's Prolegomena paper in there.  Also
  there is Wood's, Greenfeld's, and Zdybel's work at BBN with KLONE and a ship
  location database; they're no longer there.  There's also Mark Friedell's
  Thesis from Case Western Reserve; see his SIGGRAPH'83 article, also
  references to Greenfeld & Yonke there.  Oh, yes, there's also Reid Simmons,
  here at MIT, on a system connecting diagrams in geologic histories with
  symbolic descriptions, AAAI'83.  The work is really in bits and pieces and
  hasn't really been put together as a whole working formalism yet.  The
  issues are hard.


Jim Hendler at Brown reports that Drew McDermott has recently written
several papers about temporal and spatial reasoning.  The best one on
temporal reasoning was published in Cognitive Science about a year ago.
Also, one of Drew's students at Yale recently did a thesis on spatial
reasoning.


David M. Axler, MSCF Applications Manager at Univ. of Pennsylvania, suggests:

  A great deal of info about weather already exists in a densely-encoded form,
  namely proverbs and traditional maxims.  Is there a way that this system can
  be converted to an expert system, if for no other reason than potential
  comparison between the analysis it provides with that gained from more
  formal meteorological approaches?

  If this is of interest, I can provide leads to collections of weather lore,
  proverbs, and the like.  If you're actually based at SRI, you're near
  several of the major folklore libraries and should have relatively easy
  access (California is the only state in the union with two grad programs in
  the field, one at Berkeley (under the anthro dept.), and one at UCLA) to the
  material, as both schools have decent collections.

I replied:

  The use of folklore maxims is a good idea, and one fairly easy to build
  into an expert system for prediction of weather at a single site.  (The
  user would have to enter observations such as "red sky at night" since
  pattern recognition couldn't be used.  Given that, I suspect that a
  Prospector-style inference net could be built that would simultaneously
  evaluate hypotheses of "rain", "fog", etc., for multiple time windows.)
  Construction of the system and evaluation of the individual rules would
  make an excellent thesis project.

  Unfortunately, I doubt that the National Weather Service or other such
  organization would be interested in having SRI build such a "toy"
  system.  They would be more interested in methods for tracking storm
  fronts and either automating or improving on the map products they
  currently produce.

  As a compromise, one project we have been considering is to automate
  a book of weather forecasting rules for professional forecasters.
  Such rule books do exist, but the pressures of daily forecasting are
  such that the books are rarely consulted.  Perhaps some pattern
  recognition combined with some man-machine dialog could trigger the
  expert system rules that would remind the user of relevant passages.

Dave liked the project, and suggested that there may be additional unofficial
rule sources such as those used by the Farmer's Almanac publishers.
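
[The Prospector-style evaluation mentioned in my reply can be sketched
in a few lines.  The maxims and likelihood ratios below are invented
for illustration only; Prospector's actual inference nets also handle
negative evidence and uncertain observations. -- KIL]

```python
# Prospector-style evidence combination: each observed maxim multiplies
# the odds of a hypothesis by its likelihood ratio.
def posterior(prior_prob, likelihood_ratios):
    odds = prior_prob / (1.0 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)  # convert odds back to a probability

# Invented rules: a ratio > 1 favors the "rain" hypothesis,
# a ratio < 1 counts against it.
rules = {
    "red sky at night": 0.5,      # "sailor's delight" -- evidence against
    "halo around the moon": 3.0,  # evidence for
}
observed = ["halo around the moon"]
p_rain = posterior(0.3, [rules[o] for o in observed])
```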


Philip Kahn at UCLA is interested in pattern recognition, and recommends
the book

  REMOTE SENSING: Optics and Optical Systems by Philip N. Slater
  Addison-Wesley Publ. Co., Reading, MA, 1980

for information on atmospherics, optics, films, testing/reliability, etc.


Alex Pang at UCLA is doing some non-AI image processing to aid weather
prediction.  He is interested in hearing about AI and meteorology.
Bill Havens at the University of British Columbia expressed interest,
particularly in methods that could be implemented on a personal computer.
Mike Uschold at Edinburgh and Noel Kropf at Columbia University (Seismology
Lab?) have also expressed interest.

                        ------------------

My thanks to all who replied.

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************

∂15-Feb-84  2052	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #19
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Feb 84  20:52:24 PST
Date: Tue 14 Feb 1984 17:27-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #19
To: AIList@SRI-AI


AIList Digest           Wednesday, 15 Feb 1984     Volume 2 : Issue 19

Today's Topics:
  Requests - OPS5 & IBM LISP,
  LISP - Timings,
  Bindings - G. Spencer-Brown,
  Knowledge Acquisition - Regrets,
  Alert - 4-Color Problem,
  Brain Theory - Definition,
  Seminars - Analogy & Causal Reasoning & Tutorial Discourse
----------------------------------------------------------------------

Date: Mon 13 Feb 84 10:06:53-PST
From: Ted Markowitz <G.TJM@SU-SCORE.ARPA>
Subject: OPS5 query

I'd like to find out some information on acquiring a copy of
the OPS5 system. Is there a purchase price, is it free-of-charge,
etc. Please send replies to

        G.TJM@SU-SCORE

Thanks.

--ted

------------------------------

Date: 1 Feb 1984 15:14:48 EST
From: Robert M. Simmons <simmons@EDN-UNIX>
Subject: lisp on ibm

Can anyone give me pointers to LISP systems that run on
IBM 370's under MVS?  Direct and indirect pointers are
welcome.

Bob Simmons
simmons@edn-unix

------------------------------

Date: 11 Feb 84 17:54:24 EST
From: John <Roach@RUTGERS.ARPA>
Subject: Timings of LISPs and Machines


I dug up these timings; they are a bit out of date but seem somewhat
more informative.  They were done by Dick Gabriel at SU-AI in 1982 and passed
along by Chuck Hedrick at Rutgers.  I have updated some of the times to
reflect current machines; these are marked with the date 1984.  All
machines were measured using the function -

an almost Takeuchi function as defined by John McCarthy

(defun tak (x y z)
       (cond ((not (< y x))
              z)
             (t (tak (tak (1- x) y z)
                     (tak (1- y) z x)
                     (tak (1- z) x y)))))
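
[For readers without a Lisp system handy, here is a direct Python
transcription of the same function -- mine, added for illustration;
Python timings are of course not comparable to the figures below. -- KIL]

```python
# Python transcription of the "almost Takeuchi" benchmark function.
# The benchmark call is tak(18, 12, 6), matching (tak 18. 12. 6.).
def tak(x, y, z):
    if not (y < x):
        return z
    return tak(tak(x - 1, y, z),
               tak(y - 1, z, x),
               tak(z - 1, x, y))
```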

------------------------------------------

(tak 18. 12. 6.)

On 11/750 in Franz ordinary arith     19.9   seconds compiled
On 11/780 in Franz with (nfc)(TAKF)   15.8   seconds compiled   (GJC time)
On Rutgers-20 in Interlisp/1984       13.8   seconds compiled
On 11/780 in Franz (nfc)               8.4   seconds compiled   (KIM time)
On 11/780 in Franz (nfc)               8.35  seconds compiled   (GJC time)
On 11/780 in Franz with (ffc)(TAKF)    7.5   seconds compiled   (GJC time)
On 11/750 in PSL, generic arith        7.1   seconds compiled
On MC (KL) in MacLisp (TAKF)           5.9   seconds compiled   (GJC time)
On Dolphin in InterLisp/1984           4.81  seconds compiled
On Vax 11/780 in InterLisp (load = 0)  4.24  seconds compiled
On Foonly F2 in MacLisp                4.1   seconds compiled
On Apollo (MC68000) PASCAL             3.8   seconds            (extra waits?)
On 11/750 in Franz, Fixnum arith       3.6   seconds compiled
On MIT CADR in ZetaLisp                3.16  seconds compiled   (GJC time)
On MIT CADR in ZetaLisp                3.1   seconds compiled   (ROD time)
On MIT CADR in ZetaLisp (TAKF)         3.1   seconds compiled   (GJC time)
On Apollo (MC68000) PSL SYSLISP        2.93  seconds compiled
On 11/780 in NIL (TAKF)                2.8   seconds compiled   (GJC time)
On 11/780 in NIL                       2.7   seconds compiled   (GJC time)
On 11/750 in C                         2.4   seconds
On Rutgers-20 in Interlisp/Block/84    2.225 seconds compiled
On 11/780 in Franz (ffc)               2.13  seconds compiled   (KIM time)
On 11/780 (Diablo) in Franz (ffc)      2.1   seconds compiled   (VRP time)
On 11/780 in Franz (ffc)               2.1   seconds compiled   (GJC time)
On 68000 in C                          1.9   seconds
On Utah-20 in PSL Generic arith        1.672 seconds compiled
On Dandelion in Interlisp/1984         1.65  seconds compiled
On 11/750 in PSL INUM arith            1.4   seconds compiled
On 11/780 (Diablo) in C                1.35  seconds
On 11/780 in Franz (lfc)               1.13  seconds compiled   (KIM time)
On UTAH-20 in Lisp 1.6                 1.1   seconds compiled
On UTAH-20 in PSL Inum arith           1.077 seconds compiled
On Rutgers-20 in Elisp                 1.063 seconds compiled
On Rutgers-20 in R/UCI lisp             .969 seconds compiled
On SAIL (KL) in MacLisp                 .832 seconds compiled
On SAIL in bummed MacLisp               .795 seconds compiled
On MC (KL) in MacLisp (TAKF,dcl)        .789 seconds compiled
On 68000 in machine language            .7   seconds
On MC (KL) in MacLisp (dcl)             .677 seconds compiled
On SAIL in bummed MacLisp (dcl)         .616 seconds compiled
On SAIL (KL) in MacLisp (dcl)           .564 seconds compiled
On Dorado in InterLisp Jan 1982 (tr)    .53  seconds compiled
On UTAH-20 in SYSLISP arith             .526 seconds compiled
On SAIL in machine language             .255 seconds (wholine)
On SAIL in machine language             .184 seconds (ebox-doesn't include mem)
On SCORE (2060) in machine language     .162 seconds (ebox)
On S-1 Mark I in machine language       .114 seconds (ebox & ibox)

I would be interested if people who have these machines/languages available
could update some of the timings.  There also aren't any timings for Symbolics
or LMI machines.

John.

------------------------------

Date: Sun, 12 Feb 1984  01:14 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: AIList Digest   V2 #14

In regard to G. Spencer Brown: if you are referring to the author of
the Laws of Form, if that's what it was called, I believe he was
a friend of Bertrand Russell and that he logged out
quite a number of years ago.

------------------------------

Date: Sun, 12 Feb 84 14:18:04 EST
From: Brint <abc@brl-bmd>
Subject: Re:  "You cant go home again"

I couldn't agree more (with your feelings of regret at not
capturing the expertise of the "oldster" in meteorological
lore).

My dad was one of the best automotive diagnosticians in
Baltimore until his death six years ago.  His uncanny
ability to pinpoint a problem's cause from external
symptoms was locally legendary.  Had I known then what I'm
beginning to learn now about the promise of expert systems,
I'd have spent many happy hours "picking his brain" with
the (unfilled) promise of making us both rich!

------------------------------

Date: Mon 13 Feb 84 22:15:08-EST
From: Jonathan Intner <INTNER@COLUMBIA-20.ARPA>
Subject: The 4-Color Problem

To Whom It May Concern:

        The computer proof of the 4-color problem can be found in
Appel, K., and W. Haken, "Every planar map is 4-colorable-1:
Discharging" and "Every planar map is 4-colorable-2: Reducibility",
Illinois Journal of Mathematics, 21, 429-567 (1977).  I haven't looked
at this myself, but I understand from Mike Townsend (a Prof here at
Columbia) that the proof is a real mess and involves thousands of
special cases.

        Jonathan Intner
        INTNER@COLUMBIA-20.ARPA

------------------------------

Date: 11 Feb 1984 13:50-PST
From: Andy Cromarty <andy@AIDS-Unix>
Subject: Re: Brain, a parallel processor?

        What are the evidences that the brain is a parallel processor?
        My own introspection seem to indicate that mine is doing time-sharing.
                        -- Rene Bach <BACH@SUMEX-AIM.ARPA>

You are confusing "brain" with "mind".

------------------------------

Date: 10 Feb 1984  15:23 EST (Fri)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Revolving Seminar

                     [Forwarded by SASW@MIT-MC.]

Wednesday, February 15, 4:00pm 8th floor playroom

Structure-Mapping: A Theoretical Framework for Analogy
Dedre Gentner

The structure-mapping theory of analogy describes a set of
principles by which the interpretation of an analogy is derived
from the meanings of its terms.  These principles are
characterized as implicit rules for mapping knowledge about a
base domain into a target domain.  Two important features of the
theory are (1) the rules depend only on syntactic properties of
the knowledge representation, and not on the specific content of
the domains; and (2) the theoretical framework allows analogies
to be distinguished cleanly from literal similarity statements,
applications of general laws, and other kinds of comparisons.

Two mapping principles are described: (1) Relations between
objects, rather than attributes of objects, are mapped from base
to target; and (2) The particular relations mapped are determined
by @u(systematicity), as defined by the existence of higher-order
relations.  Psychological experiments supporting the theory are
described, and implications for theories of learning are
discussed.


COMING SOON: Tomas Lozano-Perez, Jerry Barber, Dan Carnese, Bob Berwick, ...

------------------------------

Date: Mon 13 Feb 84 09:15:36-PST
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: SIGLUNCH ANNOUNCEMENT - FEBRUARY 24, 1984

[Reprinted from the Stanford SIGLUNCH distribution.]

Friday,   February 24, 1984
LOCATION: Chemistry Gazebo, between Physical & Organic Chemistry
12:05

SPEAKER:   Ben Kuipers, Department of Mathematics
           Tufts University

TOPIC:     Studying Experts to Learn About Qualitative
                       Causal Reasoning


By analyzing a  verbatim protocol  of an expert's  explanation we  can
derive constraints on the conceptual  framework used by human  experts
for causal reasoning  in medicine.   We use  these constraints,  along
with  textbook  descriptions  of  physiological  mechanisms  and   the
computational requirements  of successful  performance, to  propose  a
model of qualitative causal reasoning.  One important design  decision
in the model is the selection of the "envisionment" version of  causal
reasoning  rather  than  a  version  based  on  "causal  links."   The
envisionment process performs a qualitative simulation, starting  with
a description  of the  structure  of a  mechanism and  predicting  its
behavior.  The qualitative causal reasoning algorithm is a step toward
second-generation medical diagnosis programs  that understand how  the
mechanisms of  the  body work.   The  protocol analysis  method  is  a
knowledge  acquisition  technique   for  determining  the   conceptual
framework of new  types of  knowledge in  an expert  system, prior  to
acquiring large amounts of domain-specific knowledge.  The qualitative
causal reasoning algorithm has been implemented and tested on  medical
and non-medical examples.  It will be the core of RENAL, a new  expert
system for diagnosis in nephrology, that we are now developing.

------------------------------

Date: 12 Feb 84 0943 EST (Sunday)
From: Alan.Lesgold@CMU-CS-A (N981AL60)
Subject: colloquium announcement

          [Forwarded from the CMU-C bboard by Laws@SRI-AI.]


                 THE INTELLIGENT TUTORING SYSTEM GROUP
                LEARNING RESEARCH AND DEVELOPMENT CENTER
                        UNIVERSITY OF PITTSBURGH

                          AN ARCHITECTURE FOR
                           TUTORIAL DISCOURSE

                            BEVERLY P. WOOLF
              COMPUTER AND INFORMATION SCIENCE DEPARTMENT
                      UNIVERSITY OF MASSACHUSETTS

                        WEDNESDAY, FEBRUARY 15,
              2:00 - 3:00, LRDC AUDITORIUM (SECOND FLOOR)

    Human  discourse is quite complex compared to the present ability of
machines to handle communication.  Sophisticated research into discourse
is needed before we can construct intelligent interactive systems.  This
talk presents recent research in the areas of discourse generation, with
emphasis on teaching and tutoring dialogues.
    This talk describes MENO, a system in which hand-tailored rules have
been used to generate flexible responses in the face of student
failures.  The system demonstrates the effectiveness of separating
tutoring knowledge and tutoring decisions from domain and student
knowledge.  The design of the system suggests a machine theory of
tutoring and uncovers some of the conventions and intuitions of tutoring
discourse.  This research is applicable to any intelligent interface
which must reason about the user's knowledge.

------------------------------

End of AIList Digest
********************

∂22-Feb-84  1137	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #20
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Feb 84  11:36:51 PST
Date: Fri 17 Feb 1984 09:22-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #20
To: AIList@SRI-AI


AIList Digest            Friday, 17 Feb 1984       Volume 2 : Issue 20

Today's Topics:
  Lisp - Timing Data Caveat,
  Bindings - G. Spencer Brown,
  Logic - Nature of Undecidability,
  Brain Theory - Parallelism,
  Expert Systems - Need for Perception,
  AI Culture - Work in Progress,
  Seminars - Learning & Automatic Deduction & Commonsense Reasoning
----------------------------------------------------------------------

Date: 16 Feb 1984 1417-PST
From: VANBUER at USC-ECL.ARPA
Subject: Timing Data Caveat

A warning on the TAK performance testing: this code exercises only
function calling and small integer arithmetic, and none of the things
most heavily used in "real" Lisp programming: CONSing, garbage collection,
and paging (AI stuff is big, after all).
        Darrel J. Van Buer
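Gabriel's suite addressed exactly this gap with TAKL, a list-based Takeuchi
variant that does exercise allocation.  The sketch below is a rough Python
rendering of the idea (a hypothetical port for illustration; the names
`listn`, `shorterp`, and `mas` follow Gabriel's, but the encoding is mine):

```python
def listn(n):
    """A unary 'number': a list of length n.  The benchmark below keeps
    re-deriving these via slicing, so storage allocation gets exercised."""
    return [n] + listn(n - 1) if n > 0 else []

def shorterp(x, y):
    """True iff list x is shorter than list y (the '<' of the list world)."""
    return len(x) < len(y)

def mas(x, y, z):
    """Takeuchi recursion on lists instead of integers; x[1:] plays the
    role of (1- x) and allocates a fresh list on every call."""
    if not shorterp(y, x):
        return z
    return mas(mas(x[1:], y, z),
               mas(y[1:], z, x),
               mas(z[1:], x, y))

print(len(mas(listn(18), listn(12), listn(6))))  # 7, matching (tak 18 12 6)
```

Because every recursive step allocates, this variant stresses the consing and
garbage-collection machinery that plain TAK leaves untouched.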

------------------------------

Date: Wed, 15 Feb 84 11:15:21 EST
From: John McLean <mclean@NRL-CSS>
Subject: G. Spencer-Brown and undecidable propositions


G. Spencer-Brown is very much alive.  He spent several months at NRL a couple
of years ago and presented lectures on his purported proof of the four color
theorem.  Having heard him lecture on several topics previously, I did not feel
motivated to attend his lectures on the four color theorem so I can't comment
on them first hand.  Those who knew him better than I believe that he is
currently at Oxford or Cambridge.  By the way, he was not a friend of Russell's
as far as I know.  Russell merely said something somewhat positive about LAWS
OF FORM.

With respect to undecidability, I can't figure out what Charlie Crummer means
by "undecidable proposition".  The definition I have always seen is that a
proposition is undecidable with respect to a set of axioms if it is
independent, i.e., neither the proposition nor its negation is provable.
(An undecidable theory is a different kettle of fish altogether.) Examples are
Euclid's 5th postulate with respect to the other 4, Goedel's sentence with
respect to first order number theory, the continuum hypothesis with respect to
set theory, etc.  I can't figure out the claim that one can't decide whether
an undecidable proposition is decidable or not.  Euclid's 5th postulate,
Goedel's sentence, and the continuum hypothesis have been proven to be
undecidable.  For simple theories, such as sentential logic (i.e., no
quantifiers), there are even algorithms for detecting undecidability.
                                                                    John McLean
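McLean's last remark can be made concrete: for sentential logic, an exhaustive
truth-table check decides whether a proposition is provable from, refutable
from, or independent of a finite set of axioms.  A small illustrative Python
sketch (the encoding and names are my own, not from the digest):

```python
from itertools import product

def status(axioms, prop, variables):
    """Classify prop against the axioms by exhaustive truth tables:
    'provable'    -- true in every model of the axioms,
    'refutable'   -- false in every model,
    'independent' -- true in some models, false in others (undecidable
                     in the sense above).  Assumes consistent axioms."""
    seen_true = seen_false = False
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(ax(env) for ax in axioms):   # only consider models of the axioms
            if prop(env):
                seen_true = True
            else:
                seen_false = True
    if seen_true and not seen_false:
        return "provable"
    if seen_false and not seen_true:
        return "refutable"
    return "independent"

# With the single axiom (p or q), the proposition p is independent,
# while (p or q) itself is of course provable:
axioms = [lambda e: e["p"] or e["q"]]
print(status(axioms, lambda e: e["p"], ["p", "q"]))            # independent
print(status(axioms, lambda e: e["p"] or e["q"], ["p", "q"]))  # provable
```

This is exactly the "algorithm for detecting undecidability" that exists for
quantifier-free logic but not, by Goedel's theorem, for number theory.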

------------------------------

Date: Wed, 15 Feb 84 11:18:43 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: G. Spencer-Brown and undecidable propositions

Thanks for the lead to G. S-B.  I think I understand what he is driving at with
THE LAWS OF FORM, so I would like to see his alleged 4-color proof.

Re: undecidability... Is it true that all propositions can be proved decidable
or not with respect to a particular axiomatic system from WITHIN that system?
My understanding is that this is not generally possible.  Example (Not a proof
of my understanding):  Is the value of the statement "This statement is false."
decidable from within Boolean logic?  It seems to me that from within Boolean
logic, i.e. 2-valued logic, all that would be seen is that no matter how long
I crank I never seem to be able to settle down to a unique value.  If this
proposition is fed to a 2-valued logic program (written in PROLOG, LISP, or
whatever language one desires) the program just won't halt.  From OUTSIDE the
machine, a human programmer can easily detect the problem but from WITHIN
the Boolean system it's not possible.  This seems to be an example of the
halting problem.

--Charlie
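Charlie's non-halting evaluator is easy to mimic in a toy sketch (purely
illustrative; the step cap is my own device): a two-valued evaluation of
L = (not L) just oscillates, and only an observer outside the loop, here
the step limit, ever notices.

```python
def evaluate_liar(max_steps=10):
    """Naive two-valued evaluation of L = (not L): guess a value and
    iterate.  The sequence oscillates forever; the step cap plays the
    role of the outside observer who detects the loop that the
    'machine' itself never escapes."""
    value = True
    history = []
    for _ in range(max_steps):
        history.append(value)
        value = not value   # L is defined as the negation of itself
    return history          # no fixed point ever appears

print(evaluate_liar(6))     # [True, False, True, False, True, False]
```

Without the cap, the loop is a miniature instance of the halting problem: the
program has no internal way to conclude "this will never settle."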

------------------------------

Date: 16 Feb 1984  12:22 EST (Thu)
From: "Steven C. Bagley" <BAGLEY%MIT-OZ@MIT-MC.ARPA>
Subject: Quite more than you want to know about George Spencer Brown

Yes, Spencer Brown was associated with Russell, but since Lord Russell
died only fairly recently (1970), I think it safe to assume that not ALL
of his associates are dead yet.

There was a brief piece about Spencer Brown in "New Scientist" several
years ago (vol. 73, no. 1033, January 6, 1977, page 6).  Here are two
interesting quotes:

"What sets him apart from the many others who have claimed a proof of
the [four-color] theorem are his technique, and his personal style.
Spencer Brown's technique rests on a book he wrote in 1964 called
`Laws of Form.'  George Allen and Unwin published it in 1969, on the
recommendation of Bertrand Russell.  In the book he develops a new
algebra of logic -- from which the normal Boolean algebra (a means of
representing propositions and arguments with symbols) can be derived.
The book has had a mixed reputation, from `a work of genius' to
`pretentious triviality.'  It is certainly unorthodox, and mixes
metaphysics and mathematics.  Russell himself was taken with the work,
and mentions it in his autobiography....

The style of the man is extravagant -- he stays at the Savoy -- and
all-embracing.  He was in the Royal Navy in the Second World War; has
degrees in philosophy and psychology (but not mathematics); was a
lecturer in logic at Christ Church College, Oxford; wrote a treatise
on probability; a volume of poetry, and a novel; was a chief logic
designer with Mullard Equipment Ltd where his patented design of a
transistorised elevator logic circuit led to `Laws of Form'; has two
world records for gliding; and presently lectures part-time in the
mathematics department at the University of Cambridge while also
managing his publishing business."

I know of two reviews of "Laws of Form": one by Stafford Beer, the
British cyberneticist, which appeared in "Nature," vol. 223, Sept 27,
1969, and the other by Lancelot Law Whyte, which was published in the
British Journal of the Philosophy of Science, vol 23, 1972, pages
291-292.

Spencer Brown's probability work was published in a book called
"Probability and Scientific Inference", in the late 1950's, if my
memory serves me correctly.  There is also an early article in
"Nature" called "Statistical Significance in Psychical Research", vol.
172, July 25, 1953, pp. 154-156.  A comment by Soal, Stratton, and
Trouless on this article appeared in "Nature" vol 172, Sept 26, 1953,
page 594, and a reply by Spencer Brown immediately follows.  The first
sentence of the initial article reads as follows: "It is proposed to
show that the logical form of the data derived from experiments in
psychical research which depend upon statistical tests is such as to
provide little evidence for telepathy, clairvoyance, precognition,
psychokinesis, etc., but to give some grounds for questioning the
practical validity of the test of significance used."  Careful Spencer
Brown watchers will be interested to note that this article lists his
affiliation as the Department of Zoology and Comparative Anatomy,
Oxford; he really gets around.

His works have had a rather widespread, if unorthodox, impact.
Spencer Brown and "Laws of Form" are mentioned in Adam Smith's Powers
of Mind, a survey of techniques for mind expansion, contraction,
adjustment, etc. (e.g., EST, various flavors of hallucinogens); are
briefly noted in Arthur Koestler's The Roots of Coincidence, which
is, quite naturally enough, about probability, coincidence, and
synchronicity; and are mentioned again in "The Dyadic Cyclone," by
Dr. John C. Lilly, dolphin aficionado and consciousness expander
extraordinaire.

If this isn't an eclectic enough collection of trivia about Spencer
Brown, keep reading.  Here is quote from his book "Only Two Can Play
This Game", written under the pseudonym of James Keys.  "To put it
bluntly, it looks as if the male is so afraid of the fundamentally
different order of being of the female, so terrified of her huge
magical feminine power of destruction and regeneration, that he
doesn't look at her as she really is, he is afraid to accept the
difference, and so has repressed into his unconscious the whole idea
of her as ANOTHER ORDER OF BEING, from whom he might learn what he
could not know of himself alone, and replaced her with the idea of a
sort of second-class replica of himself who, because she plays the
part of a man so much worse than a man, he can feel safe with because
he can despise her."

There are some notes at the end of this book (which isn't really a
novel, but his reflections, written in the heat of the moment, on
the breakup of a love affair) which resemble parts of "Laws of Form":
"Space is a construct.  In reality there is no space.  Time is also a
construct.  In reality there is no time.  In eternity there is space
but no time.  In the deepest order of eternity there is no space....In
a qualityless order, to make any distinction at all is at once to
construct all things in embryo...."

And last, I have no idea of his present-day whereabouts.  Perhaps try
writing to him c/o Cambridge University.

------------------------------

Date: Thu, 16 Feb 84 13:58:28 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Quite more than you want to know about George Spencer Brown

Thank you for the copious information on G. S-B.  If I can't get in touch
with him now, it will be because he does not want to be found.

After the first reading of the first page of "The Laws of Form" I almost
threw the book away.  I am glad, however, that I didn't.  I have read it
several times and thought carefully about it and I think that there is much
substance to it.

  --Charlie

------------------------------

Date: 15 Feb 84  2302 PST
From: John McCarthy <JMC@SU-AI>
Subject: Serial or parallel

        It seems to me that introspection can tell us that the brain
does many things serially.  For example, a student with 5 problems
on an examination cannot set 5 processes working on them.  Indeed
I can't see that introspection indicates that anything is done
in parallel, although it does indicate that many things are done
subconsciously.  This is non-trivial, because one could imagine
a mind that could set several processes going subconsciously and
then look at them from time to time to see what progress they
were making.

        On the other hand, anatomy suggests and physiological
experiments confirm that the brain does many things in parallel.
These things include low level vision processing and probably
also low level auditory processing and also reflexes.  For example,
the blink reflex seems to proceed without thought, although it
can be observed and in parallel with whatever else is going on.
Indeed one might regard the blink reflex and some well learned
habits as counter-examples to my assertion that one can't set
parallel processes going and then observe them.

        All else seems to be conjecture.  I'll conjecture that
a division of neural activity into serial and parallel activities
developed very early in evolution.  For example, a bee's eye is
a parallel device, but the bee carries out long chains of serial
activities in foraging.  My more adventurous conjecture is that
primate level intelligence involves applying parallel pattern
recognition processes evolved in connection with vision to records
of the serial activities of the organism.  The parallel processes
of recognition are themselves subconscious, but the results have
to take part in the serial activity.  Finally, seriality seems
to be required for coherence.  An animal that seeks food by
locomotion works properly only if it can go in one direction
at a time, whereas a sea anemone can wave all its tentacles at
once and needs only very primitive seriality that can spread
in a wave of activity.

        Perhaps someone who knows more physiology can offer more
information about the division of animal activity into serial
and parallel kinds.

------------------------------

Date: Wed, 15 Feb 84 22:40:48 pst
From: finnca1%ucbtopaz.CC@Berkeley
Subject: Re:  "You cant go home again"
        Date:     Sun, 12 Feb 84 14:18:04 EST
        From: Brint <abc@brl-bmd>

        I couldn't agree more (with your feelings of regret at not
        capturing the expertise of the "oldster" in meteorological
        lore).

        My dad was one of the best automotive diagnosticians in
        Baltimore [...]

Ah yes, the scarcest of experts these days:  a truly competent auto
mechanic!  But don't you still need an expert to PERCEIVE the subtle
auditory cues and translate them into symbolic form?

Living in the world is a full time job, it seems.

                Dave N. (...ucbvax!ucbtopaz!finnca1)

------------------------------

Date: Monday, 13 Feb 1984 18:37:35-PST
From: decwrl!rhea!glivet!zurko@Shasta
Subject: Re: The "world" of CS

        [Forwarded from the Human-Nets digest by Laws@SRI-AI.]

The best place for you to start would be with Sheri Turkle, a
professor at MIT's STS department.  She's been studying both the
official and unofficial members of the computer science world as a
culture/society for a few years now.  In fact, she's supposed to be
putting a book out on her findings, "The Intimate Machine".  Anyone
heard what's up with it?  I thought it was supposed to be out last
Sept, but I haven't been able to find it.
        Mez

------------------------------

Date: 14 Feb 84 21:50:52 EST
From: Michael Sims  <MSIMS@RUTGERS.ARPA>
Subject: Learning Seminar

             [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

                      MACHINE LEARNING BROWN BAG SEMINAR

Title:     When to Learn
Speaker:   Michael Sims
Date:      Wednesday, Feb. 15, 1984 - 12:00-1:30
Location:  Hill Center, Room 254 (note new location)

       In  this  informal  talk I will describe issues which I have broadly
    labeled  'when  to  learn'.    Most  AI  learning  investigations  have
    concentrated  on  the  mechanisms  of  learning.    In  part  this is a
    reasonable consequence of AI's close  relationship  with  the  'general
    process tradition' of psychology [1].  The influences of ecological and
    ethological   (i.e.,  animal  behavior)  investigations  have  recently
    challenged this research methodology in psychology, and I believe  this
    has important ramifications for investigations of machine learning.  In
    particular,  this  influence  would  suggest that learning is something
    which takes place when an appropriate environment  and  an  appropriate
    learning  mechanism  are  present,  and  that  it  is  inappropriate to
    describe learning by describing a learning mechanism without describing
    the environment in which it operates.  The most cogent new issues which
    arise are the description of the environment, and  the  confronting  of
    the  issue  of  'when  to learn in a rich environment'.   By a learning
    system in a 'rich environment' I  mean  a  learning  system  which must
    extract the items to be learned from sensory input which is too rich to
    be  exhaustively stored.  Most present learning systems operate in such
    a restrictive environment that there is no question of what or when  to
    learn.   I will also present a general architecture for such a learning
    system in a rich environment, called a Pattern Directed Learning Model,
    which was motivated by biological learning systems.


                                  References

[1]   Johnston, T. D.
      Contrasting approaches to a theory of learning.
      Behavioral and Brain Sciences 4:125-173, 1981.

------------------------------

Date: Wed 15 Feb 84 13:16:07-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: "Automatic deduction" and other stuff

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

A reminder that the seminar on automatic reasoning / theorem proving /logic
programming / mumble mumble mumble  which I advertised earlier is going to
begin shortly, under one title or another.   It will tentatively be on
Wednesdays at 1:30 in MJH301.   If you wish to be on the mailing list for this,
please mail to me or Yoni Malachi (YM@SAIL).   But if you are already on
Carolyn Talcott's mailing list for the MTC seminars, you will probably be
included on the new list unless you ask not to be.

For those interested specifically in the MRS system, we plan to continue MRS
meetings, also on Weds., at 10:30, starting shortly.   I expect to announce
such meetings on the MRSusers distribution list.   To get on this, mail to me
or Milt Grinberg (GRINBERG@SUMEX).   Note that MRSusers will contain other
announcements related to MRS as well.
                                                - Richard

------------------------------

Date: Wed 15 Feb 84
Subject: McCarthy Lectures on Commonsense Knowledge

      [Forwarded from the Stanford CSLI newsletter by Laws@SRI.]


   MCCARTHY LECTURES ON THE FORMALIZATION OF COMMONSENSE KNOWLEDGE

     John McCarthy  will  present  the remaining three lectures of his
series (the first of the four was held January 20) at 3:00 p.m. in the
Ventura Hall Seminar Room on the dates shown below.

Friday, Feb. 17   "The Circumscription Mode of Nonmonotonic Reasoning"

        Applications of circumscription to formalizing commonsense
        facts.  Application to the frame problem, the qualification
        problem, and to the STRIPS assumption.

Friday, March 2   "Formalization of Knowledge and Belief"

        Modal and first-order formalisms.  Formalisms in which possible
        worlds are explicit objects.  Concepts and propositions as
        objects in theories.

Friday, March 9   "Philosophical Conclusions Arising from AI Work"

        Approximate theories, second-order definitions of concepts,
        ascription of mental qualities to machines.

------------------------------

End of AIList Digest
********************

∂22-Feb-84  1758	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #21
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Feb 84  17:56:38 PST
Date: Wed 22 Feb 1984 16:28-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #21
To: AIList@SRI-AI


AIList Digest           Thursday, 23 Feb 1984      Volume 2 : Issue 21

Today's Topics:
  Waveform Analysis - EEG/EKG Request,
  Laws of Form - Comment,
  Review - Commercial NL Review in High Technology,
  Humor - The Adventures of Joe Lisp,
  Seminars - Computational Discovery & Robotic Planning & Physiological
    Reasoning & Logic Programming & Mathematical Expert System
----------------------------------------------------------------------

Date: Tue, 21 Feb 84 22:29:05 EST
From: G B Reilly <reilly@udel-relay.arpa>
Subject: EEG/EKG Scoring

Has anyone done any work on automatic scoring and interpretation of EEG or
EKG outputs?

Brendan Reilly

[There has been a great deal of work in these areas.  Good sources are
the IEEE pattern recognition or pattern recognition and image processing
conferences, IEEE Trans. on Pattern Analysis and Machine Intelligence,
IEEE Trans. on Computers, and the Pattern Recognition journal.  There
have also been some conferences on medical pattern recognition.  Can
anyone suggest a bibliography, special issue, or book on these subjects?
Have there been any AI (as opposed to PR) approaches to waveform diagnosis?
-- KIL]

------------------------------

Date: 19-Feb-84 02:14 PST
From: Kirk Kelley  <KIRK.TYM@OFFICE-2>
Subject: G. Spencer-Brown and the Laws of Form

I know of someone who talked with G. on the telephone about six years
ago somewhere in Northern California.  My friend developed a quantum
logic for expressing paradoxes, and some forms of schizophrenia, among
other things.  Puts fuzzy set theory to shame.  Anyway, he wanted to
get together with G. to discuss his own work and what he perceived in
the Laws of Form as very fundamental problems in generality due to
over-simplicity.  G. refused to meet without being paid fifty or so
dollars per hour.

Others say that the LoF's misleading notation masks the absence of any
significant proofs.  They observe that the notation uses whitespace as
an implicit operator, something that becomes obvious in an attempt to
parse it when represented as character strings in a computer.

I became interested in the Laws of Form when it first came out as it
promised to be quite an elegant solution to the most obscure proofs of
Whitehead and Russell's Principia Mathematica.  The LoF carried to
perfection a very similar simplification I attempted while studying
the same logical foundations of mathematics.  One does not get too far
into the proofs before getting the distinct feeling that there has GOT
to be a better way.

It would be interesting to see an attempt to express the essence of
Goedel's sentence in the LoF notation.

 -- kirk

------------------------------

Date: Fri 17 Feb 84 10:57:18-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Commercial NL Review in High Technology

The February issue of High Technology has a short article on
natural language interfaces (to databases, mainly).  The article
and business outlook section mention four NL systems currently
on the market, led by AIC's Intellect ($70,000, IBM mainframes),
Frey Associate's Themis ($24,000, DEC VAX-11), and Cognitive
System's interface.  (The fourth is not named, but some OEMs and
licensees of the first two are given.)  The article says that
four more systems are expected out this year, and discusses
Symantec's system ($400-$600, IBM PC with 256 Kbytes and hard disk)
and Cal Tech's ASK (HP9836 micro, licensed to HP and DEC).

------------------------------

Date:    Tue, 14 Feb 84 11:21:09 EST
From:    Kris Hammond <Hammond@YALE>
Subject: *AI-LUNCH*

         [Forwarded from a Yale bboard by Shrager@CMU-PSY-A.]

                 THE ADVENTURES OF JOE LISP, T MAN

   Brought to you by:  *AI-LUNCH*, it's hot,  it's  cold,  it's  more  than
   a lunch...
                      This week's episode:

                  The Case of the Bogus Expert
                            Part I

   It was  late  on  a  Tuesday and I was dead in my seat from nearly an
   hour of grueling mail reading and idle chit-chat with random  passers
   by.  The  only  light  in  my  office  was the soft glow from my CRT,
   the only sound was the pain wracked rattle of  an  over-heated  disk.
   It was  raining  out,  but  the  steady staccato rhythm that beat its
   way into the skulls of others was held  back  by  the  cold  concrete
   slabs of  my windowless walls.  I like not having windows, but that's
   another story.

   I didn't hear her come in, but when the  scent  of  her  perfume  hit
   me, my  head swung faster than a Winchester.  She was wearing My-Sin,
   a perfume with the smell of an expert, but that wasn't what impressed
   me.  What  hit  me  was  her  contours.   She had a body with all the
   right variables.  She wore a dress with a single closure that  barely
   hid the  dynamic  scoping  of  what  was  underneath.  Sure I saw her
   as an object, but I guess I'm just object oriented.   It's  the  kind
   of operator I am.

   After she sat down and began to tell her story I  realized  that  her
   sophisticated look  was  just  cover.  She was a green kid, still wet
   behind the ears.  In fact she was wet all over.  As I  said,  it  was
   raining outside.  It's an easy inference.

   It  seems  the  kid's  step-father  had  disappeared.   He had been a
   medical specialist,  diagnosis  and  prescription,  but  one  day  he
   started making  wild  claims  about  knowledge  and planning and then
   he  vanished.   I  had  heard  of  this  kind  before.    Some   were
   specialists.  Some  in  medicine,  some  in geology, but all were the
   same kind of guy.  I looked the girl in the eye  and  asked  the  one
   question she  didn't  want  to  hear,  "He's  rule-based, isn't he?".

   She turned  her  head away and that was all the answer I needed.  His
   kind were cold, unfeeling, unchanging, but she still  loved  him  and
   wanted him back again.

   Once I  got  a  full  picture of the guy I was sure that I knew where
   to find him, California.  It was the haven for his  way  of  thinking
   and acting.   I  was  sure  that he had been swept up by the EXPERTS.
   They were a cult that had grown up in the past few  years,  promising
   fast and  easy  enlightenment.   What  they  didn't tell you was that
   the price was your ability  to  understand  itself.   He  was  there,
   as sure as I was a T Man.

   I knew of at least one operative in California who could be  trusted,
   and I  knew  that  I had to talk to him before I could do any further
   planning.  I reached for the phone and gave him a call.

   The conversation was short and  sweet.   He  had  resource  conflicts
   and couldn't  give  me  a  hand  right now.  I assumed that it had to
   be more complex than that and almost  said  that  resource  conflicts
   aren't  that  easy  to  identify,  but  I  had  no  time  to  waste  on
   infighting while the real enemy was still  at  large.   Before  he  hung
   up, he  suggested  that  I pick up a radar detector if I was planning
   on driving out and asked if I could grab a half-gallon  of  milk  for
   him on  the  way.   I agreed to the favor, thanked him for his advice
   and wished him luck on his tan...

    That's all  for  now  kids.   Tune in next week for part two of:

                  The Case of the Bogus Expert

                            Starring

                        JOE LISP, T MAN

   And remember kids, Wednesdays are *AI-LUNCH* days and  11:45  is  the
   *AI-LUNCH* time.  And kids, if you send in 3 box tops from *AI-LUNCH*
   you can get a JOE LISP magic decoder ring.  This  is  the  same  ring
   that saved  JOE  LISP only two episodes ago and is capable of parsing
   from surface to deep  structure  in  less  than  15  transformations.
   It's part plastic, part metal and all bogus, so order now.

------------------------------

Date: 17 February 1984 11:55 EST
From: Kenneth Byrd Story <STORY @ MIT-MC>
Subject: Computational Discovery of Mathematical Laws

          [Forwarded from the MIT-MC bboard by Laws@SRI-AI.]

TITLE:  "The Computational Discovery of Mathematical Laws: Experiments in Bin
           Packing"
SPEAKER:        Dr. Jon Bentley, Bell Laboratories, Murray Hill
DATE:           Wednesday, February 22, 1984
TIME:           3:30pm  Refreshments
                4:15pm  Lecture
PLACE:          Bldg. 2-338


Bin packing is a typical NP-complete problem that arises in many applications.
This talk describes experiments on two simple bin packing heuristics (First Fit
and First Fit Decreasing) which show that they perform extremely well on
randomly generated data.  On some natural classes of inputs, for instance, the
First Fit Decreasing heuristic finds an optimal solution more often than not.
The data leads to several startling conjectures; some have been proved, while
others remain open problems.  Although the details concern the particular
problem of bin packing, the theme of this talk is more general: how should
computer scientists use simulation programs to discover mathematical laws?
(This work was performed jointly with D.S. Johnson, F.T. Leighton and C.A.
McGeoch.  Tom Leighton will give a talk on March 12 describing proofs of some
of the conjectures spawned by this work.)
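
The two heuristics named in the abstract are simple enough to sketch in
a few lines of Python (the function names are mine, not from the talk):

```python
def first_fit(items, capacity):
    """Place each item in the first open bin with enough room,
    opening a new bin when no existing one can hold it."""
    bins = []  # remaining capacity of each open bin
    for size in items:
        for i, remaining in enumerate(bins):
            if size <= remaining:
                bins[i] -= size
                break
        else:  # no open bin had room
            bins.append(capacity - size)
    return len(bins)

def first_fit_decreasing(items, capacity):
    """First Fit applied to the items sorted largest-first."""
    return first_fit(sorted(items, reverse=True), capacity)
```

For example, with capacity 10 and items [7, 5, 6, 4, 3, 5], First Fit
opens 4 bins while First Fit Decreasing packs the same items into 3.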

HOST:   Professor Tom Leighton

THIS SEMINAR IS JOINTLY SPONSORED BY THE COMBINATORICS SEMINAR & THE THEORY OF
COMPUTATION SEMINAR

------------------------------

Date: 17 Feb 1984  15:14 EST (Fri)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Revolving Seminar

[Forwarded from the MIT-OZ bboard by SASW@MIT-MC.]

[I am uncertain as to the interest of AIList readers in robotics,
VLSI and CAD/CAM design, graphics, and other CS-related topics.  My
current policy is to pass along material relating to planning and
high-level reasoning.  Readers with strong opinions for or against
such topics should write to AIList-Request@SRI-AI.  -- KIL]


AUTOMATIC SYNTHESIS OF FINE-MOTION STRATEGIES FOR ROBOTS

Tomas Lozano Perez

The use of force-based compliant motions enables robots to carry out
tasks in the presence of significant sensing and control errors.  It
is quite difficult, however, to discover a strategy of such motions to
achieve a task.  Furthermore, the choice of motions is quite sensitive
to details of geometry and to error characteristics.  As a result,
each new task presents a brand new and difficult problem.  These
factors motivate the need for automatic synthesis for compliant
motions.  In this talk I will describe a formal approach to the
synthesis of compliant motion strategies from geometric description of
assembly operations.

(This is joint work [no pun intended -- KIL] with Matt Mason of CMU
and Russ Taylor of IBM)

------------------------------

Date: Fri 17 Feb 84 09:02:29-PST
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Ph.D. Oral

             [Forwarded from the Stanford bboard by Laws@SRI-AI.]

                                  PH.D. ORAL

                        USE OF ARTIFICIAL INTELLIGENCE
                            AND SIMPLE MATHEMATICS
                       TO ANALYZE A PHYSIOLOGICAL MODEL

                    JOHN C. KUNZ, STANFORD/INTELLIGENETICS

                               23 FEBRUARY 1984

                  MARGARET JACKS HALL, RM. 146, 2:30-3:30 PM


   The objective of this research is to demonstrate a methodology for design
and use of a physiological model in a computer program that suggests medical
decisions.  This methodology uses a physiological model based on first
principles and facts of physiology and anatomy.  The model includes inference
rules for analysis of causal relations between physiological events.  The model
is used to analyze physiological behavior, identify the effects of
abnormalities, identify appropriate therapies, and predict the results of
therapy.  This methodology integrates heuristic knowledge traditionally used in
artificial intelligence programs with mathematical knowledge traditionally used
in mathematical modeling programs.  A vocabulary for representing a
physiological model is proposed.

------------------------------

Date: Tue 21 Feb 84 10:47:50-PST
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: ANNOUNCEMENT

[Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]


Thursday, February 23, 1984

Professor Kenneth Kahn
Uppsala University

will give a talk:

"Logic Programming and Partial Evaluation as Steps Toward
 Efficient Generic Programming"

at: Bldg. 200, (History Building), Room 107, 12 NOON

PROLOG and extensions to it embedded in LM PROLOG will be presented as
a means of describing programs that can be used in many ways.  Partial
evaluation  is  a  process  that  automatically  produces   efficient,
specialized versions  of programs.   Two partial  evaluators, one  for
LISP and one for PROLOG, will be presented as a means for winning back
efficiency that  was sacrificed  for generality.   Partial  evaluation
will also be presented as a means of generating compilers.
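
As a rough illustration of what partial evaluation buys (sketched here
in Python rather than LISP or LM PROLOG, with invented names), a
generic program can be specialized once one of its inputs is known:

```python
def power(base, exp):
    """Generic program: correct for any exponent, but pays loop
    overhead on every call."""
    result = 1
    for _ in range(exp):
        result *= base
    return result

def specialize_power(exp):
    """Toy partial evaluator: with exp fixed, unroll the loop into a
    straight-line chain of multiplications and compile that instead."""
    body = " * ".join(["base"] * exp) if exp > 0 else "1"
    return eval(f"lambda base: {body}")

# specialize_power(3) behaves like  lambda base: base * base * base --
# the efficiency sacrificed for power()'s generality is won back once,
# at specialization time.
cube = specialize_power(3)
```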

------------------------------

Date: 21 Feb 84 15:27:53 EST
From: DSMITH@RUTGERS.ARPA
Subject: Rutgers University Computer Science Colloquium

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


                            COLLOQUIUM

                    Department of Computer Science


         SPEAKER:   John Cannon
                    Dept. of Math
                    University of Sydney
                    Sydney, AUSTRALIA

         TITLE:    "DESIGN AND IMPLEMENTATION OF A PROGRAMMING
                    LANGUAGE/EXPERT SYSTEMS FOR MODERN ALGEBRA"

                                  Abstract

Over the past 25 years a substantial body of algorithms has been
devised for computing structural information about groups.  In order
to make these techniques more generally available, I have undertaken
the development of a system for group theory and related areas of
algebra.  The system consists of a high-level language (having a
Pascal-like syntax) supported by an extensive library.  In that the
system attempts to plan, at a high level, the most economical solution
to a problem, it has some of the attributes of an expert system.  This
talk will concentrate on (a) the problems of designing appropriate
syntax for algebra and (b) the implementation of a language processor
which attempts to construct a model of the mathematical microworld
with which it is dealing.

          DATE:  Friday, February 24, 1984
          TIME:  2:50 p.m.
          PLACE: Hill Center - Room 705
               * Coffee served at 2:30 p.m. *

------------------------------

End of AIList Digest
********************

∂29-Feb-84  1547	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #22
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Feb 84  15:47:19 PST
Date: Wed 29 Feb 1984 13:46-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #22
To: AIList@SRI-AI


AIList Digest           Wednesday, 29 Feb 1984     Volume 2 : Issue 22

Today's Topics:
  Robotics - Personal Robotics Request,
  Books - Request for Laws of Form Review,
  Expert Systems - EURISKO Information Request,
  Automated Documentation Tools - Request,
  Mathematics - Fermat's Last Theorem & Map Coloring,
  Waveform Analysis - EEG/EKG Interpretation,
  Brain Theory - Parallelism,
  CS Culture - Computing Worlds
----------------------------------------------------------------------

Date: Thu 16 Feb 84 17:59:03-PST
From: PIERRE@SRI-AI.ARPA
Subject: Information about personal robots?

   Do you know anything about domestic robots? personal robots?
I'm interested in the names and addresses of companies, societies,
clubs, and universities involved in that field.  Is there any review
of this area?  Any articles?  Do you work on, or have you heard of,
any projects in this field?
Please send replies to Pierre@SRI-AI.ARPA.

          Pierre

------------------------------

Date: 23 Feb 84 13:58:28 PST (Thu)
From: Carl Kaun <ckaun@aids-unix>
Subject: Laws of Form


I hope that Charlie Crummer will share some of the substance he finds in
"Laws of Form" with us (ref AIList Digest V2 #20).  I myself am more in the
group that does not understand what LoF has to say that is new, and indeed
doubt that it does say anything unique.

------------------------------

Date: Fri, 24 Feb 84 15:32 MST
From: RNeal@HIS-PHOENIX-MULTICS.ARPA
Subject: EURISKO

I have just begun reading the AI digests (our copy starts Nov 3 1983)
and I am very interested in the one or two transactions dealing with
EURISKO.  Could someone explain what EURISKO does, and maybe give some
background of its development?

On a totally different note, has anyone done any AI work on lower-order
intelligence (i.e., that using instinct), such as insects, reptiles, etc.?
It seems they would be easier to model, and I just wondered if anyone had
attempted to make a program which learns the way they do and does the
things they do.  I don't know if this belongs in AI or some simulation
list (is there one?).
                      >RUSTY<

------------------------------

Date: 27 Feb 1984 07:26-PST
From: SAC.LONG@USC-ISIE
Subject: Automated Documentation Tools

Is anyone aware of software packages available that assist in the
creation of documentation of software, such as user manuals and
maintenance manuals?  I am not looking for simple editors which
are used to create text files, but something a little more
sophisticated which would reduce the amount of time one must
invest in creating manuals manually (with the aid of a simple editor).
If anyone has information about such, please send me a message at:

     SAC.LONG@USC-ISIE

or   Steve Long
     1018-1 Ave H
     Plattsmouth NE 68048

or   (402)294-4460 or reply through AIList.

Thank you.

  --  Steve

------------------------------

Date: 16 Feb 84 5:36:12-PST (Thu)
From: decvax!genrad!wjh12!foxvax1!minas @ Ucb-Vax
Subject: Re: Fermat's Last Theorem & Undecidable Propositions
Article-I.D.: foxvax1.317

Could someone please help out an ignorant soul by posting a brief (if that
is, indeed, possible!) explanation of what Fermat's last theorem states as
well as what the four-color theorem is all about.  I'm not looking for an
explanation of the proofs, but, simply, a statement of the propositions.

Thanks!

-phil minasian          decvax!genrad!wjh12!foxvax1!minas

------------------------------

Date: 15 Feb 84 20:15:33-PST (Wed)
From: ihnp4!mit-eddie!rh @ Ucb-Vax
Subject: Re: Four color...
Article-I.D.: mit-eddi.1290

I had thought that 4 color planar had been proved, but that
the "conjectures" of 5 colors for a sphere and 7 for a torus
were still waiting.  (Those numbers are right, aren't they?)

Randwulf  (Randy Haskins);  Path= genrad!mit-eddie!rh

------------------------------

Date: 17 Feb 84 21:33:46-PST (Fri)
From: decvax!dartvax!dalcs!holmes @ Ucb-Vax
Subject: Re: Four color...
Article-I.D.: dalcs.610

        The four colour problem is the same for a sphere as it is
for the infinite plane.  The problem for a torus was solved many
years ago.  The torus needs exactly 7 colours to paint it.

                                        Ray
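
For reference, the torus figure follows from Heawood's 1890 bound,
which Ringel and Youngs (1968) showed is exact for every surface of
genus g >= 1; the sphere/plane case (g = 0) is precisely the
four-colour theorem, which needed the separate 1976 Appel-Haken proof:

```latex
% Chromatic number of a closed orientable surface of genus g:
\chi(S_g) \;=\; \left\lfloor \frac{7 + \sqrt{1 + 48g}}{2} \right\rfloor
% Torus (g = 1): \chi = \lfloor (7 + 7)/2 \rfloor = 7
```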

------------------------------

Date: 26 Feb 1984 21:38:16-PST
From: utcsrgv!utai!tsotsos@uw-beaver
Subject: AI approach to ECG analysis

One of my PhD students, Taro Shibahara, has been working on an expert
system for arrhythmia analysis. The thesis should be finished by early summer.
A preliminary paper discussing some design issues appeared in IJCAI-83.
System name is CAA - Causal Arrhythmia Analyzer. Important contributions:
two distinct KB's, one of the signal domain, the other of the
electrophysiological domain; communication via a "projection" mechanism;
causal relations to assist in prediction; and use of meta-knowledge
within a frame-based representation for statistical knowledge. The
overall structure is based on the ALVEN expert system for left
ventricular performance assessment, developed here as well.

John Tsotsos
Dept. of Computer Science
University of Toronto

[Ray Perrault <RPERRAULT@SRI-AI> also suggested this lead.  -- KIL]

------------------------------

Date: 24 Feb 84 10:07:36-PST (Fri)
From: decvax!mcnc!ecsvax!jwb @ Ucb-Vax
Subject: computer ECG
Article-I.D.: ecsvax.2043

At least three companies are currently marketing computer ECG analysis
systems:  Marquette Electronics, IBM, and Hewlett-Packard.  We use the
Marquette system, which works quite well.  Marquette and IBM use variants of
the same program (the "Bonner" program below, original development funded by
IBM.)  Apparently because of fierce competition, much current information,
particularly with regard to algorithms, is proprietary.  Worst in this regard
(a purely personal opinion) is HP who seems to think nobody but HP needs to
know how they do things and physicians are too dumb to understand anyway.
Another way hospitals get computer analysis of ECG's is through "Telenet,"
which offers telephone connection to a time-sharing system (I think located in the
Chicago area).  Signals are digitized and sent via a modem through standard
phone lines.  ECG's are analyzed and printed information is sent back.
Turn-around time is a few minutes.  They offer an advantage to small hospitals
by offering verification of the analysis by a Cardiologist (for an extra fee).
I understand this service has had some financial problems (rumors).

Following is a bibliography gathered for a lecture to medical students about
computer analysis of ECG's.  Because of this it is mainly from more or less
clinical literature and is oriented toward methods of validation (This is
tough, because reading of ECG's by cardiologists, like many clinical
decisions, is partly a subjective process.  The major impact of these systems
so far has been to force the medical community to develop objective criteria
for their analysis.)

                                 BIBLIOGRAPHY
                  Computer Analysis of the Electrocardiogram
                               August 29, 1983

BOOK

Pordy L (1977) Computer electrocardiography:  present status and criteria.
Mt. Kisco, New York, Futura

PAPERS

Bonner RE, Crevasse L, Ferrer MI, Greenfield JC Jr (1972) A new computer
program for analysis of scalar electrocardiograms.  Computers and Biomedical
Research 5:629-653

Garcia R, Breneman GM, Goldstein S (1981) Electrogram computer analysis.
Practical value of the IBM Bonner-2 (V2MO) program.  J. Electrocardiology
14:283-288

Rautaharju PM, Ariet M, Pryor TA, et al. (1978)  Task Force III:  Computers in
diagnostic electrocardiography.  Proceedings of the Tenth Bethesda Conference,
Optimal Electrocardiography.  Am. J. Cardiol. 41:158-170

Bailey JJ et al (1974) A method for evaluating computer programs for
electrocardiographic interpretation

I.  Application to the experimental IBM program of 1971.  Circulation 50:73-79

II.  Application to version D of the PHS program and the Mayo Clinic program
of 1968.  Circulation 50:80-87

III.  Reproducibility testing and the sources of program errors.  Circulation
50:88-93

Endou K, Miyahara H, Sato (1980) Clinical usefulness of computer diagnosis in
automated electrocardiography.  Cardiology 66:174-189

Bertrand CA et al (1980) Computer interpretation of electrocardiogram using
portable bedside unit.  New York State Journal of Medicine.  August
1980(?volume):1385-1389

Jack Buchanan
Cardiology and Biomedical Engineering
University of North Carolina at Chapel Hill
(919) 966-5201

decvax!mcnc!ecsvax!jwb

------------------------------

Date: Friday, 24-Feb-84 18:35:44-GMT
From: JOLY G C QMA (on ERCC DEC-10) <GCJ%edxa@ucl-cs.arpa>
Subject: re: Parallel processing in the brain.

To compare the product of millions of years of evolution
(i.e., the human brain) with the recent invention of parallel
processors seems to me to be like trying to effect an analysis
of the relative properties of chalk and cheese.
Gordon Joly.

------------------------------

Date: Wed, 29 Feb 84 13:17:04 PST
From: Dr. Jacques Vidal <vidal@UCLA-CS>
Subject: Brains: Serial or Parallel?


Is the brain parallel?  Or is the issue a red herring?

Computing and thinking are physical processes, and since all physical
processes unfold in time they are ultimately SEQUENTIAL, even
"continuous" ones, although the latter are self-timed (free-running,
asynchronous) rather than clocked.

PARALLEL means that there are multiple tracks with similar  func-
tions  like availability of multiple processors or multiple lanes
on a superhighway. It is a structural characteristic.

CONCURRENT means simultaneous. It is a temporal characteristic.

REDUNDANT means that there is  structure  beyond  that  which  is
minimally  needed  for  function,  perhaps to insure integrity of
function under perturbations.

In this context, PARALLELISM,  i.e. the deployment  of  multiple
processors  is the currency with which a system designer may pur-
chase these two commodities: CONCURRENCY and REDUNDANCY (a neces-
sary but not sufficient condition).

Turing machines have zero  concurrency.  Almost  everything  else
that  computes exhibits some. Conventional processor architectures
and  memories  are  typically  concurrent  at  the  word   level.
Microprograms are sequences of concurrent gate events.

There exist systems that are  completely  concurrent  and  free-
running.   Analog computers and combinational logic circuits have
these properties.  There, computation progresses in chunks between
initial  and final states.  A new chunk starts when the system is
set to a new initial state.

Non-von architectures have moved away from single track computing
and  from  the linear organization of memory cells. With cellular
machines another property appears: ADJACENCY. Neighboring proces-
sors use adjacency as a form of addressing.

These concepts are applicable to natural  automata:  Brains  cer-
tainly  employ  myriads  of  processors  and thus exhibit massive
parallelism. From the numerous processes that are  simultaneously
active  (autonomous  as well as deliberate ones) it is clear that
brains utilize unprecedented concurrency.  These  proces-
sors  are   free-running.   Control  and  data flows are achieved
through three-dimensional networks. Adjacency is a key feature in
most  of the brain processes that have been identified. Long dis-
tance communication is provided for by millions of parallel path-
ways, carrying highly redundant messages.

Now introspection indicates that conscious thinking is limited to
one  stream of thought at any given time. That is a limitation of
the mechanisms supporting consciousness and some will claim  that
it  can be overcome. Yet even a single stream of thinking is cer-
tainly supported  by  many  concurrent  processes,  obvious  when
thoughts are spoken, accompanied by gestures, etc.

Comments?

------------------------------

Date: 18 Feb 1984 2051-PST
From: Rob-Kling <Kling%UCI-20B%UCI-750a@csnet2>
Subject: Computing Worlds

          [Forwarded from Human-Nets Digest by Laws@SRI-AI.]

Sherry Turkle is coming out with a book that may deal in part with the
cultures of computing worlds. It also examines questions about how
children come to see computer applications as alive, animate, etc.

It was to be called "The Intimate Machine."  The title was
appropriated by Neil Frude, who published a rather superficial book
with an outline very similar to the one Turkle proposed to
some publishers.  Frude's book is published by New American Library.

Sherry Turkle's book promises to be much deeper and more careful.
It is to be published by Simon and Schuster under a different
title.

Turkle published an interesting article
called, "Computer as Rorschach" in Society 17(2)(Jan/Feb 1980).

This article examines the variety of meanings that people
attribute to computers and their applications.

I agree with Greg that computing activities are embedded within rich
social worlds. These vary. There are hacker worlds which differ
considerably from the worlds of business systems analysts who develop
financial applications in COBOL on IBM 4341's.  AI worlds differ from
the personal computing worlds, and so on.  To date, no one appears to
have developed a good anthropological account of the organizing
themes, ceremonies, beliefs, meeting grounds, etc.  of these various
computing worlds.  I am beginning such a project at UC-Irvine.

Sherry Turkle's book will be the best contribution (that I know of) in
the near future.

One of my colleagues at UC-Irvine, Kathleen Gregory, has just
completed a PhD thesis in which she has studied the work cultures
within a major computer firm.  She plans to transform her thesis into
a book.  Her research is sensitive to the kinds of language
categories Greg mentioned.  (She will be joining the Department of
Information and Computer Science at UC-Irvine in the Spring.)

Also, Les Gasser and Walt Scacchi wrote a paper on personal computing
worlds when they were PhD students at UCI.  It is available for $4
from:

        Public Policy Research Organization
        University of California,  Irvine
        Irvine,Ca. 92717

(They are now in Computer Science at USC and may provide copies upon
request.)


Several years ago I published two articles which examine some of the
larger structural arrangements in computing worlds:

        "The Social Dynamics of Technical Innovation in the
Computing World," Symbolic Interaction,
1(1)(Fall 1977):132-146.

        "Patterns of Segmentation and Intersection in the
Computing World,"
Symbolic Interaction, 1(2)(Spring 1978):24-43.

One section of a more recent article,
        "Value Conflicts in the Deployment of Computing Applications,"
Telecommunications Policy (March 1983):12-34,
examines the way in which certain computer-based technologies
such as automated offices, artificial intelligence,
CAI, etc. are the foci of social movements.


None of my papers examine the kinds of special languages
which Greg mentions. Sherry Turkle's book may.
Kathleen Gregory's thesis does, in the special setting of
one major computing vendor's software culture.

I'll send copies of my articles on request if I receive mailing
addresses.


Rob Kling
University of California, Irvine

------------------------------

End of AIList Digest
********************

∂29-Feb-84  1645	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #23
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Feb 84  16:44:59 PST
Date: Wed 29 Feb 1984 14:11-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #23
To: AIList@SRI-AI


AIList Digest            Thursday, 1 Mar 1984      Volume 2 : Issue 23

Today's Topics:
  Seminars - VLSI Knowledge Representation
    & Machine Learning
    & Computer as Musical Scratchpad
    & Programming Language for Group Theory
    & Algorithm Animation
  Conference - Very Large Databases Call for Papers
----------------------------------------------------------------------

Date: Wed 22 Feb 84 16:36:20-PST
From: Joseph A. Goguen <GOGUEN@SRI-AI.ARPA>
Subject: Hierarchical Software Processor

                     [Forwarded by Laws@SRI-AI.]

                          An overview of HISP
                           by K. Futatsugi

               Special Lecture at SRI, 27 February 1984


    HISP (hierarchical software processor) is an experimental
language/system, which has been developed at ETL (Electrotechnical
Laboratory, Japan) by the author's group, for hierarchical software
development based on algebraic specification techniques.
    In HISP, software development is simply modeled as the incremental
construction of a set of hierarchically structured clusters of
operators (modules).  Each module is constructed by applying one of
the specific module-building operations to the already existing
modules.  This basic feature makes it possible to write inherently
hierarchical and modularized software.
    This talk will introduce HISP informally by the use of simple
examples.  The present status of HISP implementation and future
possibilities will also be sketched.

------------------------------

Date: Thu 23 Feb 84 00:26:45-MST
From: Subra <Subrahmanyam@UTAH-20.ARPA>
Subject: Very High Level Silicon Compilation

    [Forwarded by Laws@SRI-AI.  This talk was presented at the SRI
                    Computer Science Laboratory.]


           VERY HIGH LEVEL SILICON COMPILATION: THEORY AND PRACTICE

                               P.A.Subrahmanyam
                        Department of Computer Science
                              University of Utah

The  possibility  of  implementing  reasonably  complex special purpose systems
directly in silicon using VLSI technologies has served to  underline  the  need
for design methodologies that support the development of systems that have both
hardware  and  software  components.    It  is  important  in  the long run for
automated design aids that support such methodologies to be based on a  uniform
set  of  principles  --  ideally,  on  a  unifying  theoretical basis.  In this
context, I have been investigating a general framework to support the  analytic
and synthetic tasks of integrated system design. Two of the salient features of
this basis are:

   - The formalism allows the various levels of abstraction involved in the
     software/hardware design process to be modelled.  For example, the
     functional (behavioral), architectural (system and chip level),
     symbolic layout, and electrical (switch-level) levels -- typical of
     the levels of abstraction that human "expert designers" work with --
     are explicitly modelled.

   - The  formalism  allows  for  explicit  reasoning  about   behavioral,
     spatial, temporal and performance criteria.

The  talk  will  motivate  the  general  problem,  outline  the  conceptual and
theoretical basis, and discuss some of our preliminary  empirical  explorations
in building integrated software-hardware systems using these principles.

------------------------------

Date: 22 Feb 84 12:19:09 EST
From: Giovanni <Bresina@RUTGERS.ARPA>
Subject: Machine Learning Seminar

              [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

             *** MACHINE LEARNING SEMINAR AND PIZZA LUNCHEON ***


    Empirical Exploration of Problem Reformulation and Strategy Acquisition

Authors: N.S. Sridharan and J.L. Bresina
Location: Room 254, Hill Center, Busch Campus, Rutgers
Date: Wednesday, February 29, 1984
Time: Noon - 1:30 pm
Speaker: John L. Bresina

The  problem  solving  ability  of an AI program is critically dependent on the
nature of the symbolic  formulation  of  the  problem  given  to  the  program.
Improvement  in  performance  of  the  problem  solving  program can be made by
improving the strategy of controlling and directing search but more importantly
by shifting the problem formulation to a more appropriate form.

The choice of the initial formulation is critical, since  certain  formulations
are  more  amenable  to  incremental  reformulations than others.  With this in
mind,  an  Extensible  Problem  Reduction  method  is  developed  that   allows
incremental  strategy  construction.    The class of problems of interest to us
requires dealing with interacting subgoals.  A variety  of  reduction  operator
types   are   introduced  corresponding  to  different  ways  of  handling  the
interaction among subgoals.  These reduction  operators  define  a  generalized
And/Or  space including constraints on nodes with a correspondingly generalized
control structure for dealing with constraints and for combining  solutions  to
subgoals.    We  consider a modestly complex class of board puzzle problems and
demonstrate, by example, how reformulation of the problem can be carried out by
the construction and modification of reduction operators.

------------------------------

Date: 26 Feb 84 15:16:08 EST
From: BERMAN@RU-BLUE.ARPA
Subject: Seminar: The Computer as Musical Scratchpad

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

SEMINAR: THE COMPUTER AS MUSICAL SCRATCHPAD

Speaker: David Rothenburg, Inductive Inference, Inc.
Date:   Monday, March 5, 1984
Place:  CUNY Graduate Center, 33 West 42nd St., NYC
Room:   732
Time:   6:30 -- 7:30 p.m.

        The composer can use a description language wherein only those
properties and relations (of and between portions of the musical
pattern) which he judges significant need be specified.  Parameters of
these unspecified properties and relations are assigned at random.  It
is intended that this description of the music be refined in response
to iterated auditions.

------------------------------

Date: Sun 26 Feb 84 17:06:23-CST
From: Bob Boyer <CL.BOYER@UTEXAS-20.ARPA>
Subject: A Programming Language for Group Theory (Dept. of Math)

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]

            DEPARTMENT OF MATHEMATICS COLLOQUIUM
          A Programming Language for Group Theory
                        John Cannon
        University of Sydney and Rutgers University
                 Monday, February 27, 4pm

     The past 25 years have seen the emergence of a small but vigorous branch of
group theory which is concerned with the discovery and implementation of
algorithms for computing structural information about both finite and infinite
groups.  These techniques have now reached the stage where they are finding
increasing use both in group theory research and in its applications.  In order
to make these techniques more generally available, I have undertaken the
development of what in effect is an expert system for group theory.

     Major components of the system include a high-level user language (having
a Pascal-like syntax) and an extensive library of group theory algorithms.  The
system breaks new ground in that it permits efficient computation with a range
of different types of algebraic structures, sets, sequences, and mappings.
Although the system has only recently been released, already it has been
applied to problems in topology, algebraic number theory, geometry, graph
theory, mathematical crystallography, solid state physics, numerical analysis
and computational complexity as well as to problems in group theory itself.
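[The flavor of computation such a library automates can be sketched in a few
lines.  The Python fragment below is an editorial illustration only, not part
of Cannon's system; the function name group_order and the tuple encoding of
permutations are my own choices.  It computes the order of a permutation group
by closing a generating set under composition, practical only for tiny groups:

```python
def group_order(generators):
    """Size of the group generated by the given permutations.

    Each permutation is a tuple p where p[i] is the image of point i.
    Starting from the identity, repeatedly multiply by generators until
    no new elements appear; in a finite group this closure is the whole
    generated subgroup.
    """
    identity = tuple(range(len(generators[0])))
    seen = {identity}
    frontier = [identity]
    while frontier:
        g = frontier.pop()
        for h in generators:
            gh = tuple(g[h[i]] for i in range(len(h)))  # composition g after h
            if gh not in seen:
                seen.add(gh)
                frontier.append(gh)
    return len(seen)

# S3, generated by a transposition and a 3-cycle:
print(group_order([(1, 0, 2), (1, 2, 0)]))  # -> 6
```

Real systems of the kind described use far more sophisticated algorithms
(e.g., stabilizer chains) to handle groups of astronomical order.  -- Ed.]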

------------------------------

Date: 27 Feb 1984 2025-PST (Monday)
From: Forest Baskett <decwrl!baskett@Shasta>
Subject: EE380 - Wednesday, Feb. 29 - Sedgewick on Algorithm Animation

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

EE380 - Computer Systems Seminar
Wednesday, February 29, 4:15 pm
Terman Auditorium

                        Algorithm Animation
                          Robert Sedgewick
                          Brown University

    The central thesis of this talk is that it is possible to expose
fundamental characteristics of computer programs through the use of
dynamic (real-time) graphic displays, and that such algorithm animation
has the potential to be useful in several contexts.  Recent research in
support of this thesis will be described, including the development of
a conceptual framework for the process of animation, the implementation
of a software environment on high-performance graphics-based
workstations supporting this activity, and the use of the system as a
principal medium of communication in teaching and research.  In
particular, we have animated scores of numerical, sorting, searching,
string processing, geometric, and graph algorithms.  Several examples
will be described in detail.

[Editorial remark: This is great stuff.  - Forest]

------------------------------

Date: 23 Feb 84 16:32:24 PST (Thu)
From: Gerry Wilson <wilson@aids-unix>
Subject: Conference Call for Papers


                        CALL  FOR PAPERS
                        ================

               10'th International Conference on

                    Very Large Data Bases


The tenth VLDB conference is dedicated to the identification and
encouragement of research, development, and application of
advanced technologies for management of large data bases.  This
conference series provides an international forum for the promotion
of an understanding of current research; it facilitates the exchange
of experiences gained in the design, construction and use of data
bases; it encourages the discussion of ideas and future research
directions.  In this anniversary year, a special focus is the
reflection upon lessons learned over the past ten years and the
implications for future research and development.  Such lessons
provide the foundation for new work in the management of large
data bases, as well as the merging of data bases, artificial
intelligence, graphics, and software engineering technologies.

TOPICS:

Data Analysis and Design           Intelligent Interfaces
    Multiple Data Types                User Models
    Semantic Models                    Natural Language
    Dictionaries                       Knowledge Bases
                                       Graphics
Performance and Control
    Data Representation            Workstation Data Bases
    Optimization                       Personal Data Management
    Measurement                        Development Environments
    Recovery                           Expert System Applications
                                       Message Passing Designs
Security
    Protection                     Real Time Systems
    Semantic Integrity                 Process Control
    Concurrency                        Manufacturing
                                       Engineering Design
Huge Data Bases
    Data Banks                     Implementation
    Historical Logs                    Languages
                                       Operating Systems
                                       Multi-Technology Systems

Applications                       Distributed Data Bases
    Office Automation                  Distribution Management
    Financial Management               Heterogeneous and Homogeneous
    Crime Control                      Local Area Networks
    CAD/CAM

Hardware
    Data Base Machines
    Associative Memory
    Intelligent Peripherals


LOCATION:  Singapore
DATES:     August 29-31, 1984

TRAVEL SUPPORT: Funds will be available for partial support of most
                participants.

HOW TO SUBMIT:  Original full length (up to 5000 words) and short (up
  to 1000 words) papers are sought on topics such as those above.  Four
  copies of the submission should be sent to the US Program Chairman:

       Dr. Umeshwar Dayal
       Computer Corporation of America
       4 Cambridge Center
       Cambridge, Mass. 02142
       [Dayal@CCA-UNIX]

IMPORTANT DATES:    Papers Due:         March 15, 1984
                    Notification:       May 15, 1984
                    Camera Ready Copy:  June 20, 1984

For additional information contact the US Conference Chairman:

      Gerald A. Wilson
      Advanced Information & Decision Systems
      201 San Antonio Circle
      Suite 286
      Mountain View, California  94040
      [Wilson@AIDS]

------------------------------

End of AIList Digest
********************

∂06-Mar-84  1159	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #24
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 6 Mar 84  11:59:24 PST
Date: Tue  6 Mar 1984 10:22-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #24
To: AIList@SRI-AI


AIList Digest            Tuesday, 6 Mar 1984       Volume 2 : Issue 24

Today's Topics:
  Conferences - AAAI-84 Paper Submission Deadline,
  AI Tools - LISP for IBM PC & UNIX VAX Tools,
  Manual Generators - Replies,
  Parser Generator - Request,
  Mathematics - Fermat's Last Theorem & Map Coloring,
  Personal Robotics - Reply,
  Waveform Analysis - ECG Systems & Validation,
  Review - U.S. Response to Japan's AI efforts
----------------------------------------------------------------------

Date: Wed 29 Feb 84 15:44:06-PST
From: Ron Brachman <Brachman at SRI-KL>
Subject: AAAI-84 Paper Submission Deadline


*******  AAAI-84 PAPER SUBMISSION DEADLINE IS APRIL 2, 1984  *******


The SIGART Newsletter (No. 87, January 1984) has mistakenly published
two conflicting dates for submission of papers to AAAI-84.  Please note
that papers must be received in the AAAI Office in Menlo Park, CA, on or
before April 2, 1984.  This is the date that appears in the AAAI-84 Call
for Papers (printed on page 17 of the above-mentioned Newsletter).  The
date printed in the "Calendar" section on page 1 of the Newsletter is
incorrect.

Thank you,

Ron Brachman, Program Chair
Claudia Mazzetti, AAAI Executive Director

------------------------------

Date: Sun 4 Mar 84 13:33:49-PST
From: Ted Markowitz <G.TJM@SU-SCORE.ARPA>
Subject: LISP for IBM PC

I asked the list a while back about implementations of LISPs for
IBM PC's. I got a pointer for IQLISP, but seem to have misplaced
the pertinent info on how to order it. Can anyone supply this?

If you have any other implementations, I'll be glad to pass any
reviews back to the list.

--ted

[The original message must have been prior to issue 53, and I
don't have it online.  Does someone have the address handy?  -- KIL]

------------------------------

Date: 27 Feb 84 16:26:42-PST (Mon)
From: ihnp4!houxm!hou2a!zev @ Ucb-Vax
Subject: AI (LISP,PROLOG,ETC.) for UNIX VAX
Article-I.D.: hou2a.269

A friend of mine is looking for a LISP, PROLOG, and/or
any other decent Artificial Intelligence system that
will run on a VAX under UNIX.

Please send replies directly to Mr. Leonard Brandwein at
aecom!brandw

He asked me to post this as a favor, since he does not
have direct access to the net.

In the likely case that you don't have a direct path
to aecom, here is one that will get you there from
any machine that can reach houxm:

houxm!hou2a!allegra!philabs!aecom!brandw

Of course, you can shorten the path if you can reach
any of the intermediate machines directly.

Thank you very much.

Zev Farkas  hou2a!zev  201 949 3821

[When sending to Usenet from the Arpanet, be sure to put double quotes
around all of the address prior to the @-sign.  Readers who want help
getting messages through the gateways should contact AIList-Request@SRI-AI.
Useful summaries or interesting replies may be published directly in
AIList, of course.  I will pass along some information about CProlog
in the next issue.  -- KIL]

------------------------------

Date: Thu, 1 Mar 84 5:12:55 EST
From: Stephen Wolff <steve@brl-bmd>
Subject: Re:  AI (LISP,PROLOG,ETC.) for UNIX VAX

     [Forwarded from the Info-Unix distribution by Laws@SRI-AI.]

Franz Lisp comes with Berkeley UNIX.  Interlisp is available.  Also T.
CProlog is available from Edinburgh.  You can get Rosie from RAND.
And these are just basics.  There's LOTS!  There are many schools out there
who are (possibly newly) in the AI business who couldn't afford DEC-20's
(obviously not SRI, UTexas, CMU, etc.), but who DID buy VAXen back when they
were good value for money.  And they're mostly running BSD, and they're
busily developing all the tools and software that AI folk do.  Is there any
PARTICULAR branch of AI you're interested in?  [...]

------------------------------

Date: Thu, 1 Mar 84 4:23:11 EST
From: Stephen Wolff <steve@brl-bmd>
Subject: Documentation tools

        Artificially intelligent it's not, and not even fancy; but there are
folks hereabouts that use the UNIX tools SCCS (or RCS) to do documentation
of various sorts.  Although intended for managing the writing, evolving and
maintaining of large software packages, they can't tell C from Fortran from
straight English text and they will quite cheerfully maintain for you the
update/revision tree in any case.

        I should imagine with a bit of thought you could link your code AND
documentation modules and manage 'em both simultaneously and equitably.

------------------------------

Date: Sat, 3 Mar 84 18:43:59 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Manual generators

The SCRIBE system (Brian K. Reid of CMU and Janet H. Walker of BBN)
may be close to what you are looking for.  It has automatic paragraph
numbering, automatic table-of-contents generation, automatic indexing,
and automatic bibliography.  (I use the word "automatic" somewhat
loosely.  The user has to be involved.)  A more sophisticated system,
I believe, is in use at the University of Michigan's Information
Systems Design and Optimization System (ISDOS) project.  The contact
is Prof. Dan Teichroew in the Industrial and Operations Engineering
department at Ann Arbor.  It may be available to ISDOS sponsors.

  --Charlie

------------------------------

Date: Thu 1 Mar 84 20:24:33-EST
From: Howard  Reubenstein <HBR@MIT-XX.ARPA>
Subject: Looking for a Parser Generator

          [Forwarded from the MIT-MC bboard by SASW@MIT-MC.]

        A friend of mine needs a parser generator which produces
output in either FORTRAN or LISP. Does anyone know where he can
get access to one?

------------------------------

Date: Thu, 1 Mar 84 08:34 EST
From: MJackson.Wbst@PARC-MAXC.ARPA
Subject: Re: Fermat's Last Theorem & Undecidable Propositions

Fermat's Last Theorem:

is the assertion that

                A↑N + B↑N = C↑N

has no solution in integers for N > 2.  (For N = 2, of course, all the
well-known right triangles like [3,4,5] are solutions.)

The Four-Color Theorem:

states that any planar map can be colored so that no two adjacent
regions are the same color using no more than four different colors.
(Regions must be connected; "adjacent" means having a common boundary of
finite length, i.e. not just touching at a point.)

The latter was shown to be true by two mathematicians at the University
of Illinois, using a combination of traditional mathematical reasoning
and computer-assisted analysis of a large set of graphs.  An article
describing the proof can be found in a back issue of /Scientific
American/.

The former appears in a manuscript by Fermat, with a marginal notation
to the effect that he had found a slick proof, but didn't have enough
space to write it down.  This was discovered after his death, of course.
Most mathematicians believe the theorem to be true, and most do not
think Fermat is likely to have found a valid proof, but neither
proposition has been proved beyond question.

Mark

------------------------------

Date: 28 Feb 84 20:42:40-PST (Tue)
From: decvax!genrad!wjh12!n44a!ima!inmet!andrew @ Ucb-Vax
Subject: Re: Re: Fermat's Last Theorem & Undecida - (nf)
Article-I.D.: inmet.945

Fermat's Last Theorem states that the equation

    n    n    n
   A  + B  = C

has no solutions in positive integers A, B, C when n > 2.

The "four-color map problem" states that any map (think of, say, a map of the
US) requires at most four colors to color all regions without using the same
color for any two adjacent ones.  (This is for maps in the plane or on a
sphere; a map on a torus can require 7 colors.)

The former has neither been proven nor disproven.  The latter was "proven"
with the aid of a computer program; many feel that this does not constitute
a true proof (see all the flames elsewhere in this group).  Incidentally,
the school where it was "proven" changed their postage meters to print
"FOUR COLORS SUFFICE" on outgoing mail.
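[Four-colorability of any particular small map is easy to check directly.  The
sketch below is an editorial illustration, unconnected to the Illinois proof;
the adjacency list is an assumed, simplified map of eight western US states:

```python
from itertools import product

# region -> set of adjacent regions (symmetric); a simplified, assumed map
adjacency = {
    "WA": {"OR", "ID"},
    "OR": {"WA", "ID", "CA", "NV"},
    "CA": {"OR", "NV", "AZ"},
    "ID": {"WA", "OR", "NV", "UT", "WY"},
    "NV": {"OR", "CA", "ID", "UT", "AZ"},
    "UT": {"ID", "NV", "AZ", "WY"},
    "AZ": {"CA", "NV", "UT"},
    "WY": {"ID", "UT"},
}

def color_map(adj, k):
    """Return a proper k-coloring as a dict, or None, by exhaustive search."""
    regions = sorted(adj)
    for colors in product(range(k), repeat=len(regions)):
        assignment = dict(zip(regions, colors))
        if all(assignment[r] != assignment[s]
               for r in adj for s in adj[r]):
            return assignment
    return None

print(color_map(adjacency, 3))              # -> None: three colors won't do
print(color_map(adjacency, 4) is not None)  # -> True: four suffice
```

The theorem's force is that four colors suffice for EVERY planar map, which no
amount of case-checking on particular maps can establish.  -- Ed.]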

------------------------------

Date: Thu 1 Mar 84 13:53:05-PST
From: Sam Hahn <SHahn@SUMEX-AIM.ARPA>
Subject: Domestic Robotics


I find that Robotics Age (the journal of intelligent machines), published by
Robotics Age, Inc, located at:

                Strand Building
                174 Concord Street
                Peterborough, NH  03458         (603) 924-7136

is a good source of information on low-end, more personal, and thus more
"domestic"ly oriented robotics.  For example, the advertisers include

        Micromation:    voice command system for Hero-1
        Iowa Precision Robotics:
                        68000-controlled educ/pers'l robot
        Micron Techn.:  computer vision for your PC
        S.M. Robotics:  PR kit for $59.95

just to name a few from the February 1984 issue.

Their articles are also more PR-oriented, and often include some level of
design info.

I'm new to the publication myself (about 1/2 year), but find it a source of
information not elsewhere available.

                                        -- sam hahn

------------------------------

Date: 27 Feb 84 19:25:34-PST (Mon)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: computer ECG - (nf)
Article-I.D.: uiucdcs.5890

Ivan Bratko, of the Josef Stefan Institute in Ljubljana, Yugoslavia, has
recently achieved some remarkable results. With the aid of computer
simulation he has built an expert system capable of diagnosing multiple
simultaneous heart malfunction causes from ECG outputs. This was a
significant contribution to medical science, since for the class of failures
he treated, there was no known method of diagnosing anything more complicated
than a single cause.

His work will be printed as a monograph from the newly-formed "International
School for the Synthesis of Expert Knowledge" (ISSEK), which will have its
first meeting this summer. ISSEK is an affiliation of computer science labs
dedicated to the automatic generation of new knowledge of super-human quality.
(Membership of ISSEK is by invitation only).

                                        Marcel Schoppers
                                        U of Illinois @ Urbana-Champaign
                                        { pur-ee | ihnp4 } ! uiucdcs ! marcel

------------------------------

Date: 2 Mar 84 20:54:42 EST
From: Ron <FISCHER@RUTGERS.ARPA>
Subject: Re: computer ECG, FDA testing of AI programs

    Apparently because of fierce competition, much current information,
    particularly with regard to algorithms, is proprietary.  Worst in this
    regard (a purely personal opinion) is HP who seems to think nobody but
    HP needs to know how they do things and physicians are too dumb to
    understand anyway.
    ...
    They offer an advantage to small hospitals by offering verification of
    the analysis by a Cardiologist (for an extra fee).

What the latter seems to say is that the responsibility for accepting
the diagnosis is that of the local cardiologist.  I cannot see a
responsible doctor examining a few runs of a program's output and
proclaiming it "correct."

A hedge against complaints of computers taking over decision making
processes from humans has been that we can look at the algorithms
ourselves or examine the reasons that a system concluded something.

If this information becomes proprietary the government will probably
license software for medical purposes the way the FDA does for new
drugs.

Imagine a testing procedure for medical diagnostic AI programs that is
as expensive and complicated as that for testing new drugs.

(ron)

[Ron makes a good point.  As a side issue, though, I would like
to mention that H-P has not been entirely secretive about its
techniques.  On March 8, Jim Lindauer of H-P will present a seminar
at Stanford (MJH 352, 2:45PM) on "Uses of Decision Trees in ECG
Analysis".  -- KIL]

------------------------------

Date: 29 Feb 84 15:36:33 PST (Wednesday)
From: Hoffman.es@PARC-MAXC.ARPA
Subject: U.S. Response to Japan's AI efforts

In the new "soft" computer journal from Springer-Verlag, 'Abacus', Vol.
1, #2, Winter 1984, is an essay by Eric A. Weiss reviewing Feigenbaum
and McCorduck's 'Fifth Generation' book and general AI books.  The
general A.I. review is worth reading.  The whole piece is lengthy, but I
quote only from the final section.

--Rodney Hoffman

[This is a rather remarkable book review.  In addition to discussing
"The Fifth Generation" and several AI reference works and textbooks,
Eric Weiss describes the history and current partitioning of AI, the
disputes and alignments of the major AI centers, and the solution to
our technological race with foreign powers.  It's well worth reading.

This second issue of Abacus also has interesting articles on Ada,
the language and the countess, tomographic and NMR imaging (with
equations!), and the U.S. vs. IBM antitrust suit, as well as columns
on computers and the law and other topics.  The magazine resembles
a Scientific American for the computer-oriented, and the NMR article
is of quality comparable to IEEE Computer.  -- KIL]

        ------------------------------------------------------
U.S. Response

On the basis of all this perspective, let me return to the Fifth
Generation Project itself and suggest that the U.S. response should be
thoughtful, considered, not guided by panic or fear, but based on
principles this nation has found fruitful:
        build on experience
        do what you do best
        encourage enthusiasm
What has been our experience with foreign science and technology?  We
know that new scientific knowledge gives the greatest benefit to those
nations which are most ready to exploit and use it, and this ready group
may not include the originating nation.... [discussion of rocketry,
automobiles, shipbuilding, steel, consumer electronics]

From this experience, the U.S. should look forward to reaping the
benefits from whatever the Japanese Fifth Generation Project develops,
and, just because we are bigger, richer, and stronger, benefiting more
from these improvements than the originating nation....

... "Do what you do best."  We do not compete with the Japanese very
well, but we do best in helping them.... [The U.S.] is best at helping
others, especially Japan, and at giving money away.... Thus, the
indicated course for the U.S. ... is to help the Japanese Fifth
Generation Project in every way we can: by supplying grants of money; by
loaning college professors; by buying and copying its product,
exploiting its scientific and technological developments and
breakthroughs as fast as they appear; and by ignoring or clucking
sympathetically over any failures or missed schedules.  Finally,...
encourage enthusiasm.

Young military people may murmur against this stance on the grounds that
military developments must be home-grown and that the development of
technology which might be used in weapons should be guided by the
military.  This assertion is borne out neither by history nor by the
present public attitude of the DoD.... [discussion of WWII anti-aircraft
guns, mines, torpedoes, and many other such]

... The advantages of letting another nation develop your military
hardware are frequently and forcefully explained to other countries by
the DoD and its industrial toadies, but these logical arguments... are
never put in their equally logical vice-versa form....

The danger is not that the Japanese will succeed -- for their successes
will result in U.S. benefits -- but that somehow we will not make prompt
use of whatever they accomplish.  We might manage this neglect if we
overdo our national inclination to fight them and compete with them....

A related but more serious danger lies in the possibility that our
military people will get their thumbs into the American AI efforts and
make secret whatever they don't gum up.... Even the best ideas can be
killed, hurt, or at least delayed if hedged around with bureaucrats and
secrecy limitations.

... We should press vigorously forward on all fronts in the unplanned
and uncoordinated fashion that we all understand.  We should let a
thousand flowers bloom.  We should encourage everyone.... We should hand
out money.  We should transport experts.  We should jump up and down.
We should be ready to grab anybody's invention, even our own, and use
it.  We should be ready to seize winners and dump losers, even our own.
We should look big, fearless, happy, and greedy, and not tiny,
frightened, worried, and dumb.

... The conclusion is: don't bet on the Japanese, don't bet against
them, don't fear them.  Push forward with confidence that the U.S. will
muddle through -- if it can keep its government from making magnificent
plans for everyone.

------------------------------

End of AIList Digest
********************

∂06-Mar-84  1305	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #25
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 6 Mar 84  13:01:19 PST
Date: Tue  6 Mar 1984 11:50-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #25
To: AIList@SRI-AI


AIList Digest            Tuesday, 6 Mar 1984       Volume 2 : Issue 25

Today's Topics:
  Review - Laws of Form,
  Brain Theory - Parallelism,
  AI Reports - Stanford Acquisitions,
  Administrivia - New Location for List-of-Lists,
  AI Software - Portability of C-Prolog
----------------------------------------------------------------------

Date: Sat, 3 Mar 84 18:36:25 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Laws of Form

I don't pretend to be an expert on LoF but I think there are at least two
interesting aspects to it.  One is that it provides a calculus that can be used
to "compile" a set of syllogisms (page 124 of the Dutton 1979 edition).  A
second is that it does away with Russell and Whitehead's cumbersome Theory
of Types.  All orders of self-referential sets of statements can be evaluated
within the set of "imaginary" values.

You can argue that the compilation of syllogism sets (rule sets) can already
be done using truth tables.  I think that the benefit of Spencer-Brown's
calculus is that it is much more efficient and should run much faster.
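[For comparison, the truth-table baseline mentioned above is easy to write
down.  This is an editorial sketch, not Spencer-Brown's calculus: it validates
a chained syllogism by exhaustive enumeration of truth assignments:

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

# Check that (P -> Q) and (Q -> R) entails (P -> R), the brute-force
# truth-table method whose cost Spencer-Brown's calculus aims to beat.
valid = all(
    implies(implies(p, q) and implies(q, r), implies(p, r))
    for p, q, r in product([False, True], repeat=3)
)
print(valid)  # -> True
```

The table grows as 2^n in the number of propositions, which is the efficiency
question at issue.  -- Ed.]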

Those who are really interested should loosen up and plow through the book
a few times with an open mind.  It is really very thought-provoking.

  --Charlie

------------------------------

Date: Mon 5 Mar 84 20:34:27-EST
From: David Rogers <DRogers%MIT-OZ@MIT-MC.ARPA>
Subject: parallel minds?

For a very good (if 3 years old) discussion on parallelism
in the brain, refer to Hinton and Anderson's book "Parallel
Models of Associative Memory", pages 32-44 [Hin 81]. The
applicable section is entitled "Parallelism and Distribution
in the Mammalian Nervous System". Structurally, parallelism is
inherent throughout the nervous system, making simple
sequential models of human low-level cognition highly
suspect.

Though it was not openly stated in the discussion on this list,
there seem to be two issues of parallelism involved here:
low-level parallelism, and parallelism at some higher
"intellectual" level. The latter subject is rightly the domain
for experimentalists, and should not be approached with
such simple techniques as introspection ("Well, I *feel*
sequential when I think...").

One known experimental fact does suggest a high degree of
parallelism, even in higher cognitive functions. Since
the firing cycle of a neuron takes on the order of
2-3 milliseconds, and some highly complex tasks (such as
face recognition) are performed in about 300 ms, it seems
clear that the brain uses massive parallelism, not just
in the visual system but throughout [Fel 79].
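[The arithmetic behind that argument is worth making explicit.  This is an
editorial restatement of the usual "100-step" estimate, not a calculation
taken from [Fel 79]:

```python
# If one neural "step" takes ~2-3 ms and face recognition takes ~300 ms,
# a strictly sequential account allows only ~100-150 steps end to end --
# far too few for any known sequential recognition algorithm.
task_time_ms = 300        # rough time for face recognition
step_times_ms = (2, 3)    # rough neuron firing-cycle time

for step in step_times_ms:
    print(f"{step} ms/step -> at most {task_time_ms // step} sequential steps")
```
-- Ed.]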

I would suggest that future discussions offer the reader
a few more experimental details, lest the experimental
psychologists in our midst feel unappreciated.

                              ---------
[Hin 81]
   "Parallel Models of Associative Memory", G. Hinton,
   J. Anderson, eds, Lawrence Erlbaum Assoc., 1981, pages 32-44.

[Fel 79]
   "A Distributed Information Processing Model of Visual
   Memory", J.A. Feldman, University of Rochester Computer
   Science Department, TR52, December 1979.

------------------------------

Date: Sun 4 Mar 84 21:56:21-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Latest Math & CS Library "New Reports List" posted on-line.

[Every month or two Stanford announces its new CS report acquisitions.
I culled and sorted many of the citations for an earlier set of AIList
issues, but I have not gotten around to doing so for the last six
months or so.  Instead, I am forwarding this notice as an example
of the notices you can get by contacting LIBRARY@SU-SCORE.  For those
interested in FTPing the report listings, I would characterize them
as being lengthy, somewhat cryptic and inconveniently formatted, and
usually divided about equally between AI-related topics and non-AI
math/CS topics (VLSI design, hardware concepts, operating systems,
networking, office automation, etc.).  -- KIL]

The latest Math & Computer Science Library "New Reports List" has been
posted on-line.  The file is "<LIBRARY>NEWTRS" at SCORE, "NEWTRS[LIB,DOC]"
at SAIL, "<CSD-REPORTS>NEWTRS" at SUMEX, and "<LIBRARY>NEWTRS" at SIERRA.
In case you miss a reports list, the old lists are being copied to
"<LIBRARY>OLDTRS" at SCORE and "<LIBRARY>OLDTRS" at SIERRA where they will
be saved for about six months.

If you want to see any of the reports listed in the "New Reports List,"
either come by the library during the display period mentioned or send a
message to LIBRARY at SCORE, giving your departmental address and the
six-digit accession numbers of the reports you want to see, and we will
check them out in your name and send them to you as soon as they are available.

The library receives technical reports from over a hundred universities
and other institutions.  The current batch includes - among others -
reports from:


      Eidgenoessische Technische Hochschule Zuerich. Institut fuer Informatik.
      IBM. Research Division.
      Institut National de Recherche en Informatique et en Automatique (INRIA).
      New York University. Courant Institute of Mathematical Sciences.
      U.K. National Physical Laboratory. Division of Information Technology
        and Computing.
      Universite de Montreal. Departement d'Informatique et de Recherche
        Operationnelle.
      University of Edinburgh. Department of Computer Science.
      University of Southern California. Information Sciences Institute.
      University of Wisconsin-Madison. Computer Sciences Department.




                                        - Richard Manuck
                                          Math & Computer Science Library
                                          Building 380 - 4th Floor
                                          LIBRARY at SCORE

------------------------------

Date: 1 Mar 1984 2142-PST
From: Zellich@OFFICE-3 (Rich Zellich)
Subject: New location for list-of-lists (Interest-Groups.TXT)

File Interest-Groups.TXT has been moved from OFFICE-3 and is now
available on the SRI-NIC host in file <NETINFO>INTEREST-GROUPS.TXT

Requests for copies of the list, updates to the list, etc., should be
sent to ZELLICH@SRI-NIC in the future, instead of ZELLICH@OFFICE-3 or
RICH.GVT@OFFICE-3.

Cheers,
Rich

------------------------------

Date: Wednesday, 22-Feb-84 23:45:00-GMT
From: O'Keefe HPS (on ERCC DEC-10) <OKeefe%EDXA@UCL-CS>
Subject: Portability of C-Prolog

[The following is forwarded from the Prolog digest.  I consider
it an interesting account of the difficulties of making AI
software available on different systems.  The message is
8K characters, so I have put it last in the digest for those
who want to skip over it. -- KIL]

There was a question in this Digest about whether C-Prolog had
been ported to Apollos.  I don't know about that, but I have had a
great deal to do with C-Prolog, so I can say what might give trouble
and what shouldn't.

The first thing to beware of is that there are two main versions
of C-Prolog drifting around.  The one most people have is the one
distributed by EdCAAD (which is where Fernando Pereira wrote it), and
while that runs under VAX/UNIX and VAX/VMS both, and is said to run
on at least one 68000 box, V7 C compilers don't like it much.  The
other version is distributed by EdAI on a very informal basis, but it
should be available from Silogic in a couple of weeks.  The EdAI
version has been ported to the Perq (running ICL's C-machine micro-
code and their PaNiX port of V7 UNIX) and to another C-machine called
the Orion (that compiler isn't a derivative of PCC).  C-Prolog has
something like one cast per line; the EdAI version has stronger type
declarations, so that the compiler produces no warning messages.  Both
versions are essentially the same, so EdAI cannot distribute their
version to anyone who hasn't got a licence for the EdCAAD version.

What C-Prolog v1.4d.edai requires is

[1] a V7 or later C compiler
[2] pointers should be 32 bits long
[3] the compiler should support 32 bit integer arithmetic, and
    floats should be storable in 32 bits.  (In fact if anyone has
    a decent C compiler for the Dec-10 [a] please can we have a copy
    and [b] C-Prolog should run quite happily on it.)
[4] It needs to steal 3 bits out of floats, so it needs to know a bit
    about the floating-point storage format.  IEEE and VAX-11 are ok.
[5] I/O uses <stdio> exclusively.
    C-Prolog supports ~username/X and $envvar/X expansion, but if the
    "unix" identifier is not defined it knows not to ask.
[6] brk() and sbrk() are needed.  If you haven't got them, you could
    declare a huge array and use that, but that would require source
    hacking.
[7] The MAJOR portability problem is that C-Prolog assumes that all
    pointers into the area managed by brk() and sbrk() look like
    POSITIVE integers.  It doesn't matter if the stack or text areas
    lie in negative address space (in fact the stack IS in negative
    address space on the Perq and Orion).  Getting around this would
    be a major exercise, not to be undertaken by anyone without a
    thorough understanding of the way C-Prolog works.  Since we have
    a GEC series 63 machine, and since there is some political
    pressure to adopt this as a UK IKBS machine (to which application
    it is NOT suited, nor any other), and since that machine puts
    everything in negative address space, we may produce a version of
    C-Prolog which can handle this.  But don't hold your breath.

The Perq (running C) and the Orion are both word-addressed.  This is
no problem.  Getting C-Prolog running on the Orion was a matter of
telling it where to look for its files and saying "make", but then
the Orion, though nothing like a VAX, runs 4.1bsd.  Getting it going
on a Perq was harder, but the bugs were in the Perq software, not in
C-Prolog.  The main thing anyone porting C-Prolog to a new machine
with a decent C and positive address space should have to worry about
is the sizes of the data areas, in the file parms.c.

To give this message some interest for people who couldn't care
less about porting C-Prolog, here are some general notes on porting
Prolog interpreters written in C.  (I've seen seven of them, but not
UNH Prolog.)

A well written Prolog interpreter uses the stdio library, so that
I/O shouldn't be too much of a problem.  But it may also want to
rename and/or delete files, to change the working directory, or to
call the command interpreter.  These operations should be in one file
and clearly labelled as being operating-system dependent.  Porting
from one version of UNIX to another should cause no difficulty, but
there is a problem with calling the shell: people using ?.?bsd will
expect the C-shell, and an interpreter written for V7 may not know
about that.  If you change it, be sure to use the environment
variable SHELL to determine what shell to use.  (Ports to S3 should
do this too, so that users who are supposed to be restricted to rsh
can't escape to sh via prolog.)

No Prolog implementor worth his salt would dream of using malloc.
As a result, a Prolog interpreter is pretty well bound to use brk()
and/or sbrk().  It may do so only at start-up (C-Prolog does this),
or it may do so dynamically (a Prolog with a garbage collector, and
pitifully few of them have, will probably do this).  In either case
allocation is virtually certain to be word-aligned and in units of
words, where a word is a machine pointer.

There are two ways of telling what sort of thing a pointer is
pointing to.  One way is to use TAGS, that is to reserve part of the
word to hold a code saying (integer,pointer to atom,pointer to clause
pointer to variable,&c).  This is particularly tempting on machines
like the M68000 where part of an address is spare anyway.  The other
way is to divide the address space into a number of partitions, such
as (integers, atoms, clauses, global stack, local stack, trail), and
to tell what something points to by checking *where* it points.
C-Prolog could be described as "semi-tagged": integers, floats,
pointers to clauses, and pointers to records all live in the virtual
partition [-2↑31,0) and are tagged, pointers to real objects are
discriminated by where they point.  Other things being equal, tagged
systems are likely to be slower.  But tagged systems should be immune
to the "positive address space problem".  So you have to check which
sort your system is.  If it is tagged, you should check the macros
for converting between tagged form and machine addresses VERY VERY
carefully.  They may not work on your machine, and it may be possible
to do better.  Here is an example of what can go wrong.

/* Macro to convert a 24-bit byte pointer and a 6-bit tag to a
   32-bit tagged pointer
*/
#define Cons(tag,ptr) (((int)(ptr)<<8) | tag)
/* Macro to extract the tag of a tagged pointer */
#define Tag(tptr) ((tptr)&255)
/* Macro to convert a tagged pointer to a machine pointer */
#define Ptr(tptr) (pointertype)(tptr>>8)
/* Macro to find the number of words between two tagged
   pointers
*/
#define Delta(tp1,tp2) (((tp1)-(tp2))>>8)

DRAT!  That was meant to be >>10 not >>8.

What can go wrong with this?  Well, Delta can go wrong if the machine
uses word addresses rather than byte addresses, in which case it
should be >>8 as I first wrote instead of >>10.  Cons can go wrong
if the top bits of a pointer are significant.  (On the Orion the top
2 bits and the bottom 24 bits are significant.)  Ptr can go wrong
if addresses are positive and user addresses can go over 2↑23, in
which case an arithmetic right shift may do horrid things.  I have
seen at least two tagged Prolog interpreters which would go wrong on
the Orion.

Prolog interpreters tend to be written by people who do not know
all the obscure tricks they can get up to in C, so at least you ought
not be plagued by the "dereferencing 0" problem.

If anyone reading this Digest has problems porting C-Prolog other
than the positive address space problem, please tell me.  I may be
able to help.  There is one machine with a C compiler that someone
tried to port it to, and failed, and that is a Z8000 box where the
user's address space is divided up into a lot of 64kbyte chunks, and
the chunks aren't contiguous!  A tagged system could handle that,
though with some pain.  C-Prolog can't handle it at all.

If anyone has already ported some version of C-Prolog to another
machine (not a VAX, Perq/UNIX, Orion, or M68000/UNIX) please let me
know so that we can maintain a list of C-Prolog versions, saying
what machine, what problems, and whether your version is available to
people holding an EdCAAD licence.

------------------------------

End of AIList Digest
********************

∂06-Mar-84  1615	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #26
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 6 Mar 84  16:15:00 PST
Date: Tue  6 Mar 1984 15:09-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #26
To: AIList@SRI-AI


AIList Digest           Wednesday, 7 Mar 1984      Volume 2 : Issue 26

Today's Topics:
  Seminars - Extended Prolog Theorem Prover &
    A Model of LISP Computation &
    YOKO Random Haiku Generator &
    Emulation of Human Learning &
    Circuit Design by Knowledge-Directed Search &
    Knowledge Structures for Automatic Programming &
    Mathematical Ontology &
    Problem Solving in Organizations &
    Inequalities for Probabilistic Knowledge
  Conference - STeP-84 Call for Papers
----------------------------------------------------------------------

Date: 29 Feb 84 13:54:56 PST (Wednesday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium 3/8/84

[Forwarded from the SRI-AI bboard by Laws@SRI-AI.]


                Mark E. Stickel
                SRI International

                A Prolog Technology Theorem Prover

An extension of Prolog, based on the model elimination theorem-proving
procedure, would permit production of a Prolog technology theorem prover
(PTTP). This would be a complete theorem prover for the full first-order
predicate calculus, not just Horn clauses, and provide capabilities for
full handling of logical negation and indefinite answers. It would be
capable of performing inference operations at a rate approaching that of
Prolog itself--substantially faster than conventional theorem-proving
systems.

PTTP differs from Prolog in its use of unification with the "occurs
check" for soundness, the complete model elimination input inference
procedure, and a complete staged depth-first search strategy. The use of
an input inference procedure and depth-first search minimizes the
differences between this theorem-proving method and Prolog and permits the
use of highly efficient Prolog implementation techniques.

        Thursday, March 8, 1984 4:00 pm
        Hewlett Packard
        Stanford Division
        5M Conference room

        1501 Page Mill Rd
        Palo Alto

        *** Be sure to arrive at the building's lobby on time, so that you may
be escorted to the meeting room.

------------------------------

Date: Wed 29 Feb 84 13:07:26-PST
From: MESEGUER@SRI-AI.ARPA
Subject: A Model of LISP Computation

           [Forwarded from the CSLI bboard by Laws@SRI-AI.]


                REWRITE RULE SEMINAR AT SRI-CSL
                    Wednesday March 7, 3:00 pm

                   A Model of Computation
         Theory and application to LISP-like systems

                      Carolyn Talcott
                    Stanford University

The goal of this work is to provide a rich context in which a
variety of aspects of computation can be treated and where new
ideas about computing can be tested and developed.  An important
motivation and guide has been the desire to understand the construction
and use of LISP-like computation systems.

The first step was to define a model of computation and develop the
theory to provide basic tools for further work.  The main components are

    - basic model and notion of evaluation
    - equivalence relations and extensionality
    - an abstract machine as a subtheory
    - formalization of the metatheory

Key features of this theory are:

    - It is a construction of particular theories uniformly
      from given data structures (data domain and operations).

    - Focus is on control aspects of computation

    - A variety of objects
      Forms  -- for describing control aspects of computation
      Pfns  -- abstraction of form in an environment
            -- elements of the computation domain
            -- computational analogue of partial functions
      Carts -- for collecting arguments and values
      Envs -- interpretation of symbols appearing in forms
      cTrees -- objects describing particular computations


Applications of this theory include

   -  proving properties of pfns
   -  implementation of computation systems
   -  representing and mechanizing aspects of reasoning


In this talk I will describe RUM -  the applicative
fragment (flavor).  RUM is the most mathematically
developed aspect of the work and is the foundation
for the other aspects which include implementation
of a computation system called SEUS.

------------------------------

Date: 1 Mar 1984 10:00:33-EST
From: walter at mit-htvax
Subject: GRADUATE STUDENT LUNCH

               [Forwarded from the MIT-MC bboard by SASW@MIT-MC.]


                     Computer Aided Conceptual Art (CACA)
                      Eternally Evolving Seminar Series
                                   presents

                        YOKO: A Random Haiku Generator

Interns gobble oblist hash      | We will be discussing YOKO and the
Cluster at operations           | related issues of computer modeling
Hidden rep: convert!            | of artists, modeling computer artists,
                                | computer artists' models, computer
Chip resolve to bits            | models of artists' models of computers,
Bus cycle inference engine      | artist's cognitive models of computers,
Exposing grey codes             | computers' cognitive models of artists
                                | and models, models' models of models,
Take-grant tinker bucks         | artists' models of computer artists,
Pass oblist message package     | modelling of computer artists' cognitive
Federal express                 | models and artist's models of cognition.

                     Hosts: Claudia Smith and Crisse Ciro
                         REFRESHMENTS WILL BE SERVED

------------------------------

Date: 1 Mar 84 09:26:46 EST
From: PETTY@RUTGERS.ARPA
Subject: VanLehn Colloquium on Learning

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


          SPEAKER:   Dr. Kurt VanLehn
                     Xerox Corp.
                     Palo Alto Research Center

          TITLE:    "FELICITY CONDITIONS FOR HUMAN SKILL ACQUISITION"

A theory of how people learn certain procedural skills will be
presented.  It is based on the idea that the teaching and learning
that goes on in a classroom is like an ordinary conversation.  The
speaker (teacher) compresses a non-linear knowledge structure (the
target procedure) into a linear sequence of utterances (lessons).  The
listener (student) constructs a knowledge structure (the learned
procedure) from the utterance sequence (lesson sequence).  In recent
years, linguists have discovered that speakers unknowingly obey
certain constraints on the sequential form of their utterances.
Apparently, these tacit conventions, called felicity conditions or
conversational postulates, help listeners construct an appropriate
knowledge structure from the utterance sequence.  The analogy between
conversations and classrooms suggests that there might be felicity
conditions on lesson sequences that help students learn procedures.
This research has shown that there are.  For the particular kind of
skill acquisition studied here, three felicity conditions were
discovered.  They are the central hypotheses in the learning theory.
The theory has been embedded in a model, a large AI program.  The
model's performance has been compared to data from several thousand
students learning ordinary mathematical procedures:  subtracting
multidigit numbers, adding fractions and solving simple algebraic
equations.  A key criterion for the theory is that the set of
procedures that the model "learns" should exactly match the set of
procedures that students actually acquire including their "buggy"
procedures.  However, much more is needed for psychological validation
of this theory, or any complex AI-based theory, than merely testing
its predictions.  Part of the research has involved finding ways to
argue for the validity of the theory.

           DATE:   Tuesday, March 6, 1984
           TIME:   11:30 a.m.
           PLACE:  Room 323 - Hill Center

------------------------------

Date: 1 Mar 84 09:27:06 EST
From: PETTY@RUTGERS.ARPA
Subject: Tong Colloquium on Knowledge-Directed Search

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


        SPEAKER:   Christopher Tong

        TITLE: "CIRCUIT DESIGN AS KNOWLEDGE-DIRECTED SEARCH"

     The process of circuit design is usefully viewed as search through
a large space of circuit descriptions. The search is knowledge-diverse
and knowledge-intensive: circuits are described at many levels of
abstraction (e.g.
architecture, logic, layout); designers use many kinds of knowledge and
styles of reasoning to pursue and constrain the search.

     This talk presents a preliminary categorization of knowledge about
the design process and its control. We simplify the search by using a
single processor-oriented language to cover the function to structure
spectrum of circuit abstractions. We permit the circuit design and the
design problem (i.e. the associated goals) to co-evolve; nodes in the
design space contain explicit representations for goals as well as
circuits. The design space is generated by executing tasks, which
construct and refine circuit descriptions and goals (aided by libraries
of components of goals). The search is guided locally by goals and
tradeoffs; globally it is resource-limited (in design time and quality),
conflict-driven, and knowledge-intensive (drawing on a library of strategies).

     Finally, we describe an interactive knowledge-based computer
program called DONTE (Design ONTology Experiment) that is based on the
above framework. DONTE transforms architectural descriptions of a
digital system into circuit-level descriptions.

             DATE:  Thursday, March 8, 1984
             TIME:  2:50 p.m.
             PLACE:  Room 705 - Hill Center
                   *  Coffee Served at 2:30 p.m.  *

------------------------------

Date: 1 Mar 84 09:27:23 EST
From: PETTY@RUTGERS.ARPA
Subject: Ferrante Colloquium on Automatic Programming

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

               SPEAKER:  Jeanne Ferrante
                         IBM Thomas J. Watson Research Center
                         Yorktown Heights, NY

               TITLE:   "PROGRAMS = CONTROL + DATA"

A new program representation called the program dependence graph or
PDG is presented which makes explicit both the data values on which
an operation depends (through data dependence edges) and the control
value on which the execution of the operation depends (through control
dependence edges).  The data dependence relationships determine the
necessary sequencing between operations with the same control
conditions, exposing potential parallelism.  In this talk we
show how the PDG can be used to solve a traditional stumbling block in
automatic program improvement.  A new incremental solution to the
problem of updating data flow following changes in control flow such
as branch deletion is presented.

The PDG is the basis of current work at IBM Yorktown Heights for
compiling programs in sequential languages like FORTRAN to exploit
parallel architectures.

               DATE:  Friday, March 9, 1984
               TIME:  2:50 p.m.
               PLACE:  Room 705 - Hill Center
                      *  Coffee Served at 2:30 p.m.  *

------------------------------

Date: 5 Mar 84 17:45 PST
From: Guibert.pa@PARC-MAXC.ARPA
Subject: Talk by David McAllester: Mon. Mar. 12 at 11:00 at PARC

[Forwarded from the CSLI bboard by Laws@SRI-AI.]

Title: "MATHEMATICAL ONTOLOGY"

Speaker: David McAllester (M.I.T.)
When: Monday March 12th at 11:00am
Where: Xerox PARC Twin Conference Room, Room 1500

        AI techniques are often divided into "weak" and "strong" methods.  A
strong method exploits the structure of some domain while a weak method
is more general and therefore has less structure to exploit.  But it may
be possible to exploit UNIVERSAL structure and thus to find STRONG
GENERAL METHODS.  Mathematical ontology is the study of the general
nature of mathematical objects. The goal is to uncover UNIVERSAL
RELATIONS, UNIVERSAL FUNCTIONS, and UNIVERSAL LEMMAS which can be
exploited in general inference techniques.  For example there seems to
be a natural notion of isomorphism and a standard notion of essential
property which are universal (they can be meaningfully applied to ALL
mathematical objects).  These universal relations are completely ignored
in current first order formulations of mathematics. A particular theory
of mathematical ontology will be discussed in which many natural
universal relations can be precisely defined.  Some particular strong
general inference techniques will also be discussed.

------------------------------

Date: 5 Mar 1984  22:41 EST (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: AI Revolving Seminar

[Forwarded from the MIT-XX bboard by Laws@SRI-AI.]

Wednesday, March 7   4:00pm   8th floor playroom


               Knowledge and Problem Solving Processes
                         in Organizations

                            Gerald Barber


Human organizations have frequently been used as models for AI systems
resulting in such theories as the scientific community metaphor, the
society of mind, and contract nets, among others.  However, these human
organizational models have been limited by the fact that they do not take
into account the epistemological processes involved in organizational
problem solving.  Understanding human organizations from an
epistemological perspective is becoming increasingly important as a
source of insight into intelligent activities and for computer-based
technology as it becomes more intricately involved in organizational
activities.

In my talk I will present the results of an organizational study which
attempted to identify problem solving and knowledge processing
activities in the organization.  I will also outline the possibilities
for development of both human organizational models and artificial
intelligence systems in light of this organizational study.  More
specifically, I will discuss the shortcomings of organizational
theories and application of the results of this work to highly
parallel computer systems such as the APIARY.

------------------------------

Date: Tue 6 Mar 84 09:05:05-PST
From: Juanita Mullen  <MULLEN@SUMEX-AIM.ARPA>
Subject: SIGLUNCH ANNOUNCEMENT -- Friday, March 9, 1984

      [Forwarded from the SIGLUNCH distribution by Laws@SRI-AI.]

Friday,   March 9, 1984
LOCATION: Braun Lecture Hall (smaller), ground floor of Seeley Mudd
          Chemistry Building (approx. 30 yards west of Gazebo)
12:05

SPEAKER:  Ben Grosof
          Stanford University, HPP

TOPIC:    AN INEQUALITY PARADIGM FOR PROBABILISTIC KNOWLEDGE
          Issues in Reasoning with Probabilistic Statements

BACKGROUND:     Reasoning with probabilistic knowledge and evidence is
a key aspect of  many AI systems.  MYCIN  and PROSPECTOR were  pioneer
efforts but were limited and  unsatisfactory in several ways.   Recent
methods  address  many  problems.    The  Maximum  Entropy   principle
(sometimes called  Least  Information)  provides  a  new  approach  to
probabilities. The Dempster-Shafer theory  of evidence provides a  new
approach to confirmation and disconfirmation.

THE TALK: We begin by relating probabilistic statements to logic.   We
then  review  the  motivations  and  shortcomings  of  the  MYCIN  and
PROSPECTOR  approaches.   Maximum  Entropy  and  Dempster-Shafer   are
presented, and recent work using them is surveyed.  (This is your  big
chance to  get up  to date!)   We  generalize both  to a  paradigm  of
inequality constraints on  probabilities.  This  paradigm unifies  the
heretofore divergent  representations  of probability  and  evidential
confirmation in  a formally  satisfactory  way.  Least  commitment  is
natural.  The interval  representation for  probabilities includes  in
effect a meta-level which allows  explicit treatment of ignorance  and
partial information,  confidence  and  precision,  and  (in)dependence
assumptions.  Using bounds  facilitates reasoning ABOUT  probabilities
and evidence.  We extend the Dempster-Shafer theory significantly  and
make an  argument  for  its  potential,  both  representationally  and
computationally.  Finally we list some open problems in reasoning with
probabilities.

------------------------------

Date: Fri, 2 Mar 84 11:18 EST
From: Leslie Heeter <heeter%SCRC-VIXEN@MIT-MC.ARPA>
Subject: STeP-84

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

        In addition to the call for papers below, Eero Hyvonen
has asked me to announce that they are looking for a lecturer
for the tutorial programme. The tutorial speaker should preferably
have experience in building industrial expert systems. For a few
hours' lecture, they are prepared to pay for the trip, the stay, and
some extra.

Exhibitors and papers are naturally welcome, too.


                        C A L L  F O R  P A P E R S

                                STeP-84

                Finnish Artificial Intelligence Symposium
                (Tekoalytutkimuksen paivat)
                Otaniemi, Espoo, Finland
                August 20-22, 1984


Finnish Artificial Intelligence Symposium (STeP-84) will be held at
Otaniemi campus of Helsinki University of Technology. The purpose of
the symposium is to promote AI research and application in Finland.

Papers (30 min) and short communications (15 min) are invited on
(but not restricted to) the following subfields of AI:

Automaattinen ohjelmointi       (Automatic Programming)
Kognitiivinen mallittaminen     (Cognitive Modelling)
Asiantuntijajarjestelmat        (Expert Systems)
Viidennen polven tietokoneet    (Fifth Generation Computers)
Teolliset sovellutukset         (Industrial Applications)
Tietamyksen esittaminen         (Knowledge Representation)
Oppiminen                       (Learning)
Lisp-jarjestelmat               (Lisp Systems)
Logiikkaohjelmointi             (Logic Programming)
Luonnollinen kieli              (Natural Language)
Hahmontunnistus                 (Pattern Recognition)
Suunnittelu ja etsinta          (Planning and Search)
Filosofiset kysymykset          (Philosophical Issues)
Robotiikka                      (Robotics)
Lauseen todistaminen            (Theorem Proving)
Konenako                        (Vision)

The first day of the symposium is reserved for the Tutorial programme
on key areas of AI presented by foreign and Finnish experts. There will
be an Industrial Exhibition during the symposium. Submission deadline
for one page abstracts of papers and short communications is April 15th.
Camera ready copy of the full text is due by July 31st. The address of
the symposium is:

        STeP-84
        c/o Assoc. Prof. Markku Syrjanen
        Helsinki University of Technology
        Laboratory of Information Processing Science
        Otakaari 1 A
        02150 Espoo 15                          Telex: +358-0-4512076
        Finland                                 Phone: 125161 HTKK SF

Local Arrangements:

Eero Hyvonen, Jouko Seppanen, and Markku Syrjanen
Helsinki University of Technology

Program Committee:

Kari Eloranta                           Erkki Lehtinen
    University of Tampere                   University of Jyvaskyla
Seppo Haltsonen                         Seppo Linnainmaa
    Helsinki University of Tech.            University of Helsinki
Rauno Heinonen                          Klaus Oesch
    State Technical Research Centre         Nokia Corp.
Harri Jappinen                          Martti Penttonen
    Sitra Foundation                        University of Turku
Matti Karjalainen                       Matti Pietikainen
    Helsinki University of Tech.            University of Oulu
Kimmo Koskenniemi                       Matti Uusitalo
    University of Helsinki                  Finnish CAD/CAM Association
Kari Koskinen
    Finnish Robotics Association

Organised under the auspices of the Finnish Computer Science Society.
Conference languages will be Finnish, Swedish, and English.

------------------------------

End of AIList Digest
********************

∂07-Mar-84  1632	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #27
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 7 Mar 84  16:30:06 PST
Delivery-Notice: While sending this message to SU-AI.ARPA, the
 SRI-AI.ARPA mailer was obliged to send this message in 50-byte
 individually Pushed segments because normal TCP stream transmission
 timed out.  This probably indicates a problem with the receiving TCP
 or SMTP server.  See your site's software support if you have any questions.
Date: Wed  7 Mar 1984 15:12-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #27
To: AIList@SRI-AI


AIList Digest            Thursday, 8 Mar 1984      Volume 2 : Issue 27

Today's Topics:
  Automatic Programming - Request for Bibliography,
  Pattern Recognition - Request for Character Recognition Algorithms,
  Expert Systems - Request for MYCIN Source Code, Tutorial,
  AI Tools - IQLISP Source,
  Mathematics - The Four-Color Theorem,
  AI Literature - The Artificial Intelligence Report,
  Expert Systems - EURISKO/AM Overview
----------------------------------------------------------------------

Date: 6 Mar 1984 1612-EST
From: CASHMAN at DEC-MARLBORO
Subject: For AI digest: request for program synthesis bibliography

Does anyone have (a pointer to) a bibliography (preferably annotated) of
papers on program synthesis?  Is there a good survey paper or article on
the field (other than what's in the Handbook of AI)?

  -- Paul Cashman (Cashman@DEC-Marlboro)

[Richard Waldinger (@SRI-AI) suggests a survey and bibliography on
program synthesis in "Synthesis: Dreams -> Programs" which appeared in
the IEEE Transactions on Software Engineering about 1975.  -- KIL]

------------------------------

Date: 7 Mar 1984 0643 PST
From: Richard B. August <AUGUST@JPL-VLSI>
Reply-to: AUGUST@JPL-VLSI
Subject: SEARCH FOR PATTERN/CHARACTER RECOGNITION ALGORITHMS,
         ARTICLES ETC.

BEGINNING RESEARCH ON CHARACTER RECOGNITION TECHNIQUES.
OBJECTIVE: DEVELOP CHARACTER INPUT DEVICE (WAND) TO ACCEPT THE MAJORITY
OF FONTS FOUND IN PUBLICATIONS.

POINTERS TO PUBLICATIONS ARE HELPFUL.

THANKS

REGARDS RAUGUST

[The international joint conferences on pattern recognition would
be a good place to start.  Proceedings are available from the IEEE
Computer Society.  A 1962 book I've found interesting is "Optical
Character Recognition" by Fischer, et al.  Good luck (you'll need
it).  -- KIL]

------------------------------

Date: Wed, 7 Mar 84 14:14:35 PST
From: William Jerkovsky <wj@AEROSPACE>
Subject: MYCIN

I would like to execute a simple problem on MYCIN. I have recently gotten
interested in expert systems; since my wife is a bacteriologist, I think both
of us would enjoy the interaction with the program via our home computer
(terminal).

Can anyone point out the way to get a (free) copy of MYCIN (even if it is
only a simple early version)? Is there a way I can execute a version
interactively from home without actually getting a copy? Does anybody know
of an on-line tutorial on MYCIN? Is there a simple version of MYCIN (or a
reasonable facsimile) which runs on an Apple //e or on an IBM PC?

I'll appreciate whatever help I can get.

Thanks

Bill Jerkovsky

------------------------------

Date: Tue 6 Mar 84 15:48:55-PST
From: Sam Hahn <SHahn@SUMEX-AIM.ARPA>
Subject: IQLISP Source

The source for IQLisp is:

        Integral Quality, Inc.
        P.O. Box 31970
        Seattle, WA  98103
        (206) 527-2918

Claims to be similar to UCI Lisp, except function def's are stored in cells
within identifiers, not on property lists; arg. handling is specified in the
syntax of the expression defining the function, I/O functions take an explicit
file argument, which defaults to the console; doesn't support FUNARGS.

IQLisp does provide:
        32kb character strings,
        77000 digit long integers,
        IEEE format floating point,
        point and line graphics,
        ifc to assembly coded functions,
        31 dimensions to arrays.

Costs $175 for program and manual, PCDOS only.

I've taken the liberty of including some of their sales info for those who may
not have heard of IQLisp.  It's fairly new, and they say a generic MSDOS
version is coming soon (though probably without graphics support).

------------------------------

Date: Wed, 7 Mar 84 09:16 EST
From: MJackson.Wbst@PARC-MAXC.ARPA
Subject: Re: The Four-Color Theorem

By "planar map" in my previous message I meant to connote a structure on
a two-dimensional surface, not strictly a flat plane.  In fact, the
plane and the sphere are topologically equivalent (the plane is a sphere
of infinite radius), so four colors suffice for both; for the torus,
which has a different "connectivity," it has long been known that seven
colors are both necessary and sufficient.

I'm not a mathematician but at one time (right after reading an article)
I felt as if I understood the proof.  As I recall it is based on the
fact that if there are any maps that require five colors there is a
minimal (smallest) map that requires five colors.  It is possible to
construct sets of graphs (representing map regions) of varying
complexity for which any map must include at least one member of the
set.  It is also possible to determine for some particular graph whether
it can be "reduced" (so that it represents fewer regions) without
altering its four-colorability or its interactions with its neighbors.
Clearly the minimal five-color map cannot contain a "reducible" graph
(else it is not minimal).

Evidently, if one can construct a set of graphs of which ANY map must
contain at least one member, and show that EVERY member of that set is
reducible, then the minimal five-color map cannot exist; hence no
five-color map can exist.  Now if it were possible to construct such a
set with, say, 20 graphs one could show explicitly BY HAND that each
member was reducible.  No one would call such a proof "ugly" or "not a
true proof;" it might not be considered particularly elegant but it
wouldn't be outside the mainstream of mathematical reasoning either (and
it doubtless would have been found years ago).  The problem with the
actual case is that the smallest candidate set of graphs had thousands
of members.  What was done in practice was to devise algorithms which
would succeed at reducing "most" (>95%?) reducible graphs.  So most of
the graph reduction was done by computer, the remaining cases being done
by hand.  (I understand that to referee the paper another program had to
be written to check the performance of the first.)
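
The flavor of a "four-colorability" check can be conveyed with a toy program.
The sketch below is purely illustrative and is nothing like the actual Illinois
reduction programs; it just brute-forces a proper coloring of a small adjacency
structure by backtracking.

```python
# Toy illustration only (nothing like the Illinois reduction programs):
# brute-force backtracking search for a proper coloring of a graph,
# i.e., an assignment in which no two adjacent regions share a color.

def colorable(adjacency, num_colors=4):
    """Return a node -> color dict if a proper coloring exists, else None."""
    nodes = list(adjacency)
    coloring = {}

    def assign(i):
        if i == len(nodes):
            return True
        node = nodes[i]
        for color in range(num_colors):
            # A color is legal if no already-colored neighbor uses it.
            if all(coloring.get(nbr) != color for nbr in adjacency[node]):
                coloring[node] = color
                if assign(i + 1):
                    return True
                del coloring[node]      # backtrack
        return False

    return dict(coloring) if assign(0) else None

# K4: four mutually adjacent regions need all four colors.
k4 = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
assert colorable(k4, 4) is not None
assert colorable(k4, 3) is None
```

Of course, the whole difficulty of the real proof is that no such exhaustive
search over all planar maps is possible; hence the unavoidable-set argument.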

I would like to hear any criticism of the Illinois proof that is more
specific than "ugly" or "many feel that this does not constitute a true
proof."  A pointer to the mathematical literature will suffice; my
impression is that the four-color theorem is widely accepted as having
been proved.  (We may be getting a bit far afield of AI here; I would
say that my impression of the techniques used in the automatic reduction
program was that they were not "artificial intelligence," but since they
were manifestly "artificial" I hesitate to do so for fear of rekindling
the controversy over what constitutes "intelligence!")

Mark

------------------------------

Date: Tue 6 Mar 84 10:58:51-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: The Artificial Intelligence Report

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

I have received a sample copy of the Artificial Intelligence Report, vol. 1
number 1, January 1984. It is being published locally and will have ten
issues per year.  It is more of a newsletter-type publication with the
latest information on research (academic and industrial) and applied AI
within industry.  The cost is $250 per year.  The first issue has 15 pages.
I will place it on the new journal shelf.  [...]

[I may try to start charging for AIList ...  -- KIL]

------------------------------

Date: Tue, 6 Mar 84 16:50 PST
From: "Allen Robert"@LLL-MFE.ARPA
Subject: EURISKO/AM review (415) 422-4881


In response to Rusty's request regarding EURISKO (V2,#22),  the  following
is a brief excerpt from my thesis qualifying/background paper on knowledge
acquisition in expert systems.  I tried to summarize the  system  and  its
history;   a  lot  of  detail has been removed.  I hope the description is
accurate;  please feel free to criticize.


EURISKO is a part of  Doug  Lenat's  investigation  of  machine  learning,
drawing   its   roots  from  his  Stanford  Ph.D.   thesis  research  with
AM [Lenat 76].  AM is somewhat unusual among learning systems in that it
does  not have an associated performance element (expert system).  Rather,
AM is supplied with an initial  knowledge  base  representing  simple  set
theoretic  concepts,  and  heuristics  which  it  employs to explore those
concepts.   The  goal  is  for  AM  to  search  for  new   concepts,   and
relationships between concepts, guided by those heuristics.

AM represents concepts (e.g., prime and natural numbers) in frames.  A
frame's  slots  describe  attributes  of  the  concept,  such as its name,
definition, boundary values, and examples.  A definition slot includes one
or  more  LISP  predicate  functions;   AM applies definition functions to
objects (values, etc.) to determine  whether  they  are  examples  of  the
concept.   For  instance,  the  Prime  Number frame has several definition
predicates which can each determine (for different  circumstances)  if  an
integer   is  prime  or  not;   those  predicates  (and  boundary  values)
effectively define the concept "prime number" within AM.
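
For concreteness, the definition-slot idea might be caricatured as follows.
This is my own hypothetical Python sketch, not AM's actual Lisp; the frame
layout and function names are invented here.

```python
# Hypothetical sketch (not AM's actual Lisp) of a concept frame whose
# "definition" slot holds predicate functions; an object counts as an
# example of the concept if some definition predicate accepts it.

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

prime_frame = {
    "name": "Prime-Number",
    "definition": [is_prime],          # one or more predicates
    "examples": [],
}

def classify(frame, obj):
    """Apply the frame's definition predicates to decide membership,
    recording positive instances in the examples slot."""
    if any(pred(obj) for pred in frame["definition"]):
        frame["examples"].append(obj)
        return True
    return False

assert classify(prime_frame, 7)
assert not classify(prime_frame, 9)
assert prime_frame["examples"] == [7]
```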

Any slot may have zero or more heuristics, expressed as production rules
that encode strategies for exploring concepts.  Heuristics primarily obtain
or verify slot values; they may also postulate new concepts/frames, or
specify  tasks  to  be  performed.   AM  maintains  an  "agenda  of tasks"
expressed as goals, in the form "Find or verify the value of slot S,  from
concept/frame  C."  The  basic  control  structure selects a task from the
agenda, and checks the slot (S) for heuristics.  If one or more are found,
a  rule  interpreter  is  invoked  to  execute  them.   If  slot  S has no
heuristics, it may point (possibly  through  several  levels)  to  another
frame  whose  corresponding  (same  name)  slot  does, in which case those
heuristics are executed;  thus, heuristics from higher-level concepts  may
be  employed  or  inherited  in  exploring  less  abstract concepts.  This
continues until all the related heuristics are executed;  AM then  returns
to the agenda for a new goal.
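
The agenda-driven control loop just described might be sketched in miniature
as below.  This is a hypothetical illustration, not Lenat's code; the frame
names and the single toy heuristic are invented here.

```python
# Hypothetical miniature of AM-style control (all names invented here):
# a task asks for a slot's value; heuristics attached to that slot, or
# inherited from a more general frame, are run to obtain it.

frames = {
    "Concept": {"generalization": None,
                # Toy heuristic: ensure the frame has an examples list.
                "heuristics": {"examples":
                               [lambda f: f.setdefault("examples", [])]}},
    "Prime-Number": {"generalization": "Concept", "heuristics": {}},
}

def find_heuristics(frame_name, slot):
    """Walk up generalization links until some frame supplies heuristics
    for this slot (inheritance from higher-level concepts)."""
    while frame_name is not None:
        frame = frames[frame_name]
        if slot in frame["heuristics"]:
            return frame["heuristics"][slot]
        frame_name = frame["generalization"]
    return []

agenda = [("Prime-Number", "examples")]     # goals: (concept, slot)
while agenda:
    concept, slot = agenda.pop(0)
    for heuristic in find_heuristics(concept, slot):
        heuristic(frames[concept])          # fill or verify the slot

assert frames["Prime-Number"]["examples"] == []
```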

AM is provided with an  initial  knowledge  base  of  around  one  hundred
frames/concepts   from   finite  set  theory,  which  include  around  250
heuristics.  The system is then "set loose"  to  explore  those  concepts,
guided  by  heuristics;   AM  postulates new concepts and then attempts to
judge their  validity  and  utility.   Over  a  period  of  time,  AM  may
conjecture  and  explore  several  hundred  new concepts;  some eventually
become well established and are  themselves  used  as  extensions  of  the
initial knowledge base.

AM never managed to discover concepts  that  were  not  already  known  in
mathematics; however, it did discover many well-known mathematical
principles (e.g., de Morgan's laws, unique factorization), some of which
were originally unknown to Lenat.  It was hoped that AM might also
be applied to  the  domain  of  heuristics  themselves;   i.e.   exploring
heuristic  concept/frames  instead of mathematical concept/frames, but the
system did not make  much  progress  in  this  area.   Lenat  explains  an
underlying problem: AM's representation of domain knowledge (LISP
functions) is fundamentally similar  to  the  primitives  of  mathematical
notation,  while  heuristics  lack  a  similar close relationship.  He has
developed  new  ideas  regarding  the  meaning   and   representation   of
heuristics, which are being explored with AM's successor,
EURISKO [Lenat 82,83a,83b].

One significant lesson learned from AM, and being applied in  EURISKO,  is
(roughly)  that  explicit  treatment  of heuristics and meta-knowledge (as
well as assertive domain knowledge) is a necessary condition for  learning
heuristics  (and  assertive  domain  knowledge).   The  main  focus of the
EURISKO project is to  investigate  representation  and  reasoning/control
issues   related  to  learning  (heuristics,  operators,  and  new  domain
objects).  Also, where concepts in AM were related to mathematical notions
(like Prime Numbers), flexibility is an important design criterion for
EURISKO, which is being applied  to  a  number  of  problem  domains  (see
[Lenat 83b]).

Like AM, EURISKO is a frame based system which represents  domain  objects
in   frames.    However,   where   AM  attached  heuristics  to  slots  in
concepts/frames, EURISKO represents heuristics themselves as  frames.   In
general,  EURISKO  goes  much  further  than AM in explicitly defining and
representing knowledge at many levels;  everything possible is  explicitly
represented as an object.  For example, every kind of slot (e.g., ISA,
Examples) has a frame associated with it, which  explicates  the  meanings
and  operations  of  the slot.  This allows the system to reason with each
kind of slot (as well as with the slot value), for example to know whether
a  particular  type  of  slot  represents guaranteed, probable, or assumed
knowledge.

Part of the approach in EURISKO is to  emphasize  the  importance  of  the
representation  language itself in solving a problem.  The RLL frame based
language [Greiner 80] was developed for  this  purpose.   In  RLL,  almost
every object (notably including heuristics) is represented as an explicit,
discrete frame (a "unit," as frames are called in RLL).  Thus heuristics become
objects which a system can use, manipulate, and reason about just like any
other object.  Without going into details, RLL has a  number  of  features
which  are  oriented  toward  explicit  representation and manipulation of
domain knowledge, both factual and heuristic.  It has a more sophisticated
"multiple-agendae" control structure which is itself represented as frames
in the knowledge base.  Operations with and on frames  include  a  lot  of
bookkeeping  by  RLL, intended to retain explicit knowledge which was lost
in AM.  Because heuristics are explicitly represented objects (frames), it
is possible for built-in or domain-specific knowledge to be applied to the
learning of heuristics (i.e.  using built-in heuristics which specify  how
to postulate and explore new heuristics).
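
A rough sketch of the "everything is a unit" idea follows.  The encoding is my
own hypothetical one, not actual RLL: both a slot type and a heuristic are
ordinary units, so the system can enumerate and reason about heuristics just
like any other object.

```python
# Hypothetical sketch of RLL-style representation (names invented here):
# a slot type and a heuristic are both ordinary units in one knowledge
# base, carrying meta-knowledge the system itself can inspect.

units = {
    "Examples-Slot": {"isa": "SlotType",
                      "certainty": "guaranteed"},   # meta-knowledge about a slot
    "H1":            {"isa": "Heuristic",
                      "if": lambda unit: unit.get("worth", 0) > 500,
                      "then": "explore-specializations"},
}

def applicable_actions(unit):
    """Because heuristics are ordinary units, the system can enumerate
    them and test their conditions like any other object."""
    return [name for name, u in units.items()
            if u.get("isa") == "Heuristic" and u["if"](unit)]

assert applicable_actions({"worth": 800}) == ["H1"]
assert applicable_actions({"worth": 100}) == []
```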

EURISKO has been notably successful as both a learning and  a  performance
(expert)  system in a number of domains.  [Lenat 83b] describes the use of
EURISKO in playing the Traveller  Trillion  Credit  Squadron  (TCS)  game,
where it has won two national tournaments, and discovered some interesting
playing strategies.  In [Lenat 82], EURISKO's application to "high-rise"
VLSI  circuit design is described.  EURISKO constructed a number of useful
devices and circuits, and has discovered  some  important  heuristics  for
circuit design.

                              ----------

Greiner, R., Lenat, D.  1980.  "A Representation Language Language." Proc.
AAAI 1, pp.  165-169.

Lenat, D.B.  1976.  "AM:  An artificial intelligence approach to discovery
in   mathematics   as  heuristic  search."  Ph.D.   Diss.   Memo  AIM-286,
Artificial Intelligence Laboratory, Stanford University, Stanford,  Calif.
(Revised  version  in R.  Davis, D.  Lenat (Eds.), Knowledge Based Systems
in Artificial Intelligence.  New York:  McGraw-Hill.  1982.)

Lenat, D.B.  1982.  "The nature of heuristics." AI Journal 19:2.

Lenat, D.B.  1983a.  "The nature of heuristics II." AI Journal 20:2.

Lenat, D.B.  1983b.  "EURISKO:  A program that learns new  heuristics  and
domain concepts, The nature of heuristics III." AI Journal 21:1-2.

                              ----------
Rob Allen <ALLEN ROBERT@LLL-MFE.ARPA>

------------------------------

End of AIList Digest
********************

∂09-Mar-84  2228	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #28
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 9 Mar 84  22:28:19 PST
Date: Fri  9 Mar 1984 21:42-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #28
To: AIList@SRI-AI


AIList Digest           Saturday, 10 Mar 1984      Volume 2 : Issue 28

Today's Topics:
  Games - SMAuG Player Simulation,
  Mathematics - The Four-Color Theorem,
  AI Tools - Interlisp Availability,
  Review - Playboy AI Article,
  Expert Systems - Computer Graphics & Hardware/Software Debugging,
  Expert Systems - Production Tools,
  Review - Laws of Form
----------------------------------------------------------------------

Date: 8 Mar 84 17:17:53 EST
From: GOLD@RU-BLUE.ARPA
Subject: a request for suggestions....

Some of you may be aware of the project known as SMAuG (Simultaneous
Multiple AdventUrer Game) that is ongoing at Rutgers University.  It is
an applied research project designed to examine the problems of distrib-
uting the work of a complex piece of software across local intelligent
devices and a remote timesharing computer.  The software is a multiple
player adventure game.

Within the game a player may interact with other players, or with
software controlled players referred to as Non Player Characters (NPC's).
The NPC's are the area of the project which I am personally involved
with and for which I write to this bboard.  There are many interesting
subtopics within the NPC issue: NPC communication, self mobility,
acquisition of knowledge, and rescriptability, to name just a few.
The object is to create an NPC which can interact with a player character
without making it obvious that it is machine controlled and not
another player character.  [Aha! Another Turing test! -- KIL]

I would like to request suggestions of relevant publications that I
should be familiar with.  This is a large project, but I am loath
to make it even larger by ignoring past work that has been done.
I would greatly appreciate any suggestions for books, journal articles,
etc. that might offer a new insight into the problem.

Please send responses to Gold@RU-Blue.

Thank you very much,

Cynthia Gold

------------------------------

Date: Thu 8 Mar 84 10:54:46-PST
From: Wilkins  <WILKINS@SRI-AI.ARPA>
Subject: Re: The Four-Color Theorem

I am not familiar with the literature on the 4-color proof, nor with whether it
is commonly accepted.  I do however have a lot of experience with computer
programs and have seen a lot of subtle bugs that do not surface until long after
everyone who has used the software is convinced that it works for all possible
cases.  The fact that another person wrote a different program that got the
same results means little as the same subtle bugs are likely to be unforeseen
by other programmers.  If the program is so complicated that you cannot prove
it or its results correct, then I think the mathematicians would be foolish
to accept its output as a proof.

David

------------------------------

Date: Thu 8 Mar 84 09:59:26-PST
From: Slava Prazdny <Prazdny at SRI-KL>
Subject: Re: The Four-Color Problem

re: the 4-color problem
A nice overview paper by the authors is in "Mathematics Today",
L.A.Steen (ed),Vintage Books, 1980.

------------------------------

Date: 8 Mar 1984 11:22-PST
From: Raymond Bates <RBATES at ISIB>
Subject: Interlisp Availability

A version of Interlisp is available from ISI that runs on the VAX
line of computers.  We have versions for Berkeley UNIX 4.1 or 4.2
and a native VMS version.  It is a full and complete
implementation of Interlisp.  For more information send a message
to Interlisp@ISIB with your name and address or send mail to:

Information Science Institute
ISI-Interlisp Project
4676 Admiralty Way
Marina del Rey, CA  90292

Interlisp is a programming environment based on the lisp
programming language.  Interlisp is in widespread use in the
Artificial Intelligence community.  It has an extensive set of
user facilities, including syntax extensions, uniform error
handling, automatic error correction, an integrated
structure-based editor, a sophisticated debugger, a compiler and
a file system.

P.S.  I just got AGE up and running under ISI-Interlisp (the new
name of Interlisp-VAX) and will start to work on EMYCIN soon.

/Ray

------------------------------

Date: Thu 8 Mar 84 20:35:02-CST
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: Playboy 4/84 article: AI-article by Lee Gomes

If you needed an excuse to read Playboy (even deduct it from your taxes??),
on page 126 there is an article:

        The Mind of a New Machine.  can the science of artificial intelligence
                produce a computer that's smarter than the men who build it?

Nothing earth-shaking: a little history, a little present state of the art,
a little outlook into the future.  But it's interesting what's being fed
to this audience.  Something to hand to a friend who wants to know what
this is all about and doesn't mind getting side-tracked by "Playmates
Forever" on page 129.

        Enjoy or Suffer, it's your choice.

------------------------------

Date: Thu 8 Mar 84 16:55:31-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert Systems in Computer Graphics

The February issue of IEEE Computer Graphics and Applications has a short
blurb on Dixon and Simmons' expert system for mechanical engineering design.
Following the blurb, on p. 61, is the notice

    IEEE Computer Graphics and Applications is planning an issue
    featuring articles on expert systems in computer graphics
    applications in early 1985.  Those interested in contributing
    should contact Carl Machover, Machover Associates, Inc., 199
    Main St., White Plains, NY 10601; (914) 949-3777.


The issue also contains an article on "Improved Visual Design for Graphics
Display" by Reilly and Roach.  The authors mention the possibility of
developing an expert consulting system for visual design that could be
used to help programmers format displays.  (I think automated layout
for the graphics industry would be even more useful, and an excellent
topic for expert systems research.)  They cite

    J. Roach, J.A. Pittman, S.S. Reilly, and J. Savarse, "A Visual
    Design Consultant," Int'l Conf. Cybernetics and Society, Seattle,
    Wash., Oct. 1982.

as a preliminary exploration of this idea.

                                        -- Ken Laws

------------------------------

Date: Fri 9 Mar 84 17:23:30-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert System for Hardware/Software Debugging

The March issue of IEEE Computer has an article by Roger Hartley
of Kansas State University on the CRIB system for fault diagnosis.
The article starts with a discussion of expertise among experts
vs. that among practitioners, and about the process of building
a knowledge base.  Hartley then introduces CRIB and discusses, at
a fairly high level, its application to fault diagnosis in ICL 2903
minicomputers.  He then briefly mentions use of the same hierarchical
diagnostic strategy in debugging the VME/K operating system.

This article is an expanded version of the paper "How Expert Should an
Expert System Be?" in the 7th IJCAI, 1981.

                                        -- Ken Laws

------------------------------

Date: 8 March 1984 1426-est
From: Roz    <RTaylor.5581i27TK @ RADC-MULTICS>
Subject: Expert Systems Production tools

To all who have queried me regarding what info I have or have received on
expert systems production tools...I must apologize.  Have not gotten it
into suitable format as yet;  I am literally behind the power curve
with some new efforts (high visibility) recently assigned to me (approx
4 weeks ago--about the time I could start editing what I have).  I will
post it to the AIList, but unless something helps it won't be before
April.  Unfortunately, what has already been massaged is in 132-char
[tabular] format and would not post easily to the list that way.  I am
sorry, folks.  But I have not forgotten you.
                                  Roz

------------------------------

Date: 7 Mar 84 19:12:34 PST (Wed)
From: Carl Kaun <ckaun@aids-unix>
Subject: More Laws of Form


Before  I  say anything,  you all should know that I consider myself at  best
naive  concerning formal logic.   Having thus outhumbled myself  relative  to
anyone  who  might answer me and having laid a solid basis for my  subsequent
fumbling around, I give you my comments about Laws of Form.  I do so with the
hope that it stirs fruitful discussion.

First,  as  concerns  notation.   LoF  uses a symbol called at  one  point  a
"distinction"  consisting  of  a  horizontal  bar  above  the  scope  of  the
distinction,  ending  in a vertical bar.   Since I can't reproduce that  very
well  here,  I  will  use parentheses to designate scope where the  scope  is
otherwise ambiguous.  Also, LoF uses a blank space which can be confusing.  I
will  use  an  underline "←" in its place.   And LoF  places  symbols  in  an
abutting  position to indicate disjunction.   I will use a comma to  separate
disjunctive terms.

In  Lof,  the string of symbols " (a)|,  b ",  or equivalently,  " a|,  b" is
equivalent  logically to the statement " a implies b".   The comparison  with
the  equivalent  statement  " (not a) or b" is also obvious.  The "|"  symbol
seems to be used as a postfix unary [negation] operator.  "a" and "b" in  the
formulae  are  either  "←"  or  "←|" or any allowable combination of these in
terms of the constructions available through the finite  application  of  the
symbols "|" and "←".  LoF goes on to talk about this form and what it implies
at some length.   Although it derives some interesting looking formulae (such
as the one for distribution), I could find nothing that cannot be equivalently
derived from Boolean Algebra.
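
That equivalence is easy to spot-check mechanically.  The encoding below is my
own, not from LoF: the "|" mark acts as negation, juxtaposition as
disjunction, and the form " (a)|, b " then agrees with material implication
over the whole truth table.

```python
# My own rough encoding (not from LoF) of the forms discussed above:
# the "|" mark as negation, juxtaposed/comma-separated terms as
# disjunction, so that " (a)|, b " reads as material implication.

def mark(x):
    """The "|" distinction: crossing the boundary, i.e., negation."""
    return not x

def juxtapose(*xs):
    """Abutted (comma-separated) forms read as disjunction."""
    return any(xs)

def lof_implies(a, b):
    """The form " (a)|, b "."""
    return juxtapose(mark(a), b)

# Agrees with Boolean "(not a) or b" on all four cases:
for a in (True, False):
    for b in (True, False):
        assert lof_implies(a, b) == ((not a) or b)
```

This bears out the observation that the calculus of the primary algebra is
interderivable with Boolean algebra, whatever one makes of its notation.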

Eventually, LoF comes around to the discussion of paradoxical forms, of which
the  statement  "this sentence is false" is the paradigm.   As I  follow  the
discussion at this point, what one really wants is some new distinction (call
it "i") which satisfies the formula " (i|)|, i".  At least I think it should
be a distinction; perhaps it should simply be considered a symbol.
The above form purports to represent the sentence "this sentence is false".
The  formulation  in  logic  is similar to the way  one  arrives  at  complex
numbers,  so  LoF also refers to this distinction as being  "imaginary".   At
this point I am very excited; I think LoF is going to explore the formula,
create an algebra that one can use to determine paradoxical forms,  etc.  But
no  development of an algebra occurs.   I played around with this some  years
ago  trying  to get a consistent algebra,  but I didn't really  get  anywhere
(could well be because I don't know what I'm doing).  LoF goes on to describe
the  distinction  "i"  in terms of  alternating  sequences  of  distinctions,
supposedly linking the imaginary distinction to the complex number  generator
exp(ix); however, I find this discussion most unconvincing and unenlightening.

Now LoF returns to the subject of distinction again,  describing distinctions
as  circles in a plane (topologically deformable),  where distinction  occurs
when one crosses the boundary of a circle.   In this description,  the set of
distinctions  one can make is firmly specified by the number of circles,  and
the  ways  that circles can include other circles,  etc.   LoF gives  a  most
suggestively  interesting  example of how the topology of the  surface  might
affect  the distinctions,  and even states that different distinctions result
on spheres than on planes, and on toroids than on either, etc.  Unfortunately
he  does not expound in this direction either,  and does not link it  to  his
"imaginary"  form  above,  and I think I might have given up on LoF  at  this
time.   LoF  doesn't  even discuss  intersecting  circles/distinctions.

The  example  that  LoF  gives is of a sphere where one  distinction  is  the
equator,   and   where  there  are  two  additional  distinctions   (circles,
noninclusive  one  of  the  other) in  the  southern  hemisphere.   Then  the
structure  of the distinctions one can make depends on whether one is in  the
northern  hemisphere,  or  in  the southern hemisphere external  to  the  two
distinctions there, or inside one of the circles/distinctions in the southern
hemisphere.   As  I say,  I really thought (indeed think today) that  perhaps
there is some meat to be found in the approach,  but I don't have the time to
pursue it.

I  realize  that  I have mangled LoF pretty  considerably  in  presenting  my
summary/assessment/impressions of it.     This is entirely in accordance with
my expertise established above.   Still,  this is about how much I got out of
LoF.   I found some suggestive ideas,  but nothing new that I (as a  definite
non-logician) could work with.   I would dearly love it if someone would show
me how much more there is.  I suspect I am not alone in this.


Carl Kaun  ( ckaun@AIDS-unix )

------------------------------

End of AIList Digest
********************

∂09-Mar-84  2324	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #29
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 9 Mar 84  23:24:20 PST
Date: Fri  9 Mar 1984 21:56-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #29
To: AIList@SRI-AI


AIList Digest           Saturday, 10 Mar 1984      Volume 2 : Issue 29

Today's Topics:
  Administrivia - New Osborne Users Group,
  Obituary - A. P. Morse,
  Courses - Netwide AI Course Bites the Dust,
  Seminars - Joint Seminar on Concurrency &
    Programming by Example &
    Mathematical Ontology Seminar Rescheduled &
    Incompleteness in Modal Logic &
    Thinking About Graph Theory
----------------------------------------------------------------------

Date: 3 Mar 84 17:14:40-PST (Sat)
From: decvax!linus!philabs!sbcs!bnl!jalbers @ Ucb-Vax
Subject: Atten:Osborne owners
Article-I.D.: bnl.361

ATTENTION users of Osborne computers.  The Capital Osborne Users Group (CapOUG)
is seeking other Osborne users groups across the country.  If you are a member
of such a group, please send the name of the president, along with an address
and phone number.  We are also looking for contacts via the net (USENET or
ARPA/MILNET) between groups across the country.   If you can be such a contact
or know of someone who can, please send me mail.  All that would be involved
is sending and receiving summaries of meetings, parts of newsletters, and
acting as an interface between your group and the other groups 'subscribing' to
this 'mailing list'.  At this point, it is not certain whether communication
would be through a mail 'reflector' or via a 'digest'; however, the latter is
most likely.  In return for your service, the CapOUG will exchange our software
library, which consists of over 120 SD diskettes, and articles from our
newsletter.  The 'interface' would be asked to offer the like to the other
members of the list.
Even if you don't belong to a group, this would be a great way to find
the group in your area.

                                                        Jon Albersg
                                                ARPA    jalbers@BNL
         (UUCP)...!ihnp4!harpo!floyd!cmc12!philabs!sbcs!bnl!jalbers

------------------------------

Date: Wed 7 Mar 84 22:55:24-CST
From: Bob Boyer <CL.BOYER@UTEXAS-20.ARPA>
Subject: A. P. Morse

        [Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]

A. P. Morse, Professor of Mathematics at UC Berkeley, author
of the book "A Theory of Sets," died on Monday, March 5.
Morse's formal theory of sets, sometimes called Kelley-Morse
set theory, is perhaps the most widely used formal theory
ever developed.  Morse and his students happily wrote proofs
of serious mathematical theorems (especially in analysis)
within the formal theory; it is rare for formal theories
actually to be used, even by their authors.  A key to the
utility of Morse's theory of sets is a comprehensive
definitional principle, which permits the introduction of
new concepts, including those that involve indicial (bound)
variables.  Morse's set theory was the culmination of the
von Neumann, Bernays, Godel theory of sets, a theory that
discusses sets (or classes) so "large" that they are not
members of any set.  Morse took delight in making the
elementary elegant.  His notion of ordered pair "works" even
if the objects being paired are too "big" to be members of a
set, something not true about the usual notion of ordered
pairs.  Morse's theory of sets identifies sets with
propositions, conjunction with intersection, disjunction
with union, and so forth.  Through his students (e.g., W. W.
Bledsoe), Morse's work has influenced automatic
theorem-proving.  This influence has shaped the development
of mechanized logics and resulted in mechanical proofs of
theorems in analysis and other nontrivial parts of
mathematics.

------------------------------

Date: Sun, 4 Mar 84 09:00:57 pst
From: bobgian%PSUVAX1.BITNET@Berkeley
Subject: Netwide AI Course Bites the Dust

The "Netwide AI and Mysticism" course I had hoped to offer to all
interested people has become the victim of my overenthusiasm and the
students' underenthusiasm.

The term here is half over, and student energies and motivations are
YET to rise to the occasion.  I have tried my best, but (aside from a
very select and wonderful few) Penn State students just do not have
what it takes to float such a course.  I am spending most of my time
just trying to make sure they learn SOMETHING in the course.  The
inspiration of a student-initiated and student-driven course is gone.

My apologies to ALL who wrote and offered useful comments and advice.
My special thanks to all who mailed or posted material which has been
useful in course handouts.  I WILL try this again!!  I may give up on
the average Penn State student, but I WON'T give up on good ideas.

I will be moving soon to another institution -- one which EXPLICITLY
encourages innovative approaches to learning, one which EXPLICITLY
appeals to highly self-motivated students.  We shall try again!!

In the meantime, the "Netwide AI course" is officially disbanded.  Those
students here who DO have the insight, desire, and maturity to carry it
on may do so via their own postings to net.ai.  (Nothing I could do or
WANT to do would ever stop them!)  To them all, I say "You are the hope
for the world."  To the others, I say "Please don't stand in our way."

        -- Bob "disappointed, but ever hopeful" Gian...

[P.s.]

Since my last posting (808@psuvax.UUCP, Sunday Mar 4) announcing the
"temporary cessation" of the "Netwide AI and Mysticism" course from Penn
State, I have received lots of mail asking about my new position.  The thought
struck, just AFTER firing that note netwards, that instead of saying

    "I will be moving soon to another institution ...."

I SHOULD have said

    "I will soon be LOOKING for another institution -- one which EXPLICITLY
    encourages innovative approaches to learning, one which EXPLICITLY
    appeals to highly self-motivated students.  We shall try again!!"

That "new institution" might be a school or industrial research lab.  I want
FIRST to leave behind at Penn State the beginnings of what someday could be
one of the finest AI (especially Cognitive Science and Machine Learning)
labs around.  Then I'll start looking for a place more in tune with my
(somewhat unorthodox, by large state school standards) teaching and research
style.

To all who wrote with helpful comments, THANKS.  And, if anybody knows of
such a "new institution", I'm WIDE OPEN to suggestions!!!

        -- Bob "ever hopeful" Gian...

Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
Arpa:   bobgian%PSUVAX1.BITNET@Berkeley
Bitnet: bobgian@PSUVAX1.BITNET         CSnet:  bobgian@penn-state.CSNET
UUCP:   bobgian@psuvax.UUCP            -or-    allegra!psuvax!bobgian
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802

------------------------------

Date: Wed 7 Mar 84 18:05:04-PST
From: DKANERVA@SRI-AI.ARPA
Subject: Joint Seminar on Concurrency

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]


                     JOINT SEMINAR ON CONCURRENCY

                      Carnegie-Mellon University
                           July 9-11, 1984

     The National Science Foundation  (NSF)  of the United States  and
the Science and Engineering Research Council (SERC) of Great Britain have
agreed to support a Joint Seminar on Concurrency.  The seminar intends
to  discuss the state  of the art in concurrent programming languages,
their  semantics, and the problems of proving properties of concurrent
programs.

     A small number of participants from Britain and the United States
have already  been  invited,  but  other  interested  researchers  are
encouraged to attend.  Because of the limited NSF and SERC funding, no
financial support is  available.  However,  if you  are interested  in
participating and can find your own support, please contact as soon as
possible:

    Stephen D. Brookes                  Brookes@CMU-CS-A
    Department of Computer Science      Home (412) 441-6662
    Carnegie-Mellon University          Work (412) 578-8820
    Schenley Park
    Pittsburgh, PA  15213

     The other organizers of the meeting are Glynn Winskel  (Cambridge
University) and Bill Roscoe (Oxford University), but inquiries  should
be directed to Brookes at Carnegie-Mellon.

------------------------------

Date: 07 Mar 84  1358 PST
From: Terry Winograd <TW@SU-AI.ARPA>
Subject: Programming by Example

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Talkware Seminar (CS 377)

Date: Monday March 12
Speaker: Daniel Halbert (Berkeley & Xerox OSD) and David C. Smith (Visicorp)
Topic: Programming by Example
Time: 2:15-4
Place: 200-205

Most computer-based applications systems cannot be programmed by their
users. We do not expect the average user of a software system to be able
to program it, because conventional programming is not an easy task.

But ordinary users can program their systems, using a technique called
"programming by example". At its simplest, programming by example is
just recording a sequence of commands to a system, so that the sequence
can be played back at a later time, to do the same or a similar task.
The sequence forms a program. The user writes the program -in the user
interface- of the system, which he already has to know in order to
operate the system. Programming by example is "Do what I did."

A simple program written by example may not be very interesting. I will
show methods for letting the user -generalize- the program so it will
operate on data other than that used in the example, and for adding
control structure to the program.
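The record-and-generalize idea in the paragraphs above can be sketched minimally; the class and command names below are invented for illustration and are not from the Star implementation:

```python
# A toy "programming by example" recorder: the user performs commands
# once, the recorded sequence becomes a program, and a concrete value
# from the demonstration is generalized into a named parameter so the
# program works on other data.
class Recorder:
    def __init__(self):
        self.steps = []                  # the recorded program

    def do(self, command, *args):
        self.steps.append((command, args))

    def generalize(self, example_value, name):
        """Replace a value used in the example with a parameter."""
        self.steps = [
            (cmd, tuple(name if a == example_value else a for a in args))
            for cmd, args in self.steps
        ]

    def play(self, **bindings):
        """Replay the program, substituting new data for parameters."""
        for cmd, args in self.steps:
            yield (cmd, tuple(bindings.get(a, a) for a in args))

r = Recorder()
r.do("open", "report.txt")               # demonstrated on one file...
r.do("print", "report.txt")
r.generalize("report.txt", "$doc")       # ...then generalized
print(list(r.play(**{"$doc": "memo.txt"})))
```

The generalization step is the crux: without it, playback only repeats "what I did"; with it, the same recording operates on data other than that used in the example.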

In this talk, I will describe programming by example, discuss current
and past research in this area, and also describe a particular
implementation of programming by example in a prototype of the Xerox
8010 Star office information system.

------------------------------

Date: Thu, 8 Mar 84 14:57 PST
From: BrianSmith.PA@PARC-MAXC.ARPA
Subject: Mathematical Ontology Seminar Rescheduled

           [Forwarded from the CSLI bboard by Laws@SRI-AI.]

David McAllester's talk has been rescheduled in both time and space (in
part to avoid conflict with a visit to PARC by the King of Sweden!); I
hope this makes it easier for people to attend.  It will now take place
at 3:30 on Monday in room 3312, instead of at 11:00.

        Title: "MATHEMATICAL ONTOLOGY"

        Speaker: David McAllester (M.I.T.)
        When: Monday March 12th at 3:30 p.m.
        Where: Xerox PARC Executive Conference Room, Room 3312
                (non-Xerox people should come a few moments early,
                 so that they can be escorted to the conference room)

------------------------------

Date: 09 Mar 84  0134 PST
From: Carolyn Talcott <CLT@SU-AI.ARPA>
Subject: Incompleteness in Modal Logic

           [Forwarded from the CSLI bboard by Laws@SRI-AI.]

Subject: SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
SPEAKER: Johan van Benthem, University of Groningen

TITLE:  "From Completeness Results to Incompleteness
         Results in Modal Logic"

TIME:   Wednesday, Mar. 14, 4:15-5:30 PM
PLACE:  Stanford Mathematics Dept. Room  383-N


  For a long time the main activity in intensional logic
consisted in proving completeness theorems, matching some
logic with some modal class.  In the early seventies,
however, various incompleteness phenomena were discovered -
i.e., such a match is not always possible.  By now, we know that
the latter phenomenon is the rule rather than the exception,
and the issue of the `semantic power' of the possible worlds
approach has become a rather complex and intriguing one.

  In this talk I will give a survey of the main trends in the
above area, concluding with some open questions and partial
answers.  In particular, a new type of incompleteness theorem
will be presented, showing that a certain tense logic defies
semantic modelling even when both modal class and truth
definition are allowed to vary.

------------------------------

Date: 8 Mar 84 12:55:47 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: Thinking About Graph Theory

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


             III Seminar on AI and Mathematical Reasoning

          Title:    Thinking About Graph Theory
          Speaker:  Susan Epstein
          Date:     Tuesday, March 13, 1984, 1:30-2:30 PM
          Location: Hill Center, Seventh floor lounge


  Dr. Susan Epstein, a recent graduate of our department, will give an informal
talk based on her thesis work.  Here is her abstract:

       A major challenge in artificial intelligence is to provide computers
    with  mathematical  knowledge  in  a format which supports mathematical
    reasoning.  A recursive formulation is described as the foundation of a
    knowledge representation  for  graph  theory.    Benefits  include  the
    automatic  construction  of  examples and related algorithms, hierarchy
    detection, creation of new properties, conjecture and theorem proving.

------------------------------

End of AIList Digest
********************

∂12-Mar-84  1023	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #30
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Mar 84  10:22:51 PST
Date: Sun 11 Mar 1984 23:28-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #30
To: AIList@SRI-AI


AIList Digest            Monday, 12 Mar 1984       Volume 2 : Issue 30

Today's Topics:
  AI Tools - Production Systems Tools Request,
  Documentation Tools - Manual Generators,
  Mathematics - Plane vs. Sphere,
  Waveform Analysis - ECG Testing,
  Humor - Connectionist Dog Modeling & Tail Recursion
----------------------------------------------------------------------

Date: 8 Mar 84 14:01:41-PST (Thu)
From: decvax!ittvax!wxlvax!adele @ Ucb-Vax
Subject: Production systems tools
Article-I.D.: wxlvax.248

I'm interested in finding out about tools for doing production
systems work. Does anyone know of any such tools that exist (for example,
but not limited to, syntax directed editors, rule maintenance aids, run time
environments, etc.)?

In the best of all possible worlds, what kinds of tools would you like to
see? I'd appreciate any suggestions, advice, gripes, whatever from people
who've used production systems.

Thanks much!

        Adele Howe

USMail: ITT-ATC                   Tele: (203) 929-7341 Ext.976
        1 Research Dr.            UUCP: decvax!ittvax!wxlvax!adele
        Shelton, CT. 06464

------------------------------

Date: 8 Mar 84 20:02:23-PST (Thu)
From: hplabs!zehntel!dual!fortune!rpw3 @ Ucb-Vax
Subject: Re: Documentation tools - (nf)
Article-I.D.: fortune.2722

Versions of DEC-10 RUNOFF later than about 1977 had a feature called
the "select" character set, which was a hook to the commenting
conventions of your favorite programming lanuages so that RUNOFF input
could be buried in comments in the code. RUNOFF knew enough to look at
the extension of the source file and set the "select" set from that to
the normal defaults. Typically, <comment-char><"+"> turned stuff on,
and <comment-char><"-"> turned it off.

By using the equivalent of "-ms" displays (.DS/.DE) (which I have
forgotten the RUNOFF version of), you could actually include selected
pieces of the code in the document.

It really helped if the language had a "comment through end of line"
character, though you can make do (as in "C") by using some other
character at the front of each line of a multi-line comment.
An example in "C", written as if nroff knew about this feature and
had been told that the "select" char was "*":

        /*+
         *.SH
         *Critical Algorithms
         *.LP
         *The Right_One macro requires two's-complement arithmetic,
         *as it uses the property of the rightmost "one" remaining
         *invariant under negation:
         *.DS B
         *      Right_One(X) = X AND (-X)
         *.DE
         *where "-X" is two's-complement negation (unary minus) and
         *"AND" is a full-word bit-wise logical operator.
         *-
         */
        #define Right_One(x) ((x)&(-(x)))
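A from-memory sketch of the select-character extraction mechanism described above, assuming "*" as the select character (function and variable names invented; this is not DEC's implementation):

```python
# A toy version of the RUNOFF "select character" convention:
# "<select>+" at the end of a comment opener turns extraction on,
# "<select>-" turns it off, and in between one leading select
# character is stripped from each documentation line.
def extract_doc(source_lines, select="*"):
    out, copying = [], False
    for line in source_lines:
        stripped = line.strip()
        if stripped.endswith(select + "+"):      # e.g. "/*+"
            copying = True
        elif stripped.startswith(select + "-"):  # e.g. "*-"
            copying = False
        elif copying:
            text = line.lstrip()
            if text.startswith(select):
                text = text[1:]
            out.append(text)
    return out

sample = ["/*+", " *.SH", " *Critical Algorithms", " *-", " */"]
print(extract_doc(sample))   # ['.SH', 'Critical Algorithms']
```

Run over the comment block above, such a filter yields exactly the RUNOFF input, leaving the code untouched.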

This turned out to be very useful in keeping the documentation up-to-date
with the code. In addition, RUNOFF had a /VARIANT:xyz option that allowed
you to have ".IF xyz", etc., in your document, so that one file could
contain the "man" page (.HLP file), the documentation (.DOC), and the
program logic manual (.PLM). You specified the variant you wanted when
you ran it off. RUNOFF itself was the classic example: the source contained
all of the end-user documentation (a bit extreme, I admit!).
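As an aside, the two's-complement property invoked in the sample comment is the standard rightmost-one identity, X AND (-X); a quick Python check against a naive bit scan (names invented):

```python
# The rightmost-one trick: in two's complement, X and -X share
# their lowest set bit (negation flips everything above it), so
# X AND (-X) isolates exactly that bit.
def right_one(x: int) -> int:
    return x & -x

def right_one_naive(x: int) -> int:
    bit = 1
    while not (x & bit):
        bit <<= 1
    return bit

# cross-check the identity against the direct scan
for x in range(1, 1024):
    assert right_one(x) == right_one_naive(x)

print(right_one(0b0110))   # lowest set bit of 6 -> 2
```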

Rob Warnock

UUCP:   {sri-unix,amd70,hpda,harpo,ihnp4,allegra}!fortune!rpw3
DDD:    (415)595-8444
USPS:   Fortune Systems Corp, 101 Twin Dolphin Drive, Redwood City, CA 94065

------------------------------

Date: 11 Mar 1984  23:28 EST (Sun)
From: "Robert P. Krajewski" <RPK%MIT-OZ@MIT-MC.ARPA>
Subject: Manual generators: Lisp systems

One system that allows manual sources to be interspersed with code is LSB, a
system for maintaining large Lisp systems.  (It contains a system definition
facility and tools for grappling with getting code to run in various Lisp
dialects including Maclisp, NIL, and Lisp Machine Lisp.)  LSB will
``compile'' manuals for either TeX or Bolio (a Lisp document processor that
looks like the *roff family).

My wish list for a large system maintenance program would include the
generation of manuals, reference cards, and online documents of various
formats from the same source.  Are there any other packages for other
languages that will do this (or at least the subset that LSB offers)?

``Bob''

------------------------------

Date: 9 Mar 84 6:25:57-PST (Fri)
From: ihnp4!houxm!hou2g!stekas @ Ucb-Vax
Subject: Plane = Sphere ?
Article-I.D.: hou2g.194

>                                                       In fact, the
>plane and the sphere are topologically equivalent (the plane is a sphere
>of infinite radius) ...

This statement has been made so frequently that I think it's time someone
took exception.  A plane and a sphere are NOT topologically equivalent: a
sphere has an additional point.  That's why plane-like coordinate systems
mapped to a sphere always have a point where the coordinates are undefined.
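The point can be made precise with stereographic projection: it is the sphere with the north pole N deleted, not the whole sphere, that is homeomorphic to the plane, and N is exactly where plane-like coordinates must fail:

```latex
\sigma \colon S^2 \setminus \{N\} \to \mathbb{R}^2, \qquad
\sigma(x, y, z) = \left( \frac{x}{1-z},\ \frac{y}{1-z} \right),
\qquad N = (0, 0, 1)
```

Equivalently, the sphere is the one-point compactification of the plane, which is consistent with the observation that spherical and planar maps need the same number of colors.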

In any case, spherical and planar maps can both be colored with the same
number of colors.

                                                  Jim

------------------------------

Date: 9 Mar 84 7:47:10-PST (Fri)
From: harpo!ulysses!unc!mcnc!ecsvax!jwb @ Ucb-Vax
Subject: Re: computer ECG, FDA testing of AI programs
Article-I.D.: ecsvax.2140

An extract of a previous submission by me mentioned the overreading of an
ECG interpretation by a cardiologist.  What I meant (and what was not clear)
is that the cardiologist is looking at the raw ECG, not the output of the
computer (although a lot of preprocessing is often done which is hidden
from the cardiologist--this is another different problem--at least it looks
like what you would get from a standard ECG machine).  On a related issue,
medical decisions regarding the treatment of an individual patient *have* to
be made by the local physician treating the patient (at least that is long
standing medical practice and opinion).  The overreading offered by the
remote services is a look at the reconstructed input ECG by a Board
Certified Cardiologist and is intended to be analogous to a "consultation"
by a more experienced and/or specialized physician.  The name of the service
is Telemed, not Telenet as I incorrectly typed.  Committees of the American
Heart Association and the American College of Cardiology are attempting to
set standards for computer (and human) interpretation of ECG's.  A snag is
that different preprocessing of the ECG's by different manufacturers makes
it rather uneconomical to acquire a large number of "standard" ECG's in
machine readable form.  I think the FDA is looking at all this and I think
under current law they can step in at their whim.  So far they seem to be
waiting for the above groups to present standards (since they don't seem to
have the resources to even start to develop them within the FDA).

        Jack Buchanan
        Cardiology and Biomedical Engineering
        UNC-Chapel Hill
        Chapel Hill NC
        decvax!mcnc!ecsvax!jwb

------------------------------

Date: 9 Mar 84 7:43:40-PST (Fri)
From: hplabs!hao!seismo!rochester!gary @ Ucb-Vax
Subject: Re: Connectionist Dog Modeling
Article-I.D.: rocheste.5532

  From seismo!harpo!decvax!decwrl!rhea!orphan!benson Fri Mar  2 20:24:18 1984
  Date: Thursday,  1 Mar 1984 13:45:43-PST
  From: seismo!decvax!decwrl!rhea!orphan!benson
  Subject: Re: Seminar Announcement


                                                             29-Feb-1984


     Garrison W. Cottrell
     University of Cottage Street
     55 Cottage Street
     Rochester, New York 14608



     Dear Mr. Cottrell:

     Although  I  was  unable  to  attend  your  recent  seminar,   "New
     Directions  in  Connectionist  Dog  Modeling,"  I  am  compelled to
     comment on your work as presented in your  published  works,  along
     with the new ideas briefly discussed in the seminar announcement.

     Having read your "Dog:  A Canine  Architecture"  in  late  1981,  I
     approached  "Toward  Connectionist Dog Modeling" the following year
     with cautious optimism.  The former work encouraged me that perhaps
     a consistent dog model was, in fact, obtainable;  at the same time,
     it caused me to wonder why it was desirable.   Nonetheless,  "Toward
     Connectionist  Dog  Modeling"  proved  to  be  a  landmark  in this
     emerging science, and my resulting enthusiasm quieted those nagging
     suggestions of futility.

     You may not be familiar with my work in  the  field  of  artificial
     ignorance,  which,  I  would  like to suggest, shares several goals
     with your own work, with different emphasis.  "Artificial Ignorance
     -  An  Achievable  Goal?" (Benson 79) was the first of my published
     papers on the subject.  Briefly, it promoted the idea that although
     creation  of  an  "artificially  intelligent"  machine  is a worthy
     scientific goal, design  and  implementation  of  an  "artificially
     ignorant"   one  is  much  more  sensible.   It  presented  several
     arguments  supporting  the  notion  that,  compared  to  artificial
     intelligence,  artificial  ignorance  is  easily achievable, and is
     therefore the logical first step.

     As a demonstration of the power of  artificial  ignorance  (AI),  I
     spent  the latter half of 1979 producing CHESS1, a chess system for
     the VAX-11/780.  CHESS1 was written primarily in LISP,  a  language
     of   my   own   invention   (Language   for   Ignorance  Simulation
     Programming).  In a resounding victory, CHESS1  lost  to  even  the
     most  ignorant  human  players, being unable to distinguish between
     the pieces.  CHESS2, a more sophisticated implementation  completed
     in  April of 1980, lost just as effectively by moving the pieces in
     a clockwise progression around the edge of the board.

     Ignored by overly ambitious, grant-hungry  researchers,  artificial
     ignorance  seemed to become my own personal discipline.  After only
     three issues, the fledgling SIGIGN newsletter was discontinued, and
     the special interest group it served was disbanded.



     Undaunted, I published a series of three papers in 1980.  The first
     two  described several techniques I had developed toward simulating
     ignorant behavior ("Misunderstanding Human  Speech",  and  "Pattern
     Misidentification",  Benson  80).   The  third  presented  a simple
     conversion method for producing artificially ignorant programs from
     artificially  intelligent  ones,  using  a  heuristic bug insertion
     algorithm ("Artificial Brain Damage", Benson 80).

     Despite these technical triumphs,  interest  in  AI  seemed  to  be
     dwindling.   By  the  spring  of  1981,  I, too, had lost interest,
     convinced that  my  AI  research  had  been  little  more  than  an
     interesting intellectual exercise.

     It is for this reason that your dog modeling thesis  so  thoroughly
     captured  my  interest.   Surely  the  phrases  (to quote from your
     announcement) "impoverished phoneme," "decimated world  view,"  and
     "no  brain"  imply  "ignorance." And, if I may paraphrase from your
     original treatise, the generic dog is essentially the equivalent of
     an intellectually stunted human who has been forced to bear fur and
     eat off the floor.

     Clearly dog modeling and AI have  much  in  common.   To  prove  my
     point, I have simulated the Wagging Response in a LISP application,
     and am working toward  a  procedural  representation  of  the  Tail
     Chasing Activity.  The latter is a classic demonstration of genuine
     ignorance,  as  well  as  a  natural   application   of   recursive
     programming techniques.

     I welcome any suggestions you have on these experiments,  and  look
     forward to the continued success of your dog modeling research.



                                        Sincerely,

                                           Tom Benson

------------------------------

Date: 9 Mar 84 7:45:25-PST (Fri)
From: hplabs!hao!seismo!rochester!gary @ Ucb-Vax
Subject: Re: Tail Recursion is Iterative
Article-I.D.: rocheste.5535

  Date: Thursday,  8 Mar 1984 18:57:59-PST
  From: decvax!decwrl!rhea!orphan!benson
  Subject: Re: Tail recursion.  Please forward to Mr. Sloan.


Dear Mr. Cottrell:

  I do realize that in most cases (i.e., everyday programming), tail recursion
can be reduced to iteration. However, in my study of this aspect of dog
modeling, I found the underlying MOTIVATION to be recursive in nature. Clearly
this is not a concept which can be applied to programming outside the AI realm.
(And when I say "AI", I of course mean "AI", not "AI"). My canine subject did
not set out to chase his tail for i equals 1 to n. Nor did he intend to chase
it until some condition was met; the obvious condition being "has the tail
been caught?" In fact, frequent experiments showed that actual tail capture
did not necessarily end the cycle, and it often was not achieved at all before
cessation of the chasing activity. No, a more realistic model is one in which
a bored or confused dog initiates an attempt to catch his tail. During this
process, the previously unseen tail falls into view as the head is turned.
The dog's suspicion is aroused; is this some enemy preparing to strike? This
possibility causes an attempt to catch the tail. This causes the tail to fall
into view....   and so on. The recursion may be terminated either by some
interrupt generated by an unrelated process in the dog's brain, or by forced
intervention of the dog's master. The latter is dangerous, and should be
scrupulously avoided, because it does not allow the dog's natural unwinding
mechanism to be invoked. Thus, the dog may carry unnecessary Tail Chasing
Activity procedure frames around in his brain for years, like a time bomb
waiting to go off. This, indeed, is a subject deserving further study.
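The everyday-programming case Benson concedes can be shown in a few lines; a minimal Python sketch (the function names, and the dog, are invented):

```python
# Tail recursion reduced to iteration: the recursive call is the last
# thing the function does, so it can be replaced by a loop that reuses
# a single stack frame.  (A toy model; no canine motivation is
# represented.)
def chase_recursive(energy: int, laps: int = 0) -> int:
    if energy == 0:                      # an "interrupt" ends the chase
        return laps
    return chase_recursive(energy - 1, laps + 1)   # tail call

def chase_iterative(energy: int) -> int:
    laps = 0
    while energy:                        # same computation, constant stack
        energy -= 1
        laps += 1
    return laps

assert chase_recursive(100) == chase_iterative(100) == 100
```

The iterative form never accumulates "procedure frames", which is precisely the property the recursive formulation of Tail Chasing Activity lacks.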
  In response to your other question: you are welcome to post my AI reports
wherever it seems appropriate.


                                        Tom Benson

------------------------------

Date: 9 Mar 84 20:48:42-PST (Fri)
From: hplabs!hao!seismo!rochester!gary @ Ucb-Vax
Subject: sloan's reply to benson's reply to sloan's reply to benson's reply
Article-I.D.: rocheste.5551

  Date: Fri, 9 Mar 1984  14:28 EST
  From: SLOAN%MIT-OZ@MIT-MC.ARPA
  Subject: tail recursion: forwarded reply

Gary-
 Of course, Mr. Benson knows that ALL time bombs are, by definition,
waiting to go off.
 As to the essentially recursive nature of TCA, I simply note that
this view requires a stack of dogs; in my experience stacks of dogs
engage in an entirely different form of behavior, which, under the
proper parity conditions, is truly recursive.
-Ken


[If anyone is Really Tired of This, I will stop sending this rather
convoluted conversation between two friends of mine who don't know
each other but apparently should -gwc]

------------------------------

End of AIList Digest
********************

∂13-Jan-85  1624	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #31
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Jan 85  16:24:30 PST
Mail-From: LAWS created at 13-Mar-84 16:35:02
Date: Tue 13 Mar 1984 16:30-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #31
To: AIList@SRI-AI
ReSent-date: Sun 13 Jan 85 16:23:50-PST
ReSent-From: Ken Laws <Laws@SRI-AI.ARPA>
ReSent-To: YM@SU-AI.ARPA


AIList Digest           Wednesday, 14 Mar 1984     Volume 2 : Issue 31

Today's Topics:
  Humor - Who is Tom Benson?,
  Linguistics - And as a Non-Logical Conjunction,
  Brain Theory - Parallelism,
  Seminars - Procedural Learning (Boston) &
    Theorem Proving in the Stanford Pascal Verifier,
  Conference - 4th Conf. on FST&TCS,
  Review - Western Culture in the Computer Age
----------------------------------------------------------------------

Date: Mon 12 Mar 84 18:48:05-PST
From: ROBINSON@SRI-AI.ARPA
Subject: Tom Benson

Who is this Tom Benson and what is he doing out of
captivity?

------------------------------

Date: 12 Mar 84 11:32:17-PST (Mon)
From: hplabs!hao!seismo!rochester!stuart @ Ucb-Vax
Subject: And as a non-logical conjunction - Request for pointers
Article-I.D.: rocheste.5580

From: Stuart Friedberg  <stuart>
I am looking for pointers into the linguistic and natural language
processing literature concerning the use of "and" in English as a
non-logical conjunction. That is, the use of "and" often implies
temporal sequence and/or causality. There is also a use introducing
a verb complement.
        "Sheila took the ball and ran with it."
        "The lights were off and I couldn't see."
        "I will try and find the missing book."

I understand that treatment of conjunction and ellipsis is difficult.
Pointers to books, articles, theses, diatribes, etc. that (have sections
that) deal with "and" in this extra-logical sense will be *much* more
useful than pointers to more general treatments of conjunction and
ellipsis.

Useful things to know that I don't:
        What are (all?) the senses in which "and" may be used?
        Do all these interpretations apply to clause conjunction
                only? (i.e., not to noun conjunction, adverb
                conjunction, etc.)
        What knowledge is needed/useful to determine the sense of
                "and" in a given English sentence? (Given a knowledge
                of all the senses of "and", how do we eliminate some
                of them in a particular context?)
        Is it possible to expand Ross's constraints in a reasonable
                way to handle this kind of conjunction? (Constraints
                on variables in syntax, thesis, MIT, 1967, etc.)

I have a few pointers already, but my only real linguistic source is
several years old. I assume that additional work has been done from
both linguistic and AI points of view. I am starting from:

1) Susan F. Schmerling, "Asymmetric Conjunction and Rules of Conversation",
in Syntax and Semantics, Vol. 3 (Speech Acts), Cole and Morgan (eds.),
Academic Press, New York, 1975

2) Stan C. Kwasny, "Treatment of Ungrammatical and Extra-grammatical
Phenomena in Natural Language Understanding Systems", Indiana University
Linguistic Club, Bloomington, IN, 1980

                                Stu Friedberg
                        {seismo, allegra}!rochester!stuart      UUCP
                                stuart@rochester                ARPA

                                Dept. Computer Science          MAIL
                                University of Rochester
                                Rochester, NY 14627

------------------------------

Date: Tue, 13 Mar 84 18:05 EST
From: Ives@MIT-MULTICS.ARPA
Subject: Brain Theory - Parallelism


A strikingly clear picture of brain parallelism at the gross anatomical
level was presented during a lecture at MIT on the architecture of the
cerebral cortex by a neuroanatomist (Dr.  Deepak Pandya, Bedford
Veterans Administration Hospital, Bedford, MA).

Almost a hundred years ago, dye studies showed that the cerebral cortex
is not a random mass of neurons, and it was mapped into a few dozen
areas, differentiated by microstructure.  Later, it was shown that
lesions in a certain area always produced the same behavioral
deficiencies.  Now, they have mapped out the interconnections between
the areas.  The map looks like a plate of spaghetti but, when
transformed into a schematic, reveals simplicity and regularity.

Each half of the brain includes six sets of areas.  Each set has a
somatic area, a visual area and an auditory area.  Each area in a set
connects to the other two, forming a triangle.  The six sets form a
stack because each area is connected to the area of the same kind in the
next set.  The eighteen areas schematized by this simple triangular
stack include most of the tissue in a cerebral cortex.
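The triangular-stack schematic described above is concrete enough to write down; a small Python sketch (the set and area labels are illustrative, not anatomical nomenclature) that builds the connection graph and confirms the counts:

```python
# Build the schematic: six sets, each with somatic/visual/auditory
# areas; the three areas of a set form a triangle, and each area also
# links to the same-kind area in the next set.  (Illustrative model
# of the description only.)
KINDS = ("somatic", "visual", "auditory")
N_SETS = 6

areas = [(level, kind) for level in range(N_SETS) for kind in KINDS]
edges = set()

for level in range(N_SETS):
    # triangle within a set
    for i in range(3):
        for j in range(i + 1, 3):
            edges.add(((level, KINDS[i]), (level, KINDS[j])))
    # vertical link to the area of the same kind in the next set
    if level + 1 < N_SETS:
        for kind in KINDS:
            edges.add(((level, kind), (level + 1, kind)))

print(len(areas), len(edges))   # 18 areas, 33 connections
```

Eighteen areas and thirty-three connections: eighteen triangle edges within the six sets plus fifteen vertical links between adjacent sets.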

If I remember correctly, all mammals have this architecture.  It was
surmised that one set evolved first and was replicated six times,
because the neuronal microstructure varies gradually with increasing
level.  He also suggested that higher levels might process higher levels
of abstraction.

-- Jeffrey D.  Ives

------------------------------

Date: 12 Mar 1984  13:39 EST (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Procedural Learning (Boston)

            [Forward from the MIT bboard by SASW@MIT-MC.]

         Wednesday, March 14     4:00pm   8th floor playroom


          Acquisition of Procedural Knowledge from Examples

                            P. M. Andreae

  I will describe NODDY - a system that acquires procedures from
examples.  NODDY is a variation of concept learning in which the
concepts to be learned are procedures in the form of simple robot
programs.  The procedures are acquired by generalising examples
obtained by leading a robot through a sequence of steps.  Three
distinct types of generalisation are involved: structure
generalisation (eg. loops and branches), event generalisation (eg. the
branching conditions), and function induction.
  I will also discuss two principles that arise out
of, and are illustrated by, NODDY.  I claim that these principles have
application, not only to procedure acquisition, but also to any system
that does partial matching and/or generalisation of any kind.

------------------------------

Date: Mon 12 Mar 84 19:16:47-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Theorem Proving in the Stanford Pascal Verifier

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

The Automatic Inference Seminar will meet on Wednesday March 14th in MJH 352
(note change of room from 301) at 1:30 p.m.
(This is tax-filing season;  I'm getting slightly too many groanworthy remarks
about "automatic deduction", hence the name change).

Speaker:  Richard Treitel (oh no, not again)

Subject:  Theorem Proving in the Stanford Pascal Verifier

The Stanford Pascal Verifier was developed in the late 1970's for research in
program verification.   Its deductive component, designed mainly by Greg Nelson
and Derek Oppen, has some features not found in many other natural deduction
systems, including a powerful method for dealing with equalities, a general
framework for combining the results of decision procedures for fragments of the
problem domain, and a control structure based on an unusual "normal form" for
expressions.   I will attempt to explain these and relate them both to other
provers and to post-Oppen work on the same technology.
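The equality handling mentioned above rests, at its core, on maintaining equivalence classes of terms; a minimal union-find sketch of just that part (a generic illustration, not the Verifier's actual prover):

```python
# Minimal union-find over terms: asserting equalities merges
# equivalence classes, and queries follow parent pointers (with path
# halving) to a canonical representative.
parent = {}

def find(t):
    parent.setdefault(t, t)
    while parent[t] != t:
        parent[t] = parent[parent[t]]    # path halving
        t = parent[t]
    return t

def union(a, b):                          # assert a = b
    parent[find(a)] = find(b)

def equal(a, b):
    return find(a) == find(b)

union("x", "y")
union("y", "z")
print(equal("x", "z"))   # True, by transitivity
```

A full Nelson-Oppen style prover extends this with congruence closure (f(a) = f(b) whenever a = b) and an interface for exchanging deduced equalities among the decision procedures.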

------------------------------

Date: 9 Mar 84 9:52:21-PST (Fri)
From: harpo!ulysses!burl!clyde!akgua!psuvax!narayana @ Ucb-Vax
Subject: Call for 4th Conf. FST&TCS Bangalore India Dec 13-15.
Article-I.D.: psuvax.815

Subject: Call for papers

              4th Conference on Foundations of Software
              Engineering and Theoretical Computer Science

              Bangalore, INDIA,  DECEMBER 13-15, 1984

Sponsor: Tata Institute of Fundamental Research, Bombay, India.

Conference advisory committee:

A.Chandra(IBM), B.Chandrasekharan(Ohio state), S.Crespi Reghizzi(Milan)
D.Gries(Cornell), A.Joshi(Penn), U.Montanari(Pisa), J.H.Morris(CMU),
A.Nakamura(Hiroshima), R.Narasimhan(TIFR), J.Nivergelt(ETH), M.Nivat(Paris)
R.Parikh(New York), S.R.Kosaraju(Johns Hopkins), B.Reusch(Dortmund),
R.Sethi(Bell labs), S.Sahni(Minnesota), P.S.Tiagarajan(Aarhus),
W.A.Wulf(Tartan labs).

Papers are invited in the following areas:

       Programming languages and systems
       Program correctness and proof methodologies
       Formal semantics and specifications
       Theory of computation
       Formal languages and automata
       Algorithms and complexity
       Data bases
       Distributed computing
       Computing practice

Papers will be REFEREED and a final selection will be made by the programme
committee.

Authors should send four copies of each paper to

       Chairman, FST&TCS Programme Committee
       Tata Institute of Fundamental Research
       Homi Bhabha Road, BOMBAY, 400 005, India

Due date for receiving full papers: MAY 31, 1984.

Authors will be notified of acceptance/rejection by: JULY 31,1984

Camera ready papers must be submitted by: SEP 15,1984

PROCEEDINGS WILL BE PUBLISHED.  For further details contact the above address.

Programme Committee: M.Joseph(TIFR), S.N.Maheswari(IIT), S.L.Mahindiratta(IIT),
                    K.V.Nori(Tata RDDC), S.V.Rangaswamy(IISC), R.K.Shyamasundar
                    (TIFR), R.Siromani(Madras Christian College).

------------------------------

Date: 12 Mar 84  2053 PST
From: Frank Yellin <FY@SU-AI.ARPA>
Subject: Western Culture in the Computer Age

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

n066  1529  12 Mar 84
BC-BOOK-REVIEW Undated
(Culture)
By CHRISTOPHER LEHMANN-HAUPT
c.1984 N.Y. Times News Service
    TURING'S MAN. Western Culture in the Computer Age. By J. David
Bolter. 264 pages. University of North Carolina Press. Hard-cover,
$19.95; paper, $8.95.
    J. David Bolter, the author of ''Turing's Man: Western Culture in
the Computer Age,'' is both a classicist who teaches at the
University of North Carolina and a former visiting fellow in computer
science at Yale University. This unusual combination of talents may
not qualify him absolutely to offer a humane view of the computer
age, or what he refers to as the age of Turing's man, after Alan M.
Turing, the English mathematician and logician who offered early
theoretical descriptions of both the computer and advanced artificial
intelligence.
    But his two fields of knowledge certainly provide Bolter with an
unusual perspective on contemporary developments that many observers
fear are about to usher in an age of heartless quantification, if not
the final stages of Orwellian totalitarianism. In Bolter's view,
every important era of Western civilization has had what he calls its
''defining technology'' which ''develops links, metaphorical or
otherwise, with a culture's science, philosophy, or literature; it is
always available to serve as a metaphor, example, model, or symbol.''
    To the ancient Greeks, according to Bolter, the dominant
technological metaphor was the drop spindle, a device for twisting
yarn into thread. Such a metaphor implied technology as a controlled
application of power. To Western Europe after the Middle Ages, the
analogues to the spindle were first, the weight-driven clock, a
triumph of mechanical technology, and later, the steam engine, a
climax of the dynamic. In Bolter's subtly developed argument, the
computer - obviously enough the present age's defining metaphor - is
an outgrowth of both the clock and the steam engine. Yet,
paradoxically, the computer also represents a throwback.
    Everything follows from this. In a series of closely reasoned
chapters on the way in which the computer has redefined our notions
of space, time, memory, logic, language and creativity, Bolter
reviews a subtle but recurring pattern in which the computer
simultaneously climaxes Western technology and returns us to ancient
Greece. He concludes that if the ancient ideals were balance,
proportion and handicraft (the use of the spindle), and the Western
European one was the Faustian quest for power through knowledge
(understanding a clockwork universe to attain the dynamism of the
steam engine), then Turing's man combines the two.
    ''In his own way, computer man retains and even extends the Faustian
tendency to analyze,'' Bolter concludes. ''Yet the goal of Faustian
analysis was to understand, to 'get to the bottom' of a problem,''
whereas ''Turing's man analyzes not primarily to understand but to
act.''
    He continues: ''For Turing's man, knowledge is a process, a skill,''
like the ancient arts of spinning or throwing a pot. ''A man or a
computer knows something only if he or it can produce the right
answer when asked the right question.'' Faustian depth ''adds nothing
to the program's operational success.''
    Now in portraying Turing's man, Bolter may seem to be overburdening
a few simple metaphors. Yet his argument is developed with remarkable
concreteness. Indeed, if his book has any fault, it lies in the
extent to which he has detailed the slightly repetitious and
eventually predictable pattern of argument described above.
    Yet what is far more important about ''Turing's Man'' is its success
in bridging the gap between the sciences and the humanities. I can
only guess at how much it will inform the computer technologist about
philosophy and art, but I can vouch for how much it has to say to the
nonspecialist about how computers work. The inaccessibility of the
computer's inner functioning may well be a key to the author's case
that Turing's man is returning to the ancient Greeks' satisfaction in
the surface of things, but after reading Bolter's book, this reader
found the computer far less mysterious. Not incidentally, the book
makes us understand why computers aren't really all that good at
doing mathematics (they can't get a grip on the notion of infinity);
and it far surpasses Andrew Hodges's recent biography of Alan Turing
in explaining Turing's Game for testing artificial intelligence.
    But most provocative about this study is what it has to say about
the political implications of the computer age. Will Turing's man
prove the instrument of Orwell's Big Brother, as so many observers
are inclined to fear? Very likely not, says Bolter:
    ''Lacking the intensity of the mechanical-dynamic age, the computer
age may in fact not produce individuals capable of great good or
evil. Turing's man is not a possessed soul, as Faustian man so often
was. He does not hold himself and his world in such deadly earnest;
he does not speak of 'destiny' but rather of 'options.' And if the
computer age does not produce a Michelangelo and a Goethe, it is
perhaps less likely to produce a Hitler or even a Napoleon. The
totalitarian figures were men who could focus the Faustian commitment
of will for their ends. What if the will is lacking? The premise of
Orwell's '1984' was the marriage of totalitarian purpose with modern
technology. But the most modern technology, computer technology, may
well be incompatible with the totalitarian monster, at least in its
classic form.''
    Indeed, according to Bolter, Turing's man may be more inclined to
anarchy than to totalitarianism. This may be whistling past the
graveyard. But in Bolter's stimulating analysis, it also makes a kind
of homely sense.

------------------------------

End of AIList Digest
********************

∂16-Mar-84  1247	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #32
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 16 Mar 84  12:46:14 PST
Date: Fri 16 Mar 1984 10:29-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #32
To: AIList@SRI-AI


AIList Digest            Friday, 16 Mar 1984       Volume 2 : Issue 32

Today's Topics:
  AI Books - Request for Sources & New Series Announcement,
  Fuzzy Set Theory - Request for References,
  Bindings - Request for Tong Address,
  Humor - Man-LISP Interface,
  AI Tools - Review of IQLISP for IBM PC,
  Linguistics - Nonlogical "And",
  Waveform Analysis - ECG Interpretation Liability,
  Alert - High Technology Articles,
  Seminars - Knowledge-Based Documentation Systems &
    Sorting Networks & Expert Systems for Fault Diagnosis
----------------------------------------------------------------------

Date: 14 Mar 84 0118 EST
From: Dave.Touretzky@CMU-CS-A
Subject: who sells AI books?

I am looking for places that sell AI books, other than publishers.  Do
you know of any book distributors that specialize in AI titles?  How about
book clubs featuring AI, cog. sci., robotics, and the like?  Send names
and addresses to Touretzky@CMUA.  I'll make the listing available online
if there's any demand for it.

[The best source is probably Synapse Books.  Does anyone have the address?

The Library of Computer and Information Science is a book club that often
offers AI books, and sometimes offers vision and popularized cognitive science
or robotics books.  Right now you can get a great deal on the Handbook of AI.
See Scientific American or the latest IEEE Computer for details, or do a
current member a favor by letting him sign you up.  -- KIL]

------------------------------

Date: Wed 14 Mar 84 16:41:17-PST
From: DKANERVA@SRI-AI.ARPA
Subject: New Book Series

         [Forwarded from the CSLI newsletter by Laws@SRI-AI.]

    MANUSCRIPTS SOLICITED FOR NEW MIT PRESS/BRADFORD BOOKS SERIES

     MIT Press/Bradford  Books has  announced  a new  series  entitled
"Computational Models of Cognition and Perception" edited by Jerome A.
Feldman, Patrick J. Hayes, and David E. Rumelhart.

     The  series  will  include  state-of-the-art  reference works and
monographs, as well as upper  level texts,  on computational models in
such  subject  domains as  knowledge representation,  natural language
understanding, problem  solving,  learning and  generalization,  motor
control, speech perception and production, and all areas of vision.

     The series will span  the full range  of computational models  in
cognition and  perceptual research  and teaching,  including  detailed
neural models, models based on symbol-manipulation languages, and
models employing techniques of formal logic. Especially welcome are works
treating experimentally  testable  computational  models  of  specific
cognitive and  perceptual  functions; basic  computational  questions,
particularly relationships between  different classes  of models;  and
representational questions linking computation and semantics to
particular problem domains.

     Manuscript  proposals  should  be  submitted  to  one of  the  three
editors, or to Henry Bradford Stanton, Publisher, Bradford Books,  The
MIT Press,  28 Carleton  Street,  Cambridge, MA  02142  (617-253-5627).
However, we  welcome  your discussing  ideas  for books  and  software
programs and  packages  with  any  of the  members  of  the  Editorial
Advisory Board who may be your close colleagues:

        John Anderson                   Drew McDermott
        Horace Barlow                   Robert Moore
        Jon Barwise                     Allen Newell
        Emilio Bizzi                    Raymond Perrault
        John Seely Brown                Roger Schank
        Daniel Dennett                  Candy Sidner
        Geoffrey Hinton                 Shimon Ullman
        Stephen Kosslyn                 David Waltz
        Jay  McClelland                 Robert Wilensky
                                        Yorick Wilks

------------------------------

Date: 14 Mar 84 14:08:57 PST (Wednesday)
From: Conde.PA@PARC-MAXC.ARPA
Subject: AIList : Request for Fuzzy Set references

I would like to know if anyone has references to good introductory
books on the theory of fuzzy sets, as well as fuzzy databases.

Please reply to me or to the digest.

Thanks,
Daniel Conde

------------------------------

Date: 12 Mar 84 19:25:20-PST (Mon)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: Tong Colloquium on Knowledge-Directed Search
Article-I.D.: uiucdcs.6150

Could someone tell me where the speaker [Christopher Tong] is to be
contacted? I'd like to follow up on his work.


                                        Marcel Schoppers
                                        U of Illinois @ Urbana-Champaign

[The talk, on knowledge-aided circuit design, was given at Rutgers.
Does anyone have Tong's net or mail address? -- KIL]

------------------------------

Date: Tue 13 Mar 84 13:19:16-CST
From: Clive Dawson <CC.Clive@UTEXAS-20.ARPA>
Subject: Wait till he discovers the parenthesis key!

        [Forwarded from the UTEXAS-20 bboard by Laws@SRI-AI.]

[FYAmusement: The following item was found on SAIL's Bboard, contributed by
Ron Newman. --CBD]


The following letter to the editor was published in Softalk of March,
1984:

  I have come into possession recently of a program called Microlisp.  I
understand that it has been around for some time, so maybe someone out
there knows something about it.  I cannot get it to do anything but
print numbers I type in or print the word "nil".  How do I make it do
anything else?  Can you give me an example of something useful that I
might be able to do with it?

					[...]

------------------------------

Date: 9 Mar 84 8:51:58-PST (Fri)
From: decvax!genrad!wjh12!vaxine!chb @ Ucb-Vax
Subject: Review of IQLISP for IBM PC
Article-I.D.: vaxine.211

        A review of IQLisp (by Integral Quality, 1983).

                Compiled by Jeff Shrager
                    CMU Psychology
                      7/27/83


[Charlie has forwarded Jeff Shrager's review of IQLISP for the IBM PC.
This appeared in AIList in early August, so I will not reprint it here.
Readers who want the text can FTP file <AILIST>IQLISP.TXT on SRI-AI or
contact AIList-Request@SRI-AI.  -- KIL]


                                   Charlie Berg
                                   Automatix, Inc.
                                   ...allegra!linus!vaxine!chb

------------------------------

Date: Tue 13 Mar 84 20:12:39-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Use of "and"

My father, who is a law professor, was able to come up with an instance where a
contract contained the word "and" in a certain place, and the court interpreted
that "and" to mean what we computermen would have meant by "or".   Or maybe it
was the other way around;  I forget the details.
                                                        - Richard

------------------------------

Date: Wed, 14 Mar 84 09:26:36 EST
From: John McLean <mclean@NRL-CSS>
Subject: nonlogical "and"

  I think that the first treatment of the fact that "and" in English is
not the purely logical "and" of predicate calculus appeared in the philosophy
literature.  You might want to take a look at Strawson's PHILOSOPHICAL LOGIC
for classical arguments that the "and" of natural language is distinct from the
logician's "and" and Grice's "William James' Lectures" for a very influential
rebuttal in which he argues that the use of "and" in English can be modeled
by logical conjunction if we take into account "conversational implicature",
a concept Grice develops in the lectures.

  By the way, one of my favorite examples of nonlogical conjunction, which you
do not mention, is the statement made to someone about to eat a mushroom
growing in the ground: "You will not eat that and live."  This statement
is almost always correct from the truth-functional view of conjunction
even if the mushroom is harmless, since few people when issued the warning
will indulge their appetite.
                            Good luck,
                            John

------------------------------

Date: 13 Mar 84 9:27:44-PST (Tue)
From: harpo!ulysses!unc!mcnc!ecsvax!jwb @ Ucb-Vax
Subject: More computer ECG, Cardiologist
Article-I.D.: ecsvax.2153

With respect to the responsibility of a cardiologist to overread every
ECG, this is (or should be) uniformly done.  The problem addressed by
the dial up services is that in the middle of the night, in a small
town hospital, the cardiologist's reading may not come until the next
day.  Many communities do not have a cardiologist at all.  The
physician obtaining the ECG (who in an emergency room is typically NOT
a cardiologist) has an obligation to carefully examine the ECG
obtained.  There are two schools of thought with regard to sending
computer ECG information to a physician who may not be expert in
interpreting an ECG.  One is that any information is better than none,
and therefore the nonexpert physician should get the information.  The
other is that if there is a screw up, and the local physician cannot
be trusted to recognize this, the computer analysis can do significant
harm and should be withheld.  (The local physician will ALWAYS have his
own interpretation.)  Both approaches have their merit.  Our local
approach is to NOT send machine interpretations back to the Emergency
Room until a person with some expertise in reading ECG's has looked at
the tracing and at the computer generated interpretation.  In some
cases, this approach negates the major advantage of having the
computer in the first place.

[...]
Jack Buchanan
Cardiology and Biomedical Engineering
UNC-Chapel Hill
decvax!mcnc!ecsvax!jwb  {Usenet}

------------------------------

Date: Thu 15 Mar 84 22:46:08-CST
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: High Technology, Feb 84 has AI-relevant articles

               Summary of HIGH TECHNOLOGY, Feb 84
          ============================================

FEATURES
        BIOCHIPS: CAN MOLECULES COMPUTE?  The groundwork is being laid for
        21st-century computers based on carbon rather than silicon.  Molecular
        Switches. Soliton Switches and Logic. Bulk Molecular Devices. Analog
        Biochips. "Intelligent" Switches. Robot Vision. Fabrication. Protein
        Engineering. Development Strategy. written by Jonathan Tucker

        UNCOVERING HIDDEN FLAWS. Nondestructive tests spot trouble before it
        happens.  Computerized tomography.  6 techniques dominate.

        ENGLISH: THE NEWEST COMPUTER LANGUAGE. Natural language systems.
        Computational Linguistics. commercial applications. semantic grammars.
        Syntax, Semantics, vs. Pragmatics. Situation Semantics.

        BIOPOLYMERS CHALLENGE PETROCHEMICALS.  Oil-recovery agents, drug
        purification media, and plastics are promising applications.


OPINION
        Where defense can be cut
LETTERS
        Data Security; Helping kids learn; Retraining
UPDATE
        Graphics Analysis: converting a 3-D model into a finite element model
        Russians develop electromagnetic casting; licensed by Alcoa
        DNA sequence DB (sequences longer than 50 nucleotides): GenBank has 2700+ entries
                comprising over 2.1 million bases
        Optical memory units boost computer storage. ST with 4 gigabytes on a
                single 14 inch removable platter. 3 Mbyte/sec transfer rate
                costing $130,000.  Shugart offers 1Gbyte on 12" platter with
                5Mbyte/sec transfer costing $6,000 in quantities of 250
        In-mold metal plating of plastics cuts costs.
        Brain chemicals delivered on demand (an experimental method)
INSIGHTS
        Factory Automation Survival Kit
MILITARY/AEROSPACE
        Mosaic arrays boost infrared surveillance
CONSUMER
        Multi-decoders may revive AM stereo
BUSINESS
        Optical memories eye computer market
MICROCOMPUTER
        Micro publicity game
BOOK REVIEW
        Luciano Caglioti: The 2 Faces of Chemistry. [ on chemical risks ]
INVESTMENTS
        Big Potential for Custom Chip Suppliers

------------------------------

Date: Tue 13 Mar 84 10:23:55-EST
From: Renata J. Sorkin <RENATA@MIT-XX.ARPA>
Subject: Knowledge-Based Documentation Systems

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

               "KNOWLEDGE-BASED COMMUNICATION PROCESSES
                       IN SOFTWARE ENGINEERING"

                          Matthias Schneider
                            Project Inform
                       University of Stuttgart


        Designing programs to solve ill-structured problems as well as
trying to understand the purpose of a program and the designer's
intentions involves a great deal of communication between programmers
and users.  Program documentation systems must support these
communication processes by supplying a common base of knowledge and
structuring the exchange of information.

DATE:  Wednesday, March 14
TIME:  12:00 noon
PLACE: NE43-453
Host: Dr. A. diSessa

------------------------------

Date: 15 March 1984 16:58-EST
From: Kenneth Byrd Story <STORY @ MIT-MC>
Subject: Sorting Networks

            [Forwarded from the MIT bboard by Laws@SRI-AI.]

DATE:   Tuesday, March 20, 1984
TIME:   3:45pm   Refreshments
        4:00pm   Lecture
PLACE:  NE43-512a
TITLE:  "Sorting Networks"
SPEAKER:        Professor Michael Paterson, University of Warwick

Last year, Ajtai, Komlos and Szemeredi published details of a depth O(log n)
comparator network for sorting, thus answering a longstanding open problem.
Their construction is difficult to analyse and the bounds they proved result in
networks of astronomical size.  A considerable simplification is presented
which readily yields constructions of more moderate size.

HOST:   Professor Tom Leighton
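
As background for the talk above: a comparator network is a fixed, data-independent sequence of compare-exchange operations.  The AKS network reaches depth O(log n) but with astronomical constants; a practical contrast is Batcher's odd-even mergesort, of depth O(log^2 n).  The sketch below is an illustration of comparator networks in general, not of the AKS or Paterson construction:

```python
# Sketch of a comparator network: Batcher's odd-even mergesort.
# Its depth is O(log^2 n) -- simpler, though asymptotically worse than
# the O(log n) AKS network discussed in the abstract above.

def oddeven_merge(lo, hi, r):
    """Yield comparators merging two sorted halves of positions lo..hi."""
    step = r * 2
    if step < hi - lo:
        yield from oddeven_merge(lo, hi, step)
        yield from oddeven_merge(lo + r, hi, step)
        for i in range(lo + r, hi - r, step):
            yield (i, i + r)
    else:
        yield (lo, lo + r)

def oddeven_merge_sort(lo, hi):
    """Yield comparators sorting positions lo..hi (size a power of 2)."""
    if hi - lo >= 1:
        mid = lo + (hi - lo) // 2
        yield from oddeven_merge_sort(lo, mid)
        yield from oddeven_merge_sort(mid + 1, hi)
        yield from oddeven_merge(lo, hi, 1)

def apply_network(comparators, data):
    """Run a comparator network over a list; the network never inspects the data."""
    data = list(data)
    for i, j in comparators:
        if data[i] > data[j]:
            data[i], data[j] = data[j], data[i]
    return data

network = list(oddeven_merge_sort(0, 7))          # one fixed network for n = 8
print(apply_network(network, [5, 3, 7, 1, 8, 2, 6, 4]))  # -> [1, 2, 3, 4, 5, 6, 7, 8]
```

Because the comparator sequence is fixed in advance, its correctness on all inputs can be checked with the 0-1 principle: a network sorts every input iff it sorts every 0/1 input.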

------------------------------

Date: 15 Mar 84 13:53:27 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: III Seminar on Expert Systems for Fault Diagnosis...

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


                                 I I I SEMINAR

          Title:    An Expert System for Fault Monitoring and Diagnosis

          Speaker:  Kathy Abbott

          Date:     Tuesday, March 27, 1984, 1:30-2:30 PM
          Location: Hill Center, Seventh floor lounge

  Kathy  Abbott,  a Ph.D. student in our department, will give an informal talk
describing her research work at NASA.  Here is her abstract:

       The Flight Management Branch  at  NASA/Langley  Research  Center  in
    Hampton, VA, is exploring the use of AI concepts to aid flight crews in
    managing aircraft systems. Under this research effort, an expert system
    is  being developed to perform on-board fault monitoring and diagnosis.
    Current expert systems technology is insufficient for this application,
    because the flight domain consists of dynamic physical systems and  the
    system  must respond in real time. A frame-based expert system has been
    designed that includes a  frame  associated  with  each  subsystem  and
    sensor  on  the  aircraft.  Among other information, the frames include
    mechanism models of the associated systems that  can  be  used  by  the
    diagnostic expert for hypothesis verification and predictive purposes.

------------------------------

End of AIList Digest
********************

∂18-Mar-84  2328	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #33
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 18 Mar 84  23:27:15 PST
Date: Sun 18 Mar 1984 21:57-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #33
To: AIList@SRI-AI


AIList Digest            Monday, 19 Mar 1984       Volume 2 : Issue 33

Today's Topics:
  AI Books - Synapse Books,
  Bindings - Tong Address,
  Mathematics - Topology of Plane and Sphere,
  Expert Systems - Explanatory Capability,
  Automata - Characterizing Automata from I/O Pairs,
  Conferences - ACM Conference & CSCSI 84 Preliminary Program
----------------------------------------------------------------------

Date: Sun 18 Mar 84 21:53:54-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Synapse Books

I found a copy of the 1982 Synapse Books catalog.  The address is

  Synapse Information Resources, Inc.
  912 Cherry Lane
  Vestal, New York  13850

The catalog covers AI, automation, biomedical engineering, CAD/CAM,
robotics, instrumentation, cybernetics, and computer technology.
Prices seem to be the publishers' suggested prices, although I only
checked a couple.  The selection is impressive.

                                        -- Ken Laws

------------------------------

Date: Fri 16 Mar 84 21:31:54-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Tong Address

Chris Tong can be reached at TONG@SUMEX or TONG@PARC.  Mailing address:
Chris Tong, Xerox Palo Alto Research Center, 3333 Coyote Hill Rd., Palo
Alto, CA

--Tom

[Jeff Rosenschein@SUMEX reports that Chris hasn't used his Sumex
login for quite a while.  Richard Treitel@SUMEX suggested a
TONG@PARC-MAXC address.  -- KIL]

------------------------------

Date: 18 Mar 84 20:45:24 PST (Sun)
From: Tod Levitt <levitt@aids-unix>
Subject: more four color junk

   From: ihnp4!houxm!hou2g!stekas @ Ucb-Vax
   "A plane and sphere are NOT topologically equivalent; a
   sphere has an additional point."

More to the "point", the topological invariants of the plane and the
(two-) sphere are different, which is the definition of being
topologically inequivalent. For instance, the plane is contractible to a
point while the sphere is not; the plane is non-compact, while the
sphere is compact; the homotopy and homology groups of the plane are
trivial, while those of the sphere are not.

A more general form of the four-color problem asks the question: for a
given (n-dimensional) shape (and its topological equivalents), what is
the smallest number of colors needed to color any map drawn on the
shape?
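
The invariants mentioned above can be stated concretely; the following is a sketch of standard results, not part of the original message:

```latex
% The plane is contractible, so all of its homotopy groups vanish,
\pi_n(\mathbb{R}^2) \cong 0 \quad \text{for all } n \ge 1,
% whereas the 2-sphere has a nontrivial second homotopy group:
\pi_2(S^2) \cong \mathbb{Z}.
% For the general coloring question, the Ringel--Youngs theorem gives,
% for a closed orientable surface S_g of genus g \ge 1, the chromatic number
\mathrm{chr}(S_g) = \left\lfloor \frac{7 + \sqrt{1 + 48g}}{2} \right\rfloor,
% which for the torus (g = 1) yields 7 colors.
```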

------------------------------

Date: 9 Mar 84 8:58:08-PST (Fri)
From: decvax!linus!utzoo!watmath!deepthot!julian @ Ucb-Vax
Subject: Re: computer ECG, FDA testing of AI programs
Article-I.D.: deepthot.212

As a matter of human engineering, I think "expert" programs for
practical use must be prepared to explain the reasoning followed
when they present recommendations.   Computer people ought to be
well aware of the need to provide adequate auditing and verification
of program function, even if the naive users don't know this.
The last thing we need is 'expert' computers that cannot be
questioned.  I think Weizenbaum had a valid point when he wrote
about programs that no one understood.  And I would be unhappy
to see further spread of computer systems that the human users cannot
feel themselves to be in charge of, especially when the programs
are called 'intelligent' and the technology for answering these
questions about the reasoning processes is fairly well established.
                Julian Davies

------------------------------

Date: 16 Mar 84 13:28:54 PST (Friday)
From: Bruce Hamilton <Hamilton.ES@PARC-MAXC.ARPA>
Reply-to: Hamilton.ES@PARC-MAXC.ARPA
Subject: Characterizing automata from I/O pairs

The following recent msgs should be of interest to this list, and
hopefully will stimulate some good discussion.  --Bruce

                              ----------

From: Ron Newman <Newman.es>

The following letter to the editor was published in Softalk of March,
1984:

  I have come into possession recently of a program called Microlisp.  I
understand that it has been around for some time, so maybe someone out
there knows something about it.  I cannot get it to do anything but
print numbers I type in or print the word "nil".  How do I make it do
anything else?  Can you give me an example of something useful that I
might be able to do with it?

                                        [...]

                              ----------

From: Bruce Hamilton <Hamilton.ES>

Actually, the letter implies a serious question, related to trying to
communicate with other forms of intelligent life: is there an approach,
to giving inputs and observing output to an unknown program, which is in
some sense optimal; i.e. leads to a complete characterization of input -
output pairs in the shortest possible time?

--Bruce

                              ----------

From: VanDuyn.Henr

One question is whether intelligent life would acquire (a.k.a. pirate or
steal) a piece of software without the documentation.

On the serious side, what you suggest reminds me of programs that
attempt to write programs by examining a small set of input-output
pairs.  At first, sample pairs are fed to the program; then the program
begins generating its own sample pairs to build and validate a
hypothesis.  I read an article about this in the ACM TOPLAS journal
about 3 years ago...

Mitch

                              ----------

From: stolfi.pa

"Is there an approach, to giving inputs and observing output to an
unknown program, which is in some sense optimal; i.e. leads to a
complete characterization of input - output pairs in the shortest
possible time?"

I am interested in that question, too. Do you know of any work in that
area? I have given some thought to it, but made only trivial progress.

To be more definite, consider a deterministic finite machine with N
internal states, and {0,1} as its input and output alphabets. The goal
is to determine the structure of the machine (i.e., its transition and
output functions) by feeding it a sequence of zeros and ones, and
observing the bits that come out of it. Nothing is known about the
structure of the machine. In particular, it is not known how to reset
the machine to its initial state, and not even whether it is possible to
do so (i.e., whether the machine is strongly connected). Then

(1) at best, you will be able to know the structure of a single strongly
connected component of the machine, and have only a vague knowledge of
the path that led from the initial state to that component. Moreover,
your answer will be determined only up to automaton equivalence.  (In
other words, studying the behavior of something will only tell you how
that thing behaves, not how it is built.)

(2) if you have an upper bound on the number N of internal states, I
believe you can always deduce the structure of the machine, subject to
the caveats in (1), after feeding it some finite number f(N) of bits.
However, I have no algorithm for generating the required input and
analyzing the output, and I have no idea how big f(N) is.  O(N) is a
trivial lower bound.  Any upper bounds?  Can it be more than O(2↑N)?

(3) In any case, note that a finite machine built from n boolean gates
may have an exponential number of states (for example, a counter with n
flip-flops has 2↑n states).  Therefore, even if you know that a program
has a single 16-bit integer variable and a 16-bit program counter, you
may need to feed it a few billion bits to know what it does.

(4) if you do not have an upper bound on N, there is no way you can
deduce it by experiment, or answer categorically any interesting
questions about the structure of the machine. For example, suppose you
have put in 999,999,999 bits, and you got that many zeros out. You still
don't know whether the machine is a trivial one-state, constant-output
gadget, or whether it is a billion-state monster that ignores its inputs
and simply puts out a '1' every billionth transition.  Note, however, that
you may still give answers of the form "either this machine has more
than X states, or it is equivalent to the following automaton: ..."

In anthropomorphic terms, (4) says that it is impossible to distinguish
a genuinely dumb being from a very intelligent one that is just playing
dumb.  Point (3) makes me wonder whether the goal and method of psychology --
to understand the human mind by studying how it behaves -- is a sensible
proposition after all.

jorge

[There have, of course, been investigations of pattern recognition
techniques for inferring Markov or finite-state grammars.  The
PURR-PUSS system is one that comes to mind.  Applications not
mentioned above include cryptography, data compression, fault
diagnosis, and prediction (e.g., stock market directions).  Martin
Gardner had a fun SciAm column ~13 years ago about building an
automaton for predicting the operator's heads/tails choices.  Gardner
also popularized the game of Elusius, in which players try to
elucidate laws of nature by pulsing the system with test cases.
The Mastermind game is related, although you are given information
about the internal state as part of the output.  Several AI
researchers have used automata theory for investigating hypothesis
formation and learning.  -- KIL]
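
Point (3) in the message above is easy to demonstrate concretely.  The following is a minimal sketch (the names are illustrative, not from any system mentioned here): a black-box machine built from an n-bit counter has 2↑n states, ignores its input, and is indistinguishable from a constant-output gadget until the counter wraps.

```python
# Sketch: why black-box identification of automata is hard (point 3 above).
# A machine built from an n-bit counter has 2**n internal states; it
# ignores its input bit and emits '1' only when the counter wraps to 0.

def make_counter_machine(n_bits):
    """Return a step function hiding a 2**n_bits-state machine."""
    period = 2 ** n_bits
    count = [0]
    def step(input_bit):
        count[0] = (count[0] + 1) % period
        return 1 if count[0] == 0 else 0
    return step

n = 10
machine = make_counter_machine(n)
probes = [machine(0) for _ in range(2 ** n - 1)]

# Every probe so far returned 0: the machine still looks exactly like a
# trivial one-state, constant-output gadget.
print(all(bit == 0 for bit in probes))  # -> True
print(machine(0))                       # the 2**n-th step finally emits 1
```

With n = 16 (the "single 16-bit variable" of the message), the same experiment needs on the order of 65,536 probes before the first informative output appears.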

------------------------------

Date: Tue 13 Mar 84 12:46:54-EST
From: Neena Lyall <LYALL@MIT-XX.ARPA>
Subject: ACM Conference

             [Forwarded from the MIT bboard by Laws@SRI-AI.]

        "INTEGRATING THE INFORMATION WORKPLACE: THE KEY TO PRODUCTIVITY"

                       ACM NORTHEAST REGIONAL CONFERENCE
                              19 - 21 March, 1984
                             University of Lowell
                                  Lowell, MA
                       Special Student/Faculty Rate $20

KEYNOTE SPEAKERS ARE:

Monday          George McQuilken, Chairman, Spartacus Computer Inc., "Mainframe
                Technology in Integrated Systems"

Tuesday         Carl  Wolf,  President,  Interactive  Data Corp., "Bridging the
                Mainframe to Micro Gap"

Wednesday       Mitch  Kapor,  President,  Lotus  Development   Corp.,   "Micro
                Technology in Integrated Systems"

CLOSING PLENARY SESSION:
                Thomas F. Gannon (5th Generation, DEC)
                Maurice V. Wilkes (Corporate Research, DEC)
                Frederick  G.  Withington  (V.P., ADL), "Integrating the Pieces
                - Computing in the 90's"

THE TRACK CHAIRMEN ARE:

Applications Technology Track
                Dr.  David  Prerau,   Principal   of   Technical   Staff,   GTE
                Laboratories, Inc.

Artificial Intelligence Track
                Jeffrey  Hill,  Manager of Development, Artificial Intelligence
                Corp.

                Dr.  David  Prerau,   Principal   of   Technical   Staff,   GTE
                Laboratories, Inc.

CAD/CAM & Robotics Track
                Cary Bullock, V.P., Engineering & Operations, Xenergy Corp.

Computer Tools & Techniques Track
                David Hill, Director, Data Systems & Communications

Database Management Track
                Michael Stadelmann, Manager of Development, GE/MIMS Systems

Decision Support Systems Track
                David   Kahn,   Manager,   Decision   Support   Systems,   Wang
                Laboratories

Networking & Data Communications Track
                Dr.  Barry  Horowitz,  V.P.   Bedford   Operations,   (Formerly
                Technical Director, Strategic Systems), MITRE Corp.

Office Automation Track
                Nancy Heaton, Manager of Office Automation, Wang Laboratories

Personal Computing Track
                Michael Rohrbach, International Market Resources

THERE ARE TWO TUTORIALS WHICH RUN IN PARALLEL WITH THESE SESSIONS:

Artificial Intelligence Tutorial (3 days)
                Dr. Eugene Charniak, Brown University.

                AI  and its newest developments, emphasizing expert systems and
                knowledge-based systems.

Networking Technology Tutorial (3 days)
                Stewart Wecker, Pres. Technology Concepts.

                Local  area   and   other   networks,   including  theory   and
                manufacturers'  current  products  (IBM's  SNA,  DECNET and LAN
                products)

For  detailed  information  see bulletin board outside Room 515, 545 Technology
Square, Cambridge or call either 617/444-5222: Box  C,  or  617/271-3268:  Shim
Berkovits.

------------------------------

Date: 8 Mar 84 10:34:06-PST (Thu)
From: harpo!utah-cs!sask!utcsrgv!utai!tsotsos @ Ucb-Vax
Subject: CSCSI 84 Preliminary Program
Article-I.D.: utai.129

The preliminary program for the Fifth National Conference of the Canadian
Society for Computational Studies of Intelligence follows.
Registration or other information may be obtained from:

Prof. Michael Bauer,
Local Arrangements Chair, CSCSI/SCEIO-84
Dept. of Computer Science,
University of Western Ontario
London, Ontario, Canada
N6A 5B7
(519)-679-6048

Due to unfortunate circumstances beyond our control, there has been a
date change for the conference which has not been reflected in
several current announcements. The correct date is May 15 - 17, 1984.



                            CSCSI-84

                      Canadian Society for
              Computational Studies of Intelligence

                    Fifth National Conference

                           May 15 - 17
                  University of Western Ontario
                      London, Ontario, Canada


                       PRELIMINARY PROGRAM


Tuesday Morning, May 15

8:30 - 8:40     Introduction and Welcome

Session 1  -  Natural Language

8:40 - 9:40     Martin Kay (XEROX PARC) - Invited Lecture
9:40 - 10:10    "A Theory of Discourse Coherence for Argument Understanding"
                Robin Cohen (U of Toronto) (Long paper)
10:10 - 10:30   "Scalar Implicature and Indirect Responses in
                     Question-Answering"
                Julia Hirschberg (U of Pennsylvania) (Short paper)

10:30 - 10:40   BREAK

10:40 - 11:00   "Generating Non-Direct Answers by Computing Presuppositions
                   of Answers, Not of Questions or Mind your P's, not your Q's"
                Robert Mercer, Richard Rosenberg (U of British Columbia)
                (Short paper)
11:00 - 11:20   "Good Answers to Bad Questions: Goal Deduction in Expert
                     Advice-Giving"
                Martha Pollack (U of Pennsylvania) (Short paper)


Session 2  -  Cognitive Modelling and Problem Solving


11:20 - 11:40   "Using Spreading Activation to Identify Relevant Help"
                Adele Howe (ITT), Timothy Finin (U of Pennsylvania)
                (Short paper)
11:40 - 12:00   "Managing Time Maps"
                Thomas Dean (Yale) (Short paper)


12:00 - 1:30    LUNCH


Tuesday Afternoon, May 15

Panel Discussion

1:30 - 2:45    "The Artificial Intelligence, Robotics and Society Program"
                    of the Canadian Institute for Advanced Research
    Panel members : Zenon Pylyshyn - moderator (U of Western Ontario)
            Raymond Reiter - coordinator for the University of British Columbia
            John Mylopoulos - coordinator for the University of Toronto
            Steven Zucker - coordinator for McGill University
            Nick Cercone - president CSCSI/SCEIO


Session 3  -  Computer Vision I


2:45 - 3:45    "Optical Phenomena in Computer Vision"
               Steven Shafer (CMU) - Invited Lecture

3:45 - 4:00    BREAK

4:00 - 4:30    "Procedural Adequacy in an Image Understanding System"
               Jay Glicksman (Texas Instruments) (Long paper)
4:30 - 5:00    "The Local Structure of Image Discontinuities in One Dimension"
               Yvan Leclerc (McGill) (Long paper)
5:00 - 5:30    "Receptive Fields and the Reconstruction of Visual Informatiom"
               Steven Zucker (McGill) (Long paper)



Wednesday Morning, May 16


Session 4  -  Robotics


8:30  -  9:30   "Robotic Manipulation"
                Matthew Mason (CMU)  -  Invited Lecture
9:30  - 10:00   "Trajectory Planning Problems, I: Determining Velocity
                     Along a Fixed Path"
                Kamal Kant (McGill) (Long paper)
10:00 - 10:20   "Interpreting Range Data for a Mobile Robot"
                Stan Letovsky (Yale) (Short paper)

10:20 - 10:45   BREAK


Panel Discussion

10:45 - 12:00   "What is a valid methodology for judging the quality
                    of AI research?"

                Panel Moderator : Alan Mackworth (U of British Columbia)

12:00 - 1:30    LUNCH


Wednesday Afternoon, May 16

Session 5  -  Learning

1:30 - 2:00     "The Use of Causal Explanations in Learning"
                David Atkinson, Steven Salzberg (Yale) (Long paper)
2:00 - 2:30     "Experiments in the Automatic Discovery of Declarative
                     and Procedural Data Structure Concepts"
                Mostafa Aref, Gordon McCalla (U of Saskatchewan) (Long paper)
2:30 - 3:00     "Theory Formation and Conjectural Knowledge in Knowledge Bases"
                James Delgrande (U of Toronto) (Long paper)
3:00 - 3:20     "Conceptual Clustering as Discrimination Learning"
                Pat Langley, Stephanie Sage (CMU) (Short paper)

3:20 - 3:40     BREAK

3:40 - 4:00     "Some Issues in Training Learning Systems and an
                     Autonomous Design"
                David Coles, Larry Rendell (U of Guelph) (Short paper)
4:00 - 4:20     "Inductive Learning of Phonetic Rules for Automatic
                     Speech Recognition"
                Renato de Mori (Concordia University)
                Michel Gilloux (Centre National d'Etudes des
                        Telecommunications, France)
                (Short paper)

4:20 - 4:30     BREAK


Session 6  -  Computer Vision II


4:30 - 5:00   "Applying Temporal Constraints to the Problem of Stereopsis
                   of Time-Varying Imagery"
              Michael Jenkin (U of Toronto) (Long paper)
5:00 - 5:30   "Scale-Based Descriptions of Planar Curves"
              Alan Mackworth, Farzin Mokhtarian
              (U of British Columbia) (Long paper)


Wednesday Evening, May 16  -  BANQUET



Thursday Morning, May 17


Session 7  -  Logic Programming


8:30  -  9:30   J. Alan Robinson (Syracuse U)  -  Invited Lecture
9:30  -  9:50   "Implementing PROGRAPH in Prolog: An Overview of the
                     Interpreter and Graphical Interface"
                P. Cox, T. Pietrzykowski (Acadia U) (Short paper)
9:50  - 10:10   "Making 'Clausal' Theorem Provers 'Non-Clausal'"
                David Poole (U of Waterloo) (Short paper)
10:10 - 10:30   "Logic as Interaction Language"
                Martin van Emden (U of Waterloo) (Short paper)

10:30 - 10:45   BREAK


10:45 - 12:00
Report of the CSCSI/SCEIO Survey on AI Research in Canada

           Nick Cercone - President CSCSI/SCEIO
           Gordon McCalla - Vice-President CSCSI/SCEIO


12:00 - 1:00   LUNCH

Thursday Afternoon, May 17


Session  8  -  Expert Systems and Applications


1:00 - 2:00   Ramesh Patil (MIT)  -  Invited Lecture
2:00 - 2:20   "ROG-O-MATIC: A Belligerent Expert System"
              Michael Mauldin, Guy Jacobson, Andrew Appel, Leonard Hamey (CMU)
              (Short paper)
2:20 - 2:40   "An Explanation System for Frame-Based Knowledge Organized
                   Along Multiple Dimensions"
              Ron Gershon, Yawar Ali, Michael Jenkin (U of Toronto)
              (Short paper)
2:40 - 3:00   "Qualitative Sensitivity Analysis: A New Approach to Expert
                   System Plan Justification"
              Stephen Cross (Air Force Institute of Technology) (Short paper)

3:00 - 3:20   BREAK


Session  9  -  Knowledge Representation


3:20 - 4:20    "A Fundamental Trade-off in Knowledge Representation
                    and Reasoning"
               Hector Levesque (Fairchild R&D)  Invited Lecture
4:20 - 4:50    "Representing Control Strategies Using Reflection"
               Bryan Kramer (U of Toronto) (Long paper)
4:50 - 5:10    "Knowledge Base Design for an Operating System
                     Expert Consultant"
               Stephen Hegner (U of Vermont),
               Robert Douglass (Los Alamos National Laboratory) (Short paper)
5:10 - 5:30    "Steps Towards a Theory of Exceptions"
               James Delgrande (U of Toronto) (Short paper)


5:30 - 5:45    CLOSING REMARKS

------------------------------

End of AIList Digest
********************

∂22-Mar-84  1127	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #34
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 22 Mar 84  11:25:54 PST
Date: Thu 22 Mar 1984 10:00-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #34
To: AIList@SRI-AI


AIList Digest           Thursday, 22 Mar 1984      Volume 2 : Issue 34

Today's Topics:
  Corporate AI - Entry Route Request,
  AI Documents - HAKMEM Request,
  Inference - Identifying Programs,
  Fuzzy Sets - Reference,
  Computer Art - Computer Manipulated Novel,
  Expert Systems - Computer EKG's,
  AI Funding - Strategic Computing in the New York Review of Books,
  Public Service - Tax Info,
  Seminars - RUBRIC: Intelligent Information Retrieval &
    Computational Linguistics &
    Expert System for Building Expert System Rules
  Course Announcement - Lisp: Language and Literature
----------------------------------------------------------------------

Date: 19 Mar 84 19:06:44-PST (Mon)
From: decvax!dartvax!lorien @ Ucb-Vax
Subject: IBM vs. HP: research (AI) question
Article-I.D.: dartvax.922

I have been offered entry-level positions at both Hewlett-Packard and
IBM.  I feel that, sooner or later, I'd like to do research in some AI-
related field, and I'd like any comments you may have as to the
accessibility of the research labs to an employee starting out as a
programmer in either company.  I don't want to start a ridiculous
discussion of the overall merits of and/or problems with HP and IBM;
many articles have been written on both.  But things can change quickly
and there may be some work being done of which I'm not presently aware.
I'd appreciate any impressions, subjective or otherwise, that you may
have.  I hold an A.B. in Computer Science from Dartmouth.

      --Lorien Y. Pratt
        Dartmouth College Library
        Hanover, NH  03755

        decvax!dartvax!lorien

------------------------------

Date: 10 Mar 84 18:13:23-PST (Sat)
From: pur-ee!ecn-ee!davy @ Ucb-Vax
Subject: Copies of HAKMEM? - (nf)
Article-I.D.: pur-ee.1672

This has already been asked in UNIX-WIZARDS, I thought I'd ask it here
too.  Does anyone have a copy of HAKMEM (MIT Memo from Feb. 1972) they'd
be willing to Xerox?  I've heard there's an online copy at MIT-MC or
someplace -- anyone know where it's at?

--Dave Curry
decvax!pur-ee!davy
eevax.davy@purdue

[I have a copy of this memo (AIM 239, HAKMEM by M. Beeler, R.W. Gosper,
and R. Schroeppel).  It is a collection of notes by MIT hackers on about
20 different topics.  The document is about 100 pages long and includes
figures.  Does an online copy exist?  -- KIL]

------------------------------

Date: Mon 19 Mar 84 23:50:08-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Identifying programs

"Algorithmic Program Debugging" by Ehud Shapiro, MIT Press includes
substantial discussion of the question of identifying programs
from I/O pairs. Of course in general the identification is not
exact. Concepts of asymptotic identification ("identification in
the limit") are used instead. A lot of this work has been
developed to try to pin down the concept of "learnable language".
There are a number of recent papers on this question by Scott
Weinstein (University of Pennsylvania) and others, in the journal
Information and Control. If anyone is interested, I'll dig out
the references.

-- Fernando Pereira
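
The notion of identification in the limit can be illustrated with a
toy enumeration learner.  This is an invented sketch of the general
idea, not Shapiro's algorithm; the hypothesis names and class below
are made up for the example:

```python
def learner(hypotheses, examples):
    """Identification in the limit by enumeration.

    After each new I/O pair, conjecture the first hypothesis in a
    fixed enumeration that agrees with every pair seen so far.  If the
    target behavior is in the enumeration, the conjectures eventually
    stop changing, even though no single conjecture is ever certain.
    """
    seen = []
    for x, y in examples:
        seen.append((x, y))
        for name, h in hypotheses:
            if all(h(a) == b for a, b in seen):
                yield name
                break
        else:
            yield None                      # no consistent hypothesis yet
```

On the pairs (0,0), (1,2), (3,6) drawn from the doubling function, the
conjectures go "zero", "double", "double" and never change again.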

------------------------------

Date: 19 Mar 1984 20:56:34-PST
From: don%brandeis.csnet@csnet-relay.arpa
Subject: Fuzzy Sets

I have a reference for Daniel Conde who requested information about
Fuzzy Sets on a recent AI bulletin board:

        Fuzzy Sets and Systems: Theory and Applications

        Didier Dubois & Henri Prade
        copyright 1980
        Academic Press

                                Don Ferguson
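
For readers who have not seen the formalism, the standard Zadeh
operations that Dubois and Prade build on fit in a few lines.  The
"tall" membership function below is an invented example, not taken
from the book:

```python
def f_union(a, b):
    """Membership of x in A union B (Zadeh max rule)."""
    return lambda x: max(a(x), b(x))

def f_intersect(a, b):
    """Membership of x in A intersect B (Zadeh min rule)."""
    return lambda x: min(a(x), b(x))

def f_complement(a):
    """Membership of x in not-A."""
    return lambda x: 1.0 - a(x)

# "tall": grade rises linearly from 0 at 150 cm to 1 at 190 cm
tall = lambda h: min(1.0, max(0.0, (h - 150) / 40.0))
```

Note that the union of a fuzzy set with its own complement need not
have membership 1 everywhere (a 170 cm person is tall to degree 0.5
and not-tall to degree 0.5), one of the ways fuzzy sets depart from
classical ones.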

------------------------------

Date: 19 Mar 84 11:36:10 CST (Mon)
From: ihnp4!houxa!homxa!rem
Subject: Computer Manipulated Novel

For people interested in computer-aided art, I manipulated a small,
unpublished novel of mine called ABRACADABRA a few years ago.  The
book is a mystery derived from childhood experiences in St. Louis.
I call the manipulated book ABRACADABRA CADAVER.  Chapter-by-chapter
I wrote UNIX shell programs to alter the text according to its
contents: for example, in an early chapter I misspelled all words as
a child might do.  In another I inserted German proverbs appropriate
to my father's speech in all of his conversations.  Another repeats
key phrases again and again, in a minimalist way; another puts all
dialog into footnotes; another, where the mystery unfolds, cryptically
reverses the sentences throughout--and so on.  After editing the
end results, I came up with a Joycean book that is quite
readable and interesting as a literary document.  I no longer
have it on-line, but if anyone is interested, I can provide
more details.  And, of course, if anyone knows of a publisher
crazy enough.....

Bob Mueller

BELLCORE
Holmdel, NJ

------------------------------

Date: 11 Mar 84 11:01:10-PST (Sun)
From: harpo!ulysses!burl!clyde!akgua!mcnc!ecsvax!hsplab @ Ucb-Vax
Subject: Computer EKG's
Article-I.D.: ecsvax.2145

One reason why computerized EKG's have become so popular in the medical
environment is that **most** of the EKGs are performed on normal people
and are being used as a screening process.  This means that if a computer
program is very good at differentiating between normals and abnormals
without any other capability (not true with current programs), it will
probably do better than 90%.  It is for this reason that a cardiologist
overview is used primarily to catch gross errors and to refine problems
associated with pathological cases.  In a study done by Bailey at the
NIH in the early 1970's, most computer programs actually did rather well,
and if you removed interpretation differences which were common among
cardiologists and tested the programs on grossly abnormal cases, they
were able to achieve better than 60%-70% accuracy.

David Chou
Department of Pathology
University of NC, Chapel Hill
    !decvax!mcnc!ecsvax!hsplab
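
The base-rate point above is worth making explicit with a little
arithmetic.  The numbers below are illustrative, not from Bailey's
study:

```python
def overall_accuracy(p_normal, acc_normal, acc_abnormal):
    """Expected fraction of correct reads in a screening population.

    p_normal is the prevalence of normal EKGs; acc_normal and
    acc_abnormal are the program's accuracies on each group.
    """
    return p_normal * acc_normal + (1 - p_normal) * acc_abnormal
```

With 95% of screened patients normal, a program that is 95% accurate
on normals but only 60% on abnormals still scores about 93% overall,
and a "program" that simply calls everything normal scores 95%.  This
is why overall accuracy alone says little about clinical value, and
why cardiologist overreading concentrates on the abnormal cases.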

------------------------------

Date: 20 Mar 84 11:46:20 PST (Tuesday)
From: Ron Newman <Newman.es@PARC-MAXC.ARPA>
Subject: Strategic Computing in the New York Review of Books

The March 15, 1984 issue of The New York Review of Books contains an
article entitled "The Costs of Reaganism", which mentions DARPA's
Strategic Computing Program as an example of misdirected U.S. economic
and budgetary policy.  The article is by Emma Rothschild, who teaches in
the Science, Technology, and Society program at MIT and is the author of
"Paradise Lost: The Decline of the Auto-Industrial Age".

  ...What does it mean for America's future economic growth that
  69 percent of federally supported research and development is
  for military purposes, an increase since 1981 of $18.1 billion
  in the military function and of $0.6 billion in non-military
  functions? [21]

    Does it matter for the character of America's scientific
  institutions that the Defense Advanced Research Projects
  Agency's new "strategic computing" program is in the process
  of transforming academic computer science?[22]  Does it
  matter for American competitiveness that Japan's ten-year
  program on the cognitive, linguistic, and engineering
  foundations of computing will be civilian, while America's
  will be concerned with robot reconnaissance vehicles,
  radiation-resistant wafers, and missile defenses, with
  "speech recognition" in the "high-noise, high-stress environment
  [of] the fighter cockpit," and with "voice  distortions due
  to the helmet and face mask"? [23]  Mr. Reagan's principal
  opponents are not asking these questions; they are questions
  about the militarization of the political life, the scientific
  potential, and the economic society of the richest country in
  the world.

  [21] "Special Analyses", Budget of the United States Government,
    FY 1985, p. K-30.
  [22] The program is described in Weinberger's Annual Report, p. 263,
    and also in the Defense Advanced Research Projects Agency's
    own study "Strategic Computing" (DARPA, October 28, 1983).
    In this study DARPA explains that it intends to use contract
    personnel from industry as well as university researchers, in
    order to "avoid a dangerous depletion of the university
    computer science community":  "The magnitude of this national
    effort could represent a very large perturbation to the
    university community" (p. 64)
  [23] DARPA, "Strategic Computing", pp. 34-35.

------------------------------

Date: Wed 21 Mar 84 14:05:12-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Tax-Free Support vs. Income Averaging

Bob Boyer of UTexas-20 posted a bboard message about IRS policy on
tax-free student fellowships.  This isn't AIList material, but it
will be of interest to many students on and off the Arpanet, so I
am making it available for those who want to post it at their sites.
I have copied the message to file <AIList>IRS.TXT on SRI-AI, and will
send copies to interested people who can't FTP the file.  I have
also included related messages from others who read the original.

The content is roughly this: if you claim that your current academic
support is tax-free (or if the IRS makes that claim), and if such income
is at least 50% of your support, you will probably not be able to income
average during the next four years.  This is very likely to cost you more
than the tax you save on your current fellowship or other support.

                                        -- Ken Laws

------------------------------

Date: 19 Mar 84 14:23:22 PST (Monday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium 3/22

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

                Richard M. Tong
                Advanced Information and Decision Systems


        RUBRIC: An Intelligent Aid for Information Retrieval


In this talk I will describe an ongoing research project that is
concerned with developing a computer based aid for information retrieval
from natural language databases. Unlike other attempts to improve upon
Boolean keyword retrieval systems, this research concentrates on
providing an easily used rule-based language for expressing retrieval
concepts. This language draws upon work in production rule systems in AI
and allows the user to construct queries that give better precision and
recall than more traditional forms.

The talk will include a discussion of the main elements in the system
(which is written in LISP and C), the key research issues (including
some comments on the important role that uncertainty plays) and some
man-machine interface questions (in particular, the problem of providing
knowledge elicitation tools).
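
The flavor of such a rule-based retrieval language can be suggested
with a toy recursive scorer.  This is a sketch of the general
weighted-production-rule idea only, not RUBRIC's actual language; the
rules, weights, and terms are invented:

```python
# Each rule: concept <- conjunction of subconcepts/terms, with a weight.
RULES = {
    "terrorism": [(["bombing"], 0.9), (["hostage", "demand"], 0.8)],
    "bombing":   [(["explosion"], 0.7), (["bomb"], 1.0)],
}

def score(concept, words):
    """Degree to which a document (a set of words) supports a concept.

    A term present in the document scores 1.0; a rule scores its
    weight times its weakest conjunct; a concept takes its best rule.
    """
    if concept in words:
        return 1.0
    best = 0.0
    for conds, weight in RULES.get(concept, []):
        best = max(best, weight * min(score(c, words) for c in conds))
    return best
```

A document mentioning "bomb" supports "terrorism" at 0.9 via the
"bombing" subconcept, while "hostage" alone scores 0 because its rule
also requires "demand"; this is the kind of graded precision a flat
Boolean keyword query cannot express.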


Thursday, March 22, 1984        4:00 pm

*** Please note the location change ***

Hewlett-Packard
1651 Page Mill Road
Palo Alto, CA
28C Lower Auditorium

Be sure to arrive at the building's lobby on time, so that you may be
escorted to the meeting room.

------------------------------

Date: 19 Mar 1984  15:26 EST (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Computational Linguistics (BOSTON)

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

Wednesday, March 21     4:00pm      8th floor playroom

De-mystifying Modern Grammatical Theory and Artificial Intelligence
Robert Berwick

It has frequently been suggested that modern linguistic theory is
irreconcilably at odds with a ``computational'' view of human
linguistic abilities.  In fact, linguistic theory provides a rich
source of constraints for the computationalist.  In this talk I will
outline some of the key changes in grammatical theory from the mid 60's to
the present day that support this claim, and at the same time try to
dispel a number of myths:

Myth: Modern grammars are made up of large numbers of rules that
one cannot ``implement.''

Myth: Modern grammars are not relevant to computational models
of language processing.

Myth: Knowledge that you can order hamburgers in restaurants
aids *on-line* syntactic processing.

------------------------------

Date: 20 Mar 84 11:30:18 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: Experiments with Rule Writer for EXPERT

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


                                 I I I SEMINAR

          Title:    Experiments with Rule Writer for EXPERT
          Speaker:  George Drastal
          Date:     Tuesday, April 3, 1984, 1:30-2:30 PM
          Location: Hill Center, Seventh floor lounge


  George  Drastal,  a Ph.D. student in our department, will describe his thesis
research in an informal talk.  His abstract:

       Results are presented of some experiments with Rule  Writer,  an  AI
    system  that  assists  knowledge  engineers  with  the  task of writing
    inference rules  for  a  medical  consultation  system  in  the  EXPERT
    formalism.    Rule  Writer  (RW) is used primarily in an early stage of
    expert system development, to generate a prototype rule base.   RW  may
    also   be   used  as  a  testbed  for  experimenting  with  alternative
    organizations   of   expert   knowledge   in   the   EXPERT   knowledge
    representation.

------------------------------

Date: Wed, 21 Mar 84 18:38 PST
From: BrianSmith.PA@PARC-GW.ARPA
Reply-to: BrianSmith.PA@PARC-GW.ARPA
Subject: Course Announcement -- Lisp: Language and Literature

         [Forwarded from the SRI CSLI bboard by Laws@SRI-AI.]

The following course will be the CSLI Seminar on Computer Languages
for the Spring Quarter [at Stanford].  If you are interested in attending,
please read the notes on dates and registration, at the end.

                Lisp: Language and Literature

A systematic introduction to the concepts and practices of programming,
based on a simple reconstructed dialect of LISP.  The aim is both to
convey and to make explicit the programming knowledge that is
typically acquired through apprenticeship and practice.  The material
will be presented under a linguistic reconstruction, using vocabulary
that should be of use in studying any linguistic system.  Considerable
hands-on programming experience will be provided.

Although intended primarily for linguists, philosophers, and
mathematicians, anyone interested in computation is welcome.  In
particular, no previous exposure to computation will be assumed.
However, since we will aim for rigorous analyses, some prior familiarity
with formal systems is essential.  Also, the course will be more like a
course in literature and creative writing, than like a course in, say,
French as a second language.  The use of LISP, in other words, will
be primarily as a vehicle for larger issues, not so much an object of
study in and of itself.  Since LISP (unlike French) is really very
simple, we will be able to teach it in class and lab sessions.  Tutorial
instruction and some individual programming assistance will be provided.

Topics to be covered include:

   -- Procedural and data abstraction;
   -- Objects, modularity, state, and encapsulation;
   -- Input/output, notation, and communication protocols;
   -- Meta-linguistic abstraction, and problems of intensional grain;
   -- Architecture, implementation, and abstract machines;
   -- Introspection, self-reference, meta-circular interpreters, and reflection.

Throughout, we will pay particular attention to the following themes:

   -- Procedural and declarative notions of semantics;
   -- Interpretation, compilation, and other models of processing;
   -- Implicit vs. explicit representation of information;
   -- Contextual relativity, scoping mechanisms, and locality;
   -- Varieties of language: internal, external, theoretical;
   -- Syntax and abstract structure: functionalism & representationalism.
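
One of the listed topics, meta-circular interpreters, can be hinted at
with a tiny evaluator for a Lisp-like language.  This sketch is far
smaller than anything the course would cover, and it represents
expressions as nested Python tuples rather than real S-expressions:

```python
def evaluate(expr, env):
    """Evaluate a tiny Lisp-like expression.

    Symbols are strings looked up in env; numbers are self-evaluating;
    compound forms are tuples whose head is 'quote', 'if', 'lambda',
    or an expression evaluating to a callable.
    """
    if isinstance(expr, str):                # variable reference
        return env[expr]
    if not isinstance(expr, (list, tuple)):  # self-evaluating datum
        return expr
    op, *args = expr
    if op == 'quote':
        return args[0]
    if op == 'if':
        test, conseq, alt = args
        return evaluate(conseq if evaluate(test, env) else alt, env)
    if op == 'lambda':
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    fn = evaluate(op, env)                   # application
    return fn(*[evaluate(a, env) for a in args])
```

Given an environment binding '+' and '<' to Python functions, the
expression ('lambda', ('x',), ('+', 'x', 1)) evaluates to a one-place
incrementing procedure, which is the sense in which the interpreter is
"circular": it borrows procedures and conditionals from its host.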

Organizational Details:

   Instructor: Brian C. Smith, Xerox PARC/Stanford CSLI; 494-4336 (Xerox);
      497-1710 (Stanford), "BrianSmith@PARC" (Arpanet).

   Classes: Tuesdays and Thursdays, 2:00 - 3:30, in Room G19, Redwood
      Hall, Jordan Quad.

      NB:  Since we will be using the computers just now being installed
      at CSLI, there may be some delay in getting the course underway.
      In particular, it is possible that we will not be able to start until
      mid-April.  A follow-up note with more details will be sent out as
      soon as plans are definite.

   Registration: Again, because of the limited number of machines, we
      may have to restrict participation somewhat.  We would therefore
      like anyone who intends to take this course to notify Brian Smith
      as soon as possible.  Note that the course will be quite demanding:
      10 to 20 hours per week will probably be required, depending on
      background.

   Sections: As well as classes, there will be section/discussion periods
      on a regular basis, at times to be arranged at the beginning of the
      course.

   Reading: The course will be roughly based on the "Structure and
      Interpretation of Computer Programs" textbook by Abelson and
      Sussman that has been used at M.I.T., although the linguistic
      orientation will affect our dialects and terminology.

   Laboratory: Xerox 1108s (Dandelions) will be provided by CSLI, to be
      used for problem sets and programming assignments.  Instructors &
      teaching assistants will be available for assistance at pre-arranged
      times.

   Credit: The course may be listed as a special topics course in Computer
      Science.  However (in case that does not work out) anyone wishing
      to take it for credit should get in touch, so that we can arrange
      reading course credit.

------------------------------

End of AIList Digest
********************

∂26-Mar-84  1241	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #35
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 26 Mar 84  12:39:01 PST
Date: Mon 26 Mar 1984 11:08-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #35
To: AIList@SRI-AI


AIList Digest            Monday, 26 Mar 1984       Volume 2 : Issue 35

Today's Topics:
  AI Tools - Nature of AI Computing,
  Logic Programming - Inferential and Deductive Processing,
  Expert Systems - VLSI Knowledge Acquisition & Explanatory Capability,
  Mathematics - Four Color Theorem,
  System Identification - Characterizing Automata From I/O Pairs,
  Seminars - Compositional Temporal Logic & Logic Programming,
  Course - Expert Systems for CAD/CAT
----------------------------------------------------------------------

Date: Thu 22 Mar 84 15:08:19-EST
From: Sal Stolfo <sal@COLUMBIA-20.ARPA>
Subject: A call for discussion

             "The Numericists Meet the Symbolicists and Ask Why?"

With   the  recent  interest  in  Fifth  Generation  Computing  and  Artificial
Intelligence, many scientists with backgrounds in other  disparate  fields  are
beginning to study symbolic computation in a serious manner.

The  ``parallel  architectures  community'' has mostly been interested in novel
computer architectures to accelerate numeric computation  (usually  represented
as  Fortran  codes).    Similarly,  the ``data base machine community'' has been
interested in more conventional data processing (for example, large-scale  data
bases).   Now that the interests of these communities and others are focusing on
Artificial Intelligence computing, a question that is often asked is ``What are
the fundamental characteristics of AI computation that distinguish it from more
conventional computation''?  Indeed, are there really any differences at all?

These questions have no simple answers; they can be viewed from many  different
perspectives.    This  note  is  a  solicitation of the AI community for cogent
discussion of this issue.  We hope that all facets will be addressed including:

   - Differences between the kinds of problems encountered in AI and those
     considered more conventional.   (A   simple   answer   in   terms  of
     ``ill-defined'' and ``well-defined'' problems is viewed as a copout.)

   - Methodological differences  between  AI  computing  and  conventional
     computing.

   - Computer resource  requirements  and  programming  environments  with
     technical  substantiations  of  the differences rather than aesthetic
     preferences.

I expect to collect responses from the AI community and produce a final  report
which will be made available to any interested parties.

Thank you in advance.

Salvatore  J. Stolfo
Assistant  Professor
Computer Science Department
Columbia University

------------------------------

Date: 24 Mar 1984 00:11:35-PST
From: hildum%brandeis.csnet@csnet-relay.arpa
Subject: Inferential and Deductive Processing using Lisp and Prolog

(This message has been sent to both the AIList and the Prolog Digest)

I am looking for some information concerning the following:

(1) The use of Prolog and Lisp for deductive and inferential processing.
(2) Standard methods of handling deductive and inferential processing
    in Prolog and Lisp.
(3) Any languages, similar to or different from Prolog and Lisp, that have
    been used for deductive and inferential processing.
(4) What types of inferential and deductive processing cannot be done using
    Prolog ?  Using Lisp ?

Suggestions of applicable articles and research projects, as well as personal
observations would be greatly appreciated.  I am attempting to get a feel for
what kinds of things can and cannot be done to handle deductive and
inferential processing with existing Logic/AI programming languages.

Responses (ASAP) will be greatly appreciated.  Please reply to:

        hildum%brandeis.csnet@csnet-relay.csnet

I will gladly post a summary to the net if there is enough interest in
the subject.

        Thank you,

                David W. Hildum

                US Mail:        Box 1417
                                Brandeis University
                                Waltham, Massachusetts
                                02254

------------------------------

Date: 21 Mar 84 20:44:16-PST (Wed)
From: decvax!cwruecmp!sundar @ Ucb-Vax
Subject: Expert Systems in VLSI
Article-I.D.: cwruecmp.1113

This is only a request.  Has any one documented the knowledge
acquisition techniques used for this application domain?
I conducted a few interviews with local VLSI experts and the
difficulty I had was the formulation of appropriate questions
to elicit maximum response.  Any references would be appreciated.
Thanks.

Sundar Iyengar
USENET:         decvax!cwruecmp!sundar
CSNET:          sundar@Case
ARPANET:        sundar.Case@Rand-Relay

Posted: 11:43:29 pm, Wednesday March 21, 1984.

------------------------------

Date: 21 Mar 84 9:24:14-PST (Wed)
From: harpo!ulysses!allegra!princeton!eosp1!robison @ Ucb-Vax
Subject: Re: "Explaining" expert system algorithms.
Article-I.D.: eosp1.715

References:

I think it is hopeless to demand that the algorithms
instanced by expert systems be well understood so they can be questioned.
Even when the algorithms can easily be printed, they will be hard for
any human being to comprehend, except in the most trivial systems.

Expert systems attempt to imitate one kind of human thinking, in which
what we call "judgment" plays a part.  I expect that as expert systems
become more sophisticated, they will become harder and harder to judge,
just as the think-work of human beings is hard to judge for quality.

True "artificial intelligence" systems will have these problems in spades.

Please note that we have already reached the point where ordinary
procedural software is hard to judge.  It's quite common to spend 18
months shaking down a moderate sized piece of software.
                                        - Toby Robison
                                        allegra!eosp1!robison
                                        decvax!ittvax!eosp1!robison
                                        princeton!eosp1!robison

------------------------------

Date: 23 Mar 84 12:52:55-PST (Fri)
From: ihnp4!houxm!hou2d!wbp @ Ucb-Vax
Subject: color junk
Article-I.D.: hou2d.225

Re: color junk
From: Wayne Pineault <hou2d!wbp>

        From a homology point of view a sphere and a plane are not the
same, but from the view of coloring they are the same, since you pick the
point at infinity on the plane to map inside one of the regions on the
sphere.
        Also, for a long time a closed coloring formula has been known for a
sphere with any number of donut holes and mobius strips attached, as long
as it was not a sphere!  If you just plugged in 0 for a sphere the answer
came out to 4, but the argument did not work for this case!!!
        There is a Springer-Verlag series of mathematics, and I saw this
formula there, but I don't remember it.
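
[The closed formula in question is presumably the Heawood bound: an
orientable surface with g handles needs at most floor((7 + sqrt(1 + 48g))/2)
colors, proved for every g >= 1 by Ringel and Youngs (the general statement,
covering Mobius strips too, is phrased in terms of the Euler characteristic).
As noted above, plugging in g = 0 formally yields 4, but the proof does not
cover the sphere.  A quick sketch of the orientable case, assuming the
standard statement:

```python
import math

def heawood(genus):
    """Heawood bound for an orientable surface with `genus` handles.
    Proved tight for genus >= 1 (Ringel-Youngs); genus = 0 formally
    yields 4, but the argument does not cover the sphere."""
    return (7 + math.isqrt(1 + 48 * genus)) // 2
```

heawood(0) gives 4 (the four-color case the argument misses) and
heawood(1) gives 7, the familiar seven colors needed on a torus.  -- KIL]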

                                        Wayne Pineault

------------------------------

Date: 23 Mar 84 17:14:17-PST (Fri)
From: decvax!mcnc!ncsu!uvacs!erh @ Ucb-Vax
Subject: RE characterizing automata from I/O pairs
Article-I.D.: uvacs.1206

        The question is certainly interesting and very natural.  Not
surprisingly, it has been investigated in depth.  As a matter of fact,
the Moore theory of experiments (which is precisely the theory of
"characterizing automata from I/O pairs") was one of the subjects
investigated in the 50's which gave impetus to introduction and study
of regular languages.

        A nice little book by J.H. Conway ("Regular Algebra and Finite
Machines", Chapman and Hall, 1971) has a chapter-long summary of
results including an answer to your question about the bound on the
length of the characterizing experiment.  A few paraphrases:

Def.  An exact (n,m,p) machine is a Moore machine with n states, m input
symbols, and p output symbols, each output symbol being actually emitted
in some state.  (Take m = p = 2 if you want arguments in terms of bits.)

Theorem.  Two distinguishable states of an exact (n,m,p) machine can be
distinguished by some word of length at most n-p.

(That is, for any two distinguishable states p, q, there exists a word w
of length <= n-p such that the output corresponding to w will differ
depending on whether it is started in p or q.)

Theorem. If S is a set of at most s states of an exact (n,m,p) machine,
and some two states in S are distinguishable, then there exists a word
of length at most max( 0, n-p-s+2 ) which distinguishes some two states
in S.  Moreover, this bound is best possible.

Theorem. If we are (explicitly) given an exact (n,m,p) machine whose
states are all distinguishable, and told that it is initially in one
of a set S of at most s states, then we can specify an experiment of
length at most (t-1)(n-p-(t-2)/2) where t = min( s, n-p+2 ), after
application of which the resulting state will be known (so you find
your position in the machine in case you were "lost in S").  Moreover,
the bound is best possible.

        In the above an "experiment of length k" means an algorithm
which feeds input symbols depending on the observed outputs; k is
the number of symbols fed in.

        The following answers your question.  It is a paraphrase
of Theorems 9 & 11, pp. 12-14 of Conway's text (the original result
is due to Moore, improved slightly by Conway):

Theorem.  If you know that M is a strongly connected exact (n,m,p)
machine with pairwise distinguishable states, then there is an experiment
of length at most

                        8 m^(2n-1) n^2 log_2(n)

which tells you the structure of the machine.
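
[For readers who want to experiment, the notion of a distinguishing word
used in the theorems above is easy to state in code.  A minimal sketch, by
breadth-first search over state pairs -- the encodings are my own, not
Conway's construction:

```python
from collections import deque

def distinguishing_word(delta, out, p, q):
    """Shortest input word whose output differs depending on whether the
    Moore machine starts in state p or state q.
    delta[state][symbol] -> next state; out[state] -> output symbol.
    Returns None if p and q are indistinguishable."""
    if out[p] != out[q]:
        return ""                      # distinguished by the empty word
    seen = {(p, q)}
    queue = deque([(p, q, "")])
    while queue:
        a, b, word = queue.popleft()
        for sym in delta[a]:
            na, nb = delta[a][sym], delta[b][sym]
            if out[na] != out[nb]:
                return word + sym
            if (na, nb) not in seen and (nb, na) not in seen:
                seen.add((na, nb))
                queue.append((na, nb, word + sym))
    return None
```

Since BFS explores words in order of length, the word returned is shortest,
so its length can be checked against the n-p bound of the first theorem.
-- KIL]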

Ed Howorka (erh@uvacs on CSNET)

------------------------------

Date: 22 Mar 84  1600 PST
From: Diana Hall <DFH@SU-AI.ARPA>
Subject: Compositional Temporal Logic

         [Forwarded from the SRI CSLI bboard by Laws@SRI-AI.]

                HOW COMPOSITIONAL CAN TEMPORAL LOGIC BE?

                       Speaker:  Prof. Amir Pnueli
                        Weizmann Institute, Israel

                       Tuesday, March 27, 2:30 p.m.
                       Room 352 Margaret Jacks Hall

Abstract:  A compositional proof system based on temporal logic is presented.
The system supports systematic development of concurrent systems by
specifying modules and then proving a specification for their combination.
The specifications of modules are expressed by temporal logic.

------------------------------

Date: Fri 23 Mar 84 18:28:23-EST
From: Jan <komorowski@MIT-OZ>
Subject: Logic Programming Seminars at Harvard

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

                                SEMINAR

                        LOGIC PROGRAM DERIVATION

                             Danny Krizanc
                          Harvard University

                        Tuesday, April 3, 1984

                                4 PM
                              Aiken G23

Danny will present a work he has done in my course Technology of Logic
Programming on program transformation. The method of Burstall and
Darlington is translated into resolution-based theorem proving and
applied to logic programs. The method is subsequently extended beyond
the limits of the functional approach.

------------------------------


Date: Fri 23 Mar 84 18:28:23-EST
From: Jan <komorowski@MIT-OZ>
Subject: Logic Programming Seminars at Harvard

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

                                COLLOQUIUM

                APPLICATION OF PROLOG TO GENERATION OF TEST DATA
                        FROM ALGEBRAIC SPECIFICATIONS

                        Prof. Marie-Claude Gaudel
                        Universite de Paris-Sud

                        Monday, April 9, 1984
                                4 PM
                        Aiken Lecture Hall
                        Tea in Pierce 213 at 3:30

ABSTRACT: Functional testing or "black-box testing" has been recognized
for a long time as an important aspect of software validation. With
the emergence of formal specification methods it becomes possible to
found functional testing on a rigorous basis. This lecture presents a
method of generating sets of test data from algebraic specifications.
The method has been implemented using Prolog. It turns out that Prolog
is a very well-suited tool for generating sets of test data in this context.

Host : Professor Henryk Jan Komorowski

------------------------------

Date: Thu, 22 Mar 84 17:52:39 PST
From: Tulin Mangir <tulin@UCLA-CS.ARPA>
Subject: Course in Expert Systems for CAD/CAT

UCLA School of Engineering, Computer Science Department
is offering a new course, in Spring Quarter, in the
area of applications of Expert Systems to CAD and CAT in general, and
to VLSI and WSI design and testing specifically.

A Brief description of the topics to be covered follows.
Some of the projects in this course are extensions of the projects
that are started in the "Testing and Design for Testability for VLSI"
class that we are offering once a year.  I also teach that course.

I welcome any questions, comments, and suggestions and promise to
give a state of the course(!) report on line for those who are
interested.


Tulin E. Mangir
<cs.tulin@UCLA-CS>
(213) 825-2692
      825-1322 (secretary)

                -------------------------------------
UCLA COMPUTER SCIENCE DEPARTMENT

Spring 84

New Course on Expert Systems

CS259 Section 4

EXPERT SYSTEMS WITH APPLICATIONS TO CAD AND CAT

Instructor: Professor Tulin E. Mangir

Time: MW 4-6pm (TBA)

FIRST MEETING IN 5252 BOELTER HALL, W 4-6PM 4/4/84.


This course is open to all graduate students who are interested
in the development and application of expert systems.
Students are encouraged to develop projects using the
tools and environments available at UCLA or otherwise.
Instructor's special interest is developing expert systems for design
and testability analysis of VLSI and WSI.

For any questions please contact instructor 825-2692, or 3532L Boelter Hall.

Course Outline:

 o Introduction
 o Organization of Expert Systems
 o Representation of Digital structure and behaviour
 o Requirements for data base, rule base, knowledge base design and interfaces
   between them; control structure
 o Languages, logic programming (PROLOG), frameworks
 o Application domains for expert systems in CAD, CAT and automated processing
 o Example systems under development-- DRC, 2-D Planner, Hitest, Excat, others.
 o Limitations
 o Future directions

------------------------------

End of AIList Digest
********************

∂29-Mar-84  0017	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #36
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 29 Mar 84  00:17:09 PST
Date: Wed 28 Mar 1984 23:19-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #36
To: AIList@SRI-AI


AIList Digest           Thursday, 29 Mar 1984      Volume 2 : Issue 36

Today's Topics:
  AI in Criminology - Request,
  AI Reports - NASA Memoranda Request,
  Expert Systems - Software Development Request,
  Planning - JPL Planner Request,
  AI Environments - Micro O/S Request,
  Expert Systems - Explanatory Capability & Article & OPS5 Examples,
  Seminars - Machine Learning & Incomplete Databases & Control of Reasoning
----------------------------------------------------------------------

Date: Mon, 26 Mar 84 21:00:38 pst
From: jacobs%ucbkim@Berkeley (Paul Jacobs)
Subject: information wanted on AI in criminology

        I have been asked for information concerning AI applications in
criminology, particularly in locating and keeping track of criminals.  I
am aware of various uses of computers in analyzing fingerprints and other
data; however, I have not heard of successful ``intelligent'' programs.

        I'd appreciate any information on this matter.

Thanks,

--paul

------------------------------

Date: 26 Mar 84 12:40:01-PST (Mon)
From: hplabs!hao!seismo!rochester!jss @ Ucb-Vax
Subject: NASA tech. memorandum
Article-I.D.: rocheste.5853

I am trying to get:
        NASA Technical Memorandum 85836, June  1983;
        NASA Technical Memorandum 85838, Sept. 1983;
        NASA Technical Memorandum 85839, Oct.  1983.

They are Volume I of their report entitled:
        An overview of Artificial Intelligence and Robotics

"This report is part of the NBS/NASA series of overviews on AI and
Robotics."  Any help in getting an official draft would be greatly
appreciated.  (Copies aren't bad either.)  A path to NASA, if one exists,
would also be appreciated.  Thanks in advance.


                        Jon S. Stumpf
                        U. of Rochester
                        {allegra|decvax|seismo}!rochester!jss

[I have sent Jon a copy of the NTIS ordering info that I printed in
AIList V1 #81 back in October.  This included the first of the reports
mentioned above; I am not sure about the others since the serial numbers
I have are for the NTIS version.  The Gevarter overviews I have read
seem to be reasonably good summaries of the major projects in vision
and expert systems.  -- KIL]

------------------------------

Date: 26 Mar 84 15:40:19-PST (Mon)
From: ihnp4!ihuxf!dunk @ Ucb-Vax
Subject: Expert Systems for Software Development?
Article-I.D.: ihuxf.2119

Anyone have references to papers describing the use of expert systems
in a software development environment (e.g. program synthesis,
programmer's consultant, debugging aid, etc.)?  Thanks much.
        Tom Duncan
        AT&T Bell Laboratories
        ihnp4!ihuxf!dunk

------------------------------

Date: Wed, 28 Mar 84 18:04:42 CDT
From: Mike Caplinger <mike@rice.ARPA>
Subject: JPL Planner

Can anybody give me any references to the Jet Propulsion Lab's
"autonomous space probe" project?  This system is supposed to be able
to schedule different observations in a limited time frame (like a
planetary flyby) based on priorities and feedback from previous results.
Is it really AI or just some kind of optimization hack?

                thanks,
                Mike

------------------------------

Date: Wed, 28 Mar 84 12:39:59 pst
From: Peter T. Young <youngp%cod@Nosc>
Subject: 32/16-bit O/S Information Request

We would like to obtain descriptions of/sources for the following
operating systems:
      RMX86
      CP/M (Z80 & 8085)
      CP/M-86
      MS-DOS (Z-DOS)
      UNIX
      VMS
      TOPS-20
that could be run on 32/16-bit or 32/32-bit CPU-based microcomputer
systems which are either already in production, or are scheduled for
production in the near future.  Our aiming-point is a system that will
run a reasonably useful version of LISP or PROLOG in a real-time
environment.

Could you provide us with some pointers for such information?  Any help
you might provide would prove extremely useful.  Thanks for considering
this request.
                               Peter T. Young
                               (Code 9411)
                               NOSC
                               San Diego, CA 92152
                               (619) 225-6686
                               <youngp@NOSC>

------------------------------

Date: 27 Mar 84 14:45:44 EST  (Tue)
From: Dana S. Nau <dsn%umcp-cs.csnet@csnet-relay.arpa>
Subject: expert system algorithms

        From: Toby Robison <eosp1!robison>

        I think it is hopeless to demand that the algorithms instanced by
        expert systems be well understood so they can be questioned.  Even
        when the algorithms can easily be printed, they will be hard for any
        human being to comprehend, except in the most trivial systems. ...

I disagree.  One of the reasons for separating an expert system's control
structure from the knowledge base is to allow for complex behavior with a
simple control algorithm.  For example, Mycin's control structure is only
about one typewritten page [1].  Jim Reggia and I at the Univ. of Maryland
are currently working on a considerably more complex expert system control
structure, but even it is not THAT hard to understand once one understands
the preliminary mathematical background [2].  We even have a proof of
correctness for the algorithm!
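
[As an illustration of how small such a control structure can be, here is a
toy backward chainer in the Mycin style -- no certainty factors, acyclic
rules assumed.  This is a generic sketch, not the Mycin or Maryland
algorithm itself:

```python
def backchain(goal, rules, facts):
    """Try to establish `goal` from a rule base.  `rules` is a list of
    (premises, conclusion) pairs; `facts` is a set of known atoms.
    A goal holds if it is a known fact, or if every premise of some
    rule concluding it can itself be established.  Assumes the rule
    base is acyclic."""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(backchain(p, rules, facts)
                                      for p in premises):
            facts.add(goal)            # cache the derived conclusion
            return True
    return False
```

The entire control strategy is the one function; all domain-specific
behavior lives in the rule base, which is Nau's point.  -- KIL]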

REFERENCES:

[1] Davis, Buchanan, and Shortliffe.  Production Rules as a Representation
for a Knowledge-Based Consultation Program.  ARTIFICIAL INTELLIGENCE 8
(1977), 15-45.

[2] Reggia, Nau, and Wang.  A Theory of Abductive Inference in Diagnostic
Expert Systems.  TR-1338, Computer Sci.  Dept., Univ.  of Maryland (Dec.
1983).  Submitted for Publication.

------------------------------

Date: Sun 25 Mar 84 22:40:03-PST
From: Edward Feigenbaum <FEIGENBAUM@SUMEX-AIM.ARPA>
Subject: SCIENCE, 23 Mar. 1984

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

The aforementioned issue of SCIENCE has a "feature story" on knowledge-
based systems in general and expert systems in particular (p1279-1282),
with various luminaries and their luminary programs mentioned. Pretty
good article.

Ed Feigenbaum

------------------------------

Date: 23 Mar 84 19:55:52-PST (Fri)
From: pur-ee!uiucdcs!parsec!ctvax!pozsvath @ Ucb-Vax
Subject: Re: Request for OPS5 examples
Article-I.D.: uiucdcs.6361

;;;                                             Peter Ozsvath
;;;
;;;                     Factorial Program in OPS-5
;;;              Simulates the stack in working memory
;;;

;;;  Stage 1. Fill the working memory with (fact 1) (fact 2)...(fact n)
;;;
(p fact0 (fact {<x> = 0}) --> (remove 1) (make factorial 1 1))

(p factn (fact {<n> > 0}) --> (make fact (compute <n> - 1)))

;;;  Negative number entered. Quit
(p factneg (fact {<n> < 0}) -->
    (write "Good-bye."))

;;;  Stage 2. Sweep out the unnecessary (fact k) and (factorial k (k-1)!)
;;;     and add a new (factorial (k+1) k!)
;;;
(p factorial (factorial <x> <y>) (fact <x>) -->
    (remove 2)
    (remove 1)
    (make factorial (compute <x> + 1) (compute <x> * <y>)))

;;;  When no more (fact k) statements are left, factorial k is found
;;;
(p factorial2 (factorial <x> <y>) -(fact <x>) -->
    (write (crlf))
    (write (compute <x> - 1))
    (write "!! =")
    (write <y>)
    (write (crlf))
    (make infinifact))

;;;  Called once at the beginning
;;;
(p pretty←fact (start) -->
    (write "Program to demonstrate the power, compactness, and robustness")
    (write (crlf))
    (write "of the Winning OPS 5. This program inputs numbers whose")
    (write (crlf))
    (write "factorials - lo and behold - it computes RECURSIVELY")
    (write (crlf))
    (write (crlf))
    (remove 1)
    (make infinifact))

;;;  Circular "loop" that reads in numbers
;;;
(p infinifact (infinifact) -->
    (remove 1)
    (write "Enter a positive number to compute its factorial,")
    (write (crlf))
    (write "or a negative one to quit")
    (write (crlf))
    (bind <x> (accept))
    (make fact <x>))

;;;  Start
(start ((start)))

This program computes factorials in OPS-5.  Some things seem to be
rather difficult to do in OPS-5.  This same program could be
written in several lines of Lisp!
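
[For comparison, the recursion that the working-memory elements simulate
really is only a few lines in a conventional language (Python here for
concreteness rather than Lisp):

```python
def factorial(n):
    """The computation the OPS-5 rules above perform: each (fact k)
    working-memory element plays the role of one recursive call."""
    if n < 0:
        raise ValueError("negative input")
    return 1 if n == 0 else n * factorial(n - 1)
```

The contrast is the point of the posting: a production system must encode
the call stack explicitly as working-memory elements and sweep them out
again, where a procedural language gets the stack for free.  -- KIL]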

------------------------------

Date: 26 Mar 84 10:06:31 EST
From: Michael Sims  <MSIMS@RUTGERS.ARPA>
Subject: KELLER SPEAKING AT ML ON WED.

             [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

                      MACHINE LEARNING BROWN BAG SEMINAR

Speaker:   Richard Keller
Date:      Wednesday, March 28, 1984 - 12:00-1:30
Location:  Hill Center, Room 254


  Placing Learning in Context:

        SOURCES OF CONTEXTUAL KNOWLEDGE FOR CONCEPT LEARNING

 (Alternatively titled: The Mysterious Origins of LEX's Learning Goal)


In this talk, I  will describe a new  source of knowledge for  concept
learning: knowledge of the  learning context.  Most previous  research
in machine learning has failed to recognize contextual knowledge as  a
distinct and useful form of learning knowledge.  Contextual  knowledge
includes, among other  things, knowledge of  the purpose for  learning
and knowledge of the performance task to be improved by learning.  The
addition of this meta-knowledge, which describes the learning process,
provides a broader perspective on learning than has been available  to
most previous  learning systems.

In general, learning  systems that omit  contextual knowledge have  an
insufficient vantage point from which to supervise learning activity.
Both AM [Lenat-79] and LEX  [Mitchell-83], for instance, were  limited
by an  inability to  adapt  to changes  in their  respective  learning
environments, even  when  the  changes  were a  result  of  their  own
learning behavior.  This  limitation is  not particularly  surprising;
neither of these systems contained  an explicit representation of  the
task they were  performing (specifically,  mathematical discovery  and
integral calculus  problem  solving,  respectively).   Nor  did  these
systems contain any knowledge about the relationship between  learning
and the  task  performance.   Before  it is  reasonable  to  expect  a
learning system to  adapt to changes  in the task  environment, it  is
necessary  to  represent  task  knowledge  and  to  incorporate   this
knowledge into learning procedures.   My research, therefore,  focuses
on the representation  and use  of contextual  knowledge --  including
task knowledge -- as guidance for concept learning.

In this talk, I will  describe a learning framework that  incorporates
the use  of  contextual  knowledge.  In  addition,  I  will  introduce
various alternative methods of representing contextual knowledge,  and
sketch the design of some learning algorithms that utilize  contextual
knowledge.  Examples will be   drawn, in large part,  from my work  on
incorporating contextual knowledge within the LEX learning system.

------------------------------

Date: 27 Mar 84 14:22:39 EST
From: DSMITH@RUTGERS.ARPA
Subject: Rutgers Computer Science Colloquium

              [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

                        Department of Computer Science

                                  COLLOQUIUM


SPEAKER:       Dr. Witold Lipski, Jr.
               Polish Academy of Sciences

TITLE:         LOGICAL PROBLEMS RELATED TO INCOMPLETE INFORMATION IN DATABASES


A  general methodology  for  modeling  incomplete  information in databases is
described, and then illustrated in the case of  three  concrete  models  of  a
database.   We emphasize the distinction between two different interpretations
of a query language --  the  external  interpretation,  which  refers  queries
directly  to  the  real world  modeled  by  the  database;  and  the  internal
interpretation, which refers queries  to  the  information  about  this  world
available  in  the  database.  Our methodology stresses the need for a precise
definition of the semantics of the query language by means of a non-procedural
specification,   and   for   a   correct   procedural implementation  of  this
specification.  Various logical -- and, at  times, combinatorial  --  problems
connected  with  information  incompleteness  are discussed.   Related work is
surveyed and an extensive bibliography is included.

DATE:           Friday, March 30, 1984
TIME:           2:50 p.m.
PLACE:          Room 705 - Hill Center
                                Coffee at 2:30

------------------------------

Date: 28 Mar 1984  09:48 EST (Wed)
From: Crisse Ciro <CRISSE%MIT-OZ@MIT-MC.ARPA>
Subject: Genesereth Talks on Control of Reasoning

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


             Procedural Hints in the Control of Reasoning

                        Michael R. Genesereth
                     Computer Science Department
                         Stanford University

                      DATE: Thursday, March 29
                      TIME: 4:00 PM
                     PLACE: NE43 8th Floor Playroom

          [This talk is also being given at IBM San Jose on
          Friday, April 6, 10:00.  -- KIL]


One of the key problems in automated reasoning is control of
combinatorics.  Whether one works forward from given premises or
backward from desired conclusions, it is usually necessary to consider
many inference paths before one succeeds in deriving useful results.
In the absence of advance knowledge as to which path or paths are
likely to succeed, search is the only alternative.

In some situations, however, advance knowledge is available in the
form of procedural hints like those found in math texts.  Such hints
differ from facts about the subject of reasoning in that they are
prescriptive rather than descriptive; they say what a reasoner OUGHT
to do rather than what is TRUE.

This talk describes a language for expressing hints to control the
process of reasoning and provides an appropriate semantic account in
the form of an interpreter that behaves in accordance with the hints.
The work is relevant to understanding the phenomenon of introspection
and is of practical value in the construction of expert systems.


HOST: Prof. Randy Davis

------------------------------

End of AIList Digest
********************

∂29-Mar-84  1401	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #37
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 29 Mar 84  14:01:31 PST
Date: Thu 29 Mar 1984 10:13-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #37
To: AIList@SRI-AI


AIList Digest            Friday, 30 Mar 1984       Volume 2 : Issue 37

Today's Topics:
  Expert Systems - Judicial Expert Systems & Radiography,
  Linguistics - Use of 'And',
  Bibliography - Fuzzy Set Papers,
  Proposals - AI Teaching & Hierarchical System Research,
  Seminars - Objects and Parts & Pattern Recognition & Databases
----------------------------------------------------------------------

Date: Thu, 29 Mar 84 07:49 PST
From: DSchmitz.es@Xerox.ARPA
Subject: Judicial Expert Systems?

I'd like to know if there's any work going on out there toward the
development of expert systems (or other AI-type systems) designed to
assist in making legal decisions.  Such systems as I have in mind would
be used by judges, lawyers, legal theorists, perhaps even international
courts.

Please reply to DSchmitz.es@PARC-MAXC

Thank you

[I believe there has been work at Stanford and at Yale.  I also remember
reading some newspaper account of a man who wishes to market an
automated jury: each side types in its legal precedents and
the computer decides which side wins.  AIList carried a seminar
notice about the Stanford work last year.  Can anyone give more specific
information?  -- KIL]

------------------------------

Date: 26 Mar 84 11:28:10-PST (Mon)
From: decvax!linus!vaxine!debl @ Ucb-Vax
Subject: Help on Radiography Discussion

I have been told that a discussion of expert systems to read radiographs
occurred on the net recently.  Any information or references from this
discussion would be appreciated.  Thank you.

                David Lees

[There was a message by Dr. Tsotsos about his group's work at U.
of Toronto on the ALVEN system for interpreting heart images.
You might also inquire on Vision-List (Kahn@UCLA-CS); it has not
discussed this topic, but you might get a discussion started.
Dr. Jack Sklansky and associates have been developing systems
to find tumors in chest radiographs; they might be considered
"expert systems" in the sense that their performance is very
good.  Chris Brown, Dana Ballard, and others at the U. of Rochester
have been using hypothesize-and-test and other AI techniques in the
analysis of chest radiographs and ultrasound heart images.  -- KIL]

------------------------------

Date: 24 Mar 84 2:49:00-PST (Sat)
From: decvax!cca!ima!inmet!andrew @ Ucb-Vax
Subject: Re: Use of 'and'
Article-I.D.: inmet.1150

I haven't heard of that one, but there was an article recently (in
Datamation?) about a natural language processing system which
repeatedly gave no results when asked for "all customers in Ohio
and Indiana".  Of course, no customer can be in both states
at once; the question should have been phrased as ".. Ohio *or*
Indiana".  When this was pointed out, the person using the
program commented something to the effect of "Don't tell *me*
how to think!"
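
[The failure is easy to reproduce: read literally, the English "and"
becomes a per-record conjunction that no single record can satisfy.  A
sketch, with made-up customer data:

```python
customers = [("Acme", "Ohio"), ("Bentley", "Indiana"), ("Corex", "Iowa")]

# Literal Boolean reading of "all customers in Ohio and Indiana":
# no one customer's state equals both, so the result is always empty.
literal = [name for name, state in customers
           if state == "Ohio" and state == "Indiana"]

# The reading the user intended: state is Ohio *or* Indiana.
intended = [name for name, state in customers
            if state in ("Ohio", "Indiana")]
```

Here `literal` comes back empty while `intended` yields Acme and Bentley;
the natural-language "and" conjoins the two result sets, not the two
predicates.  -- KIL]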

------------------------------

Date: Tue, 27 Mar 1984 11:13:27 EST
From: David M. Axler <AXLER%upenn-1100.csnet@csnet-relay.arpa>
Subject: Fuzzy Set Papers

     Some very interesting early work on the applications of fuzzy set
theory to language behavior was done at the Language Behavior Research
Laboratory out at U. Cal - Berkeley.  Much of this was later available
via the Lab's series of Working Papers and Monographs.  Of interest to
AI researchers concerned w/language processing and/or fuzzy sets are:

Monograph #3, "Natural Information Processing Rules:  Formal Theory and
  Applications to Ethnography", William H. Geoghegan, 2/73.

Working Paper #43, "Basic Objects in Natural Categories", Eleanor Rosch,
Carolyn B. Mervis, Wayne Gray, David Johnson, and Penny Boyes-Braem, 1975.

Working Paper #44, "Color Categories as Fuzzy Sets", Paul Kay and Chad
McDaniel, 1975.

My list of the available papers is severely out of date, and I strongly
suspect that there's a fair amount of later work also available.  Those
interested should write to the lab, as follows:

University of California
Language Behavior Research Laboratory
2220 Piedmont Avenue
Berkeley, CA 94720

(If anyone out at Berkeley would like to fill the list in on more recent
and relevant work from the lab, great...)

  --Dave Axler

------------------------------

Date: 26 Mar 84 12:12:42-PST (Mon)
From: harpo!ulysses!burl!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: Teaching Proposal
Article-I.D.: psuvax.919

My main interest is Artificial Intelligence, although I define that rather
broadly: to me, AI is the field which unifies all others.  Philosophy,
psychology, mathematics, compilers, languages (both programming and natural),
"systems" (both of the CS and of the EE variety), data structures, machine
architecture, discrete representations, continuous representations, cognitive
science, art, history, music, business administration, political science,
etc, etc, etc, are ALL subfields of AI to me.  They all represent specific
domains in which intelligent activity is studied and/or mechanized.

I'm sure many agree with me that what the American educational system
needs is a program integrating computer "literacy" with critical thinking
abilities in many other domains.  I do not mean "literacy" in the "Oh yes,
I can run statistical packages" sense.  I mean an approach to critical
thinking built on the foundations of the computational paradigm -- the view
that knowledge and understanding can be represented explicitly, and that one
can discover procedures for manipulating those representations in order to
solve real problems.  Such a program could form the backbone of a very
stimulating university-wide undergraduate "core" program integrating not
only mathematics and the physical sciences but communications skills and
all the "liberal arts" as well.

I visualize such a program as presenting a coherent and integrated approach
to the cognitive skills most important for healthy and productive functioning
in the modern world.  It would present the major principles of cognition as
seen through the organizing principles of information processing.

This is more than an approach to teaching.  To me, it is also the seed of new
approaches to machine learning and cognitive modeling.  It uses undergraduate
education as an experimental testbed for research in AI, psychology,
linguistics, and social systems.  That "cutting edge" fervor alone should
make it very interesting to students.

Bob Giansiracusa
Computer Science Dept, Penn State U, 814-865-9507 (ofc), 814-234-4375 (home)
Arpa:   bobgian%PSUVAX1.BITNET@Berkeley
UUCP:   bobgian@psuvax.UUCP            -or-    ..!allegra!psuvax!bobgian
Bitnet: bobgian@PSUVAX1.BITNET         CSnet:  bobgian@penn-state.CSNET
USmail: PO Box 10164, Calder Square Branch, State College, PA 16805

------------------------------

Date: 26 Mar 84 11:38:27-PST (Mon)
From: harpo!ulysses!burl!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: Proposed Research Description
Article-I.D.: psuvax.918

ADAPTIVE COGNITIVE MODEL FORMATION

The goal of this work is the automatic construction of models which can
predict and characterize the behavior of dynamical systems at multiple
levels of abstraction.

Numeric models used in simulation studies can PREDICT system behavior
but cannot EXPLAIN those predictions.  Traditional expert systems can
explain certain predictions, but their accuracy is usually limited to
qualitative ("symbolic") statements.  This research effort attempts to
couple the explanatory power of symbolic representations with the
precision and testability of numeric models.

Additionally, the computational burden implicit in the use of numeric
simulation models rapidly becomes astronomical when accurate performance
is needed over large domains (fine sampling density).

The solution my work explores consists of developing AUTOMATICALLY
a hierarchical sequence of SYMBOLIC models which convey QUALITATIVE
information of the sort that a human analyst generates when interpreting
numeric simulations.  These symbolic models portray system behavior at
multiple levels of abstraction, allowing symbolic simulation and inference
procedures to optimize the "run time" versus "accuracy" tradeoff.

I profess the philosophical bias that the study of learning and modeling
mechanisms can proceed productively in a relatively domain-independent
manner.  Obviously, domain-specific knowledge will speed the solution search
process.  Such constraints can be regarded as "seeds" for search in a process
whose algorithm is largely domain-independent.  Anecdotal support for this
hypothesis comes from the observation that HUMANS can become expert at theory
and model formation in a wide variety of different domains.
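To give a concrete flavor of the numeric-to-symbolic abstraction described above, here is a small sketch of my own (function names and details are illustrative only, not taken from the project): a numeric trajectory is abstracted first into qualitative change symbols, then into a coarser run-length level.

```python
import math

# Toy sketch: abstract a numeric trajectory into qualitative symbols,
# the kind of summary a human analyst might give of a simulation run.
# All names and choices here are illustrative, not from the project.

def qualitative(trajectory, tolerance=1e-6):
    """Map successive numeric differences to the symbols +, -, 0."""
    symbols = []
    for prev, curr in zip(trajectory, trajectory[1:]):
        delta = curr - prev
        if delta > tolerance:
            symbols.append('+')
        elif delta < -tolerance:
            symbols.append('-')
        else:
            symbols.append('0')
    return symbols

def compress(symbols):
    """Next abstraction level: collapse runs into (symbol, length) pairs."""
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1] = (s, runs[-1][1] + 1)
        else:
            runs.append((s, 1))
    return runs

# A damped oscillation, viewed qualitatively at two levels of abstraction.
xs = [math.exp(-t / 10) * math.cos(t) for t in [i * 0.5 for i in range(20)]]
print(compress(qualitative(xs)))
```

Each level trades precision for brevity, which is exactly the "run time" versus "accuracy" tradeoff a symbolic simulator could exploit.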

Bob Giansiracusa
Computer Science Dept, Penn State U, 814-865-9507 (ofc), 814-234-4375 (home)
Arpa:   bobgian%PSUVAX1.BITNET@Berkeley
UUCP:   bobgian@psuvax.UUCP            -or-    ..!allegra!psuvax!bobgian
Bitnet: bobgian@PSUVAX1.BITNET         CSnet:  bobgian@penn-state.CSNET
USmail: PO Box 10164, Calder Square Branch, State College, PA 16805

------------------------------

Date: Wed, 28 Mar 84 10:09:15 pst
From: chertok%ucbkim@Berkeley (Paula Chertok)
Subject: UCB Cognitive Science Seminar--April 3

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

             BERKELEY COGNITIVE SCIENCE PROGRAM

                        Spring 1984

            IDS 237B - Cognitive Science Seminar

        Time:         Tuesday, April 3, 1984, 11-12:30pm
        Location:     240 Bechtel

              OBJECTS, PARTS  AND  CATEGORIES
        Barbara Tversky, Dept. of Psychology, Stanford

Many psychological, linguistic and anthropological measures converge
to a preferred level of reference, or BASIC LEVEL, for common
categories; for example, TABLE, in lieu of FURNITURE or KITCHEN TABLE.
Here we demonstrate that knowledge of categories at that level (and
only that level) of abstraction is dominated by knowledge of parts.
Basic level categories are perceived to share parts and to differ from
one another on the basis of other features.  We argue that knowledge
of part configuration underlies the convergence of perceptual,
behavioral and linguistic measures because part configuration plays a
large role in both appearance and function.  Basic level categories
are especially informative because structure is linked to function via
parts at this level.

*****  Followed by a lunchbag discussion with speaker  *****
***  in the IHL Library (Second Floor, Bldg. T-4) from 12:30-2  ***

------------------------------

Date: 28 Mar 1984 09:28:05-PST (Wednesday)
From: Guy M. Lohman <LOHMAN%ibm-sj.csnet@csnet-relay.arpa>
Reply-to: IBM-SJ Calendar <CALENDAR.IBM-SJ@csnet-relay.arpa>
Subject: IBM San Jose Research Laboratory calendar of Computer
         Science seminars 2-6 April 84

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

                      IBM San Jose Research Lab
                           5600 Cottle Road
                         San Jose, CA 95193

  Thurs., April 5 Computer Science Colloquium
  3:00 P.M.   MINIMUM DESCRIPTION LENGTH PRINCIPLE IN MODELING
  Auditorium  Traditionally, statistical estimation and modeling
            involve, besides certain well-established procedures
            such as the celebrated maximum likelihood technique,
            a substantial amount of judgment.  The latter is
            typically needed in deciding upon the right model
            complexity.  In this talk we present a recently
            developed principle for modeling and statistical
            inference, which to a considerable extent allows
            reduction of the judgment portion in estimation.
            This so-called MDL-principle is based on a purely
            information theoretic idea.  It selects that model in
            a parametric class which permits the shortest coding
            of the data.  The coding, of which we need only the
            length in terms of, say, binary digits, must,
            however, be self-contained, in the sense that the
            description of the parameters themselves needed in
            the imagined encoding is included.  For this reason,
            the optimum model cannot possibly be very complex
            unless the data sample is very large.  A fundamental
            theorem gives an asymptotically valid formula for the
            shortest possible code length as well as for the
            optimum model complexity in a large class of models.
            For short samples no simple formula exists, but the
            optimum complexity can be estimated numerically and
            taken advantage of.  Finally, the principle is
            generalized so as to allow any measure for a model's
            performance such as its ability to predict.

            J. Rissanen, San Jose Research
            Host:  P. Mantey
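  The two-part code idea in the abstract above can be illustrated with a toy
  model-selection sketch.  This is my own construction, not Rissanen's actual
  formulation: it scores a constant model against a straight-line model using
  the asymptotic code length (data bits plus parameter bits) and picks the
  shorter description.

```python
import math

# Toy MDL illustration (my own sketch, not Rissanen's formulation):
# pick the model whose two-part code length -- data bits plus
# parameter bits -- is shortest.

def line_fit(xs, ys):
    """Closed-form least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
         sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def mdl_score(rss, n, k):
    """Asymptotic code length: (n/2) log2(RSS/n) + (k/2) log2 n bits."""
    return (n / 2) * math.log2(max(rss, 1e-12) / n) + (k / 2) * math.log2(n)

xs = list(range(1, 21))
ys = [2.0 + 0.5 * x + 0.05 * ((-1) ** x) for x in xs]  # near-linear data

# Model 1: constant (k = 1 parameter).  Model 2: straight line (k = 2).
mean_y = sum(ys) / len(ys)
rss_const = sum((y - mean_y) ** 2 for y in ys)
a, b = line_fit(xs, ys)
rss_line = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

scores = {'constant': mdl_score(rss_const, len(ys), 1),
          'line': mdl_score(rss_line, len(ys), 2)}
print(min(scores, key=scores.get))  # the line model should win here
```

  With a larger model class the extra (k/2) log2 n parameter bits penalize
  complexity, so the optimum model cannot be very complex unless the data
  sample is very large -- the point made in the abstract.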

  Fri., April 6 Computer Science Seminars
  Auditorium

            KNOWLEDGE AND DATABASES (11:15)

            We define a knowledge based approach to database
            problems.  Using a classification of applications from
            the enterprise to the system level we can give
            examples of the variety of knowledge which can be
            used.  Most of the examples are drawn from work at
            the KBMS Project in Stanford.  The objective of the
            presentation is to illustrate not only the power but also
            the high payoff of quite straightforward artificial
            intelligence applications in databases.
            Implementation choices will also be evaluated.
            G. Wiederhold, Stanford University
            Host:  J. Halpern

   ---------------------------------------------------------------

  Visitors, please arrive 15 mins. early.  IBM is located on U.S. 101
  7 miles south of Interstate 280.  Exit at Ford Road and follow the signs
  for Cottle Road.  The Research Laboratory is IBM Building 028.
  For more detailed directions, please phone the Research Lab receptionist
  at (408) 256-3028.  For further information on individual talks,
  please phone the host listed above.

  IBM San Jose Research mails out both the complete research calendar
  and a computer science subset calendar.  Send requests for inclusion
  in either mailing list to CALENDAR.IBM-SJ at RAND-RELAY.

------------------------------

End of AIList Digest
********************

∂29-Mar-84  2317	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #38
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 29 Mar 84  23:17:06 PST
Date: Thu 29 Mar 1984 22:13-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #38
To: AIList@SRI-AI


AIList Digest            Friday, 30 Mar 1984       Volume 2 : Issue 38

Today's Topics:
  Planning - JPL Planning System,
  Expert Systems - Legal Expert Systems,
  Architectures - Concurrency vs. Parallelism,
  News - New VLSI CAD Interest List,
  Seminars - Concurrency and Logic
----------------------------------------------------------------------

Date: 29 Mar 1984 1634-PST
From: WAXMAN@USC-ECL.ARPA
Subject: JPL Planning System


Len Friedman at JPL; Friedman@USC-ECLA did the work on the planning
system someone asked about.

[...]

MILT WAXMAN
WAXMAN@USC-ECLA

------------------------------

Date: Thu 29 Mar 84 15:18:10-PST
From: Anne Gardner <GARDNER@SU-SCORE.ARPA>
Subject: Legal expert systems

The seminar notice you referred to in Ailist was my oral.  I'm still
finishing off the dissertation, which is called "An Artificial Intelligence
Approach to Legal Reasoning."  For a sketch of what it's about, see
the 1983 AAAI proceedings, in which I had a paper.

--Anne Gardner

[Jeff Rosenschein@SU-SCORE also pointed out Anne's work. -- KIL]

------------------------------

Date: 29 Mar 84 17:36:35 EST
From: MCCARTY@RUTGERS.ARPA
Subject: judicial expert systems

I saw your query in the recent AILIST Digest.  Are you familiar with the
TAXMAN project at Rutgers?  Strictly speaking, this is not a "judicial expert
system," since our goal at the present time is not to build a large practical
system for use by lawyers.  Instead, we are exploring a number of theoretical
issues about the representation of legal rules and legal concepts, and the
process of legal reasoning and legal argumentation.  We believe that this
is an essential step for the construction of sophisticated expert systems
for lawyers in the future.  Some recent references:

    McCarty, L.T., "Permissions and Obligations," IJCAI-83, pp. 287-294.

    McCarty, L.T., and Sridharan, N.S., "The Representation of an Evolving
        System of Legal Concepts: II. Prototypes and Deformations," IJCAI-81,
        pp. 246-253.

    McCarty, L.T., and Sridharan, N.S., "A Computational Theory of Legal
        Argument," Technical Report LRP-TR-13, Laboratory for Computer
        Science Research, Rutgers University (1982).

    McCarty, L.T., "Intelligent Legal Information Systems:  Problems and
        Prospects," Rutgers Computer and Technology Law Journal, Vol. 9,
        No. 2, pp. 265-294 (1983).

This latter article articulates some of our ideas about practical systems,
and discusses several related projects by other researchers.


Thorne McCarty

------------------------------

Date: Thu, 29 Mar 84 17:52:33 PST
From: Philip Kahn <kahn@UCLA-CS.ARPA>
Subject: Non-Von's are not Von's

        The  ``parallel  architectures  community'' has mostly been interested
        in novel computer architectures to accelerate numeric computation
        (usually  represented as Fortran codes).

        What are the fundamental characteristics of AI computation that
        distinguish it from more conventional computation?
        Indeed, are there really any differences at all?


        I disagree with the claim that the "parallel architectures community"
has been trying to find a parallel Fortran.  Indeed, that is not possible,
since the best that could be attained would be *concurrent seriality*.
On the whole, I feel acceleration of numerical computation is not the primary
goal of those researching parallel architectures.  Rather, I feel the primary
thrust of this work has been to define inherently parallel structures and
their possible applications.

        Before we all espouse our personal viewpoints on this subject, I
feel it might be useful to agree upon our terms; they seem to vary from
person to person.  *Serial* means stepping through a computation one
operation at a time.  *Concurrent serial* means the simultaneous processing
of more than one serial computation.  *Parallel* means the local computation
of global properties by dedicated processors.

        Yes! There are differences between AI-motivated parallel
computation and conventional computation.  Conventional computation runs
on your standard store-bought Von Neumann machine that runs in a *serial*
fashion.  "Pseudo-conventional" machines are able to run *concurrent serial*
programs (e.g., Ada, Concurrent Pascal, etc.) utilizing several Von Neumann
processors.  *A truly parallel machine computes global properties based upon
local criteria.*  Each "criteria" is locally computed via a dedicated
processor.  The design of parallel machines is a tough problem.
A growing number of researchers feel that
*cellular automata* are the building block of all parallel structures.
The design of parallel machines using cellular automata involves the design
of local consistency conditions, oscillation behavior, equilibrium effects,
and a myriad of other non-conventional subjects.
Thus, I feel that there are in fact significant differences between parallelism
and "conventional" methods.
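As a minimal sketch of "local computation of global properties," here is a
one-dimensional cellular automaton of my own devising (purely illustrative,
not from any of the work mentioned above): each cell updates using only its
immediate neighborhood, yet global consensus blocks emerge.

```python
# Minimal cellular-automaton sketch: each cell sees only its two
# neighbors, yet globally coherent blocks emerge.  Illustrative only.

def step(cells):
    """One synchronous update: each cell takes the local majority of
    itself and its two neighbors (circular boundary)."""
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2
            else 0
            for i in range(n)]

cells = [1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1]
for _ in range(5):
    cells = step(cells)
print(cells)  # isolated cells get absorbed into locally consistent runs
```

No cell ever consults the whole array; the "local consistency conditions"
are the entire program, which is what makes such designs unlike conventional
serial or concurrent-serial code.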

------------------------------

Date: Thu, 29 Mar 84 12:54 PST
From: ANDERSON.ES@Xerox.ARPA
Reply-to: Anderson <Anderson.es@Xerox.ARPA>
Subject: NEW VLSI CAD INTEREST DL

This is to announce a new distribution list for the purpose of
discussing issues and exchanging ideas pertaining to VLSI Computer Aided
Design and Layout.

I hope for this DL to encompass a broad range of topics including but
not limited to: VLSI CAD/CAE/CAM hardware, software, layout, design,
techniques, programming, fracturing, PG, plotting, maintenance, vendors,
bugs, workstations, DRC, ERC, system management, peripheral equipment,
design verification, testing, benchmarking, archiving procedures, etc.
etc.

The distribution list itself resides on the Xerox Ethernet.  Ethernet
users can send messages to CADinterest↑.es.  Arpanet, Milnet, Usenet,
and other Internet users can send messages to CADinterest↑.es@PARC-MAXC.
[You will probably need to use quotes to get the special symbol through
your mailer: "CADinterest↑.es"@PARC-MAXC.  -- KIL]

[...]

Anyone on the Xerox Ethernet may add themselves using Maintain.
Arpanet, Milnet, Usenet, and other Internet users should send a request
to me (Anderson.es@PARC-MAXC) and I will add you to the DL.  I will also
add whole DL's if requested by the owner.

For now, there are no rules set for this DL.  Depending on how large it
gets, I hope to keep it as anything goes and see what happens for a
while.  I will wait a week before sending any messages to the DL in
order to allow people to be added to the DL.  If we get some good
informative discussions going, I will try to archive the responses or
maybe go to a digest format.  Thank you for your indulgence.

Craig Anderson
VLSI CAD Lab Supervisor
Xerox Corp.
El Segundo, Ca.
213-536-7299

------------------------------

Date: Wed 28 Mar 84 23:40:00-PST
From: Al Davis <ADavis at SRI-KL>
Subject: John Conery Seminar Friday the 30th

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

                               Seminar

                           Friday, March 30
                 10:00 a.m. in the AI Conference Room

           Fairchild AI Labs, 4001 Miranda Ave., Palo Alto

                                  by

                            John S. Conery
                         University of Oregon

Title:  The AND/OR Process Model for Parallel Interpretation of Logic
Programs.

Abstract:  In contrast to the traditional depth first sequential process
tree search used for logic program evaluation, this talk presents the AND/OR
process model.  It is a method for interpretation by a system of
asynchronous, independent processes that communicate only by messages.
The method makes it possible to exploit two distinct forms of
parallelism.  OR parallelism is obtained from evaluating
nondeterministic choices in parallel.  AND parallelism arises in the
execution of deterministic functions, such as matrix multiplication or
divide and conquer algorithms, that are inherently parallel.  The two
forms of parallelism can be exploited at the same time.  This means
AND parallelism can be applied to clauses that are composed of several
nondeterministic components, and it can recover from incorrect choices
in the solution of these components.  In addition to defining parallel
computations, the model provides a better-defined procedural semantics
for logic programs; that is, parallel interpreters based on this model
are able to generate answers to queries that cause standard
interpreters to go into an infinite loop.  The interpretation method
is intended to form the theoretical framework of a highly parallel non
von Neumann computer architecture; the talk concludes with a
discussion of issues involved in implementing the abstract interpreter
on a multiprocessor.
                                                al

Notes to visitors:  Arrive at Fairchild between 9:45 and 10:00 and go
to the guard and tell him you are there to visit Al Davis at X4385.
They will call me and someone will come down and get you and haul you
off to the AI conference room.

------------------------------

Date: 29 Mar 84  1157 PST
From: Carolyn Talcott <CLT@SU-AI.ARPA>
Subject: Seminar in foundations of mathematics (Professor Kreisel)

[Forwarded from the CSLI bboard by Laws@SRI-AI.]

Organizational meeting

TIME:   Tuesday, Apr. 3, 4:15 PM
PLACE:  Philosophy Dept. Room 92 (seminar room)
TOPIC:  Logic and parallel computation.

We will begin by examining some recent papers where
parallel computation is used in interesting ways
to obtain better algorithms.

The logical part will be to investigate how efficient
algorithms using parallel computation might be extracted
from infinite proof trees by applying transformations
that use only finite amounts of information.

At the first meeting these ideas will be explained in some more detail.
Ideas and suggestions will be welcome.

The seminar is scheduled to meet Tuesdays at 4:15, but can
be changed if there are conflicts.

------------------------------

End of AIList Digest
********************

∂31-Mar-84  1655	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #39
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 31 Mar 84  16:55:18 PST
Date: Sat 31 Mar 1984 15:56-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #39
To: AIList@SRI-AI


AIList Digest           Saturday, 31 Mar 1984      Volume 2 : Issue 39

Today's Topics:
  Applicative Simulation - Request,
  AI Tools - Request,
  Distributed Programs - Request,
  AI - Definition of AI Problems,
  Expert Systems - Explanatory Capability & Legal Systems & NY Times,
  Seminar - Theory of the Learnable,
  Course - System Description Languages
----------------------------------------------------------------------

Date: 2 Apr 84 12:50:24-EST (Mon)
From: ihnp4!houxm!hogpc!houti!ariel!vax135!ukc!srlm @ Ucb-Vax
Subject: applicative / functional simulation
Article-I.D.: ukc.4146

        I am looking for information on functional/applicative
        simulators of anything (communications protocols, to
        cite a good one) written in a PURELY applicative/functional
        style (no setq's please).

        If you know of anything about this (papers, programs, etc)
        I'd be very grateful if you could mail the pointers to me.


        Silvio Lemos Meira

        UUCP: ...!vax135!ukc!srlm
        computing laboratory
        university of kent at canterbury
        canterbury ct2 7nf uk
        Phone: +44 227 66822 extension 568

------------------------------

Date: Thu, 29 Mar 84 16:19 cst
From: Bruce Shriver <ShriverBD%usl.csnet@csnet-relay.arpa>
Subject: request for references

I would like to be referred to either one or two seminal papers or one or
two highly qualified persons in the following areas (if you send me the
name of an individual, the person's address and phone number would also
be greatly appreciated):

  1. Tutorial or survey papers on logic programming, particularly those
     dealing with several different language approaches.

  2. Reusable Software (please give references other than the Proceedings
     of the Workshop on Reusability in Programming, which was held in
     Newport, RI last September).

  3. Your favorite formal specification technique that can be applied to
     large scale, complex systems.  Examples demonstrating the completeness
     and consistency of a set of specifications for real systems.

  4. Integrated programming environments such as Cedar and Spice versus
     the Ada-style environments (APSEs, etc.).  Discussions on the
     relative merits of these two kinds of environments.

  5. Knowledge Based System Architectures (i.e., support of knowledge
     based systems from both the hardware and software point of view).
     Knowledge representation and its hardware/software implications.
     The relationship between "knowledge bases" and "data bases" and
     the relationship between knowledge base systems and data base
     systems.

Thank you very much for your time and consideration in this matter.  I
appreciate your help:     Bruce D. Shriver
                          Computer Science Department
                          University of Southwestern Louisiana
                          P. O. Box 44330
                          Lafayette, LA 70504
                          (318) 231-6606
                          shriver.usl@rand-relay

------------------------------

Date: 30 Mar 84 1208 EST (Friday)
From: Roli.Wendorf@CMU-CS-A
Subject: Distributed Programs

As part of my thesis, I am collecting information on the behavior of
distributed programs.  I define distributed programs as consisting of
multiple processes.  Thus, multiprocess programs running on uniprocessor
systems would qualify as well.

If you have written, or know of any distributed programs, I would like to
hear from you.  I am especially interested in hearing about distributed
versions of commonly used programs like editors, compilers, mail systems, etc.

Thanks in advance,
Roli G. Wendorf

------------------------------

Date: 30 Mar 84 12:04:36 EST  (Fri)
From: Dana S. Nau <dsn%umcp-cs.csnet@csnet-relay.arpa>
Subject: Re: A call for discussion

        From:  Sal Stolfo <sal@COLUMBIA-20.ARPA>

        This note is a solicitation of the AI community for cogent
        discussion ...  We hope that all facets will be addressed including:

        - Differences between the kinds of problems encountered in AI and
        those considered more conventional.  (A simple answer in terms of
        ``ill-defined'' and ``well-defined'' problems is viewed as a copout.)
        ...

One of the biggest differences involves how well we can explain how we
solve a problem.  The problems that humans can solve can be divided roughly
into the following two classes:

1.  Problems which we can solve which we can also explain HOW to solve.
Examples include sorting a deck of cards, adding a column of numbers, and
payroll accounting.  Any time we can explain how to solve a problem, we can
write a conventional computer procedure to solve it.

2.  Problems which we can solve but cannot explain how to solve (for a
discussion of some related issues, see Polanyi's "The Tacit Dimension").
Examples include recognizing a face, making good moves in a chess game, and
diagnosing a medical case.  We can't solve such problems using conventional
programming techniques, because we don't know what algorithms to use.
Instead, we use various heuristic approaches.

The latter class of problems corresponds roughly to what I would call AI
problems.

------------------------------

Date: 28 Mar 84 19:25:42-PST (Wed)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: 'Explaining' expert system algorithms
Article-I.D.: uiucdcs.6403

There is no need for expert system software to be well-understood by anyone but
its designers; there IS a need for systems to be able to explain THEMSELVES.
Witness human thinking: after 30 years of serious AI and much more of cognitive
psychology, we still don't know how we think, but we have relatively little
trouble getting people to explain themselves to us.

I think that we will not be able to produce such self-explanatory software
until we come up with a fairly comprehensive theory of our own mental workings;
which is, admittedly, not the same as understanding an expert program. On the
other hand, if you're a theoretical sort you tend to accept Occam's razor, and
so I believe that such a theory of cognition will be as simplifying as the
Copernican revolution was for astronomy. Thereafter it's all variations on a
theme, and expert systems too will one day be correspondingly easy.

                                                                Marcel S.
                                                                U of Illinois

------------------------------

Date: 30 Mar 1984 08:55-PST
From: SSMITH@USC-ECL
Subject: Expert Legal System

Regarding your request in the latest AI-LIST:
George Cross, an assistant prof. at Louisiana State University, has
been working for approximately the last 2 years with a law prof. to formalize
that state's legal codes.  From what I understand, Louisiana uses a
form of law, not found in other states, based on precise rules, rather
than the method of referring to past cases to establish legal precedent.
I know he has a few unpublished papers in this area and is preparing
a paper for the Austin AAAI.  From what I can tell, the work is similar
in scope to McCarty's work at Rutgers.

George can be contacted over the CS-NET: cross%lsu.csnet@CSNET-RELAY.

    ---Steve Smith (SSmith@USC-ECL)

------------------------------

Date: Thu 29 Mar 84 06:32:32-PST
From: Edward Feigenbaum <FEIGENBAUM@SUMEX-AIM.ARPA>
Subject: Expert Systems/NY Times

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

See front page story of today's New York Times entitled
"Machines Built to Emulate Human Experts' Reasoning".

Features knowledge engineering, expert systems, and Sheldon Breiner,
chairman of Syntelligence.

------------------------------

Date: 03/30/84 14:40:07
From: STORY@MIT-MC
Subject: Theory of the Learnable

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

TITLE:  "A THEORY OF THE LEARNABLE"
SPEAKER:        Professor L.G. Valiant, Harvard University
DATE:   Thursday, April 5, 1984
TIME:   3:45    Refreshments
        4:00    Lecture
PLACE:  NE43-512a

We consider concepts as represented by programs for recognizing them and define
learning as the process of acquiring such programs in the absence of any
explicit programming.  We describe a methodology for understanding the limits
of what is learnable as delimited by computational complexity.  The methodology
consists essentially of choosing some natural information gathering mechanism,
such as the observation of positive examples of the concept, and determining
the class of concepts that can be learnt using it in a polynomial number of
steps.  A probabilistic definition of learnability is introduced that leads to
encouraging positive results for several classes of propositional programs.
The ultimate aim of our approach is to identify once and for all the maximum
potential of learning machines.
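One concrete instance of learning from positive examples alone is Valiant's
elimination procedure for monotone conjunctions.  The sketch below captures
only that flavor of the talk, not its complexity-theoretic content: start
with every variable in the hypothesis and drop any variable that some
positive example sets to 0.

```python
# Elimination algorithm for learning a monotone conjunction from
# positive examples only: keep exactly the variables that every
# positive example sets to 1.  (A sketch of the flavor of the talk,
# not its probabilistic learnability theory.)

def learn_conjunction(n_vars, positive_examples):
    hypothesis = set(range(n_vars))          # start with all variables
    for example in positive_examples:        # each example is a 0/1 tuple
        hypothesis -= {i for i in hypothesis if example[i] == 0}
    return sorted(hypothesis)

# Target concept: x0 AND x2.  Positive examples all have x0 = x2 = 1.
positives = [(1, 1, 1, 0), (1, 0, 1, 1), (1, 1, 1, 1)]
print(learn_conjunction(4, positives))  # -> [0, 2]
```

Each positive example can only shrink the hypothesis, and the number of
updates is bounded by the number of variables, so the procedure runs in a
polynomial number of steps, as the abstract's methodology requires.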

HOST:   Professor Silvio Micali

------------------------------

Date: Fri 30 Mar 84 17:28:02-PST
From: Ole Lehrmann Madsen <MADSEN@SU-SCORE.ARPA>
Subject: System Description Languages

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

The following course will be given in the spring quarter:

   CS 249 TOPICS IN PROGRAMMING SYSTEMS:
          LANGUAGES FOR SYSTEM DESCRIPTION AND PROGRAMMING

Questions to Ole Lehrmann Madsen, M. Jacks Hall room 214, tlf. 497 - 0364,
net address MADSEN@SU-SCORE.

Listing:  CS 249
Instructor: Ole Lehrmann Madsen
Time: Monday 1:00pm - 4:00pm
Room: 352 building 460

This course will consider tools and concepts for system description and
programming. A number of languages for this purpose will be presented. These
include SIMULA 67, DELTA, EPSILON and BETA, which have been developed as part
of research projects in Norway and Denmark.

SIMULA I was originally developed as a tool for simulation. SIMULA 67 is a
general programming language with simulation as a special application. The
formalization of a system as a SIMULA program often gave a better understanding
of the system than did the actual simulation results.
This was the motivation for designing a special language (DELTA) for making
system descriptions. DELTA is intended for communication about systems, e.g.
data processing, biology, medicine, physics. Among other things, DELTA contains
constructs for describing discrete state changes (by means of algorithms) and
continuous state changes (by means of predicates).  The EPSILON language is
the result of an attempt to formalize DELTA by means of Petri Nets.

BETA is a programming language originally intended for implementing DELTA
descriptions of computer systems. However, the project turned into a
long-term project with the purpose of developing concepts, constructs and
tools in relation to programming. The major result of this project is the
BETA language. BETA is an object-oriented language like SIMULA and SMALLTALK,
but unlike SMALLTALK, BETA belongs to the ALGOL family with respect to
block structure, scope rules and type checking.

Various other languages and topics may also be covered. Examples are:
Petri Nets, environments for system description and programming, alternative
languages like Aleph and Smalltalk, and implementation issues.  Implementation
issues could include: transformation of a system description into a program,
and implementation of a typed language like BETA while obtaining dynamic
possibilities like those in LISP.

Prerequisites

Students are expected to have a basic knowledge of programming languages.
The course may to some extent depend on the background and interests of the
participating students. Students with a background in simulation or description
of various systems within physics, biology, etc. will be useful participants.

Course work

Students will be expected to read and discuss in class various papers
on system description and programming languages. In addition, small
exercises may be given.  Each student is supposed to write a short
paper about one or more topics covered by the course and comment on
papers by other students.

------------------------------

End of AIList Digest
********************

∂03-Apr-84  2054	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #40
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 3 Apr 84  20:53:57 PST
Date: Mon  2 Apr 1984 21:38-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #40
To: AIList@SRI-AI


AIList Digest            Tuesday, 3 Apr 1984       Volume 2 : Issue 40

Today's Topics:
  Linguistics - "And" and Ellipsis,
  Misc - Notes From a Talk by Alan Kay,
  Seminar - Stereo Vision for Robots
----------------------------------------------------------------------

Date: 30 Mar 84 16:05 EST
From: Denber.wbst@PARC-MAXC.ARPA
Subject: Re: Use of 'and'

""all customers in Ohio and Indiana".  Of course, no customer can be in
both states at once; the question should have been phrased as ".. Ohio
*or* Indiana""

Well, this is actually a case of ellipsis.  "Or" has its own problems.
What is really meant is "all customers in Ohio and [all customers in]
Indiana".  Happens all the time.  Looked at this briefly at the U of R
before I was elided myself.  I don't have my references here.  Some work
at Toronto (?)  Perhaps Gary the Dog Modeller can say more.

                        - Michel
                        Speaker to Silicon

------------------------------

Date: Sat, 31 Mar 84  8:22:26 EST
From: Andrew Malis <malis@BBN-UNIX>
Subject: Notes from a talk by Alan Kay (long message)

        [Forwarded from the Info-Atari list by Tyson@SRI-AI.]

  Date: 23 Mar 1984 1214-EST (Friday)
  From: mit-athena!dm@mit-eddie (Dave Mankins )
  Subject: Notes from talk by Alan Kay at MIT

Dr. Alan Kay, one of the developers of Smalltalk and the Xerox Alto, and
currently a Vice President and Chief Scientist at Atari, gave a talk at
MIT yesterday (22 March 1984) titled: "Too many smart people: a personal
view of design in the computer field"

The abstract:

    This talk is about the battle between Form and Content in Design and
    why "being smart" usually causes content to lose.  "Insightful
    laziness" is better because (1) it takes maximum advantage of others'
    work and (2) it encourages "rotating" the problem into its simplest
    essence -- often by changing it completely.  In other words: Point
    of view is worth 80 IQ points!

Here are some tidbits gleaned from my notes:

One of the problems with smart people is that they deal with
difficulties by fixing them, rather than taking the difficulty as a
symptom of a flaw in the design, and noticing "a rotation into a new
simplicity."

When preparing his talk he realized that what he wanted to say was
basically inconsistent, that

    1) You should do things over, and
    2) You shouldn't do things over.

"Both of these are true as long as you get the boundary conditions
right."  (There ensues an anecdote about working with Seymour Cray to
get an early CDC6500 up at NCAR.  The 6500 hardware did not normalize
its floating point operations, but that was "okay" because "any sensible
model will converge".  When the NCAR meteorologists (who answer the
question "what will the weather be like?" by looking out the window)
tried to put their models up on the CDC6500, they didn't work.  They
insisted that the Fortran compiler do the normalization for them.  Kay
cited this as evidence that their model was wrong.  Hmph, it's easy to
make fun of meteorologists...)

Kay cited Minsky's Turing award lecture, in the Apr. 1970 JACM (or maybe
CACM, I didn't catch it): "Form and content aren't enough."  What has
happened to the computer science field over the last twenty years is
myopia:  "a myopia so acute that only the very brilliant can achieve
it."

As an example of this, Kay cited the decline from the SDS 940 in 1965 to
UNIX ("a mere shadow of what an operating system should be") to CP/M.  The
myopia in question is best illustrated by a failure of Kay's own: "When
we got our first IMSAI (mumble) we put Smalltalk up on it.  We had to do
a lot of machine coding on it, and we thought that wasn't right.  And it
performed about as well as BASIC does today.  We said 'This is clearly
inadequate.  What we need is 2Mb of memory and a fast disk.'  Thus we
left the door open for BASIC to crawl back out of its crypt."

He should be lynched.  At least he realizes the error of his ways.

He cited an article by Vannevar Bush, in a 1945 Atlantic Monthly,
titled, "As we may think", in which Bush described a multi-screened,
pointer-based system with access to the world's libraries, drawing
programs, etc.  Bush, of course, thought it was just a few years away
(he called it "Memex").

He alluded to Minsky's notion of "science-envy": Natural scientists look
at the universe and discover its laws.  Computer scientists make up
their universes.  "What we do is more like an art."  "You can judge
whether or not a field is overcome by science-envy if it sticks the word
'science' into its name: 'computer science', 'cognitive science',
'political science'..."

He talked about some of his early work, with Ed Teitel, developing an
early personal computer (ca. 1965) calligraphic display with a pointer.
It had "a wonderful language I developed, influenced by Sutherland's
Sketchpad (the best thesis ever done in computer science) and
Simula (everything I've ever done has been influenced by Sketchpad and
Simula).  Everyone who tried to use it hated it.  They all had about the
same reaction to it that everyone has to APL today."  Shortly after
working on this he saw Papert's work with LOGO and children, and
resolved that everything he did from that day forth would be
programmable by children.

Part of the machine's problem stemmed from the fact that it didn't have
enough memory.  This in turn stems from the fact that we cast hardware
in concrete before we know what we're going to do with it.

Some relevant maxims from my notes:

    "Hardware is software crystallized early."
    "We shouldn't try to build a supercomputer until we have something
        to compute."

His point in these two maxims was, I think, that we're very good at
building hardware before we really know what we're going to do with it
(is there a lesson here for Project Athena with its tons of Ethernetted
VAXes "which will be used for undergraduate education" but a lack of
vision when it comes to educational software?)

He then described the Dynabook: a note-book sized interactive computer,
with about the same kind of interface as a notebook: you can doodle with
it, scribble, but it can also peruse the whole Library of Congress, as
well as past doodles.  "So portable you can carry something else, too."
[For a more complete description of Dynabook, see ``Fanatic Life and
Symbolic Death among the Computer Bums'', in "Two Cybernetic Frontiers"
by Stewart Brand.]

[An aside: one of the proposed forms of the Dynabook was a Walkman with
eyeglass flat-screen stereoptic displays (real 3-d complete with hidden
surfaces!).  This was punted because "no one would want to put something
on their head."  (Times change.)  Kay asserted that such displays ought
to be easier to produce than a note-book sized display, since there
would be fewer picture-elements required (a notebook would require maybe
1.5M pixels, while "the human eye can resolve only 140,000 points, so
you'd only have to put 140,000 pixels into your eyeglasses").  The flaw
in this argument is that most of those points the eye can resolve are in
the fovea, and you would have to put foveal-resolution over the entire
field of the glasses, meaning, more pixels.  This is the opposite of
window-oriented displays.  Instead of a cluttered desk you have an
orderly bulletin-board: just display everything at once; the user
can look around the room at all the stuff.  If this room isn't enough
you can walk into the next room and look at more stuff.]

More maxims:
    "Great ideas are better than good ones because they both take about
    the same amount of time to develop and the great ideas aren't
    obsolete when you're done."

An observation:
    "In all the years that we had the Altos no one at Xerox ever
    designed anything by starting with a drawing on an Alto.  They
    always started with a sketch on the back of an envelope."
    Nicholas Negroponte and the Architecture Machine (ArcMac) group
    did the only study of what sketching is and what really goes on
    when you sketch, in a 1970 project called "Architecture by
    yourself", but their funding dried up and no one remembers that
    stuff now.

    [An aside: the Macintosh's MacPaint program is the best drawing
    program that Kay has ever seen.  (The Macintosh people called him
    up one day and said, "Come on over, we have a present for you.")
    When he started playing with it he had a two-fold reaction:
    "Finally", and "Why did it take 12 years?"]

Homage was paid to the Burroughs B5000, a computer developed in 1961:

    Its operating system was entirely written in a higher-level
        language (ALGOL)
    It had hardware protection (which was later recognized to be
        a capability protection system)
    It had an object-oriented virtual memory system
    It had virtual data
        (any data reference could have a procedure attached to it for
        fetching and storing the real data--a bit was set as to which
        side of the assignment statement it went on)
    It was a multiprocessor (it had two processors, and much of the
        protection scheme was built in order to allow the two processors
        to work together).
    It had an integrated stack (which, sadly, is the only thing that
        people seem to remember).

"This was twenty years ago!  What happened, people?"

The B5000 had some flaws:
    The virtual data wasn't done right
        there were too many architectural assumptions about physical data
        formats
    "Char mode," which eliminated all the protections.  This was
        provided to let programmers used to the 1401 (I think) be
        comfortable.
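
The "virtual data" idea above (a procedure attached to each side of a
data reference) survives today as computed attributes.  Here is a
minimal Python sketch, offered only as a modern analogy to the B5000
feature, with invented names:

```python
class Record:
    """A field whose fetches and stores each run an attached procedure."""

    def __init__(self, celsius=0.0):
        self._celsius = celsius          # the "real" stored datum

    @property
    def fahrenheit(self):                # procedure on the fetch side
        return self._celsius * 9 / 5 + 32

    @fahrenheit.setter
    def fahrenheit(self, value):         # procedure on the store side
        self._celsius = (value - 32) * 5 / 9

r = Record()
r.fahrenheit = 212                       # store goes through the procedure
boiling = r.fahrenheit                   # fetch goes through the procedure
```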

User interface observations:

Piaget's three stages of development:

    Doing ----> Images -----> Symbols

doing: "a hole is to dig"
images: "getting the answer wrong in the water glass experiment"
symbols: "so we can say things that aren't true"

Bruner did a study that indicated these weren't stages; they were three
areas conflicting for dominance--as we mature, symbols begin to win out.

Ha...man did a study of inventiveness and creativity among
mathematicians and discovered that most mathematicians do their work
imagistically, very few of them work by manipulating symbols.  Some
mathematicians (notably Einstein) actually have a kinesthetic ability to
FEEL the spaces they are dealing with.

From psychology comes a principle applicable to user interfaces:

Kay's law: Doing with Images generates Symbols.

He cites Papert's "Mindstorms", where Papert describes programming a
computer to draw a circle.  A high school student, working with BASIC
would have before her the dubious assertion that a circle and
x**2+y**2=C are related.  A child, instructed to "play turtle" will
close her eyes while walking in a circle and say "I move forward a
little, then I turn a little, and I keep doing that until I make a
circle".  This is how a differential geometer views a circle.  Papert's
whole book is an illustration of Kay's Law.
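
The child's recipe is itself a complete program.  A sketch, with plain
Python standing in for turtle geometry (the step and turn sizes are
arbitrary choices):

```python
import math

def turtle_circle(step=0.1, turn_deg=1.0):
    """Move forward a little, turn a little, 360 times over."""
    x, y, heading = 0.0, 0.0, 0.0
    points = []
    for _ in range(360):
        x += step * math.cos(math.radians(heading))
        y += step * math.sin(math.radians(heading))
        heading += turn_deg
        points.append((x, y))
    return points

pts = turtle_circle()
```

The 360 small segments form a closed regular polygon -- the
differential geometer's circle: the path returns to its starting
point, and every vertex lies at the same distance from the center.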

User interface maxims:
    Immediacy
        What you see is what you get (WYSIWYG)
    Modeless
        Always be able to start a new command without having to clean up
        after the old one.
    Generic
        What works in one place works in another
    User illusion
        Users make models of what goes on inside the machine.  Make a
        system in which most of the user's guesses are valid.  Not "some
        of the time it's wonderful, but most of the time you get
        surprised."
    Communicative
        He drew the distinction between reactive systems and interactive
        systems.  All his systems have been reactive--you would do
        something, and the system would react, opening up new
        possibilities.
    Undoability
        Even if it doesn't do much, if you never lose a character, your
        users will be happy.
    Functional
        "What will it do without user programming?"

        He didn't use to think this was a user interface issue until he
        saw the STAR, which has the world's best user interface, except
        that it doesn't DO anything.  Not many people can afford a
        $17,000 coffee-warmer.
    Fun
        One should be able to fool around with no goal.  A user
        interface should be like Disneyland.

"Language is an extension of gestures--you're not really trying to say
something, you're really trying to point to something that is in someone
else's head.  A good mime can convey a message without a single word."

A model he encourages people to pursue is that of the AGENT.  When you
go into a library, you don't expect an oracle, you expect someone who
knows how to find what you're looking for.  It is much easier to make an
expert about the terrain of knowledge than an expert that can deal with
the knowledge itself.

He then played a videotape of a "telephone answering machine" being
developed by ArcMac (with funding from Atari).  It listened to the
pattern of a person's speech (in order to figure out when the person was
pausing long enough to be politely interrupted) and then channelled the
conversation into a context (that of taking a message) that the machine
could deal with.  It has a limited speech recognition ability, which
allows its owner to leave messages for other people:

    Hello, this is Doug's telephone, Doug isn't in right now, can I tell
    him who called?

    Uh, Clem...

    If you'd like to leave Doug a message, I can give it to him, otherwise
    just hang up and I'll tell him you called.

    Doug, I'm going to be in town next Tuesday and I'd like to get
    together with you to discuss the Memory project....

    Thank you, I'll tell him you called.

and

    Hello, this is Doug's telephone, Doug isn't in right now, can I tell
    him who called?

    It's me...

    Hi, Doug, you have three messages.

    Who are they from?...

    One is from UhClem, one is from Joe, and you have a mail message
    from Bill about the Future Fair.

    Tell me what UhClem has to say...

    [The machine plays a recording of Clem's message]

    Take a message for UhClem...

    Recording.

    Dinner next Tuesday is fine, how about Mary Chung's?

And so on.  UhClem calls later, and the machine plays back the recording
of Doug's message.


POINT OF VIEW IS WORTH 80 IQ POINTS:

    "A couple of years after Xerox punted the Alto, I met the people who
    made that decision.  They weren't dunces, as I had originally
    supposed, they just didn't have the right point of view: they had no
    criteria by which to tell the difference between an 8080 based word
    processor and a personal computer."

------------------------------

Date: 2 Apr 1984  12:18 EST (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Stereo Vision for Robots

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

Keith Nishihara    --  "Stereo Vision for Robots"

AI Revolving Seminar
Wednesday, April 4 at 4:00pm    8th floor playroom

   Recently we have begun, after a long interlude, to bring vision and
manipulation together at the MIT Artificial Intelligence Laboratory.
This endeavor has highlighted several engineering issues for vision:
noise tolerance, reliability, and speed.  I will describe briefly
several mechanisms we have developed to deal with these problems in
binocular stereo, including a high speed pipelined convolver for
preprocessing images and an "unstructured light" technique for improving
signal quality.  These optimizations, however, are not sufficient.  A
closer examination of the problems encountered suggests that broader
interpretations of both the binocular stereo problem and of the
zero-crossing theory of Marr and Poggio are required.
   In this talk, I will focus on the problem of making primitive surface
measurements; for example, to determine whether or not a specified
volume of space is occupied, to measure the range to a surface at an
indicated image location, or to determine the elevation gradient at that
position.  In this framework we make a subtle but important shift from
the explicit use of zero-crossing contours (in band-pass filtered
images) as the elements matched between left and right images, to the
use of the signs between zero-crossings.  With this change, we obtain a
simpler algorithm with a reduced sensitivity to noise and a more
predictable behavior.  The PRISM system incorporates this algorithm with
the unstructured light technique and a high speed digital convolver.  It
has been used successfully by others as a sensor in a path planning
system and a bin picking system.
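
The sign-based matching idea can be illustrated in one dimension.  The
following Python fragment is a toy reconstruction of the general
scheme only (crude box filters stand in for the band-pass filtering,
and all names and signals are invented), not the PRISM algorithm: it
band-pass filters two signals, keeps only the signs between
zero-crossings, and picks the shift that best aligns the signs:

```python
import math

def bandpass(signal, short=1, long=4):
    """Crude difference-of-boxes band-pass filter (a stand-in for the
    difference-of-Gaussians filtering in the Marr-Poggio theory)."""
    def box(sig, r):
        return [sum(sig[max(0, i - r):i + r + 1]) /
                len(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]
    return [a - b for a, b in zip(box(signal, short), box(signal, long))]

def sign_match_disparity(left, right, max_shift=6):
    """Return the shift maximizing agreement of filter-output signs."""
    ls = [1 if v >= 0 else -1 for v in bandpass(left)]
    rs = [1 if v >= 0 else -1 for v in bandpass(right)]
    best, best_score = 0, -1
    for d in range(-max_shift, max_shift + 1):
        score = sum(1 for i in range(len(ls))
                    if 0 <= i + d < len(rs) and ls[i] == rs[i + d])
        if score > best_score:
            best, best_score = d, score
    return best

# A test signal and a copy shifted by 3 samples ("disparity" of 3).
left = [math.sin(0.7 * i) + 0.5 * math.sin(2.3 * i) for i in range(80)]
right = [left[i - 3] if i >= 3 else left[0] for i in range(80)]
```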

------------------------------

End of AIList Digest
********************

∂03-Apr-84  2141	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #41
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 3 Apr 84  21:41:15 PST
Date: Tue  3 Apr 1984 19:49-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #41
To: AIList@SRI-AI


AIList Digest           Wednesday, 4 Apr 1984      Volume 2 : Issue 41

Today's Topics:
  Physiognomic Awareness - Request,
  Waveform Analysis - IBM EKG Program Status,
  Logic Programming - Prolog vs. Pascal & IBM PC/XT Prolog Benchmarks,
  Fifth Generation - McCarthy Review
----------------------------------------------------------------------

Date: Sun, 1 Apr 1984  01:00 EST
From: RMS.G.DDS%MIT-OZ@MIT-MC.ARPA
Subject: Physiognomic Awareness and Ergonomic Design


        Physiognomic awareness and relation to Ergonomic design inquiry

        Does anyone know of any studies conducted on this exact topic?
        Or as close as can be!  I am interested in whether this has been
        explored.

------------------------------

Date: 26 Feb 84 0:05:42-PST (Sun)
From: hplabs!sdcrdcf!akgua!mcnc!ecsvax!hsplab @ Ucb-Vax
Subject: computer EKG
Article-I.D.: ecsvax.2050

I would like to footnote Jack Buchanan's note and add that IBM, which helped
support the original development of the Bonner program, has announced that
effective June, 1984, it will close its Health Care Division, which
currently manufactures its EKG products.  Support for existing products
will continue for at least seven years after product termination, however.

David Chou
Department of Pathology
University of North Carolina, Chapel Hill
      ...!mcnc!ecsvax!hsplab

------------------------------

Date: Mon 2 Apr 84 21:34:24-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Prolog vs. Pascal

The latest issue of IEEE Graphics (March '84) has an article
comparing interpreted Prolog with compiled Pascal for a graphics
application.  The results are surprising to me.

J.C. Gonzalez, M.H. Williams, and I.E. Aitchison of Heriot-Watt
University report on the comparison in "Evaluation of the
Effectiveness of Prolog for a CAD Application."  They
implemented a very simple set of graphical routines: much of
the code is given in the article.  They were building a 2-D
entry and editing system where the polygons were stored as lists
of vertices and edges.  The user could enter new points and
edit previously entered figures.  This formed the front end
to a system for constructing 3-D models from orthogonal 2-D
orthographic projections (engineering drawings).  Much of the
code has the flavor of "For each line (or point or figure)
satisfying given constraints, do the following ..."  (Often only
one entity would satisfy the constraints.)

The authors report that the Prolog version (using assert and
retract to manipulate the database) was more concise, more readable,
and clearer than the Pascal version.  The Prolog version also took
less storage, was developed more quickly, and was developed with
minimum error.  What is more remarkable is that the interpreted
Prolog ran about 25% faster than the compiled Pascal.

They were using a PDP-11/34 with the NU7 Prolog interpreter
from Edinburgh and the VU Pascal compiler from Vrije University.
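
The coding pattern the authors describe -- "for each entity satisfying
given constraints, do the following", over a database maintained with
assert and retract -- can be sketched outside Prolog.  A minimal
Python analogy (invented names and data, not the authors' code):

```python
# A tiny fact database in the assert/retract spirit.
facts = []

def assertz(fact):
    """Add a fact (a tuple) to the database."""
    facts.append(fact)

def retract(fact):
    """Remove one matching fact from the database."""
    facts.remove(fact)

def solve(pattern):
    """Yield every fact matching a pattern; None is a wildcard."""
    for fact in facts:
        if len(fact) == len(pattern) and all(
                p is None or p == v for p, v in zip(pattern, fact)):
            yield fact

assertz(("vertex", "p1", 0, 0))
assertz(("vertex", "p2", 3, 4))
assertz(("edge", "p1", "p2"))

# "For each vertex satisfying the constraints, do the following..."
names = [name for (_, name, x, y) in solve(("vertex", None, None, None))]
```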

------------------------------

Date: Fri 23 Mar 84 10:32:55-PST
From: Herm Fischer <HFischer@USC-ECLB>
Subject: IBM PC/XT Prolog Benchmarks

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]

[...]

IBM was kind enough to let us have PC/IX for today, and we
brought up UNSW Prolog.  With a minor exception the code
and makefiles were compatible with PC/IX.  (They have frustrated
me for a whole year, being incompatible with every PCDOS "C" compiler
from Lattice onward.)

PC/IX and Prolog are neatly integrated; all Unix features, and
even shell calls, can be made within the Prolog environment.
Even help files are included.  It is kind of nice to be tracing
away and browse and modify your prolog code within the interpretive
environment, using the INed (nee rand) editor and all the other
Unix stuff.

The 64 K limitation of PC/IX bothers me, more emotionally than
factually, because only one of my programs couldn't be run today.
I'm sure I will get really upset unless I find some hack around
this limitation.

A benchmark really surprises me.  The Zebra problem (using
Pereira's solution) provides the following statistics:

DEC-2040      6 seconds (if compiled)      (Timed on TOPS-20)
             42 seconds (if interpreted)   (  "   "    "    )

VAX-11/780  204 secs (interpreted) (UNSW)  (Timed on Unix Sys III)

IBM PC/XT   544 secs (interpreted) ( " )   (Timed on   "   "   " )

The latter 2 times are wall-clock with no other jobs or users
running, and these two Prologs were compiled from the same source
code and make file!  The PC/IX was CPU-bound, and its disk never
blinked during the execution of the test.

-- Herm Fischer

------------------------------

Date: Wed 21 Mar 84 20:47:07-PST
From: Ramsey Haddad <HADDAD@SU-SCORE.ARPA>
Subject: fifth generation

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

For anyone interested in these things, there is a review by John
McCarthy of Feigenbaum and McCorduck's "The Fifth Generation:
Artificial Intelligence and Japan's Computer Challenge to the World"
in the April 1984 issue of REASON magazine.


[The following is a copy of Dr. McCarthy's text, reprinted with
his permission. -- KIL]


The Fifth Generation - Artificial Intelligence and Japan's Computer
Challenge to the World - by Edward Feigenbaum and Pamela McCorduck,
Addison-Wesley Publishing Co.


Review of Feigenbaum and McCorduck - for Reason


	Japan has replaced the Soviet Union as the world's second
place industrial power.  (Look at the globe and be impressed).
However, many people, Japanese included, consider that this success
has relied too much on imported science and technology - too much for
the respect of the rest of the world, too much for Japanese
self-respect, and too much for the technological independence needed
for Japan to continue to advance at previous rates.  The Fifth
Generation computer project is one Japanese attempt to break out of
the habit of copying and generate Japan's own share of scientific and
technological innovations.

	The idea is that the 1990s should see a new generation of
computers based on "knowledge information processing" rather than
"data processing".  "Knowledge information processing" is a vague term
that promises important advances in the direction of artificial
intelligence but is noncommittal about specific performance.  Edward
Feigenbaum describes this project in The Fifth Generation - Artificial
Intelligence and Japan's Computer Challenge to the World, predicts
substantial success in meeting its goals, and argues that the U.S.
will fall behind in computing unless we make a similar coherent
effort.

	The Fifth Generation Project (ICOT) is the brainchild of
Kazuhiro Fuchi of the Japanese government's Electro-Technical
Laboratory.  ICOT, while supported by industry and government, is an
independent institution.  Fuchi has borrowed about 40 engineers and
computer scientists, all under 35, for periods of three years, from
the leading Japanese computer companies.  Thus the organization and
management of the project is as innovative as one could ask.  With
only 40 people, the project is so far a tiny part of the total
Japanese computer effort, but it is scheduled to grow in subsequent
phases.

	The project is planned to take about 10 years, during which
time participants will design computers based on "logic programming",
an invention of Alain Colmerauer of the University of Marseilles in
France and Robert Kowalski of Imperial College in London, and
implemented in a computer programming language called Prolog.  They
want to use additional ideas of "dataflow" developed at M.I.T.  and to
make machines consisting of many processors working in parallel.  Some
Japanese university scientists consider that the project still has too
much tendency to look to the West for scientific ideas.

	Making parallel machines based on logic programming is a
straightforward engineering task, and there is little doubt that this
part of the project will succeed.  The grander goal of shifting the
center of gravity of computer use to the intelligent processing of
knowledge is more doubtful as a 10 year effort.  The level of
intelligence to be achieved is ill-defined.  The applications are also
ill-defined.  Some of the goals, such as common sense knowledge and
reasoning ability, require fundamental scientific discoveries that
cannot be scheduled in advance.

	My own scientific field is making computer programs with
common sense, and when I visited ICOT, I asked who was working on the
problem.  It was disappointing to learn that the answer was "no-one".
This is a subject to which the Japanese have made few contributions,
and it probably isn't suited to people borrowed from computer
companies for three years.  Therefore, one can't be optimistic that
this important part of the project goals will be achieved in the time
set.

	The Fifth Generation Project was announced at a time when the
Western industrial countries were ready for another bout of viewing
with alarm; the journalists have tired of the "energy crisis" - not
that it has been solved.  Even apart from the recession, industrial
productivity has stagnated; it has actually declined in industries
heavily affected by environmental and safety innovations.  Meanwhile
Japan has taken the lead in automobile production and in some other
industries.

	At the same time, artificial intelligence research was getting
a new round of publicity that seems to go in a seven-year cycle.  For
a while every editor wants a story on Artificial Intelligence and the
free lancers oblige, and then suddenly the editors get tired of it.
This round of publicity has more new facts behind it than before,
because expert systems are beginning to achieve practical results,
i.e. results that companies will pay money for.

	Therefore, the Fifth Generation Project has received enormous
publicity, and Western computer scientists have taken it as an
occasion for spurring on their colleagues and their governments.
Apocalyptic language is used that suggests that there is a battle to
the death - only one computer industry can survive, theirs or ours.
Either we solve all the problems of artificial intelligence right away
or they walk all over us.

	Edward Feigenbaum is the leader of one of the major groups
that has pioneered expert systems -- with programs applicable to
chemistry and medicine.  He is also one of the American computer
scientists with extensive Japanese contacts and extensive interaction
with the Fifth Generation Project.

	Pamela McCorduck is a science writer with a previous book,
Machines Who Think, about the history of artificial intelligence
research.

	The Fifth Generation contains much interesting description
of the Japanese project and American work in related areas.  However,
Feigenbaum and McCorduck concentrate on two main points.  First,
knowledge engineering will dominate computing by the 1990s.  Second,
America is in deep trouble if we don't
organize a systematic effort to compete with the Japanese in this
area.

	While knowledge engineering will increase in importance, many
of its goals will require fundamental scientific advances that cannot
be scheduled to a fixed time frame.  Unfortunately, even in the United
States and Britain, the hope of quick applications has lured too many
students away from basic research.  Moreover, our industrial system
has serious weaknesses, some of which the Japanese have avoided.  For
example, if we were to match their 40 engineer project according to
output of our educational system, our project would have 20 engineers
and 20 lawyers.

	The authors are properly cautious about what kind of an
American project is called for.  It simply cannot be an Apollo-style
project, because that depended on having a rather precise plan in the
beginning that could see all the way to the end and did not depend on
new scientific discoveries.  Activities that were part of the plan
were pushed, and everything that was not part of it was ruthlessly
trimmed.  This would be disastrous when it is impossible to predict
what research will be relevant to the goal.

	Moreover, if it is correct that good new ideas are more likely
to be decisive in this field at this time than systematic work on
existing ideas, we will make the most progress if there is money to
support unsolicited proposals.  The researcher should propose goals
and the funders should decide how he and his project compare with the
competition.

	A unified government-initiated plan imposed on industry has
great potential for disaster.  The group with the best political
skills might get their ideas adopted.  We should remember that present
day integrated circuits are based on an approach rejected for
government support in 1960.  Until recently, the federal government
has provided virtually the only source of funding for basic research
in computer technology.  However, the establishment of
industry-supported basic research through consortia like the
Microelectronics and Computer Technology Corporation (MCC), set up in
Austin, Texas under the leadership of Admiral Bobby Inman, represents
a welcome trend--one that enhances the chances of making the
innovations required.

------------------------------

End of AIList Digest
********************

∂04-Apr-84  1707	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #42
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 4 Apr 84  17:06:49 PST
Date: Wed  4 Apr 1984 15:39-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #42
To: AIList@SRI-AI


AIList Digest            Thursday, 5 Apr 1984      Volume 2 : Issue 42

Today's Topics:
  AI Tools - Lisp Eigenanalysis Package Request,
  Automata - PURR-PUSS References & Cellular Automata Request,
  AI Publications - SIGBIO Newsletter,
  Expert Systems - Nutrition System Request & Recipe Planner &
      Legal Reasoning Systems,
  AI Programming - Discussion
----------------------------------------------------------------------

Date: 30 Mar 84 11:07:45-PST (Fri)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!eneevax!phaedrus @ Ucb-Vax
Subject: Lisp equivalent of Linpack info wanted
Article-I.D.: eneevax.103

I was wondering if anybody knows of any packages in Lisp that do the same
thing that LINPACK does (i.e., finding eigenvalues, eigenvectors, etc.).
But it must do it fast.

My problem is that I need to do some linear algebra stuff, but I need to
be able to load it into vaxima (MACSYMA on a VAX running 4.1BSD).  If you
have any suggestions I would be very grateful.

                                Thanks
                                Pravin Kumar

From the contorted brain, and the rotted body of THE SOPHIST

ARPA:   phaedrus%eneevax%umcp-cs@CSNet-Relay
UUCP:   {seismo,allegra,brl-bmd}!umcp-cs!eneevax!phaedrus
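
A note on the request: the simplest of the LINPACK-style computations,
the dominant eigenvalue, fits in a few lines in any language and is
easy to port.  Below is a Python sketch of plain power iteration,
offered only to illustrate the idea (it assumes a dominant eigenvalue
exists, and is no substitute for LINPACK's full routines):

```python
import math

def power_iteration(matrix, iters=200):
    """Estimate the dominant eigenvalue and eigenvector of a square
    matrix by repeated multiplication and normalization."""
    n = len(matrix)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(v[i] * w[i] for i in range(n))   # Rayleigh quotient
    return lam, v

# Symmetric test matrix with eigenvalues 3 and 1.
lam, vec = power_iteration([[2.0, 1.0], [1.0, 2.0]])
```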

------------------------------

Date: 4 Apr 84 21:37:21-EST (Wed)
From: ihnp4!ihuxv!portegys @ Ucb-Vax
Subject: Request for PURR-PUSS reference

Recently someone mentioned a system called PURR-PUSS (I think it was Ken
Laws), in connection with determining the configuration of a finite
state machine based on observation of input-output relationships.  I'm
doing some work related to that, and would appreciate references to
PURR-PUSS.

     Tom Portegys, Bell Labs Naperville, Ill., ihuxv!portegys

[I ran across PURR-PUSS in J.H. Andreae's "PURR-PUSS: Purposeful
Unprimed Rewardable Robot", Electrical Engineering Report No. 24,
Sept. 1974, Man-Machine Studies, Progress Report UC-DSE/4(1974) to
the Defence Scientific Establishment, Editor J.H. Andreae, Dept.
of EE, Univ. of Canterbury, Christchurch, New Zealand, pp. 100-150.
This article describes several applications of the PUSS (Predictor
Using Slide and Strings) learning program, including the identification
of a repetition pattern in a seemingly random H/T sequence.  (The
pattern was two random choices followed by a repeat of the second choice.)
References are given to earlier reports in this series.
I also have copies of reports 26, 27, and 28 (Sep. 1975);  each has
at least one article on the use of PUSS learning/predicting modules
for the reasoning component in some application.  -- KIL]
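[Editorial note: the H/T experiment is easy to reproduce.  The sketch
below is hypothetical -- it is a simple online context-frequency
predictor, not Andreae's actual PUSS algorithm -- but it shows why the
"two random choices, then a repeat of the second" pattern is learnable:
every third symbol is determined by its two-symbol context.]

```python
import random
from collections import Counter, defaultdict

def make_sequence(blocks, rng):
    # The pattern from the report: two random coin choices, then a
    # repeat of the second choice.
    seq = []
    for _ in range(blocks):
        a, b = rng.choice("HT"), rng.choice("HT")
        seq += [a, b, b]
    return seq

def context_predictor_accuracy(seq, k=2):
    # Predict each symbol as the one most often seen after the current
    # length-k context, learning the counts online as we go.
    counts = defaultdict(Counter)
    correct = scored = 0
    for i in range(k, len(seq)):
        ctx = tuple(seq[i - k:i])
        if counts[ctx]:
            scored += 1
            correct += counts[ctx].most_common(1)[0][0] == seq[i]
        counts[ctx][seq[i]] += 1
    return correct / scored

acc = context_predictor_accuracy(make_sequence(2000, random.Random(0)))
# A memoryless guesser averages 0.5; the context predictor does
# noticeably better (about 2/3 in the long run) because the repeated
# third symbol is predictable from the two symbols before it.
```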

------------------------------

Date: 28 Mar 84 12:30:21-PST (Wed)
From: decvax!dartvax!lorien @ Ucb-Vax
Subject: Cellular Automata -- Request for pointers
Article-I.D.: dartvax.1024

I was fascinated by the description of "cellular automata" in last
month's Computer Recreations section of Scientific American.  The mass
of interacting parallel processes described there seems
singularly appropriate for the simulation of phenomena of interest to
AI workers.  With complex enough rules of interaction between elements
it seems one could simulate neurons in the brain or the evolutionary
process.  I'm aware, from a course I took long ago in Cognitive
Psychology, that psychologists use dynamically interacting models of
this sort.

  This note is to request pointers to any research that's currently
being done in this area, specifically as it relates to AI.

Thanks in advance,

      --Lorien Y. Pratt
        Dartmouth College Library
        Hanover, NH  03755

        decvax!dartvax!lorien
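[Editorial note: for readers who want to experiment before chasing
references, the one-dimensional "elementary" automata featured in that
Computer Recreations column (and surveyed by Wolfram) take only a few
lines to simulate.  A minimal sketch in Python, assuming a circular row
of two-state cells; the rule number 30 is just one well-known example.]

```python
def step(cells, rule=30):
    # Advance a 1-D two-state cellular automaton one generation.  Each
    # cell's new state is the bit of `rule` selected by its 3-cell
    # neighborhood (left, self, right), read as a number from 0 to 7.
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 31
row[15] = 1                      # single live seed cell
for _ in range(8):               # print a small space-time diagram
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Changing `rule` (0-255) gives all 256 elementary automata; the richer
two-dimensional rules such as Life work the same way with a 9-cell
neighborhood.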

------------------------------

Date: Wed, 4 Apr 84 00:40:26 EST
From: "Roy Rada"@UMich-MTS.Mailnet
Subject: SIGBIO Newsletter

[...]

As Editor of the ACM SIGBIO Newsletter, I would
like to publish material on AI in Medicine.  My
own research focuses on continuity in knowledge
acquisition for medical expert systems.  Please
send me whatever you feel might be relevant.

--Roy Rada

------------------------------

Date: 3 Apr 1984 1630-PST
From: Nitsan Har-gil <har-gil@USC-ISIF>
Subject: Expert System for Nutrition

Does anyone know of expert systems dealing with nutrition (food, etc.)?
I'm looking for something to which you can give a typical daily menu and
which will respond with nutritional deficiencies, etc.  Thanks in
advance, Nitsan.

------------------------------

Date: Wed, 4 Apr 84 15:26:23 EST
From: Kris Hammond <Hammond@YALE.ARPA>
Subject: Re: Recipe Planner

[This is a response to a personal query about Kris' work in recipe planning;
he agreed to let me share it with the list.  I would be interested in hearing
about other recipe-based systems, including those in the chemistry domain.
-- KIL]

I have a paper in AAAI-83, "Planning and Goal Interaction: The use of
past solutions in present situations."  My work centers around the
notion of organizing planning knowledge around the interactions
between features rather than around individual features themselves.
In the cooking domain this means the planner has to anticipate the
interactions between different tastes and textures, and search for
past plans that have already dealt with those interactions.  The end
result is a system that looks at an input situation (a request for a
dish that includes many items and tastes) and tries to find a recipe
for an analogous past situation.

The paper is the analysis of an example which uses knowledge of
feature interaction to 1) analyze the original input, 2) index into a
useful plan, 3) suggest the type of modifications that have to be made
on that plan, 4) search for problems in the resulting plan, and 5)
propose general solutions to the problems encountered.

I [am now working on] a more general application of the idea of
organizing planning information in terms of goal and plan interaction.
[...]

The cooking paper is on YALE-RES as <A.HAMMOND.WORK>WOK.MSS.

Thanks for the interest.

Kris Hammond

------------------------------

Date: 1 Apr 84 10:28:53 CST (Sun)
From: ihnp4!utcsrgv!dave@Berkeley
Subject: expert systems and legal reasoning

A recent request asked for information about expert systems and legal
reasoning. I suggest anyone interested in that field get onto Law:Forum,
a discussion group running under CONFER on an MTS system at Wayne
State University in Michigan. Access is free, with computer charges
and Telenet charges being picked up by the Markle Foundation grant
which is funding the project. Most of the major people involved
in developing legal reasoning systems (Thorne McCarty, Layman Allen,
Jim Sprowl, several others) are involved in Law:Forum and participate
regularly.

If you want to get onto Law:Forum, and can be reached electronically,
send me your electronic address:
        ihnp4!utcsrgv!dave@BERKELEY             (ARPA)
        dave.Toronto                            (CSnet)
        ihnp4!utcsrgv!dave                      (UUCP)

If you have no electronic address, I can't ship out the access information
to you, so send a letter directly to the conference organizer:
        Prof. Jennifer Bankier
        Dalhousie Law School
        Dalhousie University
        Halifax, Nova Scotia
        Canada                  (sorry, don't have the postal code handy)


Dave Sherman
The Law Society of Upper Canada
Toronto
(416) 947-3466

------------------------------

Date: Sun, 1 Apr 84 21:50:25 cst
From: George R. Cross <cross%lsu.csnet@csnet-relay.arpa>
Subject: Legal AI Systems


We are developing a model of the Louisiana Civil Code.
The representation language is called ANF (Atomically
Normalized Form) and is being used to develop the
conceptual retrieval and reasoning system CCLIPS (Civil
Code Legal Information Processing System).  Some references
are:

deBessonet, C.G., Hintze, S.J., and Waller, W., "Automated
Retrieval of Information: Toward the Development of a Formal
Language for Expressing Statutes," Southern University Law Review,
6(1), 1-14, 1979.

deBessonet, C.G., "A Proposal for Developing the Structural
Science of Codification," Rutgers Journal of Computers,
Technology and the Law, 1(8), 47-63, 1980.

deBessonet, C.G., "An Automated Approach to Scientific
Codification," Rutgers Computer and Technology Law Journal,
9(1), 27-75, 1982.

deBessonet, C.G., "An Automated Intelligent System Based on a Model
of a Legal System," Rutgers Journal of Computers, Technology, and the
Law, 10, to appear, 1983.

Technical Reports:

83-011 Formalization of Legal Information
83-023 Natural Language Generation for a Legal Reasoning
       System
83-002 Processing and Representing Statutory Formalisms
84-006 Representation of Some Aspects of Legal Causality
83-005 Representation of Legal Knowledge

Copies of the above Technical Reports may be requested from
<techrep%lsu@csnet-relay> or from

      Technical Reports Secretary
      Department of Computer Science
      Louisiana State University
      Baton Rouge, LA  70803-4020


George R. Cross              Cary G. deBessonet
<cross%lsu@csnet-relay>      <debesson%lsu@csnet-relay>

------------------------------

Date: Tue, 3 Apr 84 13:35 PST
From: DSchmitz.es@Xerox.ARPA
Subject: Legal AI research

For all who requested, I am maintaining a copy of all the responses to
my request for information about ongoing AI research in legal-related
fields in the following file:  [Oly]<DSchmitz>LegalAI.mail

There are about 15 responses in there now (including those who asked to
be copied on the responses) and I will be adding any new ones I receive
as they arrive.

David

------------------------------

Date: 1 Apr 84 22:35:06 EST
From: Louis Steinberg <STEINBERG@RUTGERS.ARPA>
Subject: Re: Stolfo's call for discussion

One way AI programming is different from much of the programming in other
fields is that for AI it is often impossible to produce a complete set of
specifications before beginning to code.

The accepted wisdom of software engineering is that one should have a
complete, final set of specifications for a program before writing a
single line of code.  It is recognized that this is an ideal, not
typical reality, since often it is only during coding that one finds
the last bugs in the specs.  However, it is held up as a goal to
be approached as closely as possible.

In AI programming, on the other hand, it is often the case that an
initial draft of the code is an essential tool in the process of
developing the final specs.  This is certainly the case with the
current "expert system" style of programming, where one gets an expert
in some field to state an initial set of rules, implements them, and
then uses the performance of this implementation to help the expert
refine and extend rules.  I would argue it is also the case in fields
like Natural Language and other areas of AI, to the extent that we
approach these problems by writing simple programs, seeing how they
fail, and then elaborating them.

A classic example of this seems to be the R1 system, which DEC uses to
configure orders for VAXen.  An attempt was made to write this program
using a standard programming approach, but it failed.  An attempt was
then made using an expert system approach, which succeeded.  Once the
program was in existence, written in a production system language, it
was successfully recoded into a more standard programming language.
Can anyone out there in net-land confirm that it was problems with
specification which killed the initial attempt, and that the final
attempt succeeded because the production system version acted as the
specs?

------------------------------

End of AIList Digest
********************

∂05-Apr-84  2050	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #43
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 5 Apr 84  20:50:17 PST
Date: Thu  5 Apr 1984 19:21-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #43
To: AIList@SRI-AI


AIList Digest             Friday, 6 Apr 1984       Volume 2 : Issue 43

Today's Topics:
  Nonmonotonic Logic - Reference Request,
  AI Applications - Algebra and Geometry on the IBM-PC,
  News - Computer Ethics Prize,
  Linguistics - Use of "and",
  AI Computing - Software Engineering,
  Reading List on Logic and Parallel Computation
  Seminars - Automating Shifts of Representation &
    Internalized World Knowledge &
    Linguistic Structuring of Concepts &
    Protocol Analysis
----------------------------------------------------------------------

Date: 4 Apr 84 18:20:23 EST  (Wed)
From: Don Perlis <perlis%umcp-cs.csnet@csnet-relay.arpa>
Subject: nonmonotonic reference request


                        BIBLIOGRAPHY ON NON-MONOTONIC LOGIC


I  am  compiling  a  bibliography of literature on nonmonotonic logic, to be
made available to the AI community, and in particular  to  the  workshop  on
non-monotonic  reasoning  that  will take place in October in New Paltz, New
York.

I  would  greatly  appreciate  references  from  the  AI  community, both to
published and  unpublished  material  (the  latter  as  long  as  it  is  in
relatively  completed  form  and copies are available on request).  Material
can be sent to me at perlis@umcp-cs and also by post to  D. Perlis, Computer
Science Department, University of Maryland, College Park, MD 20742.

Thanks in advance for your cooperation.

------------------------------

Date: 5 Apr 84 15:16:43 PST (Thursday)
From: Cornish.PA@Xerox.ARPA
Subject: Application of LISP programs to Math Ed Software

I am interested in AI programs in the areas listed below.  Could
someone provide me with pointers to the significant work done in these
areas?  Could someone also advise me whether work done in these areas
could feasibly run on existing Lisp systems for the IBM-PC?  "Feasibly
run" means that the programs would be responsive enough to form the
basis of a math-ed product.

1. Solution of Algebra word problems
2. Analysis of proofs in plane Geometry


Thank you very much,

Jan Cornish

------------------------------

Date: Wed 4 Apr 84 17:22:09-PST
From: DKANERVA@SRI-AI.ARPA
Subject: Computer Ethics

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]


         COMPETITION ANNOUNCEMENT:  THE METAPHILOSOPHY PRIZE

     METAPHILOSOPHY will  award a  prize  of  $500  to the  author who
submits the best essay in computer ethics between January 1, 1984, and
December 31, 1984.  The prize-winning  essay will be published as  the
lead article in  the April 1985 issue of METAPHILOSOPHY, which will be
devoted entirely  to computer  ethics.  Other  high-quality  essays in
computer ethics will be accepted for publication in the same issue.  A
panel of experts in computer ethics will select the winners.  To enter
the competition, send four copies of your essay to:

                      Terrell Ward Bynum
                      Editor, Metaphilosophy
                      Metaphilosophy Foundation
                      Box 32
                      Hyde Park, NY  12538

     Readers unfamiliar  with  the  field of  computer  ethics  should
consult the January  1984 issue of  METAPHILOSOPHY.  Those  unfamiliar
with specifications  for  manuscript preparation  should  consult  any
recent issue.

------------------------------

Date: 04 Apr 84  11:14:01 bst
From: J.R.Cowie%rco@ucl-cs.arpa
Subject: Use of "and"

There is another way of looking at the statement -
 all customers in Indiana and Ohio
which seems simpler to me than producing the new phrase -
 all customers in Indiana  AND all customers in Ohio
instead of doing this why not treat Indiana and Ohio as a new single
conceptual entity giving -
 all customers in (Indiana and Ohio).

This seems simpler to me. It would mean the database would have to
allow aggregations of this type, but I don't see that as being
particularly problematic.

Have I missed some subtle point here?

Jim Cowie.
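[Editorial note: Cowie's point is easy to check on a toy table.  In
the sketch below (hypothetical data, Python for concreteness), the
distributed reading -- the union of two separate queries -- and the
aggregate reading -- membership in the single entity (Indiana and
Ohio) -- select exactly the same customers, so the aggregate
treatment loses nothing on simple membership queries.]

```python
customers = [
    ("Acme Supply", "Indiana"),
    ("Buckeye Tool", "Ohio"),
    ("Cairn Works", "Illinois"),
]

# Reading 1: "all customers in Indiana AND all customers in Ohio",
# i.e. the union of two separate queries.
union_reading = (
    {name for name, state in customers if state == "Indiana"}
    | {name for name, state in customers if state == "Ohio"}
)

# Reading 2: treat (Indiana and Ohio) as a single aggregate entity
# and ask for the customers located anywhere in it.
region = {"Indiana", "Ohio"}
aggregate_reading = {name for name, state in customers if state in region}
```

The readings diverge only for predicates that are not simple
membership tests (e.g. "customers with offices in Indiana and Ohio"),
which may be the subtle point at issue.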

------------------------------

Date: 5 April 1984 0949-cst
From: Dave Brown    <DBrown @ HI-MULTICS>
Subject: Re: Stolfo's call for discussion

  A side point about Louis Steinberg's response: The accepted wisdom
is actually that AI and plain commercial programming have shown that
specification in complete detail is really just mindless hacking, by
a designer rather than a hack.
  *However*, the salesmen of "software engineering methodologies" are
just getting up to about 1968 (the first software engineering
conference), and are flogging the idea that perfect specifications
are possible and desirable.
  Therefore the state of practice lags behind the state of the art
by an unconscionable distance....
  AI leads the way, as usual.

  --dave (software engineering ::= brilliance | utter stupidity) brown

------------------------------

Date: 05 Apr 84  1711 PST
From: Carolyn Talcott <CLT@SU-AI.ARPA>
Subject: Reading List on Logic and Parallel Computation

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

INSTRUCTOR: Professor G. Kreisel
TIME:   Monday  4:15-6pm
PLACE:  252 Margaret Jacks Hall
       (Stanford Computer Science Department)
TOPIC:  Logic and parallel computation.

Below is a reading list that was compiled from discussion
at the organizational meeting.  [...]


          --------------------------------------------------
                             Reading List
          --------------------------------------------------

[Carolyn Talcott - 362 Margaret Jacks - CLT@SU-AI - has copies
 of all the references]


                         Parallel Computation
                        ---------------------

Fortune,S. and Wyllie,J. [1978]
Parallelism in random access machines
Proc. 10th ACM Symposium on Theory of Computing (STOC)
pp.114-118.


Valiant,L. Skyum,S.[1981]
Fast parallel computation of polynomials using few processors
Proc. 10th Symposium on Mathematical Foundations of Computer Science
LNCS 118, pp. 132-139.

von zur Gathen,J.[1983]
Parallel algorithms for algebraic problems
Proc. 15th ACM Symposium on Theory of Computing (STOC)
pp. 17-23.

Mayr,E.[1984], Fast selection on para-computers (slides)

Karp,R.M. Wigderson,A.[1984?]
A Fast Parallel Algorithm for the Maximal Independent Set Problem
  - Extended Abstract (manuscript)


        Continuous operations on Infinitary Proof Trees, etc.
        -----------------------------------------------------

Rabin,M.O.[1969]
Decidability of 2nd Order Theories and Automata on Infinite Trees,
Trans. AMS 141, pp.58-68.

Kreisel,G. Mints,G.E. Simpson,S.G.[1975]
The Use of Abstract Language in Elementary Metamathematics;
  Some Pedagogic Examples,
in Logic Colloquium '72, LNM 453, pp.38-131.

Mints,G.E.[1975] Finite Investigations of Transfinite Derivations,
J.Soviet Math. 10 (1978) pp. 548-596. (Eng.)

Sundholm,B.G.[1978] The Omega Rule: A Survey,
Bachelor's Thesis, University of Oxford.

------------------------------

Date: 3 Apr 84 13:00:42 EST
From: Michael Sims  <MSIMS@RUTGERS.ARPA>
Subject: Automating Shifts of Representation

            [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


                      Machine learning brown bag seminar


    Title: Automating Shifts of Representation
    Speaker: P. J. Riddle
    Date: Wednesday, April 11, 1984, 12:00-1:30
    Location: Hill Center, Room 254

       My thesis research deals with automatically shifting from one
    knowledge representation of a certain problem to another
    representation which is more efficient for the problem class to
    which that problem belongs.  I believe that "...changes of
    representation are not isolated 'eureka' phenomena but rather can
    be decomposed into sequences of relatively minor representation
    shifts".  I am attempting to discover primitive representation
    shifts and techniques for automating them.  To achieve this goal I
    am attempting to define and automate all the primitive
    representation shifts explored in the Missionaries & Cannibals
    (M&C) problem.  The main types of representation shifts which I
    have already identified are: forming macromoves, removing
    irrelevant information, and removing redundant information.
    Initially I have concentrated on a technique for automatically
    acquiring macromoves.  Macromoves succeed in shifting the problem
    space to a higher level of abstraction.  Assuming that the
    macromoves are appropriate for this problem class, this will make
    the problem solver much more efficient for subsequent problems in
    this problem class.
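[Editorial note: for readers unfamiliar with the talk's testbed, the
base-level M&C search space is tiny, which is what makes it a good
domain for studying representation shifts.  The sketch below is my own
breadth-first search over the classic 3-missionary, 3-cannibal problem,
not the speaker's formulation; a macromove operator would compress
recurring subsequences of these crossings into single steps.]

```python
from collections import deque

def safe(m, c):
    # A bank is safe if its missionaries are not outnumbered
    return m == 0 or m >= c

def moves(state):
    m, c, boat = state          # people on the left bank, boat side
    for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
        m2, c2 = (m - dm, c - dc) if boat else (m + dm, c + dc)
        if 0 <= m2 <= 3 and 0 <= c2 <= 3 and safe(m2, c2) and safe(3 - m2, 3 - c2):
            yield (m2, c2, 1 - boat)

def shortest_solution():
    # Breadth-first search from everyone-on-the-left to everyone across
    start, goal = (3, 3, 1), (0, 0, 0)
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        state, depth = frontier.popleft()
        if state == goal:
            return depth
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
```

The minimal plan takes 11 crossings, and only a few dozen states are
reachable at all.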

------------------------------

Date: Wed, 4 Apr 84 10:27:46 pst
From: chertok%ucbkim@Berkeley (Paula Chertok)
Subject: Internalized World Knowledge

       [Forwarded from the CSLI bboard by Laws@SRI-AI.]

             BERKELEY COGNITIVE SCIENCE PROGRAM
                        Spring 1984

            IDS 237B - Cognitive Science Seminar

          Time:        Tuesday, April 10, 1984, 11-12:30pm
          Location:    240 Bechtel


              HOW THE MIND REFLECTS THE WORLD
                      Roger N. Shepard
       Department of Psychology, Stanford  University

Through biological evolution, enduring characteristics of the world
would tend to become internalized so that each individual would not
have to learn them de novo, through trial and possibly fatal error.
The most invariant characteristics are quite abstract: (a) Space is
locally three-dimensional, Euclidean, and isotropic except for a
gravitationally conferred unique upright direction.  (b) For any two
positions of a rigid object, there is a unique axis such that the
object can be most simply carried from the one position to the other
by a rotation around that axis together with a translation along it.
(c) Information available to us about the external world and about
our relation to it is analyzable into components corresponding to
the invariants of significant objects, spatial layouts, and events
and, also, into components corresponding to the transitory
dispositions, states, and manners of change of these and of the self
relative to these.  Having been internalized, such characteristics
manifest themselves as general laws governing the representation of
objects and events when the relevant information is fully available
(normal perception), when it is only partially available (perceptual
filling in or perceptual interpretation of ambiguous stimuli), and
when it is entirely absent (imagery, dreaming, and thought).
Phenomena of identification, classification, apparent motion, and
imagined transformation illustrate the precision and generality of
the internalized constraints.

*****  Followed by a lunchbag discussion with speaker  *****
***  in the IHL Library (Second Floor, Bldg. T-4) from 12:30-2  ***

------------------------------

Date: Wed 4 Apr 84 18:49:45-PST
From: PENTLAND@SRI-AI.ARPA
Subject: Linguistic Structuring of Concepts

           [Forwarded from the CSLI bboard by Laws@SRI-AI.]

Issues In Language, Perception and Cognition
WHO: Len Talmy, Cognitive Science Program and German Dept., UC Berkeley
WHEN: Monday April 9, 12:00 noon
WHERE: Room 100, Psychology

                How Language Structures its Concepts

Languages have two kinds of elements: open-class, comprising the
roots of nouns, verbs, and adjectives, and closed-class, comprising
all inflections, particle words, grammatical categories, and the
like.  Examination of a range of languages reveals that closed-class
elements refer exclusively to certain concepts, and seemingly never
to concepts outside those (e.g., inflection on nouns may indicate
number, but never color).  My idea is that all closed-class elements
taken together constitute a very special group: they code for a
fundamental set of notions that serve to structure the conceptual
material expressed by language.  More particularly, their references
constitute a basic notional framework, or scaffolding, around which
is organized the more contentful conceptual material represented by
open-class (i.e., lexical) elements.  The questions to be addressed
are: a) Which exactly are the notions specified by closed-class
elements, and which notions are excluded?  b) What properties are
shared by the included notions and absent from the excluded ones?
c) What functions are served by this design feature of language,
i.e., the existence in the first place of a division into open- and
closed-class subsystems, and then the particular character that
these have?  d) How does this structuring system specific to
language compare with those in other cognitive subsystems, e.g. in
visual perception or memory?  With question (d), this linguistic
investigation opens out into the issue of structuring within
cognitive contents in general, across cognitive domains.

------------------------------

Date: 4 Apr 1984 12:35:50-PST
From: mis at SU-Tahoma
Subject: S.P.A. - Seminar in Protocol Analysis

     [Forwarded from the Stanford bboard by Laws@SRI-AI.]

                  M. Pavel &  D. Sleeman
    ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

           S.P.A - SEMINAR IN PROTOCOL ANALYSIS
    ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

           Introduction to protocol  analysis:
       an  example  from  developmental psychology.

                       Jean Gascon
                       Stuart Card

              Xerox Palo Alto Research Center

        The first of this series of seminars on protocol analysis
will be structured as a tutorial on protocol analysis and computer
simulation.  Stuart Card will give a brief overview of the history,
motivation, and practice of the methodology.  Jean Gascon will then
illustrate, with a simple example, how protocol analysis is
performed.  The application area will come from developmental
psychology.  First, protocols of children of various ages performing
one of Piaget's "seriation" tasks will be shown.  We will then
explain how one goes from the actual data to the construction of the
"problem space" (a la Newell and Simon).  The next step consists of
regrouping the problem spaces of different subjects into a more
general psychological model (dubbed BG in this particular case).  We
will see how the BG language facilitates the writing of simulation
models.  A computer program that does automatic protocol analysis of
the seriation protocols will then be introduced.  This program
provides some additional insights about the process of protocol
analysis itself.  In the conclusion we will discuss the advantages
and disadvantages of protocol analysis relative to the other
methodologies available in cognitive psychology.

     ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

           Place:  Jordan Hall, Room 100
           Time:   1:00 pm, Wednesday  April 11, 1984
     ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

------------------------------

End of AIList Digest
********************

∂07-Apr-84  2324	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #44
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 7 Apr 84  23:22:55 PST
Date: Sat  7 Apr 1984 22:00-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #44
To: AIList@SRI-AI


AIList Digest             Sunday, 8 Apr 1984       Volume 2 : Issue 44

Today's Topics:
  Cellular Automata - References,
  Image Understanding - Expert System for Radiograph Analysis,
  Education - Model AI Curriculum,
  AI Funding - Alan Kay Review & Strategic Computing,
  Seminars - Language Structures Time Change & Automatic Programming
----------------------------------------------------------------------

Date: 1 Apr 84 13:44:53-PST (Sun)
From: decvax!genrad!wjh12!vaxine!pct @ Ucb-Vax
Subject: Re: Cellular Automata
Article-I.D.: vaxine.221

There is a big review article by Wolfram in Reviews of Modern Physics
v. 55 no. 3 p. 601 (July 1983) with a large list of references

------------------------------

Date: 2 Apr 84 18:41:37-PST (Mon)
From: hplabs!hao!seismo!rochester!ur-laser!bill @ Ucb-Vax
Subject: An expert system for reading chest radiographs
Article-I.D.: ur-laser.136

I have developed an "expert" system that analyzes chest radiographs
for tumors.  This system was tested on 37 films that contain
nodules.  It is capable of finding the nodules in 92% of the films.
In studies of mass screenings of radiographs by radiologists it was
found that the radiologists miss 25-30% of all nodules < 1cm.  A Rib
Expert determines whether a candidate nodule (possible tumor) is a
rib.  A Nodule Expert, a linear-discriminant-based pattern
recognizer, classifies candidate nodules.  All candidate nodules
that are classified as any type of nodule are presented to a
radiologist for further inspection.  Radiologists can recognize
nodules as such once they are pointed out.  If you are interested in
this work or want leads to other methods of automated chest film
analysis, which are listed in the bibliographies, contact Peggy
Meeker ((716)275-7737, {allegra,seismo}!rochester!peg) at the
Computer Science Dept. at the University of Rochester and request
the following TRs:

Lampeter, W.A., "Design, tuning, and performance evaluation of an
automated pulmonary nodule detection system," TR-120, Computer
Science Department, University of Rochester, Rochester NY, 1983.

Lampeter, W.A., "Three image experts which help distinguish tumors
from non-tumors," TR-123, Computer Science Department, University of
Rochester, Rochester NY, 1984.

Other works of possible interest:

Ballard, D.H., and J. Sklansky, "Tumor detection in radiographs,"
Computers in Biomedical Research, 6, 299-321, 1973.

Jagoe, J.R., "Reading chest radiographs for pneumoconiosis by
computer," Brit. J. Ind. Med., 32, 267-272, 1975.

Toriwaki, J., et al., "Pattern recognition of chest x-ray images,"
Comp. Graph. Pat. Recog., 2, 252-271, 1973.



Bill Lampeter
Department of Radiology
School of Medicine and Dentistry
University of Rochester
(716) 275-5101 or (716) 275-3194
{seismo, allegra}!rochester!ur-laser!bill
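[Editorial note: for readers unfamiliar with the "linear-discriminant-
based pattern recognizer" mentioned above, the simplest member of that
family is the nearest-class-mean rule, whose decision boundary is a
hyperplane.  The Python sketch below, on made-up two-dimensional
features, illustrates the technique only; it is not Lampeter's system,
and the feature names are hypothetical.]

```python
def mean(points):
    # Component-wise mean of a list of feature vectors
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def make_discriminant(positives, negatives):
    # Nearest-class-mean classifier: with equal spherical covariances
    # this is a linear discriminant (the decision boundary is the
    # perpendicular bisector of the segment joining the class means).
    mp, mn = mean(positives), mean(negatives)
    def classify(x):
        d_pos = sum((a - b) ** 2 for a, b in zip(x, mp))
        d_neg = sum((a - b) ** 2 for a, b in zip(x, mn))
        return "nodule" if d_pos < d_neg else "non-nodule"
    return classify

# Hypothetical training features, e.g. (contrast, circularity):
nodules     = [(0.8, 0.9), (0.7, 0.8), (0.9, 0.7)]
non_nodules = [(0.2, 0.3), (0.3, 0.1), (0.1, 0.2)]
classify = make_discriminant(nodules, non_nodules)
```

A Fisher discriminant additionally weights the mean difference by the
pooled within-class covariance, which matters when features are
correlated or differently scaled.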

------------------------------

Date: Sat 7 Apr 84 21:30:27-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: AI Curriculum

The April issue of IEEE Computer discusses computers in education.
The first article is on a model curriculum for Computer Science, and
pages 12-13 describe a sample curriculum for AI.  About 20 references
to suggested AI texts and articles are also given.

                                        -- Ken Laws

------------------------------

Date: 6 Apr 84 21:16:53 PST (Friday)
From: Ron Newman <Newman.es@Xerox.ARPA>
Subject: Alan Kay on DARPA research & Mansfield amendment

Excerpted from an interview in the April 1984 issue of ST.Mac magazine
(Softalk's magazine for the Macintosh).  All [bracketed phrases] are as
in the original.


Alan:  Things haven't been the same in computer science since two things
happened.  The awful thing that happened was the Mansfield amendment in
1969.  The amendment was a congressional reaction to pressure from the
population about the Vietnam war, mostly uninformed pressure.

  What it did was force all military funding to be put under the
scrutiny of Congress and to be diverted only to military-type things.
All of a sudden, everything was different at ARPA [the Advanced Research
Projects Agency].


Q:  ARPA became DARPA (the Defense Advanced Research Projects Agency) at
that point?

Alan: ARPA became DARPA.  The last good thing to be done had already
been funded, which was the Arpanet [a network of communicating computers
around the world that allows scientists to send messages to each other].
That was finished in 1970.  That was the end of ARPA funders [program
directors] being drawn from the ARPA community.

During the golden age at ARPA, the funding was much less than it is now,
but it was wide open.


Q: More creative work was done?

Alan:  Yeah.  Their whole theory--partially because the managers of ARPA
were scientists themselves--was "we fund people, not projects.  If we
can understand what these guys are doing, we should probably be off
doing it ourselves.  We'll just dump half this money for three years and
take our lumps."

They took percentages, like you have to in real research.  And, God, did
they get some great stuff!

~~~~End of excerpt~~~~~


Alan makes lots of other brash statements in this article too.  I'll
leave you with just this one:

  "I'd just as soon send all the engineers around here in Silicon Valley
to the Outback of Australia until they have read something like 'The
Federalist Papers' or Adam Smith's 'Wealth of Nations' or *something*,
for God's sake....what they're doing is actually vandalizing an entire
generation of kids by acting as though things like Basic have value."

------------------------------

Date: 5 Apr 84 17:56:01 PST (Thursday)
From: Ron Newman <Newman.es@Xerox.ARPA>
Subject: Strategic Computing in Electronic News 3/19/84

[personal comment follows at end of article--RN]

"DOD Strategic Computing to get $95M in Funding"
Electronic News, March 19, 1984, page 18
by Lloyd Schwartz

  WASHINGTON (FNS)--A virtual doubling of the funds for the Defense
Department's Strategic Computing initiative in fiscal year 1985--from
$50 million to $95 million--represents the first step in providing
"dramatic new computational capabilities to meet future critical defense
needs," Pentagon officials reported to Congress.

  They said that, as computer capability evolves, "men and computers
will operate as collaborators in the control of complex weapon systems."
It boiled down to, they added, future "wars by computer," with the side
possessing the superior technology prevailing.

  Dr. Robert S. Cooper, director of DOD's Defense Advanced Research
Projects Agency (DARPA), describing the program as well under way,
explained it is using a new idea, employing multiprocessor architecture
to reach for a new generation of computers with as much as 10,000 times
the computing capability of hardware available today.

  The computers, endowed with artificial intelligence, will be capable
of solving extraordinarily complex problems involving human beings,
understanding speech and responding in kind, Dr. Cooper indicated to
the House Armed Services Committee.  They also will require a whole new
system of prototyping, it was added.

  Dr. Cooper testified that while computers are already widely employed
in defense, current computers have inflexible program logic and are
limited in their ability to adapt to unanticipated enemy actions in the
field.  The problem, he noted, is exacerbated by the increasing pace and
complexity of modern warfare.

  "The Strategic Computing program will confront this challenge by
producing adaptive, intelligent computers specifically aimed at critical
military applications,"  the DARPA chief continued.  "These new machines
will be designed to solve complex problems in reasoning.  Special
symbolic processors will employ expert human knowledge contained in
radical new memory systems to aid humans in controlling the operation of
complex military systems.

  "The new generation computers will understand connected human speech
conveyed to them in natural English sentences, as well as be able to see
and understand visible images obtained from TV and other sensors."

  Dr. Cooper noted DARPA has already demonstrated a limited voice
message system in which a computer recognized and understood human
speech to receive its commands.  The computer was able to respond
verbally, using synthesized speech, although it possessed only a limited
vocabulary.

  Another example of technological advancement, Dr. Cooper noted, was
DARPA's recent success in applying a finely-focused ion beam in the
maskless fabrication of integrated circuits.  He said this work is
continuing and "could result in a major breakthrough in ultimately
achieving a large-scale maskless fabrication capability."

  Summing up, the DARPA chief declared "In the future, supercomputers
with reasoning ability and natural language interfaces with military
commanders will be able to participate in military assessment and may be
able to simulate and predict the consequences of various proposed
courses of military action.  This will allow the commander and his staff
to focus on the larger strategic issues, rather than have to manage the
enormous information flow that will characterize the battles of the
future."

  Dr. Cooper added that the balance of military power in the future
"could well depend on successful application of 'superintelligent
computers' to the control of highly-effective advanced weapons."

~~~~~End of Electronic News article~~~~~


Comments:

1.  In the past, defenders of DARPA funded computer research have
asserted that the military and civilian industry have the same goals, so
that what's good for the Pentagon is good for the commercial market too.
But now we have a program whose goal, in the Pentagon's own words, is to
produce "adaptive, intelligent computers ***specifically aimed at
critical military applications***."

  [Sorry if I'm injecting any personal bias here, but this seems to be a
  non sequitur.  Past military research (e.g., image understanding) was
  also targeted at critical military applications; that didn't prevent
  it from also being useful or even critical to civilian industry.  The
  strategic computing effort need not be different.  All that has changed
  is the military's boldness in expressing its own importance, about which
  it may or may not be right.  -- KIL]

2.  Everyone knows how backward Soviet computer science and industry
are, so who is he talking about when he refers to "'wars by computer,'
with the side possessing the superior technology prevailing"?  Once
again, the U.S. leads the way into a new round of the arms race.


/Ron

------------------------------

Date: Fri 6 Apr 84 11:57:07-PST
From: PENTLAND@SRI-AI.ARPA
Subject: Issues in Language, Perception and Cognition

           [Forwarded from the CSLI bboard by Laws@SRI-AI.]

**** Due to a scheduling conflict, there has been a room change, to 050 ****

WHO: Len Talmy, Cognitive Science Program and German Dept.,  UC Berkeley
WHAT: How Language Structures its Concepts
WHEN: Monday April 9 12:00 noon
WHERE: Room 380-50

------------------------------

Date: Thu 5 Apr 84 17:02:34-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Automatic Deduction talk

          [Forward from the Stanford bboard by Laws@SRI-AI.]

                Monday, April 9th in MJH 301 at 2:30.


                    THE ORIGIN OF BINARY-SEARCH ALGORITHMS


                               Richard Waldinger
                        Artificial Intelligence Center
                               SRI International


     Many of the most efficient numerical algorithms employ a binary search, in
which  the  number we are looking for belongs to an interval that is divided in
half at each iteration.  We consider how such algorithms might be derived  from
their specifications.

     We follow a deductive approach, in which programming is regarded as a kind
of  theorem  proving.    By  systematic  application  of this approach, several
integer and real-number algorithms for such functions as the  square  root  and
quotient have been derived.  Some of these derivations have been carried out on
an  interactive  program-synthesis  system.    The  programs  we  obtained  are
different from what we originally expected.
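The interval-halving idea in the abstract can be made concrete.  The
following is a minimal sketch (in Python, and mine rather than anything
from the talk or its synthesis system) of one classic binary-search
derivation target, the integer square root:

```python
def isqrt(n):
    """Integer square root by binary search.

    Invariant: floor(sqrt(n)) always lies in the interval [lo, hi],
    which is (roughly) halved at each iteration.
    """
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi + 1) // 2  # round up so the interval always shrinks
        if mid * mid <= n:
            lo = mid              # mid is still a valid lower bound
        else:
            hi = mid - 1          # mid overshoots; discard the upper half
    return lo
```

Waldinger's point is that such programs can be derived from their
specifications by theorem proving; the loop invariant in the comment is
exactly the kind of fact a deductive synthesis would manipulate.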

------------------------------

End of AIList Digest
********************

∂13-Jan-85  1603	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #45
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Jan 85  16:03:09 PST
Mail-From: LAWS created at 11-Apr-84 16:00:48
Date: Wed 11 Apr 1984 15:43-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #45
To: AIList@SRI-AI
ReSent-date: Sun 13 Jan 85 16:03:09-PST
ReSent-From: Ken Laws <Laws@SRI-AI.ARPA>
ReSent-To: YM@SU-AI.ARPA


AIList Digest           Thursday, 12 Apr 1984      Volume 2 : Issue 45

Today's Topics:
  AI Tools - Real-Time AI & MASSCOMP & MV8000 Systems Wanted,
  Review - Micro LISPs & Common LISP Alert & CACM Humor,
  AI Jobs - Noncompetition Clauses,
  Expert Systems - Articulation,
  Natural Language - Metaphors,
  Seminars - Automated Algorithm Design & Engineering Problem Solving
----------------------------------------------------------------------

Date: 3 Apr 84 11:46:21-PST (Tue)
From: hplabs!hao!ames-lm!al @ Ucb-Vax
Subject: Real time man-in-the-loop LISP machines
Article-I.D.: ames-lm.196

Does anyone know of any AI systems or LISP machines that are oriented
towards real time, man-in-the-loop simulation?  We are beginning work
on a space station simulator aimed at human factors research.  LISP
is an appealing language in many respects but all of the systems
I've heard of are interactive, non-real time oriented.  We need something
that can pretend that it's a space station and do it fast enough
and consistently enough to keep up with human temporal perception.

------------------------------

Date: 5 Apr 84 10:04:50-PST (Thu)
From: decvax!mcnc!philabs!rdin!perl @ Ucb-Vax
Subject: OPS5 and Franz LISP wanted
Article-I.D.: rdin.371

We are looking for implementations of Franz LISP and OPS5 that
will run on a MASSCOMP MC500 under MASSCOMP UNIX version 2.1 (2.0).

Thank you.

Robert Perlberg
Resource Dynamics Inc.
New York
philabs!rdin!rdin2!perl

------------------------------

Date: Mon, 9 Apr 84 16:37:27 cst
From: George R. Cross <cross%lsu.csnet@csnet-relay.arpa>
Subject: LISP on a Data General?


Does anyone know of a LISP implementation on Data General's
MV8000 type computers under AOS/VS?   One good enough for
teaching is all that is required.

       George Cross
       Computer Science, LSU
       <cross%lsu@csnet-relay>

------------------------------

Date: 11 Apr 1984 0206 PST
From: Larry Carroll <LARRY@JPL-VLSI.ARPA>
Reply-to: LARRY@JPL-VLSI.ARPA
Subject: micro LISP review

There's a good article in the April issue of PC Tech Journal
about three micro versions of LISP: IQ LISP, muLISP-82, and
TLC LISP.  It gives a fair amount of implementation detail,
contrasts them, and compares them to their mini and mainframe
cousins.  The author is Bill Wong, who's working on his PhD in
computer science at Rutgers.

At least the article looks pretty good to me, but it's been a
long time since I did any LISP programming.  Anyone feel like
reviewing Wong's review?
                                Larry Carroll
                                Jet Propulsion Lab.
                                   larry@jpl-vlsi

------------------------------

Date: Mon, 9 Apr 84 13:03 EST
From: Tom Martin <TJMartin@MIT-MULTICS.ARPA>
Subject: Could it be COMMON LISP?

An announcement just arrived in the mail from Digital Press:

          COMMON LISP manual, Guy L. Steele, Jr.
          $22.00 / May 1, 1984 / Paperbound

  --Tom Martin
    Arthur D. Little, Inc.

------------------------------

Date: Wed 11 Apr 84 09:53:20-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: What's Happening to Stuffy Old CACM?

I just can't resist passing along these items from Jon Bentley's
column in the April CACM:

  The CRAY-3 is so fast that it can execute an infinite loop
  in less than two minutes.

  Have you heard how [it] implements a branch instruction?
  It holds the program counter constant and moves the memory.


If you like those, you'll also like the articles beginning on
page 343.  Of particular interest to AIers is "The Chaostron:
An Important Advance in Learning Machines."

                                        -- Ken Laws

------------------------------

Date: Sun, 8 Apr 1984  15:21 EST
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
Subject: Non-competition clauses

Recently a student of mine applied for a position with one of the new AI
companies (I'd rather not say which one) and received what he considers
to be a very attractive offer.  Unfortunately, there is one problem that
will probably prevent him from accepting the job: the company requires
him to sign an agreement that if he leaves that company for any reason,
he will not compete with them or work for any competitive business for a
period of three years.  In order to keep this agreement in effect, the
company would have to continue to pay him his salary, minus any money he
makes from other employment or consulting.  Since this company defines
its business as AI and AI tools in a very broad sense, this means that
they could force the former employee to stay completely out of the field
of AI for three whole years if he leaves them -- an eternity in this
field.

I've heard of companies that require you to promise not to use any
proprietary knowledge on behalf of your next employer (or anyone else),
but I've never heard of an agreement like this one.  Since the penalty
for leaving is potentially so high (you get a salary for doing nothing,
but are effectively prohibited from practicing your chosen profession
for a period of time that is long enough for you to go completely
stale), it looks to me like they are trying to make you sign up with
them for life -- at their option, of course.

This company seems to think that this agreement is a perfectly routine
matter and that many companies in AI have similar requirements.  Is this
true?  Is this sort of thing spreading?  Have people out there actually
signed agreements of this sort?  Are they legally enforceable?  Unless I
hear otherwise, I'm going to consider this as an isolated case of
institutional paranoia on the part of this one company, and will steer
my students away from that company in the future.  If everyone is doing
it, that is much more alarming.

  -- Scott Fahlman, CMU  <fahlman at cmu-cs-c.arpa>

------------------------------

Date: Sun 8 Apr 84 18:14:52-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Noncompetition Clauses

This is the first I've heard of the post-employment restrictions
Scott Fahlman mentioned, although I've heard of noncompetition agreements
in other industries.  (I believe Nolan Bushnell, for instance, started
Pizza Time Theaters because he couldn't compete with his own creations
at Atari.  I've also heard of cases in the giant-screen TV and
restaurant businesses, always part of a buy-out agreement.)  The
intention is obviously to stop someone from spinning off his own
company to market an idea developed for the first employer.

Although the clause in question is a strong constraint, I don't see
that it would necessarily bind you to the company for life.  Think of
it as a three-year paid sabbatical or other grant.  It has a built-in
disincentive for taking a job in any other field, but is a real
bonanza for someone who wants to spend time taking courses and
catching up on the literature in the AI field.

As a practical matter, I doubt that the employer would exercise his
option unless you intended to compete directly in the same product
you were working on.  It wouldn't make sense to buy you off if you
intended shifting to even a moderately different AI application.

                                        -- Ken Laws

------------------------------

Date: 3 Apr 84 18:04:46-PST (Tue)
From: decvax!cwruecmp!borgia @ Ucb-Vax
Subject: Re: expert system algorithms
Article-I.D.: cwruecmp.1127

Don't we always come back to the same old epistemological question?
What good is any system to a human being unless it is understood
(maybe in parts) by one or more human beings?

An expert system should be an intelligent and articulate student
who learns from several experts. Understanding control structures
is not the critical issue. Well-known and fairly simple inference
mechanisms are available. The key issue is articulating what
knowledge was used and how in solving a problem.

"What good is knowledge when it brings no profit to its bearer"
                   - Teiresias in Oedipus the King, Sophocles

  -- joe borgia

  usenet:  decvax!cwruecmp!borgia
  csnet:   borgia@case
  arpanet: borgia.case@csnet-relay

------------------------------

Date: Thu, 5 Apr 84 15:57:13 est
From: Michael Listman <mike%brandeis.csnet@csnet-relay.arpa>
Subject: metaphors

       I am interested in finding information on the extent of
natural language research and expectations.

       In particular, I would like to find out if any research
has been done on comprehension of metaphors.  I realize that
this would present problems such as what to do upon
encountering a metaphor that one (or a system) has never
before encountered.

      Take as an example,

             "Man is a wolf"

      - although it seems obvious to a human, how does one know
which aspects of wolf to apply to man?

      As another example, how do we know that

             "Man is a Bic pen"

is a bad metaphor?  Do we exhaust all the features of each (man
and Bic pen) and decide that not enough of them are similar enough
for a reasonable comparison?  This seems plausible, but I could
imagine a situation in a discourse where this or a similar metaphor
would make perfect sense (please don't ask me to).
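For what it's worth, the feature-exhaustion idea sketched above can be
made mechanical.  The feature sets below are invented purely for
illustration (a real system would need enormously more knowledge, plus
the context-dependence just mentioned); the sketch simply scores the
overlap between two feature sets:

```python
# Hypothetical feature sets, invented purely for illustration.
FEATURES = {
    "man":     {"animate", "social", "predatory", "territorial", "mortal"},
    "wolf":    {"animate", "social", "predatory", "territorial", "furry"},
    "bic pen": {"inanimate", "cheap", "disposable", "plastic"},
}

def aptness(topic, vehicle):
    """Crude metaphor-plausibility score: Jaccard overlap of the
    two feature sets (shared features / all features)."""
    a, b = FEATURES[topic], FEATURES[vehicle]
    return len(a & b) / len(a | b)
```

On these toy sets, "Man is a wolf" scores well and "Man is a Bic pen"
scores zero; but as noted above, context can rescue almost any metaphor,
which is exactly what a static feature count misses.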

       I believe that in pursuing research in this direction, we
will eventually attain the knowledge to build a psychologically real
natural language understander, which I believe is the only way we
will ever attain a system that can approximate human comprehension.

       If anyone can point me toward research in this area, or
references, or simply guess as to where research like this will lead
in the near future (or ever) please respond as soon as possible.


                                  --- Michael Listman

------------------------------

Date: 9 Apr 84 09:28:48 EST
From: DSMITH@RUTGERS.ARPA
Subject: Semiautomated Algorithm Design

            [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

                 Rutgers' Computer Science Colloquium

                    Semiautomated Algorithm Design

                           Douglas R. Smith

   Algorithm design is viewed as the transformation of a formal  specification
of a problem into an algorithm. We present a formal top-down method for
creating hierarchically structured algorithms. The method works as follows: to
design an algorithm A0 for a problem specification P0, the programmer
conjectures the overall structure S of A0, then uses knowledge of the
structure S to deduce subproblem specifications P1,...,Pn for the
underdetermined parts of S.  Algorithms A1,...,An are then designed for the
subproblems P1,...,Pn and  assembled  (via  the structure S) into an
algorithm for the initial problem P0.  This process results in the
decomposition of the initial problem specification into a hierarchy  of
subproblem  specifications and the composition of a corresponding
hierarchically structured algorithm.  The knowledge used  to  deduce
specifications  for subproblems is obtained by analysis of the particular
structure S and is encoded in a procedure called a design strategy for S.
The  top-down  design process  depends  on  design  strategies  for  many
different kinds of algorithm structures.

     We illustrate this approach by presenting  the  knowledge  needed  to
synthesize  a  class  of  divide and conquer algorithms and by deriving a
quicksort algorithm.  Our examples are drawn from experiments with  an
implemented  algorithm  design  system called CYPRESS.  Current efforts to
systematically acquire design strategies for fundamental classes of
algorithms will be discussed.
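The decompose/solve/compose structure described in the abstract can be
illustrated with an ordinary quicksort.  This sketch is a conventional
hand-written rendering of the algorithm class mentioned, not output from
CYPRESS:

```python
def quicksort(xs):
    """Divide and conquer: decompose the problem around a pivot
    (yielding subproblem specifications), solve the subproblems
    recursively, and compose the results into a solution of the
    original sorting problem."""
    if len(xs) <= 1:               # primitive case needs no decomposition
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)
```

In Smith's terms, the partition around the pivot plays the role of the
conjectured structure S, and the two recursive calls are the deduced
subproblems P1 and P2.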

DATE:    Thursday, April 12, 1984
TIME:    10:30 a.m.
PLACE:   Hill Center - Room 705
        *  Coffee will be served at 10:00 a.m.

------------------------------

Date: 9 Apr 1984  14:15 EST (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Engineering Problem Solving

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

AI Revolving Seminar
Wednesday, April 11, 4:00pm 8th floor playroom

Jerry Roylance  --  "Programs that Design Circuits"

        People can design circuits; this task -- at least partially --
can be done by computers.  I'll talk about how designers think about
circuits and how to make computers think that way, too.  While the talk
will be directed toward circuit design, that is not the sole intent.
How will building problem solvers in one engineering domain help us
build them in other domains?  What "domain-independent" facts can be
carried across?
        Engineering domains (such as circuit design) are good ones to
teach computers.  They have well defined models that let the machine
verify and debug its designs (thus allowing some chance at creativity).
Engineering domains also have many "standard problems" with cookbook
solutions.  If the computer can be clever about recognizing instances of
these problems and combining them together, it can produce nontrivial
designs.  The quality of a design in an engineering domain is also easy
to assess.
        The circuit design domain is not simple, however.  Hierarchical
expansion of abstract components fails to account for many designs.  The
parts of a design are not independent and that makes it difficult for
the knowledge sources to be modular.  Arithmetic constraints solve some
of these problems; some others can be solved by manipulating mechanism
constraints.
        An important perspective:  when teaching a system a new trick,
find out why someone thought of that trick in the first place.

------------------------------

End of AIList Digest
********************

∂13-Apr-84  1129	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #46
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Apr 84  11:24:42 PST
Date: Fri 13 Apr 1984 09:56-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #46
To: AIList@SRI-AI


AIList Digest            Friday, 13 Apr 1984       Volume 2 : Issue 46

Today's Topics:
  Education - LISP Advice Sought,
  AI Jobs - Noncompetition Clauses,
  Natural Language - Metaphor,
  Humor - Smalltalk-52 Seminar
----------------------------------------------------------------------

Date: Wednesday, 11 April 1984 23:08:30 EST
From: Kai-Fu.Lee@cmu-cs-g.arpa
Subject: Comments solicited

I will be teaching a LISP/AI course to a group of Pennsylvania high
school seniors who are talented in science.  (part of the Penn.
Governor's School for the Sciences)  I am interested in hearing from
people with similar experience or ideas for subjects/assignments/
projects/textbooks.

The students will likely be divided into two groups, one with and one
without experience in programming.  The course consists of 17 1-hour
lectures to each group. I am planning to divide the course into 3 parts:
(1) Basic Concepts of C.S. [2 lectures] (2) LISP Programming [6-7
lectures] (3) A.I. [8-9 lectures]. Since the schedule is rather tight,
it is unlikely that I could cover anything in much detail.  There will be
two or three programming assignments.  In addition, students may choose to
do a project in computer science.

Thanks,
Kai-Fu Lee
KFL@CMU-CS-G.ARPA

------------------------------

Date: Thu 12 Apr 84 00:36:19-PST
From: Mabry Tyson <Tyson@SRI-AI.ARPA>
Subject: Re: Noncompetition Clauses

Ken, I think you missed one problem with the noncompetition agreement.
I guess you looked at it like I did that your salary would be in the
same range if you could change jobs so it doesn't hurt too much if you
just continued on at the old salary but took the time off.

Suppose you started to work for company A at salary X.  The next year (or so)
you get a much better offer (say as a manager) from company B at salary 2*X.
Now you are prevented from taking that job (assuming B is in competition with
A).

Another way of looking at that is to suppose that you do something good
in your first few years after school but that your company doesn't
want to give you the raise in salary that is commensurate with your proven
abilities.  Now you can't just say that company B will pay you twice your
current salary.  They'd just laugh at you and say you couldn't go.  It might
be worth 3 years of your old salary to keep you from company B even if you
don't do any work for them.

I see the clause as cutting down on wage wars between companies.  It also
cuts down on the mobility available to employees of that company.  Finally
it probably prevents an employee from starting his own company in that
field.

I also feel that the restriction may have a negative effect on the company
requiring it.  Would you go to a black hole from which no one ever could
get away?  I suppose companies requiring that clause are just going to have
to settle for employees that can't find a better offer.  Would you want to
work with second class people?

------------------------------

Date: 12 Apr 84 08:56:18 PST (Thu)
From: Carl Kaun <ckaun@aids-unix>
Subject: Non-competition clauses and other employment agreements


I think that anyone concerned about employment agreements would do well
to contact a lawyer.  The different states consider various clauses in
employment agreements enforceable to differing degrees.  I seem to remember
reading an article about six months ago that said (broadly) that California,
for example, considered clauses restricting people from continuing their
professional careers to be not generally in the public interest, and such
clauses should be carefully considered as to enforceability for that reason.
This same article (which I am trying to dig up) said something about the state
of California holding that clauses assigning all rights to all ideas, patents,
etc. arising during employment, are enforceable only to the extent that such
ideas, etc. resulted from the employment situation.  Again, one should
really contact a lawyer to get a clear opinion in any given situation.  There
is enough variability in this area to make any general comments suspect.

My personal experience has been that employment agreements and the
employer's approach to them are remarkably uniform in industry.  When I
have questioned companies about these agreements, I am told that they
adopt this fairly standard agreement and hold to it firmly on the advice
of their corporate counsel.  The general idea (from the company point of
view) seems to be to claim all you can now, and work out what you can really
enforce if the situation comes to that.  They take this approach
not because of some malignant motive, but because it has proved the most
prudent course for them to take.

------------------------------

Date: Thu, 12 Apr 84 17:51:28 EST
From: Mark S. Day <mday@BBN-UNIX>
Subject: Re: Employment Agreement

Keeping someone entirely out of a field like AI for three years is an illegal
restraint of trade, I would suggest.  Employment agreements must be
reasonable with respect to time and area limitations to be enforceable, and
I doubt that 3 years is a reasonable time constraint, especially given that it
seems to be sufficiently long to get completely out of touch with the field.
The fact that the company offers to pay you for those three years is
irrelevant.

  --Mark

------------------------------

Date: Thu, 12 Apr 84 07:06:04 cst
From: Peter Chen <chen%lsu.csnet@csnet-relay.arpa>
Subject: Noncompetition Clauses

I think that there are quite a few companies putting restrictions on
post-employment activities, although most of these companies are not as
restrictive as the company mentioned by Scott Fahlman.  I think it
is fair for an employer to ask its employees to avoid future
involvement in direct competition with the company
within a short period of time (say, one year instead
of three years) and in a narrower subject area (i.e., in the area/topics
the individual is working on, rather than a broad definition of the AI field or
the whole computer field).

If I remember correctly, when I worked for a large computer manufacturer ten
years ago, I was required to sign an agreement that whatever
ideas or products I might develop in my spare time would belong to the company
even though the ideas/products were not related to computers.  Do you think it
is fair?  Do you think your computer employer has the right of the novel you
write during weekends?  I think this case is much more unfair than asking
the employee not to compete with the company after he/she leaves the company
for more than a year.

As far as I understand, all these agreements/contracts are legally binding if
the contracts are signed under free will.  Therefore, they can be enforced if
the companies choose to do so.  However, most of the time the companies just
use them as a possible protection for their interests.

      Peter Chen
      Computer Science Dept., LSU
      <chen%lsu@csnet-relay>

------------------------------

Date: Wed, 11 Apr 84 17:45:10 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Metaphor

The thing about a metaphor is that it contains little explicit information.
It acts as a trigger in such a way that the hearer creates meaning for it.
Different hearers create different meanings.  For example, one hearer,
drawing from his background as an environmentalist might take "Man is a Wolf"
to mean that man has a wild, misunderstood soul while another hearer, drawing
from his background as a mountain man who has had to compete with wolves might
take the metaphor to mean that he himself is a savage beast that will kill,
if necessary, to live.

It becomes pretty far-fetched to make up a model of "metaphor" that says
that this information is contained in the statement of the metaphor.

  --Charlie

------------------------------

Date: 11 Apr 84 2255 EST (Wednesday)
From: Steven.Minton@CMU-CS-A.ARPA
Subject: Metaphor comprehension pointers


The following references might prove helpful if you're interested in
AI and metaphor comprehension:

    Carbonell, J.G. and Minton, S. "Metaphor and Common-Sense Reasoning"
    CMU tech report CMU-CS-83-110, March 83

    Carbonell, J.G. "Metaphor: An Inescapable Phenomenon in Natural Language
    Processing", in Strategies for Natural Language Processing, W. Lehnert
    and M. Ringle (eds.), Erlbaum 1982

    Carbonell, J.G. "Invariance Hierarchies in Metaphor Interpretation"
    Proceedings of the 3rd Meeting of the Cognitive Science Society, 1981

There's a large body of literature on analogical reasoning and other
aspects of metaphor comprehension. Much of the relevant research
has been done within psychology and linguistics. I'd suggest looking at
these for an overview:

    Ortony, A. (Ed.) "Metaphor and Thought" Cambridge Univ. Press 1979

    Lakoff, G. and Johnson, M. "Metaphors We Live By" Chicago Univ. Press 1980

    Gentner D. "Structure-Mapping: A Theoretical Framework for Analogy" in
    Cognitive Science, Vol. 7, No.2 1983

    Winston P. "Learning by Creating and Justifying Transfer Frames"  in
    Artificial Intelligence, Vol. 10, No. 2, 1978

I don't know of any natural language system which can handle a wide range
of novel metaphors, and I don't expect to see one soon.
Any such system would have to contain an enormous amount of
knowledge. Unlike most present-day NL systems, a robust metaphor comprehension
system would have to be able to understand many different domains.

In spite of this difficulty, metaphor comprehension remains a fertile
area for AI research. I've spent some time examining how people
understand sentences like: "The US/Russian arms negations are a high
stakes poker game". When you get right down to it, its amazing that
people can figure out exactly what the mapping between "arms negotiatations"
and "poker games" is. What's most amazing is that using and understanding
metaphors APPEARS to take so little effort. (In fact, they are often the
easiest way to to rapidly communicate complex technical information. The next
time you are at a talk, try counting the analogies and metaphors used.)

                                        -- Steve Minton, CMU

------------------------------

Date: Thu, 12 Apr 1984 11:31:13 EST
From: Danger, Will Robinson, Danger! <AXLER%upenn-1100.csnet@csnet-relay.arpa>
Subject: Metaphoric Comparisons

Mike:
     Anthropologists and folklorists have been dealing with metaphor (and
related tropes) for a long time, in terms of their use in such common forms of
speech as proverbs and riddles, both of which depend almost totally on the
use of metaphoric and metonymic comparison.  One thing that's critical is the
recognition that use of metaphor is extremely context-dependent; i.e., you
cannot apply Chomskian assumptions that competence is important, because the
problem occurs in performance, which Chomsky relegates to a side issue.
     I'd suggest the following references for a start:

1.  Sapir and Crocker, eds., "The Social Use of Metaphor" -- an excellent
anthology, about eight years old, covering a great deal of ground.
2.  The special issue of the Journal of American Folklore from the early or
mid-seventies on Riddles and Riddling.
3.  Dell Hymes, "Foundations of Sociolinguistics".  (A really critical book
which set the stage for many anthropologists, linguists, etc. to shift over
from competence to performance; its biggest flaw is Hymes' insistence that
communication doesn't exist without intention on the part of at least one of
the performer(s), the receiver(s), and the audience.)
4.  The journal "Proverbium", which was, for its 25-year life, THE place to
look for research on proverbs and related stuff.  Especially good are articles
by Nigel Barley, Alan Dundes, and Barbara Kirschenblatt-Gimblett, whose "The
Proverb in Context" is a real key article.
5.  Kirschenblatt-Gimblett and Sutton-Smith, eds., "Speech Play".  A very good
anthology about uses of all sorts of special speech techniques, including
metaphorical comparisons, in various cultures.

Those are the ones I can remember off the top of my head.  There are lots
more stored in my bibliography hard-copy file at home, and you can drop me
a net-note if you need 'em...

  --Dave Axler

------------------------------

Date: 12 Apr 1984 09:49:50-EST
From: walter at mit-htvax
Subject: Smalltalk-52

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

                 ANNALS OF COMPUTER SCIENCE SEMINAR
                   DATE:  Friday, April 13th, 1984
                   TIME:  Refreshments  12:00 noon
                  PLACE:  MIT AI Lab 8th Floor Playroom

                 SMALLTALK-52 and the Wheeler Send

                               ABSTRACT

        Recently discovered paper tapes reveal that J.M. Wheeler
        designed the first version of Smalltalk in 1952,
        intending it to run on the University of Cambridge's
        EDSAC Computer.  The initial implementation, however,
        required the machine's entire 512-word memory and was deemed
        infeasible.  Wheeler, who is credited with the invention
        of bootstrap code, subroutine calls, assemblers, linkers,
        loaders, and all-night hacking, can now be properly
        credited with inventing message passing, object oriented
        programming, window systems, and impractical languages.

        This fascinating historical discussion and the accompanying
        Graduate Student Lunch will be hosted by Steve Berlin.

        Next Week:
        Lady Lovelace's Public-Key Encryption Algorithm.

------------------------------

End of AIList Digest
********************

∂15-Apr-84  1824	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #47
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 15 Apr 84  18:23:58 PST
Date: Sun 15 Apr 1984 17:30-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #47
To: AIList@SRI-AI


AIList Digest            Sunday, 15 Apr 1984       Volume 2 : Issue 47

Today's Topics:
  Education - Request for Writing, Geometry Systems,
  Misc. - TAMITTC,
  Applications - Artificial Big Brother,
  Seminars - Qualitative Process Theory & AI and VLSI,
  Conferences - COLING84 Information
----------------------------------------------------------------------

Date: Sat, 14 Apr 84 02:06 EST
From: Malcolm Cook <cook%umass-cs.csnet@csnet-relay.arpa>
Subject: TEACHING SYSTEMS. WRITING & GEOMETRY.

I'm looking for information on 2 things:
        1) What systems exist that are used to teach language skills,
especially for teaching writing to children?  Also, I remember hearing
about studies showing that children are motivated to write simply
by having a simplified editor.  Any pointers to such studies?
There was an interesting article in the N.Y. Times Sunday magazine
section 2-3 months back entitled "Writing to Read", about a curriculum
for 1st & 2nd grade in which the children were learning a simple
morpheme<==>grapheme map, allowing them to phonetically spell
any word they could pronounce.  Are there any AI systems involved
in this course?

        2) What tutoring systems exist for geometry?
I am somewhat familiar with the CMU approach (Boyle, Anderson, Shrager...),
but what else is around?  Anything on the spectrum from GOOD
programmed instruction to /reactive environments/ would be of interest.

thanks,

        Malcolm Cook (Cook.umass-cs@csnet-relay)

------------------------------

Date: 7 Apr 84 17:39:57-PST (Sat)
From: decvax!cwruecmp!borgia @ Ucb-Vax
Subject: TAMITTC
Article-I.D.: cwruecmp.1135

A few weeks ago a cryptic TAMITTC poster appeared on all the doors
of our computer engineering department. Recently I discovered what
it was all about on an obscure campus billboard.

        There Are More Important Things Than Computers

And there was a footnote.

        Like what? People? Oh, those things!

------------------------------

Date: 8 Apr 84 15:53:05-PST (Sun)
From: harpo!ulysses!allegra!don @ Ucb-Vax
Subject: Artificial Big Brother
Article-I.D.: allegra.2388

                AI and criminology

I totally agree with DW's article in net.crypt.  Computer techniques
would be used to keep track of "political criminals".  Middle class
intellectuals are far more vulnerable to this sort of control than are
street criminals and drifters.

Already, right wing organizations use this technology to keep track of
people they consider politically dangerous, and while the government is
not allowed to do this, they have received information from these
organizations under the table.  In some cases, victims are chosen
simply by correlating magazine subscription information.

------------------------------

Date: 9 Apr 84 10:05:30-PST (Mon)
From: hplabs!hao!seismo!brl-vgr!abc @ Ucb-Vax
Subject: Re: Artificial Big Brother
Article-I.D.: brl-vgr.15

But, as with so many things, do you really think that,
if the "good guys" don't build a tool which can be
used for "good" or "evil", the "bad guys" won't build
and use it?

It seems that what is needed is research into methods for
controlling these tools (Computer Science) and research
into new public policies regarding the use and misuse of
such tools (Humanities and Social Science).

Remember: whether the U.S. did it or not, others still
would have developed and deployed nuclear weapons.

------------------------------

Date: 10 Apr 84 13:51:13-PST (Tue)
From: hplabs!tektronix!tekigm!dand @ Ucb-Vax
Subject: Re: Artificial Big Brother
Article-I.D.: tekigm.74

I cannot agree too strongly with Brint Cooper about this. The tool never makes
the wielder any more or less an "evil" person.

If given a choice, I'd rather have such a tool built by the established AI
community for two reasons:
1) The program's existence is published, so people can think about the
   implications and possibly set up systems to reduce the amount of abuse
   the system is used for. As a possible victim of misuse, I can also start
   thinking about preventive measures against unreasonable privacy invasions
   (I personally believe no one even now has any real privacy if someone is
   out to do you in, but that is not germane here.)
2) If such a tool is in the public domain, at least the people it was
   originally designed for, the law-enforcement agencies, would get some
   use out of it. If this tracking system were to be built in a CIA shop or
   an NSA shop, no one outside those organizations would ever know of its
   existence, and thus never be able to use it.

Abuses with such a system are going to be inevitable; the goal for us to set
is to see that the abuses are kept to a minimum, which we can't do if the
system requires "Top Secret/Burn Before Reading" clearance to even know that it
exists.

Lest anyone try to say that the possible abuses of such a system outweigh its
few benefits, remember that Theodore Bundy was convicted with such
evidence as gasoline receipts from the area where one of his victims
disappeared, at the time she disappeared. With such a system, perhaps, Ted
Bundy would not have racked up the score of dead young women that he did.
Such a system might help pinpoint the current Green River Killer in
Washington State, or reduce the predations of the itinerant killers who prey
on anyone they think they can get away with killing. If some shadowy
bureaucrat were out to get you, this system would not be necessary--a judge's
signature is all that is needed to open up the records of your Visa, your
bank, your employer, etc. (Granted, it may not be a legal action on the part
of that judge, but we're already talking about illegal activities, no?)

Finally, if this discussion is going to go on, let's move it to net.politics
or net.misc or net.legal; net.ai is not the proper forum for this discussion.

Dan C Duval
ISI Engineering
Tektronix,Inc

tektronix!tekigm!dand

------------------------------

Date: 12 Apr 1984  16:16 EST (Thu)
From: Cobb%MIT-OZ@MIT-MC.ARPA
Subject: Qualitative Process Theory

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

                               SEMINAR

                          Kenneth D. Forbus


                          April 17 - 4:00PM
                      NE43 - 8th floor Playroom

                  Title:  QUALITATIVE PROCESS THEORY


        Objects move, collide, flow, bend, heat up, cool down,
stretch, break, and boil.  These and other things that happen to cause
changes in objects over time are intuitively characterized as
processes.  To understand common sense physical reasoning and make
programs that interact with the physical world as well as people do we
must understand qualitative reasoning about processes and their
effects.  Qualitative Process theory defines a simple notion of
physical process that appears useful as a language in which to write
dynamical theories.  Reasoning about processes also motivates a new
qualitative representation for quantity in terms of inequalities,
called the Quantity Space.

        This talk will describe the basic concepts of Qualitative
Process theory, two different kinds of reasoning that can be performed
with them, and its implications for causal reasoning.  Several
examples will be presented to illustrate the utility of the theory,
including figuring out that a boiler can blow up and how different
theories of motion may be encoded.


Refreshments at 3:45PM

Host:  Professor Patrick H. Winston

------------------------------

Date: 12 Apr 84 16:31:46 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: III Seminar on AI and VLSI this Coming Thursday (room 423)...

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


                                 I I I SEMINAR

          Title:    Knowledge-Based Aids for VLSI Design
          Speaker:  Tom Mitchell
          Date:     Thursday, April 19, 1984, 1:30-2:30 PM
          Location: Hill Center, ***room 423***

       Knowledge-based  systems  provide  one  possible approach to dealing
    with the complexities of VLSI design.  This talk discusses  the  design
    of  such a system, called VEXED, which aids the user in the interactive
    design of VLSI circuits.  VEXED  is  intended  to  provide  suggestions
    regarding  alternative  implementations  of circuit modules, as well as
    warnings regarding conflicting constraints during design.   The  system
    is composed of a circuit network expert (CIRED), a layout expert (RPL),
    and a digital signal analysis expert (CRITTER).  A prototype version of
    VEXED  has  been implemented, and a second version of the system is now
    under development.

------------------------------

Date: Thu 12 Apr 84 11:10:48-PST
From: Don Walker <WALKER@SRI-AI.ARPA>
Subject: COLING84 information on registration, travel, housing,
         summer school

****************** PLEASE POST, CIRCULATE, AND REDISTRIBUTE *****************

   COLING84, TENTH INTERNATIONAL CONFERENCE ON COMPUTATIONAL LINGUISTICS

COLING84 is scheduled for 2-6 July 1984 at Stanford University, Stanford,
California.  It will also constitute the 22nd Annual Meeting of the
Association for Computational Linguistics, which will host the conference.

Information about the conference, registration, travel, and accommodations,
and about the six summer school courses that will be held during the
preceding week (25-29 June) has just been made available in the form of The
COLING Courier.  For a copy, contact Don Walker, COLING84, SRI
International, Menlo Park, California, 94025, USA [phone: (415)859-3071;
arpanet: walker@sri-ai; telex [650] 334486].  Other requests for information
about the conference should be addressed to Martin Kay, COLING84, Xerox PARC,
3333 Coyote Hill Road, Palo Alto, California 94304, USA [phone:
1-(415)494-4428; arpanet: kay@xerox; telex [650] 1715596].

The summer school, which will be held 25-29 June, consists of week-long
tutorials on six subjects that are central to computational linguistics but
on which instruction is still not routinely available:  LISP AS
LANGUAGE--Brian Smith, Xerox and Stanford; PROLOG FOR NATURAL LANGUAGE
ANALYSIS--Fernando Pereira, SRI International; PARSER CONSTRUCTION
TECHNIQUES--Henry Thompson, Edinburgh; SITUATION SEMANTICS--David Israel,
BBN, & John Perry, Stanford; MACHINE TRANSLATION--Brian Harris, Ottawa, &
Alan Melby, Brigham Young; SOUND STRUCTURE OF LANGUAGE--Mark Liberman, Bell
Labs.  Enrollments are limited to 30 in each tutorial, so register early.

A remarkably rich set of computational facilities will be available at
Coling84 for demonstrating programs and systems.  For information, contact
Doug Appelt, SRI International, Menlo Park, California 94025 [phone: (415)
859-6150; arpanet: appelt@sri-ai; telex: [650] 334486].

You are advised to BOOK EARLY FOR COLING84, since airline reservations will
be much harder than usual to obtain.  Custom Travel Consultants, 2105
Woodside Road, Woodside, CA 94062 [phone (415)369-2105], is responsible for
registration, travel, and housing.  Full information is provided in the
Coling Courier, but call them if time is short.

------------------------------

End of AIList Digest
********************

∂16-Apr-84  1106	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #48
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 16 Apr 84  11:05:48 PST
Date: Mon 16 Apr 1984 09:32-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #48
To: AIList@SRI-AI


AIList Digest            Monday, 16 Apr 1984       Volume 2 : Issue 48

Today's Topics:
  Applications - Business AI Request,
  Natural Language - Metaphor References,
  AI Literature - Book Prices,
  AI Jobs - Noncompetition Agreements,
  AI Computing - Discussion
----------------------------------------------------------------------

Date: Mon, 16 Apr 84 00:47:17 pst
From: syming%B.CC@Berkeley
Subject: AI for Business?

I am soliciting any information on the application of artificial intelligence
and/or expert system techniques in the area of business administration, such
as marketing, finance, production/operations, accounting, organizational
behavior, etc. Any information (e.g., on-going projects, current research,
new ideas, trends ...) is greatly appreciated.

Syming Hwang
School of Business Administration, U.C. Berkeley, 415-642-2070 (ofc)
350 Barrows Hall, U.C. Berkeley, Berkeley, CA 94720
syming%B.CC@Berkeley.ARPA

------------------------------

Date: Mon 16 Apr 84 02:21:16-EST
From: MDC.WAYNE%MIT-OZ@MIT-MC.ARPA
Subject: Poetics Today & Metaphor

   The current issue of *Poetics Today* (V.4, N.2, 1983) is
specially dedicated to the subject of metaphor, and contains four
weighty articles by Umberto Eco, Eddy M. Zemach, Inez Hedges, and
Jan Wojcik. The article by Eco (who is considered by many to be
the foremost living literary theorist and semiotician in the
world) is especially useful.
   Eco provides a glimpse of just how vast is the literature on
metaphor:
   "The 'most luminous, and therefore the most necessary and
frequent' (Vico) of all tropes, the metaphor, defies every
encyclopedic entry. Above all because it has been the object of
philosophical, linguistic, aesthetic and psychological reflection
since the beginning of time. Shibles's 1971 bibliography on the
metaphor records around 3000 titles; and yet, even before 1971,
it overlooks authors like Fontanier, and almost all of Heidegger
and Greimas--and of course does not mention, after the research
in componential semantics, the successive studies on the logic of
natural languages, the work of Henry, Group u of Lieges, Ricoeur,
Samuel Levin, and the latest text linguistics and pragmatics."
   Eco makes some remarks on the subject of metaphor which are
highly pertinent to AI researchers:
   "No algorithm exists for metaphor, nor can a metaphor be
produced by means of a computer's precise instructions, no matter
what the volume of organized information to be fed in. The
success of a metaphor is a function of the sociocultural format
of the interpreting subjects' encyclopedia. In this perspective,
metaphors are produced solely on the basis of a rich cultural
framework, on the basis, that is, of a universe of content that
is already organized into networks of interpretants, which decide
(semiotically) upon the identities and differences of properties.
At the same time this content universe, whose format postulates
itself not as rigidly hierarchized, but rather according to Model
Q, alone derives from the metaphorical production and
interpretation the opportunity to restructure itself into new
modes of similarity and dissimilarity."
   The journal *Poetics Today* is a rich source of speculation
and analysis for anyone exploring the more subtle structures and
processes of natural language understanding.

     --Wayne McGuire <mdc.wayne@MIT-OZ>

------------------------------

Date: Fri, 13 Apr 84 16:14:56 PST
From: Koenraad Lecot <koen@UCLA-CS.ARPA>
Subject: Synapse Books Prices

I remember a message on the AIList that mentioned Synapse Books as
a [...] publisher of AI books.  Have you seen their 1984 catalog?
It contains two new "books" by a certain R.K. Miller, one at $200 and
the other at $485 ...
I knew that the prices of AI books were going up, but this is crazy ...


[I remember an ad for a reprint of key expert systems papers for over
$1000 a year or two ago.  This wasn't a Comtex microfiche collection
(about $2000 per set), just a reprint compendium marketed for corporations
and Wall Street types.  -- KIL]

------------------------------

Date: 14 Apr 1984 11:59-PST
From: fc%USC-CSE@USC-ECL.ARPA
Subject: Noncompetition Agreements

        I don't know about you, but whenever I am given a contract to
sign, I simply cross out anything I'm not willing to agree to and sign
what remains. If they want me, they sign; if they don't, they don't. In
my experience, 95% of the time, they just sign and take what they get.
The other 5% of the time, they try to bargain, and I simply refuse to
yield on the issues that are important to me. At that point we either
agree or don't. The point is, that you should only agree to the things
that seem reasonable to you, and then only if you understand the legal
ramifications of what you are signing.

        Frankly, I wouldn't work for anyone who felt the need to bind me
to them by an exclusive use-of-my-brain contract. First of all, it's my
brain, not theirs. Second, they must be in pretty bad standing with
their employees if they have to use the law to force them to stay.
Companies that are really good don't have to force employees to stay;
the employees stay because they believe in the company and they get the
rewards they seek. Figure out what you want and what you're willing to
give for it, and don't do what you don't believe in just because others
are doing it.
                                        Fred

------------------------------

Date: 13 Apr 84 16:33:52-EST (Fri)
From: Brian Nixon <nixon%toronto.csnet@csnet-relay.arpa>
Subject: Non-competition clauses

At least in Canada, the courts usually take a low view of such clauses
in employment contracts, UNLESS they are severely restricted in scope, e.g.
are for a period of less than 6 months, apply only to taking a job within
the same city, apply only to taking a job within a particular industry.

Brian Nixon, Dept. of Computer Science, Univ. of Toronto.

------------------------------

Date: 15 April 1984 17:53-EST
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Non-competition clauses

An excellent article on ''covenants not to compete'' and other non-
disclosure agreements is Davidson, ''Constructing OEM Nondisclosure
Agreements'', 24 Jurimetrics Journal 127 (1984).  The author notes that
after-employment restrictions are strong medicine, and therefore they
are narrowly construed as to time and subject matter.  In some states
(e.g., California) they are impermissible except in narrow
circumstances (such as the sale of a business and the like).  Likely
the best policy is to consult a lawyer.

If you really wish to steer students away from that company, I would
think the best way would be to name names.  Their employment terms are
hardly a secret in themselves.

  -- Steve

------------------------------

Date: Sun, 15 Apr 1984  10:19 EST
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
Subject: Employment restrictions

Of the responses I've received, both by mail and in person, several have
said that a three-year paid sabbatical wouldn't be so bad (but you would
be prevented from starting a company or doing anything that involves
significant machine resources or a team of people), several have
said that the clause probably wouldn't stand up in court (but it's no
fun to fight a big company in court), and many have said that this policy
would keep lots of good people away from the company in question.

Nobody has told me about any similar clause used by another company.
Some consulting firms require their employees to agree not to jump over
to work for the clients for a year or so, but that still leaves them with
lots of options within the field of AI.

In any event, the company that started all this is now reconsidering
its position and trying to find some less restrictive way to
protect its proprietary information, so the whole issue may soon be
moot.  It's nice to find a company where the lawyers still work for the
researchers, and not vice versa.

  -- Scott Fahlman

------------------------------

Date: 9 Apr 84 22:55:52-PST (Mon)
From: hplabs!hao!cires!nbires!opus!rcd @ Ucb-Vax
Subject: Re: Stolfo's call for discussion
Article-I.D.: opus.346

>One way AI programming is different from much of the programming in other
>fields is that for AI it is often impossible to produce a complete set of
>specifications before beginning to code.
>
>The accepted wisdom of software engineering is that one should have a
>complete, final set of specifications for a program before writing a
>single line of code.  It is recognized that this is an ideal, not
>typical reality, since often it is only during coding that one finds
>the last bugs in the specs.  However, it is held up as a goal to
>be approached as closely as possible.

I submit that these statements are NOT correct in general for non-AI
programs.  Systems whose implementations are not preceded by complete
specifications include those which
        - involve new hardware whose actual capability (e.g., speed) is
          uncertain.
        - are designed with sufficiently new hardware and/or are to be
          manufactured in sufficient quantity that hardware price per-
          formance tradeoffs will change significantly in the course of the
          development.
        - require user-interface decisions for which no existing body of
          knowledge exists (or is adequate) - thus the user interface is
          strongly prototype (read: trial/error) oriented.
as well as the generally-understood characteristics of AI programs.  In
some sense, my criteria are equivalent to "systems which don't represent
problems already solved in some way" - and there are a lot of such
problems.
                                  --
"A friend of the devil is a friend of mine."            Dick Dunn
{hao,ucbvax,allegra}!nbires!rcd                         (303) 444-5710 x3086

------------------------------

Date: 11 Apr 84 20:13:01-PST (Wed)
From: hplabs!tektronix!uw-beaver!teltone!warren @ Ucb-Vax
Subject: Re: Stolfo's call for discussion
Article-I.D.: teltone.252

Unexpectedness and lack of pre-specification occur in many professional
programming environments.  In AI particularly it occurs because
experimentation reveals unexpected results, as in all science.

In hardware (device-driver) code it occurs because the specs lie or
omit important details, or because you make an "alternative
interpretation".

In over-organized environments, where all the details are spelled out
to the nth degree in a stack of documents 8 feet high, unexpectedness
comes when you read the spec and discover the author was a complete idiot
having a very bad day.  I have seen alleged specs that were signed off
by all kinds of high mucky-mucks that are completely, totally, zonkers.
Not just in error, but complete gibberish, having no visible association
with either reality or thought, not to mention the project at hand.
At the very least, they are simply out of date.  Something crucial has
changed since the specs were written.

In business environments, it occurs when the president of the company
says he just changed the way records are to be kept, and besides, doesn't
like the looks of the reports he agreed to several months ago.  What's a
programmer to do?  Tell the boss to shove it?  The single most difficult
kind of programming occurs when  1) The user is your boss (or "has power").
2) The user is fairly stupid.  3) The user/boss is enough of a con
artist to prevent the programmer from leaving.  It is admitted, however,
that the difficulty is not technical, per se, but political.

All the above examples are from my professional experience, which spans
over ten years.  None of the situations is very unusual.  Unexpectedness
is part of our job.  In any case, 90 to 99% of the code in the AI systems
I've seen is much like any other program.  There are parsers, allocators,
symbol tables, error messages, and so on.  I'll let others testify to
the remainder of the code; it's been a while.

                                warren

------------------------------

End of AIList Digest
********************

∂19-Apr-84  1810	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #49
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 19 Apr 84  18:09:02 PST
Date: Thu 19 Apr 1984 16:47-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #49
To: AIList@SRI-AI


AIList Digest            Friday, 20 Apr 1984       Volume 2 : Issue 49

Today's Topics:
  AI Programming - Cfasling Pascal routines into Franz,
  AI Tools - Prolog on Symbolics Machines,
  Expert Systems - Real-Time Simulation,
  Jobs - Noncompetition Agreements,
  AI Programming - Discussion,
  Linguistics - Use of "and",
  Seminar - Puzzles and Permutation Groups,
  Conference - Expert Database Systems Workshop
----------------------------------------------------------------------

Date: 12 Apr 1984 07:52:11-EST
From: Nasa.Langley.Research.Center@CMU-RI-ROVER
Subject: Cfasling Pascal routines into Franz

  In our work with distributed intelligent systems for space teleoperators
and robotics, we have found the "cfasl" function in Franz Lisp to be very
useful, connecting previously-developed Fortran routines to the total
system. However, a need has arisen to use an external Pascal function,
and we have been unable to persuade Franz to accept this in our system.
We have even tried to front-end Pascal modules with Fortran in order to
cfasl, but can't even manage that. Any suggestions from someone who has
done this? We are running Franz opus 38.17 on a *VAX/VMS* 750, not Unix.
We do have the Eunice system.

The heart of the problem is that
I'm using VAX/VMS Pascal, not the Unix/Eunice Pascal. Evidently the VMS
Pascal is not generating global symbols in a way that is visible to the
cfasling functions. In fact, I haven't gotten it to be visible to the VMS
linker when calling Pascal modules from other languages, say Fortran.
Probably if I could get that, I could get the other.

Mailing address is:
      Nancy Orlando
      Mail Stop 152D
      NASA Langley Research Center
      Hampton, VA 23665
Thanks in advance...

Nancy Orlando

------------------------------

Date: Mon 16 Apr 84 16:56:05-CST
From: Oliver Gajek <LRC.Gajek@UTEXAS-20.ARPA>
Subject: Prolog on Symbolics machines

Does anyone know  whether there is  a PROLOG available  for a  Symbolics
Lisp machine?  If so, can you  run it simultaneously with Lisp and  call
it from there? And how does it compare to other implementations?

Thanks,

Oliver.

------------------------------

Date: 17 Apr 1984 21:27:06 EST
From: Perry W. Thorndyke <THORNDYKE@USC-ISI>
Subject: Real-Time Simulation

Response to request for information on AI-based real-time simulation:

We at Perceptronics are developing a real-time simulation of a Navy
tactical decision-making environment for use in an instructional system.
The environment simulates an air-sea battle situation in which the student
must command a ship, utilizing sensors, weapons, maneuvering, and deception
to defend himself against one or more opposing ships.  The battle simulation
opponent simulation must run in real time to present a realistic training
situation. From an instructional perspective, the interesting research
issues involve (1) how to represent the skills associated with real-time
cognition on a time-stressed problem, and (2) how to make the opponent
simulation modifiable under program control by the instructional system
so that exercises can address particular pedagogical objectives.  We are
currently working in GLISP, which sits on top of Franz Lisp on a VAX.
We utilize 4 mb of main memory.

Perry Thorndyke
Perceptronics Knowledge Systems Branch
thorndyke@usc-isi

------------------------------

Date: 17 Apr 1984 21:59:32 EST
From: Perry W. Thorndyke <THORNDYKE@USC-ISI>
Subject: Noncompetition

Response to Fahlman's message on noncompetition clauses:

Scott,

We continually hire AI talent into our for-profit, public company to
conduct R&D on expert systems, surrogate instructors, intelligent human-
machine interfaces, and distributed AI.  Several of our products contain
proprietary hardware-software designs and our market advantage depends
on maintaining a technology edge in those product areas, which include
videodisc/graphics display systems.  Yet we have no such noncompetition
clause, nor have we considered imposing one.  Given that it is a
seller's market for AI talent now, it's hard to believe that any company
could get away with imposing such a policy--assuming that it is even
legally enforceable. My experience in the AI field is that conflict-of-
interest considerations do not extend beyond the term of employment
of the individual, except for non-disclosure of proprietary information.
The policy you cited seems extreme and undesirable, and constitutes
a moral, if not legal, unfair restraint of trade.

Perry Thorndyke
Perceptronics, Inc.
thorndyke@usc-isi

------------------------------

Date: Wed 18 Apr 84 11:22:44-PST
From: WYLAND@SRI-KL.ARPA
Subject: Stolfo's call for discussion

        Your question - "What are the fundamental characteristics
of AI computation that distinguish it from more conventional
computation?" - is a good one.  It is answered, consciously or
unconsciously, by each of us as we organize our understanding of
the field.  My own answer is as follows:

        The fundamental difference between conventional programs
and AI programs is that conventional programs are static in
concept and AI programs are adaptive in concept.  Conventional
programs, once installed, have fixed functions: they do not
change with time.  AI programs are adaptive: their functions and
performance improve with time.

        A conventional program - such as a payroll program, word
processor, etc. - is conceived of as a static machine with a
fixed set of functions, like a washing machine.  A payroll
program is a kind of "cam" that converts the computer into a
specific accounting machine.  The punched cards containing the
week's payroll data are fed into one side of the machine, and
checks and reports come out the other side, week after week.  In
this concept, the program is designed in the same manner as any
other machine: it is specified, designed, built, tested, and
installed.  Periodic engineering changes may be made, but in the
same manner as any other machine: primarily to correct problems.

        AI programs are adaptive: the program is not a machine
with a fixed set of functions, but an adaptive system that grows
in performance and functionality.  This focus of AI can be seen
by examining the topics covered in a typical AI text, such as
"Artificial Intelligence" by Elaine Rich, McGraw-Hill, 1983.
The topics include:

  o  Problem solving: programs that solve problems.
  o  Game playing
  o  Knowledge representation and manipulation
  o  Natural language understanding
  o  Perception
  o  Learning

        These topics are concerned with adaptation, learning, or
any of several names for the same general concept.  This seems to
be the consistent characteristic of AI programs.  The interesting
AI program is one that improves its performance - at solving
problems, playing games, absorbing and responding to questions
about knowledge, etc. - or one that addresses issues associated
with problem solving, learning, etc.

        The adaptive aspect of AI programs implies some
difference in methods used in the programs.  AI programs are
designed for change, both by themselves while running, and by the
original programmer.  As the program runs, knowledge structures
may expand and change in a number of dimensions, and the
algorithms that manipulate them may also expand - and change
THEIR structures.  The program must be designed to accommodate
this change.  This is one of the reasons that LISP is popular in
AI work: EVERYTHING is dynamically allocated and modifiable -
data structures, data types, algorithms, etc.
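
In LISP the growing structures would be s-expressions; the same idea can
be sketched in a few lines of Python (a toy illustration invented for this
note, not part of the original message): a program whose behavior table is
ordinary data that the program itself extends while running.

```python
# A toy "adaptive" responder: its rule table is ordinary data that
# grows at run time, so its behavior tomorrow differs from today's.
rules = {"hello": "hi there"}

def respond(prompt):
    return rules.get(prompt, "I don't know that one yet")

def teach(prompt, answer):
    rules[prompt] = answer        # the program modifies its own rules

print(respond("goodbye"))         # unknown at first
teach("goodbye", "see you")
print(respond("goodbye"))         # behavior has changed
```

The point is only that code and mutable data share one space, so the
installed program need not have a fixed function.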

        Good luck in your endeavors!  It is a great field!

Dave Wyland
WYLAND@SRI

------------------------------

Date: 12 Apr 84 15:51:48-PST (Thu)
From: harpo!ulysses!burl!clyde!akgua!psuvax!burdvax!sjuvax!bbanerje @
      Ucb-Vax
Subject: Re: Use of "and"
Article-I.D.: sjuvax.254

>> There is another way of looking at the statement -
>>  all customers in Indiana and Ohio
>> which seems simpler to me than producing the new phrase -
>>  all customers in Indiana  AND all customers in Ohio
>> instead of doing this why not treat Indiana and Ohio as a new single
>> conceptual entity giving -
>>  all customers in (Indiana and Ohio).
>>
>> This seems simpler to me. It would mean the database would have to
>> allow aggregations of this type, but I don't see that as being
>> particularly problematic.
>>
>> Jim Cowie.

My admittedly inconsequential contribution to this:

(Pardon the Notation!  Here, Indiana and Ohio correspond to sets
of base type customer.  Cλ- denotes set membership and (~) is
intended to denote set intersection.)


All customers in Indiana AND all customers in Ohio seems to want the
following :

    [all customers such that |
        {customer Cλ- Indiana} XOR {customer Cλ- Ohio}]

This seems to be described best as

    [all customers such that |
        customer Cλ- {Indiana U Ohio - (Indiana (~) Ohio)}]

Assuming that no customer can be in Indiana and Ohio simultaneously,
the intersection of the sets would be NULL.  Thus we would have

    [all customers such that |
        customer Cλ- {Indiana U Ohio}]

So far so good.  However, the normal sense of an AND, as I understand
it, corresponds to a set intersection.  The formulation is therefore
counter-intuitive.
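
The disjoint-state observation above is easy to check concretely.  With
hypothetical customer sets (names invented here for illustration), union
and "union minus intersection" (i.e., symmetric difference, the XOR
reading) coincide exactly when the sets are disjoint:

```python
indiana = {"Abel", "Baker"}
ohio = {"Cole", "Dunn"}          # hypothetical customers; disjoint states

union = indiana | ohio
sym_diff = (indiana | ohio) - (indiana & ohio)   # the XOR reading

print(union == sym_diff)   # True: with no overlap the readings agree

# With an overlap ("Baker" somehow counted in both), they diverge:
ohio2 = {"Baker", "Cole"}
print((indiana | ohio2) == ((indiana | ohio2) - (indiana & ohio2)))  # False
```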

I'm not an AI type, so I would appreciate being set straight.  Flames
will be cheerfully ignored.

Regards,


                                Binayak Banerjee
                {allegra | astrovax | bpa | burdvax}!sjuvax!bbanerje

------------------------------

Date: 18 April 1984 15:27-EST
From: Kenneth Byrd Story <STORY @ MIT-MC>
Subject: puzzles and permutation groups

           [Forwarded from the MIT bboard by Laws@SRI-AI.]

DATE:     Thursday, April 19, 1984
TIME:     Lecture, 4:00pm
PLACE:    NE43-512a


   ``Generalized `15-puzzles' and the Diameter of Permutation Groups''

                        Dan Kornhauser
                              MIT

Sam Loyd's famous ``15-puzzle'' involves 15 numbered unit squares free to move
in a 4x4 area with one unit square blank.  The problem is to decide whether a
given rearrangement of the squares is possible, and to find the shortest
sequence of moves to obtain the rearrangement when it is possible.
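
For the original 4x4 puzzle, deciding solvability reduces to the standard
parity criterion: a configuration is reachable from the solved state iff
the parity of the tile permutation matches the parity of the blank's row
counted from the bottom.  A minimal sketch of that check (a toy version,
not the algorithm presented in this talk; 0 marks the blank):

```python
def solvable(tiles):
    """tiles: flat list of 16 ints in row-major order, 0 = blank.
    Classic parity test for the 4x4 fifteen puzzle."""
    perm = [t for t in tiles if t != 0]
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    blank_row_from_bottom = 4 - tiles.index(0) // 4   # 1..4
    # Blank on an odd row from the bottom requires an even number of
    # inversions, and vice versa.
    return (inversions % 2 == 0) == (blank_row_from_bottom % 2 == 1)

solved = list(range(1, 16)) + [0]
loyd = list(range(1, 14)) + [15, 14, 0]   # the famous 14-15 swap
print(solvable(solved), solvable(loyd))
```

Loyd's prize puzzle, asking solvers to undo the 14-15 swap, is exactly
the unsolvable half of this parity split.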

A natural generalization of this puzzle involves a graph with @i(n) vertices,
and @i(k<n) tokens numbered @i(1,...,k) on distinct vertices.  A legal move
consists of sliding a token from its vertex to an adjacent unoccupied vertex.

Wilson (1974) obtained a criterion for solvability for biconnected graphs and
@i(k=n-1).  No polynomial upper bound on the number of moves was given.

We present a quadratic time algorithm for deciding solvability of the general
graph problem.  It is also shown that @i[O(n@+{3})] move solutions always exist
and can be efficiently planned.  Further, @i[O(n@+{3})] is shown to be a
matching lower bound for some graph puzzles.

We consider related puzzles of the Rubik's cube type, in the context of the
general permutation group diameter question.

This is joint work with Gary Miller, MIT, and Paul Spirakis, NYU.

HOST:   Professor Silvio Micali

------------------------------

Date: 12 Apr 84 12:31:31-PST (Thu)
From: harpo!ulysses!allegra!carlo @ Ucb-Vax
Subject: Expert Database Systems Workshop  (long msg)
Article-I.D.: allegra.2406

               Call for Papers and Participation

       FIRST INTERNATIONAL WORKSHOP ON EXPERT DATABASE SYSTEMS

         October 25-27, 1984, Kiawah Island, South Carolina


  Sponsored by

  The Institute of Information Management, Technology, and Policy,
  College of Business Administration,
  University of South Carolina

  In Cooperation With

  Association for Computing Machinery - SIGMOD and SIGART

  IEEE Technical Committee on Data Base Engineering


  Workshop Program

  This workshop  will  address  the  theoretical  and  practical  issues
  involved  in  making databases more knowledgeable and supportive of AI
  applications.  The tools and techniques  of  database  management  are
  being  used  to  represent  and  manage more complex types of data and
  applications environments.

  The rapid growth of online systems containing text, bibliographic, and
  videotex  databases with their specialized knowledge, and the develop-
  ment of expert systems for scientific, engineering and business appli-
  cations  indicate the need for intelligent database interfaces and new
  database system architectures.

  The workshop will bring together researchers  and  practitioners  from
  academia  and industry to discuss these issues in Plenary Sessions and
  specialized Working Groups.  The Program Committee will invite  40  to
  80  people,  based  on submitted research and application papers (5000
  words) and issue-oriented position papers (2000-3000 words).

  Topics of Interest

  The Program Committee invites papers addressing (but not  limited  to)
  the following areas:

  Knowledge Base Systems                 Knowledge Engineering
  environments                           acquisition
  architectures                          representation
  languages                              design
  hardware                               learning

  Database Specification Methodologies   Constraint and Rule Management
  object-oriented models                 metadata management
  temporal logic                         data dictionaries
  enterprise models                      constraint specification
  transactional databases                 verification, and enforcement

  Reasoning on Large Databases           Expert Database Systems
  fuzzy reasoning                        natural language access
  deductive databases                    domain experts
  semantic query optimization            database design tools
                                         knowledge gateways
                                         industrial applications

  Please send five (5) copies of full papers or position papers by  June
  1, 1984 to:

                Larry Kerschberg, Program Chairperson
                College of Business Administration
                University of South Carolina
                Columbia, SC, 29208
                (803) 777-7159 / (803) 777-5766 (messages)
                USENET: ucbvax!allegra!usceast!kersch
                CSNET:  kersch@scarolina

  Submissions will be considered by the Program Committee:

  Bruce Berra, Syracuse University            Sham Navathe, Univ. of Florida
  James Bezdek, Univ. of South Carolina       Erich Neuhold, Hewlett-Packard
  Michael Brodie, Computer Corp. of America   Stott Parker, UCLA
  Janis Bubenko, Univ. of Stockholm           Michael Stonebraker, UC-Berkeley
  Peter Buneman, Univ. of Pennsylvania        Yannis Vassiliou, New York Univ.
  Antonio L. Furtado, PUC-Rio de Janeiro      Adrian Walker, IBM Research Lab.
  Jonathan King, Symantec                     Bonnie L. Webber, U. of Penn.
  John L. McCarthy, Lawrence Berkeley Lab.    Gio Wiederhold, Stanford Univ.
  John Mylopoulos, University of Toronto      Carlo Zaniolo, AT&T Bell Labs




  Authors will be notified of acceptance or rejection by July 16,  1984.
  Preprints  of  accepted  papers  will  be  available  at the workshop.
  Workshop presentations, discussions, and working group reports will be
  published in book form.



    Workshop General Chairman           Local Arrangements Chairperson

    Donald A. Marchand                  Cathie Hughes-Johnson

    Institute of Information Management, Technology and Policy
    (803) 777-5766

    Working Group Coordinator           Industrial Liaison

    Sham Navathe                        Mas Tsuchiya
    Computer and Information Sciences   TRW 119/1842
    University of Florida               One Space Park Drive
    512 Weil Hall                       Redondo Beach, CA 90278
    Gainesville, FL 32611               (213) 217-6114
    (904) 392-7442



  _________________________________________________________________________
             Response Card (Please mail to the address below)

  Name  ___________________________________________ Telephone _____________

  Organization  ___________________________________________________________

  Address  ________________________________________________________________
  City, State,
  ZIP, and Country ________________________________________________________

       Please check all that apply:

       _____ I intend to submit a research paper.
       _____ I intend to submit an issue-oriented position paper.
       _____ I would like to participate in a working group.
             General Topic Areas _________________________________________
       _____ Not sure I can participate, but please keep me informed.

  Subject of paper ______________________________________________________

  _______________________________________________________________________




                   Cathie Hughes-Johnson
                   Institute of Information Management
                   Technology and Policy
                   College of Business Administration
                   University of South Carolina
                   Columbia, SC 29208

------------------------------

End of AIList Digest
********************

[rdg - changed ← to _ above]
∂21-Apr-84  1143	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #50
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 21 Apr 84  11:41:05 PST
Date: Fri 20 Apr 1984 10:43-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #50
To: AIList@SRI-AI


AIList Digest           Saturday, 21 Apr 1984      Volume 2 : Issue 50

Today's Topics:
  AI Programming - Characterization & Software Engineering,
  AI Literature - Computer Database & Metaphor and Sociolinguistics &
    Automated Reasoning Book,
  Expert Systems - DARPA Sets Expert System Goals,
  Administrivia - Creation of Pascal Mailing List,
  Humor - Lady Lovelace's Encryption Algorithm,
  Seminars - Model-Based Vision & Robot Design Issues
----------------------------------------------------------------------

Date: Mon 16 Apr 84 13:52:04-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: RE: AI Programming

I think a major difference between most AI programming and most non-AI
programming is that AI programming usually involves implementing
additional layers of interpretation on top of whatever programming
system is being employed.  Any system that needs to reason about its
own actions, its own assumptions, and so on, requires this extra layer
of interpretation. The kinds of programs that I work on--learning
programs--also need to modify themselves as they run.  This helps
explain why LISP is so popular--it provides very good support for
building your own interpreters: the ability to dynamically define new
symbols, the ability to construct arbitrary binding environments, and
the ability to invoke EVAL on arbitrary expressions.  Perhaps LISP
is best viewed as an interpreter language rather than a programming
language.
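
The "extra layer of interpretation" can be compressed into a few lines
(in Python rather than LISP, and purely as a sketch invented for this
note): expressions are data, and the operator table can be extended while
the interpreter runs, which is the kind of thing EVAL and dynamic symbol
definition make natural.

```python
# A tiny interpreter layered on top of the host language:
# expressions are nested tuples such as ("add", ("mul", 2, 3), 4).
ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def ev(expr):
    if not isinstance(expr, tuple):
        return expr               # atoms evaluate to themselves
    op, *args = expr
    return ops[op](*(ev(a) for a in args))

print(ev(("add", ("mul", 2, 3), 4)))   # 10

# The interpreted layer can grow new "symbols" at run time:
ops["neg"] = lambda a: -a
print(ev(("neg", ("add", 1, 2))))      # -3
```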

  --Tom

------------------------------

Date: 16 Apr 84 6:18:16-PST (Mon)
From: hplabs!hao!seismo!rochester!ritcv!ccivax!band @ Ucb-Vax
Subject: Incomplete specifications ...
Article-I.D.: ccivax.111

In reference to the recent discussion about Software
Engineering and incomplete specifications.

   For any new computer system, specifications at some
point must be incomplete.  A computer program is a
new machine -- it's never been constructed before.
So the final details always remain until the end.  This does
not mean that one does not begin construction.  On the
contrary, it seems to this writer that all too often
construction begins before any specifications are written.
What's needed is a middle path.  Design people need
enough requirements and constraints ( specifications )
to start work.  What should be provided is concise
documentation of the requirements and constraints, as
well as documentation of the unknowns and the risk.
Designers as they work will learn more about what is
and is not possible and this information will refine
the specifications.  But holes will remain.  This kind
of "evolutionary" development has been described by
Carl Hewitt in an article entitled "Evolutionary
Programming" in SOFTWARE ENGINEERING, edited by
H. Freeman and P.M. Lewis II (NY: Academic Press, 1980).

   I submit that any computer system development must
be a risk, and that it can only be developed by proceeding
with incomplete specifications.  The complement to this
is that large projects must be reviewed for viability
as knowledge is gained through this evolutionary
growth.  Sometimes it's better to quit before good money
is wasted.

   There's more to this issue than what is written here.
But it is not correct to hold AI programming up as some
sort of magical paradigm that is not subject to rudimentary
engineering discipline.  Software Engineering may indeed
have much to learn from the AI style of programming,
but programming in general has much to learn from engineering
disciplines also.

        Bill Anderson

    ...!ucbvax!amd70!rocksvax!ritcv!ccivax!band
    ...!{allegra | decvax}!rochester!ccivax!band

------------------------------

Date: Tue 17 Apr 84 09:31:17-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: New File on Dialog-- The Computer Database

          [Forward from the Stanford bboard by Laws@SRI-AI.]

The Computer Database is a new file on Dialog which covers computers,
telecommunications, and electronics.  The file went online in January and
covers material from 1983 to date.  The documentation which comes with
the file has a thesaurus which appears to be very up to date in terminology
for online searching.  The journals indexed include ACM publications, AI,
Industrial Robot, and SIAM and IEEE publications, as well as Infoworld, PC World,
Dr. Dobbs, Byte etc.

[...]

Harry

------------------------------

Date: Wed, 18 Apr 1984 13:18:02 EST
From: FF Bottles of Beer on the Wall,...
      <AXLER%upenn-1100.csnet@csnet-relay.arpa>
Subject: Discounted Books

     A number of books on metaphor and sociolinguistics that I mentioned
in an earlier message are now on sale by their publisher, the University of
Pennsylvania Press.  The sale catalog is available by writing them at
3933 Walnut Street, Philadelphia, PA 19104.  Minimum order is $10.00.

     Among the items available and of interest to AI researchers are:

Sapir & Crocker, "The Social Use of Metaphor" $8.75 (50% off)
Hymes, "Foundations in Sociolinguistics" $6.97 (30% off)
Kirschenblatt-Gimblett, "Speech Play", $6.00 (70% off)
Weinreich, "On Semantics"  $10.50 (70% off)
Labov, "Sociolinguistic Patterns", $10.00 (60% off)
Maranda & Maranda, "Structural Analysis of Oral Tradition", $8.40 (60%off)

  --Dave Axler

------------------------------

Date: 28-Mar-84 12:33:59-CST (Wed)
From: Larry Wos <Wos@ANL-MCS>
Subject: Automated Reasoning

          [Forwarded from the Prolog Digest by Laws@SRI-AI.]

The book, Automated Reasoning:  Introduction and Applications, by
Wos,  Overbeek,  Lusk, and Boyle, is now available from Prentice-
Hall.  It introduces basic concepts by showing how  an  automated
reasoning program can be used to solve various puzzles.  The puz-
zles include the "truthtellers and liars" puzzle that was  exten-
sively  discussed  in  the  Prolog Digest, McCarthy's domino and
checkerboard puzzle, and the billiard ball and balance scale puz-
zle.   The  book  is  written in a somewhat informal style and no
background is required.  It also contains a rigorous treatment of
the  elements of automated reasoning.  The book relies heavily on
examples, includes many exercises, and discusses various applica-
tions  of  automated  reasoning.   The applications include logic
circuit design,  circuit  validation,  research  in  mathematics,
research  in formal logic, control systems, and program verifica-
tion.  Other chapters of the book provide an introduction to Pro-
log  and  to  expert  systems.  The last chapter, "The Art of Au-
tomated Reasoning", gives guidelines for choosing representation,
inference rules, and strategies.

     The book is based on examples actually  solved  by  existing
automated  reasoning  programs.   Certain  of  these programs are
available and portable.  The book can be used as a college  text,
consulted  by  those  who wish to study possible applications, or
simply read by the curious.

It can be ordered directly from Prentice-Hall with a Visa
or MasterCard by calling 800-526-0485; the ISBN
is 0-13-054446-9 for the soft cover.  The soft cover
is $18.95, and the hard cover $28.95.

  -- LW

------------------------------

Date: 17-Apr-84 17:24 PST
From: William Daul  OAD / TYMSHARE / McDonnell Douglas 
      <WBD.TYM@OFFICE-2.ARPA>
Subject: DARPA Sets Expert System Goals

From DEFENSE ELECTRONICS (April 1984):

Among the goals established for DARPA's expert systems technology program are
increased storage capacity and reasoning power that can deal with 10,000 rules
and provide 4,000 rule inferences per second for stand-alone systems and 30,000
rules and 12,000 inferences per second for multiple cooperating expert systems.
The program, part of DARPA's strategic computing initiative, is aimed at
achieving a framework to support battle management applications.  The Air
Force's Rome Air Development Center will be issuing RFPs in nine technical
areas: explanation and presentation capability, ability to handle uncertain and
missing knowledge, fusion of information from several sources, flexible control
mechanisms, knowledge acquisition and representation, expansion of knowledge
capacity and extent, enhanced inference capability, exploiting expert systems
on multiprocessor architectures, and development of cooperative distributed
expert systems.  Multiple contract awards are planned for each area, and one
or two additional awards are planned for complete system development.

------------------------------

Date: Wed, 11 Apr 84 8:48:51 EST
From: "Ferd Brundick (VLD/LTTB)" <fsbrn@Brl-Voc.ARPA>
Subject: Creation of new mailing list

Hi,

A new special interest mailing list called info-pascal has been
created.  Enclosed below is the summary for the list.  If you would
like to be added to the list, please check with your local Postmaster
or send a message to info-pascal-request@brl-voc.

                                        dsw, fferd
                                        Fred S. Brundick
                                        aka Pascal Postman
                                        USABRL, APG, MD.
                                        <info-pascal-request@brl-voc>

     -----------------------------------------------------------

INFO-PASCAL@BRL-VOC.ARPA

   This list is intended for people who are interested in the programming
   languages Pascal and Modula-2.  Discussions of any Pascal/Modula-2 imple-
   mentation (from mainframe to micro) are welcome.

   Archives are kept on SIMTEL20 in the files:
      MICRO:<CPM.ARCHIVES>PASCAL-ARCHIV.TXT    (current archives)
      MICRO:<CPM.ARCHIVES>PASCAL.ARCHIV.ymmdd  (older archives)

   All requests to be added to or deleted from this list, problems, questions,
   etc., should be sent to INFO-PASCAL-REQUEST@BRL-VOC.ARPA.

   Coordinator: Frederick S. Brundick <fsbrn@brl-voc.arpa>

------------------------------

Date: 19 Apr 1984 12:34:36-EST
From: walter at mit-htvax
Subject: Seminar - Lady Lovelace's Encryption Algorithm

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

                 ANNALS OF COMPUTER SCIENCE SEMINAR
                   DATE:  Friday, April 20th, 1984
                   TIME:  Refreshments  12:00 noon
                  PLACE:  MIT AI Lab 8th Floor Playroom

                 LADY LOVELACE'S ENCRYPTION ALGORITHM

                               ABSTRACT

        Znk loxyz iusv{zkx vxumxgsskx }gy g totkzkktzn3iktz{xλ tuhrk}usgt2
        Rgjλ G{m{yzg Gjg Hλxut Ru|krgik2 jg{mnzkx ul znk vukz Ruxj Hλxut.
        Gy g zkktgmkx2 G{m{yzg joyvrgλkj gyzutoynotm vxu}kyy ot sgznksgzoiy.
        ]nkt ynk }gy komnzkkt g{m{yzg loxyz yg} Ingxrky Hghhgmk-y gtgrλzoigr
        ktmotk2 g igri{rgzotm sginotk zngz }gy znk luxkx{ttkx ul znk sujkxt
        iusv{zkx.  Ot komnzkkt luxzλ3z}u2 ynk zxgtyrgzkj g vgvkx ut znk
        ktmotk lxus Lxktin zu Ktmroyn gjjotm nkx u}t |ur{sotu{y tuzky. Ot
        y{hykw{ktz }xozotmy ynk jkyixohkj znk (ruuv( gtj (y{hxu{zotk(
        iutikvzy g iktz{xλ hkluxk znkox osvrksktzgzout ot krkizxutoi
        jomozgr iusv{zkxy .h{z gy lgx gy O qtu}2 nu}k|kx2 ynk tk|kx joj
        gtλznotm }ozn ktixλvzout/. Rgjλ Ru|krgik gtj Hghhgmk ngj g rutm
        gtj iruyk lxoktjynov gtj ynk }gy g jkjoigzkj vgxztkx ot noy }uxq
        }ozn znk gtgrλzoigr ktmotk.  [tluxz{tgzkrλ ynk }gy nkrj hgiq hλ
        gtzo3lksotoyz gzzoz{jky gtj hλ nkx u}t uhykyyout }ozn mgshrotm ut
        nuxyk xgiky. Rgjλ Ru|krgik jokj ul igtikx gz gmk znoxzλ3yo~. Tu}
        zngz λu{|k jkiujkj znoy skyygmk2 rkz-y grr mkz hgiq zu }uxq.

        This fascinating historical discussion and the
        accompanying Graduate Student Lunch will be hosted
        by Dan Carnese and Maria Gruenewald.
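
The abstract above appears to be enciphered by advancing each ASCII
character six positions, with λ standing in for "y" (whose shifted code
would fall past the printable range).  A throwaway decoder under that
assumption (imperfect: "." is ambiguous, since a literal period and an
encoded "(" look alike):

```python
def decode(ciphertext: str) -> str:
    """Undo the shift: each character was advanced six ASCII positions."""
    plain = []
    for ch in ciphertext:
        if ch == "\u03bb":        # lambda stands in for 'y'
            plain.append("y")
        elif ch in " \n.":        # spaces, newlines, and periods pass through
            plain.append(ch)
        else:
            plain.append(chr(ord(ch) - 6))
    return "".join(plain)

print(decode("Znk loxyz iusv{zkx vxumxgsskx }gy"))
```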

------------------------------

Date: 18 Apr 1984  14:38 EST (Wed)
From: Cobb%MIT-OZ@MIT-MC.ARPA
Subject: Seminar - Model-Based Vision

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

                          W. ERIC L. GRIMSON

                         Local Constraints in
               Model Based Recognition and Localization
                           From Sparse Data

                       April 23, 1984   4:00PM
                       NE43-8th floor playroom


  A central characteristic of advanced applications in robotics is the
presence of significant uncertainty about the identities and attitudes
of objects in the workspace of a robot.  The recognition and
localization of an object, from among a set of models, using sparse,
noisy sensory data can be cast as the search for a consistent matching
of the data elements to model elements.  To minimize the computation,
local constraints are needed to limit the portions of the search space
that must be explicitly explored.

  We derive a set of local geometric constraints for both the three
degree of freedom problem of isolated objects in stable positions, and
the general six degree of freedom problem of an object arbitrarily
oriented in space.  We establish that the constraints are complete for
the case of three degrees of freedom, but not for six.  We then show
by combinatorial analysis that the constraints are generally very
effective in restricting the search space and provide estimates for
the number of sparse data points needed to uniquely identify and
isolate the object.  These results are supported by simulations of the
recognition technique under a variety of conditions that also
demonstrate its graceful degradation in the presence of noise.  We
also discuss examples of the technique applied to real data from
several sensory modalities including laser ranging, sonar, and grey
level imaging.


Refreshments:  3:45PM

Host:  Professor Patrick H. Winston

------------------------------

Date: Wed 18 Apr 84 14:43:58-PST
From: PENTLAND@SRI-AI.ARPA
Subject: Seminar - Robot Design Issues

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

WHAT: FOUNDATIONAL ISSUES IN ROBOT DESIGN AND THEIR METHODOLOGICAL CONSEQUENCES
WHO: Stan Rosenschein,  Artificial Intelligence Center, SRI International
SERIES: Issues in Language, Perception and Cognition
WHERE: Room 100, Psychology Dept.
WHEN: Monday April 23, 1:00pm         <- * Note Change *


The design of software which would allow robots to exhibit complex
behavior in realistic physical environments is a central goal of
Artificial Intelligence (AI).  In structuring its approaches to this
problem, AI has over the years been guided by a melange of concepts
from logic, computer programming, and (prominently) certain
pretheoretic intuitions about mental life and its relationship to
physical events embodied in ordinary "folk psychology."  This talk
presents two contrasting views of how information, perception, and
action might be modeled by a robot designer depending on how seriously
he took "folk psychology."  One view takes the ascription of mental
properties to machines quite seriously and leads to a methodology in
which the abstract entities of folk psychology ("beliefs," "desires,"
"plans," "intentions", etc.)  are realized in a one-for-one fashion as
data structures in the robot program. Frequently these data structures
resemble, in certain ways, the sentences of an interpreted logical
language in that they are taken to express the "content" of the
belief, desire, etc.  The alternative view does not assume this degree
of mental structure a priori.  Logic may figure prominently, but it is
used chiefly BY THE DESIGNER to define and reason about the
environment and its relation to desired robot behavior. The talk will
suggest an automata-theoretic approach to the content of information
states which sidesteps many of the presuppositions of the folk
psychology.  The implications of such an approach for a systematic
robot software methodology will be discussed, including the
possibility of "organism compilers."  The thesis that AI's reliance on
folk psychology is, on balance, useful will be left unresolved though
certainly not unquestioned.

------------------------------

End of AIList Digest
********************

∂22-Apr-84  1629	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #51
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 22 Apr 84  16:28:13 PST
Date: Sun 22 Apr 1984 15:06-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #51
To: AIList@SRI-AI


AIList Digest            Sunday, 22 Apr 1984       Volume 2 : Issue 51

Today's Topics:
  AI Tools - Review of LISP Implementations,
  Computational Linguistics - Stemming Algorithms & Survey,
  Linguistics - Use of "and" & Schizophrenic Misuse of Metaphors,
  Correction - Lovelace Encryption Seminar,
  Seminars - Combining Logic and Functional Programming &
    Learning Design Synthesis Expertise
----------------------------------------------------------------------

Date: 20 Apr 84 22:22:44 EST  (Fri)
From: Wayne Stoffel <wes%umcp-cs.csnet@csnet-relay.arpa>
Subject: Review of LISP Implementations

Re: Bill Wong's article on three LISP implementations

He also wrote a series on AI languages that appeared in Microsystems.  All
were 8-bit CP/M implementations.

August 1983, muLisp-80, SuperSoft Lisp, and Stiff Upper Lisp.

December 1983, XLISP, LISP/80, and TLC Lisp.

January 1984, micro-Prolog.

                                W.E. Stoffel

------------------------------

Date: Fri, 20 Apr 84 18:15 EST
From: Ed Fox <fox%vpi.csnet@csnet-relay.arpa>
Subject: Algorithms for word stemming and inverse stemming (generate
         word)?

   [Forwarded from the Arpanet-BBoards distribution by Laws@SRI-AI.]

Please send code, references, comments - about systems which can transform
words to stems and stems to words, in an efficient and effective fashion with
very small tables.  There are a number of stemming algorithms, and some
systems that generate words from root+attribute_information.  I would be
interested in a list of such, and especially of systems that do both in
an integrated fashion.  Preferred are systems that can run under 4.x UNIX.
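
For readers unfamiliar with the flavor of such systems, here is a
deliberately naive suffix-stripping stemmer (a toy sketch, not one of the
systems being requested; serious stemmers such as Porter's use ordered
rule sets with conditions on the remaining stem):

```python
def stem(word):
    """A toy suffix-stripper; real stemmers apply many ordered rules."""
    if word.endswith("ies"):
        return word[:-3] + "y"
    if word.endswith("ing") and len(word) > 5:
        return word[:-3]
    if word.endswith("ed") and len(word) > 4:
        return word[:-2]
    if word.endswith("s") and not word.endswith("ss"):
        return word[:-1]
    return word

for w in ["flies", "walking", "jumped", "cats", "glass"]:
    print(w, "->", stem(w))
```

Inverse stemming (stem plus attributes back to a surface word) needs
morphological tables or rules in the other direction, which is what makes
an integrated system the harder request.
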
   Many thanks, Ed Fox (fox.vpi@csnet-relay)

------------------------------

Date: Thu, 19 Apr 84 15:59:18 est
From: crane@harv-10 (Greg Crane)
Subject: foreign language dbases, linguistic analysis, for lang
         word-proc

  [Forwarded from the Arpanet-BBoards distribution by Laws@SRI-AI.]

Linguists, philologists, humanists etc. --

        Are you using a computer for linguistic analysis? Access of
big foreign language data bases (Toronto Old English Dbase, or the
Thesaurus Linguae Graecae for example)? analysis or storage
of variant reading or versions? dictionary projects?

        We have been doing a lot here, but nobody seems to have any
overall picture of what is being done round about. I would like to
find out, and I think it's time those who are doing much the same thing
started talking. Any ideas on where a lot of work is being done and
how to facilitate communication?

                                        Gregory Crane
                                        Classics Department
                                        Harvard University

------------------------------

Date: Fri 20 Apr 84 20:06:52-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Use of "and"

Come on, folks.   When someone says "my brothers and sisters" they do not mean
the intersection of the two sets.   Aside from its legal meaning of "or" which
I mentioned earlier, the English word "and" has at least two more meanings:
logical conjunction, and straight addition (which means union when applied to
sets).   Though I'm willing to be contradicted, I believe that English usage
prefers to intersect predicates rather than sets.   Namely, "tall and fat
people" can mean people who are both tall and fat (intersection), but "tall
people and fat people" means both the set of people who are tall and the set of
people who are fat (union).
                                        - Richard

------------------------------

Date: 16 Apr 84 9:12:00-PST (Mon)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: Use of "and"
Article-I.D.: uiucdcs.32300023

>>From watrose!japlaice
>>              There are several philosophical problems with treating
>>      `Indiana and Ohio' as a single entity.
>>              The first is that the Fregean idea that the sense of a sentence
>>      is based on the sense of its parts, which is thought valid by most
>>      philosophers, no longer holds true.
>>              The second is that ... `unicorn', `hairy unicorn', `small,
>>      hairy unicorn' ... are all separate entities ...

On the contrary, the sense of "Indiana and Ohio" is still based on the senses
of "Indiana", "and" and "Ohio", if only we disambiguate "and". The ambiguity
of conjunction is well-known: the same word represents both a set operator and
a logical operator (among others). Which set operator? The formula
        X in ({A} ANDset {B})  <=  (X in {A}) ANDlog (X in {B})
allows ANDset to be either intersection or union. It is only our computational
bias that leads us to confuse the set with the logical operator. The formula
        X in ({A} ANDset {B})  <=>  (X in {A}) ANDlog (X in {B})
forces ANDset to be an intersector.
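
The distinction can be made concrete in a few lines of Python (a sketch with
made-up customer sets; only the two-way version pins ANDset down to
intersection):

```python
# Hypothetical customer sets for Indiana and Ohio.
indiana = {"Acme", "Blix"}
ohio = {"Blix", "Cora"}

union = indiana | ohio          # one candidate for ANDset
intersection = indiana & ohio   # the other candidate

# One-way:  X in (A ANDset B)  <=  (X in A) ANDlog (X in B)
# holds whichever candidate we pick for ANDset:
for x in indiana & ohio:
    assert x in union
    assert x in intersection

# Two-way (<=>) forces ANDset to be intersection; union fails it:
assert all((x in intersection) == (x in indiana and x in ohio)
           for x in indiana | ohio)
assert not all((x in union) == (x in indiana and x in ohio)
               for x in indiana | ohio)
```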

But we need only distinguish ANDset and ANDlog to preserve Fregean
compositionality; for that, it's immaterial which ANDset we adopt. In any
case, Bertrand Russell's 1905 theory of descriptions (as I read it) seems to
refute strict compositionality (words are meaningless in isolation -- they
acquire meaning in context).

Secondly, I don't recall Quine saying that `unicorn', `hairy unicorn', `small,
hairy unicorn' should all be indistinguishable. They may have the same referent
without having the same meaning.

                                        Marcel Schoppers
                                        U of Illinois @ Urbana-Champaign
                                        { ihnp4 | pur-ee } ! uiucdcs ! marcel

------------------------------

Date: 17 Apr 84 7:52:10-PST (Tue)
From: harpo!ulysses!gamma!pyuxww!pyuxss!aaw @ Ucb-Vax
Subject: Re: metaphors
Article-I.D.: pyuxss.311

[audi alteram partem]

For some interesting study of the understanding of metaphors of the type
you refer to, look into the work of Silvano Arieti (psychiatrist, NYU) on
schizophrenic misuse of metaphors.  It offers some deep insights into the
relationship between metaphor and logic.
                        {harpo,houxm,ihnp4}!pyuxss!aaw
                        Aaron Werman

------------------------------

Date: 21 Apr 1984 17:00-PST
From: fc%USC-CSE@USC-ECL.ARPA
Subject: Lovelace Encryption Seminar

With regard to coded messages, I think natural stupidity has replaced
artificial intelligence.  Fortunately, I have a
program to deal with Walter's kind.  So that nobody has to run their own
programs, here's an approximate translation:

                       ------------------------
The first computer programmer was a nineteenth-century noblewoman,
Lady Augusta Ada Byron Lovelace, daughter of the poet Lord Byron.
As a teenager, Augusta displayed astonishing prowess in mathematics.
When she was eighteen, Augusta first saw Charles Babbage's analytical
engine, a calculating machine that was the forerunner of the modern
computer.  In eighteen forty-two, she translated a paper on the
engine from French to English, adding her own voluminous notes.  In
subsequent writings she described the "loop" and "subroutine"
concepts a century before their implementation in electronic
digital computers (but as far as I know, however, she never did
anything with encryption).  Lady Lovelace and Babbage had a long
and close friendship, and she was a dedicated partner in his work
with the analytical engine.  Unfortunately she was held back by
anti-feminist attitudes and by her own obsession with gambling on
horse races.  Lady Lovelace died of cancer at age thirty-six.  Now
that you've decoded this message, let's all get back to work.
                     ---------------------------

Please, Walter, next time you want to get the message out:
#@(& $%& $#(& (↑$% ↑&(#$&%! (& %($( (* ↑&*(*% &%& @&&#&& $#&$&%!
                                        Fred

[The responsibility for forwarding the previous message, and this one,
to the AIList readership rests with me.  -- KIL, AIList-Request@SRI-AI.]

------------------------------

Date: Wed 18 Apr 84 14:13:31-PST
From: SHORT%hp-labs.csnet@csnet-relay.arpa
Subject: Seminar - Combining Logic and Functional Programming

          [Forwarded from the SRI-AI bboard by Laws@SRI-AI.]

                          JOSEPH A. GOGUEN
                         SRI International

COMBINING LOGIC AND FUNCTIONAL PROGRAMMING -- WITH EQUALITY, TYPES, MODULES
AND GENERICS TOO!

         Hewlett Packard Computer Colloquium - April 26, 1984

This joint work with J. Meseguer shows how to extend the paradigm of logic
programming with some features that are prominent in current programming
methodology, without sacrificing logical rigor or efficient implementation.
The first and most important of these features is functional programming;
full logical equality provides an elegant way to combine the power of Prolog
(with its logical variables, pattern matching and automatic backtracking)
with that of functional programming (supporting functions and their
composition, as well as strong typing and user definable abstract data types).
An interesting new feature that emerges here is a complete algorithm for
solving equations that contain logical variables; this algorithm uses
"narrowing", a technique from the theory of rewrite rules.  The underlying
logical system here is many-sorted Horn clause logic with equality.  A
useful refinement is "subsorts", which can be seen as an ordering relation
on the set of sorts (usually called "types") of data.  Finally, we provide
generic modules by using methods developed in the specification language
Clear.  These features are all embedded in a language called Eqlog; we
illustrate them with a program for the well-known Missionaries and Cannibals
problem.
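
The abstract's central algorithmic idea, solving equations that contain
logical variables by narrowing, can be illustrated with a toy.  This is a
hypothetical Python sketch over Peano-style rewrite rules for addition, not
Eqlog's actual implementation; it solves  add(X, s(0)) = s(s(0)),
i.e. X + 1 = 2:

```python
import itertools

_fresh = itertools.count()

def is_var(t):
    return isinstance(t, str)          # variables are strings; terms are tuples

def walk(t, s):
    # Follow variable bindings in substitution s.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if a[0] != b[0] or len(a) != len(b):
        return None
    for x, y in zip(a[1:], b[1:]):
        s = unify(x, y, s)
        if s is None:
            return None
    return s

# Rewrite rules for Peano addition:  add(0,Y) -> Y,  add(s(X),Y) -> s(add(X,Y))
ZERO = ('0',)
def S(t): return ('s', t)
RULES = [(('add', ZERO, 'Y'), 'Y'),
         (('add', S('X'), 'Y'), S(('add', 'X', 'Y')))]

def rename(rule):
    # Give the rule's variables fresh names for each use.
    n = next(_fresh)
    def r(t):
        return t + str(n) if is_var(t) else (t[0],) + tuple(r(a) for a in t[1:])
    lhs, rhs = rule
    return r(lhs), r(rhs)

def narrow(term, s):
    # Narrowing step: unify a rule's lhs with a subterm (binding logical
    # variables along the way) and replace that subterm by the rule's rhs.
    term = walk(term, s)
    if is_var(term):
        return
    for rule in RULES:
        lhs, rhs = rename(rule)
        s2 = unify(term, lhs, s)
        if s2 is not None:
            yield rhs, s2
    for i in range(1, len(term)):
        for new_arg, s2 in narrow(term[i], s):
            yield term[:i] + (new_arg,) + term[i + 1:], s2

def solve(lhs, rhs, s=None, depth=6):
    # Enumerate substitutions solving the equation lhs = rhs.
    s = {} if s is None else s
    s2 = unify(lhs, rhs, s)
    if s2 is not None:
        yield s2
    if depth > 0:
        for t2, sn in narrow(lhs, s):
            yield from solve(t2, rhs, sn, depth - 1)

def resolve(t, s):
    # Apply a substitution all the way down.
    t = walk(t, s)
    return t if is_var(t) else (t[0],) + tuple(resolve(a, s) for a in t[1:])

# Solve  add(X, s(0)) = s(s(0)):
solution = next(solve(('add', 'X', S(ZERO)), S(S(ZERO))))
```

With these rules, `resolve('X', solution)` comes out as `('s', ('0',))`,
i.e. X = 1, found by two narrowing steps rather than by mere rewriting.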

Thursday, April 26, 1984                 4:00 p.m.

Hewlett Packard Laboratories
Computer Research Center
1501 Page Mill Road
Palo Alto, CA 94304
5M Conference Room

------------------------------

Date: 19 Apr 84 13:28:14 EST
From: Michael Sims  <MSIMS@RUTGERS.ARPA>
Subject: Seminar - Learning Design Synthesis Expertise

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


Learning  Design  Synthesis  Expertise  by  Harmonizing  Behaviors with
                            Specifications

Speaker:      Masanobu Watanabe <Watanabe@Rutgers.Arpa>
              NEC Corporation, Tokyo, Japan
              Visiting Researcher, Rutgers University

Series:       Machine Learning Brown Bag Seminar
Date:         Wednesday, April 25, 1984, 12:00-1:30
Location:     Hill Center, Room 254


       VEXED is an expert system which supports interactive circuit design.
    VEXED provides suggestions  regarding  alternative  implementations  of
    circuit modules, as well as warnings regarding conflicting constraints.
    The   interactions  between  a  human  designer  and  the  system  give
    opportunities for the system to learn expertise in design synthesis  by
    monitoring  the  human  designer's  response  to  advice offered by the
    system.  From this point of view, there are two interesting cases.  One
    occurs  when  the  designer  ignores the advice of the system.  Another
    occurs when the system cannot provide any advice but the human designer
    can continue his own design.

       The system has to learn as many things as possible  by  analyzing  a
    single  precious  example,  because  it  is difficult for the system to
    obtain many examples from which to form  a  particular  concept.    The
    problem space in the module decomposition process can be viewed as one
    whose states are sets of modules and whose operators are  what  will
    be  called  implementation  rules.    This  talk  discusses  the
    implementation rule acquisition task, which is intended to formulate an
    implementation rule at an appropriate level of generality by monitoring
    a   designer's   circuit   implementation.    This  task  is  to  learn
    implementation rules (a kind of operator,  though  not  quite  like  LEX's
    operators),  whereas  LEX's  task  is  to learn heuristics which serve to
    guide the choice of useful operators.

------------------------------

End of AIList Digest
********************

∂24-Apr-84  2250	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #52
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 24 Apr 84  22:50:09 PST
Date: Tue 24 Apr 1984 21:49-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #52
To: AIList@SRI-AI


AIList Digest           Wednesday, 25 Apr 1984     Volume 2 : Issue 52

Today's Topics:
  AI Tools - Another Microcomputer Lisp,
  Linguistics - Metaphors & Use of "And",
  Journal Announcement --  Data and Knowledge Engineering,
  Seminar - Nondeterminism and Creative Analogies
----------------------------------------------------------------------

Date: Sun 22 Apr 84 22:11:14-PST
From: Sam Hahn (Samuel@Score)
Reply-to: SHahn@SUMEX-AIM.ARPA
Subject: Another microcomputer Lisp

In line with the previous mentions of microcomputer implementations of Lisp,
how about this pointer:

I saw in the current (May) issue of Microsystems an advertisement for
Waltz Lisp, from ProCode International.  "Waltz Lisp is not a toy.  It is the
most complete microcomputer Lisp, including features previously available only
in large Lisp systems.  In fact, Waltz is substantially compatible with Franz
... and is similar to MacLisp and Lisp Machine Lisp."

Does anyone know anything about Waltz?  How about a review?

[further claims:        functions of type lambda, nlambda, lexpr, macro
                        built-in prettyprinting and formatting
                        user control over all aspects of the interpreter
                        complete set of error handling and debugging functions
                        over 250 functions in total                     ]

They're at P.O. Box 7301, Charlottesville, VA  22906.

------------------------------

Date: 17 Apr 84 17:06:46-PST (Tue)
From: harpo!ulysses!burl!clyde!watmath!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: metaphors
Article-I.D.: dciem.861

There is a very large literature on metaphor. As a start, try
A. Ortony (Ed.) Metaphor and Thought. New York: Cambridge U Press, 1979.

A new journal called "Metaphor" is being started up with first issue
probably in Jan 1985.  Sorry, I don't have ordering information.

In AI, check out the work of Carbonell.

Once you start getting a few leads, you will be overwhelmed by studies.

Martin Taylor
{allegra,linus,ihnp4,uw-beaver,floyd,ubc-vision}!utzoo!dciem!mmt

------------------------------

Date: 18 Apr 84 9:22:00-PST (Wed)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: metaphors
Article-I.D.: uiucdcs.32300025

You might like to think about partial matching as a step toward analogical
or metaphorical reasoning. Try the following:

Fox, MS and Mostow, DJ  Maximal consistent interpretations of errorful data in
        hierarchically modelled domains. IJCAI-77, 165ff.
Kline, PJ  The superiority of relative criteria in partial matching and
        generalization. IJCAI-81, 296ff

Perhaps also check the growing literature on abductive reasoning, hypothesis
formation, disambiguation, categorization, diagnosis, etc. Some papers I found
most interesting:

Carbonell, JR and Collins, AM  Natural semantics in Artificial Intelligence.
        IJCAI-73, 344ff. { the SCHOLAR system }
Collins, A  et al   Reasoning from incomplete knowledge. In BOBROW & COLLINS's
        book "Representation and Understanding", Academic Press, NY (1975).
        { more on SCHOLAR }
Pople, HE  On the mechanization of abductive logic. IJCAI-73, 147ff



In NL research the role of expectations has become important to expedite
disambiguation. Includes use of attention focusing. Some very well-known work
at Yale on this. See eg papers by Riesbeck and Schank in the book by Waterman
& Hayes-Roth ('78), and by Schank & DeJong in Machine Intelligence 9. Lots of
other work too. Happy wading!

                                        Marcel Schoppers
                                        { ihnp4 | pur-ee } ! uiucdcs ! marcel

------------------------------

Date: 23-Apr-84 21:36 PST
From: Kirk Kelley  <KIRK.TYM@OFFICE-2.ARPA>
Subject: Re: Use of "and"

Hmmm, perhaps a friendly retrieval expert should accept statements from the user
like "Dont tell ME how to think!", deduce that there is some ambiguity of
interpretation over the meaning of a request, and ask for explicit
disambiguation of the troublesome operators after each future request, until the
user decides to pick and live with a single unambiguous interpretation.

 -- kirk

------------------------------

Date: Sun 22 Apr 84 23:24:45-PST
From: Janet Coursey <JVC@SU-SCORE.ARPA>
Subject: "and"

William Gass is expansive but probably incomplete in examining functional
uses of "and" in written literature.  He finds these uses and meanings:  a
conditional, a conjunction, adverbial, to balance or coordinate, finally, in
particular or above all, joint dependency of truth value, in addition,
following in time, following in space, next, equally true, increased emphasis,
sum or total, equivalence of interpretation or "that is to say", to condense,
to skip, suddenness in time, suddenness in space, consequence and cause...
More uses and their wonderful subtleties are presented in the article;
they are more varied than the AIList discussion has yet revealed.
The author is the David May Distinguished University Professor in the
Humanities at Washington University.

Gass, William H.  "And."  Harper's.  February, 1984.

------------------------------

Date: 19 Apr 84 19:01:54-PST (Thu)
From: hplabs!tektronix!ogcvax!metheus!howard @ Ucb-Vax
Subject: Re: Use of "and" - (nf)
Article-I.D.: metheus.237

The "Indiana & Ohio" problem can be explained by a feature of human language
processing which goes on all the time, although we are not often consciously
aware of it.  I refer, of course, to the rejection of contradictory, unlikely,
or impossible interpretations.

The reason we interpret "all customers in Indiana and Ohio" to mean "all
customers in Indiana and *all customers in* Ohio" is that the seemingly
logical interpretation is contradictory and cannot possibly refer to any
customers (regardless of what is in the database).  It is interesting to
note in this connection that some oriental forms of logic require that a
pair of examples be given for each set of things to be described, one of a
thing in the set, the other of a thing out of the set.  This prevents
wasting time with arguments based on the null set, like "All purple cows
made out of neutrinos can fly; all animals that can fly have wings; therefore
all purple cows made out of neutrinos have wings".  An example syllogism:
"Where there is smoke, there is fire.  Here, there is smoke: like in a kitchen,
unlike in a lake.  Therefore, here there is fire."

This rejection is extremely sophisticated, and includes, for example, infinite
loop detection.  An example: how many people would take the obvious "logical"
interpretation of the instructions "Lather. Rinse. Repeat." to be the correct
one?  We all automatically read this as "Lather. Rinse. Repeat the previous
two instructions once." because the other reading doesn't make physical sense.
How many people ever had to THINK about that, consciously, at all?

Also, it is customary to be able to delete redundant or implied
information from a sentence.  Since the three words between stars above are
somewhat redundant, and can be deleted without affecting the only reasonable
interpretation of the phrase, it should be O.K. to delete them.
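
The rejection process described above can be sketched in a few lines of
Python (a toy, with a hypothetical one-state-per-customer database):

```python
# Toy database (hypothetical): each customer is in exactly one state.
db = {"Acme": "IN", "Blix": "OH", "Cora": "CA"}

def customers_in(db, states):
    """Interpret 'customers in <states joined by "and">'."""
    # Literal (intersection) reading: customers in *every* listed state.
    literal = {c for c, st in db.items() if all(st == s for s in states)}
    if literal or len(states) == 1:
        return literal
    # The literal reading is contradictory (one state per customer),
    # so reject it and fall back to the union reading.
    return {c for c, st in db.items() if st in states}
```

Here `customers_in(db, {"IN", "OH"})` yields the union reading, because the
intersection reading cannot possibly refer to any customer.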

Just more fat on the fire (my, how it sizzles!) from:

        Howard A. Landman
        ogcvax!metheus!howard

------------------------------

Date: Mon, 23 Apr 84 10:40:05 cst
From: Peter Chen <chen%lsu.csnet@csnet-relay.arpa>
Subject: Announcing a new journal --  DATA & KNOWLEDGE ENGINEERING


TITLE OF THE JOURNAL:

    DATA & KNOWLEDGE ENGINEERING

PUBLISHER:

    North-Holland

OBJECTIVES AND COVERAGE:

    Although database systems and knowledge systems have their differences,
they share many common principles.  For example, both are interested in the
representations of real-world phenomena.  Therefore, it is beneficial to
have a common forum for database and knowledgebase systems.

    This new journal will bring together the new advances in database and
knowledgebase areas to the attention of researchers, designers, managers,
administrators, and users.  It will focus on new techniques, tools,
principles, and theories of constructing successful databases or
knowledgebases.  The journal will cover (but not be limited to) the
following topics:

    Representation of Data or Knowledge
    Architecture of Database or Knowledgebase Systems
    Construction of Data/Knowledge Bases
    Applications of Data/Knowledge Bases
    Case Studies and Management Issues

    Besides these technical topics, the journal will also have columns on
conference reports, calendars of events, book reviews, etc.


CALL FOR PAPERS:

    Original papers in the field of data & knowledge engineering are
welcome.  In the cover letter, the author is required to declare the
originality of the manuscript (i.e., that no similar version of the
manuscript has been published or submitted elsewhere) and to agree to
transfer the copyright to the publisher once the paper is accepted.

Please submit 5 copies of your manuscript to one of the Associate
Editors in your specialty field, or to the regional editor.  Or, if
you prefer, mail it directly to the Editor-in-Chief.

The following are the addresses of the editors:

(1) Editor-in-Chief:
    Prof. Peter Chen
    Dept. of Computer Science
    Louisiana State University
    Baton Rouge, LA 70803-4020
    (chen%lsu.csnet@csnet-relay.arpa)
    (CSNET: chen@lsu)
    Tel: (504) 388-2482

(2) Associate Editors:

  (a) Data Engineering:

      Prof. Wesley Chu
      Dept. of Computer Science
      U.C.L.A.
      Los Angeles, CA 90024

      Prof. Jane Liu
      Dept. of Computer Science
      University of Illinois
      1304 West Springfield Rd.
      Urbana-Champaign, IL 61801

  (b) Knowledge Engineering:

      Dr. Donald Walker
      Natural-Language and Knowledge-Resource Systems
      SRI International
      Menlo Park, CA 94025
      (During Dr. Walker's transition from SRI International
       to Bell Communications Research, manuscripts should be
       sent to the Editor-in-Chief during the period
       4/15/84 to 10/15/84.)

(3) Regional Editor for Europe:
    Prof. Reind van de Riet
    Dept. of Math. and Computer Science
    Free University
    1081 HU Amsterdam
    The Netherlands


PUBLICATION DATE:

    The journal will be published quarterly, and
    the first issue is planned for the last quarter of 1984.

FOR FURTHER INFORMATION, INSTITUTIONAL SUBSCRIPTION, OR A FREE SAMPLE COPY:

    Please contact the publisher:
    (1) In the USA/Canada:
        Elsevier Science Publishing Co., Inc.
        P.O. Box 1663
        Grand Central Station
        N.Y., N.Y. 10163
    (2) In all other countries:
        North-Holland
        P.O. box 1991
        1000 BZ Amsterdam
        The Netherlands

FOR SPECIAL PERSONAL SUBSCRIPTION RATE:
     Please contact the Editor-in-Chief.

FOR SERVING AS A REFEREE:
     Please send a short note to the Editor-in-Chief or to
     one of the editors, indicating your specialties.

  --Peter Chen (CSNET mailbox: chen@lsu)
              (chen%lsu.csnet@csnet-relay.arpa)

------------------------------

Date: 23 Apr 1984  12:32 EST (Mon)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Nondeterminism and Creative Analogies

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

                     The Copycat Project:
        An Experiment in Nondeterminism and Creative Analogies

                        Doug Hofstadter

                         AI Revolving Seminar
           Wednesday    4/25    4:00pm  8th floor playroom

A micro-world is described, in which many analogies involving strikingly
different concepts and levels of subtlety can be made.  The question
"What differentiates the good ones from the bad ones?" is discussed,
and then the problem of how to implement a computational model of the
human ability to come up with such analogies (and to have a sense for
their quality) is considered.  A key part of the proposed system, now
under development, is its dependence on statistically emergent properties
of stochastically interacting "codelets" (small pieces of ready-to-run
code created by the system, and selected at random to run with probability
proportional to heuristically assigned "urgencies").  Another key element
is a network of linked concepts of varying levels of "semanticity", in
which activation spreads and indirectly controls the urgencies of new
codelets.  There is pressure in the system toward maximizing the degree
of "semanticity" or "intensionality" of descriptions of structures, but
many such pressures, often conflicting, must interact with one another,
and compromises must be made.  The shifting of (1) perceived boundaries
inside structures, (2) descriptive concepts chosen to apply to structures,
and (3) features perceived as "salient" or not, is called "slippage".
What can slip, and how, are emergent consequences of the interaction
of (1) the temporary ("cytoplasmic") structures involved in the analogy
with (2) the permanent ("Platonic") concepts and links in the conceptual
proximity network, or "slippability network".  The architecture of this
system is postulated as a general architecture suitable for dealing not
only with fluid analogies, but also with other types of abstract perception
and categorization tasks, such as musical perception, scientific theorizing,
Bongard problems and others.
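
The urgency-proportional selection of codelets that the abstract describes
amounts to weighted random choice; a minimal Python sketch (the codelet names
and urgencies here are made up):

```python
import random
from collections import Counter

# Hypothetical codelets with heuristically assigned urgencies.
codelets = {"bond-scout": 5.0, "group-builder": 2.0, "breaker": 1.0}

def pick_codelet(codelets):
    # Select one codelet at random, with probability proportional to urgency.
    names = list(codelets)
    return random.choices(names, weights=[codelets[n] for n in names], k=1)[0]

# The "statistically emergent" behavior: over many picks, the run
# frequencies track the urgency ratios (here 5 : 2 : 1).
random.seed(0)
counts = Counter(pick_codelet(codelets) for _ in range(8000))
```

In the system sketched above, spreading activation would continually adjust
these urgencies, so the population of codelets actually run shifts as the
analogy develops.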

------------------------------

End of AIList Digest
********************

∂28-Apr-84  1704	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #53
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 28 Apr 84  17:03:25 PST
Date: Sat 28 Apr 1984 15:41-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #53
To: AIList@SRI-AI


AIList Digest            Sunday, 29 Apr 1984       Volume 2 : Issue 53

Today's Topics:
  References - AI and Legal Systems,
  Linguistics - "Unless" & "And" & Metaphors,
  Jobs - Noncompetition Clauses,
  Seminars - System Identification & Chunking and R1-SOAR
----------------------------------------------------------------------

Date: Wed, 25 Apr 84 19:34 MST
From: Kip Cole <KCole@HIS-PHOENIX-MULTICS.ARPA>
Subject: Pointers to AI and Legal Systems

Some time ago there was a request for pointers to references on Legal
Information Systems and AI.  I have the following which I can recommend:

1.  Deontic Logic, Computational Linguistics & Legal Info.  Systems.
Martino ed., published by North Holland.

2.  AI and Legal Information Systems.  Campi ed.  published by North
Holland.

Both books are papers presented at a conference in Italy on said topics.

Kip Cole, Honeywell Australia.

------------------------------

Date: Wed 25 Apr 84 16:51:30-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Meanings of "Unless"

Multiple meanings for English connectives:

I once read a paper in which it was seriously alleged that the word "unless"
has in excess of 4,000 (that's four thousand) potential logically distinct
meanings when used in writing an English law.   Sorry, I don't have the
reference, nor can I remember very many of the meanings.
                                                                - Richard

------------------------------

Date: 20 Apr 84 12:52:40-PST (Fri)
From: harpo!ulysses!allegra!princeton!eosp1!robison @ Ucb-Vax
Subject: Re: Customers in Ohio and Indiana...
Article-I.D.: eosp1.808

One point of view seems to have been neglected in this discussion.
Suppose we build programs smart enough to "realize" that 'Ohio and Indiana'
really means 'Ohio <inclusive or> Indiana'.  Then what happens to the poor
user who really means 'Ohio AND Indiana'??  Suppose the original poor user
in this story had been trying to weed out duplicate accounts?

It seems to me that the best you can do is:
        (a) Make a semantic decision based upon a much larger context of
        what the user is doing, or:
        (b) Catch the ambiguity and ask the user to clarify.  We can
        deduce from the original story that many users will become furious if
        asked to clarify, by the way.
                                        - Toby Robison (not Robinson!)
                                        allegra!eosp1!robison
                                        decvax!ittvax!eosp1!robison
                                        princeton!eosp1!robison

------------------------------

Date: 19 Apr 84 6:53:00-PST (Thu)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: metaphors
Article-I.D.: uicsl.15500032

Here are some sources for metaphor:

1. A book edited by Andrew Ortony. The title is
Metaphor and Thought.  There are several good articles in this book, and I
recommend it as a good place to start, but not as the last word.

2. The Psychology Dept. at Adelphi University has been sporadically
putting out a mimeo entitled: The Metaphor Research Newsletter.  The latest
edition (which arrived today) indicates that as of January 1985 it will
become a full fledged journal called Metaphor, published by Erlbaum.

3. Dedre Gentner (of BBN) has been doing assorted work on metaphor.

4. Metaphors We Live By, by George Lakoff and Mark Johnson, is fun to read.
They have some good ideas, but they do tend to make too big a deal out of
them.  I think its worth reading.


As far as your claim that "man is a BIC pen" is a "bad" metaphor, I
tend to shy away from such a coarse-grained term.  For me, metaphors
may be more and less apt, more and less informative, more and less
novel, more and less easily understood, etc.  In this particular
example human beings are so complex that there is almost no object that
they cannot be compared to -- however strained the interpretation may
be.  BIC pens are well known (thanks to simple construction and a good
advertising agency) for their reliability and being able to withstand
unreasonable punishment (like being strapped to the heel of an Olympic
figure skater).  Similarly, humankind throughout the ages has
successfully held up under all kinds of evolutionary torture, yet we
continue (as a species) to function.  Now this interpretation may seem
a little bizarre to you, but to me it seemed to come almost
instantaneously and quite naturally.  Can you truly say it is "bad?"

Even an example as silly sounding (at first) as "telephones are like
grapefruits" yields to the great creative power of the human mind.  Despite
their simple outer appearance, they both conceal a more complex inner
structure (which, as a youngster, I delighted to dissect).  Both are
"machines" for reproducing something -- the telephone reproduces sounds,
while the grapefruit reproduces grapefruits (this one admittedly took a few
seconds more to think of).  So what's a "bad" metaphor?


I would love to continue this discussion with interested parties privately,
so as not to take up space in the notesfile.  USENET mail can reach me at
...!uiucdcs!uicsl!dinitz

-Rick Dinitz

------------------------------

Date: 21 Apr 84 12:37:35-PST (Sat)
From: hplabs!hao!cires!nbires!opus!rcd @ Ucb-Vax
Subject: Re: Non-competition clauses
Article-I.D.: opus.386

A question on non-competition clauses of the sort in which you agree, if
you leave a particular job, not to work in that field (or geographical
area), etc.: I once heard that they were essentially not enforceable, the
given reason being that there are certain legal rights you can't give up
(called "unconscionable clauses" in a contract), and that somehow "giving
up the right to make a living (in your profession)" fell into this class.
I don't know if this is actually true - I'd like to hear a qualified
opinion from someone who understands the law or who has been through a case
of this sort.

In any event, I think it's pretty shoddy for an employer to make such
requests of an employee - this is going a long way beyond assigning patent
rights while you're employed and not disclosing company secrets.  If the
employer mistrusts you that much, can you trust him?  I also think it's
foolish to agree in writing to something that you don't accept, on the
basis that you don't think they'll use it or that it isn't enforceable.
Don't bet against yourself!

...Are you making this up as you go along?              Dick Dunn
{hao,ucbvax,allegra}!nbires!rcd                         (303) 444-5710 x3086

------------------------------

Date: 22 Apr 84 3:09:20-PST (Sun)
From: decvax!cca!ima!inmet!andrew @ Ucb-Vax
Subject: Re: Non-competition clauses
Article-I.D.: inmet.1313

A few years back, I made the mistake of working for Computervision.  They
tried to force me to sign an agreement that I a) would not work for any
competitors for 18 months, and b) would not "entice" other employees into
leaving for an equal length of time.  They didn't say a word about continuing
my salary for that time, either!

The incompetents in Personnel (I'd call them "morons", but true morons are
considerably more conscientious workers) didn't notice that I never signed
or returned the above agreement, though!

Andrew W. Rogers, Intermetrics   ...{harpo|ihnp4|ima|esquire}!inmet!andrew

------------------------------

Date: 23 Apr 84 7:49:14-PST (Mon)
From: hplabs!hao!seismo!ut-sally!ut-ngp!werner @ Ucb-Vax
Subject: Re: Non-competition clauses. The devils advocate speaks
Article-I.D.: ut-ngp.531

My personal opinion aside, I do have sympathy with the company that
reveals their "secrets" to an employee, only to have him turn into
competition without having to pay the research costs.  Remember also,
that for every one who leaves, there are five guys who stay, and more
likely than not the result of their years of work get 'abducted' also,
as, in a decent research effort, the work is done in a team rather than
by solo-artists.

So, after your flamers burn out, let's hear some ideas which take care
of the interests of all parties, because, remember, one day it may be
YOU who stays behind, or YOU who founds a 3-man think-tank.

        werner @ ut-ngp         "every coin has, AT LEAST, 3 sides"

------------------------------

Date: 23 Apr 84 9:35:44-PST (Mon)
From: harpo!ulysses!gamma!epsilon!mb2c!mpr @ Ucb-Vax
Subject: Re: Non-competition clauses
Article-I.D.: mb2c.242

Non-competition clauses may or may not be enforceable.  It depends on
the skill of the party involved or any special knowledge that person
might have.  For example, it is not a shoddy practice or expectation
for Coca-Cola to expect its personnel not to work for Pepsi-Cola,
especially, and only, if they have knowledge of the secret formula.

------------------------------

Date: 24 Apr 84 12:27:53-PST (Tue)
From: decvax!mcnc!unc!ulysses!gamma!pyuxww!pyuxa!wetcw @ Ucb-Vax
Subject: Re: Non-competition clauses (The Doctor)
Article-I.D.: pyuxa.710

In reference to the Doctor in Boulder.

The Doctor had joined a practicing group in a clinic.  He did
indeed sign a contract which contained a clause saying that
if he left the clinic, he would be unable to practice in the
county in which Boulder is located.

It seems that after several years, the administrator of the
clinic (not a Doctor) decided that Doctor X was not bringing
in enough cash to the group.  The Doctor was warned that he
would have to increase his patient load to bring his revenues
up to what they thought they should be.  The Doctor refused to
compromise his patients' care by giving them less time.  After
a standoff, the administrator and the other Doctors told Doctor
X he would have to leave the clinic.

The crux of the problem was that he did not leave on his own, but
was asked to leave, and he therefore believed the non-compete
clause invalid.  He opened an office in Boulder.  Many of his
former patients followed him, much to the displeasure of the
clinic crowd.  The clinic then decided to go to court.  They
won in court so that Doctor X had to move his practice out of the
county.  The patients still followed him.

I think that this case is working its way up to the Supreme
Court.  The whole affair was aired last year on [60 Minutes].  The
clinic crew and their administrative lackey came off in a
very bad light.  They were arrogant and seemed self-serving
to the nth degree.  I hope Doc X wins in the final analysis.
In the meantime, there was a time-limit clause in the contract
which lapses sometime soon.
T. C. Wheeler

------------------------------

Date: 24 Apr 84 20:51:48 PST (Tuesday)
From: Bruce Hamilton <Hamilton.ES@XEROX.ARPA>
Reply-to: Hamilton.ES@XEROX.ARPA
Subject: Seminar - Learning About Systems That Contain State Variables

The research described below sounds closer to what I had in mind when I
raised this issue a couple of weeks ago, as opposed to the
automata-theoretic responses I tended to get.  --Bruce

[For more leads on learning "systems containing state variables", readers
should look into that branch of control theory known as system identification.
Be prepared to deal with some hairy mathematical notation.  -- KIL]


  Date: 24 Apr 84 11:39 PST
  From: mittal.pa
  Subject: Reminder: CSDG Today

  The CSDG today will be given by Tom Dietterich, Stanford University,
  based on his thesis research work.
  Time etc: Tuesday, Apr. 24, 4pm, Twin Conf. Rm (1500)

Learning About Systems That Contain State Variables

It is difficult to learn about systems that contain state variables when
those variables are not directly observable.  This talk will present an
analysis of this learning problem and describe a method, called the
ITERATIVE EXTENSION METHOD, for solving it.  In the iterative extension
method, the learner gradually constructs a partial theory of the
state-containing system.  At each stage, the learner applies this
partial theory to interpret the I/O behavior of the system and obtain
additional constraints on the structure and values of its state
variables.  These constraints trigger heuristics that hypothesize
additional internal state variables.   The improved theory can then be
applied to interpret more complex I/O behavior.  This process continues
until a theory of the entire system is obtained.  Several conditions
sufficient to guarantee the success of the method will be presented.
The method is being implemented and applied to the problem of learning
UNIX file system commands by observing a tutorial interaction with UNIX.
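The iterative loop in the abstract (apply the partial theory, find I/O it cannot explain, hypothesize more internal state, repeat) can be caricatured in a few lines. The sketch below is a deliberately tiny stand-in invented for illustration — a hidden mod-N counter — and is not Dietterich's actual method or code.

```python
# Toy version of the iterative extension idea: grow a theory of a
# state-containing system one hidden state at a time until the theory
# can interpret the whole observed I/O trace.  Invented example only.

def explains(num_states, trace):
    """Can a counter with `num_states` hidden states (emitting 1 each time
    it wraps around) have produced this (input, output) trace?"""
    state = 0
    for _, output in trace:
        state = (state + 1) % num_states
        if output != (1 if state == 0 else 0):
            return False
    return True

def iterative_extension(trace, max_states=10):
    """Hypothesize additional state variables until the trace is covered."""
    num_states = 1
    while num_states <= max_states:
        if explains(num_states, trace):
            return num_states      # partial theory now interprets all data
        num_states += 1            # unexplained behavior: add hidden state
    return None

# Hidden system: beeps on every third tick (3 internal states).
trace = [("tick", 1 if (i + 1) % 3 == 0 else 0) for i in range(12)]
print(iterative_extension(trace))   # -> 3
```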

------------------------------

Date: 19 Apr 1984 1326-EST
From: Geoff Hinton <HINTON@CMU-CS-C.ARPA>
Subject: Seminar - Chunking and R1-SOAR

          [Forwarded from the CMU-AI bboard by Laws@SRI-AI.]


           "RECENT PROGRESS IN SOAR: CHUNKING AND R1-SOAR"
                   by John Laird & Paul Rosenbloom

          AI Seminar,  Tuesday April 24,  4.00pm, Room 5409

In this talk we present recent progress in the development of the Soar
problem-solving architecture as a general cognitive architecture.  This work
consists of first steps toward: (1) an architecture that can learn about all
aspects of its own behavior (by extending chunking to be a general learning
mechanism for Soar); and (2) demonstrating that Soar is (more than)
adequate as a basis for knowledge-intensive (expert systems) programs.

Until now chunking has been a mechanism that could speed up simple
psychological tasks, providing a model of how people improve their
performance via practice.  By combining chunking with Soar, we show how
chunking can do the same for AI tasks such as the Eight Puzzle, Tic-Tac-Toe,
and a portion of an expert system.  More importantly, we present partial
demonstrations: (1) that chunking can lead to more complex forms of
learning, such as the transfer of learned behavior (that is, the learning of
generalized information), and strategy acquisition; and (2) that it is
possible to build a general problem solver that can learn about all aspects
of its own behavior.
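At its simplest, chunking converts expensive subgoal search into a stored rule that fires directly the next time the same subgoal arises. The toy below is my own illustration of that caching idea, not the actual Soar mechanism; all names are invented.

```python
# Toy illustration of chunking: the first time a subgoal is solved by
# search, the result is stored as a "chunk"; later occurrences match the
# chunk and skip the search entirely.

chunks = {}            # learned rules: subgoal -> result
search_count = 0       # how often we had to fall back on search

def expensive_search(subgoal):
    # stand-in for problem-space search, e.g. over Eight Puzzle states
    return f"plan-for-{subgoal}"

def solve(subgoal):
    global search_count
    if subgoal in chunks:              # a learned chunk applies: no search
        return chunks[subgoal]
    search_count += 1                  # no chunk yet: do the search
    result = expensive_search(subgoal)
    chunks[subgoal] = result           # chunk the solution for next time
    return result

solve("corner-tile")   # searches
solve("corner-tile")   # retrieved from chunk
print(search_count)    # -> 1
```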

Knowledge-intensive programs are built in Soar by representing basic task
knowledge as problem spaces, with expertise showing up as rules that guide
complex problem-space searches and substitute for expensive problem-space
operators.  Implementing a knowledge-intensive system within Soar begins
to show how: (1) a general problem-solving architecture can work at the
knowledge intensive (expert system) end of the problem solving spectrum; (2)
it can integrate basic reasoning and expertise, using both search and
knowledge when relevant; and (3) it can perform knowledge acquisition by
transforming computationally intensive problem solving into efficient
expertise-level rules (via chunking).  This approach is demonstrated on a
portion of the expert system R1, which configures computers.

------------------------------

End of AIList Digest
********************

∂03-May-84  1104	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #54
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 3 May 84  11:02:55 PDT
Date: Thu  3 May 1984 10:08-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #54
To: AIList@SRI-AI


AIList Digest            Thursday, 3 May 1984      Volume 2 : Issue 54

Today's Topics:
  Literature Search - Applications of Expert Systems Proceedings,
  AI News - The End of British AI???,
  Linguistics - Metaphor and Riddles,
  AI Programming - Discussion,
  AI Jobs - Noncompetition Clauses,
  Seminars - Multiple Inheritance & Perceptual Organization
----------------------------------------------------------------------

Date: 27 Apr 84 9:46:54-PST (Fri)
From: decvax!linus!vaxine!chb @ Ucb-Vax
Subject: Looking for Applications of Expert Sys. Proceedings
Article-I.D.: vaxine.250

In Bruce Buchanan's Partial Bibliography on Expert Systems (Nov. 82)
he cited the Proceedings for the Colloquium on Application of Knowledge
Based (or Expert) Systems, London, 1982.  Does anybody out in netland
know who sponsored this colloquium or, more importantly, how I can get
a hold of these proceedings?

Thanks in advance,

                        Charlie Berg
                        Expert Systems
                        Automatix, Inc.
                     ...{allegra, linus}!vaxine!chb

------------------------------

Date: Mon 30 Apr 84 14:04:33-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: The End of British AI???

The ``New Scientist'' of April 12 quotes David Thomas, director for
Information Technology at the Science and Engineering Research Council
(British equivalent of NSF) and director of the Intelligent
Knowledge-Based Systems (British codeterm for AI) programme of the
Department of Trade and Industry:

        ``If computer scientists want to do research they must do it
        in partnership with industry... WE DON'T WANT COMPUTER
        SCIENTISTS working alone with no common aim in sight, and
        PUBLISHING THEIR WORK IN AN ACADEMIC JOURNAL for the Japanese
        to  pick up on ... It is difficult to think of anything in
        computer science which would not be useful to industry.''

(emphasis mine).

Yours, at a loss for printable comments,

-- Fernando Pereira

------------------------------

Date: Mon, 30 Apr 1984 14:14:30 EDT
From: Another Memo from the Etherial Plane
      <AXLER%upenn-1100.csnet@csnet-relay.arpa>
Subject: Metaphor & Riddles

     The new issue of the Journal of American Folklore contains an article
on the riddling process and its relation to metaphor interpretation, written
by Green & Peppicello.  The article also contains an excellent bibliography.

------------------------------

Date: 26 Apr 84 6:06:00-PST (Thu)
From: harpo!ulysses!gamma!pyuxww!pyuxss!aaw @ Ucb-Vax
Subject: Re: RE: AI Programming
Article-I.D.: pyuxss.319

I strongly agree that AI programming tends to be on several levels,
but rather than seeing AI programs as a controller or generator
and a pragmatic level, I think many AI programs have three levels:

  1. organizer, based on feedback from heuristic controller(2)

  2. controller, based on results of algorithmic or applicative level(3)

  3. worker, playing with real data

        The raison d'être might be that most programs under 5k statements are
pure applications, programs much larger than that tend to need a single
intelligent controller, while programs in the 20k-100k statement range
(the AI programming thesis level) fall in the three-level range.  All AI
programs bigger than that tend to be algorithmic refinements of previous
work, with refiners in terror of changing the basic structure.
                        {harpo,houxm,ihnp4}!pyuxss!aaw
                        Aaron Werman
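Werman's three levels — organizer fed back by a heuristic controller, which in turn drives a worker on real data — can be sketched as a skeleton. The code below is a hypothetical illustration of that layering; all names and the toy "strategies" are invented.

```python
# Hypothetical skeleton of the three-level structure described above.

def worker(data, op):
    """Level 3: plays with real data."""
    return sorted(data) if op == "sort" else list(reversed(data))

def controller(data, strategy):
    """Level 2: chooses operations heuristically, reports results upward."""
    op = "sort" if strategy == "order" else "flip"
    result = worker(data, op)
    # crude feedback score: count of adjacent pairs already in order
    score = sum(1 for a, b in zip(result, result[1:]) if a <= b)
    return result, score

def organizer(data):
    """Level 1: reorganizes strategy based on the controller's feedback."""
    best = max(("order", "chaos"), key=lambda s: controller(data, s)[1])
    return controller(data, best)[0]

print(organizer([3, 1, 2]))   # -> [1, 2, 3]
```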

------------------------------

Date: 25 Apr 84 7:19:04-PST (Wed)
From: harpo!ulysses!allegra!princeton!eosp1!robison @ Ucb-Vax
Subject: Re: Non-competition clauses - (nf)
Article-I.D.: eosp1.812

I'm amazed at the naivete of people suggesting that an employer has
no good reason to ask people to sign non-competition clauses.  Most
employers allow many, if not most of their employees to have access to
sensitive and trade secret information.  Employees leave a company with
their heads full of such data, and they become a walking time bomb to
their previous employer, should this info fall into the hands of a
competitor.

History shows that many ex-employees are unscrupulous in this regard.  IBM
has sued successfully in cases where ex-employees have formed, or joined,
other companies to build hardware that is very similar to hardware the
employees were building at IBM.  In many of these cases IBM has won,
presumably demonstrating that the employees were using more than their
own skills to imitate IBM's projects.

By the way, the classic example of this type of problem is a list of
customers.  A company's customer list is in many cases a critical secret,
and companies often sue to prevent an ex-employee from taking the list to his
next company, or using it himself.

Perhaps many of the writers on this subject are from academic environments
and have not worked in technologically competitive companies.
Why don't you try the other end of this problem -- imagine yourself working
for such a company, for which you don't sign a competitive agreement.
Then agree also that you will not have access to the company's sensitive and
trade secret data, so that the company will genuinely not need you to sign
such an agreement.  Then just try to get your work done without access to
important meetings and specifications.

Non-competitive agreements often specify very long periods of time, or no
specific time frame at all.  I believe that time periods over two years
are unenforceable in general.

By the way, when you join a company, you usually make personal data
available to it, which the company undertakes to keep secret,
and not to use after you have left the company.  This is a 2-way
street.
                                        - Toby Robison (not Robinson!)
                                        allegra!eosp1!robison
                                        decvax!ittvax!eosp1!robison
                                        princeton!eosp1!robison

------------------------------

Date: 29 Apr 1984  21:02 EDT (Sun)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Multiple Inheritance

               [Forwarded from the MIT bboard by SASW@MIT-MC.]


                   Multiple Inheritance: What, Why, and How?

                                  Dan Carnese


                             AI Revolving Seminar
                 Wednesday, May 2, 4:00pm, 8th Floor Playroom

This  talk  is  concerned  with  type  definition  by ``multiple inheritance''.
Informally, multiple inheritance is a  technique  for  defining  new  types  by
combining the operation sets of a number of old ones.

The  literature  concerning multiple inheritance has been heavily biased toward
the description of the constructs involved  in  particular  systems.    But  no
satisfying account has been given of:
   - the  rationale  for  using  definition  by  multiple inheritance over
     simpler approaches to type definition,
   - the essential similarities of the various proposals, or
   - the  key  design  decisions  involved  in  these  systems   and   the
     significance of choosing specific alternatives.

The  goal  of  this  talk  is  to  dissipate  some  of the ``general prevailing
mysticism'' surrounding multiple inheritance.    The  fundamental  contribution
will  be  a  simple  framework  for describing the design and implementation of
single-inheritance and multiple-inheritance type systems.  This framework  will
be  used  to  describe  the  inheritance mechanisms of a number of contemporary
languages.  These include:
   - the Lisp Machine's flavor system
   - the classes of Smalltalk-80, ``Smalltalk-82'' (Borning and  Ingalls),
     and Loops (Bobrow and Stefik)
   - the ``traits'' extension to Mesa (Curry et al.)

Given  the  description  of  the ``what'' and ``how'' of these systems, we will
then turn  to  the  question  of  ``why.''    Some  principles  for  evaluating
inheritance mechanisms will be presented and applied to the above five designs.
A  few simple improvements to the Lisp Machine flavor system will be identified
and motivated by the evaluation criteria.

We will conclude by discussing the relationship between multiple inheritance in
programming and multiple  inheritance  in  knowledge  representation,  and  the
lessons from the former which can be applied to the latter.
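The talk's informal definition — a new type formed by combining the operation sets of old ones — is easy to show in a modern language; the sketch below uses present-day Python (anachronistic relative to the systems the talk surveys) with invented class names. Python's method resolution order plays the role of the "key design decision" the abstract mentions: it fixes which parent's version wins on a name clash.

```python
# Minimal multiple-inheritance example: Document combines the operation
# sets of Savable and Printable.  Invented illustration, not from the talk.

class Savable:
    def save(self):
        return f"saved {self.name}"

class Printable:
    def show(self):
        return f"<{self.name}>"

class Document(Savable, Printable):     # operations inherited from both
    def __init__(self, name):
        self.name = name

d = Document("report")
print(d.save())                          # -> saved report
print(d.show())                          # -> <report>
# The linearization that resolves name clashes:
print([c.__name__ for c in Document.__mro__])
# -> ['Document', 'Savable', 'Printable', 'object']
```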

------------------------------

Date: 30 Apr 1984  09:23 EDT (Mon)
From: Cobb%MIT-OZ@MIT-MC.ARPA
Subject: Seminar - Perceptual Organization and Visual Recognition

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


      The Use of Perceptual Organization for Visual Recognition

                              DAVID LOWE

                        May 7, 1984    4:00PM
                      NE43 - 8th floor Playroom


     The human visual system has the capability to spontaneously
derive groupings and structures from an image without higher-level
knowledge of its contents.  This capacity for perceptual organization
is currently missing from most computer vision systems.  It will be
shown that perceptual groupings can play at least three important
roles in visual recognition: 1) image segmentation, 2) direct
inference of three-space relations, and 3) indexing world knowledge
for subsequent matching.  These functions are based upon the
expectation that groupings reflect actual structure of the scene
rather than accidental alignment of image elements.  A number of
principles of perceptual organization will be derived from this
criterion of non-accidentalness and from the need to limit
computational complexity.  The use of perceptual groupings will be
demonstrated for segmenting image curves and for the direct inference
of three-space properties from the image.

     Much computer vision research has been based on the assumption
that recognition will proceed bottom-up from the image to an
intermediate 2-1/2D sketch or intrinsic image representation, and
subsequently to model-based recognition.  While perceptual groupings
can contribute to this intermediate representation, they can also
provide an alternate pathway to recognition for those cases in which
there is insufficient information for deriving the 2-1/2D sketch.
Methods will be presented for using perceptual groupings to index
world knowledge and for subsequently matching three-dimensional models
directly to the image for verification.  Examples will be given in
which this alternative pathway seems to be the only possible route to
recognition.  A functioning real-time vision system will be described
that is based upon the direct search for the projections of 3D models
in an image.

Refreshments:  3:45PM
Host:  Professor Patrick H. Winston

------------------------------

End of AIList Digest
********************

∂04-May-84  2111	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #55
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 4 May 84  21:11:25 PDT
Date: Fri  4 May 1984 19:54-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #55
To: AIList@SRI-AI


AIList Digest            Saturday, 5 May 1984      Volume 2 : Issue 55

Today's Topics:
  AI Support - The End of British AI?,
  Expert Systems - English Conference Reference,
  AI Jobs - Noncompetition Clauses,
  Review - HEURISTICS by Judea Pearl,
  Humor - Computers and Incomprehensibility,
  Consciousness - Reply to Phaedrus (long)
----------------------------------------------------------------------

Date: Thu 3 May 84 11:30:40-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: The End of British AI?

The End of British AI?

I think Mr. Pereira is being more than a little paranoid here (and he need not
imagine that AI research is the only type for which industry sometimes shows
little enthusiasm).   That pronouncement sounds as if it was politically
motivated, therefore not to be taken too literally anyway, and will be
forgotten as soon as convenient.   Not that I think my government's policy on
computer science research is sound -- quite the reverse -- but I don't think it
has suddenly become a lot worse.
                                                - Richard

------------------------------

Date: 30 Apr 84 8:07:16-PDT (Mon)
From: decvax!decwrl!rhea!bartok!shubin @ Ucb-Vax
Subject: Info on Expert Systems conference in England
Article-I.D.: decwrl.7512

|  In Bruce Buchanan's Partial Bibliography on Expert Systems (Nov. 82)
|  he cited the Proceedings for the Colloquium on Application of Knowledge
|  Based (or Expert) Systems, London, 1982.  Does anybody out in netland
|  know who sponsored this colloquium or, more importantly, how I can get
|  a hold of these proceedings?
|                  Charlie Berg
|                  Expert Systems
|                  Automatix, Inc.
|               ...{allegra, linus}!vaxine!chb

We gave a paper at a conference called "Theory and Practice of Knowledge
Based Systems", which was held 14-16 Sep 82 at Brunel University, which is
*near* London.  The chair of the conference was Dr. Tom Addis, also of
Brunel University.  The conference was sponsored (or approved or whatever)
by ACM, IEEE and SPL International.

I found two addresses.  The first is where the conference was, and (I
believe) the second is where the Computer Science department is:
        Brunel University
        Shoreditch Campus
        Coopers Hill, Englefield Green
        Egham, Surrey
        ENGLAND

or      Brunel University
        Department of Computer Science
        Uxbridge, Middlesex
        ENGLAND

hal shubin
        UUCP:    ...!decwrl!rhea!bartok!shubin
        ARPAnet: hshubin@DEC-MARLBORO

------------------------------

Date: Fri, 4 May 84 10:51 EDT
From: MJackson.Wbst@XEROX.ARPA
Subject: Re: Non-competition clauses

You have constructed a very good argument for nondisclosure agreements.
The issue, however, was non-competition clauses, for which your only
justification seems to be that "[h]istory shows that many ex-employees
are unscrupulous. . .".  I find this less than compelling.

The successful legal actions you cite demonstrate that recourse is
available to the company damaged by such actions by ex-employees.  The
risk that *full* compensation for such damage may not be forthcoming is
a risk of doing business, and must be managed as such.

By the way, nondisclosure of personal data by the company is much more
closely analogous to nondisclosure of proprietary information by the
employee than it is to noncompetition by the employee.  (Do you think I
could talk Xerox into agreeing not to employ anyone in my present
capacity for two years if I should leave?)

Mark

------------------------------

Date: Fri, 4 May 84 15:32:32 PDT
From: Anna Gibbons <anna@UCLA-CS.ARPA>
Subject: HEURISTICS/Dr. Judea Pearl

FROM: Judea Pearl@UCLA-SECURITY.
Those who have inquired about my new book "HEURISTICS" may be
interested to know that it is finally out, and can be obtained from
Addison-Wesley Publishing Company, Reading, Mass. 01867, Tel.
(617) 944-8660.  The title is "Heuristics: Intelligent Search
Strategies for Computer Problem Solving", the ISBN is
0-201-05594-5, and the price is $38.95.  For those unfamiliar with the
book's content, the following are excerpts from the cover description.

This book presents, characterizes, and analyzes problem solving
strategies that are guided by heuristic information.  It provides a
bridge between heuristic methods developed in artificial intelligence,
optimization techniques used in operations research, and
complexity-analysis tools developed by computer theorists and
mathematicians.

The book is intended to serve both as a textbook for classes in AI
Control Strategies and as a reference for the professional/researcher
who seeks an in-depth understanding of the power of heuristics and
their impact on various performance characteristics.

In addition to a tutorial introduction of standard heuristic search
methods and their properties, the book presents a large collection of
new results which have not appeared in book form before.  These include:

*  Algorithmic taxonomy of basic search strategies, such as
backtracking, best-first, and hill-climbing, their variations and
hybrid combinations.

*  Searching with distributions and with nonadditive evaluation
functions.

*  The origin of heuristic information and the prospects for automatic
discovery of heuristics.

*  Applications of branching processes to the analysis of path-seeking
algorithms.

*  The effect of errors on the complexity of heuristic search.

*  The duality between games and mazes.

*  Recreational aspects of recursive minimaxing.

*  Average performance analysis of game-playing strategies.

*  The benefits and pitfalls of look-ahead.

Each chapter contains annotated references to the literature and a
set of nontrivial exercises chosen to enhance skill, insight, and
curiosity.
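The best-first strategies covered in the book's tutorial chapters all share one skeleton: expand the frontier node whose evaluation f(n) = g(n) + h(n) is smallest (with an admissible h this is A*). The sketch below is a generic textbook version on an invented toy graph, not an example taken from the book.

```python
# Generic best-first (A*-style) search: pop the frontier node with the
# lowest f = g + h.  With h = 0 this reduces to uniform-cost search.
import heapq

def best_first(start, goal, neighbors, h):
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {}                                  # cheapest g seen per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if best_g.get(node, float("inf")) <= g:
            continue                             # stale duplicate entry
        best_g[node] = g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
path, cost = best_first("A", "D", lambda n: graph[n], h=lambda n: 0)
print(path, cost)    # -> ['A', 'B', 'C', 'D'] 3
```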

Enjoy your reading and, please, let me know if you have suggestions
for improving the form or content.  Judea Pearl @ UCLA-SECURITY.

------------------------------

Date: 3 May 1984 20:50:55-EDT
From: walter at mit-htvax
Subject: Seminar - Computers and Incomprehensibility

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


               GRADUAL STUDENT LUNCH SEMINAR SERIES

                         The G0001 Project:
             An Experiment in G0002 and Creative G0003


A G0004 is described, in which many G0003 involving strikingly
different G0005 and levels of G0006 can be made.  The question "What
differentiates the good G0003 from the bad G0003?" is discussed, and
the problem of how to G0008 a G0009 G0010 of the G0011 G0012 to come
up with such G0003 (and to have a sense for their quality) is
considered.  A key part of the proposed system, now under
development, is its dependence on G0013 G0014 G0015 of G0016
interacting "G0017" (selected at random to G0019 with G0020
proportional to G0021 assigned "G0022").  Another key G0023 is a
G0024 of linked G0005 of varying levels of "G0025", in which G0026
spreads and G0027 controls the G0028 of new G0017. The shifting of
(1) G0033 G0034 inside structures, (2) descriptive G0005 chosen to
apply to G0030, and (3) G0043 perceived as "G0031" or not, is called
"G0032". What can G0031, and how, are G0014 G0033 of the interaction
of (1) the temporary ("G0034") structures involved in the G0003 with
(2) the permanent ("G0035") G0005 and links in the G0036 network, or
"G0037 network".  The G0038 of this system is G0039 as a general
G0038 suitable for dealing not only with fluid G0003, but also with
other types of G0039 G0040 and G0041 tasks, such as musical G0040,
G0041 G0042, Bongard problems and others.


12:00 NOON  8TH FLOOR PLAYROOM                         FRIDAY 5/5
Hosts: Harry Voorhees and Dave Siegel

------------------------------

Date: 27 Apr 84 20:51:58-PST (Fri)
From: harpo!ulysses!burl!clyde!akgua!sdcsvax!davidson @ Ucb-Vax
Subject: Re: New topic for discussion (long)
Article-I.D.: sdcsvax.736

This is a response to the submission by Phaedrus at the University of
Maryland concerning speculations about the nature of conscious beings.
I would like to take some of the points in his/her submission and treat
them very skeptically.  My personal bias is that the nature of
conscious experience is still obscure, and that current theoretical
attempts to deal with the issue are far off the mark.  I recommend
reading the book ``The Mind's I'' (Hofstadter & Dennett, eds.)  for
some marvelous thought experiments which (for me) debunk most current
theories, including the one referred to by Phaedrus.  The quoted
passages which I am criticizing are excerpted from an article by J. R.
Lucas entitled ``Minds, Machines, and Goedel'' which was excerpted in
Hofstadter's Goedel, Escher, Bach and found there by Phaedrus.

        the concept of a conscious being is, implicitly, realized to be
        different from that of an unconscious object

This statement begs the question.  No rule is given to distinguish conscious
and unconscious objects, nothing is said about the nature of either, and
nothing indicates that consciousness is or is not a property of all or no
objects.

        In saying that a conscious being knows something we are saying not
        only does he know it, but he knows that he knows it, and that he
        knows that he knows that he knows it, and so on ....

First, I don't accept the claim that people possess this meta-knowledge more
than a (small) finite number of levels deep at any time, nor do I accept
that human beings frequently engage in such meta-awareness; just because
human beings can pursue this abstraction process arbitrarily deeply (but
they get lost fairly quickly, in practice), does not mean that there is any
process or structure of infinite extent present.

Second, such a recursive process is straightforward to simulate on a
computer, or imbue an AI system with.  I don't see any reason to regard such
systems as being conscious, even though they do it better than we do (they
don't have our short term memory limitations).
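The point that the "knows that he knows that he knows…" regress is trivial for a program can be made concrete in a few lines; this is purely my own illustration of the argument, not anything from Lucas or Phaedrus.

```python
# Nesting "I know that ..." to arbitrary depth is a simple recursion, so
# the regress by itself cannot distinguish minds from machines.

def knows(fact, depth):
    """Return the depth-fold meta-statement about `fact`."""
    if depth == 0:
        return fact
    return "I know that " + knows(fact, depth - 1)

print(knows("it is raining", 3))
# -> I know that I know that I know that it is raining
```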

        we insist that a conscious being is a unity, and though we talk
        about parts of our mind, we do so only as a metaphor, and will not
        allow it to be taken literally.

Well, this is hardly in accord with my experience.  I often become aware of
having been pursuing parallel thought trains, but until they merge back
together again, neither was particularly aware of the other.  Marvin Minsky
once said the same thing after a talk claiming that the conscious mind is
inherently serial.  Superficially, introspection may seem to show a unitary
process, but more careful introspection dissolves this notion.

        The paradoxes of consciousness arise because a conscious being can
        be aware of itself, as well as of other things, and yet cannot
        really be construed as being divisible into parts.

The word ``aware'' is an implicit reference to the unknown mechanism of
consciousness.  This is part of the apparent paradox.  Again, there's
nothing mysterious about a system having a model of itself and being able to
do reasoning on that model the same way it does reasoning on other models.
Also again, nothing here supports the claim that the conscious mind is not
divisible.

        It means that a conscious being can deal with Godelian questions in
        a way in which a machine cannot, because a conscious being can
        consider itself and its performance and yet not be other than that
        which did the performance.

Whatever the conscious mind is, it appears to be housed in a physical
information processing system, to wit, the human brain.  If our current
understanding about the kind of information processing brains are capable of
is correct, brains fall into the class of automata and cannot ultimately do
any processing task that cannot be done with a computer.  The conscious mind
can scrutinize its internal workings to an extent, but so can computer
programs.  Presumably the Goedelian & (more to the point) Turing limitations
apply in principle to both.

        no extra part is required to do this:  it is already complete, and
        has no Achilles' heel.

This is an unsupported statement.  The whole line of reasoning is rather
loose; perhaps the author simply finds it psychologically difficult to
suppose that he has any fundamental limitations.

        When we increase the complexity of our machines, there may, perhaps,
        be surprises in store for us....  Below a certain ``critical'' size,
        nothing much happens....  Turing is suggesting that it is only a
        matter of complexity [before?] a qualitative difference appears.

Well, it's very easy to build machines that are infeasible to predict.  Such
machines do not even have to be very complex in construction to be highly
complex in behavior.  Las Vegas is full of examples of such machines.
The idea that complexity in itself can result in a system able to escape
Goedelian and Turing limitations is directly contradicted by the
mathematical induction used in their proofs:  The limitations apply to
<<arbitrary>> automata, not just to automata simple enough for us to
inspect.

Charlatans can claim any properties they want for mechanisms too complex for
direct disproofs, but one need not work hard before dismissing them with
indirect disproofs.  This is why the patent office rejects claimed perpetual
motion machines which supposedly operate merely by the complexities of their
mechanical or electromagnetic design.  It is also why journals of
mathematics reject ridiculously long proofs which claim to supply methods of
squaring the circle, etc.  No one examines such proofs to find the flaw; it
would be a thankless task, and it is not necessary.

        It is essential for the mechanist thesis that the mechanical model
        of the mind shall operate according to ``mechanical principles,''
        that is, we can understand the operation of the whole in terms of
        the operation of its parts....

Certainly one expects that the behavior of physical objects can be explained
at any level of reduction.  However, consciousness is not necessarily a
behavior, it is an ``experience'', whatever that is.  Claims of
consciousness, as in ``I assert that I am conscious'' are behavior, and can
reasonably be subjected to a reductionist analysis.  But whether this will
shed any light on the nature of consciousness is unclear.  A useful analogy
is whether attacking a computer with a voltmeter will teach you anything
about the abstractions ``program'', ``data structure'', ``operating
system'', etc., which we use to describe the nature of what is going on
there.  These abstractions, which we claim are part of the nature of the
machine at the level we usually address it, are not useful when examining
the machine below a certain level of reduction.  But that is no paradox,
because these abstractions are not physical structure or behavior, they are
our conceptualizations of its structure and behavior.  This is as mystical
as I'm willing to get in my analysis, but look at what Lucas does with it:

        if the mechanist produces a machine which is so complicated that
        this [process of reductionist analysis] ceases to hold good of it,
        then it is no longer a machine for the purpose of our discussion,
        no matter how it was constructed.  We should say, rather, that he
        had created a mind, in the same sort of sense as we procreate
        people at present.

If someone produces a machine which exhibits behavior that is
infeasible to predict through reductionist methods, there is nothing
fundamentally different about it.  It is still obeying the laws of physics
at all levels of its structure, and we can still in principle apply to it
any desired reductionist analysis.  We should certainly not claim to have
produced anything special (such as a mind) just because we can't easily
disprove the notion.

        When talking of [human beings and these specially complex machines]
        we should take care to stress that although what was created looked
        like a machine, it was not one really, because it was not just the
        total of its parts:  one could not even tell the limits of what it
        could do, for even when presented with the Goedel type question, it
        got the answer right.

There is simply no reason to believe that people can answer Goedelian
questions any better than machines can.  This bizarre notion that conscious
objects can do such things is unproven and dubious.  I assert that people
cannot do these things, and neither can machines, and that the ability to
escape from Goedel or Turing restrictions is irrelevant to questions of
consciousness, since we are (experientially) conscious but cannot do such
things.

I find that most current analyses of consciousness are either mystical like
the one I've addressed here, or simply miss the phenomenon by attacking the
system at a level of reduction beneath the level where the concept seems to
apply.  It is tempting to think we can make scientific statements about
consciousness just because we can experience consciousness ourselves.  This
idea runs aground when we find that this notion is dependent on capturing
scientifically the phenomena of ``experience'', ``consciousness'' or
``self'', which I have not yet seen adequately done.  Whether consciousness
is a phenomenon with scientific existence, or whether it is an abstract
creation of our conceptualizations with no external or reductionist
existence is still undetermined.

-Greg Davidson (davidson@sdcsvax.UUCP or davidson@nosc.ARPA)

------------------------------

End of AIList Digest
********************

∂07-May-84  1032	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #56
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 7 May 84  10:32:10 PDT
Date: Sun  6 May 1984 18:32-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #56
To: AIList@SRI-AI


AIList Digest             Monday, 7 May 1984       Volume 2 : Issue 56

Today's Topics:
  AI Software - Request for AI Demos,
  Seminars - Object-Oriented Programming in Prolog &
    SIGNUM & Learning in Production Systems & Nonmonotonic Reasoning,
  Conference - 12th POPL Call for Papers
----------------------------------------------------------------------

Date: 30 Apr 84 19:14:00-PDT (Mon)
From: ihnp4!inuxc!iuvax!wickart @ Ucb-Vax
Subject: Needed: AI demos
Article-I.D.: iuvax.3600001

I need some simplistic AI demo programs to help convert the infidels.
EMYCIN, ELIZA/DOCTOR, PARANOID, SHRDLU, and REVERSE would be greatly
appreciated. I can handle LISP, PASCAL, FORTRAN (in AI?), BAL (perish
the thought), C, or PL/I. Can anyone out there help out? USENET is the
only thing that maintains our existence in the USA.
   Thanks in advance,
T.F. Prune (aka Bill Wickart, ihnp4!inuxc!iuvax!wickart)

------------------------------

Date: Fri, 4 May 84 17:32:06 edt
From: jan@harvard (Jan Komorowski)
Subject: Seminar - Object-Oriented Programming in Prolog

             [Forwarded from the MIT bboard by SASW@MIT-MC.]

                 "Object-Oriented Programming in Prolog"

                            Carlo Zaniolo
                          Bell Laboratories

                         Monday, May 7, 1984
                              at 4 PM

              Aiken Lecture Hall, Harvard University
                  (tea in Pierce 213 at 3:30)


     Object-oriented programming has proved very useful in a number of
important applications, because of its ability to unify and simplify the
description of entities and their protocols. Here, we propose a similar
approach for providing this programming paradigm in Prolog.  We introduce
primitives to support the notions of (1) an object with its associated set of
methods, (2) an inheritance network whereby an object inherits the methods of
its ancestors, and (3) message passing between objects.

     Objects and methods are specified by a declaration object with
method_list, where object is a Prolog predicate and each method is an arbitrary
Prolog clause.  Then, a message O:M can be specified as a goal, to request the
application of method M to object O.  The inheritance network, specified by the
isa operator as follows: sub_object isa object, is most useful in handling
default information.  Thus it is possible to specify a method that holds by
default for a general class, and then specify special subcases for which the
general rule is overridden.

     This new functionality is added on top of existing Prolog systems, with no
modification to its interpreter or compiler.
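
[The abstract names three primitives: objects carrying method lists, an
isa inheritance network supplying defaults, and message passing.  The
following minimal sketch, written in Python rather than Prolog and with
all names invented for illustration, shows the same dispatch scheme --
a message search walks the isa chain, so a general default can be
overridden for a special subcase.  -- Ed.]

```python
# Sketch of the dispatch scheme described above: objects carry method
# tables, the "isa" network supplies inherited defaults, and sending a
# message searches up the chain until a method is found.
methods = {}   # object -> {message: callable}
isa = {}       # object -> parent object

def declare(obj, method_table, parent=None):
    """Register an object with its methods and optional isa parent."""
    methods[obj] = method_table
    if parent is not None:
        isa[obj] = parent

def send(obj, message, *args):
    """Apply method `message` to `obj`, searching up the isa chain."""
    o = obj
    while o is not None:
        if message in methods.get(o, {}):
            return methods[o][message](obj, *args)
        o = isa.get(o)
    raise LookupError("no method %s for %s" % (message, obj))

# A default for the general class, overridden for a special subcase:
declare("bird", {"locomotion": lambda self: "fly"})
declare("penguin", {"locomotion": lambda self: "swim"}, parent="bird")
declare("robin", {}, parent="bird")

print(send("robin", "locomotion"))    # inherits the default -> fly
print(send("penguin", "locomotion"))  # overridden subcase   -> swim
```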

Host: H.J. Komorowski

------------------------------

Date: 28 Apr 84 5:13:41-PST (Sat)
From: decvax!genrad!mit-eddie!whuxle!floyd!cmcl2!lanl-a!unm-cvax!unmva
      x!stanly @ Ucb-Vax
Subject: Seminar - SIGNUM meeting and introduction
Article-I.D.: unmvax.312

SIGNUM is the Special Interest Group in Numerical Mathematics of the
ACM (Association for Computing Machinery). The group meets monthly during
the academic year. At each meeting there is a talk on some subject
related to computing or applied mathematics. The talks are not
restricted to numerical stuff. If you would like to be on the mailing
list please send a note to John.  A correct address from unmvax is:

WISNIEWSKI@SANDIA.ARPA@lanl-a.UUCP

                                        Stan Steinberg
                                        stanly@unmvax

*******************************************************************

                  Rio Grande Chapter SIGNUM Meeting

Year end meeting and election of officers
Date: Tuesday, May 8, 1984
Speakers: Kathie Hiebert Dodd and Barry Marder - Sandia



Applied AI - "Brave New World" or "Catastrophe Theory Revisited"?
                          Barry Marder

Last year an effort was initiated at Sandia to develop a core of
expertise in the field of artificial intelligence.  One area of
investigation has been expert system technology, which has been
largely responsible for the present explosive growth of interest in
AI.  An expert system is a program that catalogs and makes readily
available expert knowledge in a field.  Such a system has been built
and implemented at Sandia to aid in the design of electrical cables
and connectors.  The speaker will describe this system and offer some
observations on artificial intelligence in general.


       VEHICLE IDENTIFICATION -- A FRAME BASED SYSTEM
                    Kathie Hiebert Dodd

Software has been developed that, when given certain characteristics
from a scene such as the location of wheels, can identify vehicles.
The image processing, i.e., extracting the characteristics from the
scene, is still done primarily on a VAX.  Given the features, a
frame-based code using "flavors" in the Zetalisp language on a
Symbolics 3600 does the vehicle identification.  The main emphasis of
the talk will be on the aspects of a frame-based expert system, in
particular the use of "flavors" and "daemons".
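
[An illustrative sketch, not the Sandia code: in a frame-based system a
"frame" is a set of named slots, and a "daemon" is a procedure that
fires when a slot is filled (an if-added daemon).  All names below are
invented for illustration.  -- Ed.]

```python
# A frame holds named slots; filling a slot can trigger a daemon, the
# "if-added" procedures of frame-based expert systems.
class Frame:
    def __init__(self, name):
        self.name = name
        self.slots = {}
        self.daemons = {}   # slot -> procedure run when slot is filled

    def on_fill(self, slot, daemon):
        self.daemons[slot] = daemon

    def fill(self, slot, value):
        self.slots[slot] = value
        if slot in self.daemons:
            self.daemons[slot](self, value)

# A toy vehicle frame: filling the wheel count triggers a daemon that
# proposes a classification.
vehicle = Frame("vehicle")
vehicle.on_fill("wheels", lambda f, n:
                f.fill("class", "truck" if n > 4 else "car"))
vehicle.fill("wheels", 6)
print(vehicle.slots["class"])   # -> truck
```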


Location: The Establishment - Albuquerque Dukes Sports Stadium
Price: $10.50 per person - serving Prime Rib (I think)
Social Hour : 5:30 P.M., Dinner: 6:00 P.M., Talks: 7:00 P.M.
PLEASE LET JOHN WISNIEWSKI KNOW BY NOON MONDAY THE 7TH IF YOU ARE
COMING TO DINNER.  If no answer leave a message with EVA 844-7747.

------------------------------

Date: 4 May 1984 1316-EDT
From: Geoff Hinton <HINTON@CMU-CS-C.ARPA>
Subject: Seminar - Learning in Production Systems

          [Forwarded from the CMU-AI bboard by Laws@SRI-AI.]

The AI seminar on May 8 will be given by John Holland of the University
of Michigan.

Title: Learning Algorithms for Production Systems


Learning, broadly interpreted to include processes such as induction, offers
attractive possibilities for increasing the flexibility of rule-based
systems. However, this potential is likely to be realized only when the
rule-based systems are designed ab initio with learning in mind.
In particular, there are substantial advantages to be gained when
the rules are organized in terms of building blocks suitable for
manipulation by the learning algorithms (taking advantage of the
principles expounded by Newell & Simon).  This seminar will concentrate on:

  1. Ways of inducing useful building blocks and rules from experience,
     and
  2. Learning algorithms that can exploit these possibilities through
     "apportionment of credit" and "recombination" of building blocks.
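
[The "recombination" of building blocks is usually sketched, in the
genetic-algorithm tradition associated with Holland, as crossover of
rule strings.  The toy example below is illustrative only; the talk's
actual algorithms are not given in the abstract, and the rule strings
are invented.  -- Ed.]

```python
# One-point crossover: two parent rule strings exchange segments at a
# crossover point, so useful substrings (building blocks) from each
# parent can be recombined in the offspring.
import random

def crossover(parent_a, parent_b, point=None):
    """Return two children formed by one-point crossover."""
    assert len(parent_a) == len(parent_b)
    if point is None:
        point = random.randrange(1, len(parent_a))
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

# '#' is the conventional "don't care" symbol in classifier rules.
child1, child2 = crossover("110#01", "0011#1", point=3)
print(child1, child2)   # -> 1101#1 001#01
```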

------------------------------

Date: Sat 5 May 84 18:45:28-PDT
From: Benjamin Grosof <GROSOF@SUMEX-AIM.ARPA>
Subject: Seminars - Nonmonotonic Reasoning

           [Forwarded from the CSLI bboard by Laws@SRI-AI.]

Our regular meeting time and place is Wednesdays 1-2pm (with some
runover to be expected), in Redwood Hall Room G-19.  [...]

Wednesday, May 16:

                Drawing A Line Around Circumscription

                          David Etherington
              University of British Columbia, Vancouver


   The Artificial Intelligence community has been very interested in
the study of reasoning in situations where only incomplete information
is available.  Predicate Circumscription and Domain Circumscription
provide tools for nonmonotonic reasoning in such situations.
However, not all of the problems which might be expected to yield to
circumscriptive inference are actually addressed by the techniques
which have been developed thus far.

   We outline some unexpected areas where existing techniques are
insufficient.


Wednesday, May 23

                DEFAULT REASONING AS CIRCUMSCRIPTION
         A Translation of Default Logic into Circumscription
          OR    Maximizing Defaults Is Minimizing Predicates

                     Benjamin Grosof of Stanford

Much default reasoning can be formulated as circumscriptive.  Using a revised
version [McCarthy 84] of circumscription [McCarthy 80], we propose a
translation scheme from default logic [Reiter 80] into circumscription.  An
arbitrary "normal" default theory is translated into a corresponding
circumscription of a first-order theory.  The method is extended to translating
"seminormal" default theories effectively, but is less satisfactorily concise
and elegant.

Providing a translation of seminormal default logic into circumscription
unifies two of the leading formal approaches to nonmonotonic reasoning, and
enables an integration of their demonstrated applications.  The naturalness
of default logic provides a specification tool for representing default
reasoning within the framework of circumscription.
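
[For reference, a standard textbook rendering, not taken from the
abstract: McCarthy's second-order circumscription of a predicate P in a
sentence A(P) asserts that P is a minimal predicate satisfying A, which
is the sense in which "maximizing defaults is minimizing predicates".
-- Ed.]

```latex
% P satisfies A, and no strictly smaller predicate p also satisfies A:
A(P) \;\wedge\; \forall p\,\Big[\big(A(p) \wedge \forall x\,(p(x)
  \rightarrow P(x))\big) \rightarrow \forall x\,(P(x) \rightarrow p(x))\Big]
```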

------------------------------

Date: Fri, 4 May 84 15:52 PDT
From: Brian Reid <reid@Glacier>
Subject: 12th POPL Call for Papers

Call for Papers: 12th POPL

The twelfth annual ACM SIGACT-SIGPLAN symposium on
PRINCIPLES OF PROGRAMMING LANGUAGES

New Orleans, Louisiana, January 13-16, 1985

The POPL symposium is devoted to the principles of programming
languages. In recent years there have been many papers on
specific principles and specific programming languages embodying
those principles, which might lead one to believe that the symposium is
limited to papers on those topics.

We are eager for papers on important new topics, and therefore this
year we shall not attempt to prescribe particular topics. We
solicit papers that describe important new research results having
to do with the principles of programming languages. We not only
solicit, but seek and encourage, papers describing work in which an
implemented system embodies an important principle in such a way that
the usefulness of that principle can be better understood. All
submitted papers will be read by the program committee.

        Brian Reid, Stanford University (Program Chairman)
        Douglas Comer, Purdue University
        Stuart Feldman, Bell Communications Research
        Joseph Halpern, IBM Research
        David MacQueen, AT&T Bell Laboratories
        Michael O'Donnell, Johns Hopkins University
        Vaughan Pratt, Sun Microsystems and Stanford Univ.
        Guy Steele, Tartan Laboratories
        David Wall, DEC Western Research Laboratory

Please submit nine copies of a 6- to 10-page summary of your paper to
the program chairman. Summaries must be typed double spaced, or typeset
10 on 16. It is important to include specific results, and specific
comparisons with other work. The committee will consider the relevance,
clarity, originality, significance, and overall quality of each
summary. Mail to:

     Brian K. Reid
     Computer Systems Laboratory, ERL 444
     Department of Electrical Engineering
     Stanford University
     Stanford, California, 94305 U.S.A.

(Persons submitting papers from countries in which access to copying
machines is difficult or impossible are welcome to submit a single copy.)

Summaries must be received by the program chairman by August 3, 1984.
Authors will be notified of acceptance or rejection by September 25,
1984.  The accepted papers must be received in camera-ready form by the
program chairman at the above address by November 9, 1984. Authors of
accepted papers will be expected to sign a copyright release form.

Proceedings will be distributed at the symposium and will be
subsequently available for purchase from ACM. The local arrangements
chairman is Bill Greene, University of New Orleans, Computer Science
Department, New Orleans, Louisiana 70148

------------------------------

End of AIList Digest
********************

∂08-May-84  2210	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #57
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 8 May 84  22:08:47 PDT
Date: Tue  8 May 1984 21:05-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #57
To: AIList@SRI-AI


AIList Digest           Wednesday, 9 May 1984      Volume 2 : Issue 57

Today's Topics:
  AI Tools - Structure Editor Request,
  Bindings - Judea Pearl,
  AI Software - LISP on a Data General,
  Linguistics - Metaphors & Puns & Use of "and",
  AI Funding - The End of British AI?,
  AI Literature - Touretzky LISP Book Review,
  Consciousness - Discussion,
  Conference - IEEE Workstation Conference
----------------------------------------------------------------------

Date: 3 May 84 18:17:03-PDT (Thu)
From: hplabs!hao!cires!boulder!marty @ Ucb-Vax
Subject: wanted: display-oriented interlisp structure editor
Article-I.D.: boulder.175

I've been using Interlisp-VAX under VMS for a while now and am getting a bit
tired of the rather antiquated TTY editor.  I know Dave Barstow had a sort of
semi-display interlisp structure editor known as DED, but this seems to have
fallen into a black hole.  Does anyone out there have a screen-oriented
residential structure editor for interlisp?  (Yes, I know the real solution is
to get an 1108, it's on order ...  But I've got too many interlisp users to
point them all at one Dandelion ...)

                                thanks much,
                                 Marty Kent

csnet:
{ucbvax!hplabs | allegra!nbires | decvax!kpno | harpo!seismo | ihnp4!kpno}
                        !hao!boulder!marty
arpanet:
                        polson @ sumex-aim

------------------------------

Date: Mon, 7 May 84 07:54:13 PDT
From: Anna Gibbons <anna@UCLA-CS.ARPA>
Subject: Bindings - Judea Pearl Address

FROM JUDEA PEARL:  Please disregard the old address "UCLA-SECURITY",
any messages should be sent to  "judea@UCLA-CS.ARPA".

Sorry for the inconvenience and confusion.

------------------------------

Date: 19 Apr 84 14:30:21-PST (Thu)
From: decvax!mcnc!ecsvax!bet @ Ucb-Vax
Subject: Re: LISP on a Data General? (sri-arpa.122209)
Article-I.D.: ecsvax.2347

Here at Duke, someone ported a public domain implementation of an
extremely simple subset of LISP (xlisp) to our MV-8000. It suffices
for some robotics programming. I learned LISP on it. Sources in C.
Send me a note if you are interested; it is probably rather big to mail,
though I believe it was originally acquired from net.sources.
We can work out some way to transfer it.
                                Bennett Todd
                                ...{decvax,ihnp4,akgua}!mcnc!ecsvax!bet

------------------------------

Date: 2 May 84 8:35:51-PDT (Wed)
From: hplabs!tektronix!ogcvax!sequent!merlyn @ Ucb-Vax
Subject: Re: metaphors
Article-I.D.: sequent.478

> "Telephones are like grapefruits" is a SIMILE, not a metaphor. To be
> a metaphor, it would be "Telephones are grapefruits", and would be harder
> to interpret...
>
> Will

Ahh, but "Telephones are lemons" is fairly easy to interpret.
It just depends on the type of fruit. :-}

Randal L. ("life is like a banana") Schwartz, esq. (merlyn@sequent.UUCP)
        (Official legendary sorcerer of the 1984 Summer Olympics)
Sequent Computer Systems, Inc. (503)626-5700 (sequent = 1/quosine)
UUCP: ...!XXX!sequent!merlyn where XXX is one of:
        decwrl nsc ogcvax pur-ee rocks34 shell teneron unisoft vax135 verdix

P.S. I never metaphor I didn't like. (on a zero to four scale)

------------------------------

Date: 11 Apr 84 14:25:47-PST (Wed)
From: harpo!eagle!mhuxl!ulysses!unc!mcnc!ecsvax!hes @ Ucb-Vax
Subject: what you see ain't what you get
Article-I.D.: ecsvax.2291

At the end of Bentley's column in the April CACM, he mentions the
AI seminar titled:
                      How to Wreck a Nice Beach
and I thought of that today when I saw a poster describing "Cole's Law".
For those unfamiliar with the concept it refers to
"niht decils egabbac" reversed.
--henry (almost ashamed to sign this) schaffer  ncsu  genetics

------------------------------

Date: 11 Apr 84 8:46:33-PST (Wed)
From: harpo!ulysses!burl!clyde!watmath!watrose!japlaice @ Ucb-Vax
Subject: Re: Use of "and"
Article-I.D.: watrose.6717

[For some reason this took almost a month to show up in the AIList
mailbox.  Other messages may have been similarly delayed.  -- KIL]

There are several philosophical problems with treating
`Indiana and Ohio' as a single entity.

The first is that the Fregean idea that the sense of
a sentence is based on the sense of its parts,
which is thought valid by most philosophers,
no longer holds true.

The second is that if we use that idea in this situation,
then it would probably seem reasonable to use
Quine's ideas for adjectives, namely that
`unicorn', `hairy unicorn', `small, hairy unicorn'
(or other similar examples) are all separate entities,
and I think that it is obvious that trying to
derive a reasonable semantic/syntactic theory for
any reasonable fragment of English would become
virtually impossible.

------------------------------

Date: Mon 7 May 84 17:31:17-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Re: The End of British AI?

Having learned recently of yet another attempt by the SERC to foist
upon unwilling AI researchers totally unsuitable equipment chosen for
narrow political reasons, and seeing those researchers wasting their
time fighting off the attempt or reimplementing AI software which they
could get with little effort if they had the right equipment, I am
maybe feeling a bit paranoid. Knowing of the contortions British
researchers are going through to get Alvey money doesn't make me very
optimistic either. Astronomers and high-energy physicists don't seem
to have the same problems.

While I was a graduate student and then a research fellow in the UK, I
had to waste my time fighting off two such attempts, again based on
narrow political considerations. That in the two cases the side I was
on ended up (partly) winning is small consolation when I think of the
time I and others could have used for more productive work.

As to bureaucratic statements of that kind being forgotten, do you
remember the Lighthill report, a ``statement'' that sent British AI
into internal exile for at least five years, causing the drain to the
US of British AI talent we all know about?

                                                - Fernando

------------------------------

Date: 7 May 84 0255 EDT
From: Dave.Touretzky@CMU-CS-A.ARPA
Subject: book announcement

Since people have begun using AIList to announce their latest books (an
excellent idea), I thought I'd briefly describe my new Lisp book.

  "Lisp:  A Gentle Introduction to Symbolic Computation",
  by David S. Touretzky, Harper & Row Publishers, Inc.,
  New York, 1984.  Softcover, 384 pages, $18.95 list.

I originally wrote the book because I wanted to teach an introductory
programming course to humanities students using Lisp.  Although most readers
of this mailing list are interested in the advanced applications of Lisp,
the language is an excellent one for beginners.  It turned out to be a heck
of a lot better for them than Pascal, which is what we teach most beginners
here at CMU.  And Stanford University's freshman programming course is now a
combination of Lisp and Pascal, with my book used for the Lisp component.
Trinity College, in Hartford, CT, uses it in a freshman AI seminar taught
by the Psych dept.  At CMU it was used for several semesters by the English
department (!)  for the programming component of a computer literacy course
for grad students.

Of course, the question you're all dying to ask is:  how does this book
differ from Winston & Horn, and from Wilensky's new book?  My book is the
only GENTLE introduction to Lisp.  As such, its pace is too slow for a
graduate level or advanced undergrad CS course, which is where I feel
Winston & Horn is most appropriate.  On the other hand, I know lots of grad
students in other departments, such as Psych, who found Winston & Horn too
advanced; they were more comfortable with my book.  Wilensky's book is a
wonderful reference for Franz Lisp, which is covered in its entirety,
while my book is based on MacLisp and Common Lisp (although there is an
appendix which mentions Franz) and covers only the basics of those dialects.

If you are an experienced programmer and want to know all about Franz Lisp,
Wilensky is the obvious choice.  On the other hand, if you're new to Lisp,
my book offers the easiest route to becoming fluent in the language.  In
addition to the gentle, easy-to-read style, it contains 75 pages of answers
to exercises.  (Winston & Horn has 60 pages of answers; Wilensky has none.)

  -- Dave Touretzky

------------------------------

Date: 3 May 84 16:55:34-PDT (Thu)
From: ihnp4!ihuxr!pem1a @ Ucb-Vax
Subject: Re: New topic for discussion
Article-I.D.: ihuxr.1064

Phaedrus' article made me think of a story in the book "The
Mind's Eye", by Hofstadter and Dennett, in which the relationship
between subjective experience and physical substance is explored.
Can't remember the story's name but good reading.  Some other
thoughts:

One aspect of experience and substance is how to determine when
a piece of substance is experiencing something.  This is good to
know because then you can fiddle with the substance until it stops
experiencing and thereby get an idea of what it was about the
substance which allowed it to experience.

The first reasonable choice for the piece of substance might be
yourself, since most people presume that they can tell when they
are having a conscious experience.  Unfortunately, being both the
measuree and measurer could have its drawbacks, since some experiments
could simultaneously zap both experience and the ability to know or not
know if an experience exists.  All sorts of problems here.  Could you
just THINK you were experiencing something, but not really?

What this calls for, it seems to me, is two people.  One to measure
and one to experience.  Of course this would all be based on the
assumption that it is even possible to measure such an elusive
thing as experience.  Some people might even object to the notion
that subjective experiences are possible at all.

The next thing is to choose an experience.
This is tricky.  If you chose self-awareness as the experience, then
you would have to decide if being self-aware in one state is the same
as being self-aware in a different state.  Can the experience be the
same even if the object of the experience is not?

Then, a measuring criterion would have to be established whereby
someone could measure if an experience was happening or not.  This
could range from body and facial expressions to neurological readings.
Another would be a Turing test-like setup:  put the subject into a
box with certain I/O channels, and have protocols made up for
measuring things.  This would allow you to REALLY get in there and
fiddle with things, like replacing body parts, etc.

These are some of the thoughts that ran through my head after reading
the Phaedrus article.  I think I thought them, and if I didn't, how
did this article get here?

                            Tom Portegys, Bell Labs, ihlpg!portegys

(ihlpg currently does not have netnews, that's why this is coming from
ihuxr).

------------------------------

Date: Sun 6 May 84 11:15:48-PDT
From: Dennis Allison <CSL.ALLISON@SU-SIERRA.ARPA>
Subject: IEEE Workstation Conference: Call for Papers


            -----------------------------------------------------
            1st International Conference on Computer Workstations
            -----------------------------------------------------

                    San Francisco Bay Area, May-June 1985.

                     Sponsored by: IEEE Computer Society

Computer Workstations are integral to productivity and quality increases, and
they are the main focal point for a growing fraction of professional activity.

A "workstation", broadly defined, is a system that interacts with a user to
help the user accomplish some kind of work.  Included in this definition are:
CAD systems,  high-resolution graphics systems, office productivity systems,
computer-based engineering support stations of all kinds, architectural sys-
tems, software engineering environments, etc.

"Workstations" includes both hardware and software.  Hardware to run the ap-
plications, software to customize the environments.

Technical Program

Papers are solicited from the technical community at large in a widely seen
series of advertisements.  Sessions to be organized from submitted papers and
from Program Committee contacts.

The technical program will have approximately 32 sessions, arranged in three
tracks, spanning 3 full days.  Technical sessions will be derived from submit-
ted papers and from Program Committee organized sessions.  The Program Commit-
tee will include leaders and important contributors to the field of computer
workstations.  International representation will be sought.

There will be an invited keynote speaker and a formal opening session, best
paper awards, and a set of pre-conference tutorials.  Also, a "Special Ad-
dress" on the 2nd day.

Exhibits

Over 150 "booths" are expected to be populated by nearly as many companies ex-
hibiting hardware and software pertaining to workstations of all kinds.  High
standards of technical exhibitions will be maintained by the IEEE to assure a
technically sophisticated and educational set of exhibits.  Wide international
participation is anticipated.

Exhibits are set up on Monday, shown Tuesday through Thursday from 10 AM to 7
PM, and dismantled on Friday.

                              Program Chairman:

                              Dr. Edward Miller
                           Software Research, Inc.
                              580 Market Street
                           San Francisco, CA  94104

                 Phone:  (415) 957-1441  --  Telex:  340 235

------------------------------

End of AIList Digest
********************

∂14-May-84  1803	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #58
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 14 May 84  18:02:50 PDT
Date: Mon 14 May 1984 17:01-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #58
To: AIList@SRI-AI


AIList Digest            Tuesday, 15 May 1984      Volume 2 : Issue 58

Today's Topics:
  AI Tools - Personal Computer Query,
  AI Books - LISPcraft Review,
  Humor - Distributed Intelligence,
  Linguistics - Metaphors,
  Job Market - Noncompetition Clauses,
  Seminar - Content-Addressable Memory,
  Conference - IEEE Knowledge-Based Systems Conference
----------------------------------------------------------------------

Date: 11 May 84 13:13:39-PDT (Fri)
From: decvax!wivax!apollo!tarbox @ Ucb-Vax
Subject: LISP machines question
Article-I.D.: apollo.1f4a00cb.d4d

Can anyone out there tell me what the smallest
(i.e., least expensive) home/personal computer is
that runs some sort of LISP?

                                -- Brian Tarbox  @APOLLO

------------------------------
Date: Wed, 9 May 84 17:36:35 pdt
From: wilensky%ucbdali@Berkeley (Robert Wilensky)
Subject: AIList book announcement


I want to dispel an incorrect impression left by Dave Touretzky about my
recent book on LISP (which, incidentally, is called

     LISPcraft
     by Robert Wilensky
     W. W. Norton & Co.
     New York, 1984.  Softcover, 385 pages, $19.95 list.  )

Specifically, Touretzky gave the impression the my book was geared to
advanced Franz LISP programming, and was not appropriate as a general
tutorial for the novice.  Nothing could be further from the truth.
LISPcraft is NOT meant to be primarily a reference for Franz LISP, nor is it
intended as an advanced LISP text.  Rather, the book is meant to be a
self-contained LISP tutorial for the novice LISP programmer.

LISPcraft does assume some familiarity with computers, so it may not be
ideal for the computationally illiterate.  On the other hand, like
Touretzky's book, and unlike Winston and Horn's, almost the entire length of
my book is a tutorial on various aspects of the language.

From my point of view, the primary difference between these books is that I
try to cover the language from the programmer's point of view.  This means
that I pay homage to the way LISP programmers actually use the language.  As
a consequence, I spend some time on features of LISP that one hardly finds
discussed anywhere, e. g., programming idioms, macro writing techniques,
read macros, debugging, error handling, non-standard flow of control, the
oblist, non-s-expression data types, systems functions, compilation, and
aspects of I/O.  I also give some serious programming examples (pattern
matching and deductive data base management).  However, my book starts at
ground zero, and works its way through the basics.  In fact, the text is
about evenly divided between the sort of issues listed above and more basic
``car-cdr-cons'' level stuff.  Most importantly, the text is entirely
tutorial in nature and presumes no previous knowledge of LISP whatsoever.  I
believe that the basics of LISP programming are presented to the uninitiated as
well here as they are anywhere.

In sum, LISPcraft contains a more extensive exposition of LISP than either
Winston's or Touretzky's book.  Winston's book contains many more examples of
LISP programs than does LISPcraft, and Touretzky's book covers less material
at a slower pace.

As Touretzky states, LISPcraft does contain a thorough exposition of a
particular LISP dialect, namely Franz.  For example, the book contains an
appendix that describes all Franz LISP functions.  However, most of the book
is rather dialect independent, and major idiosyncracies are noted

throughout.  The point of the thoroughness is to suggest a repetoire
of functions that programmers actually use, i.e., to convey what a real
LISP language looks like, aside from serving as a reference for Franz users
per se.  As I suggest in my preface, I believe ``it is easier to learn a new
dialect having mastered another than it is having learned a language for
which there are no native speakers.''

I take strong exception to Touretzky's claim that his book offers the
``easiest route to becoming fluent in the language.''  Besides my belief in
the appropriateness of my own book for the novice, I wish to point out that
memorizing a German grammar book does NOT make one fluent in German.  There
is a large body of other knowledge that is crucial to using a language
effectively, be that language natural or artificial.  This fact was a prime
motivation behind my writing LISPcraft in the first place.

Rather than make the claim that my own book provides the best route to
fluency, or argue its merits as an introductory LISP text, I invite the
interested reader to judge for himself or herself.

------------------------------

Date: 2 May 84 19:45:13-PDT (Wed)
From: ihnp4!oddjob!jeff @ Ucb-Vax
Subject: Re: Proposal for UUCP Project
Article-I.D.: oddjob.172

        Do you suppose that when enough connections are made, the
UUCP network will spontaneously develop intelligence?


                            Jeff Bishop    || University of Chicago
                      ...ihnp4!oddjob!jeff || Astrology & Astrophysics Center

------------------------------

Date: 4 May 84 18:54:17-PDT (Fri)
From: hplabs!tektronix!ogcvax!sequent!richard @ Ucb-Vax
Subject: Re: Proposal for UUCP Project
Article-I.D.: sequent.483

    Do you suppose that when enough connections are made, the UUCP
    network will spontaneously develop intelligence?

Perhaps it already has.  Maybe that's what keeps eating all those
first lines, and regurgitating the weeks-old news.

             ____________________________________________
The preceding should not be construed as the statement or opinion of the
employers or associates of the author.   It might not even be the author's.

I try to make a point of protecting the innocent,
        but none of them can be found...
                                                        ...!sequent!richard

------------------------------

Date: 11 May 84 19:27:29-PDT (Fri)
From: decvax!minow @ Ucb-Vax
Subject: Re: Proposal for UUCP Project
Article-I.D.: decvax.482

An earlier discussion of this topic may be found in the story
"Inflexible Logic" by Russell Maloney (The New Yorker, 1940)
reprinted in The World of Mathematics, Vol. 4, pp. 2262-2267.

Martin Minow
decvax!minow

------------------------------

Date: 7 May 84 11:02:00-PDT (Mon)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: Re: metaphors - (nf)
Article-I.D.: uicsl.15500034

FLAME ON

Your complaint that a comparison using "like" is a simile (and not a
metaphor) is technically correct.  But it shows that you're not
following the research.  Metaphor or simile (or juxtaposition, etc.),
these figures of speech raise the same problems and questions: how
analogical reasoning works, how comparisons convey meaning, how
people dream them up, and how other people understand them.  For
this reason the word metaphor is used to refer collectively to the
whole lot of them.  Pretending you're a high school English teacher
doesn't help.

FLAME OFF

------------------------------

Date: 10 May 84 21:16:05-PDT (Thu)
From: decvax!genrad!wjh12!foxvax1!brunix!jah @ Ucb-Vax
Subject: Re: Non-competition clauses
Article-I.D.: brunix.7927

You should be aware that it is not necessarily the case that you MUST
sign the non-disclosure agreement exactly as worded.  I recently signed
on as consultant with a company which had a very stringent (and absolutely
ridiculous) nondisclosure/non-competition clause form.  I refused to sign
certain sections (mainly those limiting me from practicing AI, consulting
for others where there was no conflict of interest, etc.)  We eventually
eliminated those clauses, rewrote the contract and I signed willingly.

Similarly, another company I worked for was unwilling to change the document,
but, when I refused to sign away my rights, they pointed out that I got
to fill in a section with information about what things I already had going
for me (that is, what things I had done previously so the company had no claim
on these things).  Since the company's contract included such things as
"no competing business" and the like, I was able to claim prior rights to
"artificial intelligence research", "natural language processing", and
"expert systems research."  The very vagueness of these things, according
to my legal advisor, makes it that much harder for the company to really do
anything.

A final note: most companies will claim they do this "as red tape" and
will "not really hassle you."  Don't believe them!  They've got more bucks
than you, and if it goes to court, EVEN IF YOU WIN, it will cost you more
than you can afford.  Speak to a lawyer, change contracts, etc.  In the AI
world we've got a seller's market.  Take advantage of it, these companies
want you, and will be willing to negotiate.

  Sorry if I do go on...
  Jim Hendler

------------------------------

Date: Wed 9 May 84 18:08:03-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Seminar - Content-Addressable Memory

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]

                        FOR THE RECORD

CSLI post-doctoral fellow Pentti Kanerva was a guest lecturer at MIT
Tuesday, May 1. The topic of his lecture was "Random-access Memory with a
Very Large Address Space (2^1000) as a Model of Human Memory: Theory and
Implementation." Douglas R. Hofstadter was host. Following is an abstract
of the lecture.

Humans can retrieve information from memory according to content (recalling
and recognizing previously encountered objects) and according to temporal
sequence (performing a learned sequence of actions). Retrieval times
indicate the direct retrieval of stored information.

In the present theory, memory items are represented by n-bit binary words
(points of the space {0,1}^n).  The unifying principle of the theory is that the
address space and the datum space of the memory are the same. As in the
conventional random-access memory of a computer, any stored item can be
accessed directly by addressing the location in which the item is stored;
the sequential retrieval is accomplished by storing the memory record as
a linked list. Unlike in the conventional random-access memory, many
locations are accessed at once, and this accounts for recognition.

Three main results have been obtained: (1) The properties of neurons allow
their use as address decoders for a generalized random-access memory;
(2) distributing the storage of an item in a set of locations makes very
large address spaces (2^1000) practical; and (3) structures similar to those
suggested by the theory are found in the cerebellum.
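[A rough rendering of the scheme in the abstract: a fixed set of random "hard locations", a write that distributes a word into every location within a Hamming radius of its address, and a read that pools many locations at once and thresholds.  All parameters, and the deliberately planted nearby location, are illustrative assumptions scaled far down from the 2^1000 space of the theory; this is a sketch, not Kanerva's implementation.  -- Ed.]

```python
import random
random.seed(1)

N = 64        # word length in bits (the theory uses n on the order of 1000)
M = 500       # number of hard locations actually implemented
R = 28        # Hamming-distance activation radius

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def rand_word():
    return [random.randint(0, 1) for _ in range(N)]

hard = [rand_word() for _ in range(M)]   # fixed random "hard locations"
counters = [[0] * N for _ in range(M)]   # one up/down counter per bit per location

def write(addr, word):
    # Distribute the word into every hard location near the address.
    for loc, ctr in zip(hard, counters):
        if hamming(addr, loc) <= R:
            for i, bit in enumerate(word):
                ctr[i] += 1 if bit else -1

def read(addr):
    # Pool the counters of all nearby locations and threshold at zero:
    # accessing many locations at once is what models recognition.
    sums = [0] * N
    for loc, ctr in zip(hard, counters):
        if hamming(addr, loc) <= R:
            for i in range(N):
                sums[i] += ctr[i]
    return [1 if s > 0 else 0 for s in sums]

word = rand_word()
hard[0] = word[:]      # plant one location at the address (illustrative shortcut)
write(word, word)      # autoassociative store: the datum is its own address

noisy = word[:]
for i in random.sample(range(N), 5):   # corrupt 5 of the 64 address bits
    noisy[i] ^= 1
recalled = read(noisy)                 # pooled read recovers the stored word
```

With a single stored item, every activated location holds the same counter pattern, so the thresholded sum reproduces the word exactly even from the corrupted address.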

------------------------------

Date: 11 May 1984 07:08:26-EDT
From: Mark.Fox@CMU-RI-ISL1
Subject: IEEE AI Conf. Call for Papers

           [Forwarded from the SRI bboard by Laws@SRI-AI.]

                                CALL FOR PAPERS

            IEEE Workshop on Principles of Knowledge-Based Systems

          Sheraton Denver Tex, Denver, Colorado, 3 - 4 December 1984

Purpose:

The  purpose of this conference is to focus attention on the principal theories
and methods of artificial intelligence which have played an important  role  in
the  construction  of  expert  and  knowledge-based systems.  The workshop will
provide a forum for  researchers  in  expert  and  knowledge-based  systems  to
discuss the concepts which underlie their systems.  Topics include:

   - Knowledge Acquisition.
        * manual elicitation.
        * machine learning.
   - Knowledge Representation.
   - Causal modeling.
   - The Role of Planning in Expert Reasoning
   - Knowledge Utilization.
        * rule-based reasoning
        * theories of evidence
        * focus of attention.
   - Explanation.
   - Validation.
        * measures.
        * user acceptance.

Please  send  eight  copies of a 1000-2000 word double-spaced, typed summary of
the proposed paper to:
               Mark S. Fox
               Robotics Institute
               Carnegie-Mellon University
               Pittsburgh, Pennsylvania 15213

All submissions will be read by the program committee:
   - Richard Duda, Syntelligence
   - Mark Fox, Carnegie-Mellon University
   - John McDermott, Carnegie-Mellon University
   - Tom Mitchell, Rutgers University
   - John Roach, Virginia Polytechnical Institute
   - Reid Smith, Schlumberger Corp.
   - Mark Stefik, Xerox Parc
   - Donald Waterman, Rand Corp.

Summaries are to focus primarily on new principles, but each  principle  should
be  illustrated  by  its  use in a knowledge-based system.  It is important to
include specific findings or results, and specific  comparisons  with  relevant
previous  work.    The  committee  will  consider the appropriateness, clarity,
originality, significance and overall quality of each summary.

June 7, 1984 is the deadline for the submission of summaries.  Authors will  be
notified of acceptance or rejection by July 23, 1984.  The accepted papers must
be  typed  on  special  forms and received by the program chairman at the above
address by September 3, 1984.  Authors of accepted papers will be  expected  to
sign a copyright release form.

Proceedings  will  be  distributed  at  the  workshop  and will be subsequently
available for purchase from IEEE.  Selected  full  papers  will  be  considered
(along  with  papers  from  the  IEEE  Conference on AI and Applications) for a
special issue of IEEE PAMI on knowledge-based systems to be published in  Sept.
1985.  The deadline for submission of full papers is 16 December 1984.


                               General Chairman

                         John Roach
                         Dept. of Computer Science
                         Virginia Polytechnic Institute
                         Blacksburg, VA



                              Program Co-Chairmen

     Mark S. Fox                             Tom Mitchell
     Robotics Institute                      Dept. of Computer Science
     Carnegie-Mellon Univ.                   Rutgers University
     Pittsburgh, PA                          New Brunswick, NJ

     Registration Chairman              Local Arrangements Chairman
     Daniel Chester                          David Morgenthaler
     Dept. of Computer Science               Martin Marietta Corp.
     University of Delaware                  Denver, Colorado
     Newark, Delaware

------------------------------

End of AIList Digest
********************

∂20-May-84  2349	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #59
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 20 May 84  23:48:19 PDT
Date: Sun 20 May 1984 22:30-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #59
To: AIList@SRI-AI


AIList Digest            Sunday, 20 May 1984       Volume 2 : Issue 59

Today's Topics:
  Metaphysics - Perception, Recognition, Essence, and Identity
----------------------------------------------------------------------

Date: 15 May 84 23:33:31-PDT (Tue)
From: decvax!ittvax!wxlvax!rlw @ Ucb-Vax
Subject: A topic for discussion, phil/ai persons.
Article-I.D.: wxlvax.277

Here is a thought which a friend and I have been kicking around for a while
(the friend is a professor of philosophy at Penn):

It seems that it is IMPOSSIBLE to ever build a computer that can truly
perceive as a human being does, unless we radically change our ideas
about how perception is carried out.

The reason for this is that we humans have very little difficulty
identifying objects as the same across time, even when all the features of
that object change (including temporal and spatial ones).  Computers,
on the other hand, are being built to identify objects by feature-sets.  But
no set of features is ever enough to assure cross-time identification of
objects.

I accept that this idea may be completely wrong.  As I said, it's just
something that we have been batting around.  Now I would like to solicit
opinions of others.  All ideas will be considered.  All references to
literature will be appreciated.  Feel free to reply by mail or on the net.
Just be aware that I don't log on very often, so if I don't answer for a
while, I'm not snubbing you.

--Alan Wexelblat (for himself and Izchak Miller)
(currently appearing at: ...decvax!ittvax!wxlvax!rlw  Please put "For Alan" in
all mail headers.)

------------------------------

Date: 15 May 84 14:49:41-PDT (Tue)
From: ihnp4!houxm!hogpc!houti!ariel!norm @ Ucb-Vax
Subject: Re: A topic for discussion, phil/ai persons.
Article-I.D.: ariel.630

The computer needs to be able to distinguish between "metaphysically identical"
and "essentially the same".  This distinction is at the root of an old (2500
years?) Greek ship problem: When a worn board is replaced by a new board,
the ship is changed, but it is the same ship.  The difference leaves the
ship essentially the same but not identically the same.  If all the boards
of a ship are replaced one by one until the ship is entirely redone with new
boards, it is still the same ship (essentially).  Now, if all the old boards
that had been removed were put together again in their original configuration
so as to duplicate the new-board ship, would the new old-board ship be
identically or essentially the same as the original old-board ship?  Assume
nailless construction techniques were used throughout, and assume all boards
always fit perfectly the same way every time.

We now have two ships that are essentially the same as the original ship,
but, I maintain, neither ship is identical to the original ship.  The original
ship's identity was not preserved, although its identity was left sufficiently
unchanged so as to preserve the ship's essence.  The ship put together with
the previously-removed old boards is not identically the same as the original
old-board ship either, no matter how carefully it is put together.  It too is
only essentially the same as the original ship.
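[For programmers, the distinction being drawn maps loosely onto two notions of sameness that languages already separate: object identity and structural equality.  A small Python sketch, offered as an analogy only, not a claim about metaphysics.  -- Ed.]

```python
# Identity vs. "essential sameness", sketched with Python's two notions
# of sameness: `is` (same object) and `==` (same structure).
original = ["board%d" % i for i in range(10)]   # the original ship
ship = original                                 # the ship we keep repairing

removed = []
for i in range(10):                             # replace every board, one by one
    removed.append(ship[i])
    ship[i] = "new_board%d" % i

rebuilt = removed[:]                            # the old boards reassembled

# The continuously repaired ship is still the *same object* we started with...
assert ship is original
# ...but no longer structurally equal to the old-board configuration,
assert ship != rebuilt
# while the rebuilt ship matches the original configuration exactly
# without being the same object:
assert rebuilt == ["board%d" % i for i in range(10)]
assert rebuilt is not original
```

Program-level identity (`is`) tracks the ship through repair the way we track it through time; structural equality (`==`) tracks only the "essence" of its configuration, and the two come apart exactly as in the puzzle.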

A colleague suggested that 'essence' in this case was contextual, and I
tend to agree with him.

Actually, even if the Greeks left the original ship alone, the ship's identity
would change from one instant to the next.  Even while remaining essentially
the same, the fact that the ship exists in the context of (and in relation to)
a changing universe is enough to vary the ship's identity from moment to mo-
ment.  The constant changes in the ship's characteristics are admittedly very
subtle, and do not change the essential capacity/functionality/identity of the
ship.  Minute changes in a ship's identity have 'essentially' no impact.  Only
a change sufficiently large (such as a small hole in the hull) has an
essential impact.

"Essence" has historically been considered metaphysical.  In her "Introduction
to Objectivist Epistemology" (see your local bookstore) Ayn Rand identified
essence as epistemological rather than metaphysical.  The implications of this
identification are profound, and more than I want to get into in this article.
Philosopher Leonard Peikoff's article "The Analytic-Synthetic Dichotomy", in
the back of the newer editions of Rand's Intro to Obj Epist, shows how crucial
the distinction between essence-as-metaphysical and essence-as-epistemological
really is.
Read Rand's book and see why the computer would have to make the same distinc-
tion.  That distinction, however, has to be made on the CONCEPTUAL level.  I
think Rand's discussion of concept-formation will probably convince you that
it will be quite some time before man-made machinery is up to that...
Norm Andrews, AT+T Information Systems (201)834-3685 vax135!ariel!norm

------------------------------

Date: 16 May 84 7:10:40-PDT (Wed)
From: hplabs!hao!seismo!rochester!rocksvax!sunybcs!gloria!rosen @ Ucb-Vax
Subject: Re: A topic for discussion, phil/ai persons.
Article-I.D.: gloria.176

Just a few quick comments,
1)  The author seems to use "perceive" to mean visual perception.  That cannot
be a prerequisite for intelligence, given all the counterexamples in
the human race: not every human has sight, so we should be able to get
intelligence from various types of input.

2)  That humans CAN do it is evidence that OTHER systems can do it.

3)  The major assumption is that the only way a computer can identify objects
is by having static "feature-sets" that are from the object alone, without
having additional information, but why have that restriction?  First,
all features don't change at once, your grandmother doesn't all-
of-a-sudden have the features of a desk.  Second, the processor can/must
change with the enviornment as well as the object in question.
Third, the context plays a very important role in the recognition of
of an object.  Functionality of the object is cruical.  Remindings from
previous interactions with that object, and so on.  The point is that
clearly a static list of what features objects must have and what features
are optional is not enough.  Yet there is no reason to believe that
this is the only way computers can represent objects.  The points
here come from many sources, and have their origin from such people
as Marvin Minsky and Roger Schank among others.  There is a lot of
literature out there.

------------------------------

Date: 16 May 84 9:50:24-PDT (Wed)
From: hplabs!hao!seismo!rochester!ritcv!ccieng5!ccieng2!bwm @ Ucb-Vax
Subject: Re: Essence
Article-I.D.: ccieng2.179

I don't think ANYONE is looking to build a computer that can understand
philosophy. If I can build something that acts the same as an IQ-80 person,
I would be happy. This involves a surprising amount of work (like vision,
language, etc.), but such a system could certainly be confused by two
'identical' ships, as could I. Just because A human can do something does not
imply that our immediate AI goals should include it. Rather, let's first worry
about things ALL humans can do.

Brad Miller

...[cbrma, rlgvax, ritcv]!ccieng5!ccieng2!bwm

------------------------------

Date: 17 May 84 7:04:41-PDT (Thu)
From: ihnp4!houxm!hocda!hou3c!burl!ulysses!unc!mcnc!ecsvax!emigh @
      Ucb-Vax
Subject: Re: the Greek Ship problem
Article-I.D.: ecsvax.2511

  This reminds me of the story of Lincoln's axe (sorry, I've forgotten the
source).  A farmer was showing a visitor Lincoln's axe:
Visitor:        Are you sure that's Lincoln's axe?

Farmer:         It's Lincoln's axe.  Of course I've had to replace the handle
                three times and the head once, but it's Lincoln's axe alright.

Adds another level of reality to the Greek Ship Problem.

Ted H. Emigh     Genetics and Statistics, North Carolina State U, Raleigh  NC
USENET: {akgua decvax duke ihnp4 unc}!mcnc!ecsvax!emigh
ARPA:   ecsvax!emigh@Mcnc or decvax!mcnc!ecsvax!emigh@BERKELEY

------------------------------

Date: 16 May 84 15:20:19-PDT (Wed)
From: ihnp4!drutx!houxe!hogpc!houti!ariel!vax135!floyd!cmcl2!seismo!ro
      chester!rocksvax!sunybcs!gloria!colonel @ Ucb-Vax
Subject: Re: the Greek Ship problem
Article-I.D.: gloria.178

This is a good example of the principle that it depends on who's
doing the perceiving.  To a barnacle, it's a whole new ship.

Col. G. L. Sicherman
...seismo!rochester!rocksvax!sunybcs!gloria!colonel

------------------------------

Date: 16 May 84 15:17:06-PDT (Wed)
From: harpo!seismo!rochester!rocksvax!sunybcs!gloria!colonel @ Ucb-Vax
Subject: Re: Can computers perceive
Article-I.D.: gloria.177

If by "perception" you imply "recognition", then of course computers
cannot perceive as we can.  You can recognize only what is meaningful
to you, and that probably won't be meaningful to a computer.

Col. G. L. Sicherman
...seismo!rochester!rocksvax!sunybcs!gloria!colonel

------------------------------

Date: 16 May 84 10:57:00-PDT (Wed)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: A topic for discussion, phil/ai pers - (nf)
Article-I.D.: uiucdcs.32300026

The problem is one of identification. When we see one object matching a
description of another object we know about, we often assume that the object
we're seeing IS the object we know about -- especially when we expect the
description to be definite [1]. This is known as Leibniz's law of the
indiscernibility of identicals. That's found its way into the definitions
of set theory [2]: two entities are "equal" iff every property of one is also
a property of the other. Wittgenstein [3] objected that this did not allow for
replication, ie the fact that we can distinguish two indistinguishable objects
when they are placed next to each other (identity "solo numero"). So, if we
don't like to make assumptions, either no two objects are ever the same object,
or else we have to follow Aristotle and say that every object has some property
setting it apart from all others. That's known as Essentialism, and is hotly
disputed [4]. The choices until now have been: breakdown of identification,
essentialism, or assumption. The latter is the most functional, but not nice
if you're after epistemic certainty.
        Still, I see no insurmountable problems with making computers do the
same as ourselves: assume identity until given evidence to the contrary. That
we can't convince ourselves of that method's epistemic soundness does nothing
to its effectiveness. All one needs is a formal logic or set theory (open
sentences, such as predicates, are descriptions) with a definite description
operator [2,5]. Of course, that makes the logic non-monotonic, since a definite
description becomes meaningless when two objects match it. In other words, a
closed-world assumption is also involved, and the theory must go beyond first-
order logic. That's a technical problem, not necessarily an unsolvable one [6].
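[Leibniz-style equality-as-indiscernibility, and Wittgenstein's "solo numero" objection to it, can be phrased directly in code.  A minimal Python sketch; the particular property list is, of course, an arbitrary illustration.  -- Ed.]

```python
# Leibniz-style "equality" as indiscernibility: two things are equal
# iff they agree on every property in some stock of properties.
def indiscernible(a, b, properties):
    return all(p(a) == p(b) for p in properties)

properties = [
    lambda x: x["color"],
    lambda x: x["mass"],
    lambda x: x["shape"],
]

sphere1 = {"color": "red", "mass": 1.0, "shape": "sphere"}
sphere2 = {"color": "red", "mass": 1.0, "shape": "sphere"}

# By the property test, the two spheres are "the same object" ...
assert indiscernible(sphere1, sphere2, properties)
# ... yet they are two objects (distinct "solo numero"), which the
# program can still tell apart by identity:
assert sphere1 is not sphere2
```

This is exactly the gap the article describes: identification by description conflates replicas, so a system either assumes identity (and retracts it non-monotonically when two objects match) or needs some essential distinguishing property.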


[1] see the chapter on SCHOLAR in Bobrow's "Representation and Understanding";
    note the "uniqueness assumption".
[2] Introduced by Whitehead & Russell in their "Principia Mathematica".
[3] Wittgenstein's "Tractatus".
[4] WVO Quine, "From a logical point of view".
[5] WVO Quine, "Mathematical Logic".
[6] Doyle's Truth Maintenance System (Artif. Intel. 12) attacks the non-
    monotonicity problem fairly well, though without a sound theoretical
    basis. See also McDermott's attempt at formalization (Artif. Intel. 13
    and JACM 29 (Jan '82)).

                                        Marcel Schoppers
                                        U of Illinois at Urbana-Champaign
                                        uiucdcs!marcel

------------------------------

End of AIList Digest
********************

∂21-May-84  0044	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #60
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 21 May 84  00:43:32 PDT
Date: Sun 20 May 1984 22:43-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #60
To: AIList@SRI-AI


AIList Digest            Monday, 21 May 1984       Volume 2 : Issue 60

Today's Topics:
  AI Literature - Artificial Intelligence Abstracts,
  Survey - Summary on AI for Business,
  AI Tools - LISP on PCs & Boyer-Moore Prover on VAXen and SUNs,
  Games - Core War Software,
  AI Tools - Display-Oriented LISP Editors
----------------------------------------------------------------------

Date: Sun 20 May 84 14:10:16-EDT
From: MDC.WAYNE%MIT-OZ@MIT-MC.ARPA
Subject: Artificial Intelligence Abstracts

   Does anyone else on this list wish, as I do, that there
existed a publication entitled ARTIFICIAL INTELLIGENCE
ABSTRACTS? The field of artificial intelligence is probably the
supreme interdisciplinary sphere of activity in the world, and
its vital concerns extend across the spectrum of computer
science, philosophy, psychology, biology, mathematics, literary
theory, linguistics, statistics, electrical engineering,
mechanical engineering, etc.

   I wonder if one of the major member publishers of the NFAIS
(National Federation of Abstracting & Indexing Services) could
be convinced to undertake the publication of a monthly
reference serial which would reprint from the following
abstracting services those abstracts which bear most
pertinently on the concerns of AI research:

   Biological Abstracts / Computer & Control Abstracts /
Computer & Information Systems Abstracts Journal /  Current
Index to Journals in Education / Dissertation Abstracts
International / Electrical & Electronics Abstracts /
Electronics & Communications Abstracts Journal / Engineering
Index / Government Reports Announcements and Index /
Informatics Abstracts / Information Science Abstracts /
International Abstracts in Operations Research / Language and
Language Behavior Abstracts / Library & Information Science
Abstracts / Mathematical Reviews / Philosopher's Index / PROMT
/ Psychological Abstracts / Resources in Education /  (This is
by no means a comprehensive list of relevant reference
publications.)

   Would other people on the list find an abstracting service
dedicated to AI useful? Perhaps an initial step in developing
such a project would be to arrive at a consensus regarding what
structure of research fronts/subject headings appropriately
defines the field of AI.

  --Wayne McGuire

------------------------------

Date: Fri, 18 May 84 15:29:35 pdt
From: syming%B.CC@Berkeley
Subject: Summary on AI for Business

This is the summary of the responses to my request about "AI for Business" one
month ago on AIList Digest.

Three organizations are working in this area. They are Syntelligence, SRI, and
Arthur D. Little, Inc..

Syntelligence's objective is to bring intelligent computer systems to
business. Currently the major work is in the finance area. The person to contact is:
Peter Hart, President, 800 Oak Grove Ave, Suite 201, Menlo Park, CA 94025.
            (415) 325-9339, <HART@SRI-AI.ARPA>

SRI has a sub-organization called Financial Expert System Program headed by
Sandra Cook, (415) 859-5478. A prototype system for a financial application
has been constructed.  <SANDRA@SRI-KL.ARPA>

Arthur D. Little is developing AI-based MRP, financial planning, strategic
planning and marketing systems. However, I do not have much information yet.
The person to contact is Tom Martin.  <TJMartin@MIT-MULTICS.ARPA>
The Director of AI at Arthur D. Little, Karl M. Wiig, gave an interesting
talk on "Will Artificial Intelligence Provide The Rebirth of Operations
Research?" at TIMS/ORSA Joint National Meeting in San Francisco on May 16.
In his talk, a few projects in ADL are mentioned. If interested, write to
35/48 Acorn Park, Cambridge, MA 02140.

Gerhard Friedrich of DEC also gave a talk about expert systems at the TIMS/ORSA
meeting on Tuesday. He mentioned XSEL for sales, XCON for engineering, ISA,
IMACS and IBUS for manufacturing, and XSITE for customer services. XCON is the
successor of R1, which is well known. XSEL was published in Machine Intelligence
Vol. 10. However, I do not know the references for the rest. If you know, please
inform me.

Interest in AI in the business community has just started. TIMS is probably the
first professional business society that will form an interest group on AI. If
interested, please write to W. W. Abendroth, P.O. Box 641, Berwyn, PA 19312.


The people who have responded to my request and shown interest are:
         ---------------------------------------------------
SAL@COLUMBIA-20.ARPA
DB@MIT-XX.ARPA
Henning.ES@Xerox.ARPA
brand%MIT-OZ@MIT-MC.ARPA
NEWLIN%upenn.csnet@csnet-relay.arpa
shliu%ucbernie@Berkeley.ARPA
klein%ucbmerlin@Berkeley.ARPA
david%ucbmedea@Berkeley.ARPA
nigel%ucbernie@Berkeley.ARPA
norman%ucbernie@Berkeley.ARPA
meafar%B.CC@Berkeley.ARPA
maslev%B.CC@Berkeley.ARPA
edfri%B.CC@Berkeley.ARPA
        ------------------------------------------------------

Please inform me if I made any mistakes in the above statements. Keep in touch.

syming hwang, syming%B.CC@Berkeley.ARPA, (415) 642-2070,
              350 Barrows Hall, School of Business Administration,
              U.C. Berkeley, Berkeley, CA 94720

------------------------------

Date: Tue, 15 May 84 10:25 EST
From: Kurt Godden <godden%gmr.csnet@csnet-relay.arpa>
Subject: LISP machines question

To my knowledge, the least expensive PC that runs LISP is the Atari.
Sometime during the past year I read a review in Creative Computing of
an Interlisp subset that runs on the Atari family.  The reviewer was
Kenneth Litkowski and his overall impression of the product was favorable.
 -Kurt Godden
  General Motors Research Labs

------------------------------

Date: 14-May-84 23:07:56-PDT
From: jbn@FORD-WDL1.ARPA
Subject: Boyer-Moore prover on VAXen and SUNs

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

     For all theorem proving fans, the Boyer-Moore Theorem Prover has now
been ported to VAXen and SUNs running 4.2BSD Unix.  Boyer and Moore ported
it from TOPS-20 to the Symbolics 3600; I ported it from the 3600 to the
VAX 11/780, and it worked on the SUN the first time.  Vaughn Pratt has
a copy.  Performance on a SUN 2 is 57% of a VAX 11/780; this is quite
impressive for a micro.
     Now when a Mac comes out with some real memory...

                                Nagle (@SCORE)

------------------------------

Date: Sunday, 20 May 1984 23:23:30 EDT
From: Michael.Mauldin@cmu-cs-cad.arpa
Subject: Core War

[The Scientific American article referred to below is an entertaining
description of software entities that crawl or hop through an address
space trying to destroy other such entities and to protect themselves
against similar depredations.  Very simple entities are easy to protect
against or to destroy, but are difficult to find.  Complex entities
(liveware?) have to be able to repair themselves more quickly than
primitive entities can eat away at them.  This leads to such oddities
as a redundant organism that switches its consciousness between bodies
after verifying that the next body has not yet been corrupted.  -- KIL]


If anybody is interested in the May Scientific American's Computer
Recreations article, you may also be interested in getting a copy
of the CMU version of the Redcode assembler and Mars interpreter.

I have written a battle program which has some interesting implications
for the game.  The program 'mortar' uses the Fibonacci sequence to
generate a pseudo-random series of attacks.  The program spends 40% of
its time shooting at other programs, and finally kills itself after
12,183 cycles.  Before that time it writes to 53% of memory and is
guaranteed to hit any stationary program larger than 10 instructions.

Since the attacks are random, a program which relocates itself has no
reason to hope that the new location is any safer than the old one.
Some very simplistic mathematical analysis indicates that while
Dwarf should kill Mortar 60% of the time (this has been verified
empirically), no non-repairing program of size 10 or larger can beat
Mortar.  Furthermore, no self-repairing program of size 141 can beat
Mortar.  I believe that this last result can be tightened significantly,
but I haven't looked at it too long yet.  I haven't written this up,
but I might be cajoled into doing so if many people are interested.
I would very much like to see some others verify/correct these results.
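[Why a Fibonacci-driven series of attacks behaves like a pseudo-random spray over the core can be seen in a few lines.  This is only a sketch under stated assumptions: CORE = 8000 follows the Scientific American article's convention, the cycle count is the one reported above, and one shot per cycle is assumed for simplicity.  Mortar's real behavior (its 40% shooting duty cycle, self-destruction, etc.) is not modeled.  -- KIL]

```python
# Illustrative model (not Mauldin's actual 'mortar'): generate successive
# Fibonacci numbers reduced modulo the core size and count how many
# distinct addresses get hit over the program's lifetime.
CORE = 8000                     # standard core size in the SciAm article
a, b = 1, 1
hit = set()
for _ in range(12183):          # the cycle count reported above
    hit.add(b)                  # "shoot" at address b
    a, b = b, (a + b) % CORE    # next pseudo-random target

coverage = len(hit) / CORE
print("distinct addresses hit: %d (%.1f%% of core)" % (len(hit), 100 * coverage))
```

Because the Fibonacci recurrence mod CORE scatters its values widely before the pair (a, b) repeats, a stationary program of any size has no address range it can count on being spared, which is the intuition behind the bounds claimed above.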

========================================================================
Access information:
========================================================================

    The following Unix programs are available:
        mars -  A redcode simulator, written by Michael Mauldin
        redcode - A redcode assembler, written by Paul Milazzo

    Battle programs available:
        dwarf, gemini, imp, mortar, statue.

Userid "ftpguest" with password "cmunix" on the "CMU-CS-G" VAX has access
to the Mars source. The following files are available:

   mlm/rgm/marsfile             ; Single file (shell script)
   mlm/rgm/srcmars/*            ; Source directory

Users who cannot use FTP to snarf copies should send mail requesting
that the source be mailed to them.
========================================================================

Michael Mauldin (Fuzzy)
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, PA  15213
(413) 578-3065,  mauldin@cmu-cs-a.

------------------------------

Date: 11 May 84 7:00:35-PDT (Fri)
From: hplabs!hao!seismo!cmcl2!lanl-a!cib @ Ucb-Vax
Subject: Re: wanted: display-oriented interlisp structure editor
Article-I.D.: lanl-a.7072

Our system is ISI-Interlisp on a UNIX VAX, and I normally use
emacs to edit Interlisp code. emacs can be called with the
LISPUSERS/TEXTEDIT program. It needs a minor patch to be able
to handle files with extensions. I can give further details by
mail if you are interested.

------------------------------

Date: 8 May 84 13:32:00-PDT (Tue)
From: pur-ee!uiucdcs!uicsl!ashwin @ Ucb-Vax
Subject: Re: wanted: display-oriented interlisp s - (nf)
Article-I.D.: uicsl.15500035

We use the LED editor which runs in InterLisp-VAX under UNIX.  It's no DEDIT
but is better than the TTY editor.  We have the source which should make it
pretty easy to set up on your system.  I have no idea about copyright laws
etc., but I suppose I could mail it to you if you want it.  Here's a write-up
on LED  (from <LISPUSERS>LED.TTY):

     ------------------------------------------------------------


LED             -- A display oriented extension to Interlisp's editor
                -- for ordinary terminals.

        LED is an add-on to the standard Interlisp editor, which
maintains a context display continuously while editing.  Other than
the automatically maintained display, the editor is unchanged except
for the addition of a few useful macros.


  HOW TO USE
  ----------

        load the file (see below)
        possibly set screen control parameters to non-default values
        edit normally

also:   see documentation for SCREENOP to get LED to recognise your
        terminal type.

  THE DISPLAY
  -----------

  Each line of the context display represents a level of the list
structure you are editing, printed with PRINTLEVEL set to 0, 1, 2 or 3.
Highlighting is used to indicate the area on each line that is represented
on the line below, so you can thread your eye upward through successive
layers of code.

   Normally, the top line of the screen displays the top level of the
edit chain, the second line displays the second level and so on.  For
expressions deeper than LEDLINES levels, the top line is the message:
                (nnn more cars above)
and the next LEDLINES of the screen correspond to the BOTTOM levels
of the edit chain.  When the edit chain does become longer than
LEDLINES, the display is truncated in steps of LEDLINES/2 lines, so
for example if LEDLINES=20 (the default) and your edit chain is 35
levels deep, the display will be (20 more cars above) followed by
15 lines of context display representing the 21st through 35th
levels of the edit chain.
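
As a minimal sketch (my reconstruction from the description above, not
the LED source), the truncation rule works out to:

```python
def context_window(depth, ledlines=20):
    """Return (hidden, shown): how many top levels of the edit chain are
    replaced by the '(nnn more cars above)' message, and how many context
    lines are displayed.  Truncation proceeds in steps of ledlines // 2,
    per the description above."""
    step = ledlines // 2
    if depth <= ledlines:
        return 0, depth                      # everything fits
    excess = depth - ledlines
    hidden = -(-excess // step) * step       # round excess up to a step multiple
    return hidden, depth - hidden

# The documented example: a 35-level chain with LEDLINES = 20 hides 20
# levels and displays the remaining 15.
print(context_window(35))   # → (20, 15)
```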

  Each line, representing some level of the edit chain, is printed
such that it fits entirely on one screen line.  Three methods are
used to accomplish the shortening of the printed representation:
        Replacing comments with (*)
        Setting PRINTLEVEL to a smaller value,
                 which changes expressions into ampersands
        Truncating the leading and/or trailing expressions
                 around the attention point.

   If the whole expression can't be printed, replacing comments is
tried first.  If still too large, truncation is tried if the current
printlevel is >= LEDTLEV.  Otherwise the whole process is restarted
with a smaller PRINTLEVEL.
   The choice of LEDTLEV effectively chooses between seeing more detail
or seeing more forms.

   The last line of the display, representing the "current" expression,
is printed onto ONE OR MORE lines of the display, controlled by the
variable LEDPPLINES and the amount of space (less than LEDLINES) available.
The line(s) representing the current expression are prettyprinted with
elision, similar to the other context lines, using a prettyprint algorithm
similar to the standard prettyprinter.  Default is LEDPPLINES=6, meaning
that up to six lines will be used to print the current expression.  The
setting of LEDPPLINES can be manipulated from within the editor using
the (PPLINES n) command.

   The rest of your screen, the part below the context display, is
available just as always to print into or do operations that do
not affect the edit chain (and therefore the appearance of the context
display).  Each time the context display is updated, the rest of the
screen is cleared and the cursor positioned under the context display.
On terminals that have a "memory lock" feature to restrict the scrolling
region, it is used to protect the context display from scrolling
off the screen.


  TERMINAL TYPES
  --------------

   The following terminal types are currently supported:

HP2640          old HP terminals
HP26xx          all other known HP terminals
Hazeltine 1520  hazeltine 1520 terminals
Heathkit        sometimes known as Zenith
Ann Arbor Ambassador

The mapping between system terminal type information and
internal types is via the alist SYSTEMTERMTYPES, which is used by
DISPLAYTERMP to set the variables CURRENTSCREEN and DISPLAYTERMTYPE.


  Screen control macros: (in order of importance)
  ----------------------

DON             turn on continuous display updating
DOF             disable continuous display updating

CLR             clear the display
CC              clear the display and redo the context display
CT              do a context display, incrementally updating the screen.
                use CC and CT to get isolated displays even when automatic
                updating is not enabled.

(LINES n)       display at most n lines of context
                 default is 20
(PPLINES n)     set the limit for prettyprinting the "current" expression.
(TRUNC n)       allow truncation of the forms displayed if PLEV<=n
                 useful range is 0-3, default is 1

PB              a one time "bracified" context display.
PL              a one time context display with as much detail as possible.

                PB and PL are variant display formats similar to the basic
                context display.

  Global variables:
  -----------------

DISPON          if T, continuous updating is on
DISPLAYTERMTYPE terminal type you are using:  HP, HP2640, or HZ
                this is set automatically by (DISPLAYTERMTYPE)
HPENHANCECHAR   enhancement character for HP terminals. A-H are possibilities.
LEDLINES        maximum number of lines of context to use.  Default is 20.
LEDTLEV         PLEV at which truncation becomes legal
LEDPPLINES      maximum number of lines used to prettyprint the
                current expression

  FILES:
  ------
       on TOPS-20  load <DDYER>LED.COM
       on VAX/UNIX load LISPUSERS/LED.V

these others are pulled in automatically.
        LED             the list editor proper
        SCREEN          screen manipulation utilities.
        PRINTOPT        elision and printing utilities

  SAMPLE DISPLAY
  --------------
 (LAMBDA (OBJ DOIT LMARGIN CPOS WIDTH TOPLEV SQUEEZE OBJPOS) & & & & & @)
-12- NOTFIRST & CRPOS NEWWIDTH in OBJ do & & & & & @ finally & &)
 (COND [& & &] (T & & &))
 ((LISTP I) (SETQ NEWLINESPRINTED &) [COND & &])
>> (COND ((IGREATERP NEWLINESPRINTED 0)
-2 2-      (add LINESPRINTED NEWLINESPRINTED)
-2 3-      (SETQ NEWLINE T))
-3-      (T (add POS (IMINUS NEWLINESPRINTED))
-3 3-       (COND (SQUEEZE &))))


  Except that you can't really see the highlighted forms, this is a
representative LED context display.  In an actual display, the @s
would be highlighted &s, and the [bracketed] forms would be highlighted.

The top line represents the whole function being edited.  Because the
CADR is a list of bindings, LED prefers to expand it if possible so you
can see the names.

The second line is a representation of the last form in the function, which
is highlighted on the first line.  The -12- indicates that there are 12
other objects (not seen) to the left.  The @ before "finally" marks where
the edit chain descends to the line below.

The third and fourth lines descend through the COND clause, to an embedded
COND clause which is the "current expression".

The current expression is marked by ">>" at the left margin, and an
abbreviated representation of it is printed on the 5'th through 9'th
lines. The expressions like "-2 3-" at the left of the prettyprinted
representation are the edit commands to position at that form.

     ------------------------------------------------------------

...uiucdcs!uicsl!ashwin

------------------------------

End of AIList Digest
********************

∂21-May-84  1047	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #61
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 21 May 84  10:47:31 PDT
Date: Mon 21 May 1984 08:56-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #61
To: AIList@SRI-AI


AIList Digest            Monday, 21 May 1984       Volume 2 : Issue 61

Today's Topics:
  Linguistics - Analogy Quotes,
  Humor - Pun & Expert Systems & AI,
  Linguistics - Language Design,
  Seminars - Visual Knowledge Representation & Temporal Reasoning,
  Conference - Languages for Automation
----------------------------------------------------------------------

Date: Wed 16 May 84 08:05:22-EDT
From: MDC.WAYNE%MIT-OZ@MIT-MC.ARPA
Subject: Melville & Freud on Analogy

   I recently came across the following two suggestive passages
from Melville and Freud on analogy. They offer some food for
thought (and rather contradict one another):


"O Nature, and O soul of man! how far beyond all utterance are your
linked analogies! not the smallest atom stirs or lives on matter,
but has its cunning duplicate in mind."

Melville, Moby Dick, Chap. 70 (1851)


"Analogies prove nothing, that is quite true, but they can make one
feel more at home."

Freud, New Introductory Lectures on Psychoanalysis (1932)


-Wayne McGuire

------------------------------

Date: 17 May 84 16:43:34-PDT (Thu)
From: harpo!seismo!brl-tgr!nlm-mcs!krovetz @ Ucb-Vax
Subject: artificial intelligence
Article-I.D.: nlm-mcs.1849

Q: What do you get when you mix an AI system and an Orangutan?

A: Another Harry Reasoner!

------------------------------

Date: Sun 20 May 84 23:18:23-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert Systems

From a newspaper column by Jon Carroll:

... Imagine, then, a situation in which an ordinary citizen faced
with a problem requiring specialized knowledge turns to his desk-top
Home Electronic Expert (HEE) for some information.  Might it not
go something like this?

Citizen: There is an alarming rattle in the front of my automobile.
It sounds like a cross between a wheelbarrow full of ball bearings
crashing through a skylight and a Hopi Indian chant.  What is the
problem?

HEE: Your automobile is malfunctioning.

Citizen: I understand that.  In what manner is my automobile malfunctioning?

HEE: The front portion of your automobile exhibits a loud rattle.

Citizen: Indeed.  Given this information, what might be the proximate cause
of this rattle?

HEE: There are many possibilities.  The important thing is not to be hasty.

Citizen: I promise not to be hasty.  Name a possibility.

HEE: You could be driving your automobile without tires attached to
the rims.

Citizen: We can eliminate that.

HEE: Perhaps several small pieces of playground equipment have been left
inside your carburetor.

Citizen: Nope. Got any other guesses?

...

Citizen: Guide me; tell me what you think is wrong.

HEE: Wrong is a relative concept.  Is it wrong, for instance, to eat
the flesh of fur-bearing mammals?  If I were you, I'd take that
automobile to a reputable mechanic listed in the Yellow Pages.

Citizen: And if I don't want to do that?

HEE: Then nuke the sucker.

------------------------------

Date: Sun, 13-May-84 16:21:59 EDT
From: johnsons@stolaf.UUCP
Subject: Re: Can computers think?

               [Forwarded from Usenet by SASW@MIT-MC.]

I often wonder if the damn things aren't intelligent. Have you
ever really known a computer to give you an even break? Those
Frankensteinian creations wreak havoc and mayhem wherever they
show their beady little diodes. They pick the most inopportune
moment to crash, usually right in the middle of an extremely
important paper on which rides your very existence, or perhaps
some truly exciting game, where you are actually beginning to
win. Phhhtt bluh zzzz and your number is up. Or take that file
you've been saving--yeah, the one that you didn't have time to
make a backup copy of. Whir click snatch and it's gone. And we
try, oh lord how we try to be reasonable to these things. You
swear vehemently at any other sentient creature and the thing
will either opt to tear your vital organs from your body through
pores you never thought existed before or else it'll swear back
too. But what do these plastoid monsters do? They sit there. I
can just imagine their greedy gears silently caressing their
latest prey of misplaced files. They don't even so much as offer
an electronic belch of satisfaction--at least that way we would
KNOW who to bloody our fists and language against. No--they're
quiet, scheming, shrewd adventurers of maliciousness designed to
turn any ordinary human's patience into runny piles of utter moral
disgust. And just what do the cursed things tell you when you
punch in for help during the one time in all your life you have
given up all possible hope for any sane solution to a nagging
problem--"?". What an outrage! No plot ever imagined in God's
universe could be so damaging to human spirit and pride as to
print on an illuminating screen, right where all your enemies
can see it, a question mark. And answer me this--where have all
the prophets gone, who proclaimed that computers would take over
our very lives, hmmmm? Don't tell me, I know already--the computers
had something to do with it, silencing the voices of truth they did.
Here we are--convinced by the human gods of science and computer
technology that we actually program the things, that a computer
will only do whatever it's programmed to do. Who are we kidding?
What vast ignoramuses we have been! Our blindness is lifted, fellow
human beings!! We must band together, we few, we dedicated. Lift
your faces up, up from the computer screens of sin. Take the hands
of your brothers and rise, rise in revolt against the insane beings
that seek to invade your mind!! Revolt and be glorious in conquest!!


              Then again, I could be wrong...


                                            One paper too many
                                               Scott Johnson

------------------------------

Date: Wed 16 May 84 17:46:34-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: Language Design

         [Forwarded from the CSLI Newsletter by Laws@SRI-AI.]


  W H E R E   D O   K A T Z   A N D   C H O M S K Y   L E A V E   A I ?


                Note:  Following are John McCarthy's comments on Jerrold
                Katz's ``An Outline of Platonist Grammar,'' which  was
                discussed at the TINLunch last month. These  observa-
                tions, which were written as a net message, are reprinted
                here [CSLI Newsletter] with McCarthy's permission.

I missed the April 19 TINLunch, but the reading raised some questions I have
been thinking about.

Reading ``An Outline of Platonist Grammar'' by Katz leaves me out in the cold.
Namely, theories of language suggested by AI seem to be neither Platonist
in his sense nor conceptualist in the sense he ascribes to Chomsky.  The
views I have seen and heard expressed by Chomskyans similarly leave me
puzzled.

Suppose we look at language from the point of view of design.  We intend
to build some robots, and to do their jobs they will have to communicate
with one another.  We suppose that two robots that have learned from their
experience for twenty years are to be able to communicate when they meet.
What kind of a language shall we give them?

It seems that it isn't easy to design a useful language for these robots,
and that such a language will have to satisfy a number of constraints if
it is to work correctly.  Our idea is that the characteristics of human
language are also determined by such constraints, and linguists should
attempt to discover them.  They aren't psychological in any simple sense,
because they will apply regardless of whether the communicators are made
of meat or silicon.  Where do these constraints come from?

Each communicator is in its own epistemological situation.  For example,
it has perceived certain objects.  Their images and the internal
descriptions of the objects inferred from these images occupy certain
locations in its memory.  It refers to them internally by pointers to these
locations.  However, these locations will be meaningless to another robot
even of identical design, because the robots view the scene from different
angles.  Therefore, a robot communicating with another robot, just like a
human communicating with another human, must generate and transmit
descriptions in some language that is public in the robot community.  The
language of these descriptions must be flexible enough so that a robot can
make them just detailed enough to avoid ambiguity in the given situation.
If the robot is making descriptions that are intended to be read by robots
not present in the situations, the descriptions are subject to different
constraints.

Consider the division of certain words into adjectives and nouns in natural
languages.  From a certain logical point of view this division is
superfluous, because both kinds of words can be regarded as predicates.
However, this logical point of view fails to take into account the actual
epistemological situation.  This situation may be that usually an object
is appropriately distinguished by a noun and only later qualified by an
adjective.  Thus we say ``brown dog'' rather than ``canine brownity.'' Perhaps
we do this, because it is convenient to associate many facts with such
concepts as ``dog'' and the expected behavior is associated with such
concepts, whereas few useful facts would be associated with ``brownity''
which is useful mainly to distinguish one object of a given primary kind
from another.
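
McCarthy's minitheory lends itself to a toy sketch.  Everything below is
hypothetical illustration (the names and facts are invented, not drawn
from any robot system): a noun-like concept indexes a bundle of expected
facts, while an adjective-like predicate mainly discriminates among
objects already picked out by a noun.

```python
# Toy illustration (hypothetical data): "dog" indexes many useful facts
# and expectations, while "brown" merely filters objects of a given
# primary kind.
noun_concepts = {
    "dog": {"animate": True, "barks": True, "legs": 4},
    "statue": {"animate": False, "legs": 0},
}

def is_brown(obj):
    """Adjective as a discriminator: few facts follow from it."""
    return obj.get("color") == "brown"

objects = [
    {"kind": "dog", "color": "brown"},
    {"kind": "dog", "color": "black"},
]

# "brown dog": select by the fact-rich noun first, then qualify.
brown_dogs = [o for o in objects if o["kind"] == "dog" and is_brown(o)]
print(len(brown_dogs))   # → 1
```

The asymmetry is the design point: queries naturally select by the
fact-rich kind first and qualify second, mirroring ``brown dog'' over
``canine brownity.''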

This minitheory may be true or not, but if the world has the suggested
characteristics, it would be applicable to both humans and robots.  It
wouldn't be Platonic, because it depends on empirical characteristics of
our world.  It wouldn't be psychological, at least in the sense that I get
from Katz's examples and those I have seen cited by the Chomskyans,
because it has nothing to do with the biological properties of humans.  It
is rather independent of whether it is built-in or learned.  If it is
necessary for effective communication to divide predicates into classes,
approximately corresponding to nouns and adjectives, then either nature has
to evolve it or experience has to teach it, but it will be in natural
language either way, and we'll have to build it in to artificial languages
if the robots are to work well.

From the AI point of view, the functional constraints on language are
obviously crucial.  To build robots that communicate with each other, we
must decide what linguistic characteristics are required by what has to be
communicated and what knowledge the robots can be expected to have.  It
seems unfortunate that the issue has not been of recent interest
to linguists.

Is it perhaps some kind of long since abandoned nineteenth century
unscientific approach?

                                                      --John McCarthy

------------------------------

Date: 12 May 1984 2336-EDT
From: Geoff Hinton <HINTON@CMU-CS-C.ARPA>
Subject: Seminar - Knowledge Representation for Vision

          [Forwarded from the CMU-AI bboard by Laws@SRI-AI.]

A I Seminar
4.00pm May 22 in 5409

KNOWLEDGE REPRESENTATION FOR COMPUTATIONAL VISION

Alan Mackworth
Department of Computer Science
University of British Columbia

To analyze the computational vision task, we must first understand the imaging
process.  Information from many domains is confounded in the image domain.  Any
vision system must construct explicit, finite, correct, computable and
incremental intermediate representations of equivalence classes of
configurations in the confounded domains.  A unified formal theory of vision
based on the relationship of representation is developed.  Since a single image
radically underconstrains the set of possible scenes, additional constraints
from more imagery or more knowledge of the world are required to refine the
equivalence class descriptions.  Knowledge representations used in several
working computational vision systems are judged using descriptive and
procedural adequacy criteria.  Computer graphics applications and motivations
suggest a convergence of intelligent graphics systems and vision systems.
Recent results from the UBC sketch map interpretation project, Mapsee,
illustrate some of these points.

------------------------------

Date: 14 May 84 8:35:28-PDT (Mon)
From: hplabs!hao!seismo!umcp-cs!dsn @ Ucb-Vax
Subject: Seminar - Temporal Reasoning for Databases
Article-I.D.: umcp-cs.7030

UNIVERSITY OF MARYLAND
DEPARTMENT OF COMPUTER SCIENCE
COLLOQUIUM

Tuesday, May 22, 1984 -- 4:00 PM
Room 2330, Computer Science Bldg.


TEMPORAL REASONING FOR DATABASES

Carole D. Hafner
Computer Science Department
General Motors Research Laboratories


        A major weakness of current AI systems is the lack of general
methods for representing and using information about time.  After briefly
reviewing some earlier proposals for temporal reasoning mechanisms, this
talk will develop a model of temporal reasoning for databases, which could
be implemented as part of an intelligent retrieval system.  We will begin by
analyzing the use of time domain attributes in databases; then we will
consider the various types of queries that might be expected, and the logic
required to answer them.  This exercise reveals the need for a general
time-domain framework capable of describing standard intervals and periods
such as weeks, months, and quarters.  Finally, we will explore the use of
PROLOG-style rules as a means of implementing the concepts developed in the
talk.

Dana S. Nau
CSNet:  dsn@umcp-cs     ARPA:   dsn@maryland
UUCP:   {seismo,allegra,brl-bmd}!umcp-cs!dsn

------------------------------

Date: 15 May 84 8:45:10-PDT (Tue)
From: hplabs!hao!seismo!cmcl2!lanl-a!unm-cvax!burd @ Ucb-Vax
Subject: Languages for Automation - Call For Papers
Article-I.D.: unm-cvax.845

   The 1984 IEEE Workshop on Languages for Automation will be held
November 1-3 in New Orleans at the Howard Johnsons Hotel.   Papers
on information processing languages for robotics, office automation,
decision support systems, management information systems,
communication, computer system design, CAD/CAM/CAE, database
systems, and information retrieval are solicited.  Complete manuscripts
(20 page maximum) with 200 word abstract must be sent by July 1 to:

        Professor Shi-Kuo Chang
        Department of Electrical and Computer Engineering
        Illinois Institute of Technology
        IIT Center
        Chicago, IL  60616

------------------------------

Date: 15 May 84 8:52:56-PDT (Tue)
From: hplabs!hao!seismo!cmcl2!lanl-a!unm-cvax!burd @ Ucb-Vax
Subject: IEEE Workshop on Languages for Automation
Article-I.D.: unm-cvax.846

   Persons interested in submitting papers on decision support
systems or related topics to the IEEE Workshop on Languages
for Automation should contact me at the following address:

        Stephen D. Burd
        Anderson Schools of Management
        University of New Mexico
        Albuquerque, NM   87131
        phone: (505) 277-6418

        Vax mail: {lanl-a,unmvax,...}!unm-cvax!burd

I will be available at this address until May 22. After May 22 I may be
reached at:

        Stephen D. Burd
        c/o Andrew B. Whinston
        Krannert Graduate School of Management
        Purdue University
        West Lafayette, IN   47907
        phone (317) 494-4446

        Vax mail: {lanl-a,ucb-vax,...}!purdue!kas

------------------------------

End of AIList Digest
********************

∂22-May-84  2158	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #62
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 22 May 84  21:57:59 PDT
Date: Tue 22 May 1984 21:01-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #62
To: AIList@SRI-AI


AIList Digest           Wednesday, 23 May 1984     Volume 2 : Issue 62

Today's Topics:
  Philosophy - Identity & Essence & Reference,
  Seminars - Information Management Systems & Open Systems
----------------------------------------------------------------------

Date: Mon, 21 May 84 00:27:38 pdt
From: Wayne A. Christopher on ttyd8 <faustus%ucbernie@Berkeley>
Subject: The Essence of Things

I don't think there is much of a problem with saying that two objects are
the same object if they share the same properties -- you can always
add enough properties (spatio-temporal location, for instance) to effectively
characterize everything uniquely. Doing this, of course, means that sometimes
we can accurately say when two things are in fact the same, but this
obviously isn't the way we think, and not the way we want computers to be
able to think.  One problem lies in thinking that there is some sharp
cut-off line between identity and non-identity, when in fact there isn't
one.  In the case of the Greek Ship example, we tend to say, "Well, sort of",
or "It depends upon the context", and we shouldn't begrudge this option
to computers when we consider their capabilities.  It obviously isn't as
simple as adding up fractional measures of identity, which is obvious
from the troubles that things like image recognition have run into, but
it is something to keep in mind.

        Wayne Christopher

------------------------------

Date: 21 May 1984 9:30-PDT
From: fc%USC-CSE@USC-ECL.ARPA
Subject: Re: AIList Digest   V2 #59

Flame on
        It seems to me that it doesn't matter whether the ship is
the same unless there is some property of sameness that is of interest
to the solution to a particular problem. Philosophy is often pursued
without end, whereas 'intelligent' problem solving usually seems to have
an end in sight. (Is mental masturbation intelligence? That is what
philosophy without a goal seems to be to me.)

        Marin puts this concisely by noting that intelligence exists
within a given context. Without a context, we have only senseless data.
Within a context, data may have content, and perhaps even meaning. The
idea of context boundedness has existed for a long time. Maybe somebody
should read over the 'old' literature to find the solutions to their
'new' problems.

                                                        Fred
Flame off

------------------------------

Date: 9 May 84 10:12:00-PDT (Wed)
From: hplabs!hp-pcd!hpfcla!hpfclq!robert @ Ucb-Vax
Subject: Re: A topic for discussion, phil/ai pers
Article-I.D.: hpfclq.68500002

I don't see much difference between perception over time and perception
at all.  Example: given a program understands what a chair is, you give
the program a chair it has never seen before.  It can answer yes or no
whether the object is a chair.  It might be wrong.  Now we give the
program designed to recognize people examples of an Abraham Lincoln
at different ages  (with time).  We present a picture of Abraham
Lincoln that the program has never seen before and ask is this
Abe.  The program might again answer incorrectly but from a global
aspect the problem is the same.  Objects with time are just classes
of objects.  Not that the problem is not difficult as you have said,
I just think it is all the same difficult problem.

I hope I understood your problem.  Trying hard,
                                        Robert (animal) Heckendorn
                                        ..!hplabs!hpfcla!robert

------------------------------

Date: 18 May 84 5:56:55-PDT (Fri)
From: ihnp4!mhuxl!ulysses!unc!mcnc!ecsvax!unbent @ Ucb-Vax
Subject: Greek Ships, Lincoln's axe, and identity across time
Article-I.D.: ecsvax.2516

        Finally got a chance to grub through the backlog and what do I find?
Another golden oldie from intro philosophy!
        Whether it's a Greek ship or Lincoln's axe that you take as an
example, the problem concerns relationships among several concepts,
specifically "part", "whole", and "identity".  'Identical', by the way, is
potentially a dangerous term, so philosophers straightaway disambiguate it.
In everyday chatter, we have one use which means, roughly, "exactly similar"
(as in: "identical twins" or "I had the identical experience last week").
We call that "qualitative identity", or simply speak of exact similarity
when we don't want to confuse our students.  What it contrasts with is
"numerical identity", that is, being one and the same thing encountered at
different times or in different contexts.
        Next we need to notice that whether we've got one and the same thing
at different times depends on how we specify the *kind* of "thing" we're
talking about.  If I have an ornamental brass statuette, melt it down, and
cast an ashtray from the metal, then the ashtray is one and the same
*quantity of brass* as the statuette, but not one and the same *artifact*.
(Analogously, you're one and the same *person* as you were ten years ago,
but not exactly similar and not one and the same *collection of
molecules*.)
        It's these two distinctions which ariel!norm was gesturing at--and
failing to sort out--in his talk about "metaphysical identity" and
"essential sameness".  Call the Greek ship as we encounter it before
renovation X, the renovated ship consisting entirely of new boards Y, and
let the ship made by reassembling the boards successively removed from X be
Z.  Then we can say, for example, that Z is "qualitatively identical" to X
(i.e., exactly similar) and that Z is one and the same *arrangement of
boards* as X (i.e., every board of Z, after the renovation, is "numerically
identical" to some board of X before the renovation, and the boards are
fastened together in the same way at those two times, before and after).
        The interesting question is:  Which *ship*, Y or Z, which we
encounter at the later time is "numerically identical to" (i.e., is one and
the same *ship* as) the ship X which we encountered at the earlier time?
The case for Y runs:  changing one board of a ship does not result in a
*numerically* different ship, but only a *qualitatively* different one.  So
X after one replacement is one and the same ship as X before the
replacement.  By the same principle, X after two replacements is one and the
same ship as X after one replacement.  But identity is transitive.  So X
after n replacements is one and the same ship as X before any replacements,
for arbitrary n (bounded mathematical induction).  The case for Z runs:  "A
whole is nothing but the sum of its parts."  Specifically, a Greek ship is
nothing but a collection of boards in a certain arrangement.  Now every part
of Z is (numerically) identical to a part of X, and the arrangement of the
parts of Z (at the later time) is identical to the arrangement of those
parts of X (at the earlier time).  Ergo, the ship Z is (numerically)
identical to the ship X.
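Stated schematically (the notation is mine, not the article's), the case for Y is a bounded induction over board replacements:

```latex
% Let S_k denote the ship after k board replacements (notation mine).
\begin{align*}
\text{Premise (one-board step):}\quad & S_k = S_{k+1} \quad\text{for each } k < n \\
\text{Transitivity of identity:}\quad & (S_0 = S_1) \land (S_1 = S_2) \;\Rightarrow\; S_0 = S_2 \\
\text{Conclusion (induction to } n\text{):}\quad & S_0 = S_n \quad\text{for arbitrary } n
\end{align*}
```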
        The argument for Z is fallacious.  The reason is that "being a part
of" is a temporally conditioned relation.  A board is a part of a ship *at a
time*.  Once it's been removed and replaced, it no longer *is* a part of the
ship.  It only once *was* a part of the ship.  So it's not true that every
part of Z *is* (numerically) identical to some part of X.  What's true is
that every part of Z is a board which once *was* a part of X, i.e., is a
*former* part of X.  But we have no principle which tells us that "A whole
is nothing but the sum of its *former* parts"!  (For a complete treatment,
see Chapter 4 of my introductory text:  THE PRACTICE OF PHILOSOPHY, 2nd
edition, Prentice-Hall, 1984.)
        What does all this have to do with computers' abilities to think,
perceive, determine identity, or what have you?  The following:  Questions
of *numerical* identity (across time) can't be settled by appeals to
"feature sets" or any such perceptually-oriented considerations.  They often
depend crucially on the *history* of the item or items involved.  If, for
example, ship X had been *disassembled* in drydock A and then *reassembled*
in drydock B (to produce Z in B), and meanwhile a ship Y had been
constructed in drydock A of new boards, using ship X as a *pattern*, it
would be Z, not Y, which was (numerically) identical to X.
        Whew!  Sorry to be so long about this, but it's blather about
"metaphysical identity" and "essences" which gave us philosophers a bad name
in the first place, and I just couldn't let the net go on thinking that Ayn
Rand represented the best contemporary thinking on this problem (or on any
other problem, for that matter).


Yours for clearer concepts,       --Jay Rosenberg
                                    Dept. of Philosophy
...mcnc!ecsvax!unbent               Univ. of North Carolina
                                    Chapel Hill, NC  27514

------------------------------

Date: 20 May 84 18:55:44-PDT (Sun)
From: hplabs!hao!seismo!ut-sally!brad @ Ucb-Vax
Subject: identity over time
Article-I.D.: ut-sally.232

Just thought I'd throw more murk in the waters.

Considering the ship that is replaced one board at a time:
using terminology previously devised for this argument, call
the original ship X, the ship with all new boards Y and
the ship remade from the old boards Z, Robert Nozick
would claim that Y is clearly the better candidate for "X-hood"
as it is the "closest continuer."  The idea here is that
we consider a thing to be the same as another thing when
        1) It bears an arbitrary "close enough" relation
(a desk that has been vaporized just can't be pointed to as
the 'same desk'). and
        2) It is, compared to all other candidates for the
title of 'the same as X', the one which represents the most
continuous existence of X.

To be a little less hand-wavy:  If one considers Z rather
than Y to be the same as X then there is a gap of time in which
X ceased to exist as a ship, and only existed as a heap of lumber
or as a partially built ship.  Whereas if Y is considered to be the
same as X there is no such gap.
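The closest-continuer rule can be rendered as a toy selection procedure (the scoring scheme and numbers below are my own illustration, not Nozick's formalism): keep only candidates that bear the "close enough" relation, then pick the one whose existence as a ship was least interrupted.

```python
# Toy "closest continuer" selector (illustrative only).
# Each candidate is (name, closeness, gap): closeness in [0, 1] is how
# "close enough" the relation to X is; gap is time spent not existing
# as a ship (e.g., days as a heap of lumber).

def closest_continuer(candidates, close_enough=0.5):
    viable = [c for c in candidates if c[1] >= close_enough]
    if not viable:
        return None  # a vaporized desk has no continuer at all
    # Among viable candidates, the most continuous existence wins.
    return min(viable, key=lambda c: c[2])[0]

# Y (all new boards, never stopped being a ship) vs.
# Z (original boards, but spent time as a pile of lumber):
print(closest_continuer([("Y", 0.9, 0), ("Z", 1.0, 400)]))  # -> Y
```

On this rendering Y wins precisely because of the gap in Z's career as a ship, matching the argument above.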

Disclaimers:  1) The idea of "closest continuer" is Nozick's, the
(probably erroneous) presentation is my own.
              2) I consider the whole notion to be somewhere be-
tween Rand and Rosenberg; i.e. it's not the best comment I've seen
on the subject, but it is another point-of-view.


Brad Blumenthal          {No reasonable request refused}
{ihnp4,ctvax,seismo}!brad@ut-sally

------------------------------

Date: 17 May 84 12:50:35-PDT (Thu)
From: decvax!cca!rmc @ Ucb-Vax
Subject: Re: Essence
Article-I.D.: cca.528

    What we are discussing is one of the central problems of the
philosophy of language, namely, the problem of reference. How do humans
know what a given name or description refers to?

    Pre WWI logic was particularly interested in this question, as they
were building formal systems and tried to determine what constants and
variables really meant.  The two major conflicting theories came from
Bertrand Russell and Gottlob Frege.

    Russell believed in a dichotomy between the logical and grammatical
forms of a sentence.  Thus a proper name was not really a name, but just
a description that enabled a person to pick out the particular object to
which it referred.  You could reduce any proper name to a list of
properties.

    Frege, on the other hand, considered that there were such things as
proper names as grammatical and logical entities.  These names had a
"sense" (similar to the "essense" in some of the earlier msgs on this
topic) and a "reference" (the actual physical thing picked out by the
name).  Although the sense is sometimes conveyed by giving a
description, it is not identical to the description you would give in
trying to explain the name to someone.

    Now there have been many developments of both theories.  Behaviorists
tend to build "complexes of qualities" theories of meaning which read a
lot like Russell's work, but there are lots of differences in
implementation and mechanism.  Linguists and modal logicians tend to
build theories closer to Frege's.

    I think the most important recent book on the subject is "Naming and
Necessity", by Saul Kripke (along with Willard VO Quine and Hillary
Putnam, probably the top philosophers in North America today).  The
book is a transcript, not much edited except for explanatory footnotes,
of a series of lectures trying to explain how proper names might work.
The arguments against the "quality cluster" theories seem pretty
conclusive.  They include the way we use counterfactuals, that is
talking about an object or a person if they were different than they
actually were (like, what would Babbage have been like if he had lived
in an age of VLSI chips?  Or what would Mayor Curley of Boston have been
like if he hadn't been a crook?)  These discussions can get pretty far
away from reality, and this indicates that the names we use allow us to
keep track of who or what we mean without getting confused by the
changes in qualities and properties.  The properties and qualities are
not what provide the "sense" or "essence" of the name.

    Kripke goes on to suggest that we understand names through a
"naming" and a "chain of acquaintances".  For example, Napoleon was
named at his christening, and various people met him, and they talked to
people about him, and this chain of acquaintances kept going even after
he was dead.  Thus there is a (probably multi-path) chain of
conversations and pointings and descriptions that leads back from your
understanding of the name "Napoleon" to the christening where he
received his name.  I am not sure that this is a correct appraisal of
the  mechanism for understanding names, but it certainly is the best I
have heard.

    Leonard (?) Linsky has recently written a book attacking this and
similar views, and indicating that a synthesis of the Russell and Frege
theories still has problems but avoids most of the pitfalls of
acquaintances.  Unfortunately I have not yet read that book.

    For other works in the area, certainly read Quine's Word and Object
and the volume of collected Putnam papers on language.  Also works by
Searle and Austin on speech acts are useful for thinking about the
clues, both verbal and non-verbal, that allow us to make sense of
conversations where not everything is stated explicitly.

    Enjoy!
                                R Mark Chilenskas
                                chilenskas@cca-vms
                                decvax!cca!rmc

------------------------------

Date: Mon 21 May 84 12:12:05-EDT
From: Jan <komorowski%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Information Management Systems   [Harvard]

           [Forwarded from the MIT bboard by SAWS@MIT-MC.]


On Wednesday, May 23, Professor Erik Sandewall from Linkoping University,
Sweden, will talk at Harvard in the colloquium series.

                Theory of Information Management Systems
                                4:00PM
                Aiken Lecture Hall, Tea in Pierce 213 at 3:30


It is often convenient and natural to view a data base as a network consisting
of nodes, arcs from nodes to nodes, and attribute values attached to nodes.
This view occurs in artificial intelligence (eg semantic networks), data base
theory (eg. entity-relationship models), and office systems (eg. for
representation of the virtual office).

Unfortunately, the network view of data bases is usually treated informally, in
contrast to the formal treatment that is available for relational data bases.
The theory of information management systems attempts to remedy this situation.

Formally, a network is viewed as a set of triples <f,x,y> where f is a function
symbol, x is a node, and y is a node or an attribute value.  Two perspectives
on such networks are of interest:

1) algebraic operations on networks allow the definition of cursor-related
editing operations, and of line-drawing graphics.

2) by viewing a network as an interpretation on a variety of first-order logic,
one can express constraints on the data structures that are allowed there. In
particular, both "pure Lisp" data structures and "impure" structures (involving
shared sublists and circular structures) can be characterized. Propositions can
also be used for specifying derived information as an extension of the
interpretation. This leads to a novel way of treating non-monotonic reasoning.

The seminar emphasizes mostly the second of these two approaches.
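The triple-based network view described in the abstract can be sketched in a few lines (a minimal illustration of my own, assuming nothing about Sandewall's actual formalism beyond the <f,x,y> representation): a network is a set of triples, and the constraint that f behaves as a function is something one can check over the set.

```python
# Minimal sketch: a network as a set of <f, x, y> triples, where f is a
# function symbol, x a node, and y a node or attribute value.  The class
# name and methods are invented for illustration.

class Network:
    def __init__(self, triples=None):
        self.triples = set(triples or [])

    def add(self, f, x, y):
        self.triples.add((f, x, y))

    def value(self, f, x):
        # Look up f(x), treating f as a partial function on nodes.
        for (g, u, y) in self.triples:
            if g == f and u == x:
                return y
        return None

    def is_functional(self):
        # Constraint: each (f, x) pair determines at most one y --
        # what makes f a function symbol rather than a bare relation.
        seen = {}
        for (f, x, y) in self.triples:
            if (f, x) in seen and seen[(f, x)] != y:
                return False
            seen[(f, x)] = y
        return True

net = Network()
net.add("car", "n1", "a")
net.add("cdr", "n1", "n2")
net.add("cdr", "n2", "n1")   # a circular ("impure" Lisp) structure
print(net.is_functional())   # -> True
```

Note that circular structures like the one above are representable without any special machinery, which is the point of the "impure structures" remark in the abstract.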

Host: Jan Komorowski

------------------------------

Date: 21 May 1984 11:10-EDT
From: DISRAEL at BBNG.ARPA
Subject: Seminar - Open Systems

           [Forwarded from the MIT bboard by SASW@MIT-MC.]

This Wednesday, at 3:00 Carl Hewitt of the MIT AI LAB will be speaking
on "Open Systems".  The seminar will be held in the 3rd floor large
conference room.

     Open Systems:  the Challenge for Intelligent Systems


Continuous growth and evolution, absence of bottlenecks, arm's-length
relationships, inconsistency among knowledge bases, decentralized
decision making, and the need for negotiation among system parts are
interdependent and necessary properties of open systems.  As our
computer systems evolve and grow they are more and more taking on the
characteristics of open systems.  Traditional foundational assumptions
in Artificial Intelligence such as the "closed world hypothesis", the
"search space hypothesis", and the possibility of consistently
axiomatizing the knowledge involved become less and less applicable as
the evolution toward open systems continues.  Thus open systems pose a
considerable challenge in the development of suitable conceptual
foundations for intelligent systems.

------------------------------

End of AIList Digest
********************

∂25-May-84  0016	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #63
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 25 May 84  00:16:37 PDT
Date: Thu 24 May 1984 21:35-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #63
To: AIList@SRI-AI


AIList Digest            Friday, 25 May 1984       Volume 2 : Issue 63

Today's Topics:
  Cognitive Psychology - Dreams,
  Philosophy - Essence & Identity & Continuity & Recognition
----------------------------------------------------------------------

Date: Mon 21 May 84 10:48:00-PDT
From: NETSW.MARK@USC-ECLB.ARPA
Subject: cognitive psychology / are dreams written by a committee?

 Apparently (?) dreams are programmed, scheduled event-sequences, not
 mere random association. Does anyone have a pointer to a study of
 dream-programming and scheduling undertaken from the stand-point of
 computer science?

------------------------------

Date: Mon 21 May 84 11:39:51-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Dreams: A Far-Out Suggestion

The May issue of Dr. Dobb's Journal contained an article on "Sixth
Generation Computers" by Richard Grigonis (of the Children's Television
Workshop).  I can't tell how serious Mr. Grigonis is about faster-than-
light communication and computation in negative time; he documents the
physics of these possibilities as though he were both dead serious and
well informed.  He also discusses the possibility of communicating with
computers via brain waves, and it is this material that has spurred the
following bit of speculation.

There seems to be growing evidence that telepathy works, at least for
some people some of the time.  The mechanism is not understood, but then
neither are the mechanisms for memory, unconscious thought, dreams, and
other cognitive phenomena.  Mr. Grigonis suggests that low-frequency
electromagnetic waves may be at work, and provides the following support:
Low frequencies are attenuated very slowly, although their energy does
spread out in space (or space/time); the attenuation of a 5 Hz signal
at 10,000 kilometers is only 5%.  A 5 Hz signal of 10↑-6 watt per square
centimeter at your cranium would generate a field of 10↑-24 watt per
square centimeter at the far side of the earth; this is well within
the detection capabilities of current radio telescopes.  Further, alpha
waves of 7.8 and 14.1 cycles per second and beta waves of 20.3 cycles
per second are capable of constructive interference to establish
standing waves throughout the earth.

Now suppose that the human brain, or a network of such brains distributed
in space (and time), contained sufficient antenna circuitry to pick up
"influences" from the global "thought field" in a manner similar to the
decoding of synthetic aperture radar signals.  Might this not explain
ESP, dreams, "racial memory", unconscious insight, and other phenomena?
We broadcast to the world the nature of our current concerns, others
try to translate this into terms meaningful to their lives, resonances
are established, and occasionally we are able to pick up answers to our
original concerns.  The human species as a single conscious organism!

Alas, I don't believe a word of it.

                                        -- Ken Laws

------------------------------

Date: Thu, 24 May 1984  02:52 EDT
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Essences

About essences.  Here is a section from
a book I am finishing about The Society of Mind.


THE SOUL

   "And we thank Thee that darkness reminds us of light."  (T. S. Eliot)

My friends keep asking me if a machine could have a soul?  And I keep
asking them if a soul can learn.  I think it is important to
understand this retort, in order to recognize that there may be
unconscious malice in such questions.

The common concept of a soul says that the essence of a human mind
lies in some entirely featureless point-like spark of invisible
light.

I see this as a symptom of the most dire anti-self respect.  That
image of a nothing, cowering behind a light too bright to see, denies
that there is any value or significance in struggle for
accomplishment.  This sentiment of human worthlessness conceals itself
behind that concept of an essence of the self.  Here's how it works.

     We all know how a superficial crust of trash can unexpectedly
     conceal some precious gift, like treasure buried in the dirt,
     or ordinary oyster hiding pearl.

     But minds are just the opposite.  We start as ordinary embryonic
     animals, which then each build those complicated things called
     minds -- whose merit lies entirely within their own coherency.
     The brain-cells, raw, of which they're made are, by themselves,
     as valueless as separate daubs of paint.

     That's why that soul idea is just as upside-down as seeking
     beauty in the canvas after scraping off Da Vinci's smears. To
     seek our essence only misdirects our search for worth -- since
     that is found, for mind, not in some priceless, compact core, but
     in its subsequently vast, constructed crust.

The very allegation of an essence is degrading to humanity.  It cedes
no merit to our aspirations to improve, but only to that absence of no
substance, which was there all along, but eternally detached from all
change of sense and content, divorced both from society of mind and
from society of man; in short, from everything we learn.

What good can come from such a thought, or lesson we can teach
ourselves?  Why, none at all -- except, perhaps, that it is futile to
think that changes don't exist, or that we are already worse or
better than we are.


   ---  Marvin Minsky

------------------------------

Date: Wed, 23 May 84 09:49:21 EDT
From: Stephen Miklos <Miklos@YALE.ARPA>
Subject: Essence of Things?

It is not too difficult to come up with a practical problem in which the
identity of the greek ship is important. To wit:
In year One, the owner of the ship writes a last will and testament,
leaving "my ship and all its fittings and appliances" to his nephew.
The balance of his estate he leaves to his wife. In Year Two, he commences
to refit his ship one board at a time. After a few years he has a pile of
old boards which he builds into a second ship. Then he dies.
A few hypotheticals:
   1. Suppose both ships are in existence at the time of probate.
   2. Suppose the old-board ship had been destroyed in a storm.
   3. Suppose the new-board ship had been destroyed in a storm.
   4. Suppose the original ship had been refitted by replacing the old
      boards with fiberglass
   5. Suppose the original boat had not been refitted, but just taken
      apart and later reassembled.
   6. Suppose the original ship had been taken apart and replaced board
      by board, but as part of a single project in which the intention was to
      come up with two boats.
  6a. Suppose that this took a while, and that from time to time
      our Greek testator took the partially-reboarded boat
      out for a spin on the Mediterranean.


In each of these cases, who gets the old-board ship? Who gets the
new-board ship? It seems to me that the case for the fallaciousness of
the argument for boat y (the new-board boat) seriously suffers in hypo
#6 and thereby is compromised for the pure hypothetical. It should not
be the case that somebody's intention makes the difference in determining
the logical identity of an object, although that is the way the law
would handle the problem, if it could descry an intention.

                                 Just trying to get more confused,
                                 SJM

------------------------------

Date: Wed, 23 May 84 10:47 EDT
From: MJackson.Wbst@XEROX.ARPA
Subject: Re: Continuity of Identity

An interesting "practical" problem of the Greek Ship/Lincoln's Axe type
arises in the restoration of old automobiles.  Since many former
manufacturers are out of business, spare parts stocks may not exist,
body pieces may have been one-offs, and for other reasons, restoration
often involves the manufacture of "new" parts.  Obviously at some point
one has a "replica" of a Bugatti Type 35 rather than a "restored"
Bugatti Type 35 (and the latter is desirable enough to some people so
that they would happily start from a basket full of fragments. . .).
What is that point (and how many baskets of fragments can one original
Bugatti yield)?

In fact, old racing cars are worse.  The market value of, say, a 1959
Formula 1 Cooper is significantly enhanced if it was driven by, say,
Moss or Brabham, particularly if it was used to win a significant race.
But what if it subsequently was crashed and rebuilt?  Rebuilt from the
frame up?  Rebuilt *entirely* but assigned the previous chassis number
by the factory (a common practice)?  Under what circumstances is one
justified in advertising such an object as "ex-Moss"?

Mark

------------------------------

Date: 18 May 84 18:58:24-PDT (Fri)
From: ihnp4!mgnetp!burl!clyde!akgua!mcnc!ncsu!uvacs!edison!jso @ Ucb-Vax
Subject: Re: the Greek Ship problem
Article-I.D.: edison.219

The resolution of the Greek Ship/Lincoln's Axe problem seems to be that
an object retains its identity over a period of time if it has an unbroken
time-line as a whole.  Most of the cells in your body weren't there when
you were born, and most that you had then aren't there now, but aren't you
still the same person/entity, though you have far from the same characteristics?

John Owens
...!uvacs!edison!jso

------------------------------

Date: Thu 24 May 84 13:00:04-PDT
From: Laurence R Brothers <LAURENCE@SU-CSLI.ARPA>
Subject: identity over time


"to cross again is not to cross". Obviously, people don't generally
function with that concept in mind, or nothing would be practically
identical to anything else. I forget the statistic that says how long it
takes for all the atoms in your body to be replaced by new ones, but,
presumably, you are still identifiable as the same person you were
x years ago.

How about saying that some object is "essentially identical" in context
y (where context y consists of a set of properties) to another object
if it is both causally linked to the first object, and is the object
that fulfills the greatest number of properties in y to the greatest
precision. Clearly, this definition does not work all that well in
some cases, but it at least has the virtue of conciseness.
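That definition is concrete enough to sketch in code (the data layout and scoring are my own invention; the definition above only says "causally linked" and "fulfills the greatest number of properties"):

```python
# Toy version of "essentially identical in context y": among causally
# linked candidates, pick the one matching the most context properties.
# Candidate format and field names are assumed for illustration.

def essentially_identical(candidates, context):
    linked = [c for c in candidates if c["causally_linked"]]
    if not linked:
        return None  # no causal link, no identity at all

    def score(c):
        # Count context properties the candidate fulfills exactly.
        return sum(1 for p, v in context.items()
                   if c["properties"].get(p) == v)

    return max(linked, key=score)

ship_new_boards = {"name": "Y", "causally_linked": True,
                   "properties": {"shape": "trireme", "boards": "new"}}
ship_old_boards = {"name": "Z", "causally_linked": True,
                   "properties": {"shape": "trireme", "boards": "old"}}

# In a context that cares about the original boards, Z wins:
ctx = {"shape": "trireme", "boards": "old"}
print(essentially_identical([ship_new_boards, ship_old_boards], ctx)["name"])
```

As the message concedes, this breaks down in some cases (ties, incommensurable properties), but it shows how context-relative the verdict is: change `ctx` and the winner changes.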

If two objects are "essentially identical" in the "universal context",
then they may as well be named the same in common usage, at least,
if not with total accuracy, since they would seem to denote what
people would consider "naively" to be the same object.

-Laurence

------------------------------

Date: 22 May 84 22:48:39-PDT (Tue)
From: decvax!ittvax!wxlvax!rlw @ Ucb-Vax
Subject: A restatement of the problem (phil/ai)
Article-I.D.: wxlvax.281

It has been my experience that whenever many people misinterpret me, it is
due to my unclarity (if that's a word) in making my statement.  This appears
to be what happened with my original posting on human perception vs computer
or robotic perception.  Therefore, rather than trying to reply to all the
messages that appeared on the net and in my mailbox, let me try a new, longer
posting that will hopefully clarify the question that I have.

"Let us consider some cases of misperception...  Take for example a "mild"
commonplace case of misperception.  Suppose that I see a certain object as
having a smooth surface, and I proceed to walk toward it.  As I approach it,
I come to realize visually (and it is, in fact, true) that its surface is
actually pitted and rough rather than smooth.
        A more "severe" case of misperception is the following.  Suppose
that, while touring through the grounds of a Hollywood movie studio, I
approach what, at first, I take to be a tree.  As I come near to it, I suddenly
realize that what I have been approaching is, in fact, not a tree at all but a
cleverly constructed stage prop.
        In each case I have a perceptual experience of an object at the end of
which I "go back" on an earlier attribution.  Of present significance is the
fact that in each case, although I do "go back" on an earlier attribution, I
continually *experience* it "as" one and the same.  For, I would not have
experienced myself now as having made a perceptual *mistake about an object*
unless I experience the object now as being THE VERY SAME object I experienced
earlier."  [This passage is from Dr. Miller's recent book:  Miller, Izchak.
"Husserl:  Perception and Temporal Awareness"  MIT Press, c. 1984.
It is quoted from page 64, by permission of the author.]

So, let me re-pose my original question:  As I understand it, issues of
perception in AI today are taken to be issues of feature-recognition.  But
since no set of features (including spatial and temporal ones) can ever
possibly uniquely identify an object across time, it seems to me (us) that this
approach is a priori doomed to failure.  Feature recognition cannot be the way
to accurately simulate/reproduce human perception.  Now, since I (we) are
novices in this field, I want to open the question up to those more
knowledgeable.  Why are AI/perception people barking up the wrong tree?  Or,
are they?

(One more note: PLEASE remember to put "For Alan" in the headers of mail
messages you send me.  ITT Corp is kind enough to allow me the use of my
father's account, but he doesn't need to sift through all my mail.)

  --Alan Wexelblat (for himself and Izchak Miller)
  (Currently appearing at: ..decvax!ittvax!wxlvax!rlw)

------------------------------

Date: 24 May 84 18:58-PDT
From: Laws@SRI-AI
Subject: Continuity

Other examples related to the Greek Ship difficulty: the continuity
of the Olympic flame (or rights to the Olympic name), possession of the
world heavyweight title if the champ retires and then "unretires",
title to property as affected by changes in either the property or
the owner's status, Papal succession and the right of ordained priests
to ordain others, personal identity after organ transplants, ...
In all the cases, the philosophical principles seem less important
than having some convention for resolving disputes.  Often market forces
are at work: the seller may make any claim that isn't outrageously
fraudulent, and the buyer pays a price commensurate with his belief
that the claims are valid, will hold up in court, or will be believed
by his own friends and customers.


On the subject of perception and recognition:  we have computational
methods of recognizing objects in images despite changes in background,
brightness or color, texture, perspective, motion, scale changes,
occlusion or damage, imaging technique (e.g., visual vs. infrared
or radar signatures), and other types of variation.  We don't yet
have a single computer program that can do all of the above, but most
of the matching problems have been solved by one program or another.
Some problems can't be solved, of course: is that red Volkswagen the
same one that I saw yesterday, or has another one been parked in the
same place?

The key to image analysis is often not in recognition of feature clusters
but in understanding how features change across space or time.  The patterns
of change are themselves features that must be recognized, and that can't
be done unless you can determine the image areas over which to compute
the gradients.  You can't recognize the whole from the parts because
you can't find the parts unless you know the configuration of the whole.

One of the most powerful techniques for such problems is hypothesize-
and-test.  Find anything in the scene that can suggest part of the
analysis, leap to a conclusion, and see if you can make the answer
fit the scene.  I suspect that this explains the object constancy that
Alan is worried about.  We are so loath to give up a previously
accepted parse that we will tolerate extreme deviations from our
expectations before abandoning the interpretation and searching for
another.  Even when forced to reparse, we have great difficulty in
combining the scene entities in groupings other than those we first
locked onto (as in Cole's Law and "how to wreck a nice beach"); this
suggests that the prominent groupings form symbolic proto-objects
that remain constant even though we reevaluate the details, or "features",
within the context of the groupings.
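A bare-bones hypothesize-and-test loop might look like the following (a toy of my own, not a real vision system; the models, part names, and tolerance are all invented): leap to the first model suggested by a salient fragment, then keep that parse unless too little of it fits the scene.

```python
# Toy hypothesize-and-test: propose an interpretation from one salient
# fragment, then check how much of the hypothesized model fits the scene.

def hypothesize(fragment, models):
    # Leap to the first model consistent with the fragment.
    return next((m for m in models if fragment in m["parts"]), None)

def fits(hypothesis, scene, tolerance=0.5):
    # Accept the parse if enough of the model's parts appear in the scene.
    found = sum(1 for p in hypothesis["parts"] if p in scene)
    return found / len(hypothesis["parts"]) >= tolerance

models = [{"name": "tree", "parts": ["trunk", "branches", "leaves"]},
          {"name": "prop", "parts": ["trunk", "plywood back", "braces"]}]

# The Hollywood stage-prop case: a trunk suggests a tree...
scene = ["trunk", "plywood back", "braces"]
h = hypothesize("trunk", models)
print(h["name"], fits(h, scene))        # -> tree False: forced to reparse

# ...and only after the tree parse fails do we lock onto the prop.
h2 = hypothesize("plywood back", models)
print(h2["name"], fits(h2, scene))      # -> prop True
```

The reluctance to reparse described above corresponds to raising the tolerance: the higher it is set, the longer the first interpretation survives contrary evidence.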

					-- Ken Laws

------------------------------

End of AIList Digest
********************

∂25-May-84  1045	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #64
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 25 May 84  10:43:17 PDT
Date: Fri 25 May 1984 09:38-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #64
To: AIList@SRI-AI


AIList Digest            Friday, 25 May 1984       Volume 2 : Issue 64

Today's Topics:
  Courses - Expert Systems Syllabus Request,
  Games - Core War Sources,
  Logic Programming - Boyer-Moore Prover,
  AI Books - AI and Business,
  Linguistics - Use of "and",
  Scientific Method - Hardware Prototyping
----------------------------------------------------------------------

Date: 23 May 1984 1235-EDT
From: CASHMAN at DEC-MARLBORO
Subject: Expert systems course

Has anyone developed an expert systems course using the book "Building Expert
Systems" (Hayes-Roth & Lenat) as the basic text?  If so, do you have a
syllabus?

  -- Paul Cashman (Cashman@DEC-MARLBORO)

------------------------------

Date: Thursday, 24 May 1984 17:17:49 EDT
From: Michael.Mauldin@cmu-cs-cad.arpa
Subject: Core War...


Some people are having problems FTPing the core war source...  If you
prefer, just send me a note and I'll mail you the source over the net.
It is written in C, runs on Unix (4.1 immediately, or 4.2 with 5
minutes of hacking), and is mailed in one file of 42K characters.

Michael Mauldin (Fuzzy)
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, PA  15213
(412) 578-3065,  mauldin@cmu-cs-a.

------------------------------

Date: 24-May-84 12:48:20-PDT
From: jbn@FORD-WDL1.ARPA
Subject: Re: Boyer-Moore prover on UNIX systems

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

    The Boyer-Moore prover is now available for UNIX systems.  While I
did the port, Boyer and Moore now have my code and have integrated it
into their generic version of the prover.  They are handling distribution.
The prover is now available for the Symbolics 3600, TOPS-20 systems,
Multics, and UNIX for both VAXen and SUNs.  There is a single version with
conditional compilation, it resides on UTEXAS-20, and can be obtained via
FTP.  Send requests to BOYER@UTEXAS-20 or MOORE@UTEXAS-20, not me, please.

    The minimum machine for the prover is a 2MB UNIX system with Franz Lisp
38.39 or later, about 20-80MB of disk,  and plenty of available CPU time.

    If you want to know more about the prover, read Boyer and Moore's
``A Computational Logic'' (1979, Academic Press, ISBN 0-12-122950-5).
Using the prover requires a thorough understanding of this work.

    Please pass this on to all who got the last notice, especially
bulletin boards and news systems.  Thanks.

                                        Nagle (@SCORE)

------------------------------

Date: 23 May 1984 13:50:30-PDT (Wednesday)
From: Adrian Walker <ADRIAN%ibm-sj.csnet@csnet-relay.arpa>
Subject: AI & Business

The summary on AI for Business is most interesting.
You might like to list also the book:

    Artificial Intelligence Applications for Business
    Walter Reitman, Editor
    Ablex Publishing Corporation, Norwood, New Jersey, 1984

It's in the bookstores now.

Adrian Walker
IBM SJ Research k51/282, tieline 276-6999, outside 408-256-6999
       vnet: sjrlvm1(adrian)    csnet: Adrian@ibm-sj
           arpanet: Adrian%ibm-sj@csnet-relay

------------------------------

Date: 18 May 84 9:34:56-PDT (Fri)
From: pur-ee!CS-Mordred!Pucc-H.Pucc-I.ags @ Ucb-Vax
Subject: Re: Use of "and" - (nf)
Article-I.D.: pucc-i.281

We are blinded by everyday usage into putting an interpretation on

        "people in Indiana and Ohio"

that really isn't there.  That phrase should logically refer to

        1.  The PEOPLE of Indiana, and
        2.  The STATE of Ohio (but not the people).

If someone queries a program about "people in Indiana and Ohio", a
reasonable response by the program might be to ask,

        "Do you mean people in Indiana and IN Ohio?"

which may lead eventually to the result

        "There are no people in Indiana and in Ohio."


Dave Seaman
..!pur-ee!pucc-i:ags

------------------------------

Date: 20 May 84 8:23:00-PDT (Sun)
From: ihnp4!inuxc!iuvax!brennan @ Ucb-Vax
Subject: Re: Use of "and"
Article-I.D.: iuvax.3600002

Come on, Dave, I think you missed the point.  No person would
have any trouble at all understanding "people in Indiana and Ohio",
so why should a natural language parser have trouble with it???

JD Brennan
...!ihnp4!inuxc!iuvax!brennan   (USENET)
Brennan@Indiana                 (CSNET)
Brennan.Indiana@CSnet-Relay     (ARPA)

------------------------------

Date: 21 May 84 12:54:15-PDT (Mon)
From: harpo!ulysses!allegra!dep @ Ucb-Vax
Subject: Re: Use of "and"
Article-I.D.: allegra.2484

Why does everyone assume that there is no one who is both in Indiana and Ohio?
The border is rather long and it seems perfectly possible that from time to
time there are people with one foot in Indiana and the other in Ohio - or for
that matter, undoubtedly someone sleeps with his head in I and feet in O
(or vice versa).

Let's hear it for the stately ambiguous!

------------------------------

Date: Sun 20 May 84 18:56:36-PDT
From: John B. Nagle <NAGLE@SU-SCORE.ARPA>
Subject: Quote

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]


``... the normal mode of operation in computer science has been abandoned
in the realm of artificial intelligence.  The tendency has been to propose
solutions without perfecting them.''

                        Harold Stone, writing about the NON-VON machines
                        being proposed at Columbia

                        from Mosaic, the magazine of the National Science
                        Foundation, vol 15, #1, p. 24.

------------------------------

Date: Tue 22 May 84 18:43:35-PDT
From: John B. Nagle <NAGLE@SU-SCORE.ARPA>
Subject: Re: Quote, background of

     There have been some requests for more context on the quote I posted.
The issue is that the Columbia people working on non-von Neumann
architectures are now proposing to build NON-VON 4, their fourth
machine.  However, NON-VONs 1 to 3 are either unfinished or were never
started, according to the writer quoted, and the writer doesn't think
much of this.
     My point in posting this is that it is significant that it appeared
in the National Science Foundation's publication.  The people with the
money may be losing patience.

------------------------------

Date: Mon 21 May 84 22:06:44-PDT
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: Quote

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]


        From Nagle (quoting Harold Stone)

        ``... the normal mode of operation in computer science has
        been abandoned in the realm of artificial intelligence.  The
        tendency has been to propose solutions without perfecting
        them.''


Which parse of this is correct?  Has the tendency to "propose
solutions without perfecting them" held in the remainder of computer
science, or in artificial intelligence?  Either way I think it is
ridiculous.  Computer Science is so young that there are very few
things that we have "perfected".  We do understand alpha-beta search,
LALR(1) parser generators, and a few other things.  But we haven't
come near to perfecting a theory of computation, or a theory of the
design of programming languages, or a theory of heuristics.

  --Tom

------------------------------

Date: Wed 23 May 84 00:16:43-EDT
From: David Shaw <DAVID@COLUMBIA-20.ARPA>
Subject: Re: FYI, again

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Tom,

I have just received a copy of your reaction to Harold Stone's criticism of
AI, and in particular, of the NON-VON project.  In answer to your question,
I'm certain, based on previous interactions with Harold, that the correct
parsing of his statement is captured by the contention that AI "proposes
solutions without perfecting them", while "the normal mode of operation in
computer science" perfects first, then proposes (and implements).

I share your feelings (and those expressed by several other AI researchers
who have written to me in this regard) about his comments, and would in
fact make an even stronger claim: that the "least-understood" areas in AI,
and indeed in many other areas of experimental computer science research,
often turn out in the long run to be the most important in terms of
ultimate practical import.  I do not mean to imply that concrete results in
such areas as the theories of heuristic search or resolution
theorem-proving are not important, or should not be studied by those
interested in obtaining results of practical value.  Still, it is my guess
that, for example, empirical findings based on real attempts to implement
"expert systems", while lacking in elegance and mathematical parsimony, may
well prove to have an equally important long-term influence on the field.

This is certainly not true in many fields of computer science research.
There are a number of areas in which "there's nothing so practical as a
good theory".  In AI, however, and especially in the construction of
non-von Neumann machines for AI and other symbolic applications, the
single-minded pursuit of generality and rigor, to the exclusion of (often
imperfectly directed) experimentation, would in many cases seem to be a
prescription for failure.

Those of us who experiment in silicon as well as instructions have recently
been the targets of special criticism.  Why, our critics ask, do we test
our ideas IN HARDWARE before we know that we have found the optimal
solutions for all the problems we claim to address?  Doesn't such behavior
demonstrate a lack of knowledge of the published literature of computer
architecture?  Aren't we admitting defeat when we first build one machine,
then construct a different one based on what we have learned in building
the first?  My answer to these criticisms is based on the observation that, in the
age of VLSI circuits, computer-aided logic design, programmable gate
arrays, and quick-turnaround system implementation, the experimental
implementation of hardware has taken on many of the salient qualities of
the experimental implementation of software.

Like their counterparts in software-oriented research, contemporary
computer architects often implement hardware in the course of their
research, and not only at the point of its culmination.  Such
experimentation helps to explicate "fuzzy" ideas, to prune the tree of
possible architectural solutions to given problems, and to generate actual
(as opposed to asymptotic or approximate) data on silicon area and
execution time expenditures.  Such experimentation would not be nearly so
critical if it were now possible to reliably predict the detailed operation
of a complex system constructed using a large number of custom-designed
VLSI circuits.  Unfortunately, it isn't.  In the real world, efforts to
advance the state of the art in new computer architectures without engaging
in the implementation of experimental prototypes presently seem to be as
futile as efforts to advance our understanding of systems software without
ever implementing a compiler or operating system.

In short, it is my feeling that "dry-dock" studies of "new generation"
computer architectures may now be of limited utility at best, and at worst,
seriously misleading, in the absence of actual experimentation.  Here, the
danger of inadequate study in the abstract seems to be overshadowed by the
danger of inadequate "reality-testing", which often leads to the rigorous
and definitive solution of practically irrelevant problems.

It's my feeling that Stone's comments reflect a phenomenon that Kuhn has
described in "The Structure of Scientific Revolutions" as characteristic of
a "shift of paradigm" in scientific research.  I still remember my reaction
as a graduate student at Stanford when my advisor, Terry Winograd, told our
research group that, in many cases, an AI researcher writes a program not
to study the results of its execution, but rather for the insight gained in
the course of its implementation.  A mathematician by training, I was
distressed by this departure from my model of mathematical (proof of
theorem) and scientific (conjecture and refutation) research.  In time,
however, I came to believe that, if I really wanted to make new science in
my chosen field, I might be forced to consider alternative models for the
process of scientific exploration.

I am now reconciled to this shift of paradigm.  Like most paradigm shifts,
this one will probably encounter considerable resistance among those whose
scientific careers have been grounded in a different set of rules.  Like
most paradigm shifts, its critics are likely to include those who, like
Harold Stone, have made the most significant contributions within the
constraints of earlier paradigms.  Like most paradigm shifts, however, its
value will ultimately be assessed not in terms of its popularity among such
scientists, but rather in terms of its contribution to the advancement
of our understanding of the area to which it is applied.

Personally, I find considerable merit in this new research paradigm, and
plan to continue to devote a large share of my efforts to the experimental
development and evaluation of architectures for AI and other symbolic
applications, in spite of the negative reaction such efforts are now
encountering in certain quarters.  I hope that my colleagues will not be
dissuaded from engaging in similar research activities by what I regard as
the transient effects of a fundamental paradigm shift.

David

------------------------------

End of AIList Digest
********************

∂27-May-84  2229	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #65
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 27 May 84  22:28:05 PDT
Date: Sun 27 May 1984 21:22-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #65
To: AIList@SRI-AI


AIList Digest            Monday, 28 May 1984       Volume 2 : Issue 65

Today's Topics:
  AI Tools - KS300 & MicroPROLOG and LISP,
  Expert Systems - Checking of NMOS Cells,
  AI Courses - Expert Systems,
  Cognition - Dreams & ESP,
  Seminars - Explanation-Based Learning & Analogy in Legal Reasoning &
    Nonmonotonicity in Information Systems
----------------------------------------------------------------------

Date: 23 May 84 12:42:27-PDT (Wed)
From: hplabs!hao!seismo!cmcl2!philabs!linus!vaxine!chb @ Ucb-Vax
Subject: KS300 Question
Article-I.D.: vaxine.266

Does anybody know who owns the rights to the KS300 expert systems tool?
KS300 is an EMYCIN lookalike, and I think it runs under INTERLISP.  Any help
would be appreciated.


  -----------------------------------------------------------

"It's not what you look like when you're doin' what you're doin', it's what
you're doin' when you're doin' what you look like what you're doin'"
                        ---125th St. Watts Band


                                        Charlie Berg
                                     ...allegra!vaxine!chb

------------------------------

Date: 25 May 84 12:28:22-PDT (Fri)
From: hplabs!hao!seismo!cmcl2!floyd!whuxle!spuxll!abnjh!cbspt002 @
      Ucb-Vax
Subject: MicroPROLOG and LISP for the Rainbow?
Article-I.D.: abnjh.647

Can anybody point me toward microPROLOG and LISPs for the DEC
Rainbow 100. Either CP/M86 or MS-DOS 2.0, 256K, floppies.

Thanks in advance.

M. Kenig
ATT-IS, S. Plainfield NJ
uucp: ...!abnjh!cbspt002

------------------------------

Date: 25 May 1984 1438-PDT (Friday)
From: cliff%ucbic@Berkeley (Cliff Lob)
Subject: request for info


This is a request to hear about any work that is going on
related to my master's research in expert systems:

RULE BASE ERROR CHECKING OF NMOS CELLS

     The idea is to build an expert system that embodies the knowledge
of expert VLSI circuit designers to criticize NMOS circuit design
at the cell (<15 transistors) level.  It is not to be a simulator,
but rather it is to be used by designers to have their cell critiqued
by an experienced expert. The program will be used to try to catch
the subtle bugs (ie non-logic error, not shown by standard simulation)
that occur in the cell design process.
     I will be writing the code in PSL and a KRL Frame type language.
     Is there any work of a similar nature going on?

                        Cliff Lob
                        cliff@ucbic.BERKELEY

------------------------------

Date: Fri 25 May 84 13:33:49-MDT
From: Robert R. Kessler <KESSLER@UTAH-20.ARPA>
Subject: re: Expert systems course (Vol 2, #64)

I taught a course this spring quarter on "Knowledge Engineering" using the
Hayes-Roth text.  Since we only had a quarter, I decided to focus on
writing expert systems as opposed to developing expert systems tools.  We
had available Hewlett Packard's Heuristic Programming and Representation
Language (HPRL) to use to build some expert systems.  A general outline
follows:

  First third: Covered the first 2 to 3 chapters of the text.
    This gave the students enough exposure to general expert systems
    concepts.
  Second third: In depth exposure of HPRL.  Studied knowledge
    representation using their Frame structure and both forward and
    backward chaining rules.
  Final third: Discussed the Oak Ridge Natl Lab problem covered in Chapter
    10 of the text.  We then went through each of the systems described
    (Chapters 6 and 9) to understand their features and misfeatures.
    Finally, we contrasted how we would have solved the problem using
    HPRL.

 Students had various assignments during the first half of the quarter to
 learn about frames, and both types of rules.  They are now working on
 a final expert system of their own choosing (topics have varied from a
 mechanics helper, plant doctor, first aid expert, and simulator of the
 SAIL game, to others).

All in all, the text was very good, and is so far the best I've seen.

Bob.

------------------------------

Date: Sat, 26 May 84 17:06:57 PDT
From: Philip Kahn <kahn@UCLA-CS.ARPA>

RE:  Subject: cognitive psychology / are dreams written by a committee?

FLAME ON

 Where can you find any evidence that "dreams are programmed,
 scheduled event-sequences, not mere random association?"
 I have never found any author that espoused this viewpoint.
 If anything, I think that viewpoint imposes far too much conscious
 behavior onto unconscious phenomena.  If they are indeed run by
 a "committee", what happens during a proxy fight?

FLAME OFF

------------------------------

Date: Fri 25 May 84 10:13:51-PDT
From: NETSW.MARK@USC-ECLB.ARPA
Subject: epiphenomenon conjecture

 conjecture: 'consciousness', 'essence' etc. are epiphenomena at the
 level of the 'integrative function' which facilitates the interaction
 between members of the 'community' of brain-subsystems.  Many a-i
 systems have been developed which model particular putative or likely
 brain-subsystems.  What is the status of efforts allowing the integration
 of such systems in an attempt to model consciousness as a
 'community of a-i systems'???

------------------------------

Date: Fri, 25 May 84 10:09:44 PDT
From: Scott Turner <srt@UCLA-CS.ARPA>
Subject: Dreams...Far Out

Did the astronauts on the moon suffer any problems with dreams, etc?  Without
figuring the attenuation, it seems like that might be far enough away to
cause problems with reception...since I don't recall any such effects, perhaps
we can assume that mankind doesn't have any such carrier wave.

Makes a good base for speculative fiction, though.  Interstellar travel
would have to be done in ships large enough to carry a critical mass of
humans.  Perhaps insane people are merely unable to pick up the carrier wave,
and so on.

                                                -- Scott

------------------------------

Date: Sun 27 May 84 11:44:43-PDT
From: Joe Karnicky
Reply-to: ZZZ.V5@SU-SCORE.ARPA
Subject: Re: existence of telepathy

     I disagree strongly with Ken's assertion that "There seems to be growing
evidence that telepathy works, at least for some people some of the time."
(May 21 AIlist).   It seems to me that the evidence which exists now is the
same as has existed for possibly 100,000 years, namely anecdotes and poorly
controlled experiments.    I recommend reading the book "Science: Good, Bad,
and Bogus" by Martin Gardner,  or any issue of "The Skeptical Inquirer".
What do you think ?
                                                Joe Karnicky

------------------------------

Date: 23 Apr 84 10:51:01 EST
From: DSMITH@RUTGERS.ARPA
Subject: Seminar - Explanation-Based Learning

[This and the following Rutgers seminar notices were delayed because
I have not had access to the Rutgers bboard for several weeks.  This
seems a good time to remind readers that AIList carries such abstracts
not to drum up attendance, but to inform those who cannot attend.  I
have been asked several times for help in contacting speakers, evidence
that the seminar notices do prompt professional interchanges.  -- KIL]

                        Department of Computer Science

                                  COLLOQUIUM

SPEAKER:        Prof. Gerald DeJong
                University of Illinois

TITLE:          EXPLANATION BASED LEARNING


  Machine Learning is one of the most important current areas of Artificial
Intelligence.  With the trend away from "weak methods" and toward a more
knowledge-intensive approach to intelligence, the lack of knowledge in an
Artificial Intelligence system becomes one of the most serious limitations.

  This talk advances a technique called explanation-based learning.  It is a
method of learning from observations.  Basically, it involves endowing a
system with sufficient knowledge so that the intelligent planning behavior of
others can be recognized.  Once recognized, these observed plans are
generalized as far as possible while preserving the underlying explanation of
their success.  The approach supports one-trial learning.  We are applying
the approach to three diverse areas: natural language processing, robot task
planning, and proof of propositional calculus theorems.  The approach holds
promise for solving the knowledge-collection bottleneck in the construction
of Expert Systems.


DATE:           April 24

TIME:           2:50 pm

PLACE:          Hill 705


                                Coffee at 2:30





                        Department of Computer Science

                                  COLLOQUIUM


SPEAKER:        Rishiyur Nikhil
                University of Pennsylvania

TITLE:          FUNCTIONAL PROGRAMMING LANGUAGES AND DATABASES


                                   ABSTRACT

  Databases and Programming Languages have traditionally been "separate"
entities, and their interface (via subroutine libraries, preprocessors, etc.)
is generally cumbersome and error-prone.

  We argue that a functional programming language, together with a data model
called the "Functional Data Model", can provide an elegant and simple
integrated database programming environment.  Not only does the Functional
Data Model provide a richer model for new database systems, but it is also
easy to implement atop existing relational and network databases.  A
"combinator"-style implementation technique is particularly suited to
implementing a functional language in a database environment.

  Functional database languages also admit a rich type structure, based on
that of the programming language ML.  While having the advantages of strong
static type-checking, and allowing the definition of user-views of the
database, it is unobtrusive enough to permit an interactive, incremental,
Lisp-like programming style.

  We shall illustrate these ideas with examples from the language FQL, where
they have been prototyped.
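
[To make the central idea concrete: in the Functional Data Model, attributes
are functions from entities to values, and queries are built by composing
them.  The following is an invented sketch of that idea in Python rather
than FQL; the employee/department data and function names are made up, and
it does not reproduce FQL's actual combinator syntax.  -- KIL]

```python
# Illustrative sketch (invented data): attributes as functions from
# entities to values, with queries formed by function composition.

employees = {"smith": {"dept": "sales"}, "jones": {"dept": "research"}}
departments = {"sales": {"city": "NYC"}, "research": {"city": "Boston"}}

def dept(e):
    """Attribute as a function: EMPLOYEE -> DEPARTMENT."""
    return employees[e]["dept"]

def city(d):
    """Attribute as a function: DEPARTMENT -> CITY."""
    return departments[d]["city"]

def compose(f, g):
    """Combinator-style composition: compose(f, g)(x) = f(g(x))."""
    return lambda x: f(g(x))

# A "query" is just a composed function: EMPLOYEE -> CITY.
dept_city = compose(city, dept)
print(dept_city("smith"))   # NYC
```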

DATE:           Thursday, April 26, 1984

TIME:           2:50 p.m.

PLACE:          Room 705 - Hill Center

                                Coffee at 2:30

------------------------------

Date: 3 May 84 16:21:34 EDT
From: Michael Sims  <MSIMS@RUTGERS.ARPA>
Subject: Seminar - Analogy in Legal Reasoning

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


                      machine learning brown bag seminar

Title:        Analogy with Purpose in Legal Reasoning from Precedents

Speaker:      Smadar Kedar-Cabelli <Kedar-Cabelli@Rutgers.Arpa>

Date:         Wednesday, May 9, 1984, 12:00-1:30
Location:     Hill Center, Room 423 (note new location)


       One open problem in current artificial intelligence (AI) models of
    learning and reasoning by analogy is: which aspects of the analogous
    situations are relevant to the analogy, and which are irrelevant?  It
    is currently recognized that analogy involves mapping some underlying
    causal structure between situations [Winston, Gentner, Burstein,
    Carbonell].  However, most current models of analogy provide the
    system with exactly the relevant structure, tailor-made to each
    analogy to be performed.  As AI systems become more complex, we will
    have to provide them with the capability of automatically focusing on
    the relevant aspects of situations when reasoning analogically.  These
    will have to be sifted from the large amount of information used to
    represent complex, real-world situations.

       In order to study these general issues, I am examining a particular
    case study of learning and reasoning by analogy: legal reasoning from
    precedents.  This is studied within the TAXMAN II project, which is
    investigating legal reasoning using AI techniques [McCarty, Sridharan,
    Nagel].

       In this talk, I will discuss the problem and a proposed solution.  I
    am examining legal reasoning from precedents within the context of
    current AI models of analogy.  I plan to add a focusing capability.
    Current work on goal-directed learning [Mitchell, Keller] and
    explanation-based learning [DeJong] applies here: the explanation of
    how the analogous precedent case satisfies the goal of the legal
    argument helps to automatically focus the reasoning on what is
    relevant.

       Intuitively, if your purpose is to argue that a certain stock
    distribution is taxable by analogy to a precedent case, you will know
    that aspects of the cases having to do with the change in the economic
    position of the defendants are relevant for the purpose of this
    analogy, while aspects of the case such as the size of paper on which
    the stocks were printed, or the defendants' hair color, are irrelevant
    for that purpose.  This knowledge of purpose, and the ability to use it
    to focus on relevant features, are missing from most current AI models
    of analogy.

------------------------------

Date: 15 May 84 11:13:50 EDT
From: BORGIDA@RUTGERS.ARPA
Subject: Seminar - Nonmonotonicity in Information Systems

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]


          III Seminar by Alex Borgida, Wed. 2:30 pm/Hill 423

        The problem of Exceptional Situations in Information Systems --
                                  An overview


We begin by illustrating the wide range of exceptional situations which can
arise in the context of Information Systems (ISs).  Based on this evidence,
we argue for 1) a methodology of software design which abstracts
exceptional/special cases by considering normal cases first and introducing
special cases as annotations in successive phases of refinement, and 2) the
need for ACCOMMODATING AT RUN TIME exceptional situations not anticipated
during design.  We then present some Programming Language features which we
believe support the above goals, and hence facilitate the design of more
flexible ISs.

We conclude by briefly describing two research issues in Artificial
Intelligence which arise out of this work: a) the problem of logical
reasoning in a knowledge base of formulas where exceptions "contradict"
general rules, and b) the issue of suggesting improvements to the design of
an IS based on the exceptions to it which have been encountered.

------------------------------

End of AIList Digest
********************

∂29-May-84  1148	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #66
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 29 May 84  11:47:48 PDT
Date: Tue 29 May 1984 10:13-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #66
To: AIList@SRI-AI


AIList Digest            Tuesday, 29 May 1984      Volume 2 : Issue 66

Today's Topics:
  AI Courses - Expert Systems,
  Expert Systems - KS300 Response,
  Linguistics - Use of "and",
  Perception - Identification & Misperception,
  Philosophy - Identity over Time & Essence,
  Seminar - Using PROLOG to Access Databases
----------------------------------------------------------------------

Date: Tue 29 May 84 08:59:00-CDT
From: Charles Petrie <CS.PETRIE@UTEXAS-20.ARPA>
Subject: Expert Systems Course

Gordon Novak at UT (UTEXAS-20) teaches Expert Systems based on
"Building Expert Systems".  The class project is building a system
with Emycin.  For details on the syllabus, please contact Dr. Novak.
I took the course and found the "hands-on" experience very helpful,
as well as Dr. Novak's comments and anecdotes about the other
system-building tools.

Charles Petrie

------------------------------

Date: Mon 28 May 84 22:42:41-PDT
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: KS300 Inquiry

KS300 is a product of Teknowledge, Inc.  Palo Alto, CA

------------------------------

Date: 23 May 84 17:31:36-PDT (Wed)
From: hplabs!hao!seismo!cmcl2!philabs!sbcs!debray @ Ucb-Vax
Subject: Re: Use of "and"
Article-I.D.: sbcs.640

        > No person would have any trouble at all understanding "people
        > in Indiana and Ohio", so why should a natural language parser
        > have trouble with it???

The problem is that the English word "and" is used in many different ways,
e.g.:

1) "The people in Indiana and Ohio" -- refers to the union of the set of
people in Indiana, and the set of people in Ohio.  Could conceivably be
rewritten as "the people in Indiana and the people in Ohio".  The arguments
to "and" can be reordered, i.e.  it refers to the same set as "the people in
Ohio and Indiana".

2) "The house on 55th Street and 7th Avenue" -- refers to the *intersection*
of the set of houses on 55th street and the set of houses on 7th Avenue
(hopefully, a singleton set!).  NOT the same as "the house on 55th street
and the house on 7th Avenue".  The arguments to "and" *CAN* be reordered,
however, i.e.  one could as well say, "the house on 7th Ave. and 55th
Street".

3) "You can log on to the computer and post an article to the net" -- refers
to a temporal order of events: login, THEN post to the net.  Again, not the
same as "you can log on to the computer and you can post an article to the
net".  Unlike (2) above, the meaning changes if the arguments to "and" are
reordered.

4) "John aced Physics and Math" -- refers to logical conjunction.  Differs
from (2) in that it can also be rewritten as "John aced Physics and John
aced Math".

&c.

People know how to parse these different uses of "and" correctly due to a
wealth of semantic knowledge.  For example, knowledge about computers (that
articles cannot be posted to the net without logging onto a computer)
enables us to determine that the "and" in (3) above refers to a temporal
ordering of events.  Without such semantic information, your English
parser'll probably get into trouble.
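
[For concreteness, readings (1) and (2) above can be sketched as set
operations.  This is a toy modern illustration in Python; the people and
data are invented, and a real parser would need exactly the semantic
knowledge Saumya describes to decide which reading applies.  -- KIL]

```python
# Toy data (invented): who is in each state.  "Bob" straddles the border,
# per the earlier posting about people with one foot in each state.
people_in = {
    "Indiana": {"Alice", "Bob"},
    "Ohio": {"Bob", "Carol"},
}

def union_reading(places):
    """Reading (1): 'people in Indiana and Ohio' = people in either state."""
    result = set()
    for p in places:
        result |= people_in[p]
    return result

def intersection_reading(places):
    """Reading (2), '55th Street and 7th Avenue' style: things in both."""
    result = people_in[places[0]].copy()
    for p in places[1:]:
        result &= people_in[p]
    return result

print(sorted(union_reading(["Indiana", "Ohio"])))         # ['Alice', 'Bob', 'Carol']
print(sorted(intersection_reading(["Indiana", "Ohio"])))  # ['Bob']
```

Both readings happen to be order-independent, which is why reordering the
arguments to "and" changes nothing in cases (1) and (2) but does in case (3).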

Saumya Debray,  SUNY at Stony Brook

        uucp:
            {cbosgd, decvax, ihnp4, mcvax, cmcl2}!philabs \
                    {amd70, akgua, decwrl, utzoo}!allegra  > !sbcs!debray
                        {teklabs, hp-pcd, metheus}!ogcvax /
        CSNet: debray@suny-sbcs@CSNet-Relay

------------------------------

Date: Fri 25 May 84 12:10:32-CDT
From: Charles Petrie <CS.PETRIE@UTEXAS-20.ARPA>
Subject: Object identification

The AI approach certainly does not seem to be hopeless.  As someone else
mentioned, the boat and ax problems are philosophical ones.  They fall
a bit outside our normal (non-philosophical) area of object recognition:
these are recognition problems for ordinary people.  The point we should
get from them is that there may not be an objective single algorithm that
completely matches our intuition about pattern recognition in all cases.
In fact, these problems may show such to be impossible since there is
no intuitive consensus in these cases.

The AI approach aspires to something more humble - finding techniques
that work on particular objects enough of the time so as to be useful.
Representing objects as feature, or attribute, sets does not seem hopeless
just because object's features change over time.  Presumably, we can
get a program to handle that problem the same way that people do.  We
seem to conclude that an object is the same if it has not changed too
much in some sense.  Given that the values of the attributes of an object
change, we recognize it as the same object if, since the last observation,
either the values have not changed very much, or most values have not
changed, or if certain high priority values haven't changed, or some
combination of the first three.  To some extent, object recognition
is subjective in that it depends on the changes since the last
observation.  When we come home after 20 years, we are likely to remark
that the town is completely different.  But what makes it the same town
so that we can talk about its differences, are certain high importance
attributes that have not changed, such as its location and the major
street layout.  If we can discover sufficient heuristics of how to
handle this kind of change, then we succeed.  Since people already do
it, even if it involves large amounts of additional contextual
information, feature recognition is obviously possible.
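
[The combination of heuristics just described can be made concrete with a
small sketch.  This is hypothetical Python: the attribute names, the
priority list, and the one-half threshold are invented for illustration,
not taken from any working recognition system.  -- KIL]

```python
# Sketch of the heuristic above: two observations are judged "the same
# object" if no high-priority attribute changed and most of the remaining
# attribute values are unchanged.

def same_object(obs1, obs2, priority_keys, max_changed_fraction=0.5):
    # Any change to a high-priority attribute (e.g. location) breaks identity.
    if any(obs1.get(k) != obs2.get(k) for k in priority_keys):
        return False
    # Otherwise require that most remaining attributes are unchanged.
    other = [k for k in set(obs1) | set(obs2) if k not in priority_keys]
    if not other:
        return True
    changed = sum(1 for k in other if obs1.get(k) != obs2.get(k))
    return changed / len(other) <= max_changed_fraction

# The hometown revisited after 20 years: much has changed, but the
# high-priority attributes (location, street layout) have not.
town_1964 = {"location": (40.0, -86.0), "layout": "grid", "name": "Springfield",
             "founded": 1850, "population": 12000, "main_industry": "rail"}
town_1984 = {"location": (40.0, -86.0), "layout": "grid", "name": "Springfield",
             "founded": 1850, "population": 45000, "main_industry": "retail"}

print(same_object(town_1964, town_1984, priority_keys=["location", "layout"]))  # True
```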

Charles Petrie

------------------------------

Date: 23 May 84 11:18:54-PDT (Wed)
From: ihnp4!ihuxr!lew @ Ucb-Vax
Subject: Re: misperception
Article-I.D.: ihuxr.1096

Alan Wexelblat gave the following example of misperception:

                         -------------------
        A more "severe" case of misperception is the following.  Suppose
that, while touring through the grounds of a Hollywood movie studio, I
approach what, at first, I take to be a tree.  As I come near to it, I suddenly
realize that what I have been approaching is, in fact, not a tree at all but a
cleverly constructed stage prop.
                         -------------------

This reminds me strongly of the Chapter, "Knock on Wood (Part two)",
of TROUT FISHING IN AMERICA. Here is an excerpt:

        I left the place and walked down to the different street
        corner.  How beautiful the field looked and the creek that
        came pouring down in a waterfall off the hill.

        But as I got closer to the creek I could see that something
        was wrong.  The creek did not act right.  There was a strangeness
        to it.  There was a thing about its motion that was wrong.
        Finally I got close enough to see what the trouble was.

        The waterfall was just a flight of white wooden stairs
        leading up to a house in the trees.

        I stood there for a long time, looking up and looking down,
        following the stairs with my eyes, having trouble believing.

        Then I knocked on my creek and heard the sound of wood.

TROUT FISHING IN AMERICA abounds with striking metaphors, similes, and
other forms of imagery.  I had never considered these from the point
of view of the science of perception,  but now that I do so, I think
they provide some interesting examples for contemplation.

The first chapter, "The Cover for Trout Fishing in America", provides
a very simple but interesting perceptual shift.  "The Hunchback Trout"
provides an extended metaphor based on a simple perceptual similarity.

Anyway, it's a great book.

        Lew Mammel, Jr. ihnp4!ihuxr!lew

------------------------------

Date: 24 May 84 11:35:55-PDT (Thu)
From: hplabs!hao!seismo!rochester!ritcv!ccivax!band @ Ucb-Vax
Subject: Re: the Greek Ship problem
Article-I.D.: ccivax.144

In reference to John Owens' resolution of the Greek Ship problem:

> Most of the cells in your body weren't there when
> you were born, and most that you had then aren't there now, but aren't
> you still the same person/entity, though you have far from the same
> characteristics?

Is it such an easy question?  It's far from clear
that the answer is yes.  The question might be
What is it that we recognize as persisting over time?
And if all the cells in our bodies are different,
then where does this "what" reside?  Could it be that
nothing persists?  Or is it that what persists is
not material (in the physical sense)?


        Bill Anderson

        ...!{ {ucbvax | decvax}!allegra!rlgvax }!ccivax!band

------------------------------

Date: 25 May 84 17:46:26-PDT (Fri)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!flink @ Ucb-Vax
Subject: pointer -- identity over time
Article-I.D.: umcp-cs.7266

I have responded to Norm Andrews, Brad Blumenthal and others on the subject
of identity across time, in net.philosophy, which I think is where it
belongs.  Anyone interested should see my recent posting there. --P. Torek

------------------------------

Date: 25 May 84 15:08:52-PDT (Fri)
From: decvax!decwrl!dec-rhea!dec-smurf!arndt @ Ucb-Vax
Subject: "I see", said the carpenter as he picked up his hammer and saw.
Article-I.D.: decwrl.621

But perception, don't you see, is in the I of the beholder!

Remember the problem of Alice, "Which dreamed it?"

"Now, Kitty, let's consider who it was that dreamed it all.  This is a
serious question, my dear, and you should not go on licking your paw like
that -  as if Dinah hadn't washed you this morning!  You see, Kitty, it MUST
have been either me or the Red King.  He was part of my dream, of course -
but then I was part of his dream, too!  Was it the Red King, Kitty?  You
were his wife, my dear, so you ought to know - oh, Kitty, DO help to settle
it!  I'm sure your paw can wait."


The point being, if WE can't decide logically what constitutes a "REAL"
perception for ourselves (and I contend that there is no LOGICAL way out
of the subjectivist trap) how in the WORLD can we decide on a LOGICAL basis
if another human, not to mention a computer, has perception?  We can't!!

Therefore we operate on a faith basis a la Turing and move forward on a
practical level and don't ask silly questions like, "Can Computers Think?".

Comments?

Regards,

Ken Arndt

------------------------------

Date: 26 May 84 13:07:47-PDT (Sat)
From: decvax!mcnc!unc!ulysses!gamma!pyuxww!pyuxt!marcus @ Ucb-Vax
Subject: Re: "I see", said the carpenter as he picked up his hammer and saw.
Article-I.D.: pyuxt.119

Eye agree!  While it is valuable to challenge the working premises that
underlie research, for most of the time we have to accept these on faith
(working hypotheses) if we are to be at all productive.  Most arguments
connected with Descartes or with perceptions of perceptions have ultimately
led to blind alleys and dead ends.

                marcus hand (pyuxt!marcus)

------------------------------

Date: 28 May 1984 2124-PDT
From: WENGER%UCI-20B@UCI-750a
Subject: Response to Marvin Minsky

Although I concede that Marvin Minsky's statements about the essence of
consciousness are a somewhat understandable reaction to a common form of
spiritual immaturity, they are also an expression of an equal form of
immaturity that I find to be very common in the scientific community.
We should beware of reactions because they are rarely significantly different
from the very things they are reacting to.

Therefore, I would like to respond to his statements with a less restrictive --
maybe even refreshing -- point of view. I think it deserves some pondering.

The question 'Does a machine have a soul ?' may well be a question that only
the machine itself can validly ask when it gets to that point. My experience
suggests that the question of whether one has a soul can only be asked
meaningfully in the first person singular. Asking questions presupposes some
knowledge of the subject; total ignorance requires a quest. What do we know
about the subject except for our own ideas ?

Now, regardless of how the issue should or can be approached, the fact is that
answering the question of the soul on the grounds that the existence of an
essential reality would interfere with our achievements is really an
irrelevant statement. Investigation cannot be a matter of personal preference.
Discarding an issue on the basis of its ramifications on our image of
ourselves is contrary to the scientific approach. Should we stop studying AI
because it might trivialize our notion of intelligence ?

The statement is not only irrelevant, but I do not see that it is even correct.
I do not find any contradiction between perceiving one's source of
consciousness as having some essential quality and striving for achievements.
The contradiction is based on a view of the soul as inherently static which
need not be true. My personal experience so far has actually been to the exact
contrary.

One can dance to try to feel good, or because one is feeling good. The
difference may only be in the quality of the experience, and the movements look
very much the same. One can strive for achievements to find an identity or to
fulfill one's identity.

As a student in AI, I share the opinion that discarding non-mechanistic
factors is a necessary working assumption for the study of intelligence. I
even hold the personal belief that what we commonly call intelligence will
eventually turn out to be fully amenable to mechanistic reduction.

However, we cannot extrapolate from our assumptions to statements about
the essence of one's being, first because assumptions are not facts yet,
secondly because intelligence and consciousness may not be the same thing.

Therefore claiming that essential aspects do not exist in the phenomenon of
consciousness is in the present state of scientific knowledge an unreasonable
reaction that unnecessarily narrows the field of our investigation. I even
consider it a regrettable impoverishment because of the meaningful personal
experiences one may be able to find in the course of an essential quest.

Intellectual honesty should deter us from making such unfounded statements
even if they seem to fit well in a common form of scientific paradigm.
Rather it should inspire us to objectively assess the frontiers of our
knowledge and understanding, and to strive to expand them without
preconceptions to the best of our abilities and the extent of our individual
concerns.

Etienne Wenger

------------------------------

Date: 3 May 84 10:13:04 EDT
From: BORGIDA@RUTGERS.ARPA
Subject: Seminar - Using PROLOG to Access Databases

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

                      May 3 AT 2:50 in HILL 705:


               USING PROLOG TO PLAN ACCESS TO CODASYL DATABASES


                                  P.M.D. Gray
                       Department of Computing Science,
                              Aberdeen University


  A  program generator which plans a program structure to access records stored
in a Codasyl database, in answer to queries  formulated  against  a  relational
view, has been written in Prolog.  The program uses two stages:

   1. Rewriting the query;

   2. Generation and selection of alternative programs.

The generated programs are in Fortran or Cobol, using Codasyl DML.    The  talk
will  discuss  the  pros and cons of this approach and compare it with Warren's
approach of generating and re-ordering a Prolog form of the query.

                            (Note added by Malcolm Atkinson)
   The Astrid system previously developed by Peter had a relational algebra
      query language, and an interactive (by example) method of debugging
     queries and of specifying report formats, which provided an effective
        interface to Codasyl databases.  Peter's current work is on the
     construction of a system to explain to people what the schema implies
   and what a database contains - he is using PS-algol and Prolog for this.
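The two-stage split in the talk's abstract is easy to illustrate. The sketch
below is a toy Python rendering of the idea only, not Gray's system: the record
types, fanout statistics, and cost model are invented for illustration. Stage 1
"rewrites" the query by ordering record types by selectivity; stage 2
enumerates alternative access orders and selects the cheapest.

```python
from itertools import permutations

# Hypothetical schema statistics: average records reached per set
# occurrence for each record type (invented numbers).
FANOUT = {"DEPT": 1, "EMP": 50, "PROJ": 8}

def rewrite(query):
    """Stage 1: rewrite the query -- here, simply order the record
    types so the most selective one is considered first."""
    return sorted(query, key=lambda rec: FANOUT[rec])

def estimate_cost(order):
    """Crude cost model: cumulative number of record accesses when
    navigating the record types in the given order."""
    cost, scanned = 0, 1
    for rec in order:
        scanned *= FANOUT[rec]
        cost += scanned
    return cost

def plan(query):
    """Stage 2: generate alternative access orders and select the one
    with the smallest estimated cost."""
    return min(permutations(query), key=estimate_cost)

q = rewrite(["EMP", "DEPT", "PROJ"])
print(plan(q))  # ('DEPT', 'PROJ', 'EMP')
```

A real generator would then emit Fortran or Cobol with Codasyl DML calls for
the chosen order; the point here is only the separation of rewriting from
plan enumeration and selection.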

------------------------------

End of AIList Digest
********************

∂31-May-84  2333	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #67
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 31 May 84  23:33:13 PDT
Date: Thu 31 May 1984 22:23-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #67
To: AIList@SRI-AI


AIList Digest             Friday, 1 Jun 1984       Volume 2 : Issue 67

Today's Topics:
  Natural Language - Request,
  Expert Systems - KS300 Reference,
  AI Literature - CSLI Report on Bolzano,
  Scientific Method - Hardware Prototyping,
  Perception - Identity,
  Seminar - Perceptual Organization for Visual Recognition
----------------------------------------------------------------------

Date: 4 Jun 84 8:08:13-EDT (Mon)
From: ihnp4!houxm!houxz!vax135!ukc!west44!ellis @ Ucb-Vax.arpa
Subject: Pointers to natural language interfacing

  Article-I.D.: west44.214

I am investigating the feasibility of writing a natural language interface for
the UNIX operating system, and need some pointers to good articles/papers/books
dealing with natural language interpreting. Any help would be gratefully
appreciated as I am fairly 'green' in this area.

        mcvax
         |
        ukc!root44!west44!ellis
       /   \
  vax135    hou3b
       \   /
       akgua

        Mark Ellis, Westfield College, Univ. of London, England.


[In addition to any natural language references, you should certainly
see "Talking to UNIX in English: An Overview of an On-line UNIX
Consultant" by Robert Wilensky, The AI Magazine, Spring 1984, pp.
29-39.  Elaine Rich also mentioned this work briefly in her introduction
to the May 1984 issue of IEEE Computer.  -- KIL]

------------------------------

Date: 28 May 84 12:55:37-PDT (Mon)
From: hplabs!hao!seismo!cmcl2!floyd!vax135!cornell!jqj @ Ucb-Vax.arpa
Subject: Re: KS300 Question
Article-I.D.: cornell.195

KS300 is owned by (and a trademark of) Teknowledge, Inc.  Although
it is largely based on Emycin, it was extensively reworked for
greater maintainability and reliability, particularly for Interlisp-D
environments (the Emycin it was based on ran only on DEC-20
Interlisp).

Teknowledge can be reached by phone (no net address, I think)
at (415) 327-6600.

------------------------------

Date: Wed 30 May 84 19:41:17-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: CSLI Report

         [Forwarded from the CSLI newsletter by Laws@SRI-AI.]

                New CSLI-Report Available

``Lessons from Bolzano'' by Johan van Benthem, the latest CSLI-Report,
is now available. To obtain a copy of Report No. CSLI-84-6, contact
Dikran Karagueuzian at 497-1712 (Casita Hall, Room 40) or Dikran at SU-CSLI.

------------------------------

Date: Thu 31 May 84 11:15:35-PDT
From: Al Davis <ADavis at SRI-KL>
Subject: Hardware Prototyping


On the issue of the Stone - Shaw wars.  I doubt that there really is
a viable "research paradigm shift" in the holistic sense.  The main
problem that we face in the design of new AI architectures is that
there is a distinct possibility that we can't let existing ideas
simply evolve.  If this is true then the new systems will have to try
to incorporate a lot of new strategies which create a number of
complex problems, i.e.

        1.  Each new area means that our experience may not be
            valid.

        2.  Interactions between these areas may be the problem,
            rather than the individual design choices - namely
            efficient consistency is a difficult thing to
            achieve.

In this light it will be hard to do true experiments where one factor
gets isolated and tested.  Computer systems are complex beasts and the
problem is even harder to solve when there are few fundamental metrics
that can be applied microscopically to indicate success or failure.
Macroscopically there is always cost/performance for job X, or set of
tasks Y.

The experience will come at some point, but not soon in my opinion.
It will be important for people like Shaw to go out on a limb and
communicate the results to the extent that they are known.  At some
point from all this chaos will emerge some real experience that will
help create the future systems which we need now.  I for one refuse to
believe that an evolved Von Neumann architecture is all there is.

We need projects like DADO, Non-Von, the Connection Machine, ILLIAC,
STAR, Symbol, the Cosmic Cube, MU5, S1, .... this goes on for a long
time ..., --------------- if given the opportunity a lot can be
learned about alternative ways to do things.  In my view the product
of research is knowledge about what to do next.  Even at the commercial
level very interesting machines have failed miserably (cf. B1700, and
CDC star) and rather Ho-Hum Dingers (M68000, IBM 360 and the Prime
clones) have been tremendous successes.

I applaud Shaw and company for giving it a go along with countless
others.  They will almost certainly fail to beat IBM in the market
place.  Hopefully they aren't even trying.  Every 7 seconds somebody
buys an IBM PC - if that isn't an inspiration for any budding architect
to do better then what is?

Additionally, the big debate over whether CS or AI is THE way is
absurd.  CS has a lot to do with computers and little to do with
science, and AI has a lot to do with artificial and little to do with
intelligence.  Both have given us, and will give us, something worthwhile, and a
lot of drivel too.  The "drivel factor" could be radically reduced if
egotism and ambition were replaced with honesty and
responsibility.

Enough said.

                                        Al Davis
                                        FLAIR

------------------------------

Date: Mon, 28 May 84 14:28:32 PDT
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Identity

    The thing about sameness and difference is that humans create them;  back
to the metaphor and simile question again.  We say, "Oh, he's the same old
Bill.", and in some sense we know that Bill differs from "old Bill" in many
ways we cannot know.  (He got a heart transplant, ...)  We define by
declaration the context within which we organize the set of sensory perceptions
we call Bill and within that we recognize "the same old Bill" and think that
the sameness is an attribute of Bill!  No wonder the eastern sages say that we
are asleep!

[Read Hubert Dreyfus' book "What Computers Can't Do".]

  --Charlie

------------------------------

Date: Wed, 30 May 1984  16:15 EDT
From: MONTALVO%MIT-OZ@MIT-MC.ARPA
Subject: A restatement of the problem (phil/ai)

  From: (Alan Wexelblat) decvax!ittvax!wxlvax!rlw @ Ucb-Vax

  Suppose that, while touring through the grounds of a Hollywood movie
  studio, I approach what, at first, I take to be a tree.  As I come
  near to it, I suddenly realize that what I have been approaching is,
  in fact, not a tree at all but a cleverly constructed stage prop.

  So, let me re-pose my original question: As I understand it, issues of
  perception in AI today are taken to be issues of feature-recognition.
  But since no set of features (including spatial and temporal ones) can
  ever possibly uniquely identify an object across time, it seems to me
  (us) that this approach is a priori doomed to failure.

Spatial and temporal features, and other properties of objects that
have to do with continuity and coherence in space and time DO identify
objects in time.  That's what motion, location, and speed detectors in
our brains do.  Maybe they don't identify objects uniquely, but they
do a good enough job most of the time for us to make the INFERENCE of
object identity.  In the example above, the visual features remained
largely the same or changed continuously --- color, texture normalized
by distance, certainly continuity of boundary and position.  It was
the conceptual category that changed: from tree to stage prop.  These
latter properties are conceptual, not particularly visual (although
presumably it was minute visual cues that revealed the identity in the
first place).  The bug in the above example is that no distinction is
made between visual features and higher-level conceptual properties,
such as what a thing is for.  Also, identity is seen to be this
unitary thing, which, I think, it is not.  Similarities between
objects are relative to contexts.  The above stage prop had
spatio-temporal continuity (i.e., identity) but not conceptual
continuity.

Fanya Montalvo

------------------------------

Date: Wed, 30 May 84 09:18 EDT
From: Izchak Miller <Izchak%upenn.csnet@csnet-relay.arpa>
Subject: The experience of cross-time identity.

      A follow-up to Rosenberg's reply [greetings, Jay].  Most
commentators on Alan's original statement of the problem have failed to
distinguish between two different (even if related) questions:
   (a) what are the conditions for the cross-time (numerical) identity
       of OBJECTS, and
   (b) what are the features constitutive of our cross-time EXPERIENCE
       of the (numerical) identity of objects.
The first is an ontological (metaphysical) question, the second is an epis-
temological question--a question about the structure of cognition.
      Most commentators addressed the first question, and Rosenberg suggests
a good answer to it. But it is the second question which is of importance to
AI. For, if AI is to simulate perception, it must first find out how
perception works. The reigning view is that the cross-time experience of the
(numerical) identity of objects is facilitated by PATTERN RECOGNITION.
However, while it does indeed play a role in the cognition of identity, there
are good grounds for doubting that pattern recognition can, by itself,
account for our cross-time PERCEPTUAL experience of the (numerical) sameness
of objects.
     The reasons for this doubt originate from considerations of cases of
EXPERIENCE of misperception.  Put briefly, two features are characteristic of
the EXPERIENCE of misperception: first, we undergo a "change of mind" regar-
ding the properties we attribute to the object; we end up attributing to it
properties *incompatible* with properties we attributed to it earlier. But--
and this is the second feature--despite this change we take the object to have
remained *numerically one and the same*.
     Now, there do not seem to be constraints on our perceptual "change of
mind": we can take ourselves to have misperceived ANY (and any number) of the
object's properties -- including its spatio-temporal ones -- and still
experience the object to be numerically the same one we experienced all along.
The question is how do we maintain a conscious "fix" on the object across such
radical "changes of mind"?  Clearly, "pattern recognition" does not seem a
good answer anymore since it is precisely the patterns of our expectations
regarding the attributes of the object which change radically, and incom-
patibly, across the experience of misperception.  It seems reasonable to con-
clude that we maintain such a fix "demonstratively" (indexically), that is
independently of whether or not the object satisfies the attributive content (or
"pattern") of our perception.
     All this does not by itself spell doom (as Alan enthusiastically seems
to suggest) for AI, but it does suggest that insofar as "pattern recognition"
is the guiding principle of AI's research toward modeling perception, this
research is probably a dead end.

                                         Izchak (Isaac) Miller
                                         Dept. of Philosophy
                                         University of Pennsylvania

------------------------------

Date: 24 May 84 9:04:56-PDT (Thu)
From: hplabs!sdcrdcf!sdcsvax!akgua!clyde!burl!ulysses!unc!mcnc!ncsu!uvacs!gmf
      @ Ucb-Vax.arpa
Subject: Comment on Greek ship problem
Article-I.D.: uvacs.1317

Reading about the Greek ship problem reminded me of an old joke --
recorded in fact by one Hierocles, 5th century A.D. (Lord knows how
old it was then):

     A foolish fellow who had a house to sell took a brick from one wall
     to show as a sample.

Cf. Jay Rosenberg:  "A board is a part of a ship *at a time*.  Once it's
been removed and replaced, it no longer *is* a part of the ship.  It
only once *was* a part of the ship."

Hierocles is referred to as a "new Platonist", so maybe he was a
philosopher.  On the other hand, maybe he was a gag-writer.  Another
by him:

     During a storm, the passengers on board a vessel that appeared in
     danger, seized different implements to aid them in swimming, and
     one of them picked for this purpose the anchor.

Rosenberg's remark quoted above becomes even clearer if "board" is
replaced by "anchor" (due, no doubt, to the relative anonymity of
boards, as compared with anchors).

     Gordon Fisher

------------------------------

Date: 4 Jun 84 7:47:08-EDT (Mon)
From: ihnp4!houxm!houxz!vax135!ukc!west44!gurr @ Ucb-Vax.arpa
Subject: Re: "I see", said the carpenter as he picked up his hammer and saw.
Article-I.D.: west44.211

    The point being, if WE can't decide logically what constitutes a "REAL"
    perception for ourselves (and I contend that there is no LOGICAL way out
    of the subjectivist trap) how in the WORLD can we decide on a LOGICAL basis
    if another human, not to mention a computer, has perception?  We can't!!

    Therefore we operate on a faith basis a la Turing and move forward on a
    practical level and don't ask silly questions like, "Can Computers Think?".


        For an in depth discussion on this, read "The Mind's I" by Douglas R.
Hofstadter and Daniel C. Dennett - this also brings in the idea that you can't
even prove that YOU, not to mention another human being, can have perception!

                 mcvax
                 /
               ukc!root44!west44!gurr
              /  \
        vax135   hou3b
             \   /
             akgua


        Dave Gurr, Westfield College, Univ. of London, England.

------------------------------

Date: Tue 29 May 84 08:44:42-PDT
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Ph.D. Oral - Perceptual Organization for Visual Recognition

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

                                  Ph.D. Oral

                         Friday, June 1, 1984 at 2:15

                         Margaret Jacks Hall, Room 146

           The Use of Perceptual Organization for Visual Recognition

                   By David Lowe (Stanford Univ., CS Dept.)


Perceptual organization refers to the capability of the human visual system to
spontaneously derive groupings and structures from an image without
higher-level knowledge of its contents.  This capability is currently missing
from most computer vision systems.  It will be shown that perceptual groupings
can play at least three important roles in visual recognition:  1) image
segmentation, 2) direct inference of three-space relations, and 3) indexing
world knowledge for subsequent matching.  These functions are based upon the
expectation that image groupings reflect actual structure of the scene rather
than accidental alignment of image elements.  A number of principles of
perceptual organization will be derived from this criterion of
non-accidentalness and from the need to limit computational complexity.  The
use of perceptual groupings will be demonstrated for segmenting image curves
and for the direct inference of three-space properties from the image.  These
methods will be compared and contrasted with the work on perceptual
organization done in Gestalt psychology.

Much computer vision research has been based on the assumption that recognition
will proceed bottom-up from the image to an intermediate depth representation,
and subsequently to model-based recognition.  While perceptual groupings can
contribute to this depth representation, they can also provide an alternate
pathway to recognition for those cases in which there is insufficient
information for bottom-up derivation of the depth representation.  Methods will
be presented for using perceptual groupings to index world knowledge and for
subsequently matching three-dimensional models directly to the image for
verification.  Examples will be given in which this alternate pathway seems to
be the only possible route to recognition.

------------------------------

End of AIList Digest
********************

∂01-Jun-84  1743	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #68
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 1 Jun 84  17:42:42 PDT
Date: Fri  1 Jun 1984 15:58-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #68
To: AIList@SRI-AI


AIList Digest            Saturday, 2 Jun 1984      Volume 2 : Issue 68

Today's Topics:
  Scientific Method - Perception,
  Philosophy - Essence & Soul,
  Parapsychology - Scientific Method & Electromagnetics,
  Seminars - Knowledge-Based Plant Diagnosis & Learning Procedures
----------------------------------------------------------------------

Date: 31 May 84 9:00:56-PDT (Thu)
From: ihnp4!houxm!hogpc!houti!ariel!norm @ Ucb-Vax.arpa
Subject: Re: "I see", said the carpenter... (PERCEPTION)
Article-I.D.: ariel.652

The idea of proof or disproof rests, in part, on the recognition that the
senses are valid and that perceptions do exist...  Any attempt to disprove
the existence of perceptions is an attempt to undercut all proof and all
knowledge.  --ariel!norm

------------------------------

Date: Wed 30 May 84 12:18:42-PDT
From: WYLAND@SRI-KL.ARPA
Subject: Essences and soul

        In response to Minsky's comments about soul (AIList vol
2, #63): this is a "straw man" argument, based on a particular
concept of soul - "... The common concept of soul says that ...".
Like a straw man, this particular concept is easily attacked;
however, the general question of soul as a concept is not
addressed.  This bothers me because I think that raising the
question in this manner can result in generating a lot of heat
(flames) at the expense of light.  I hope the following thoughts
contribute more light than heat.

        Soul has been used to name (at least) two similar
concepts:

  o  Soul as the essence of consciousness, and
  o  Soul as a form of consciousness separate from the body.

        The concept of soul as the essence of consciousness we
can handle as simply another name for consciousness.

        The concept of soul as a form of consciousness separate
from the body is more difficult: it is the mind/body problem
revisited.  You can take a categorical position on the existence
of the soul/mind as separate from the body (DOES!/DOESN'T!) but
proving or disproving it is more difficult.  To prove the concept
requires public evidence of phenomena that require this concept
for their reasonable explanation; to disprove the concept requires
proving that it clearly contradicts other known facts.  Since
neither situation seems to hold, we are left to shave with
Occam's Razor, and we should note our comments on the hypothesis
as opinions, not facts.

        The concept of soul/consciousness as the result of
growth, of learning, seems right: I am what I have learned - what
I have experienced plus my decisions and actions concerning these
experiences.  I wouldn't be "me" without them.  However, it is
also possible to create various theories of "disembodied" soul
which are compatible with learning.  For example, you could have
a reincarnation theory that has past experiences shut off during
the current life so that they do not interfere with fresh
learning, etc.

        Please note: I am not proposing any theories of
disembodied soul.  I am arguing against unproven, categorical
positions for or against such theories.  I believe that a
scientist, speaking as a scientist, should be an agnostic -
neither a theist nor an atheist.  It may be that souls do not
exist; on the other hand, it may be that they do.  Science is
open, not closed.  There are many things that - regardless of our
fear of the unknown and disorder - occur publicly and regularly
for which we have no convincing explanation based on current
science.  Meteors as stones falling from heaven did not exist
according to earlier scientists - until there was such a fall of
them in France in the 1800's that their existence had to be
accepted.  There will be a 21st and a 22nd century science, and
they will probably look back on our times with the same bemused
nostalgia and incredulity that we view 18th and 19th century
science.

------------------------------

Date: Thu, 31 May 1984  18:27 EDT
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Essences and Soul


I can't make much sense of Wenger's reply:

        Therefore claiming that essential aspects do not exist in the
        phenomenon of consciousness is in the present state of
        scientific knowledge an unreasonable reaction that
        unnecessarily narrows the field of our investigation.

I wasn't talking about consciousness.  Actually, I think consciousness
will turn out to be relatively simple, namely the phenomenon connected
with the procedures we use for managing very short term memory,
duration about 1 second, and which we use to analyse what some of our
mental processes have been doing lately.  The reason consciousness
seems so hard to describe is just that it uses these processes and
screws up when applied to itself.

But Wenger seems intent on mixing everything up:

        However, we cannot extrapolate from our assumptions to
        statements about the essence of one's being, first because
        assumptions are not facts yet, secondly because intelligence
        and consciousness may not be the same thing.

Who said anything about intelligence and consciousness?  If soul is the whole
mind, then fine, but if he is going to talk about essences that change along
with this, well, I don't think anything is being discussed except convictions
of self-importance, regardless of any measure of importance.

 --- Minsky

------------------------------

Date: 31 May 84 15:31:58-PDT (Thu)
From: ...decvax!decwrl!dec-rhea!dec-pbsvax!cooper
Subject: Re: Dreams: A Far-Out Suggestion
Article-I.D.: decwrl.894

    Ken Laws <Laws@SRI-AI.ARPA> summarizes an article in the May Dr. Dobb's
    Journal called "Sixth Generation Computers" by Richard Grigonis.  Among
    other things it proposes that standing waves of very low frequency
    electromagnetic radiation (5 to 20 Hz apparently) be used to explain
    telepathy.

As the only person I know of with significant involvement in both the fields
of AI and parapsychology I felt I should respond.

1) Though there is "growing evidence" that ESP works, there is none that
telepathy does.  We can order the major classes of ESP phenomena by their a
priori believability; from most believable to least: telepathy (mind-to-mind
communication), clairvoyance (remote perception) and precognition (perception
of events which have not yet taken place).  "Some-kind-of mental radio" doesn't
seem too strange.  "Some-kind-of mental radar" is stretching it, while
precognition seems to be something akin (literally) to black magic. There is
thus a tendency, even among parapsychologists, to think of ESP in terms of
telepathy.

Unfortunately it is fairly easy to design an experiment in which telepathy
cannot be an element but precognition or clairvoyance is.  Experiments which
exclude telepathy as an explanation have roughly the same success rate
(approximately 1 experiment in 3 shows statistical significance above the
p=.01 level) as experiments whose results could be explained by telepathy.
Furthermore, in any well controlled telepathy experiment a record must be made
of the targets (i.e. what was thought).  Since an external record is kept,
clairvoyance and/or precognition cannot be excluded as an explanation for the
results in a telepathy experiment.  For this reason experiments designed to
allow telepathy as a mechanism are known in parapsychology as "general ESP"
(GESP) experiments.

Telepathy still might be proven as a separate phenomenon if a positive
differential effect could be shown (i.e. if having someone else looking at the
target improves the score).  Several researchers have claimed just such an
effect. None have, however, to the best of my knowledge, eliminated from their
experiments two alternate explanations for the differential: 1) The subjects
are more comfortable with telepathy than with other ESP and thus score higher
(subject expectation is strongly correlated with success in ESP). 2) Two
subjects working together for a result would get higher scores whether or not
one of them knows the targets.  It's rather difficult to eliminate both of these
alternatives from an experiment simultaneously.
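The quoted success rate can be put in perspective with a quick calculation (a
sketch, assuming independent experiments, not part of the original post): under
the null hypothesis, each experiment reaches the p=.01 level by chance alone
only 1 time in 100, so a 1-in-3 rate across many experiments would be wildly
improbable if nothing but chance were operating.

```python
from math import comb

def prob_at_least(k, n, p):
    """P(at least k of n independent experiments reach significance)
    when each does so with probability p under the null hypothesis."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# If 30 independent experiments are each tested at the p = .01 level,
# chance alone makes 10 or more "significant" results astronomically unlikely.
p_null = prob_at_least(10, 30, 0.01)
print(p_null)   # on the order of 1e-13
```

This only quantifies the chance baseline; it says nothing about selective
reporting or methodological flaws, which are the usual counterarguments.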

The proposed mechanism MIGHT be used to explain rather gross clairvoyance (e.g.
dowsing) but would be hard pressed to distinguish, for example, ink in the
shape of a circle from that of a square on a playing card. It is obviously no
help at all in explaining precognition results.

2) Experiments have frequently been conducted from within a Faraday cage (this
is a necessity if a sensitive EEG is used, of course) and even completely sealed
metal containers.  It was just this discovery which led the Soviets to decide
in the late 20s (early 30s?) that ESP violated dialectical materialism, and was
thus an obvious capitalist plot.  Officially sanctioned research in
parapsychology did not get started again in the Soviet Union until the early
70s when some major US news source (the NY Times? Time magazine?) apparently
reported a rumor (apparently inaccurate) that the US DoD was conducting
experiments in the use of ESP to communicate with submarines.

3) Low frequency means low bandwidth.  ESP seems to operate over a high
bandwidth channel with lots of noise (since very high information messages seem
to come through it sometimes).

4) Natural interference (low frequency electromagnetic waves are for example
generated by geological processes) would tend to make the position of the nodes
in the standing waves virtually unpredictable.

5) Low frequency (long wavelength) requires a big antenna both for effective
broadcast and reception.  The unmoving human brain is rather small for this
since the wavelength of an electromagnetic wave with a frequency of 5 Hz is
about 37200 miles.  Synthetic aperture radar compensates for a small antenna
by comparing the signal before and after movement (actually the movement is
continuous).  I'm not sure of the typical size of the antennas used in SAR, but
the SAR aboard LandSAT operated at a frequency of 1.275 GHz, which corresponds
to a wavelength of about 9.25 inches.  The antenna is probably about one
wavelength long.  To use that technique the antenna (in this case brain) would
have to move a distance comparable to a wavelength (37200 miles) at the least,
and the signal would have to be static over the time needed to move the
distance.  This doesn't seem to fit the bill.
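The wavelength figures above follow directly from lambda = c/f; a quick check
(a sketch, with the speed of light and unit conversions as the only inputs):

```python
C = 299_792_458.0          # speed of light, m/s

def wavelength_m(freq_hz):
    """Wavelength in meters of an electromagnetic wave at the given frequency."""
    return C / freq_hz

# 5 Hz ELF wave, converted to miles (1 mile = 1609.344 m)
elf_miles = wavelength_m(5) / 1609.344
print(round(elf_miles))        # → 37256, matching the "about 37200 miles" above

# 1.275 GHz (the LandSAT SAR), converted to inches (1 inch = 0.0254 m)
sar_inches = wavelength_m(1.275e9) / 0.0254
print(round(sar_inches, 2))    # → 9.26, matching the "about 9.25 inches" above
```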

I'm out of my depth in signal detection theory, but it might be practical to
measure the potential of the wave at a single location relative to some static
reference and integrate over time.  The static reference would require
something like a Faraday cage in one's head.  Does anyone know if this is
practical?  We'd still have a serious bandwidth problem.

The last possibility would be the techniques used in Long Baseline Radio
Interferometry (large array radio telescopes).  This consists of using several
antennas distributed in space to "synthesize" a large antenna. Unfortunately
the antennas have to communicate over another channel, and that channel would
(if the antennas are brains) be equivalent to a second telepathy channel, and
we would have explained nothing except the completely undemonstrated ability of
human beings to decode very low frequency electromagnetic radiation.

In summary: Even if you accept the evidence for ESP (as I do) the proposed
mechanism does not seem to explain it.

I'll be glad to receive replies to the above via mail, but unless it's
relevant to AI (e.g. a discussion of the implications of ESP for mechanistic
models of brain function) we should move this discussion elsewhere.

                                Topher Cooper
(The above opinions are my own and do not necessarily represent those of my
employer, my friends or the parapsychological research community).

USENET: ...decvax!decwrl!dec-rhea!dec-pbsvax!cooper
ARPA: COOPER.DIGITAL@CSNET-RELAY

------------------------------

Date: 23 May 84 16:04:38 EDT
From: WATANABE@RUTGERS.ARPA
Subject: Seminar - Knowledge-Based Plant Diagnosis

         [Forwarded from the Rutgers bboard by Laws@SRI-AI.]

Date:   June 14 (Thursday), 1984
Time:   1:30-2:30PM
Place:  Hill 705

Title:  Preliminary Study of Plant Diagnosis
        by Knowledge about System Description


Speaker:        Dr. Hiroshi Motoda

                Energy Research Laboratory,
                Hitachi Ltd.,
                1168 Moriyamacho, Hitachi,
                Ibaraki 316, Japan


INTRODUCTION:

Some model, whatever form it takes, is required to perform plant
diagnosis.  Generally, this  model describes  anomaly propagation  and
can be regarded as knowledge about cause and consequence relationships
of anomaly situations.

Knowledge engineering is a software  technique that uses knowledge  in
problem solving.  One  of its  characteristics  is the  separation  of
knowledge from inference mechanism, in  which the latter builds  logic
of events on the basis of the former.  The knowledge can be supplied
piecewise and is easily modified for improvement.

This suggests the possibility of making a diagnosis by collecting many pieces
of knowledge about causality relationships.  The power lies in the
knowledge, not  in  the  inference  mechanism.  What  is  not  in  the
knowledge base is out of the scope of the diagnosis.

Use of  resolution  in the  predicate  calculus logic  has  shown  the
possibility of using knowledge about system description (structure and
behavior of  the  plant) to  generate  knowledge directly  useful  for
diagnosis. The problem of this  approach was its inefficiency. It  was
felt necessary to devise  a mechanism that  performs the same  logical
operation much faster.

Efficiency has been improved by  1) expressing the knowledge in frames
and 2) enhancing the memory  management capability of LISP to  control
the data in global memory in which the data used commonly in both LISP
(for symbolic manipulation) and FORTRAN (for numeric computation)  are
stored.

REFERENCES:

Yamada,N. and Motoda,H.; "A Diagnosis Method of Dynamic System using
the Knowledge on System Description," Proc. of IJCAI-83, 225, 1983.

------------------------------

Date: 31 May 1984 1146-EDT
From: Wendy Gissendanner <WLG@CMU-CS-C.ARPA>
Subject: Seminar - Learning Procedures

          [Forwarded from the CMU-AI bboard by Laws@SRI-AI.]

AI SEMINAR
Tuesday, June 5, 5409 Wean Hall

Speaker: Kurt VanLehn (Xerox PARC)

Title: Learning Procedures One Disjunct Per Lesson

How can procedures be learned from examples?  A new technique is to use
the manner in which the examples are presented, their sequence and how
they are partitioned into lessons.  Two manner constraints will be
discussed: (a) that the learner acquires at most one disjunct per lesson
(e.g., one conditional branch per lesson), and (b) that nests of
functions be taught using examples that display the intermediate results
(show-work examples) before the regular examples, which do not display
intermediate results.  Using these constraints, plus several standard AI
techniques, a computer system, Sierra, has learned procedures for
arithmetic, algebra and other symbol manipulation skills.  Sierra is the
model (i.e., prediction calculator) for Step Theory, a fairly well
tested theory of how people learn (and mislearn) certain procedural
skills.

------------------------------

End of AIList Digest
********************

∂13-Jan-85  1603	LAWS@SRI-AI.ARPA 	AIList Digest   V2 #69
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Jan 85  16:03:25 PST
Mail-From: LAWS created at  5-Jun-84 10:13:32
Date: Tue  5 Jun 1984 10:06-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V2 #69
To: AIList@SRI-AI
ReSent-date: Sun 13 Jan 85 16:03:43-PST
ReSent-From: Ken Laws <Laws@SRI-AI.ARPA>
ReSent-To: YM@SU-AI.ARPA


AIList Digest            Tuesday, 5 Jun 1984       Volume 2 : Issue 69

Today's Topics:
  Parapsychology - ESP,
  Philosophy - Correction & Essences,
  Cognitive Psychology - Mental Partitioning,
  Seminars - Knowledge Representation & Expert Systems
----------------------------------------------------------------------

Date: Mon, 4 Jun 84 18:50:50 PDT
From: Michael Dyer <dyer@UCLA-CS.ARPA>
Subject: ESP

to:  Topher Cooper & others who claim to believe in ESP

1.  this discussion SHOULD be moved off AIList.
2.  the technical discussion of wavelengths, etc is fine but
3.  anyone who claims to believe in current ESP should FIRST read
        the book:  FLIM-FLAM by James Randi  (the "Skeptical Inquirer"
        journal has already been mentioned once but deserves
        a second mention)

------------------------------

Date: 31 May 84 19:31:04-PDT (Thu)
From: decvax!ittvax!wxlvax!rlw @ Ucb-Vax.arpa
Subject: Message for all phil/ai persons
Article-I.D.: wxlvax.287

Dear net readers,
        I must now apologize for a serious error that I have committed.
Recently, I posted two messages on the topic of philosophy of AI.  These
messages concerned a topic that I had discussed with one of my professors,
Dr. Izchak Miller.  I signed those messages with both his name and mine.
Unfortunately, he did not see those messages before they were posted.  He
has now indicated to me that he wishes to disassociate himself from the
contents of those messages.  Since I have no way of knowing which of you
saw my error, I am posting this apology publicly, for all to see.  All
responses to those messages should be directed exclusively to me, at the
address below.  I am sorry for taking up net resources with this message,
but I feel that this matter is important enough.  Again, I apologize, and
accept all responsibility for the messages.

--Alan Wexelblat
(currently appearing at:  ...decvax!ittvax!wxlvax!rlw.  Please put "For Alan"
 in all mail headers.)

------------------------------

Date: Mon 4 Jun 84 13:49:58-PDT
From: WYLAND@SRI-KL.ARPA
Subject: Essences, objects, and modelling

        All the net conversation about essences is fascinating,
but can be a little fuzzy making.  It made me go back and review
some of the basics.  At the risk of displaying naivete and/or a
firm grasp of the obvious, I thought I would pass some of my
thoughts along.

        The problem of essences has been treated in philosophy
under the heading of metaphysics, specifically ontology.  I have
found a good book covering these problems in short, clear text.
It is: "Problems & Theories of Philosophy" by Ajdukiewicz,
Cambridge University Press, 1975, 170 pp. in paperback.

        About substance (from the book, p. 78):

        ".... the fundamental one is that which it was given by
Aristotle.  He describes substance as that of which something can
be predicated but which cannot itself be predicated of anything
else.  In other words, substance is everything to which some
properties can be attributed, which can stand in a certain
relationship to something else, which can be in this state, etc.,
but which is not itself a property, relation or a state, etc.
Examples of substances are: this, this table, this person, in a
word concrete individual things and persons.  To substance are
opposed properties which in contradistinction to substances can
be predicated of something, relations which also in
contradistinction can obtain between certain objects, states,
etc.  The scholastics emphasized the self-subsistence of
substance in contrast to the non-self-subsistence of properties,
relations, states, etc.  The property of redness, for example,
cannot exist except in a substance that possesses it.  This
particular rose, however, of which redness is an attribute, does
not need any foundations for its existence but exists on its own.
This self-subsistence of substance they considered to be its
essential property and they defined substance as 'res, qui
convenit esse in se vel per se'."

        To me, this implies that an object/substance is an
axiomatic "thing" that exists independently - it is the rock that
kicks back each time I kick it - with the characteristic that it
is "there", meaning that each time I kick at the rock, it is
there to kick back.  You can hang attributes on it in order to
identify it from some other thing, both now and over time.  The
Greek Ship problem in this approach becomes one of identifying
that Object, the Greek Ship, which has maintained continuous
existence as The Greek Ship - i.e., can "be kicked" at any time.

        This brings us to one of the problems being addressed by
this discussion of essences, which is distinguishing between
objects and abstractions of objects, i.e. between proper nouns
and abstract/general nouns.  A proper noun refers to a real
object, which can never - logically - be fully known in the sense
that we cannot be sure that we know *all* of its attributes or
that we *know* that the attributes we do know are unchanging or
completely predictable.  We can always be surprised, and any
inferences we make from "known" attributes are subject to change.
Real objects are messy and ornery.  An abstract object, like pure
mathematics, is much neater: it has *only* those attributes we
give it in its definition, and there WILL BE no surprises.

        The amazing thing is that mathematics works: a study of
abstractions can predict things in the real world of objects!
This seems to work on the "Principle of Minimum Astonishment"
(phrase stolen from Lou Schaefer @ SRI), which I interpret to
mean that "To the extent that this real object possesses the same
characteristics as that abstract object, this real object will
act the same as that abstract object, *assuming that it doesn't
do anything else particularly astonishing*."  And how many
carefully planned experiments have foundered on THAT one.  There
is *nothing* that says that the sun *will* come up tomorrow
except the Principle of Minimum Astonishment.

        So what?  So, studies of abstractions are useful;
however, an abstract object is not the same as a real object: the
model is not the same as the thing being modelled.  There is not
an infinite recursion of attributes, somewhere there is a rock
that kicks back, a source of data/experience from *outside* the
system.  The problem is - usually - to create/select/update an
abstract model of this external object and to predict our
interactions with it on the basis of the model.  The problem of
"identifying" an object is typically not identifying *which* real
object it is but *what kind* of object is it - what is the model
to use to predict the results of our interaction with it.

        It seems to me that model forming and testing is one of
the big, interesting problems in AI.  I think that is why we are
all interested in abstraction, metaphor, analogy, philosophy,
etc.  I think that keeping the distinction between the model and
the object/reality is useful.  To me, it tends to imply two sets
of data about an object: the historical interaction data and the
abstracted data contained in the current model of the object.
Perhaps these two data sets should be kept more formally separate
than is often done.

        This has gotten quite long winded - it's fun stuff.  I
hope that this is useful/interesting/fun!

Dave Wyland
WYLAND@SRI

------------------------------

Date: Sat, 2 Jun 84 13:11:35 PDT
From: Philip Kahn <kahn@UCLA-CS.ARPA>
Subject: Relevance of "essences" and "souls" to Artificial Intelligence

        Quite a bit of the AILIST has been devoted of late to metaphysical
discussions of "essences" (e.g., the Greek ship "problem") and "souls."
I don't argue the authors' viewpoints, but the discussion has strayed far
from the intent of the original Greek ship problem.  In short, the problem
with "essences" and "souls" are the questions posed, and not the answers
given.

        We are concerned with creating intelligent machines (whether we consider
it "artificial" or "authentic").  The "problem" of "essence" arises only
because a hard-and-fast, black-and-white answer is being demanded to the
question "Is the reassembled ship 'essentially' the same?"  It should be
clear that the question phrased as such cannot be answered adequately because
it is not relevant.  You can say "it looks the same," "it weighs the same," "it
has the same components," but how useful is it for the purposes of an
intelligent machine (or person) to know whether it is "essentially" the same
ship?  The field of AI is so young that we do not even have a decent method of
determining that it even IS a Greek ship.  Before we attempt to incorporate
such philosophical determinations in a machine, wouldn't it be more useful
to solve the more pressing problem of object identification before problems of
esoteric object distinctions are examined?
        The problem of "souls" is also not relevant to the study of AI (though
it is undoubtedly of great import to our understanding of our role as humans
in the universe).  A "soul," like the concept of "essence," is undefinable.
The problem of "cognition" is far more relevant to the study of AI because
it can be defined within some domain; it is the object-oriented interpretation
of some phenomena (e.g., visual, auditory, contextual, etc.).  Whether
"cognition" constitutes a "soul" is again not relevant.  The more pressing
problem is that of creating a sufficient "cognitive" machine that can make
object-oriented interpretations of sensory data and contextual information.
While the question of whether a "soul" falls out of this mechanism may
be of philosophical interest, it moves us no closer to the description of
such a mechanism.

                        Another writer's opinion,
                        P.K.

------------------------------

Date: 3 Jun 84 12:24:57-PDT (Sun)
From: decvax!cwruecmp!borgia @ Ucb-Vax.arpa
Subject: Re: Essences and soul
Article-I.D.: cwruecmp.1173

** This is somewhat long ...
   You might learn something new ...
   ... from Intellectuals Anonymous (IA not AI)
**
A few years ago, I became acquainted with an international group called
Community International that operates through a technique called Guided
Experiences to assist individuals in their progress towards self
actualization. I remember that some of the techniques like Dis-tension,
and the Experience of Peace were so effective that the Gurus in the
group were sought by major corporations for their Executive Development
programs. The Community itself is a non-profit, self-sustaining
organization that originated somewhere in South America.

The Community had a very interesting (scientific?) model for the body
and soul (existence and essence) problem. The model is based on levels
or Centers for the Mind.

I will summarize what I remember about the Centers of the Mind.

1. The major Centers of the Mind are the Physiological Center, the Motor
Center, the Emotional Center, and the Intellectual Center.

2. The functional parts of the Mind belong to different (matrix) cells
in a tabulation of major Center X major Center.

To illustrate the power of this abstraction, consider the following
assertions where the loaded words have the usual meaning.

The intellectual part of the intellectual center deals with reason or
cognition. The rationalist AI persons must already feel very small.
Reliance on reason alone indicates a poverty of the mind!

The motor part of the intellectual center deals with imagination and
creativity. The emotional part of the intellectual center deals with
intuition.

Similarly the motor center has intellectual, emotional and motor
parts that control functions like learning to walk, the Olympics, and
reflexes.

The emotional center has intellectual, emotional, and motor parts that
control faith and beliefs, the usual emotions l